In my last post, the files were being written to an IDE hard disk. Now let's see what happens if we write to /tmp instead. Will Solaris cope with ten million files in /tmp? First, if we want to make use of compression, we need to make a ZFS file system:
We make the backing files (ZFS is happy to use plain files instead of real disks):
anton@solaris-devx ~ $ mkfile 100M /tmp/file1
anton@solaris-devx ~ $ mkfile 100M /tmp/file2
and then su to root to create a mirrored ZFS pool on top of them:
# zpool create crazedPool mirror /tmp/file1 /tmp/file2
I should note that for some reason ZFS didn’t make use of the entire file size:
# zfs list crazedPool
NAME         USED  AVAIL  REFER  MOUNTPOINT
crazedPool   110K  63.4M    20K  /crazedPool
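One thing not shown above: ZFS compression has to be switched on explicitly. Something along these lines does it for the pool's top-level dataset (take the exact invocation as a sketch rather than gospel; zfs get compression crazedPool will confirm the setting took):
# zfs set compression=on crazedPool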
And now the real test. How about a big file? Let's say 100G:
anton@solaris-devx dir1 $ time mkfile 100G woot
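mkfile just writes zeros, so the 100G file shouldn't actually eat much of the tiny pool if compression is doing its job. A quick way to check (an aside, not part of the timed run) is to compare the apparent size with what du and the pool report:
anton@solaris-devx dir1 $ ls -lh woot
anton@solaris-devx dir1 $ du -h woot
anton@solaris-devx dir1 $ zfs list crazedPool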
And what about 10,000 files, each 10M in size?
anton@solaris-devx dir1 $ i="0"
anton@solaris-devx dir1 $ time while [ $i -lt 10000 ]; do
>   mkfile 10M la0$i
>   i=$((i+1))
> done
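For a quick sanity check afterwards (again, not part of the timed loop), counting the files and glancing at the pool usage would do:
anton@solaris-devx dir1 $ ls | wc -l
anton@solaris-devx dir1 $ zfs list crazedPool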
So far, so good. So now let's push the envelope off the desk. Or maybe off a cliff. Let's see what happens when we make a 100TB file with ZFS!
anton@solaris-devx dir1 $ ls -l megaFile
-rw------- 1 anton staff 107374182400000 Mar 15 18:05 megaFile
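A file that size obviously can't have much real data behind it on a 63M pool (mkfile's -n flag, which records the size without allocating data blocks, is one way to get there, though that's a guess). du shows how much is actually allocated:
anton@solaris-devx dir1 $ du -h megaFile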
And the compression ratio?
anton@solaris-devx tmp $ zfs get compressratio crazedPool
NAME        PROPERTY       VALUE  SOURCE
crazedPool  compressratio  1.00x  -
hmm, not quite what I was expecting!
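One possibility (just a guess): the zero-filled blocks that mkfile writes may be getting stored as holes rather than as compressed data, in which case they would never show up in compressratio at all. The pool's own space accounting is worth a look:
anton@solaris-devx tmp $ zfs list crazedPool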