Start with an old Sun desktop, Solaris 10 08/07, and a cheap USB 2.0 PCI card. Add a powered USB hub and a handful of cheap 1GB flash drives.
Plug it in, watch it light up.
If your Solaris install has working USB drivers, the kernel should recognize the drives and the links in /dev should be created automatically. USB events are logged to /var/adm/messages, so a quick check there will tell you whether the flash drives were recognized. If you can't tell what the drives were named in /dev, you can match them up against the device descriptions in /var/adm/messages. In my case, they ended up as c2t0d0, c3t0d0, c4t0d0, and c5t0d0.
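One quick way to spot the new devices is to skim the log and list removable media. This is just a sketch using standard Solaris tools; the exact log text varies by driver and the device names below are from my setup.

```shell
# Look for recent USB attach messages in the kernel log
tail -50 /var/adm/messages | grep -i usb

# List removable media devices and the /dev names they got
rmformat -l
```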
For this project, I didn't want the volume manager to automatically mount the drives, so I temporarily stopped volume management.
# svcs volfs
STATE          STIME    FMRI
online         21:50:41 svc:/system/filesystem/volfs:default
# svcadm disable volfs
# svcs volfs
STATE          STIME    FMRI
disabled       22:38:15 svc:/system/filesystem/volfs:default
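Disabling volfs this way isn't permanent damage; when you're done experimenting, one command brings automounting back. A minimal sketch:

```shell
# Re-enable removable-media automounting when finished
svcadm enable volfs

# Confirm it came back up
svcs volfs
```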
I labeled each of the drives (fdisk -E writes an EFI label with a single partition spanning the whole disk):
# fdisk -E /dev/rdsk/c2t0d0s2
# fdisk -E /dev/rdsk/c3t0d0s2
# fdisk -E /dev/rdsk/c4t0d0s2
# fdisk -E /dev/rdsk/c5t0d0s2
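The four fdisk invocations above can be collapsed into a loop. The device names are the ones from my setup; substitute your own.

```shell
# Write an EFI label to each flash drive
for d in c2t0d0 c3t0d0 c4t0d0 c5t0d0; do
    fdisk -E /dev/rdsk/${d}s2
done
```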
Then I created a RAIDZ pool called 'test' using all four drives.
# zpool create test raidz c2t0d0s2 c3t0d0s2 c4t0d0s2 c5t0d0s2
The pool got built and mounted in a couple seconds.
# zpool status
  pool: test
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Mar 23 21:55:32 2008
config:

        NAME          STATE     READ WRITE CKSUM
        test          ONLINE       0     0     0
          raidz1      ONLINE       0     0     0
            c2t0d0s2  ONLINE       0     0     0
            c3t0d0s2  ONLINE       0     0     0
            c4t0d0s2  ONLINE       0     0     0
            c5t0d0s2  ONLINE       0     0     0

errors: No known data errors
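The scrub line in that output is worth exercising on purpose: ZFS can re-verify every checksum on demand, which is cheap insurance on flash media of questionable pedigree. A sketch, using the pool name from above:

```shell
# Kick off a full checksum scrub of the pool
zpool scrub test

# Check progress and results
zpool status test
```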
The file system shows up as available. Note the capacity: RAIDZ across four 1GB drives leaves roughly three drives' worth of usable space, since one drive's worth goes to parity.
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
test   105K  2.81G  36.7K  /test
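Once the pool exists, carving it into datasets is a one-liner per filesystem. The dataset name here is just an example, not something from my setup:

```shell
# Create a child filesystem and turn on compression for it
zfs create test/scratch
zfs set compression=on test/scratch

# Show the pool and its datasets
zfs list -r test
```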
I'm not sure what I'm going to do with it yet. It certainly isn't something I'll carry around in my pocket.
But if someone made a USB stick the size of a current thumb drive that accepted four or five 8GB microSD cards and built the RAID magic into the drive itself, I'd have a whole bunch of storage in a pocketable form factor, with redundancy as a bonus.