Wednesday, March 18, 2009

FreeBSD 7.1 + RAID0 - Striping

Well, today I have to build a storage monster: it will be a software RAID0. The data is not critical and the limiting factor is money...

Disk information

6 Hitachi 1TB SATA II disks

The layout

/dev/ad0, 1TB, FreeBSD 7.1 amd64

/ 1G
swap 4G
/var 4G
/tmp 1G
/usr 20G
/home 873G

5 SATA disks of 1TB each for the software RAID0
/dev/ad1
/dev/ad2
/dev/ad3
/dev/ad10
/dev/ad8
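
Before creating the stripe, it does not hurt to confirm that the kernel actually sees all six disks under these names. A minimal check (just a sketch; the atacontrol output depends on the controller):

# ls /dev/ad*
# atacontrol list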

1) Install FreeBSD 7.1
(we already know how, so I won't explain it; I'll just say that only the basic set is installed, [X] 6 Kernel deve...)

2) Configure the disks
Following the Handbook: http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-striping.html

I only change a few things for my case.

Creating a stripe of unformatted ATA disks

  1. Load the geom_stripe.ko module:

    # kldload geom_stripe
  2. Ensure that a suitable mount point exists. If this volume will become a root partition, then temporarily use another mount point such as /mnt:

    # mkdir /mnt
  3. Determine the device names for the disks which will be striped, and create the new stripe device. In this case the five unused and unpartitioned ATA disks listed above are striped:

    # gstripe label -v st0 /dev/ad1 /dev/ad2 /dev/ad3 /dev/ad10 /dev/ad8
    Metadata value stored on /dev/ad1.
    Metadata value stored on /dev/ad2.
    Metadata value stored on /dev/ad3.
    Metadata value stored on /dev/ad10.
    Metadata value stored on /dev/ad8.
    Done.
  4. Write a standard label, also known as a partition table, on the new volume and install the default bootstrap code:

    (I skip this part, since bsdlabel cannot label a volume this large)
    # bsdlabel -wB /dev/stripe/st0
  5. This process should have created two other devices in the /dev/stripe directory in addition to the st0 device. Those include st0a and st0c. At this point a file system may be created on the st0a device with the newfs utility:

    (Since I skipped bsdlabel, the following newfs command is run without the trailing 'a'; a quick check of the new device is sketched right after these steps)
    # newfs -U /dev/stripe/st0

    Many numbers will glide across the screen, and after a few seconds, the process will be complete. The volume has been created and is ready to be mounted.
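
This is the quick check of the new device mentioned above: gstripe should report st0 as up, and diskinfo shows its total size (a sketch with standard commands, nothing specific to this box):

# gstripe status
# gstripe list st0
# diskinfo -v /dev/stripe/st0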

To manually mount the created disk stripe:

# mount /dev/stripe/st0 /mnt
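
Once mounted, df should show the stripe as a single volume of roughly 4.5T (minus the UFS overhead) on /mnt; a quick sketch:

# df -h /mnt
# mount | grep stripe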

To mount this striped file system automatically during the boot process, place the volume information in the /etc/fstab file. For this purpose, a permanent mount point named /stripe is created:

# mkdir /stripe
# echo "/dev/stripe/st0 /stripe ufs rw 2 2" \
>> /etc/fstab
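
The fstab entry can be tested without rebooting: unmount the stripe from /mnt (assuming it is still mounted there from the step above) and mount it again through fstab:

# umount /mnt
# mount /stripe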

The geom_stripe.ko module must also be automatically loaded during system initialization, by adding a line to /boot/loader.conf:

# echo 'geom_stripe_load="YES"' >> /boot/loader.conf
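
After the next reboot the module should load by itself; a quick way to confirm it:

# kldstat | grep geom_stripe
# gstripe status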

Update:
If we want to do RAID0 with ZFS instead...
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html
http://wiki.freebsd.org/ZFSQuickStartGuide

(unmount the stripe)
# umount /stripe

(undo the stripe)
# gstripe stop st0
# gstripe unload
# gstripe clear -v /dev/ad1
# gstripe clear -v /dev/ad2
# gstripe clear -v /dev/ad3
# gstripe clear -v /dev/ad10
# gstripe clear -v /dev/ad8

(remove the line from fstab)
#/dev/stripe/st0 /stripe ufs rw 2 2
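
After this teardown, a couple of checks I would run (sketch): gstripe status should no longer list st0, and gstripe dump on each disk should not find any stripe metadata.

# gstripe status
# gstripe dump /dev/ad1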

20.2.2.1 Single Disk Pool (from the Handbook, adapted here to the five disks)
# zpool create tank ad1 ad2 ad3 ad10 ad8

# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 989M 419M 491M 46% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1g 873G 40K 803G 0% /home
/dev/ad0s1e 989M 12K 910M 0% /tmp
/dev/ad0s1f 19G 2.3G 16G 13% /usr
/dev/ad0s1d 3.9G 34M 3.5G 1% /var
tank 4.5T 128K 4.5T 0% /tank
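
Besides df, the pool itself can be inspected with the standard zpool tools; zpool status shows the five-disk layout and zpool list the total capacity:

# zpool status tank
# zpool list tank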

# zfs create tank/data

# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 989M 419M 491M 46% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1g 873G 40K 803G 0% /home
/dev/ad0s1e 989M 12K 910M 0% /tmp
/dev/ad0s1f 19G 2.3G 16G 13% /usr
/dev/ad0s1d 3.9G 34M 3.5G 1% /var
tank 4.5T 128K 4.5T 0% /tank
tank/data 4.5T 128K 4.5T 0% /tank/data
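
zfs list gives the same picture per dataset, and properties can be changed on the fly; the compression line below is only an illustration, not something I enabled here:

# zfs list -r tank
# zfs set compression=on tank/data
# zfs get compression,mountpoint tank/data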

(to destroy the ZFS dataset)
# zfs destroy tank/data

(to destroy the pool)
# zpool destroy tank
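
To confirm that nothing is left behind:

# zpool list
# zfs list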

Note: with ZFS you can also build something similar to RAID5 (ZFS calls it "raidz").
For RAID5:
# zpool create storage raidz ad1 ad2 ad3 ad10 ad8

In this case the resulting volume will be smaller.
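
The difference is easy to see with the usual tools: with five 1TB disks, raidz reserves roughly one disk's worth of space for parity, so the usable space reported for the raidz pool will be about one disk smaller than the ~4.5T of the plain stripe (sketch; exact numbers depend on the drives):

# zpool list storage
# zfs list storage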

Well, it looks like everything is ready...

2 comments:

Unknown said...

RAID5 and raidz are somewhat different; you can look up the difference on Wikipedia or in the ZFS documentation. Other than that, a nice experiment.

Andrei.

AngelV said...

Hi, well, according to what I read in the Handbook
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html

"A new data replication model, known as RAID-Z has been added. The RAID-Z model is similar to RAID5 but is designed to prevent data write corruption."

As I understand it, they are similar, and raidz prevents data corruption when writing.

Of course, a hardware RAID solution will always be better...