This walkthrough shows how software RAID is set up on Linux and why using a spare disk matters; it assumes you already have basic Linux knowledge. You are responsible for any damage done to your own system while following this document.
Code:
$ sudo fdisk -l

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9800ca90

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         132     1060258+  fd  Linux raid autodetect
/dev/sdb2             133         264     1060290   fd  Linux raid autodetect
/dev/sdb3             265         396     1060290   fd  Linux raid autodetect
/dev/sdb4             397         522     1012095   fd  Linux raid autodetect
SCENARIO-1 : creating a RAID group without a spare disk
A spare disk is an idle disk that automatically takes over when one of the active disks fails.
Code:
$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Oct 3 09:43:15 2011
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Oct 3 09:43:36 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest:0  (local to host raidtest)
           UUID : e71a126c:c14b058f:d1cbf2f1:28e66db8
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
md0 -> the name given to our RAID array.
Array Size : 2.17 GB -> Although we used 3 x 1 GB disks, the array size shows roughly 2 GB. In RAID-5 the parity data is distributed across all members, and in total it consumes one member's worth of space, so usable capacity drops by exactly one disk.
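The capacity relationship above can be checked with a one-line calculation, using the "Used Dev Size" value reported by mdadm --detail:

```shell
# RAID-5 usable size = (number of members - 1) * per-member size:
# one member's worth of space goes to distributed parity.
members=3
member_kib=1058816                 # "Used Dev Size" from mdadm --detail (KiB)
array_kib=$(( (members - 1) * member_kib ))
echo "$array_kib"                  # matches the reported Array Size: 2117632
```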
Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB) -> the capacity each individual disk contributes to the RAID group.
At the bottom we see the states of the member disks. "active sync" means the disk is part of the array and fully in sync, i.e. usable.
Now let's see what happens when one or more disks fail. The --fail option lets us mark a disk as faulty.
Code:
$ sudo mdadm /dev/md0 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Oct 3 09:43:15 2011
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Oct 3 09:44:55 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest:0  (local to host raidtest)
           UUID : e71a126c:c14b058f:d1cbf2f1:28e66db8
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       0        0        2      removed

       3       8       49        -      faulty spare   /dev/sdd1
Now let's fail a second disk as well.
Code:
$ sudo mdadm /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Oct 3 09:43:15 2011
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Oct 3 09:45:32 2011
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 2
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest:0  (local to host raidtest)
           UUID : e71a126c:c14b058f:d1cbf2f1:28e66db8
         Events : 24

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       0        0        2      removed

       1       8       33        -      faulty spare   /dev/sdc1
       3       8       49        -      faulty spare   /dev/sdd1
With two of three members gone, the state is "clean, FAILED": RAID-5 parity can only survive a single disk failure, so the array's data is now lost. This is exactly the window a spare disk is meant to shorten.

SCENARIO-2 : creating a RAID group with a spare disk
Code:
$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sdb2
Code:
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Oct 3 09:49:22 2011
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Oct 3 09:49:50 2011
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest:0  (local to host raidtest)
           UUID : 45ee20ab:90382c4b:b9e5fe27:22e24641
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       18        -      spare   /dev/sdb2
Code:
$ sudo mdadm /dev/md0 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdd1[4](F) sdb2[3] sdc1[1] sdb1[0]
      2117632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [================>....]  recovery = 82.4% (873856/1058816) finish=0.0min speed=45992K/sec
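In the mdstat output above, `[3/2] [UU_]` means two of the three members are up while the spare is being rebuilt onto: the takeover happened automatically, with no operator intervention. A small sketch of pulling the rebuild percentage out of such a line, useful if a script should wait for the rebuild to finish (the snapshot string is copied from the output above):

```shell
# Extract the "recovery = NN.N%" figure from a saved /proc/mdstat line.
snapshot='[================>....]  recovery = 82.4% (873856/1058816) finish=0.0min speed=45992K/sec'
pct=$(printf '%s\n' "$snapshot" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
echo "$pct"    # 82.4
```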
Code:
$ sudo mdadm /dev/md0 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0
$ sudo mdadm /dev/md0 --add /dev/sdd2
mdadm: added /dev/sdd2
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Oct 3 09:49:22 2011
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Oct 3 11:33:40 2011
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest:0  (local to host raidtest)
           UUID : 45ee20ab:90382c4b:b9e5fe27:22e24641
         Events : 41

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       18        2      active sync   /dev/sdb2

       4       8       50        -      spare   /dev/sdd2
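One practical note: on many distributions an array built this way will not assemble automatically at boot unless it is recorded in mdadm's configuration. Below is an example entry only; the file path varies by distribution, and the correct line for your own array is the one printed by `sudo mdadm --detail --scan`.

```
# /etc/mdadm/mdadm.conf (on some distros /etc/mdadm.conf)
# Example entry -- use the line printed by `mdadm --detail --scan`.
ARRAY /dev/md0 metadata=1.2 name=raidtest:0 UUID=45ee20ab:90382c4b:b9e5fe27:22e24641
```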
CONCLUSION : Don't throw every disk into the RAID group just to maximize capacity. Keep at least one spare disk.