1. start the RAID
mdadm --assemble --scan
2. start the RAID if you do not have /etc/mdadm.conf (you will need the array UUID)
mdadm /dev/md0 --assemble -u e6d85d3d:d20ddcfc:c7cxxxxx:6a0xxxx
3. generate /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
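If the UUID needed for step 2 is unknown, it can be read from the ARRAY line that `mdadm --detail --scan` prints (the same line step 3 appends to /etc/mdadm.conf). A minimal sketch of extracting it with plain shell; the sample line below is made up, since the real scan needs a live array:

```shell
# Sample ARRAY line in the format "mdadm --detail --scan" emits;
# the UUID here is a made-up placeholder, not a real array's.
scan_line='ARRAY /dev/md0 metadata=0.90 UUID=e6d85d3d:d20ddcfc:c7c00000:6a000000'

# Strip everything up to and including "UUID=" to get the bare UUID,
# ready for "mdadm /dev/md0 --assemble -u $uuid".
uuid=${scan_line##*UUID=}
echo "$uuid"
```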
1/20/2012
1/10/2012
[mdadm] 2 how-tos
1. how to cancel sync:
in order to cancel a RAID sync it is necessary to leave only one drive active (we are talking about raid1):
mdadm --fail /dev/md2 /dev/sda3
/dev/sda3 was set as failed. We can check the status of the arrays on the system with:
mdadm --detail /dev/md2
and we will receive something like that (in my case it was md2 and /dev/sda3):
/dev/md2:
        Version : 0.90
  Creation Time : Fri Feb 4 21:22:50 2011
     Raid Level : raid1
     Array Size : 1462766336 (1395.00 GiB 1497.87 GB)
  Used Dev Size : 1462766336 (1395.00 GiB 1497.87 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jan 10 18:37:27 2012
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           UUID : 71b91f85:bdd01b3f:776c2c25:004bd7b2
         Events : 0.28394

    Number   Major   Minor   RaidDevice   State
       0       0        0        0        removed
       1       8       19        1        active sync   /dev/sdb3
       2       8        3        -        faulty spare  /dev/sda3
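The "State : clean, degraded" above also shows up in /proc/mdstat as a `[_U]`-style flag, which is what scripts usually key on. A small sketch that detects a degraded raid1 from a captured /proc/mdstat snippet; the snippet is sample text modelled on this array, not read from a live system:

```shell
# Sample /proc/mdstat content for a two-disk raid1 running on one member;
# "[2/1]" means 2 configured devices, 1 active, and "[_U]" marks the missing slot.
mdstat='md2 : active raid1 sdb3[1]
      1462766336 blocks [2/1] [_U]'

# Flag the array as degraded when any member slot shows "_".
if printf '%s\n' "$mdstat" | grep -q '\[[U_]*_[U_]*\]'; then
    state=degraded
else
    state=ok
fi
echo "$state"
```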
2. how to re-add failed disk into existing array and restart scan :)
- set the disk as failed (we already did that in the 1st how-to)
- remove device from array
- add device one more time
mdadm --remove /dev/md2 /dev/sda3
mdadm --add /dev/md2 /dev/sda3
We can check the status of the arrays on the system with:
watch -n .1 cat /proc/mdstat
or
mdadm --detail /dev/md0
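While the re-added disk rebuilds, /proc/mdstat shows a recovery line with a percentage. A sketch of pulling that number out with sed, using a sample snapshot (made-up progress figures) rather than a live /proc/mdstat:

```shell
# Sample /proc/mdstat snapshot during a rebuild (made-up progress figures).
mdstat='md2 : active raid1 sda3[2] sdb3[1]
      1462766336 blocks [2/1] [_U]
      [==>..................]  recovery = 12.5% (182845792/1462766336) finish=120.4min'

# Extract the recovery percentage; prints "done" once the line disappears.
progress=$(printf '%s\n' "$mdstat" | sed -n 's/.*recovery = \([0-9.]*%\).*/\1/p')
echo "${progress:-done}"
```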