How to Test a RAID 1 Array

Checking the Status of a RAID Array

You can check the current status of any RAID device by listing the contents of the /proc/mdstat file:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[0] sdc1[1]
      976759936 blocks [2/2] [UU]

unused devices: <none>

Here, you can see the array is listed as "active" and that both members (sdb1 and sdc1) are present; the [UU] field indicates that both members are up and in sync.
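This status check is easy to automate. The sketch below (the raid_health function name is our own, not part of any standard tool) reports whether an underscore appears in the member-status field, assuming the /proc/mdstat format shown above; it takes the file as an argument so it can be tried against saved sample output as well as the live /proc/mdstat:

```shell
#!/bin/sh
# raid_health FILE
# Prints OK if every array in FILE shows all members up ([UU]),
# or DEGRADED if any member-status field contains an underscore
# (e.g. [U_] means the second member is missing or failed).
raid_health() {
    if grep -q '\[[U_]*_[U_]*\]' "$1"; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}

# Example usage on a live system:
#   raid_health /proc/mdstat
```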


Marking a RAID 1 Member as Failed

The mdadm utility also allows you to simulate an error occurring on a RAID member by marking one of the discs as failed, using the syntax:

mdadm --manage --fail <RAID Device> <Partition Member>

For example, to simulate a failure of the member partition /dev/sdc1, we would use:

$ mdadm --manage --fail /dev/md0 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0

Now, if we examine /proc/mdstat, we can see that partition sdc1 is marked as having failed (F):

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[0] sdc1[2](F)
      976759936 blocks [2/1] [U_]

unused devices: <none>

Running mdadm with the --detail option gives even more information:

mdadm --detail <RAID Device>

The output again shows that /dev/sdc1 has failed:

$ mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Sat Feb 19 16:29:11 2011
     Raid Level : raid1
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Feb 21 12:42:12 2011
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           UUID : a1500c81:a4bdadeb:382ce4fd:28800507 (local to host homeServer)
         Events : 0.72

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed

       2       8       33        -      faulty spare   /dev/sdc1
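Because the --detail output is line-oriented, individual fields can be pulled out with sed. The array_state helper below is a hypothetical sketch of our own that extracts the State field; it reads from a file so it can be demonstrated against saved output, but on a live system you would point it at output captured from mdadm --detail:

```shell
#!/bin/sh
# array_state FILE
# Prints the value of the "State :" field from saved `mdadm --detail`
# output, e.g. "clean" for a healthy array or "clean, degraded" after
# a member has failed.
array_state() {
    sed -n 's/^ *State : //p' "$1"
}

# Example usage on a live system (hypothetical):
#   mdadm --detail /dev/md0 > /tmp/md0.txt && array_state /tmp/md0.txt
```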

Removing a RAID 1 Member

After failing the RAID member, you can remove it completely using:

mdadm <RAID Device> -r <Device to remove>

For example, to remove our failed /dev/sdc1 member, we would use:

$ mdadm /dev/md0 -r /dev/sdc1
mdadm: hot removed /dev/sdc1

If you check again using mdadm --detail, you'll see that /dev/sdc1 is no longer listed:

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed

You should now check that you can still see your files on the RAID drive:

$ ls /mnt/raid1
Music  lost+found  Images Films Misc

If nothing is listed, then something is wrong in your RAID configuration, so you need to go back and check it before continuing.
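This sanity check can also be scripted. The mount_has_files function below is a small sketch of our own that succeeds only if the mount point contains something other than lost+found (which is present even on an empty ext filesystem, so its presence alone proves nothing):

```shell
#!/bin/sh
# mount_has_files DIR
# Returns 0 (success) if DIR contains at least one entry other than
# lost+found, and 1 otherwise.
mount_has_files() {
    for entry in "$1"/*; do
        [ -e "$entry" ] || continue                 # glob matched nothing
        case "${entry##*/}" in lost+found) continue ;; esac
        return 0                                    # found real content
    done
    return 1
}

# Example usage with the mount point from this guide:
#   mount_has_files /mnt/raid1 || echo "RAID filesystem looks empty!"
```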


Adding Back a RAID 1 Member

To add a RAID member back into the array, use:

mdadm <RAID Device> -a <Device to add into the array>

For example:

$ mdadm /dev/md0 -a /dev/sdc1
mdadm: re-added /dev/sdc1

If you check again using mdadm --detail, you'll see that /dev/sdc1 is shown as "spare rebuilding":

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       2       8       33        1      spare rebuilding   /dev/sdc1

This means that the newly added member is being synchronised with the data on the existing member. If you now check /proc/mdstat, you can see the progress of the synchronisation, shown as both a percentage and a progress bar:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[2] sdb1[0]
      976759936 blocks [2/1] [U_]
      [>....................]  recovery =  1.2% (12409024/976759936) finish=195.2min speed=82312K/sec

unused devices: <none>
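If you want to watch the rebuild from a script rather than by eye, the percentage can be extracted from that line with sed. The recovery_progress helper below is a hypothetical sketch that assumes the recovery line format shown above; it prints nothing when no rebuild is in progress:

```shell
#!/bin/sh
# recovery_progress FILE
# Prints the recovery percentage (e.g. "1.2") from an mdstat recovery
# line, or nothing if the file contains no recovery line. FILE would
# normally be /proc/mdstat on a live system.
recovery_progress() {
    sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p' "$1"
}

# Example usage on a live system:
#   recovery_progress /proc/mdstat
```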

Verify that you can still see your files on the RAID drive following the addition of the new RAID member:

$ ls /mnt/raid1
Music  lost+found  Images Films Misc
