Accidental Reboot / Shutdown Causes Errors In The Hard Drive

I accidentally pressed the hardware reset button because it's too close to the headphone jacks. I've got big fingers, what can I say. A couple of seconds later I noticed that the computer had restarted, except that the boot sequence dropped me to a console with a bad ATA status: DRDY error. I've seen this before: it's the result of a bad shutdown for one or more of the hard drives in the RAID device.
I checked the status of the RAID device and confirmed it: I've got an inactive array.
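Before touching anything, a quick way to confirm what's going on (a sketch of the checks I mean; device names and messages will differ on other machines) is the kernel log and /proc/mdstat:

# dmesg | grep -i ata    # shows the DRDY / bad-status errors from the hard reset
# cat /proc/mdstat       # md0 is listed as inactive here

So I issued: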

# mdadm --stop /dev/md0

to stop the array and restart it with:

# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

but no go, so I tried again with the force option:

# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
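For what it's worth, before reaching for --force it's worth comparing the event counters on the member partitions, since a member with a stale counter is exactly what --force tells mdadm to accept anyway; a quick sketch (the grep just trims the output to the interesting lines):

# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 | grep -E '/dev/|Events'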

I checked the status of the array after I started it:

# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 6 02:59:21 2016
Raid Level : raid5
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Dec 4 22:59:12 2016
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : desktop:0 (local to host desktop)
UUID : e3284314:1258cac3:1b6c243c:748be2fe
Events : 2849

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4 0 0 4 removed

One of the devices comprising the array has been removed, maybe due to corrupt data or something else. I figured I could do more if I booted into my desktop with the degraded array, so I did. There I used the GNOME SMART checker (that's what I call it), refreshed the status, and the device checks out.
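The same check can be done from a terminal with smartctl from the smartmontools package; a sketch assuming /dev/sdd is the disk that was dropped from the array:

# smartctl -H /dev/sdd    # overall health self-assessment (PASSED/FAILED)
# smartctl -a /dev/sdd    # full SMART attributes and the drive's error log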

Having figured out that the device is not really bad (according to SMART, anyway), I added it back to the array with:

# mdadm --manage /dev/md0 --add /dev/sdd1

I checked the status of the array with:

# mdadm --detail /dev/md0

/dev/md0:
Version : 1.2
Creation Time : Tue Sep 6 02:59:21 2016
Raid Level : raid5
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Dec 4 23:11:48 2016
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

Rebuild Status : 4% complete

Name : desktop:0 (local to host desktop)
UUID : e3284314:1258cac3:1b6c243c:748be2fe
Events : 2853

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 spare rebuilding /dev/sdd1

All I have to do now is wait for the device to be rebuilt back into the array.
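An easy way to keep an eye on the rebuild is the progress line in /proc/mdstat; a sketch (the 5-second interval is just a preference):

# watch -n 5 cat /proc/mdstat    # shows the recovery percentage and estimated finish time

Once the rebuild finished, mdadm --detail reported the array clean again: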

/dev/md0:
Version : 1.2
Creation Time : Tue Sep 6 02:59:21 2016
Raid Level : raid5
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Dec 4 23:14:22 2016
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : desktop:0 (local to host desktop)
UUID : e3284314:1258cac3:1b6c243c:748be2fe
Events : 2890

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
3 8 49 2 active sync /dev/sdd1

It took just 10 minutes.
