Iomega StorCenter px12-350r

makar
Posts: 2
Joined: Thu Jan 11, 2018 4:08 pm

Iomega StorCenter px12-350r

Post by makar » Thu Jan 11, 2018 6:34 pm

Hello,

We have 4 x 3TB drives in RAID 5.

At first one drive failed; the day after, a second drive became unrecognizable,
and after rebooting the NAS the power LED keeps blinking blue and the server is no longer accessible through the browser.

After restarting, two HDD LEDs are now red.

Could you please advise how to proceed to repair/recover the drives?
Using PuTTY or WinSCP for SSH, SFTP or SCP? Or a special application like RAID Reconstructor?

Or what do you think about this video? https://www.youtube.com/watch?v=TQq-AoeCq2o


Thanks in advance for your help!

makar
Posts: 2
Joined: Thu Jan 11, 2018 4:08 pm

Re: Iomega StorCenter px12-350r

Post by makar » Wed Jan 17, 2018 12:24 pm

Please have a look at this output that I managed to get:

Code:

root@iomega:/# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid5 sda2[0](F) sdc2[3] sdb2[2]
      8727856128 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/2] [__UU]

md0 : active raid1 sdb1[0] sdc1[3]
      20980816 blocks super 1.0 [4/2] [U__U]

root@iomega:/# fdisk -l | head -50

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/sda: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x49f56d30

Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee EFI GPT

Disk /dev/sdb: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x1c37f46c

Device Boot Start End Blocks Id System
/dev/sdb1 1 267350 2147483647+ ee EFI GPT

Disk /dev/sdc: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x4dd6479e

Device Boot Start End Blocks Id System
/dev/sdc1 1 267350 2147483647+ ee EFI GPT

Disk /dev/sdd: 1031 MB, 1031798784 bytes
32 heads, 62 sectors/track, 1015 cylinders
Units = cylinders of 1984 * 512 = 1015808 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdd1 1 980 972129 83 Linux

Disk /dev/md0: 21.4 GB, 21484355584 bytes
2 heads, 4 sectors/track, 5245204 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000


Disk /dev/md1: 8937.3 GB, 8937324675072 bytes
2 heads, 4 sectors/track, -2113003264 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
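
Since fdisk warns that it doesn't support GPT, the partition layout could also be listed with parted, as the warning itself suggests. A sketch of the commands (not run here, just for reference):

Code:

# fdisk can't read GPT, so list the partition tables with parted instead
# (unit s prints the layout in sectors)
parted /dev/sda unit s print
parted /dev/sdb unit s print
parted /dev/sdc unit s print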

Mijzelf
Posts: 6194
Joined: Mon Jun 16, 2008 10:45 am

Re: Iomega StorCenter px12-350r

Post by Mijzelf » Fri Jan 19, 2018 11:37 am

One of the disks, maybe the one in the 2nd slot, is dead and is not detected by the firmware at all. Furthermore, one disk, maybe the one in the first slot, is failing.

Code:

 md1 : active raid5 sda2[0](F) sdc2[3] sdb2[2]
    8727856128 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/2] [__UU]
Depending on the nature of the failure, it might be possible to get the array up again by re-creating it with exactly the same parameters it was originally created with, specifying the members in the right sequence (sda2 missing sdb2 sdc2, although I'd look at /proc/mdstat again after a reboot to see whether the device names have changed), and adding '--assume-clean' to let the RAID manager know it should *not* recalculate any XOR blocks.

To find out the original parameters, you can execute

Code:

mdadm --examine /dev/sd[abc]2
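Purely as a sketch of what such a re-create could look like, and only after --examine has confirmed the metadata version, chunk size, layout and member order (the values below are simply read off the /proc/mdstat output above, they are not verified parameters):

Code:

# sketch only: run this ONLY after mdadm --examine has confirmed every value.
# metadata 1.0, 512k chunk and algorithm 2 (left-symmetric) are taken from
# /proc/mdstat above; 'missing' stands for the dead disk in the 2nd slot.
mdadm --stop /dev/md1
mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 \
      --metadata=1.0 --chunk=512 --layout=left-symmetric \
      /dev/sda2 missing /dev/sdb2 /dev/sdc2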
If the data is important, you'd better first create a low-level copy of the disk that is called sda, maybe the one in the first slot. It is failing, and copying your data off a RAID array puts more stress on a disk than a one-pass low-level copy does. So the odds that sda dies while you are copying files from the array are higher than the odds that it dies during a low-level copy.
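
As an illustration of such a low-level copy, assuming GNU ddrescue is available and a spare disk of at least the same size shows up as /dev/sde (a hypothetical name, so double-check the device names before copying anything):

Code:

# hypothetical devices: /dev/sda = failing source, /dev/sde = spare target.
# -f allows overwriting the target block device, -n skips the slow scraping
# phase, and the map file lets the copy be resumed or retried later.
ddrescue -f -n /dev/sda /dev/sde /root/sda.map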
