General NAS-Central Forums

Welcome to the NAS community
PostPosted: Sat Sep 04, 2010 12:07 am 

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
Hi all,
I have a 4TB ix4-200d set up as RAID 5, with 2.7TB of storage. A few days ago the light on the front of the unit started blinking red, so I pulled out the drive that reported the error, plugged it back in, and restarted. When the unit came back up I got an error message that all drives needed to be initialized, so I turned the unit off, pulled all the drives out, and plugged them into a server running Windows 2008.

Obviously Windows cannot see the drives, since they are Linux-formatted. I have been trying to recover the data ever since; it is very valuable to me and I cannot lose it. I have read a lot these past few days and tried everything I can think of. I DID NOT WRITE ANYTHING TO THE DRIVES; I am only running recovery software and recovering to another drive with an NTFS partition.

Here is what I have tried so far:
1. I tried RAID constructor with different configurations, but I cannot figure out the right parameters for the RAID 5 (block size, parity, etc.). Any idea what they should be? I have searched everywhere on Google but cannot seem to find a good source.

2. I am now trying UFS Explorer. The software managed to find an SGI XFS partition of 2.7TB, which seems correct to me. I bought a license for the software and for the RAID-building plugin, since I cannot do that with the evaluation version. I am currently running a scan of that XFS partition, but by the look of it, it will take days to complete. I have left the scan running, as this is the only thing I can do for now.
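[Editor's note] The 2.7TB figure itself is a useful sanity check on the RAID 5 parameters: RAID 5 gives (members − 1) × member size of usable space, because one member's worth of capacity holds the distributed parity. A quick sketch of the arithmetic, assuming a typical "1 TB" drive size as Linux reports it in 1 KiB blocks:

```shell
# RAID 5 usable space is (members - 1) x member size.
members=4
member_kib=976762584   # assumed size of one "1 TB" drive in 1 KiB blocks
usable_kib=$(( (members - 1) * member_kib ))
echo "usable KiB: $usable_kib"
# 1 TiB = 1024^3 KiB; this is where the "2.7TB" figure comes from:
awk -v k="$usable_kib" 'BEGIN { printf "usable TiB: %.2f\n", k / 1024 / 1024 / 1024 }'
```

Three drives' worth of data plus one drive's worth of parity lands at about 2.73 TiB, matching the reported capacity.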


Now my questions are: is what I am doing correct? Is there an easier way to do this? And if there is a known success rate for this, what would it be?

I am trying to avoid sending the drives to a lab, since that is very expensive and out of my budget; added to that, I live in the Middle East.

Any help would be greatly appreciated.


PostPosted: Sat Sep 04, 2010 9:28 am

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
I don't know your box, but I suppose it's just another Linux box running software RAID. In that case you can use a Linux live CD, or USB stick, to rescue your data.

In the following tutorial I'll assume you are using Ubuntu.

Connect all the disks to your PC, via SATA or a USB-SATA converter, or a combination if that is convenient; it's all OK.
Boot the PC from the Live CD. Open a terminal (command prompt) and type
Code:
cat /proc/partitions
This will show all recognized partitions. You'll have to find your RAID partitions here. I'll assume they are sdb1, sdc1, sdd1 and sde1.
Download and install the software RAID manager:
Code:
sudo apt-get update
sudo apt-get install mdadm

Assemble your raid array:
Code:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --force

Now create a mountpoint and mount the array (readonly):
Code:
sudo mkdir /mnt/raidarray
sudo mount /dev/md0 /mnt/raidarray -o ro

Now you can use the GUI to copy your files. (You'll find them in /mnt/raidarray.)

If you want, you can also use the Live CD to sync your files to a new disk.
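[Editor's note] If you prefer the command line over the GUI for the copy, rsync (usually available on a live CD) preserves permissions and can be restarted where it left off. A sketch, where /mnt/rescue_target is a hypothetical mountpoint for the destination disk:

```shell
# Copy everything from the mounted (read-only) array to a rescue disk.
# -a preserves permissions and timestamps, -v lists files as they are copied.
# The guard makes this sketch a no-op on a machine where the array isn't mounted.
if [ -d /mnt/raidarray ]; then
    mkdir -p /mnt/rescue_target        # hypothetical destination mountpoint
    rsync -av /mnt/raidarray/ /mnt/rescue_target/
fi
```

If the copy is interrupted, running the same rsync command again resumes it instead of starting over.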


PostPosted: Sat Sep 04, 2010 2:13 pm

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
Thank you for the info.

I forgot to mention that this is an Iomega NAS device with 4 x 1TB Seagate HDDs.

Now I was trying to follow the steps mentioned above, and when I reached the step
Code:
sudo apt-get update
and beyond, I keep getting an error from the Fedora live CD that the current user is not in the sudoers list. I am the only user besides root, and I have admin privileges.

Am I doing something wrong here?


PostPosted: Sat Sep 04, 2010 2:28 pm

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
Quote:
Am I doing something wrong here?

Not really. On an Ubuntu box there is no login for root, so you use sudo (superuser do) to do things only root may do. On Fedora you can just log in as root and skip the 'sudo'.

Maybe you don't need to install mdadm on Fedora. Just try (as root)
Code:
mdadm --help
If it's not available, you'll have to use yum, I think:
Code:
yum update
yum install mdadm


PostPosted: Sat Sep 04, 2010 5:23 pm

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
I managed to become superuser by using "su -".
The command
Code:
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --force

resulted in an error:
Quote:
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde --force
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has no superblock - assembly aborted


My
Code:
cat /proc/partitions
gave the following results:
Quote:
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

   7     0          8 loop0
   7     1        944 loop1
   7     2     674852 loop2
   7     3    3145728 loop3
   7     4     524288 loop4
   8     0  488386584 sda
   8    16  976762584 sdb
   8    32  976762584 sdc
   8    48  976762584 sdd
   8    64  976762584 sde
 253     0  975585280 dm-0
 253     1    2040254 dm-1
 253     2  975585280 dm-2
 253     3    2040254 dm-3
 253     4  975585280 dm-4
 253     5    2040254 dm-5
 253     6  975585280 dm-6
 253     7    2040254 dm-7
 253     8  487304192 dm-8
 253     9  487302144 dm-9
 253    10    3145728 dm-10
 253    11    3145728 dm-11
   9   127    2040128 md127


Any ideas?
BTW, I am getting a warning in the system that a hard disk reports health problems. I assume this is the one which was giving problems before, but I have no idea which physical drive it is.


PostPosted: Sat Sep 04, 2010 5:51 pm

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
Normally a disk is called sdx, and its partitions sdx1, sdx2, ...
Seeing your partition dump, I think the partitions are somehow already recognized as parts of a RAID, and get another 'type'. The sizes (#blocks) give away that your RAID partitions are dm-0, dm-2, dm-4 and dm-6. I suppose the NAS also reserved some space for itself in dm-1, 3, 5 and 7, and that that array is auto-assembled as md127.
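[Editor's note] That reading of the dump can be automated: group the dm-* entries by block count and look for groups of four equal-sized entries. A sketch, with the here-document standing in for the relevant lines of the dump above:

```shell
# Group dm-* entries from /proc/partitions ($3 = blocks, $4 = name) by size;
# a group of four equal-sized entries is a candidate set of RAID members.
groups=$(awk '$4 ~ /^dm-/ { names[$3] = names[$3] " " $4; count[$3]++ }
              END { for (s in count) if (count[s] == 4) print s ":" names[s] }' <<'EOF' | sort
253 0 975585280 dm-0
253 1 2040254 dm-1
253 2 975585280 dm-2
253 3 2040254 dm-3
253 4 975585280 dm-4
253 5 2040254 dm-5
253 6 975585280 dm-6
253 7 2040254 dm-7
253 8 487304192 dm-8
253 9 487302144 dm-9
EOF
)
echo "$groups"
```

This prints two groups of four: the large 975585280-block group (dm-0, dm-2, dm-4, dm-6) would be the data members, and the small 2040254-block group (dm-1, dm-3, dm-5, dm-7) the NAS-reserved partitions. On a real system you would feed it /proc/partitions instead of the here-document.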

So you could try
Code:
mdadm --assemble /dev/md0 /dev/dm-0 /dev/dm-2 /dev/dm-4 /dev/dm-6 --force


Quote:
I am getting a warning in the system that a hard disk reports health problems. I assume this is the one which was giving problems before, but I have no idea which physical drive it is.

Maybe you can find the device name using smartctl, which can read the S.M.A.R.T. status of a disk. The command is
Code:
smartctl -d marvell -H /dev/sdx

If it doesn't work, omit the '-d marvell'.
After you've found the device name, you can identify the physical device by making it work:
Code:
dd if=/dev/sdx of=/dev/null

This will copy the contents of the disk to /dev/null, which is a black hole, so the disk will be working hard. After you have identified the physical disk, you can stop the command with Ctrl-C.
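[Editor's note] The same dd trick can be tried safely on an ordinary file first; only the if= argument changes when you point it at a real disk. A sketch that is harmless to run anywhere:

```shell
# dd reads the source and discards the data into /dev/null. Against a real
# /dev/sdx this keeps the drive busy enough that its activity LED gives it
# away. Demonstrated here on a temporary file instead of a disk.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=8 2>/dev/null   # stand-in for /dev/sdx
dd if="$tmp" of=/dev/null bs=1M 2>/dev/null && echo "read completed"
rm -f "$tmp"
```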
Quote:
I managed to be a superuser by using "su -"

Yeah, sorry. I should have told you.


PostPosted: Sat Sep 04, 2010 6:02 pm

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
I tried the command but I get the following:
Code:
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-0 /dev/dm-2 /dev/dm-4 /dev/dm-6 --force
mdadm: cannot open device /dev/dm-0: Device or resource busy
mdadm: /dev/dm-0 has no superblock - assembly aborted


PostPosted: Sat Sep 04, 2010 6:12 pm

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
I opened the disk utility in Fedora, and there I could see that there is a RAID 1 array.
If I try to create a RAID 5 from my disks in the disk utility, will I lose the data?


PostPosted: Sat Sep 04, 2010 6:15 pm

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
Can you post the output of
Code:
cat /proc/mdstat


PostPosted: Sat Sep 04, 2010 6:19 pm

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
Code:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 dm-5[0] dm-3[2] dm-7[1] dm-1[3]
      2040128 blocks [4/4] [UUUU]
     
unused devices: <none>


PostPosted: Sat Sep 04, 2010 6:34 pm

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
So the RAID 1 array is indeed built of dm-n partitions. I suppose it's swap space for the NAS.
But it's not using dm-0, so I don't see why that device is busy.
The second error (mdadm: /dev/dm-0 has no superblock - assembly aborted) makes more sense: it could be that dm-0 is the defective partition. So try
Code:
mdadm --assemble /dev/md0 /dev/dm-2 /dev/dm-4 /dev/dm-6 --force

Quote:
I opened the disk utility in Fedora, and there I could see that there is a RAID 1 array.
If I try to create a RAID 5 from my disks in the disk utility, will I lose the data?

I don't know; I'm not familiar with the Fedora disk utility, and generally speaking it is *not* a good idea to create an array. The array already exists, so you should only assemble it.


PostPosted: Sat Sep 04, 2010 6:46 pm

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
I tried that, but no luck: same error.
I tried different combinations as well, all with the same error:
Code:
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-2 /dev/dm-4 /dev/dm-6 --force
mdadm: cannot open device /dev/dm-2: Device or resource busy
mdadm: /dev/dm-2 has no superblock - assembly aborted
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-0 /dev/dm-4 /dev/dm-6 --force
mdadm: cannot open device /dev/dm-0: Device or resource busy
mdadm: /dev/dm-0 has no superblock - assembly aborted
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-2 /dev/dm-0 /dev/dm-6 --force
mdadm: cannot open device /dev/dm-2: Device or resource busy
mdadm: /dev/dm-2 has no superblock - assembly aborted
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-2 /dev/dm-4 /dev/dm-0 --force
mdadm: cannot open device /dev/dm-2: Device or resource busy
mdadm: /dev/dm-2 has no superblock - assembly aborted
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-4 /dev/dm-2 /dev/dm-6 --force
mdadm: cannot open device /dev/dm-4: Device or resource busy
mdadm: /dev/dm-4 has no superblock - assembly aborted
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-6 /dev/dm-4 /dev/dm-2 --force
mdadm: cannot open device /dev/dm-6: Device or resource busy
mdadm: /dev/dm-6 has no superblock - assembly aborted
[root@localhost ~]# mdadm --assemble /dev/md0 /dev/dm-4 /dev/dm-6 /dev/dm-2 --force
mdadm: cannot open device /dev/dm-4: Device or resource busy
mdadm: /dev/dm-4 has no superblock - assembly aborted


PostPosted: Sat Sep 04, 2010 6:59 pm

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
Can you post the output of
Code:
cat /proc/mounts

This will show all mounted devices.
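[Editor's note] As a sketch of what to look for in that output: "Device or resource busy" from mdadm often just means the device is already mounted or already claimed by a running array, and /proc/mounts answers the first case. The sample lines below are an assumed stand-in for the real file:

```shell
# Check whether a given device appears as a mount source in /proc/mounts.
# On a real system, run the awk directly against /proc/mounts instead.
mounts='/dev/md0 /boot ext2 rw,noatime 0 0
/dev/loop0 /mnt/apps ext2 ro 0 0'
dev=/dev/md0
state=$(echo "$mounts" | awk -v d="$dev" '$1 == d { f = 1 }
                                          END { print (f ? "mounted" : "not mounted") }')
echo "$dev is $state"
```

A mounted device must be unmounted (or the claiming array stopped with mdadm --stop) before it can be assembled into a different array.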


PostPosted: Sat Sep 04, 2010 9:09 pm

Joined: Fri Sep 03, 2010 10:34 pm
Posts: 10
Here is what I have done so far: I put all the drives back in the Iomega box and started the machine. Through SSH I gained access to the root account. I ran
Code:
cat /proc/mounts

and the results are
Quote:
/proc# cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root.old /initrd ext2 rw 0 0
none / tmpfs rw 0 0
/dev/md0 /boot ext2 rw,noatime 0 0
/dev/loop0 /mnt/apps ext2 ro 0 0
/dev/loop1 /etc ext2 rw,noatime 0 0
/dev/loop2 /oem cramfs ro 0 0
proc /proc proc rw 0 0
none /proc/bus/usb usbfs rw 0 0
none /proc/fs/nfsd nfsd rw 0 0
none /sys sysfs rw 0 0
devpts /dev/pts devpts rw 0 0
tmpfs /mnt/apps/lib/init/rw tmpfs rw,nosuid 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0


and I ran
Code:
cat /proc/partitions

and the results are
Code:
major minor  #blocks  name

   7     0     261573 loop0
   7     1       5120 loop1
   7     2        464 loop2
   8     0  976762584 sda
   8     1    2040254 sda1
   8     2  974722329 sda2
   8    16  976762584 sdb
   8    17    2040254 sdb1
   8    18  974722329 sdb2
   8    32  976762584 sdc
   8    33    2040254 sdc1
   8    34  974722329 sdc2
   8    48  976762584 sdd
   8    49    2040254 sdd1
   8    50  974722329 sdd2
  31     0        640 mtdblock0
  31     1         64 mtdblock1
  31     2       2192 mtdblock2
  31     3       2192 mtdblock3
  31     4      32768 mtdblock4
   9     0    2040128 md0
   9     1 2924166528 md1
 253     0 2924165120 dm-0


Does that help?


PostPosted: Sun Sep 05, 2010 7:35 am

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
The box has two RAID arrays: md0, a RAID 1 built of the sdx1 partitions (the same array you saw as md127 on the live CD; the sizes match), and md1 (could be RAID 5: about 3TB out of 4 partitions of about 1TB each) built of the sdx2 partitions. (A part of) the firmware is on md0, which is mounted on /boot.
Your RAID 5 array is assembled, but it seems not to be mounted. However, there are 3 loop devices mounted, which could reference it indirectly (I don't think so, seeing the filesystems and mountpoints, but loop0 *could* point to your array). Further, there is again a dm-0, which is, according to its size, another incarnation of md1.
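[Editor's note] The size reasoning can be checked with the figures from the dump: for RAID 5 over four members, md1 should be close to three times one sdx2 partition. This is just arithmetic on the posted numbers:

```shell
sdx2=974722329        # blocks: size of sda2/sdb2/sdc2/sdd2 from /proc/partitions
md1=2924166528        # blocks: size of md1 from /proc/partitions
expected=$(( 3 * sdx2 ))          # RAID 5: (members - 1) x member size
echo "expected $expected, actual $md1, difference $(( expected - md1 )) blocks"
# The small difference is superblock/alignment overhead, consistent with RAID 5.
```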
Can you post the output of
Code:
losetup -a
which will tell more about the loop devices,
Code:
dmesg
which can tell more about the boot, assemble and mount process,
Code:
cat /proc/mdstat
which can tell more about md1 and maybe dm-0.

Further, you can just try to mount md1 and dm-0. As long as you do it read-only, it's harmless.
First clear the dmesg buffer (after you've stored the current contents):
Code:
dmesg -c
then create a mountpoint and mount:
Code:
mkdir /tmp/mountpoint
mount /dev/md1 /tmp/mountpoint -o ro
When it fails, call dmesg again to see what the problem is, and repeat for /dev/dm-0.

