General NAS-Central Forums

PostPosted: Mon Sep 29, 2014 9:44 am 

Joined: Wed Sep 24, 2014 7:06 pm
Posts: 13
Darn...

Code:
 mdadm --assemble /dev/md1 /dev/sdd2 /dev/sdc2 /dev/sdb2 /dev/sde2 --run
mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
mdadm: Not enough devices to start the array.



Code:
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 UUID=14839fe5:3e983466:0798556a:0bcc3b00
ARRAY /dev/md/1 metadata=1.0 UUID=ef425eb8:07a58e7e:f78faf4a:bc7d80d9 name=storage:1
   spares=2

# This file was auto-generated on Fri, 26 Sep 2014 16:47:27 +0200
# by mkconf $Id$
root@ubuntutest:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sdb2[1] sde2[3](S) sdc2[0](S) sdd2[2]
      3898888768 blocks super 1.0
       
md0 : active raid1 sdb1[1] sde1[3] sdd1[2]
      2040128 blocks [4/3] [_UUU]
     
unused devices: <none>


/proc/mdstat shows sde2[3](S) sdc2[0](S)
Is there a way to include them in the array, i.e. remove the [S]pare flag?


PostPosted: Mon Sep 29, 2014 10:58 am 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6090
Quote:
mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
Ouch! I/O errors are hardware errors. Can you have a look at dmesg?

Your md0 is also missing its member 0, which was the original sda, the disk with the damaged partition table. Maybe that disk is dying, just like the (current) sde.

Do you have one or two spare 1TB (or bigger) disk(s)? With two failing disks the array is down and probably can't be assembled. But a binary copy of a failing disk might just work, or could at least be forced back into the array.
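
Something like this would make the binary copy with GNU ddrescue (just a sketch; /dev/sde is the failing disk here and /dev/sdf stands for a hypothetical spare, so double-check the device names first):
Code:
# check the kernel log for I/O errors first
dmesg | tail -n 50
# copy the failing disk onto the spare, keeping a mapfile of progress
# (/dev/sdf and sde.map are only example names)
ddrescue -f /dev/sde /dev/sdf sde.map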

Quote:
/proc/mdstat shows sde2[3](S) sdc2[0](S)
Is there a way to include them in the array, i.e. remove the [S]pare flag?
It is possible to re-create the array in the same order, using the same parameters, and tell mdadm the array is clean, so it doesn't resync the array but just takes it as-is. AFAIK it's not possible to simply remove the spare flag.


PostPosted: Mon Sep 29, 2014 11:07 am 

Joined: Wed Sep 24, 2014 7:06 pm
Posts: 13
So, been doing some stuff... looking at http://www.kossboss.com/sparetoactive

Did the following:
Code:
mdadm -Cv /dev/md1 --assume-clean --level=5 --chunk=64 --raid-devices=4 /dev/sdb2 /dev/sde2 /dev/sdc2 /dev/sdd2
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdb2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Wed Sep  8 23:22:31 2010
mdadm: /dev/sde2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Wed Sep  8 23:22:31 2010
mdadm: /dev/sdc2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Wed Sep  8 23:22:31 2010
mdadm: /dev/sdd2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Wed Sep  8 23:22:31 2010
mdadm: size set to 974591104K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.


So now we need to mount... right?

Code:
root@ubuntutest:/# mount /dev/md1 /mnt/md1
mount: you must specify the filesystem type


Need to crack my brain on that one... must be EXT2.
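
Rather than guessing, the filesystem type can be probed directly (assuming blkid and file are available on the rescue system; just a sketch):
Code:
# probe for a filesystem signature on the array
blkid -p /dev/md1
# or inspect the start of the device directly
file -s /dev/md1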


PostPosted: Mon Sep 29, 2014 11:31 am 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6090
mdadm --create wrote:
mdadm: Defaulting to version 1.2 metadata
mdadm --examine wrote:
/dev/sdb2:
Magic : a92b4efc
Version : 1.0
man mdadm wrote:
The different sub-versions store the superblock at different locations on the device, either at the end (for 1.0), at the start (for 1.1) or 4K from the start (for 1.2). "1" is equivalent to "1.2" (the commonly preferred 1.x format). "default" is equivalent to "1.2".
So now you have changed the metadata format from 1.0 to 1.2, overwriting the start of all raid members.

I think the best you can do is re-create the array again, specifying the metadata as 1.0, and hope that the first 68 kB (?) is only logical volume group metadata.
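
To verify which superblock version actually ends up on the members, something like this can be run on one of them (sketch only):
Code:
# show the superblock version and offsets on one raid member
mdadm --examine /dev/sdb2 | grep -E 'Version|Offset'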


PostPosted: Mon Sep 29, 2014 11:59 am 

Joined: Wed Sep 24, 2014 7:06 pm
Posts: 13
Mijzelf wrote:
mdadm --create wrote:
mdadm: Defaulting to version 1.2 metadata
mdadm --examine wrote:
/dev/sdb2:
Magic : a92b4efc
Version : 1.0
man mdadm wrote:
The different sub-versions store the superblock at different locations on the device, either at the end (for 1.0), at the start (for 1.1) or 4K from the start (for 1.2). "1" is equivalent to "1.2" (the commonly preferred 1.x format). "default" is equivalent to "1.2".
So now you have changed the metadata format from 1.0 to 1.2, overwriting the start of all raid members.

I think the best you can do is re-create the array again, specifying the metadata as 1.0, and hope that the first 68 kB (?) is only logical volume group metadata.


Darn... that needs to be corrected...
Redid the build:

Code:
mdadm -Cv /dev/md1 --assume-clean --level=5 --chunk=64 --raid-devices=4 --metadata=1.0 /dev/sdb2 /dev/sde2 /dev/sdc2 /dev/sdd2
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdb2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Sep 29 12:46:16 2014
mdadm: /dev/sde2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Sep 29 12:46:16 2014
mdadm: /dev/sdc2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Sep 29 12:46:16 2014
mdadm: /dev/sdd2 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Sep 29 12:46:16 2014
mdadm: size set to 974722176K
Continue creating array? y
mdadm: array /dev/md1 started.

Code:
root@ubuntutest:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid5 sdd2[3] sdc2[2] sde2[1] sdb2[0]
      2924166528 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     
md0 : active raid1 sdb1[1] sdd1[2] sde1[3]
      2040128 blocks [4/3] [_UUU]
     
unused devices: <none>


Still need to specify the fstype when running
Code:
mount /dev/md1 /mnt/md1


PostPosted: Mon Sep 29, 2014 1:53 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6090
Yes, the raid array still contains a volume group. So run vgscan & vgdisplay.
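
If the volume group is still intact, the usual sequence is roughly this (just a sketch; the volume group and logical volume names are whatever vgdisplay/lvdisplay report, "vg1"/"lv1" below are only placeholders):
Code:
vgscan                              # look for volume groups on the re-created md1
vgchange -ay                        # activate whatever was found
lvdisplay                           # list the logical volumes and their device paths
mount -o ro /dev/vg1/lv1 /mnt/md1   # placeholder names, use the ones lvdisplay shows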

If the volume group is damaged now, maybe you can use binwalk to find the internal filesystem.
Code:
binwalk /dev/md1
When that gives the offset of the internal filesystem, you can use a loop device to get a device node there:
Code:
# attach the first free loop device at the offset binwalk reported (in bytes)
losetup -f --show -o <offset> /dev/md1
losetup -a
The loop device can then be mounted. (Assuming that the logical volume is one contiguous block.)
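
For example (sketch; /dev/loop0 stands for whatever losetup --show or losetup -a reported):
Code:
mount -o ro /dev/loop0 /mnt/md1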


PostPosted: Mon Sep 29, 2014 2:00 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6090
Have you rearranged the disks? Otherwise you have a problem:
Quote:
md1 : inactive sdb2[1] sde2[3](S) sdc2[0](S) sdd2[2]
Quote:
md1 : active raid5 sdd2[3] sdc2[2] sde2[1] sdb2[0]
The roles of the devices have changed.
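
If the bracketed numbers in the old mdstat line really were the original slots (the (S) flags make that uncertain), member 0 was sdc2, 1 was sdb2, 2 was sdd2 and 3 was sde2, so a re-create in that order would look roughly like this (same caveats as before; just a sketch, not verified):
Code:
mdadm --stop /dev/md1
mdadm -Cv /dev/md1 --assume-clean --level=5 --chunk=64 --raid-devices=4 --metadata=1.0 /dev/sdc2 /dev/sdb2 /dev/sdd2 /dev/sde2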


PostPosted: Tue Sep 30, 2014 11:20 am 

Joined: Wed Sep 24, 2014 7:06 pm
Posts: 13
Seems like the order has changed.
I have discussed the issues internally. After 2 days of trying we decided to terminate the project and accept the loss of (some) data.
It appears there's a backup from a few weeks ago, so we'll deal with it accordingly.

As for the device... I will make a bonfire and dump it in it.

Thanks for the help and effort buddy, much appreciated.


PostPosted: Tue Sep 30, 2014 12:44 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6090
Arjan wrote:
As for the device... I will make a bonfire and dump it in it.
Really? According to the information you gave, the box itself is fine; only the disks are failing.

