General NAS-Central Forums

PostPosted: Sun Dec 24, 2017 10:44 am 

Joined: Tue Dec 05, 2017 11:00 pm
Posts: 4
I have a NSA310 with 1TB HDD.
I connected an external eSATA 1TB HDD to it in order to set up a RAID1.
My external HDD was formatted as NTFS and had some data on it that I didn't need.
After connecting it, I could see it in the Volumes screen, so I had two volumes.
I deleted the external HDD's volume from the web GUI. All went well.
Reading the manual, I saw there are two ways to set up a RAID1.
I decided to go for the migrate button option.
I clicked the migrate button and saw a progress bar. At around 25%, I got a message that the migration FAILED.
Now my main internal volume is DOWN.
Any suggestions on what to do?

EDIT:
I removed the internal HD and connected it to my PC, booted from an Ubuntu live USB.
I followed the guide at https://kb.zyxel.com/KB/searchArticle!g ... 08&lang=EN for mounting the RAID in Linux in a degraded state, but when I try to mount /dev/md0 I get an error:
Code:
wrong fs type, bad option, bad superblock on /dev/md0 ...


I then also connected my external HD to my PC via USB and tried to assemble the array with
Code:
mdadm -A -f /dev/md0 /dev/sdb2 /dev/sdd2
(as mentioned in the guide linked above), but then I get an error:
Code:
mdadm: No superblock found on /dev/sdd2 (Expected magic a92b4efc, got 00000000)
mdadm: No RAID superblock on /dev/sdd2
mdadm: /dev/sdd2 has no superblock - assembly aborted
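A rough way to check whether each partition still carries an md superblock before attempting another assemble (device names are as seen on my PC and may differ on yours):
Code:
# Print the md superblock of each candidate member, if one exists
mdadm --examine /dev/sdb2
mdadm --examine /dev/sdd2
# The kernel log usually explains why a mount or assemble failed
dmesg | tail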


At this stage, my highest priority is to get my data backed up.

EDIT2:
I put the internal HD back in the NSA310 and also connected the external eSATA HD.
Using telnet, I ran the following commands:
Code:
/ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sun Dec 24 10:09:27 2017
     Raid Level : raid1
     Array Size : 976245816 (931.02 GiB 999.68 GB)
  Used Dev Size : 976245816 (931.02 GiB 999.68 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun Dec 24 16:49:31 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : nsa310:0  (local to host nsa310)
           UUID : 86773c77:e603edb6:af5b8e25:790a3221
         Events : 84

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed

/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda2[0]
      976245816 blocks super 1.0 [2/1] [U_]
     
unused devices: <none>

/ # cat /proc/partitions
major minor  #blocks  name

   7        0     139264 loop0
   8        0  976762584 sda
   8        1     514048 sda1
   8        2  976245952 sda2
  31        0       1024 mtdblock0
  31        1        512 mtdblock1
  31        2        512 mtdblock2
  31        3        512 mtdblock3
  31        4      10240 mtdblock4
  31        5      10240 mtdblock5
  31        6      48896 mtdblock6
  31        7      10240 mtdblock7
  31        8      48896 mtdblock8
   9        0  976245816 md0
   8       16  976762584 sdb
   8       17     514048 sdb1
   8       18  976245952 sdb2


/ # cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
/dev/mtdblock6 /zyxel/mnt/nand yaffs2 ro,relatime 0 0
/dev/sda1 /zyxel/mnt/sysdisk ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /usr ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /lib/security ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,errors=continue 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/var tmpfs rw,relatime,size=5120k 0 0
/dev/mtdblock4 /etc/zyxel yaffs2 rw,relatime 0 0
/dev/mtdblock4 /usr/local/apache/web_framework/data/config yaffs2 rw,relatime 0 0

/ # fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x9a291b17

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          64      514048+   8  AIX
/dev/sda2              65      121601   976245952+  20  Unknown

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x62a0771f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          64      514048+  83  Linux
/dev/sdb2              65      121601   976245952+  20  Unknown

/ # dmesg | tail
#######################################
#              HD1 awaked by fdisk !        #
#######################################
---> HD1 back to green on, off blink

#######################################
#              HD0 awaked by mdadm  !        #
#######################################
---> HD0 back to green on, off blink
EXT4-fs (md0): bad geometry: block count 244061472 exceeds size of device (244061454 blocks)

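The "bad geometry" line means the ext4 superblock records more blocks than /dev/md0 actually provides. A rough way to compare the two sizes, assuming dumpe2fs and blockdev are available (they may not be in the NAS firmware, but they are on the Ubuntu live system):
Code:
# Filesystem size recorded in the ext4 superblock (in 4 KiB blocks)
dumpe2fs -h /dev/md0 | grep -i 'block count'
# Size of the md device itself, in bytes (divide by 4096 for blocks)
blockdev --getsize64 /dev/md0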

I also ran the disk scan from the web GUI; here are the results:
Code:
e2fsck 1.41.14 (22-Dec-2010)
The filesystem size (according to the superblock) is 244061472 blocks
The physical size of the device is 244061454 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? no
/dev/md0 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found. Fix? no
Inode 42864944 was part of the orphaned inode list. IGNORED.
Deleted inode 42865096 has zero dtime. Fix? no
Deleted inode 42866326 has zero dtime. Fix? no
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  -(604160--606207) -(21823488--21825535) -(29214720--29218815) -(29980672--29982719) -(56428544--56430591) -(60133376--60135423) -(71368704--71370751) -(79259648--79261695) -(80005120--80007167) -(81410048--81412095) -(81580032--81580042)
-(116928328--116930375) -(117053440--117055487) -(118153216--118155263) -(121217024--121219071) -(121223168--121225215) -(131280896--131282943) -(144838656--144840703) -(152547328--152549375) -(159481856--159483903) -165834375 -(165834647--165834653)
-(165834690--165834694) -165840295 -165870581 -(165877472--165877695) -(165878528--165878783) -(165902939--165902956) -(171508368--171508382) -(171514118--171514120) -(171581440--171583487) -(171621376--171621887) -(171931648--171933695) -(172110848--172111871)
-(174749696--174751743) -(178661376--178663423) -(199927808--199931903) -(201056256--201058303) -(207196160--207198207) -(210489344--210491391)
Fix? no
Inode bitmap differences: -42864944 -42865096 -42866326
Fix? no
/dev/md0: ********** WARNING: Filesystem still has errors **********
/dev/md0: 186413/61022208 files (2.6% non-contiguous) 180982752/244061472 blocks



EDIT3:

I managed to mount sda2 on a temp directory and I'm now able to access my data.
I had to stop the array with mdadm --stop /dev/md0 first.
I have a feeling that the fix is really simple, but I have no clue what to do.
Currently I'm backing up my data, and if I get no reply by the time I'm done, I'm thinking of wiping the internal HD and re-creating the volume.
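For anyone hitting the same thing, the steps were roughly as follows (the mount point is just an example). Mounting the member partition directly works here because the array uses metadata version 1.0, which stores the raid superblock at the end of the partition, so the filesystem starts at the very beginning of /dev/sda2:
Code:
# Stop the degraded array so the member partition is no longer busy
mdadm --stop /dev/md0
# Mount the member read-only and copy the data off
mkdir -p /mnt/recover
mount -o ro /dev/sda2 /mnt/recover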

Help would be highly appreciated!


PostPosted: Mon Dec 25, 2017 9:07 am 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6172
I vaguely remember having read about this problem before. The cause is that when converting the array from JBOD to RAID1, the raid header has to grow a bit. I *think* the firmware tries to shrink the filesystem a bit, ignores an error message, and then enlarges the header anyway. Your e2fsck error is clear: the physical device is too small to contain the filesystem. That is because the size of /dev/md0 is the size of /dev/sda2 minus the raid header. After stopping /dev/md0 you could mount /dev/sda2 because then you had the full size of /dev/sda2 available.

You could try to shrink the filesystem a bit. If you can get it 18 filesystem blocks (about 72 KiB) smaller, everything should work again, I think. (To be clear, you have to shrink the filesystem, not the partition.)
The tool resize2fs should be able to do the job, but I don't know if it is available on the NAS.
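Roughly like this, I think (untested; the target block count is taken from your e2fsck output, and you may have to run it from the Ubuntu live system if resize2fs is missing on the NAS):
Code:
# Work on the member partition with the array stopped
mdadm --stop /dev/md0
# resize2fs refuses to shrink an unchecked filesystem, so check it first
e2fsck -f /dev/sda2
# Shrink the filesystem to the number of blocks /dev/md0 can hold
# (244061454, from the e2fsck output)
resize2fs /dev/sda2 244061454
# Re-assemble; --run starts the array even though it is degraded
mdadm --assemble --run /dev/md0 /dev/sda2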


PostPosted: Tue Dec 26, 2017 10:19 pm 

Joined: Tue Dec 05, 2017 11:00 pm
Posts: 4
Thanks for your help.
I saw your message too late.
Eventually, after backing up my data to a third HD, I decided to wipe both HDs and start from scratch.
Now my RAID1 works.

