General NAS-Central Forums

PostPosted: Sun Dec 04, 2016 3:10 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11
Hi folks,

I have a big issue with my NAS540.

I did a firmware update 2 days ago. Everything worked fine and after the NAS rebooted I put in my new 8TB HDD.

I already had 3x3TB HDDs inside my NAS540, all combined via JBOD into one single big volume. So I added the 8TB HDD and enlarged the volume onto the new disk via JBOD. This took a long time, so I checked the status the next morning.

I realised that I wasn't able to connect to my NAS anymore, neither via the web interface nor via the network in Windows. But I was able to ping the NAS540 (192.168.2.101) and got a response without any errors.


I shut down my NAS and disconnected all cables (LAN and power) for about 10 minutes. I removed all HDDs and booted again. Wow, I was able to connect to my NAS via web now. I enabled SSH and Telnet and rebooted the NAS. Then I put all my HDDs back in and booted. Bam! I was still able to connect to my NAS via web. Jackpot? No :(

I got the message:

"Volume Down. " When i clicked on "OK" I was relinked tot he Storage manager and all my 4 HDDs are available but my 15,45TB Volume was DOWN.


Why the hell is all my data lost, only because I put in my new HDD? I have done this several times before, and now everything is destroyed. 8.5TB of data loss?

Please help me with that issue :-(


PostPosted: Sun Dec 04, 2016 6:06 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6048
What is the output of:
Code:
cat /proc/mdstat
cat /proc/partitions
cat /proc/mounts


Quote:
now everything is destroyed. 8.5TB of data loss?

Please help me with that issue
You are aware that using a non-redundant array like RAID0 or JBOD is not very safe for your data?
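For reference, the array level and member state can be checked in more detail with mdadm, assuming the firmware ships it (a sketch, not firmware-specific):
Code:
# Show the RAID level, member partitions and state of the data array
mdadm --detail /dev/md2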


PostPosted: Sun Dec 04, 2016 6:26 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11
Yeah, I know, but the HDDs are all very new :(


Code:
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active linear sda3[0] sdd3[3] sdc3[2] sdb3[1]
      16588825344 blocks super 1.2 64k rounding

md1 : active raid1 sdb2[5] sdd2[7] sdc2[6] sda2[4]
      1998784 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sdd1[7] sdb1[5] sdc1[6] sda1[4]
      1997760 blocks super 1.2 [4/4] [UUUU]

unused devices: <none>
~ # cat /proc/partitions
major minor  #blocks  name

   7        0     154624 loop0
  31        0        256 mtdblock0
  31        1        512 mtdblock1
  31        2        256 mtdblock2
  31        3      10240 mtdblock3
  31        4      10240 mtdblock4
  31        5     112640 mtdblock5
  31        6      10240 mtdblock6
  31        7     112640 mtdblock7
  31        8       6144 mtdblock8
   8        0 2930266584 sda
   8        1    1998848 sda1
   8        2    1999872 sda2
   8        3 2926266368 sda3
   8       16 2930266584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 2926266368 sdb3
   8       32 2930266584 sdc
   8       33    1998848 sdc1
   8       34    1999872 sdc2
   8       35 2926266368 sdc3
   8       48 7814026584 sdd
   8       49    1998848 sdd1
   8       50    1999872 sdd2
   8       51 7810026496 sdd3
  31        9     102424 mtdblock9
   9        0    1997760 md0
   9        1    1998784 md1
  31       10       4464 mtdblock10
   9        2 16588825344 md2
253        0     102400 dm-0
253        1 16588718080 dm-1



mounts:

Code:
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0



I see my 3x3TB HDDs and my 8TB HDD there as sda, sdb, sdc and sdd - so I just have to mount them?


PostPosted: Sun Dec 04, 2016 7:10 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6048
equanox wrote:
Yeah, I know, but the HDDs are all very new :(
That doesn't matter. A disk is a mechanical device which can die at any moment. Actually, the odds that a brand-new disk will fail are bigger than the odds that a three-month-old disk will die.
But it seems your problems are not caused by a dying disk.

Code:
md2 : active linear sda3[0] sdd3[3] sdc3[2] sdb3[1]
      16588825344 blocks super 1.2 64k rounding
The JBOD array is fine. So I guess something went wrong with the enlarging of the filesystem.

You can try to mount manually, to see the errors:
Code:
mkdir /tmp/mountpoint
mount /dev/md2 /tmp/mountpoint
dmesg | tail


Quote:
I see my 3x3TB HDDs and my 8TB HDD there as sda, sdb, sdc and sdd - so I just have to mount them?
No. Those are the disks. The disks are partitioned (sda1, sda2, ...), and the partitions are merged into RAID arrays. You have to mount the array.
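As a sketch of the layering (assuming mdadm is available on the box): each data partition carries md metadata that ties it into the array, and the assembled array device is what gets mounted, not the disks.
Code:
# Each sdX3 partition holds an md superblock pointing at the array
mdadm --examine /dev/sda3
# The assembled array is the device to mount
ls -l /dev/md2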


PostPosted: Sun Dec 04, 2016 7:18 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11
Hey,

thank you for your fast responses. I'm a noob on Linux and RAID topics - so please forgive me :S

Code:
~ # mkdir /tmp/mountpoint
~ # mount /dev/md2 /tmp/mountpoint
mount: unknown filesystem type 'LVM2_member'



Didn't work. Did I do something wrong?

And another question: Is all my data lost? :(


PostPosted: Sun Dec 04, 2016 8:52 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6048
Mount doesn't succeed in 'guessing' the filesystem. It should be ext4. You can specify that:
Code:
mount -t ext4 /dev/md2 /tmp/mountpoint
and/or you can try to repair the filesystem
Code:
e2fsck /dev/md2


Quote:
And another question: Is all my data lost? :(
Don't know. I have no experience with failed ext4 enlarging. I *think* the files are not moved, which means you should at least be able to recover them using low-level recovery, with something like PhotoRec. But as you are talking about 8.5TB lost, I guess the original array was full. That means that you'll have some fragmentation. Fragmented files are almost impossible to recover without the help of a filesystem.
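By the way, the "unknown filesystem type 'LVM2_member'" error above means mount recognised LVM metadata on /dev/md2, so the array seems to hold an LVM physical volume rather than a bare ext4 filesystem; the dm-0 and dm-1 entries in your /proc/partitions point the same way. Assuming the firmware ships the usual single 'lvm' binary with subcommands, the layout could be inspected with:
Code:
# Scan for LVM physical volumes, volume groups and logical volumes
lvm pvscan
lvm vgscan
lvm lvscan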


PostPosted: Sun Dec 04, 2016 10:14 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11

Thanks for your help again!

I tried your suggestions:

Code:
~ # mount -t ext4 /dev/md2 /tmp/mountpoint
mount: /dev/md2 is already mounted or /tmp/mountpoint busy
~ # e2fsck /dev/md2
e2fsck 1.42.12 (29-Aug-2014)
/dev/md2 is in use.
e2fsck: Cannot continue, aborting.


PostPosted: Mon Dec 05, 2016 8:35 am 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6048
Oh? Did you reboot in between? Check your mounts again. Maybe some operation (repair, enlarge or mount) is pending?
Code:
ps | grep "/dev/md2"
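If the busybox build includes fuser (an assumption; not every firmware ships it), that also shows whether some process still holds the device open:
Code:
# List the process IDs that have the array device open
fuser /dev/md2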


PostPosted: Mon Dec 05, 2016 5:57 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11
Hi :-)

Code:

~ # ps | grep "/dev/md2"
 4726 root      2648 S    grep /dev/md2
~ #



What do I have to do now? Try to mount again?


PostPosted: Mon Dec 05, 2016 7:07 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6048
It won't hurt.


PostPosted: Mon Dec 05, 2016 7:48 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11
Same issue again :S

Code:
~ # mount -t ext4 /dev/md2 /tmp/mountpoint
mount: /dev/md2 is already mounted or /tmp/mountpoint busy


Should I restart the NAS? It has been running for 2 days now.


PostPosted: Mon Dec 05, 2016 8:16 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6048
What does 'dmesg | tail' give directly after an attempt to mount?


PostPosted: Mon Dec 05, 2016 8:30 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11

Thank you very much for your help!


Code:
~ # mount -t ext4 /dev/md2 /tmp/mountpoint
mount: /dev/md2 is already mounted or /tmp/mountpoint busy
~ # dmesg | tail
[ 3502.018470] bz status = 0
[ 3502.021097] bz_timer_status = 0
[ 4494.007779]
[ 4494.007782] ****** disk(0:0:0:0) spin down at 419401 ******
[ 4494.739983]
[ 4494.739986] ****** disk(1:0:0:0) spin down at 419474 ******
[ 4495.462891]
[ 4495.462894] ****** disk(2:0:0:0) spin down at 419546 ******
[ 4496.631212]
[ 4496.631215] ****** disk(3:0:0:0) spin down at 419663 ******
~ #


PostPosted: Tue Dec 06, 2016 12:14 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6048
OK, the feedback from 'mount' is not a misinterpreted kernel error. The last kernel logs have nothing to do with the mount.

You should at least reboot, to see if this behaviour is reproducible, and if it is, we have to find out how /dev/md2 can be mounted when
Code:
cat /proc/mounts
doesn't show it.
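One more thing worth checking, assuming this kernel exposes it in sysfs: if another layer (device-mapper, for instance) has claimed the array, it shows up as a holder, which would explain the 'busy' error without a visible mount:
Code:
# Any entry here (e.g. dm-1) means another subsystem holds the array open
ls -l /sys/block/md2/holders/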


PostPosted: Wed Dec 07, 2016 5:39 pm 

Joined: Sun Dec 04, 2016 3:08 pm
Posts: 11
Hi Mijzelf,

I rebooted my NAS and repeated the procedure:

Code:
~ # cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
/dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
~ #
~ #
~ #
~ #
~ # mkdir /tmp/mountpoint
~ # mount /dev/md2 /tmp/mountpoint
mount: unknown filesystem type 'LVM2_member'
~ # mount -t ext4 /dev/md2 /tmp/mountpoint
mount: /dev/md2 is already mounted or /tmp/mountpoint busy



So I'm getting the same "busy" error as before. Any solutions? :?

