General NAS-Central Forums

PostPosted: Sat Sep 02, 2017 12:20 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
Hi, I noticed my NAS was showing as empty when connecting remotely. Logging into the admin panel, I see that the internal drive is showing a red cylinder symbol and states 'Inactive', and the only option available on the page is 'delete'. No external drives are listed. On the front of the NAS the second drive LED is red.

I have attempted to telnet in to check the drive status, but I cannot connect through PuTTY (port 22 or 23?). I enabled telnet in the admin console and also tried the URL backdoor, but I get "Not Found The requested URL /zyxel/cgi-bin/remote_help-cgi was not found on this server".

Any ideas, or do I just replace the 2nd drive and hope it repairs itself?


PostPosted: Sun Sep 03, 2017 11:33 am 
Mijzelf

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
jCG wrote:
I have attempted to telnet in to check the drive status, but I cannot connect through PuTTY (port 22 or 23?). I enabled telnet in the admin console and also tried the URL backdoor, but I get "Not Found The requested URL /zyxel/cgi-bin/remote_help-cgi was not found on this server".
The URL has changed. Have a look at 'Update NSA-300 series Firmware 4.60'.
Telnet is on port 23.
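
Once telnet is enabled you can test it from a PC with a plain telnet client, or set PuTTY's connection type to 'Telnet' instead of 'SSH'. A minimal check, assuming the NAS is at 192.168.1.2 (substitute your own address):
Code:
telnet 192.168.1.2 23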


PostPosted: Fri Sep 08, 2017 5:14 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
Thanks. I have executed the commands suggested in another post...

Code:
~ $ cat /proc/mounts
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
devpts /dev/pts devpts rw,relatime,mode=600 0 0
/dev/mtdblock8 /zyxel/mnt/nand yaffs2 ro,relatime 0 0
/dev/sda1 /zyxel/mnt/sysdisk ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /ram_bin ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /usr ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /lib/security ext2 ro,relatime,errors=continue 0 0
/dev/loop0 /lib/modules ext2 ro,relatime,errors=continue 0 0
/dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
/dev/ram0 /usr/local/var tmpfs rw,relatime,size=5120k 0 0
/dev/mtdblock4 /etc/zyxel yaffs2 rw,relatime 0 0
/dev/mtdblock4 /usr/local/apache/web_framework/data/config yaffs2 rw,relatime 0 0


Code:
~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : inactive sda2[0](S)
      2929765376 blocks super 1.2

unused devices: <none>


Code:
~ $ cat /proc/partitions
major minor  #blocks  name

   7        0     143360 loop0
   8        0 2930266584 sda
   8        1     498688 sda1
   8        2 2929766400 sda2
   8       16 2930266584 sdb
   8       17     498688 sdb1
   8       18 2929766400 sdb2
  31        0       1024 mtdblock0
  31        1        512 mtdblock1
  31        2        512 mtdblock2
  31        3        512 mtdblock3
  31        4      10240 mtdblock4
  31        5      10240 mtdblock5
  31        6      48896 mtdblock6
  31        7      10240 mtdblock7
  31        8      48896 mtdblock8


PostPosted: Fri Sep 08, 2017 5:25 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
Also:
Code:
~ $ cd /i-data
/i-data $ ls -la
drwxrwxrwx    3 root     root             0 Jan 26  2015 .
drwxr-xr-x   19 root     root             0 Sep  2 01:51 ..
lrwxrwxrwx    1 root     root            19 Sep  2 01:51 .system -> /i-data/md0/.system
lrwxrwxrwx    1 root     root            19 Sep  2 01:51 .zyxel -> /i-data/md0/.system
drwxrwxrwx    2 root     root             0 Sep  2 01:50 d60edf9e
lrwxrwxrwx    1 root     root            25 Sep  2 01:50 md0 -> /etc/zyxel/storage/sysvol


Code:
/i-data $ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mtdblock8           48896     44240      4656  90% /zyxel/mnt/nand
/dev/sda1               482922    475988      6934  99% /zyxel/mnt/sysdisk
/dev/loop0              138829    122797     16032  88% /ram_bin
/dev/loop0              138829    122797     16032  88% /usr
/dev/loop0              138829    122797     16032  88% /lib/security
/dev/loop0              138829    122797     16032  88% /lib/modules
/dev/ram0                 5120         4      5116   0% /tmp/tmpfs
/dev/ram0                 5120         4      5116   0% /usr/local/etc
/dev/ram0                 5120         4      5116   0% /usr/local/var
/dev/mtdblock4           10240      1512      8728  15% /etc/zyxel
/dev/mtdblock4           10240      1512      8728  15% /usr/local/apache/web_framework/data/config


PostPosted: Sat Sep 09, 2017 8:37 am 
Mijzelf

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
OK, this is the information I can extract from your dumps:
Quote:
Code:
   8        0 2930266584 sda
   8        1     498688 sda1
   8        2 2929766400 sda2
   8       16 2930266584 sdb
   8       17     498688 sdb1
   8       18 2929766400 sdb2
The box has two 3 TB disks, both of which have at least a readable partition table.
Quote:
Code:
~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : inactive sda2[0](S)
      2929765376 blocks super 1.2

unused devices: <none>
There is one raid array, which is inactive. It contains only a single partition, which is marked 'Spare'. I *think* that means it is supposed to be a raid1 array, as spares only make sense on a redundant array.

The big question now is: why is the other disk not part of the array, and why do you have a spare disk? As soon as an array is degraded, a spare should automatically be added to the array. As you have only 2 disks, this means the array *is* degraded whenever you have a spare, and so the spare can't exist for long.

Can you post the output of
Code:
mdadm --examine /dev/sd[ab]2
mdadm --detail /dev/md0


PostPosted: Sat Sep 09, 2017 1:18 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
Thanks, outputs:

Code:
~ # mdadm --examine /dev/sd[ab]2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 388a618a:18f0bfa1:d420bd93:25861841

    Update Time : Sat Sep  2 00:46:32 2017
       Checksum : 102f543b - correct
         Events : 1593526


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 379216fa:fa7546ab:ae3db216:ffb99c1e

    Update Time : Sun Jan 22 08:25:10 2017
       Checksum : e8a64f5c - correct
         Events : 68


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)


Code:
~ # mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.


PostPosted: Sat Sep 09, 2017 3:18 pm 
Mijzelf

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
This is one of the unexplained mysteries of raid.

You built your array on Thu Nov 28 12:16:15 2013. Both raid headers agree on that. But for some reason /dev/sdb2 was last updated on Jan 22 this year. That can happen; sometimes the raid manager kicks a disk from the array because of a bad sector, or something like that.

In that case that is written in the headers of the remaining array members, so that on reboot the raid manager knows which members to assemble. The header of /dev/sda2 was last updated at Sat Sep 2 00:46:32. I guess that was shortly before your array went offline.

But now the amazing part: as far as that header is concerned, the array still contained 2 disks at that time:
Quote:
Code:
   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
That shouldn't be possible, unless you have been swapping disks around on Jan 22 and Sep 2.

As far as I can see your array is degraded. /dev/sdb2 hasn't been updated since January, so unless you have written nothing to the NAS since then, you shouldn't try to assemble it back into the array.
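
If you want to see that divergence at a glance, something like this should work (assuming the box's busybox grep supports -E):
Code:
mdadm --examine /dev/sd[ab]2 | grep -E 'Update Time|Events'
The member with the much older update time and the far lower event count is the stale one.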

The only option now is to remove the 'Spare' state of /dev/sda2, and re-assemble the array:
Code:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda2 --run
I hope then /proc/mdstat will show the array active again. If so, type 'reboot' to get it mounted.
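
To verify before rebooting you can simply re-check the state:
Code:
cat /proc/mdstat
mdadm --detail /dev/md0
The first should now list md0 as active raid1 with one of two devices, and the second should no longer complain that the device is not active.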


PostPosted: Sat Sep 09, 2017 4:29 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
Mijzelf, thank you again. I did see the second drive LED was orange or red earlier in the year, and at that time I checked the disk health (I think using the SMART package in the admin panel) and it said it was healthy, and I couldn't see that anything was wrong.

So if there is a problem with the second drive, I guess I don't know if it is permanent/fatal or just a one-off data error that can be corrected. If I attempt to reassemble the array and drive 2 is faulty, is there any risk to my data (which I hope is all on drive 1)? Or is RAID1 robust enough to manage this safely? Or is it safer for me to buy a new drive 2, or take the drive out and connect it to another computer to check it?


PostPosted: Sun Sep 10, 2017 11:56 am 
Mijzelf

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
You can't re-assemble the array with the 2nd disk included. No matter how small the original failure was, the disk hasn't been updated since January, and so it simply doesn't 'fit' anymore.

You have to assemble the array with only 1 disk, and after that you can add the 2nd disk as if it were a new disk, which involves duplicating the whole contents (including the unused space).

And yes, it is possible the disk is perfectly healthy. The problem could be a bad sector, which will be replaced by one of the spare sectors on the first write to that sector. In SMART that shows up as 'Current Pending Sector' or something like that.
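
For reference, on a plain Linux box the re-add and the SMART check would look roughly like this (the NAS firmware's repair button does the equivalent of the --add for you, and smartctl may not be present in the stock firmware, so treat this as a sketch):
Code:
smartctl -A /dev/sdb | grep -i pending   # Current_Pending_Sector should drop to 0 once the sector is rewritten
mdadm /dev/md0 --add /dev/sdb2           # re-add the partition; this triggers a full resync
cat /proc/mdstat                         # watch the rebuild progress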


PostPosted: Sun Sep 10, 2017 4:47 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
Thanks. It showed active:
Code:
~ # mdadm --stop /dev/md0
mdadm: stopped /dev/md0
~ # mdadm --assemble --force /dev/md0 /dev/sda2 --run
mdadm: /dev/md0 has been started with 1 drive (out of 2).
~ #
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda2[0]
      2929765240 blocks super 1.2 [2/1] [U_]

unused devices: <none>

I then rebooted, but the dashboard showed 'Inactive' and nothing looks to have changed...
Code:
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : inactive sda2[0](S)
      2929765376 blocks super 1.2

unused devices: <none>

Code:
~ # mdadm --examine /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 388a618a:18f0bfa1:d420bd93:25861841

    Update Time : Sat Sep  2 00:46:32 2017
       Checksum : 102f543b - correct
         Events : 1593526


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)


Code:
~ # mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.

Code:
~ # cd /i-data
/i-data # ls -la
drwxrwxrwx    3 root     root             0 Jan 26  2015 .
drwxr-xr-x   19 root     root             0 Sep 10 06:21 ..
lrwxrwxrwx    1 root     root            19 Sep 10 06:21 .system -> /i-data/md0/.system
lrwxrwxrwx    1 root     root            19 Sep 10 06:21 .zyxel -> /i-data/md0/.system
drwxrwxrwx    2 root     root             0 Sep 10 06:21 d60edf9e
lrwxrwxrwx    1 root     root            25 Sep 10 06:21 md0 -> /etc/zyxel/storage/sysvol


PostPosted: Sun Sep 10, 2017 6:04 pm 
Mijzelf

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
OK. Don't expect me to be a raid guru. I'm just guessing. Educated.
Quote:
Code:
    Update Time : Sat Sep  2 00:46:32 2017
Apparently your single-disk assembly didn't update the raid header, so actually nothing has changed.
Yet you proved that the array can be assembled, as it was active before rebooting.

I think you should remove the second disk, as the firmware tries to assemble a 2-disk array, and apparently that results in an inactive array.

I don't know which disk is the second one. And I think either disk on its own will be assembled into a (degraded) raid array. If that succeeds, the array will be mounted and the header updated, so you can't recognize it from its 'Update Time'. But the 'Device Role' should stay the same: 'Active device 0' for disk 1, and 'Active device 1' for disk 2.
(And in the case of a single disk it will always be /dev/sda.)
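
If you want to check which role a disk has before pulling one, this should tell you (run it for whichever device nodes are present):
Code:
mdadm --examine /dev/sda2 | grep 'Device Role'
mdadm --examine /dev/sdb2 | grep 'Device Role'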


PostPosted: Sun Sep 10, 2017 10:02 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
Mijzelf, thanks again. OK, I have removed disk 2 (it's the one on the right, as marked on the case) and disk 1 is mounted as degraded. When connecting remotely I can see all my shares/files, but for some reason using the Web File Browser I can't see all my shares.

Via telnet I get:
Code:
root@Skynet:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda2[0]
      2929765240 blocks super 1.2 [2/1] [U_]

unused devices: <none>

Code:
root@Skynet:~# mdadm --examine /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 388a618a:18f0bfa1:d420bd93:25861841

    Update Time : Sun Sep 10 11:38:51 2017
       Checksum : 103a7b18 - correct
         Events : 1594032


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)

Code:
root@Skynet:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun Sep 10 11:39:36 2017
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : NSA325-v2:0
           UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
         Events : 1594059

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed

Code:
root@Skynet:~# cd /i-data
root@Skynet:/i-data# ls -la
total 4
drwxrwxrwx  3 root root    0 Jan 26  2015 .
drwxr-xr-x 19 root root    0 Sep 10 11:26 ..
lrwxrwxrwx  1 root root   19 Sep 10 11:25 .system -> /i-data/md0/.system
lrwxrwxrwx  1 root root   19 Sep 10 11:25 .zyxel -> /i-data/md0/.system
drwxrwxrwx 20 root root 4096 Jan 26  2015 d60edf9e
lrwxrwxrwx  1 root root   25 Sep 10 11:25 md0 -> /etc/zyxel/storage/sysvol

So would you recommend I check/wipe/format the other disk before reinserting it?


PostPosted: Mon Sep 11, 2017 9:40 am 
Mijzelf

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
jCG wrote:
So would you recommend I check/wipe/format the other disk before reinserting it?

At least check the SMART values.

And I think it's enough to delete the partition to let the NAS accept the disk as 'new'.
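
If you check it on a Linux machine with smartctl installed (on Windows a tool like CrystalDiskInfo shows the same attributes), something like this gives the overview, where /dev/sdX is whatever name the disk gets there:
Code:
smartctl -H /dev/sdX   # overall health self-assessment
smartctl -A /dev/sdX   # attribute table; watch Reallocated_Sector_Ct and Current_Pending_Sector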


PostPosted: Tue Sep 12, 2017 6:40 pm 
jCG

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 9
I connected the second drive to a Windows PC, confirmed SMART was healthy, and deleted the volumes. I then reinserted it into the NAS and rebooted.
The dashboard showed two disks and a degraded status. I clicked the repair icon and a screen with an hourglass appeared; after a few seconds it returned to the Storage-Volume screen (should I see the repair in progress?). I left it for half an hour or so, and when I came back I had to log on again - the volume is inactive again and the second drive LED is now red...
Code:
~ # mdadm --examine /dev/sd[ab]2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 388a618a:18f0bfa1:d420bd93:25861841

    Update Time : Tue Sep 12 07:57:23 2017
       Checksum : 103cfeb2 - correct
         Events : 1599282


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 379216fa:fa7546ab:ae3db216:ffb99c1e

    Update Time : Sun Jan 22 08:25:10 2017
       Checksum : e8a64f5c - correct
         Events : 68


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)
~ #


PostPosted: Wed Sep 13, 2017 7:48 am 
Mijzelf

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6039
Quote:
Code:
/dev/sdb2:
<snip>
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
<snip>
  Creation Time : Thu Nov 28 12:16:15 2013
The NAS just recognized the old array again. Apparently it's not enough to just 'delete the volumes'.

You can execute
Code:
dd if=/dev/zero of=/dev/sdb2 bs=16M count=1
dd if=/dev/zero of=/dev/sdb bs=1M count=1
reboot
The first line will overwrite the first 16 MB of /dev/sdb2, deleting the raid header; the second line will overwrite the partition table on /dev/sdb, so after a reboot the NAS will see a new, empty disk.
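
After the reboot you can double-check that the disk really looks empty, e.g.:
Code:
cat /proc/partitions      # sdb should no longer show sdb1/sdb2
mdadm --examine /dev/sdb  # should report that no md superblock is detected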

