General NAS-Central Forums

Welcome to the NAS community

All times are UTC




PostPosted: Fri Sep 15, 2017 4:01 pm 

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 10
Many thanks Mijzelf!

After deleting the header info I was able to run a repair, and the array and disks are now showing healthy; both drives have green LEDs.

Reviewing the detailed SMART report, my new concern is that disk 1 shows a Load_Cycle_Count (attribute 193) raw value of 959,527 with a normalised value of 001 (the second drive's raw value is 22,290, normalised 193). I think this is not good, as I've read these drives are rated for 600,000 load cycles with an MTTF of 1 million hours.
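In case it helps anyone else checking the same thing, here is a minimal sketch for pulling just that attribute out of the SMART table (assuming smartctl is available and the drives really are /dev/sda and /dev/sdb):

```shell
# Minimal sketch: print the raw Load_Cycle_Count (SMART attribute 193)
# for both drives. Device names /dev/sda and /dev/sdb are assumptions.
for d in /dev/sda /dev/sdb; do
    echo "== $d =="
    smartctl -A "$d" | awk '$1 == 193 { print "Load_Cycle_Count raw:", $NF }'
done
```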


PostPosted: Sat Sep 16, 2017 9:48 am 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6047
Indeed, that is worrying. And strange: both members of a RAID1 array are supposed to be used identically.

Your array is 4 years old, and the 2nd disk has been inactive for 8 months. That 8 months is not enough to explain the difference, I think.

Is disk 1 a WD Green, while disk 2 is something else? Some WD Greens had an aggressive head-parking policy (which is exactly what Load_Cycle_Count measures), and that is not suitable for a NAS. There is a tool around to prevent it; google for wdidle.

Further, it's not recommended (by me; some people disagree) to configure a short spindown time in the NAS. The rationale is that the NAS is mostly used in 'batches': if you access it, you will in most cases access it again within a few minutes. As spinning up and down is the main source of wear for a hard disk, it's better not to let it spin down in between. I set these values to one hour, or two, if supported. That still means the disk is spun down for 18-20 hours a day, while it only spins up once or maybe twice a day.
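As an illustration (not firmware-specific advice), a timer like that can be set with hdparm, where -S values in the 241-251 range encode (value - 240) x 30 minutes, so 242 means one hour:

```shell
# Hedged sketch: set the idle spindown timer via hdparm.
# For -S values 241-251, the timeout is (value - 240) * 30 minutes.
# Device names are assumptions; your NAS firmware may expose its own setting.
hdparm -S 242 /dev/sda   # spin down after 1 hour of inactivity
hdparm -S 244 /dev/sdb   # 2 hours, where supported
```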


PostPosted: Tue Sep 26, 2017 8:23 pm 

Joined: Sat Sep 02, 2017 12:09 pm
Posts: 10
Both drives are WD Reds.

The NAS has been running fine since the 15th, but the second drive's LED has gone amber again. However, both appear healthy and both are active:

Code:
root@Skynet:~# mdadm --examine /dev/sd[ab]2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 388a618a:18f0bfa1:d420bd93:25861841

    Update Time : Tue Sep 26 11:07:53 2017
       Checksum : 104cd34c - correct
         Events : 1681459


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
           Name : NSA325-v2:0
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : cf5440d0:5f22d52b:42e3d236:31670f7e

    Update Time : Tue Sep 26 11:07:53 2017
       Checksum : c04d73ad - correct
         Events : 1681459


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)
root@Skynet:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Nov 28 12:16:15 2013
     Raid Level : raid1
     Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
  Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Sep 26 11:08:44 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : NSA325-v2:0
           UUID : d60edf9e:b7d8cd57:708232d1:957ab18d
         Events : 1681471

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       18        1      active sync   /dev/sdb2


Can I run SMART from Telnet so I can easily paste the results from the command window?


PostPosted: Wed Sep 27, 2017 8:35 pm 

Joined: Mon Jun 16, 2008 10:45 am
Posts: 6047
jCG wrote:
both appear healthy and both are active
'cat /proc/mdstat' might give other info. AFAIK mdadm here shows the information as stored in the headers on the disk, while /proc/mdstat gives live info from the kernel. (As do all files in /proc — that's not a real directory, but a window into the kernel.)
Quote:
Can I run SMART from Telnet so I can easily paste the results from the command window?
Yes, maybe. The command would be something like 'smartctl --all /dev/sda', but I don't know if the shell can find the binary. Maybe it's in /usr/local/zy-pkgs/bin or /usr/local/zy-pkgs/sbin.
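A sketch of how that might look from the telnet shell, assuming the binary lives in one of those guessed package directories if it's not already on PATH:

```shell
# Find smartctl on PATH, falling back to the ZyXEL package dirs (assumed paths).
SMARTCTL=$(command -v smartctl || true)
for p in /usr/local/zy-pkgs/bin/smartctl /usr/local/zy-pkgs/sbin/smartctl; do
    [ -z "$SMARTCTL" ] && [ -x "$p" ] && SMARTCTL=$p
done
"$SMARTCTL" --all /dev/sda
```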


Powered by phpBB® Forum Software © phpBB Group