rmar wrote:How can I know which one of the HDDs may be damaged?
The log is talking about ata2, so I *guess* this is the disk in slot 2.
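If you want to check rather than guess, you can look at the kernel log on the box (assuming you can still get a shell on it):
Code: Select all
dmesg | grep -i ata2
The drive model reported on that port should show up there, which you can match against the disks in the slots.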
I'm really starting to hate this NAS. I have a few of them, and this one is always giving me trouble.
The firmware seems a bit fragile, but on the other hand there are thousands (millions?) out there which never give any problems.
The hardware is fine, at least judging from my sample. I have a 2Big2 in the basement as my main server, and it has been running for 3 years now without missing a beat. To be honest, it's not running Lacie OS but Debian (Squeeze; one of these days I'll have to dig it out and upgrade the OS).
Do you think that has anything in common with this actual problem?
Don't think so. Of course a dying disk (or sata port) can give all kinds of problems, but the user settings are on a raid1 array, so it is rather robust.
And one more thing: do you suggest taking out the HDD and running dd_rescue in a normal PC, correct?
Yes. Of course it can be done on the box itself, but then you'd need to inject another OS, as you can't run Lacie OS with two disks inserted that can't join the array.
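For the rescue copy itself, something like this (just a sketch; /dev/sdb and the image path are examples, so double-check which device node the failing disk got on your PC before running it):
Code: Select all
dd_rescue /dev/sdb /mnt/backup/rescue-sdb.img
GNU ddrescue works too and keeps a map file, so an interrupted copy can be resumed.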
rmar wrote:I was thinking about my issue, and do you think it could be something regarding the SATA board?
It can't be excluded, but it's not my first suspect. Disks tend to die. Of course a sata port can also die, but that happens less often. Further, it seems the other (raid1) partitions are assembled and mounted fine, which makes me think the disk has some bad sectors, which is only a problem when *that* sector is accessed. A defective sata port wouldn't behave that way, I think.
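If you want to check that theory, the SMART data of the suspect disk will usually tell (assuming smartmontools is installed; /dev/sdb is just an example):
Code: Select all
smartctl -a /dev/sdb
Non-zero Current_Pending_Sector or Reallocated_Sector_Ct values, or read errors in the SMART error log, would fit the bad-sector picture.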
How can I mount the RAID out of the box with this HDD?
As it's raid0, you'll have to connect both disks. That can be sata, esata, usb or a combination. Assuming these are disks sdb and sdc, you can assemble the array with
Code: Select all
mdadm --assemble /dev/md0 /dev/sdb2 /dev/sdc2
(If md0 is already in use, you'll have to use another one.)
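You can verify the array actually came up with:
Code: Select all
cat /proc/mdstat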
Then mount it:
Code: Select all
mount -t xfs /dev/md0 /tmp/mountpoint -o ro
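The mountpoint has to exist, so create it first if it doesn't:
Code: Select all
mkdir -p /tmp/mountpoint
Mounting read-only (-o ro) is just a precaution while the disk is suspect; once you're sure the data is fine you can remount read-write.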