Thought I'd at least try and help someone before proceeding further.
That's the attitude!
I've been helped over the years through posting on forums, so I feel it my duty to stand and deliver where and when possible.
Mijzelf wrote:Pity. I don't see any advantage in RAID0 above JBOD for these devices. RAID0 can give a performance gain, but in a NAS like this the bottleneck is either the network or the processor. When 1 disk dies you still can recover some data from a JBOD, but not from a RAID0. Did tech support say why they did so?
I didn't press the techie on why RAID0 vs JBOD. And yes, I agree wholeheartedly about the better odds of recovering data from a JBOD. I wish I could saturate what I have here. Though many buy gigabit-outfitted devices thinking otherwise, it just ain't gonna happen on this type of "consumer" oriented device. Now when FC goodies begin dropping in price...
When you compare your partitions to this list, it makes sense. The 3 ext3 partitions (part #6-8) start at x-1-1, because they are in a linked extended partition list. At x-0-1 there is a link to the next extended partition. ext3 #3 seems too big to be part #8, but maybe part #9 is inside it. XFS 4 #1 is a primary partition starting at x-0-1, which is about 2 TB, as expected. XFS 4 #2 is junk.
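That x-0-1 link is the classic EBR chain: each extended boot record holds one entry for the logical partition right after it and one entry linking to the next EBR. Here's a toy sketch of walking such a chain, built against a small synthetic in-memory "disk" (the LBA numbers and partition types are made up for illustration, not taken from the actual drives in this thread):

```python
import struct

SECTOR = 512

def make_entry(ptype, start_lba, size):
    # CHS fields zeroed for simplicity; modern tools use the LBA fields anyway
    return struct.pack("<B3sB3sII", 0, b"\0\0\0", ptype, b"\0\0\0", start_lba, size)

def make_sector(entries):
    # 446 bytes of boot code area, 4 x 16-byte entries, 0x55AA signature
    table = b"".join(entries) + b"\x00" * 16 * (4 - len(entries))
    return b"\x00" * 446 + table + b"\x55\xAA"

# Synthetic disk: MBR with one extended container at LBA 2048,
# holding two logical Linux partitions chained via EBRs.
ext_start = 2048
disk = {}
disk[0] = make_sector([make_entry(0x05, ext_start, 4096)])
# EBR 1: logical #1 starts 1 sector in; second entry links to the next EBR
disk[ext_start] = make_sector([make_entry(0x83, 1, 2047),
                               make_entry(0x05, 2048, 2048)])
# EBR 2: logical #2, no further link -- end of chain
disk[ext_start + 2048] = make_sector([make_entry(0x83, 1, 2047)])

def read_entries(sector):
    out = []
    for i in range(4):
        raw = sector[446 + 16 * i:446 + 16 * (i + 1)]
        ptype = raw[4]
        start, size = struct.unpack("<II", raw[8:16])
        if ptype:
            out.append((ptype, start, size))
    return out

def walk_chain(disk):
    logicals = []
    ext = next((s for t, s, _ in read_entries(disk[0]) if t in (0x05, 0x0F)), None)
    ebr = ext
    while ebr is not None:
        link = None
        for ptype, rel_start, size in read_entries(disk[ebr]):
            if ptype in (0x05, 0x0F):
                link = ext + rel_start   # link entries are relative to the extended container
            else:
                logicals.append((ptype, ebr + rel_start, size))  # data entries are relative to this EBR
        ebr = link
    return logicals

print(walk_chain(disk))  # [(131, 2049, 2047), (131, 4097, 2047)]
```

The two different bases (container start for links, current EBR for data) are exactly why the logical partitions land at x-1-1 while the link sits at x-0-1, which matches what TestDisk is reporting here.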
You know/understand more about the RAID aspect of this predicament than I do, knock on wood. I can look at a single disk and not be confused, but this is a wee bit different to me. I won't have time until this weekend to let TestDisk make a full run on 'em. Between the 50% I quoted above and 56%, it (TD) had located another two XFS 4 partitions, and for the life of me at this moment I don't remember their sizes. TD races to 50%, then slows way, way down processing the drives, and I had to get to bed.
I'll move the S2S and these EBD disks over to another machine this weekend, one I'm not all too concerned about leaving running, and post the TD findings. Maybe, hopefully, you'll see this post and reply with which partitions I should write.