How was the RAID organized? I suppose RAID5 with single parity, right?
The RAID manager has some strong constraints on the timing of its member disks: when a disk doesn't respond to a command in time, it has to be dropped from the array. The disk could still be healthy, but due to wear it sometimes needs a little more time.
Now your array has dropped 2 disks, which means the array is down, as it needs at least 4 disks to run. You have replaced one disk, but unfortunately that doesn't help, because you copied it in its dropped state, so it still needs to resync. And there is no way to resync, as the parity is down too.
I *think* you should be able to restore the data, as probably only a few sectors of the dropped disk will be out of sync. So if you tell the raid manager to assume the disks are in sync, the internal filesystem might mount just fine. With some luck the damage is in some slack space, or in a system file.
Connect the 4 or 5 disks to a Linux box (SATA, eSATA, USB-to-SATA, or a combination).
Find out the device names of the data partitions.
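To identify them, lsblk plus mdadm's examine mode is usually enough. A minimal sketch; the device name /dev/sdb2 below is just an example, yours will differ:

```shell
# List all disks and their partitions with size and type; the RAID
# data partition is usually the largest partition on each member disk.
lsblk -o NAME,SIZE,TYPE

# Inspect the md superblock on a candidate partition (example device
# name; substitute your own). This prints the array UUID, RAID level,
# and this member's role and event count. Guarded so the line is a
# no-op when the device doesn't exist on this machine.
[ -b /dev/sdb2 ] && mdadm --examine /dev/sdb2 || true
```

Partitions whose examine output shares the same array UUID belong to the same array; large differences in the event counts tell you which members fell out of sync first.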
mdadm --assemble --run --force /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 [/dev/sdf2]
Of course you'll have to provide your own device names. If that works, mark the array as read-only
mdadm --readonly /dev/md0
Now try to mount it.
mount -o ro /dev/md0 /mnt/mountpoint
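If the mount succeeds, it's worth checking the array's state before copying anything off. A read-only sketch, assuming the /dev/md0 name from the assemble step above:

```shell
# Show which arrays the kernel knows about, their members, and the
# up/down status line (e.g. [UUUU_] means one member is missing).
# Guarded so the line is a no-op where /proc/mdstat doesn't exist.
[ -r /proc/mdstat ] && cat /proc/mdstat || true

# Detailed view: array state, active/failed device counts, and each
# member's role. Guarded in case the array isn't present.
[ -b /dev/md0 ] && mdadm --detail /dev/md0 || true
```

Copy your data to another disk first; only after that is secured should you consider rebuilding or resyncing the array.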