NAS (Network Attached Storage) is commonly understood to be a storage device containing several hard drives and connected via the network. However, there are devices, for example the Western Digital My Book Studio Edition II, which have much in common with a NAS but do not match the NAS definition, because they connect over a local (directly attached) interface rather than over a network. As regards the operating system, it is good old Linux with its RAID driver (md).
In practice, few people verify that recovered data is actually intact, even though such verification is recommended in most data recovery instructions (e.g. in data recovery guide first actions).
So the following situation may happen:
- you recover files and folder tree seems reasonably correct,
- you format the hard disk, discard it, or make the disk useless in some other way,
- you see that the recovered data is corrupted.
You then need to extract the data again, but unfortunately the original hard disk content is already gone.
To sum up the above: verify the extracted data first, and only after checking start writing to the hard disk that was used in the recovery.
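One minimal form of such verification (my own sketch, not a standard tool) is to walk the recovered tree and flag files that are empty, unreadable, or begin with nothing but zero bytes, a common symptom of a bad reconstruction:

```python
import os

def find_suspicious_files(root, sample_size=4096):
    """Spot-check recovered files: flag empty files, unreadable files,
    and files whose first bytes are all zeros (a common sign that the
    recovery produced garbage)."""
    suspicious = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) == 0:
                    suspicious.append((path, "empty"))
                    continue
                with open(path, "rb") as f:
                    chunk = f.read(sample_size)
                if chunk.count(0) == len(chunk):
                    suspicious.append((path, "all zeros"))
            except OSError:
                suspicious.append((path, "unreadable"))
    return suspicious
```

It only catches gross corruption; spot-opening a sample of documents and photos by hand is still worthwhile.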
For RAID 4, it is impossible to process two write operations in parallel, because every write has to update the single dedicated parity disk.
For RAID 5, where parity is distributed across all the disks, the probability that two writes can proceed in parallel is not zero, unless the RAID 5 consists of only three disks. This probability is computed using the formula below:
(N-2)/N * ((N-3)/(N-1))
where N denotes number of disks in RAID 5, including parity.
Let’s see how the probability depends on the number of member disks:
- 30% for RAID 5 of 5 member disks
- 40% for RAID 5 of 6 member disks
- 47% for RAID 5 of 7 member disks
- 53% for RAID 5 of 8 member disks.
It can be seen that the larger the number of member disks, the greater the probability that two write operations will proceed in parallel.
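The figures above can be reproduced with a short sketch (the function simply evaluates the formula; note the list above truncates rather than rounds, so 47.6% appears as 47%):

```python
def parallel_write_probability(n):
    """Probability that two concurrent single-block writes in an n-disk
    RAID 5 touch four distinct disks (data + parity for each write) and
    can therefore proceed in parallel: (n-2)/n * (n-3)/(n-1)."""
    return (n - 2) / n * (n - 3) / (n - 1)

for n in range(3, 9):
    print(f"{n} disks: {parallel_write_probability(n):.1%}")
```

For n = 3 the result is exactly zero, matching the statement that a three-disk RAID 5 behaves like RAID 4 for writes.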
Let’s summarize the above: as far as read speed is concerned, RAID 4 is very similar to RAID 5; as for write speed, only three-disk arrays perform the same, while a RAID 5 with more than three member disks beats a RAID 4. Actually, RAID 4 is a rather unusual array layout, so even web resources that help you configure an array, such as www.raid-calculator.com, do not mention RAID 4 at all. In my personal opinion, it does have some merit nonetheless.
A couple of days ago I came across a somewhat amusing web site, www.raid-failure.com, on which one can get different types of predictions for a particular RAID configuration. All you need to do is fill in the form with the following parameters:
For a RAID 0:
- how many member disks are in the array,
- how old the disks are,
- the desired optimism level for the prediction (roughly, whether you believe everything will be fine or assume things are bad).
For a RAID 5:
- how many member disks are in the array,
- the size of a member disk,
- the URE value according to the vendor,
- how many groups are in the array,
- how many member disks are in one group.
This free online RAID failure estimator displays different predictions for different array types. For a RAID 0, you get the probability that the array does not fail within one to five years. For a RAID 5, the calculator gives the probability of completing a rebuild without encountering a failure; for a RAID 10/50/60, you find out the probability of surviving multiple disk failures.
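For the RAID 5 rebuild case, the standard back-of-the-envelope model (the site's exact formula is not published here, so treat this as an assumption) compounds the per-bit URE probability over everything that must be read during a rebuild:

```python
def rebuild_success_probability(disks, disk_tb, ure=1e14):
    """Chance that a RAID 5 rebuild completes without hitting an
    unrecoverable read error (URE). During a rebuild, all (disks - 1)
    surviving members are read end to end; each bit is assumed to fail
    independently with probability 1/ure (the vendor URE spec, commonly
    1 error per 1e14 bits for consumer drives)."""
    bits_to_read = (disks - 1) * disk_tb * 1e12 * 8
    return (1 - 1 / ure) ** bits_to_read

# Example: four 2 TB disks with a consumer-grade URE of 1e14.
print(f"{rebuild_success_probability(4, 2):.1%}")
```

The model shows why large arrays of big consumer disks are risky: the success probability drops quickly as capacity grows, while enterprise drives (URE around 1e15) fare much better.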
When using RAID 1, it is necessary to read the data stored on the second member disk as well. If you do not check the second disk by reading data off it on a regular basis, a bad sector may develop unnoticed on the second member disk of the RAID 1. At that point your RAID 1 no longer provides redundancy, and on top of that you do not know it. It should be noted that the same is true for hot spare drives.
In most cases this problem occurs when the RAID 1 sits idle most of the time; heavily loaded arrays do not experience this issue. To reduce the risk, one should regularly read data from both drives and compare it.
It is known that the Windows software RAID 1 usually reads sectors from the first disk only. This makes the Windows RAID 1 less reliable in this respect.
There are the following types of data recovery:
“Data recovery”, referring either to any kind of recovery in general, or specifically to the extraction of files from a filesystem that no longer mounts normally.
RAID recovery, which reconstructs RAID configuration. RAID recovery does not know about files, folders, filesystems, or partitions.
Partition recovery, seeking to re-create the partition table in order to have accidentally deleted partitions reappear as they were.
Unlike all the other data recovery types discussed here, partition recovery actually changes the content of the hard drive. Partition recovery should not be confused with a recovery partition, a feature found mostly on laptops.
Photo recovery, which aims to extract digital image files from any storage. Photo recovery does not rely on the filesystem; it identifies files by their headers instead.
Raw recovery, the broader version of the photo recovery method, which does not limit the recovery to just the digital photo files. In raw recovery, all possible file headers are identified and all possible files recovered.
Photo and raw recovery differ from the other recovery types in that file names and the folder structure are lost during the recovery. Keep in mind that file fragmentation is ignored in raw (and photo) recovery, which can sometimes leave recovered files damaged. Raw recovery should not be confused with recovery from a RAW file system, which is a kind of data loss event.
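A toy illustration of the raw/photo approach, scanning a disk image for JPEG signatures (real carvers know many more formats and apply fragmentation heuristics; this sketch assumes each file is contiguous):

```python
JPEG_SOI = b"\xff\xd8\xff"   # JPEG start-of-image signature
JPEG_EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(image_bytes):
    """Return (offset, data) pairs for every JPEG-looking region found
    by signature scanning, with no help from any filesystem metadata.
    File names and folders are unrecoverable by design."""
    results = []
    pos = 0
    while True:
        start = image_bytes.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = image_bytes.find(JPEG_EOI, start + len(JPEG_SOI))
        if end == -1:
            break
        results.append((start, image_bytes[start:end + 2]))
        pos = end + 2
    return results
```

If the file was fragmented on disk, the bytes between header and end marker belong to something else, which is exactly why carved files sometimes come out damaged.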
Here I mostly talk about older hardware, items one regrets not having tossed into the bin.
The capacity limit on a hard drive existed because the older LBA standard allowed only 28 bits for a sector address. Hence, a drive was limited to 2^28 = 268,435,456 sectors. With 512 bytes per sector, this works out to 137,438,953,472 bytes, which equals exactly 128 binary gigabytes.
Note that 128 GB (137,438,953,472 bytes) is in fact slightly above 137 billion bytes. Hardware makers, unlike software makers, use decimal units when speaking about hard drive sizes. Because of this, a drive manufacturer can advertise the very same 128 GB hard disk as having a size of 137 GB (see The difference between hard drive and file system size).
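The arithmetic is easy to check:

```python
SECTOR_BYTES = 512
lba28_sectors = 2 ** 28          # 28-bit sector address space

capacity = lba28_sectors * SECTOR_BYTES
print(lba28_sectors)             # 268435456 sectors
print(capacity)                  # 137438953472 bytes
print(capacity / 2 ** 30)        # 128.0 binary gigabytes
print(capacity / 10 ** 9)        # ~137.4 decimal gigabytes
```

The same number is "128 GB" in binary units and "137 GB" in decimal ones, which is where the marketing discrepancy comes from.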
One can encounter the 128 GB limit when the motherboard is badly outdated (something 2002-ish), or when using an OS that is massively out of date (think Windows 2000, or Windows XP prior to SP2).
The corrective actions include
- Flashing the latest available motherboard BIOS. This may or may not work, depending on the specific motherboard.
- Updating the operating system. If all other components are up to speed, that usually solves the issue.
Most often, the reduction in capacity results from old hardware or software that is not compatible with newer hard disks. There is one notable exception: the Host Protected Area, or HPA for short (discussed in my previous post).
The HPA is a hard drive feature that allows part of the drive's capacity to be hidden from the OS. It was implemented for backward compatibility, so that a large hard drive can be used in an old PC which does not support large disks. If activated unintentionally, the HPA fools the OS into behaving as if the hard disk were smaller than it really is.
If this condition is not desired, you can reset the HPA, usually with vendor-supplied software.
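On Linux, `hdparm` can inspect and lift an HPA when vendor software is unavailable; the device name and sector count below are placeholders, not values for your drive:

```shell
# Show the HPA state; "max sectors = X/Y" with X < Y means an HPA is set.
hdparm -N /dev/sdX

# Raise the visible limit to the native maximum Y reported above.
# The "p" prefix makes the change permanent; recent hdparm versions
# also require --yes-i-know-what-i-am-doing for this operation.
hdparm -N p1953525168 /dev/sdX
```

Setting the wrong value can make partitions inaccessible, so note the original numbers before changing anything.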
Laptop vendors may or may not supply a recovery disc. Makers who do not give a CD to their users generally put the original Windows installation into a special partition, along with a tool capable of deploying that installation back to the drive.
Such a partition containing the OS is placed at the end of the drive and hidden from Windows by means of an HPA (Host Protected Area, described nicely e.g. at www.disk-space-guide.com).
The recovery process for a laptop with Host Protected Area can be the following:
- you press certain keys during startup sequence
- BIOS discards HPA limitations
- Windows is loaded off the now-visible recovery partition
- a special tool starts from this partition. This tool formats the drive and copies the factory Windows installation to the disk.
When the recovery is complete, the HPA is set back. After the process finishes, the laptop is, software-wise, as good as new from the store.
Rumor has it that it is impossible to boot an operating system from a software RAID array.
This is not exactly true. All the widespread boot loaders, namely those of Windows NT/2000 and later, Linux LILO, and GRUB, can successfully boot from a RAID 1.
It should be noted that further steps are needed to make the mirror bootable. Since the mirroring does not cover the Master Boot Record, you need to copy the MBR sector manually to the second drive in the mirror. Otherwise, when the boot drive crashes, you are left with an unbootable system.
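On Linux the copy is a single `dd if=/dev/sda of=/dev/sdb bs=512 count=1` (triple-check the device names). The sketch below performs the same one-sector copy on scratch image files so it can be run harmlessly; note that some boot loaders also keep code beyond sector 0, which a one-sector copy does not cover:

```python
SECTOR = 512

def copy_mbr(src_path, dst_path):
    """Copy sector 0 (boot code + partition table) from src to dst,
    leaving the rest of dst untouched; the equivalent of
    dd if=src of=dst bs=512 count=1 conv=notrunc."""
    with open(src_path, "rb") as src:
        mbr = src.read(SECTOR)
    with open(dst_path, "r+b") as dst:
        dst.write(mbr)

# Demonstration on two scratch "drives" rather than real devices.
for name in ("boot.img", "mirror.img"):
    with open(name, "wb") as f:
        f.write(b"\x00" * 1024 * 1024)   # 1 MiB blank image

with open("boot.img", "r+b") as f:
    f.write(b"fake-mbr-boot-code")       # stand-in boot code in sector 0

copy_mbr("boot.img", "mirror.img")
```

Repeat the copy whenever the boot loader is reinstalled, since the mirror will not pick up MBR changes on its own.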
Surely there will be no loss of data: all you need is to mount the surviving drive in a known-good system and read it. But there is no automatic recovery if the boot disk dies; a reboot is required.
The new Western Digital drives, like the WDBAAF0020HBK My Book Essential 2TB External USB, feature built-in hardware AES encryption. Such drives are sometimes called “Self-Encrypting Drives”, or SEDs.
Surprisingly, the content written to the WD My Book is scrambled even when no password is set. Once the USB-to-SATA bridge stops working, the cipher keys are lost and the data cannot be recovered, even though the storage itself is working fine. Considering that, in practice, a failure of the encryption chip is more likely than the disk actually falling into the wrong hands, the always-on protection does not look like a very good idea.
How did the designers come to implement the encryption in such a way?
The rationale behind this choice is the speed of changing or resetting a password. Under a "no password = no encryption" policy, setting or changing the password would require re-encrypting the entire hard drive, taking several hours. And that is even before considering complex failure modes, such as several overlapping power failures mid-way. The same consideration applies to password removal.
So the designers chose the faster option. The master encryption key is generated once during production and flashed into the controller's EEPROM. All the data on the disk is encrypted with this master key, all the time, regardless of whether a user password is set. When the user sets a password, it is the master key that gets encrypted with the password.
Since the data on the drive is already encrypted, you cannot read it without the master key, and the master key is not available unless you have the valid password.
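The key-wrapping idea can be sketched as follows (stdlib only; the XOR "cipher" is purely illustrative, a real SED uses hardware AES, and the key-derivation details here are my assumption, not WD's actual design):

```python
import hashlib
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def wrap_master_key(master_key, password, salt):
    """Derive a key-encryption key (KEK) from the password and use it
    to encrypt the 32-byte master key. Changing the password re-wraps
    only these 32 bytes; the disk contents never need re-encrypting.
    XOR stands in for a real cipher purely for illustration."""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return xor_bytes(master_key, kek)

def unwrap_master_key(wrapped, password, salt):
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return xor_bytes(wrapped, kek)

# Factory step: the master key is generated once; all user data is
# always encrypted with it, password or not.
master_key = secrets.token_bytes(32)
salt = secrets.token_bytes(16)

wrapped = wrap_master_key(master_key, "user password", salt)
assert unwrap_master_key(wrapped, "user password", salt) == master_key
assert unwrap_master_key(wrapped, "wrong password", salt) != master_key
```

The sketch also shows the failure mode from the text: lose the wrapped key (or the chip holding it) and the data is gone, password or not.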
In this setup if the encryption module goes bad, the content of the disk is lost forever.
As a side effect, this approach eliminates the need for a separate secure erase: destroying the master key instantly renders all the data unreadable.