Computer Science Labs

RAID Server Technical Support:






Computer Science Labs Ltd. provides emergency specialist support and assistance in cases of hard disk drive, hardware or software faults resulting in RAID failure, server failure or NAS failure.

Nationwide Professionals.



We provide professional 24x7 support in London, Manchester, Leeds, Birmingham, Glasgow, Edinburgh, Belfast and throughout the UK.






We Support ALL RAID Systems.

Data recovery, fault repair and system restoration from:

• Any RAID-configured server: RAID 5, RAID 0, RAID 1, RAID 6, RAID 10, etc.
• Any OS: Windows, Mac, Linux, XFS, etc.
• Any machine: Dell, HP, IBM, Supermicro, SNAP Servers, etc.

Onsite RAID data retrieval and restoration throughout the UK and Europe. We can successfully recover and rebuild your server at reasonable and competitive rates.

RAID and Server Overview.

In the past decade, most businesses have moved from single hard disk storage to high-speed multiple-disk storage systems such as RAID (Redundant Array of Independent Disks). RAID systems are configured both to increase performance and to safeguard mission-critical applications. A RAID combines multiple disks into a single logical unit in one of two ways:

A hardware RAID is created by a RAID controller and appears as a single hard drive to any operating system.

A software RAID is created by the operating system itself (typically through its volume manager or disk driver) and is visible as a RAID only to that operating system.

The most common RAID configurations are RAID 0, RAID 1 and RAID 5. Some hardware RAIDs use more complex and expensive RAID controllers that support RAID 6, RAID 5E, RAID 5EE, RAID 10 and so on.

Because of the strong promotional emphasis on the fault tolerance and auto-rebuild functions of RAID, users often have the perception that a RAID will never fail, so up-to-date backups may not have been taken when they find their server has failed. To the surprise of many, RAIDs can and often do fail.

The individual magnetic storage media in RAID systems suffer from the same types of failure as conventional hard drives in personal computers and workstations, and as the complexity of server operating systems increases, further data-loss situations arise:

Software Failures

  • Accidental deletions
  • Accidental reformatting
  • Missing partitions
  • RAID hard drive firmware corruption
  • Overwritten RAID configuration or settings
  • RAID configuration corruption
  • Virus attack
  • Unbootable system
  • Accidental reconfiguration of RAID drives


Hardware Failures

  • Single or multiple hard drive bad sectors
  • Single or multiple hard drive electronic/PCB failure
  • Single or multiple hard drive head assembly failure
  • Single or multiple hard drive head crash
  • RAID Controller malfunction
  • Accidental replacement/swap of media components


Though RAID disk arrays offer increased redundancy, capacity and performance over standard disk systems, once they fail they are complex and more difficult to recover. Recovering the data is far from a trivial task: even the most experienced system engineers, familiar with standard RAID configurations, may lack the skills needed to rescue a corrupted or inaccessible RAID volume. However, by using the RAIDScope application, the chance of recovering vital data is greatly improved.

In all the RAID failure scenarios mentioned above, the system configuration must be discovered and restored before data files such as documents and databases can be read and extracted. Our technicians capture specific information about the RAID system (the operating system, hardware configuration and failure type) and transfer the corrupt system files to our technical support department for analysis, to ascertain the specific configuration needed to rebuild the data.

RAIDScope gives the user access to highly skilled RAID engineers who are able to recover data from all types of server configuration, including RAID 5/5E/5EE, RAID 0, RAID 1, RAID 10 and RAID 6, and from all operating systems.

RAID Technology Explained:



One of the greatest assets of a business is its operational and customer data. Millions of pounds are spent on compliance solutions to back up, replicate and store data, all in an attempt to mitigate data loss.

Backups and replication do not actually prevent data loss; they are ways to recover from a system outage or disk failure. The best way to guard against possible data loss is to implement a disk configuration based on RAID technology.

RAID (redundant array of independent disks) was first described in a 1988 paper from U.C. Berkeley, and today there are many implementations of the same concept.

RAID 0:
Known as disk striping: a stripe of data is written equally across a group of disks. If one of these disks fails, all of the data on the group is lost. While not a safe way to protect data, striping delivers higher performance than an equal number of independent disks, so RAID 0 is rarely used alone in business solutions but is frequently combined with other RAID levels for faster performance. It is now readily found in budget home storage solutions.
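To make striping concrete, here is a minimal Python sketch of how a RAID 0 layout can map a logical byte offset to a disk and a physical offset. The 64 KB stripe unit and the function name are illustrative assumptions, not any particular vendor's scheme:

    # Illustrative sketch of RAID 0 address mapping (not a real driver).
    # A hypothetical stripe unit of 64 KB is assumed across n_disks drives.

    STRIPE_UNIT = 64 * 1024  # bytes per stripe unit (assumed for illustration)

    def raid0_map(logical_offset: int, n_disks: int) -> tuple[int, int]:
        """Map a logical byte offset to (disk index, physical byte offset)."""
        stripe_number = logical_offset // STRIPE_UNIT
        disk = stripe_number % n_disks                # stripes rotate round-robin
        physical_stripe = stripe_number // n_disks    # stripe row on that disk
        offset_in_stripe = logical_offset % STRIPE_UNIT
        return disk, physical_stripe * STRIPE_UNIT + offset_in_stripe

    # Example: where does byte 300,000 land in a 4-disk RAID 0 set?
    print(raid0_map(300_000, 4))  # -> (0, 103392)

Because consecutive stripes land on different disks, large reads and writes are serviced by all the drives in parallel, which is exactly where RAID 0's speed advantage comes from.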

RAID 1:
This is where the same data is written to two disks. If a disk fails, data is read from the other disk; when the failed disk is replaced, the data on the surviving disk is used to recreate the pair. All of this happens with no loss of data to the host applications. RAID 1 is one of the most commonly used RAID levels and performs very well for both reads and writes.
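The mirroring and rebuild semantics can be sketched in a few lines of Python. This is a toy in-memory model under assumed names (Raid1, replace_disk), not a real driver:

    # Minimal sketch of RAID 1 mirroring semantics (in-memory toy, not a driver).

    class Raid1:
        def __init__(self, size: int):
            self.disks = [bytearray(size), bytearray(size)]  # two mirrored copies
            self.failed = [False, False]

        def write(self, offset: int, data: bytes) -> None:
            for i, disk in enumerate(self.disks):
                if not self.failed[i]:                 # write to every healthy disk
                    disk[offset:offset + len(data)] = data

        def read(self, offset: int, length: int) -> bytes:
            for i, disk in enumerate(self.disks):
                if not self.failed[i]:                 # read from any surviving disk
                    return bytes(disk[offset:offset + length])
            raise IOError("both mirrors failed")

        def replace_disk(self, i: int) -> None:
            self.disks[i] = bytearray(self.disks[1 - i])  # rebuild from survivor
            self.failed[i] = False

    vol = Raid1(16)
    vol.write(0, b"payroll")
    vol.failed[0] = True           # one disk fails
    print(vol.read(0, 7))          # data still served: b'payroll'
    vol.replace_disk(0)            # pair recreated from the surviving disk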

RAID 3:
RAID 3 uses an error-correcting code called parity to protect against the loss of a single disk. Data is written in parallel, in bytes, to the data disks (at least two), while parity is written to a dedicated disk. The disk spindles are synchronized (each byte of a stripe of data, and that data's parity, occupies the same area on each disk), which increases throughput by minimizing disk head movement. When a data disk fails, the data from the dedicated parity disk is used to recreate the data to serve host requests and to rebuild the failed drive once replaced. If the parity disk fails, the data disks are used to recreate parity, which is written to the replacement parity disk. RAID 3 is best for large sequential data access (e.g. video streaming); performance for small, random access is slow, since every I/O requires activity on every disk. RAID 3 is rarely used today, since better performance and identical protection can be achieved with RAID 5.

RAID 4:
RAID 4 is similar to RAID 3 (striped parity with a dedicated parity disk) except that data is written in blocks, not bytes. Writing blocks of data increases random access performance, since an I/O may only require access to one disk instead of every disk in the group as with RAID 3, but the dedicated parity disk can be a bottleneck for writes. Recovery from a lost drive works the same way as in RAID 3. RAID 4 is not widely adopted.

RAID 5:
RAID 5, like RAID 3 and RAID 4, uses parity to protect the data from a single disk failure. Unlike levels 3 and 4, the parity is rotated, or distributed, across all of the drives in the volume. Read performance is substantially better than for a single disk because there is independent access to each disk. As with levels 3 and 4, write performance can suffer because of the parity processing, but with parity striped across all the drives there is no single-disk bottleneck.

The major advantage of RAID 5 configurations is that they are scalable, as more disks provide more independent access. In the case of a disk failure, the data from the lost drive is computed from the parity stored on the other drives in the disk group, using the XOR arithmetic function.
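A short Python sketch makes the XOR mechanism concrete. This is a toy model of a single stripe (real arrays rotate the parity block across drives), not a recovery tool:

    # Toy illustration of RAID 5 parity: parity = XOR of the data blocks.

    def xor_blocks(blocks: list[bytes]) -> bytes:
        """XOR a list of equal-length blocks byte by byte."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    # One stripe across a 4-drive set: three data blocks plus one parity block.
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Drive 1 fails: XOR-ing the surviving blocks with the parity cancels the
    # known blocks and leaves exactly the missing one.
    survivors = [data[0], data[2], parity]
    recovered = xor_blocks(survivors)
    assert recovered == data[1]
    print(recovered)  # b'BBBB'

The same cancellation property is why a RAID 5 set survives any single drive loss but not two: with two blocks missing, the XOR of the survivors no longer isolates either one.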

There are three methods of implementing RAID: software, RAID controllers and storage arrays.

Software RAID - RAID implemented on a server by software uses internal drives or an external JBOD (just a bunch of disks). The software, usually a logical volume manager, manages all of the mirroring of data or parity calculations. The overhead associated with the parity calculations can be excessive and may cause applications to run slowly, so software RAID is good for a single server but is not recommended for I/O-intensive applications. Software RAID is often used in conjunction with a storage array to create "PLAID" RAID.

RAID Controller - Another way to implement RAID on a server is to use a RAID controller: a processor card added to the server to offload the RAID functionality from the CPUs. RAID controllers are a far better solution for a single server than software RAID, since the server's CPUs spend no processing power calculating parity or managing mirrored data. Like software RAID, RAID controllers use either internal drives or a JBOD. A server-based RAID controller can, however, fail and become a single point of failure.

Storage Array - A storage array usually consists of two high-performance, redundant RAID controllers and trays of disks. All pieces of the array are redundant and built to withstand the rigors of a production environment in which many servers access the storage at the same time. Storage arrays support multiple RAID levels and different drive types and speeds, and usually offer snapshots, volume copy and the ability to replicate from one array to another. If servers need high performance or large capacities, storage arrays are the right choice.

RAID is a necessary building block for any company's data protection needs. Without RAID, even a small glitch in a disk drive could cause data loss. Thankfully, with software RAID, server-based RAID controllers and external storage arrays, all companies, from the smallest to the largest, can find a RAID solution to protect their data.

Setting Higher Ambient Temperature Levels in Server Rooms:

A new study from researchers at the University of Toronto provides real-world data on the impact and implications of raising the temperature in data centres.

The study covered more than a dozen data centres at three different organizations and a broad range of reliability issues.

The effect of high ambient temperatures on system reliability is smaller than often assumed:

• For specific reliability issues, namely DRAM failures and node outages, there is no direct correlation with relatively higher temperatures.
• For those error conditions that do show a correlation, i.e. latent sector errors in disks and disk failures, the correlation is much weaker than expected.
• For (device-internal) temperatures below 50°C, errors tend to grow linearly with temperature, rather than exponentially as existing models suggest.
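The difference between those two growth models is easy to see in a few lines of Python. The parameters below (reference temperature, linear slope) are illustrative assumptions, not figures from the study:

    # Contrast two failure-rate models (illustrative parameters, not measured data).
    BASE_RATE = 1.0   # relative failure rate at the reference temperature
    T_REF = 30.0      # reference internal temperature, degrees C (assumed)

    def exponential_model(t_c: float) -> float:
        """Classic rule of thumb: rate doubles for every 10 C above T_REF."""
        return BASE_RATE * 2 ** ((t_c - T_REF) / 10.0)

    def linear_model(t_c: float, slope: float = 0.07) -> float:
        """Linear growth per degree, as the Toronto data suggests below ~50 C."""
        return BASE_RATE * (1.0 + slope * (t_c - T_REF))

    for t in (30, 40, 50):
        print(f"{t} C: exponential {exponential_model(t):.2f}x, "
              f"linear {linear_model(t):.2f}x")

At 50°C the exponential rule predicts a fourfold increase over the reference rate, while a linear model with this assumed slope predicts well under half of that, which is what makes the study's finding significant for cooling budgets.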

The results are strong evidence that most organizations could run their data centres hotter than they currently do without making significant sacrifices in system reliability.

The findings have implications for data centre operators who want to cut the energy bills involved in cooling, and could broaden the use of free cooling (the use of fresh air instead of air conditioners to cool servers).

Most data centres operate in a temperature range between 68 and 72°F, and some are as cold as 55°F. Raising the baseline temperature inside the data centre, known as the set point, can save money by reducing the amount of energy used for air conditioning. It has been estimated that data centre managers could save 4% in energy costs for every degree of upward change in the set point.
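As a back-of-the-envelope check of that 4%-per-degree estimate, here is a small Python sketch. The annual cooling bill is a hypothetical figure, and compounding the saving per degree is an assumption about how the estimate applies:

    # Back-of-the-envelope: compound 4% cooling-energy savings per degree F
    # of set-point increase (the article's estimate; the bill is hypothetical).
    annual_cooling_bill = 100_000.0   # GBP per year, assumed for illustration
    savings_per_degree = 0.04

    def bill_after_raise(degrees_f: float) -> float:
        """Cooling cost after raising the set point by `degrees_f` degrees."""
        return annual_cooling_bill * (1 - savings_per_degree) ** degrees_f

    for d in (1, 4, 9):   # e.g. 68F -> 72F is 4 degrees; 68F -> 77F is 9
        saved = annual_cooling_bill - bill_after_raise(d)
        print(f"+{d} F: save about GBP {saved:,.0f} per year")

On these assumptions a 4-degree increase saves roughly 15% of the cooling bill, which is why even modest set-point changes attract attention.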

Feeling the heat:

There are several reasons for caution, however. Nudging the thermostat higher is only appropriate for companies with a strong understanding of the cooling conditions in their facility, and warmer set points may allow less time to recover from a cooling failure. The other major issue, reinforced by the Toronto study, is the challenge of managing server fan activity: fans tend to kick in as the temperature rises, nullifying the gains from turning down the cooling.

The Toronto study also suggested that heat may be less important than temperature fluctuation in causing hardware failures. "Even failure conditions, such as node outages, that did not show a correlation with temperature, did show a clear correlation with the variability in temperature," the authors wrote. "Efforts in controlling such factors might be more important in keeping hardware failure rates low, than keeping temperatures low."

James Hamilton, a researcher at Amazon Web Services, says the new data is valuable in updating the industry's understanding of the relationship between temperature and hardware.

"An often quoted study reports the failure rate of electronics doubles with every 10C increase of temperature (MIL-HDBK 217F)," Hamilton writes on his blog. "This data point is incredibly widely used by the military, NASA space flight program, and in commercial electronic equipment design. I'm sure the work is excellent, but it is a very old study, wasn't focused on a large data centre environment, and the rule of thumb that has emerged from it is a linear model of failure to heat."
     
           
© Copyright Computer Science Labs (2017)