Reliability, availability and serviceability
Reliability, availability and serviceability (RAS), also known as reliability, availability, and maintainability (RAM), is a computer hardware engineering term involving reliability engineering, high availability, and serviceability design. The phrase was originally used by International Business Machines (IBM) as a term to describe the robustness of their mainframe computers.[1][2]
Computers designed with higher levels of RAS have many features that protect data integrity and help them stay available for long periods of time without failure.[3] This data integrity and uptime is a particular selling point for mainframes and fault-tolerant systems.
Definitions
While RAS originated as a hardware-oriented[citation needed] term, systems thinking has extended the concept of reliability-availability-serviceability to systems in general, including software:[4]
- Reliability can be defined as the probability that a system will produce correct outputs up to some given time t.[5] Reliability is enhanced by features that help to avoid, detect and repair hardware faults. A reliable system does not silently continue and deliver results that include uncorrected corrupted data. Instead, it detects and, if possible, corrects the corruption: by retrying an operation for transient (soft) or intermittent errors; for uncorrectable errors, by isolating the fault and reporting it to higher-level recovery mechanisms (which may fail over to redundant replacement hardware, etc.); or by halting the affected program or the entire system and reporting the corruption. Reliability can be characterized in terms of mean time between failures (MTBF), with reliability = exp(−t/MTBF).[5]
- Availability means the probability that a system is operational at a given time, i.e. the amount of time a device is actually operating as the percentage of total time it should be operating. High-availability systems may report availability in terms of minutes or hours of downtime per year. Availability features allow the system to stay operational even when faults do occur. A highly available system would disable the malfunctioning portion and continue operating at a reduced capacity. In contrast, a less capable system might crash and become totally nonoperational. Availability is typically given as a percentage of the time a system is expected to be available, e.g., 99.999 percent ("five nines").
- Serviceability or maintainability is the simplicity and speed with which a system can be repaired or maintained; if the time to repair a failed system increases, then availability will decrease. Serviceability includes various methods of easily diagnosing the system when problems arise. Early detection of faults can decrease or avoid system downtime. For example, some enterprise systems can automatically call a service center (without human intervention) when the system experiences a system fault. The traditional focus has been on making the correct repairs with as little disruption to normal operations as possible.
Note the distinction between reliability and availability: reliability measures the ability of a system to function correctly, including avoiding data corruption, whereas availability measures how often the system is available for use, even though it may not be functioning correctly. For example, a server may run forever and so have ideal availability, but may be unreliable, with frequent data corruption.[6]
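Under the common constant-failure-rate assumption, both metrics reduce to simple arithmetic. A minimal Python sketch of the two formulas above (the MTBF figure and run time used here are illustrative, not from the article):

```python
import math

def reliability(t, mtbf):
    """Probability of correct operation up to time t under a constant
    failure rate (exponential model): R(t) = exp(-t / MTBF)."""
    return math.exp(-t / mtbf)

def downtime_minutes_per_year(availability):
    """Expected downtime in minutes per year for a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return (1.0 - availability) * minutes_per_year

# A system with an MTBF of 100,000 hours, run for one year (8,766 hours):
print(round(reliability(8766, 100_000), 3))            # → 0.916

# "Five nines" availability allows only minutes of downtime per year:
print(round(downtime_minutes_per_year(0.99999), 2))    # → 5.26
```

This also illustrates the distinction drawn above: availability constrains total downtime, while reliability is about producing correct output for the whole interval.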
Failure types
Physical faults can be temporary or permanent:
- Permanent faults lead to a continuing error and are typically due to some physical failure such as metal electromigration or dielectric breakdown.
- Temporary faults include transient and intermittent faults.
- Transient (a.k.a. soft) faults lead to independent one-time errors and are not due to permanent hardware faults: examples include alpha particles flipping a memory bit, electromagnetic noise, or power-supply fluctuations.
- Intermittent faults occur due to a weak system component, e.g. circuit parameters degrading, leading to errors that are likely to recur.[5]
Failure responses
Transient and intermittent faults can typically be handled by detection and correction, e.g., by ECC codes or instruction replay (see below). Permanent faults lead to uncorrectable errors, which can be handled by replacement with duplicate hardware, e.g., processor sparing, or by passing the uncorrectable error to higher-level recovery mechanisms. A successfully corrected intermittent fault can also be reported to the operating system (OS) to provide information for predictive failure analysis.
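As a toy illustration of how ECC-style detection and correction works, the following sketches a Hamming(7,4) code, which corrects any single-bit error in a 7-bit codeword. Real memory subsystems use wider codes (e.g., SECDED over 64-bit words), so this is a simplified model, not any vendor's actual scheme:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7,
    parity bits at positions 1, 2 and 4)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    code = [0] * 8                              # index 0 unused
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code[1:]

def hamming74_correct(bits):
    """Recompute the parity checks; a nonzero syndrome is the position of
    the flipped bit. Returns (corrected_nibble, error_position)."""
    code = [0] + list(bits)
    syndrome = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= code[i]
        if parity:
            syndrome += p
    if syndrome:
        code[syndrome] ^= 1                     # flip the erroneous bit back
    nibble = code[3] | (code[5] << 1) | (code[6] << 2) | (code[7] << 3)
    return nibble, syndrome

word = hamming74_encode(0b1011)
word[2] ^= 1                                    # inject a transient bit flip
print(hamming74_correct(word))                  # → (11, 3): data recovered
```

A corrected error with a nonzero syndrome is exactly the kind of event that can be logged for predictive failure analysis: a bit position that keeps showing up suggests an intermittent, not transient, fault.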
Hardware features
Example hardware features for improving RAS include the following, listed by subsystem:
- Processor:
- Processor instruction error detection (e.g. residue checking of results[7]) with instruction retry e.g. alternative processor recovery in IBM mainframes,[8] or "Instruction replay technology" in Itanium systems.[9]
- Processors running in lock-step to perform master-checker or voting schemes.
- Machine Check Architecture and ACPI Platform Error Interface to report errors to the OS.
- Memory:
- Parity or ECC (including single device correction) protection of memory components (cache and main memory); bad cache line disabling; memory scrubbing; memory sparing, memory mirroring;[10] bad page offlining; redundant bit steering; redundant array of independent memory (RAIM).
- I/O:
- Cyclic redundancy check checksums for data transmission/retry and data storage, e.g. PCI Express (PCIe) Advanced Error Reporting (AER),[11] redundant I/O paths.
- Storage:
- RAID configurations for hard disk drive and solid-state drive storage.
- Journaling file systems for file repair after crashes.
- Checksums on both data and metadata, and background scrubbing.
- Self-Monitoring, Analysis, and Reporting Technology for hard disk drive and solid-state drive.
- Power/cooling:
- Duplicating components to avoid single points of failure, e.g., power-supplies.
- Over-designing the system for the specified operating ranges of clock frequency, temperature, voltage, vibration.
- Temperature sensors to throttle operating frequency when temperature goes out of specification.
- Surge protector, uninterruptible power supply, auxiliary power.
- System:
- Hot swapping of components: CPUs, RAMs, hard disk drives and solid-state drives.
- Predictive failure analysis to predict which intermittent correctable errors will lead eventually to hard non-correctable errors.
- Partitioning/domaining of computer components to allow one large system to act as several smaller systems.
- Virtual machines to decrease the severity of operating system software faults.
- Redundant I/O domains[12] or I/O partitions[13] for providing virtual I/O to guest virtual machines.
- Computer clustering capability with failover capability, for complete redundancy of hardware and software.
- Dynamic software updating to avoid the need to reboot the system for a kernel software update, for example Ksplice under Linux.
- Independent management processor for serviceability: remote monitoring, alerting and control.
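The detect-and-retry pattern listed under I/O can be sketched with a CRC-32 trailer on each frame. This is a generic illustration using Python's zlib.crc32, not the actual PCIe AER mechanism; the frame format and single-bit corruption are invented for the example:

```python
import zlib

def send(payload, corrupt=False):
    """Simulated link: append a CRC-32 trailer, optionally flip one bit
    in flight to model a transient fault."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    if corrupt:
        frame = bytearray(frame)
        frame[0] ^= 0x01
        frame = bytes(frame)
    return frame

def receive(frame):
    """Verify the CRC-32 trailer; return the payload, or None so the
    caller can request a retransmission."""
    payload, trailer = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != trailer:
        return None
    return payload

data = b"RAS demo payload"
assert receive(send(data, corrupt=True)) is None    # error detected -> retry
assert receive(send(data)) == data                  # clean retransmission ok
```

CRC-32 detects all single-bit errors, so retrying a corrupted transfer converts a transient fault into a brief latency hit rather than silent data corruption.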
Fault-tolerant designs extended the idea, making RAS the defining feature of their computers for applications like stock exchanges or air traffic control, where system crashes would be catastrophic. Fault-tolerant computers (e.g., see Tandem Computers and Stratus Technologies), which tend to have duplicate components running in lock-step for reliability, have become less popular due to their high cost. High-availability systems, using distributed computing techniques like computer clusters, are often used as cheaper alternatives.[citation needed]
See also
- Machine Check Architecture (MCA)
- Machine-check exception (MCE)
- High availability (HA)
- Redundancy (engineering)
- Integrated logistics support
- RAMS (reliability, availability, maintainability and safety)
References
- ^ Siewiorek, Daniel P.; Swarz, Robert S. (1998). Reliable computer systems: design and evaluation. Taylor & Francis. p. 508. ISBN 9781568810928. "The acronym RAS (reliability, accessibility and serviceability) came into widespread acceptance at IBM as the replacement for the subset notion of recovery management."
- ^ Data Processing Division, International Business Machines Corp. (1970). "Data processor, Issues 13–17". "The dependability [...] experienced by other System/370 users is the result of a strategy based on RAS (Reliability-Availability-Serviceability)"
- ^ Siewert, Sam (March 2005). "Big iron lessons, Part 2: Reliability and availability: What's the difference?" (PDF).
- ^ For example: Laros III, James H.; et al. (4 September 2012). Energy-Efficient High Performance Computing: Measurement and Tuning. SpringerBriefs in Computer Science. Springer Science & Business Media. p. 8. ISBN 9781447144922. Retrieved 2014-07-08. "Historically, Reliability Availability and Serviceability (RAS) systems were commonly provided by vendors on mainframe class systems. [...] The RAS system shall be a systematic union of software and hardware for the purpose of managing and monitoring all hardware and software components of the system to their individual potential."
- ^ a b c E.J. McCluskey & S. Mitra (2004). "Fault Tolerance". In A.B. Tucker (ed.), Computer Science Handbook, 2nd ed. CRC Press.
- ^ Spencer, Richard H.; Floyd, Raymond E. (11 July 2011). Perspectives on Engineering. Bloomington, Indiana: AuthorHouse. p. 33. ISBN 9781463410919. Retrieved 2014-05-05. "[...] a system server may have excellent availability (runs forever), but continues to have frequent data corruption (not very reliable)."
- ^ Daniel Lipetz & Eric Schwarz (2011). "Self Checking in Current Floating-Point Units" (PDF). Proceedings of the 20th IEEE Symposium on Computer Arithmetic. Archived from the original (PDF) on 2012-01-24. Retrieved 2012-05-06.
- ^ L. Spainhower & T. A. Gregg (September 1999). "IBM S/390 parallel enterprise server G5 fault tolerance: a historical perspective" (PDF). IBM Journal of Research and Development. 43 (5). CiteSeerX 10.1.1.85.5994.
- ^ "Intel Instruction Replay Technology Detects and Corrects Errors". Retrieved 2012-12-07.
- ^ HP. "Memory technology evolution: an overview of system memory technologies" (PDF). Technology brief, 9th edition, p. 8. Archived from the original (PDF) on 2011-07-24.
- ^ Intel Corp. (2003). "PCI Express Provides Enterprise Reliability, Availability, and Serviceability".
- ^ "Best Practices for Data Reliability with Oracle VM Server for SPARC" (PDF). Retrieved 2013-07-02.
- ^ "IBM Power Redundancy considerations". Retrieved 2013-07-02.
External links
- Itanium Reliability, Availability and Serviceability (RAS) Features. Overview of RAS features in general and specific features of the Itanium processor.
- POWER7 System RAS: Key Aspects of Power Systems Reliability, Availability, and Serviceability. Daniel Henderson, Jim Mitchell, and George Ahrens. February 10, 2012. Overview of RAS features in Power processors.
- Intel Corp. Reliability, Availability, and Serviceability for the Always-on Enterprise (appendix B) and Intel Xeon Processor E7 Family: supporting next generation RAS servers. White papers. Overview of RAS features in Xeon processors.
- zEnterprise 196 System Overview. IBM Corp. (Chapter 10). Overview of RAS features of the IBM z196 processor and zEnterprise 196 server.
- Maximizing Application Reliability and Availability with the SPARC M5-32 Server. RAS features of Oracle's SPARC M5-32 server.