Memory safety

From Wikipedia, the free encyclopedia

Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers.[1] For example, Java is said to be memory-safe because its runtime error detection checks array bounds and pointer dereferences.[1] In contrast, C and C++ allow arbitrary pointer arithmetic, implement pointers as direct memory addresses, and make no provision for bounds checking;[2] they are therefore potentially memory-unsafe.[3]
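The distinction can be sketched in C. The following is a minimal, hypothetical accessor (`checked_read` is not a standard function) that performs the kind of bounds check a memory-safe runtime applies to every array access, refusing out-of-range indices instead of reading arbitrary memory:

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical bounds-checked read: reports failure instead of
   reading out of bounds, mimicking the check a memory-safe
   runtime performs automatically on each array access. */
bool checked_read(const int *buf, size_t len, size_t idx, int *out) {
    if (buf == NULL || idx >= len)
        return false;      /* out of bounds: refuse rather than read */
    *out = buf[idx];
    return true;
}
```

A plain C expression such as `buf[idx]` performs no such check: if `idx` is out of range, the read silently returns whatever bytes happen to sit at that address.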

In November 2022, the NSA cautioned that even "memory safe" languages are not completely memory-safe,[4] since all programs must ultimately call unmanaged code such as the operating system and hardware interfaces. As a result, memory safety has never been implemented end to end: the runtimes of Java, C# and other managed languages are themselves written in C and C++, and run on operating systems written in C and C++.

History

Memory errors were first considered in the context of resource management and time-sharing systems, in an effort to avoid problems such as fork bombs.[5] Developments were mostly theoretical until the Morris worm, which exploited a buffer overflow in fingerd.[6] The field of computer security developed quickly thereafter, escalating with multitudes of new attacks such as the return-to-libc attack and defense techniques such as the non-executable stack[7] and address space layout randomization. Randomization prevents most buffer overflow attacks and requires the attacker to use heap spraying or other application-dependent methods to obtain addresses, although its adoption has been slow.[6] However, deployments of the technology are typically limited to randomizing libraries and the location of the stack.

Impact

In 2019, a Microsoft security engineer reported that 70 percent of all security vulnerabilities were caused by memory safety issues.[8] In 2020, a team at Google similarly reported that 70 percent of all "severe security bugs" in Google Chromium were caused by memory safety problems. Many other high-profile vulnerabilities and exploits in critical software have ultimately stemmed from a lack of memory safety, including Heartbleed[9] and a long-standing privilege escalation bug in sudo.[10] The pervasiveness and severity of vulnerabilities and exploits arising from memory safety issues have led several security researchers to describe identifying memory safety issues as "shooting fish in a barrel".[11]

Approaches

Most modern, high-level programming languages are memory-safe by default. Automatic memory management in the form of garbage collection is the most common technique for preventing memory safety problems, since it prevents common memory safety errors like use-after-free for all data allocated within the language runtime.[12] When combined with automatic bounds checking on all array accesses and no support for raw pointer arithmetic, garbage-collected languages provide strong memory safety guarantees (though the guarantees may be weaker for low-level operations explicitly marked unsafe, such as use of a foreign function interface). However, the performance overhead of garbage collection makes these languages unsuitable for certain performance-critical applications.[1]

For languages that use manual memory management, memory safety cannot be guaranteed by the runtime. Instead, memory safety properties must either be guaranteed by the compiler via static program analysis and automated theorem proving or carefully managed by the programmer at runtime.[12] For example, the Rust programming language implements a borrow checker to ensure memory safety,[13] while C and C++ provide no memory safety guarantees. The substantial amount of software written in C and C++ has motivated the development of external static analysis tools like Coverity, which offers static memory analysis for C.[14]

DieHard,[15] its redesign DieHarder,[16] and the Allinea Distributed Debugging Tool are special heap allocators that allocate each object in its own random virtual memory page, allowing invalid reads and writes to be stopped and debugged at the exact instruction that causes them. Protection relies upon hardware memory protection, so overhead is typically not substantial, although it can grow significantly if the program makes heavy use of allocation.[17] Randomization provides only probabilistic protection against memory errors, but can often be easily implemented in existing software by relinking the binary.

The memcheck tool of Valgrind uses an instruction set simulator and runs the compiled program in a memory-checking virtual machine, providing guaranteed detection of a subset of runtime memory errors. However, it typically slows the program down by a factor of 40,[18] and furthermore must be explicitly informed of custom memory allocators.[19][20]

Where source code is available, libraries exist that collect and track legitimate values for pointers ("metadata") and check each pointer access against that metadata for validity, such as the Boehm garbage collector.[21] In general, memory safety can be assured using tracing garbage collection and the insertion of runtime checks on every memory access; this approach has overhead, but less than that of Valgrind. All garbage-collected languages take this approach.[1] For C and C++, many tools exist that perform a compile-time transformation of the code to do memory safety checks at runtime, such as CheckPointer[22] and AddressSanitizer, which imposes an average slowdown factor of 2.[23]

Types of memory errors

Many different types of memory errors can occur:[24][25]

  • Access errors: invalid read/write of a pointer
    • Buffer overflow – out-of-bound writes can corrupt the content of adjacent objects, or internal data (like bookkeeping information for the heap) or return addresses.
    • Buffer over-read – out-of-bound reads can reveal sensitive data or help attackers bypass address space layout randomization.
    • Race condition – concurrent reads/writes to shared memory
    • Invalid page fault – accessing a pointer outside the virtual memory space. A null pointer dereference will often cause an exception or program termination in most environments, but can cause corruption in operating system kernels or systems without memory protection, or when use of the null pointer involves a large or negative offset.
    • Use after free – dereferencing a dangling pointer storing the address of an object that has been deleted.
  • Uninitialized variables – a variable that has not been assigned a value is used. It may contain an undesired or, in some languages, a corrupt value.
    • Null pointer dereference – dereferencing a pointer whose value is null, i.e. one that does not point to any valid object
    • Wild pointers arise when a pointer is used prior to initialization to some known state. They show the same erratic behaviour as dangling pointers, though they are less likely to stay undetected.
  • Memory leak – when memory usage is not tracked or is tracked incorrectly
    • Stack exhaustion – occurs when a program runs out of stack space, typically because of too deep recursion. A guard page typically halts the program, preventing memory corruption, but functions with large stack frames may bypass the page.
    • Heap exhaustion – the program tries to allocate more memory than the amount available. In some languages, this condition must be checked for manually after each allocation.
    • Double free – repeated calls to free may prematurely free a new object at the same address. If the exact address has not been reused, other corruption may occur, especially in allocators that use free lists.
    • Invalid free – passing an invalid address to free can corrupt the heap.
    • Mismatched free – when multiple allocators are in use, attempting to free memory with a deallocation function of a different allocator[26]
    • Unwanted aliasing – when the same memory location is allocated and modified twice for unrelated purposes.
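Several of these error classes can be seen around a single heap allocation in C. The sketch below (the function name is hypothetical) executes only the correct lifecycle; each error pattern from the list above appears as a commented-out line at the point where it would occur:

```c
#include <stdlib.h>

/* Illustrative only: a correct allocate/use/free lifecycle, with the
   error patterns from the list above shown as comments. */
int lifecycle_demo(void) {
    int *p = malloc(4 * sizeof *p);  /* heap exhaustion: malloc may fail... */
    if (p == NULL)
        return -1;                   /* ...so C code must check manually */
    p[0] = 42;                       /* in-bounds write: valid indices are 0..3 */
    /* p[4] = 7;      buffer overflow: writes past the end of the allocation */
    /* int x = p[1];  uninitialized read: p[1] was never assigned a value */
    int result = p[0];
    free(p);
    /* result = *p;   use after free: p is now a dangling pointer */
    /* free(p);       double free: may corrupt the allocator's free lists */
    p = NULL;         /* nulling the pointer turns later misuse into a
                         more detectable null pointer dereference */
    return result;
}
```

Because C performs none of these checks itself, each commented line would compile without complaint; the resulting behavior is undefined and may only surface far from the faulty instruction.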

References

  1. ^ a b c d Dhurjati, Dinakar; Kowshik, Sumant; Adve, Vikram; Lattner, Chris (1 January 2003). "Memory Safety Without Runtime Checks or Garbage Collection" (PDF). Proceedings of the 2003 ACM SIGPLAN Conference on Language, Compiler, and Tool for Embedded Systems. ACM: 69–80. doi:10.1145/780732.780743. ISBN 1581136471. S2CID 1459540. Retrieved 13 March 2017.
  2. ^ Koenig, Andrew. "How C Makes It Hard To Check Array Bounds". Dr. Dobb's. Retrieved 13 March 2017.
  3. ^ Akritidis, Periklis (June 2011). "Practical memory safety for C" (PDF). Technical Report - University of Cambridge. Computer Laboratory. University of Cambridge, Computer Laboratory. ISSN 1476-2986. UCAM-CL-TR-798. Retrieved 13 March 2017.
  4. ^ NSA, USA. "NSA Releases Guidance on How to Protect Against Software Memory Safety Issues". U.S. National Security Agency. Retrieved 17 January 2023.
  5. ^ Anderson, James P. "Computer Security Planning Study" (PDF). 2. Electronic Systems Center. ESD-TR-73-51.
  6. ^ a b van der Veen, Victor; dutt-Sharma, Nitish; Cavallaro, Lorenzo; Bos, Herbert (2012). "Memory Errors: The Past, the Present, and the Future" (PDF). Lecture Notes in Computer Science. 7462 (RAID 2012): 86–106. doi:10.1007/978-3-642-33338-5_5. ISBN 978-3-642-33337-8. Retrieved 13 March 2017.
  7. ^ Wojtczuk, Rafal. "Defeating Solar Designer's Non-executable Stack Patch". insecure.org. Retrieved 13 March 2017.
  8. ^ "Microsoft: 70 percent of all security bugs are memory safety issues". ZDNET. Retrieved 21 September 2022.
  9. ^ "CVE-2014-0160". Common Vulnerabilities and Exposures. Mitre. Archived from the original on 24 January 2018. Retrieved 8 February 2018.
  10. ^ Goodin, Dan (4 February 2020). "Serious flaw that lurked in sudo for 9 years hands over root privileges". Ars Technica.
  11. ^ "Fish in a Barrel". fishinabarrel.github.io. Retrieved 21 September 2022.
  12. ^ a b Crichton, Will. "CS 242: Memory safety". stanford-cs242.github.io. Retrieved 22 September 2022.
  13. ^ "References". The Rustonomicon. Rust.org. Retrieved 13 March 2017.
  14. ^ Bessey, Al; Engler, Dawson; Block, Ken; Chelf, Ben; Chou, Andy; Fulton, Bryan; Hallem, Seth; Henri-Gros, Charles; Kamsky, Asya; McPeak, Scott (1 February 2010). "A few billion lines of code later". Communications of the ACM. 53 (2): 66–75. doi:10.1145/1646353.1646374. Retrieved 14 March 2017.
  15. ^ Berger, Emery D.; Zorn, Benjamin G. (1 January 2006). "DieHard: Probabilistic Memory Safety for Unsafe Languages" (PDF). Proceedings of the 27th ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM: 158–168. doi:10.1145/1133981.1134000. S2CID 8984358. Retrieved 14 March 2017.
  16. ^ Novark, Gene; Berger, Emery D. (1 January 2010). "DieHarder: Securing the Heap" (PDF). Proceedings of the 17th ACM Conference on Computer and Communications Security. ACM: 573–584. doi:10.1145/1866307.1866371. S2CID 7880497. Retrieved 14 March 2017.
  17. ^ "Memory Debugging in Allinea DDT". Archived from the original on 2015-02-03.
  18. ^ Gyllenhaal, John. "Using Valgrind's Memcheck Tool to Find Memory Errors and Leaks". computing.llnl.gov. Archived from the original on 7 November 2018. Retrieved 13 March 2017.
  19. ^ "Memcheck: a memory error detector". Valgrind User Manual. valgrind.org. Retrieved 13 March 2017.
  20. ^ Kreinin, Yossi. "Why custom allocators/pools are hard". Proper Fixation. Retrieved 13 March 2017.
  21. ^ "Using the Garbage Collector as Leak Detector". www.hboehm.info. Retrieved 14 March 2017.
  22. ^ "Semantic Designs: CheckPointer compared to other safety checking tools". www.semanticdesigns.com. Semantic Designs, Inc.
  23. ^ "AddressSanitizerPerformanceNumbers". GitHub.
  24. ^ Gv, Naveen. "How to Avoid, Find (and Fix) Memory Errors in your C/C++ Code". Cprogramming.com. Retrieved 13 March 2017.
  25. ^ "CWE-633: Weaknesses that Affect Memory". Community Weakness Enumeration. MITRE. Retrieved 13 March 2017.
  26. ^ "CWE-762: Mismatched Memory Management Routines". Community Weakness Enumeration. MITRE. Retrieved 13 March 2017.