Input–output memory management unit

Comparison of the I/O memory management unit (IOMMU) to the memory management unit (MMU).

In computing, an input–output memory management unit (IOMMU) is a memory management unit (MMU) that connects a direct-memory-access–capable (DMA-capable) I/O bus to the main memory. Like a traditional MMU, which translates CPU-visible virtual addresses to physical addresses, the IOMMU maps device-visible virtual addresses (also called device addresses or I/O addresses in this context) to physical addresses. Some units also provide memory protection from faulty or malicious devices.
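
For illustration, the following sketch models a single-level I/O page table being walked to translate a device-visible I/O virtual address (IOVA) into a physical address. The entry layout, page size and table size are hypothetical and do not correspond to any particular vendor's format; real IOMMUs typically use multi-level tables.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define PAGE_SHIFT   12u                 /* hypothetical 4 KiB I/O pages */
    #define PAGE_SIZE    (1u << PAGE_SHIFT)
    #define NUM_ENTRIES  256u                /* toy single-level table */

    /* Hypothetical I/O page-table entry: physical frame number plus permissions. */
    struct io_pte {
        uint64_t pfn;      /* physical frame number */
        bool     present;  /* mapping exists */
        bool     writable; /* device may write */
    };

    static struct io_pte io_page_table[NUM_ENTRIES];

    /* Translate a device-visible address (IOVA) to a physical address,
     * refusing the access if no mapping exists or permissions are violated.
     * The device can only reach memory the OS has explicitly mapped for it. */
    static bool iommu_translate(uint64_t iova, bool is_write, uint64_t *phys_out)
    {
        uint64_t index  = (iova >> PAGE_SHIFT) % NUM_ENTRIES;
        uint64_t offset = iova & (PAGE_SIZE - 1);
        struct io_pte *pte = &io_page_table[index];

        if (!pte->present || (is_write && !pte->writable))
            return false;                    /* would raise an IOMMU fault */

        *phys_out = (pte->pfn << PAGE_SHIFT) | offset;
        return true;
    }

    int main(void)
    {
        /* The OS maps IOVA page 1 to physical frame 0x1234, read-only. */
        io_page_table[1] = (struct io_pte){ .pfn = 0x1234, .present = true, .writable = false };

        uint64_t phys;
        if (iommu_translate(0x1008, false, &phys))   /* device read at IOVA 0x1008 */
            printf("IOVA 0x1008 -> physical 0x%llx\n", (unsigned long long)phys);
        if (!iommu_translate(0x1008, true, &phys))   /* device write is blocked */
            printf("write to read-only mapping rejected\n");
        return 0;
    }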

An example IOMMU is the graphics address remapping table (GART) used by AGP and PCI Express graphics cards on Intel Architecture and AMD computers.

On the x86 architecture, before the functionality of the northbridge and southbridge was split between the CPU and the Platform Controller Hub (PCH), I/O virtualization was performed not by the CPU but by the chipset.[1][2]

Advantages

The advantages of having an IOMMU, compared to direct physical addressing of the memory, include[citation needed]:

  • Large regions of memory can be allocated without the need to be contiguous in physical memory – the IOMMU maps contiguous virtual addresses to the underlying fragmented physical addresses. Thus, the use of vectored I/O (scatter-gather lists) can sometimes be avoided.
  • Devices that do not support memory addresses long enough to address the entire physical memory can still address the entire memory through the IOMMU, avoiding overheads associated with copying buffers to and from the peripheral's addressable memory space.
    • For example, x86 computers can address more than 4 gigabytes of memory using the Physical Address Extension (PAE) feature, but an ordinary 32-bit PCI device cannot address memory above the 4 GiB boundary and thus cannot access it directly. Without an IOMMU, the operating system would have to implement time-consuming bounce buffers (also known as double buffers[3]); a brief sketch of the corresponding mapping call appears after this list.
  • Memory is protected from malicious devices attempting DMA attacks and from faulty devices attempting errant memory transfers, because a device cannot read or write memory that has not been explicitly allocated (mapped) for it. The memory protection is based on the fact that the OS running on the CPU (see figure) exclusively controls both the MMU and the IOMMU; the devices are physically unable to circumvent or corrupt the configured memory management tables.
    • In virtualization, guest operating systems can use hardware that is not specifically made for virtualization. Higher performance hardware such as graphics cards use DMA to access memory directly; in a virtual environment all memory addresses are re-mapped by the virtual machine software, which causes DMA devices to fail. The IOMMU handles this re-mapping, allowing the native device drivers to be used in a guest operating system.
  • In some architectures, the IOMMU also performs hardware interrupt re-mapping, in a manner similar to standard memory address re-mapping.
  • Peripheral memory paging can be supported by an IOMMU. A peripheral using the PCI-SIG PCIe Address Translation Services (ATS) Page Request Interface (PRI) extension can detect and signal the need for memory manager services.
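
As an illustration of how an operating system can exploit these properties, the following fragment is a minimal, Linux-flavoured sketch of the kernel DMA-mapping calls a driver might make; the example_* functions and the surrounding driver are hypothetical, and the fragment assumes a kernel build environment. On an IOMMU-backed bus the mapping call installs an I/O page-table entry for the buffer, so even a 32-bit device can reach memory above 4 GiB without a bounce buffer; without an IOMMU, the same call may fall back to bounce buffering.

    #include <linux/dma-mapping.h>
    #include <linux/device.h>
    #include <linux/errno.h>

    static int example_prepare_dma(struct device *dev, void *buf, size_t len,
                                   dma_addr_t *dma_handle)
    {
        dma_addr_t addr;

        /* Ask the DMA layer for a device-visible address for 'buf'.  On an
         * IOMMU-backed bus this programs an I/O page-table mapping rather
         * than copying the data into a bounce buffer. */
        addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, addr))
            return -ENOMEM;

        *dma_handle = addr;   /* this address is programmed into the device */
        return 0;
    }

    static void example_finish_dma(struct device *dev, dma_addr_t addr, size_t len)
    {
        /* Tear down the mapping once the transfer is complete. */
        dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
    }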

For system architectures in which port I/O is a distinct address space from the memory address space, an IOMMU is not used when the CPU communicates with devices via I/O ports. In system architectures in which port I/O and memory are mapped into a suitable address space, an IOMMU can translate port I/O accesses.

Disadvantages

The disadvantages of having an IOMMU, compared to direct physical addressing of the memory, include:[4]

  • Some degradation of performance from translation and management overhead (e.g., page table walks).
  • Consumption of physical memory for the added I/O page (translation) tables. This can be mitigated if the tables can be shared with the processor.

Virtualization

When an operating system is running inside a virtual machine, including systems that use paravirtualization, such as Xen, it does not usually know the host-physical addresses of memory that it accesses. This makes providing direct access to the computer hardware difficult, because if the guest OS tried to instruct the hardware to perform a direct memory access (DMA) using guest-physical addresses, it would likely corrupt the memory, as the hardware does not know about the mapping between the guest-physical and host-physical addresses for the given virtual machine. The corruption is avoided because the hypervisor or host OS intervenes in the I/O operation to apply the translations, incurring a delay in the I/O operation.

An IOMMU can solve this problem by re-mapping the addresses accessed by the hardware according to the same (or a compatible) translation table that is used to map guest-physical addresses to host-physical addresses.[5]
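
As a concrete, simplified illustration of this remapping, the sketch below uses the Linux VFIO interface from userspace. It assumes a VFIO container has already been opened, the device's IOMMU group attached, and a type-1 IOMMU selected; the function and variable names are illustrative. A hypervisor can register each guest-RAM region so that the I/O virtual address equals the guest-physical address, after which a passed-through device's DMA using guest-physical addresses reaches the correct host memory.

    #include <linux/vfio.h>
    #include <sys/ioctl.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Map one region of guest RAM (backed by 'host_vaddr' in the hypervisor's
     * address space) at IOVA == guest_phys in the device's I/O address space. */
    static int map_guest_ram(int container_fd, void *host_vaddr,
                             uint64_t guest_phys, uint64_t size)
    {
        struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (uintptr_t)host_vaddr,  /* host virtual address of the RAM */
            .iova  = guest_phys,             /* address the device will use */
            .size  = size,
        };

        /* The kernel pins the pages and installs IOMMU translations so that
         * device accesses to [guest_phys, guest_phys + size) reach this RAM. */
        if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map) < 0) {
            perror("VFIO_IOMMU_MAP_DMA");
            return -1;
        }
        return 0;
    }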

Published specifications

  • AMD has published a specification for IOMMU technology.[6][7]
  • Intel has published a specification for IOMMU technology as Virtualization Technology for Directed I/O, abbreviated VT-d.[8]
  • Information about the Sun IOMMU has been published in the Device Virtual Memory Access (DVMA) section of the Solaris Developer Connection.[9]
  • The IBM Translation Control Entry (TCE) has been described in a document entitled Logical Partition Security in the IBM eServer pSeries 690.[10]
  • The PCI-SIG has relevant work under the terms I/O Virtualization (IOV)[11] and Address Translation Services (ATS).
  • ARM defines its version of the IOMMU as the System Memory Management Unit (SMMU)[12] to complement its virtualization architecture.[13]

References

  1. ^ "Intel platform hardware support for I/O virtualization". intel.com. 2006-08-10. Archived from the original on 2007-01-20. Retrieved 2014-06-07.
  2. ^ "Desktop Boards: Compatibility with Intel Virtualization Technology (Intel VT)". intel.com. 2014-02-14. Retrieved 2014-06-07.
  3. ^ "Physical Address Extension — PAE Memory and Windows". Microsoft Windows Hardware Development Central. 2005. Retrieved 2008-04-07.
  4. ^ Muli Ben-Yehuda; Jimi Xenidis; Michal Ostrowski (2007-06-27). "Price of Safety: Evaluating IOMMU Performance" (PDF). Proceedings of the Linux Symposium 2007. Ottawa, Ontario, Canada: IBM Research. Retrieved 2013-02-28.
  5. ^ "Xen FAQ: In DomU, how can I use 3D graphics". Retrieved 2006-12-12.
  6. ^ "AMD I/O Virtualization Technology (IOMMU) Specification Revision 2.0" (PDF). amd.com. 2011-03-24. Retrieved 2014-01-11.
  7. ^ "AMD I/O Virtualization Technology (IOMMU) Specification Revision 2.62" (PDF). amd.com. 2015-03-02. Retrieved 2016-01-05.
  8. ^ "Intel Virtualization Technology for Directed I/O (VT-d) Architecture Specification" (PDF). Retrieved 2016-02-17.
  9. ^ "DVMA Resources and IOMMU Translations". Retrieved 2007-04-30.
  10. ^ "Logical Partition Security in the IBM eServer pSeries 690". Retrieved 2007-04-30.
  11. ^ "I/O Virtualization specifications". Retrieved 2007-05-01.
  12. ^ "ARM SMMU". Retrieved 2013-05-13.
  13. ^ "ARM Virtualization Extensions". Retrieved 2013-05-13.