Direct Rendering Manager

From Wikipedia, the free encyclopedia

Original author(s): kernel.org & freedesktop.org
Developer(s): kernel.org & freedesktop.org
Written in: C
Website: dri.freedesktop.org/wiki/DRM

The Direct Rendering Manager (DRM) is a subsystem of the Linux kernel responsible for interfacing with the GPUs of modern video cards. DRM exposes an API that user space programs can use to send commands and data to the GPU and to perform operations such as configuring the mode setting of the display. DRM was first developed as the kernel space component of the X Server's Direct Rendering Infrastructure,[1] but it has since been used by other graphics stack alternatives such as Wayland.

User space programs can use the DRM API to command the GPU to perform hardware-accelerated 3D rendering, video decoding, and GPGPU computing.

Overview[edit]

The Linux kernel already had an API called fbdev, used to manage the framebuffer of a graphics adapter,[2] but it couldn't handle the needs of modern 3D-accelerated GPU-based video cards. These cards usually require setting up and managing a command queue in the card's memory (Video RAM) to dispatch commands to the GPU, and they also need proper management of the buffers and free space of the Video RAM itself.[3] Initially, user space programs (such as the X Server) managed these resources directly, but they usually acted as if they were the only ones with access to the card's resources. When two or more programs tried to control the same video card at the same time, each setting its resources in its own way, the outcome was usually catastrophic.[3]

Access to the video card without DRM and with DRM: DRM allows multiple programs concurrent access to the 3D video card, avoiding collisions.

When the Direct Rendering Manager was first created, its purpose was to let multiple programs using the video card's resources cooperate through it. DRM gets exclusive access to the video card, and is responsible for initializing and maintaining the command queue, the VRAM and any other hardware resource. Programs that want to use the GPU send their requests to DRM, which acts as an arbitrator and takes care to avoid possible conflicts.

Since then, the scope of DRM has expanded over the years to cover more functionality previously handled by user space programs, such as framebuffer management and mode setting, memory-sharing objects and memory synchronization.[4][5] Some of these expansions carry their own specific names, such as Graphics Execution Manager (GEM) or kernel mode-setting (KMS), and that terminology prevails when the functionality they provide is specifically alluded to; but they are really parts of the whole kernel DRM subsystem.

Software architecture[edit]

A process using the Direct Rendering Manager of the Linux Kernel to access a 3D accelerated graphics card

The Direct Rendering Manager resides in kernel space, so user space programs must use kernel system calls to request its services. However, DRM doesn't define its own customized system calls. Instead, it follows the Unix principle of "everything is a file" to expose the GPUs through the filesystem namespace, using device files under the /dev hierarchy. Each GPU detected by DRM is referred to as a DRM device, and a device file /dev/dri/cardX (where X is a sequential number) is created to interface with it.[6][7] User space programs that want to talk to the GPU must open the file and use ioctl calls to communicate with DRM. Different ioctls correspond to different functions of the DRM API.

A library called libdrm was created to facilitate the interface of user space programs with the DRM subsystem. This library is merely a wrapper that provides a function written in C for every ioctl of the DRM API, as well as constants, structures and other helper elements.[8] The use of libdrm not only avoids exposing the kernel interface directly to user space, but also presents the usual advantages of reusing and sharing code between programs.

Direct Rendering Manager architecture details: DRM core and DRM driver (including GEM and KMS) interfaced by libdrm

DRM consists of two parts: a generic "DRM core" and a specific one ("DRM driver") for each type of supported hardware.[9] DRM core provides the basic framework where different DRM drivers can register, and also provides to user space a minimal set of ioctls with common, hardware-independent functionality. A DRM driver, on the other hand, implements the hardware-dependent part of the API, specific to the type of GPU it supports; it should provide the implementation of the remaining ioctls not covered by DRM core, but it may also extend the API, offering additional ioctls with extra functionality only available on such hardware.[6] When a specific DRM driver provides an enhanced API, user space libdrm is also extended by a per-driver library, libdrm-driver, that user space can use to interface with the additional ioctls.

API[edit]

The DRM core exports several interfaces to user-space applications, generally intended to be used through corresponding libdrm wrapper functions. In addition, drivers export device-specific interfaces for use by user-space drivers & device-aware applications through ioctls and sysfs files. External interfaces include: memory mapping, context management, DMA operations, AGP management, vblank control, fence management, memory management, and output management.

Translation Table Maps[edit]

Translation Table Maps (TTM) is a memory manager for GPUs developed by Tungsten Graphics, designed to manage both dedicated video RAM and system memory mapped through the GART.

Graphics Execution Manager[edit]

Due to the increasing size of video memory and the growing complexity of graphics APIs such as OpenGL, the strategy of reinitializing the graphics card state at each context switch was too expensive, performance-wise. Also, modern Linux desktops needed an efficient way to share off-screen buffers with the compositing manager. This led to the development of new methods to manage graphics buffers inside the kernel. The Graphics Execution Manager (GEM) emerged as one of these methods.[5]

GEM provides an API with explicit memory management primitives.[5] Through GEM, a user space program can create, handle and destroy memory objects living in the GPU's video memory. These objects, called "GEM objects",[10] are persistent from the user space program's perspective, and don't need to be reloaded every time the program regains control of the GPU. When a user space program needs a chunk of video memory (to store a framebuffer, texture or any other data required by the GPU[11]), it requests the allocation from the DRM driver using the GEM API. The DRM driver keeps track of the used video memory, and is able to comply with the request if there is free memory available, returning a "handle" to user space with which to refer to the allocated memory in subsequent operations.[5][10] The GEM API also provides operations to populate the buffer and to release it when it is no longer needed.

GEM also allows two or more user space processes using the same DRM device (and hence the same DRM driver) to share a GEM object.[12] GEM handles are local 32-bit integers, unique to a process but repeatable in other processes, and therefore not suitable for sharing. What is needed is a global namespace, and GEM provides one through the use of global handles called GEM names. A GEM name refers to one, and only one, GEM object managed by the same DRM driver, using a unique 32-bit integer. GEM provides an operation, flink, to obtain a GEM name from a GEM handle. The process can then pass this GEM name (a 32-bit integer) to another process using any available IPC mechanism. The receiving process can use the GEM name to obtain a local GEM handle pointing to the original GEM object.

Unfortunately, the use of GEM names to share buffers is not secure.[13][14][15] A malicious third-party process accessing the same DRM device could try to guess the GEM name of a buffer shared by another two processes, simply by probing 32-bit integers. Once a GEM name is found, its contents can be accessed and modified, violating the confidentiality and integrity of the buffer's information. This drawback was overcome later by the introduction of DMA-BUF support into DRM.

AGP, PCIe and other graphics cards contain an IOMMU called the Graphics Address Remapping Table (GART) which can be used to map various pages of system memory into the GPU's address space. The result is that, at any time, an arbitrary (scattered) subset of the system's RAM pages is accessible to the GPU.[4]

KMS driver (Kernel mode setting)[edit]

There must be a "DRM master" in user space; this program has exclusive access to KMS.

The KMS driver is the component which is solely responsible for the mode setting. It is the device driver for a display controller, and can be distinguished from the device driver of a rendering accelerator. Because the dies of modern GPUs found on graphics cards for desktop computers integrate the "processing logic", "display controller" and "hardware video acceleration" SIP cores, non-technical people often do not distinguish between these three very different components. SoCs, on the other hand, regularly mix SIP cores from different developers; for example, ARM's Mali SIP does not feature a display controller. For historical reasons, the DRM and the KMS of the Linux kernel were amalgamated into one component. They were split in 2013 for technical reasons.[16]

The video "Anatomy of an Embedded KMS driver" from the Embedded Linux Conference 2013, available on YouTube, explains what a KMS driver is.

Render nodes[edit]

A render node is a character device that exposes a GPU's off-screen rendering and GPGPU capabilities to unprivileged programs, without exposing any display manipulation access. This is the first step in an effort to decouple the kernel's interfaces for GPUs and display controllers from the obsolete notion of a graphics card.[16] Unprivileged off-screen rendering is assumed by both the Wayland and Mir display protocols: only the compositor is entitled to send its output to a display, and rendering on behalf of client programs is outside the scope of these protocols.

Universal plane[edit]

Patches for universal plane were submitted by Intel's Matthew D. Roper in May 2014. The idea behind universal plane is to expose all types of hardware planes to userspace via one consistent kernel-user space API.[17] Universal plane brings framebuffers (primary planes), overlays (secondary planes) and cursors (cursor planes) together under the same API. Instead of type-specific ioctls, all plane types share a common set of ioctls.[18]

Universal plane prepares the way for Atomic mode setting and nuclear pageflip.

Hardware support[edit]

DRM is used by user-mode graphics device drivers such as AMD Catalyst or Mesa 3D. User space programs use the Linux system call interface to access DRM; rather than adding new system calls, DRM extends this interface with its own ioctls.[19]

The Linux DRM subsystem includes free and open source drivers to support hardware from the three main manufacturers of GPUs for desktop computers (AMD, NVIDIA and Intel), as well as from a growing number of mobile GPU and System on a chip (SoC) integrators. The quality of each driver varies greatly, depending on the degree of cooperation by the manufacturer and other factors.

DRM drivers
Driver Since kernel Supported hardware Status/Notes
radeon 2.4.1 AMD (formerly ATi) Radeon GPU series, including R100, R200, R300, R400, Radeon X1000, HD 2000, HD 4000, HD 5000 ("Evergreen"), HD 6000 ("Northern Islands"), HD 7000/HD 8000 ("Southern Islands") and Rx 200 series
i915 2.6.9 Intel GMA 830M, 845G, 852GM, 855GM, 865G, 915G, 945G, 965G, G35, G41, G43, G45 chipsets. Intel HD and Iris Graphics HD Graphics 2000/3000/2500/4000/4200/4400/4600/P4600/P4700/5000, Iris Graphics 5100, Iris Pro Graphics 5200 integrated GPUs.
nouveau 2.6.33[20] NVIDIA Tesla, Fermi, Kepler, Maxwell based GeForce GPUs, Tegra K1 SoC
exynos 3.2 Samsung ARM-based Exynos SoCs
gma500 3.3 (from staging) Intel GMA 500 and other Imagination Technologies (PowerVR) based graphics GPUs experimental 2D KMS-only driver
ast 3.5 ASpeed Technologies 2000 series experimental
shmobile 3.7 Renesas SH Mobile
tegra 3.8 Nvidia Tegra20, Tegra30 SoCs
omapdrm 3.9 Texas Instruments OMAP5 SoCs
msm 3.12[21][22] Qualcomm's Adreno A2xx/A3xx/A4xx GPU families (Snapdragon SoCs)[23]
armada 3.13[24] Marvell Armada 510 SoCs
sti 3.17 STMicroelectronics SoC stiH41x series
imx 3.19[25][26] (from staging) Freescale i.MX SoCs
amdgpu[27] 4.2[28][29] AMD GCN 1.2 ("Volcanic Islands") microarchitecture GPUs, including Radeon R9 285 ("Tonga") and Radeon Rx 300 series ("Fiji"),[30] as well as "Carrizo" integrated APUs

There are also a number of drivers for old, obsolete hardware, detailed in the next table for historical purposes. Some of them still remain in the kernel code, but others have already been removed.

Historic DRM drivers
Driver Since kernel Supported hardware Status/Notes
gamma 2.3.18 3Dlabs GLINT GMX 2000 Removed since 2.6.14[31]
ffb 2.4 Creator/Creator3D (used by Sun Microsystems Ultra workstations) Removed since 2.6.21[32]
tdfx 2.4 3dfx Banshee/Voodoo3+
mga 2.4 Matrox G200/G400/G450
r128 2.4 ATI Rage 128
i810 2.4 Intel i810
i830 2.4.20 Intel 830M/845G/852GM/855GM/865G Removed since 2.6.39[33] (replaced by i915 driver)
sis 2.4.17 SiS 300/630/540
via 2.6.13[34] VIA Unichrome / Unichrome Pro
savage 2.6.14[35] S3 Graphics Savage 3D/MX/IX/4/SuperSavage/Pro/Twister

Development[edit]

The Direct Rendering Manager is developed within the Linux kernel, and its source code resides in the /drivers/gpu/drm directory of the Linux source code. The subsystem maintainer is Dave Airlie, with other maintainers taking care of specific drivers.[36] As usual in Linux kernel development, DRM submaintainers and contributors send their patches with new features and bug fixes to the main DRM maintainer, who integrates them into his own Linux repository. The DRM maintainer in turn submits all of the patches that are ready to be mainlined to Linus Torvalds whenever a new Linux version is going to be released. Torvalds, as top maintainer of the whole kernel, has the last word on whether a patch is suitable for inclusion in the kernel.

For historical reasons, the source code of the libdrm library is maintained under the umbrella of the Mesa project.[37]

History[edit]

In 1999, while developing DRI for XFree86, Precision Insight created the first version of DRM for 3dfx video cards, as a Linux kernel patch included within the Mesa source code.[38] Later that year, the DRM code was mainlined in Linux kernel 2.3.18 under the /drivers/char/drm/ directory for character devices.[39] During the following years the number of supported video cards grew. When Linux 2.4.0 was released in January 2001 there was already support for Creative Labs GMX 2000, Intel i810, Matrox G200/G400 and ATI Rage 128, in addition to 3dfx Voodoo3 cards,[40] and that list expanded during the 2.4.x series, with drivers for ATI Radeon cards, some SiS video cards and the Intel 830M and subsequent integrated GPUs.

The split of DRM into two components, DRM core and DRM driver, called DRM core/personality split was done during the second half of 2004,[41] and merged into kernel version 2.6.11.[42] This split allowed multiple DRM drivers for multiple devices to work simultaneously, opening the way to multi-GPU support.

The increasing complexity of video memory management led to several approaches to solving this issue. The first attempt was the Translation Table Maps (TTM) memory manager, developed by Thomas Hellstrom (Tungsten Graphics) in collaboration with Eric Anholt (Intel) and Dave Airlie (Red Hat).[4] TTM was proposed for inclusion into mainline kernel 2.6.25 in November 2007,[4] and again in May 2008, but was ditched in favor of a new approach called Graphics Execution Manager (GEM).[43] GEM was first developed by Keith Packard and Eric Anholt from Intel as a simpler solution for memory management for their i915 driver.[5] Intel's GEM also provides execution-flow control for their i915 and later GPUs, but no other driver has attempted to use the full GEM API beyond the memory-management-specific ioctls.

GEM was well received and merged into the Linux kernel version 2.6.28.[44]

Recent developments[edit]

Render nodes[edit]

In 2013, as part of GSoC, David Herrmann developed the multiple render nodes feature.[45] His code was added to the Linux kernel version 3.12 as an experimental feature[46][47][48][49][50] and enabled by default since Linux 3.17.[51]

See also[edit]

References[edit]

  1. ^ "Linux kernel/drivers/gpu/drm/README.drm". kernel.org. Retrieved 2014-02-26. 
  2. ^ Uytterhoeven, Geert. "The Frame Buffer Device". Kernel.org. Retrieved 28 January 2015. 
  3. ^ a b White, Thomas. "How DRI and DRM Work". Retrieved 22 July 2014. 
  4. ^ a b c d Corbet, Jonathan (6 November 2007). "Memory management for graphics processors". LWN.net. Retrieved 23 July 2014. 
  5. ^ a b c d e Packard, Keith; Anholt, Eric (13 May 2008). "GEM - the Graphics Execution Manager". dri-devel mailing list. Retrieved 23 July 2014. 
  6. ^ a b Kitching, Simon. "DRM and KMS kernel modules". Retrieved 23 July 2014. 
  7. ^ Herrmann, David. "Splitting DRM and KMS device nodes". Retrieved 23 July 2014. 
  8. ^ "libdrm README". Retrieved 23 July 2014. 
  9. ^ Airlie, Dave. "New proposed DRM interface design". dri-devel mailing list. Retrieved 30 January 2015. 
  10. ^ a b Barnes, Jesse; Pinchart, Laurent; Vetter, Daniel. "Memory management". Linux DRM Developer's Guide. Retrieved 31 January 2015. 
  11. ^ Vetter, Daniel. "i915/GEM Crashcourse by Daniel Vetter". Intel Open Source Technology Center. Retrieved 31 January 2015.  "GEM essentially deals with graphics buffer objects (which can contain textures, renderbuffers, shaders, or all kinds of other state objects and data used by the gpu)"
  12. ^ Vetter, Daniel (4 May 2011). "GEM Overview". Retrieved 13 February 2015. 
  13. ^ Peres, Martin; Ravier, Timothée. "DRI-next/DRM2: A walkthrough the Linux Graphics stack and its security" (PDF). Retrieved 13 February 2015. 
  14. ^ Packard, Keith (28 September 2012). "DRI-Next". Retrieved 13 February 2015.  "GEM flink has lots of issues. The flink names are global, allowing anyone with access to the device to access the flink data contents."
  15. ^ Herrmann, David. "DRM Security". The 2013 X.Org Developer's Conference (XDC2013) Proceedings. Retrieved 13 February 2015.  "gem-flink doesn't provide any private namespaces to applications and servers. Instead, only one global namespace is provided per DRM node. Malicious authenticated applications can attack other clients via brute-force "name-guessing" of gem buffers"
  16. ^ a b "Splitting DRM and KMS nodes". David Herrmann. 2013-09-01. 
  17. ^ "Universal plane support". 2014-05-07. 
  18. ^ "From pre-history to beyond the global thermonuclear war". 2014-06-05. 
  19. ^ "Initial amdgpu driver release". 2015-04-20. 
  20. ^ Skeggs, Ben. "drm/nouveau: Add DRM driver for NVIDIA GPUs". Retrieved 27 January 2015. 
  21. ^ "Merge the MSM driver from Rob Clark". freedesktop.org. 2013-08-28. Retrieved 2014-06-25. 
  22. ^ Larabel, Michael. "Snapdragon DRM/KMS Driver Merged For Linux 3.12". Phoronix. Retrieved 26 January 2015. 
  23. ^ Edge, Jake. "An update on the freedreno graphics driver". LWN.net. Retrieved 23 April 2015. 
  24. ^ King, Russell. "Armada DRM support for Linux kernel 3.13". 
  25. ^ Corbet, Jonathan. "3.19 Merge window part 2". LWN.net. Retrieved 9 February 2015. 
  26. ^ Zabel, Philipp. "drm: imx: Move imx-drm driver out of staging". Retrieved 9 February 2015. 
  27. ^ Deucher, Alex. "Initial amdgpu driver release" (Mailing list). Retrieved 21 April 2015. 
  28. ^ Larabel, Michael. "Linux 4.2 DRM Updates: Lots Of AMD Attention, No Nouveau Driver Changes". Phoronix. Retrieved 31 August 2015. 
  29. ^ Corbet, Jonathan. "4.2 Merge window part 2". LWN.net. Retrieved 31 August 2015. 
  30. ^ Deucher, Alex. "[PATCH 00/11] Add Fiji Support" (Mailing list). Retrieved 31 August 2015. 
  31. ^ Airlie, Dave. "drm: remove the gamma driver". Retrieved 27 January 2015. 
  32. ^ Miller, David S. "[DRM]: Delete sparc64 FFB driver code that never gets built". Retrieved 27 January 2015. 
  33. ^ Bergmann, Arnd. "drm: remove i830 driver". Retrieved 27 January 2015. 
  34. ^ Airlie, Dave. "drm: Add via unichrome support". Retrieved 27 January 2015. 
  35. ^ Airlie, Dave. "drm: add savage driver". Retrieved 27 January 2015. 
  36. ^ "List of maintainers of the linux kernel". Kernel.org. Retrieved 14 July 2014. 
  37. ^ "libdrm git repository". Retrieved 23 July 2014. 
  38. ^ "First DRI release of 3dfx driver.". Mesa 3D. Retrieved 15 July 2014. 
  39. ^ "Import 2.3.18pre1". The History of Linux in GIT Repository Format 1992-2010 (2010). Retrieved 15 July 2014. 
  40. ^ Torvalds, Linus. "Linux 2.4.0 source code". Kernel.org. Retrieved 29 July 2014. 
  41. ^ Airlie, Dave. "drm core/personality split" (Mailing list). Retrieved 30 January 2015. 
  42. ^ Torvalds, Linus. "Linux 2.6.11-rc1" (Mailing list). Retrieved 30 January 2015. 
  43. ^ Corbet, Jonathan (28 May 2008). "GEM v. TTM". LWN.net. Retrieved 10 February 2015. 
  44. ^ "Linux 2.6.28". KernelNewbies.org. Retrieved 23 July 2014. 
  45. ^ Herrmann, David. "DRM Render- and Modeset-Nodes". Retrieved 21 July 2014. 
  46. ^ Corbet, Jonathan. "3.12 merge window, part 2". LWN.net. Retrieved 21 July 2014. 
  47. ^ "drm: implement experimental render nodes". 
  48. ^ "drm/i915: Support render nodes". 
  49. ^ "drm/radeon: Support render nodes". 
  50. ^ "drm/nouveau: Support render nodes". 
  51. ^ Corbet, Jonathan. "3.17 merge window, part 2". LWN.net. Retrieved 7 October 2014. 

External links[edit]