Nvidia Optimus is a GPU switching technology created by Nvidia which, depending on the resource load generated by client applications, seamlessly switches between two graphics adapters within a computer system, providing either maximum performance or minimum power draw from the system's graphics rendering hardware.
A typical platform includes both a lower-performance integrated graphics processor by Intel and a high-performance discrete one by Nvidia. Optimus saves battery life by automatically switching the discrete graphics processing unit (GPU) off when it is not needed and switching it on again when it is. The technology mainly targets mobile PCs such as notebooks.[a] When an application that is determined to benefit from the performance of the discrete GPU is launched, the discrete GPU is powered up and the application is served by a rendering context via that GPU. Otherwise the application is served by a rendering context that uses the integrated GPU. Switching between the graphics processors is designed to be completely seamless and to happen "behind the scenes".
When a user launches an application, the graphics driver tries to determine whether the application would benefit from the discrete GPU. If so, the GPU is powered up from an idle state and is passed all rendering calls. Even in this case, though, the integrated graphics processor (IGP) is used to output the final image. When less demanding applications are used, the IGP takes sole control, allowing for longer battery life and less fan noise. Under Windows the Nvidia driver also provides the option to manually select the GPU in the right-click menu upon launching an executable.
Within the hardware interface layer of the Nvidia GPU driver, the Optimus Routing Layer provides intelligent graphics management. The Optimus Routing Layer also includes a kernel-level library for recognizing and managing specific classes and objects associated with different graphics devices. This layer performs state and context management, allocating architectural resources as needed for each driver client (i.e., application). In this context-management scheme, each application is unaware of other applications concurrently using the GPU.
By recognizing designated classes, the Optimus Routing Layer can help determine when the GPU can be utilized to improve rendering performance. Specifically, it sends a signal to power on the GPU when it finds any of the following three call types:
- DX Calls: Any 3D game engine or DirectX application will trigger these calls
- DXVA Calls: Video playback will trigger these calls (DXVA = DirectX Video Acceleration)
- CUDA Calls: CUDA applications will trigger these calls
Predefined profiles also assist in determining whether extra graphics power is needed. These can be managed using the Nvidia Control Panel.
Optimus avoids usage of a hardware multiplexer and prevents glitches associated with changing the display driver from IGP to GPU by transferring the display surface from the GPU frame buffer over the PCI Express bus to the main memory-based framebuffer used by the IGP. The Optimus Copy Engine is a new alternative to traditional DMA transfers between the GPU framebuffer memory and main memory used by the IGP.
The binary Nvidia driver added partial Optimus support on May 3, 2013, in release 319.17. As of May 2013, power management for the discrete card was not supported, meaning the driver could not save battery by turning the Nvidia graphics card off completely.
The open-source project Bumblebee tries to provide support for graphics-chip switching. As in the Windows implementation, by default all applications run through the integrated graphics processor. As of 2013, one can only run a program with improved graphical performance on the discrete GPU by explicitly invoking it as such: for example, by using the command line or through a specially configured shortcut icon. Automatic detection and switching between graphics processors is not yet available.
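In practice this explicit invocation is done through wrapper commands provided by the Bumblebee packages. A minimal sketch, assuming the bumblebee (and optionally primus) packages are installed and the daemon is running:

```shell
# Run an OpenGL program on the discrete Nvidia GPU via Bumblebee;
# without the wrapper the same program would use the integrated GPU
optirun glxgears

# Arguments are passed through to the wrapped program as usual
optirun glxspheres64 -fullscreen

# primusrun is an alternative wrapper (from the primus package)
# with lower overhead for copying frames back to the integrated GPU
primusrun glxgears
```

A desktop shortcut configured this way simply prefixes the program's normal command line with optirun or primusrun.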
Work is in progress on a graphical interface, bumblebee-ui, which aims to make it more convenient to start programs on the discrete GPU when extra graphical performance is needed.
The Bumblebee Project continues to evolve as more of the necessary software changes are made to the graphics architecture of Linux. To make the most of it, a recent Linux distribution is recommended. As of 2013, Bumblebee software repositories are available for Arch Linux, Debian, Fedora, Gentoo, Mandriva, openSUSE and Ubuntu; the source package can be used for other distributions.
An attempt by Nvidia to support Optimus through DMA-BUF, a Linux kernel mechanism for sharing buffers across hardware (potentially GPUs), was rebuffed by kernel developers in January 2012 because of the license incompatibility between the GPL-licensed kernel code and the proprietary Nvidia blob.
When no software mechanism exists for switching between graphics adapters, the system cannot use the Nvidia GPU at all, even if an installed graphics driver would support it.
Modern Optimus Support
Many Linux distributions now support Nvidia offloading, in which the Nvidia card does all rendering. Since the internal laptop display is physically connected to the Intel adapter, the Nvidia card renders into the Intel display memory. To avoid tearing, the Xorg server provides a mechanism called PRIME Synchronization that times these buffer updates, similar to vsync; the Nvidia driver must be loaded as a kernel module for this to work. This is not usually activated by default.
Unlike Bumblebee, this offloading solution allows multi-monitor graphics. The disadvantage is that toggling the Nvidia card requires logging out.
The leading implementation of this approach is Ubuntu's prime-select package, which has command-line and graphical tools to turn the Nvidia card off. Unlike on Windows, this is not done dynamically, and the user must restart the login session for the change to take effect.
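On the command line, switching with prime-select looks like the following (Ubuntu's tool; option names may differ between releases):

```shell
# Show which GPU profile is currently selected
prime-select query

# Select the integrated Intel GPU to save power (requires root)
sudo prime-select intel

# Select the discrete Nvidia GPU for performance;
# the change only takes effect after logging out and back in
sudo prime-select nvidia
```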
Ubuntu's prime-select script is also available on Ubuntu derivatives, some of which add their own graphical tools. The prime-offload approach has been ported or reimplemented on Arch Linux and Fedora.
In 2016, Nvidia announced GL Vendor Neutral Dispatch (GLVND), which allows the Intel and Nvidia drivers to be installed simultaneously. This has greatly simplified the process of switching modes, although it took until 2018 for distributions to start taking advantage of it.
Some older and high-end laptops contain a BIOS setting to manually select the state of the hardware multiplexer that switches output between the two video devices. In this case, a Linux user can place the laptop in a hardware configuration where there is only one graphics device. This avoids the complexities of running two graphics drivers, but it offers no power savings.
Since driver version 435, the proprietary driver supports render offloading of a single window. It creates a virtual display for the dGPU to render to, whose contents are then displayed in the offloaded application's window on the main screen. As of October 2019 this requires the Xorg development branch, since the needed modifications have not yet been released.
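With a suitable driver and Xorg, per-application offloading is requested through environment variables documented by Nvidia. A sketch, assuming driver 435 or later and a PRIME-capable Xorg:

```shell
# Run a single GLX application on the discrete Nvidia GPU;
# __NV_PRIME_RENDER_OFFLOAD requests offload, and
# __GLX_VENDOR_LIBRARY_NAME routes GLX calls to the Nvidia driver
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
    glxinfo | grep "OpenGL renderer"

# Vulkan applications only need the offload variable
__NV_PRIME_RENDER_OFFLOAD=1 vkcube
```

All other applications continue to render on the integrated GPU, so no logout is required.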
- "Optimus Technology". Nvidia. Retrieved 10 April 2016.
- Lee, Terence (23 April 2011). "NVIDIA To Launch Desktop Optimus / Synergy at COMPUTEX". Retrieved 10 April 2016.
- Pop, Sebastian (26 April 2011). "NVIDIA Optimus Lands on Desktops". Retrieved 10 April 2016.
- "Bumblebee Daemon". GitHub. 22 April 2013. Retrieved 10 April 2016.
- "Bumblebee version 3.0 "Tumbleweed" release". 20 January 2012. Retrieved 10 April 2016.
- Plattner, Aaron (2 May 2013). "Linux, Solaris, and FreeBSD driver 319.17 (long-lived branch release)". Nvidia. Retrieved 10 April 2016.
- "Release of the proprietary NVIDIA 319.17 driver with Optimus and RandR 1.4 support" [Релиз проприетарного драйвера NVIDIA 319.17 с поддержкой Optimus и RandR 1.4] (in Russian). 2 May 2013. Retrieved 10 April 2016.
- "NVIDIA Talks Of Optimus Possibilities For Linux". Phoronix. January 25, 2012.
- "On laptops that don't have that hardware mux you currently cannot use the NVIDIA GPU for display.", July 23, 2010, accessed November 27, 2010. Archived July 18, 2011, at the Wayback Machine
- "Chapter 35. PRIME Render Offload". download.nvidia.com. Retrieved 2019-10-09.