Multiple buffering

In computer science, multiple buffering is the use of more than one buffer to hold a block of data, so that a "reader" sees a complete (though perhaps old) version of the data rather than a partially updated version being created by a "writer". It is also used to avoid the need for dual-ported RAM when the readers and writers are different devices.

Description

The easiest way to explain how double buffering works is through a real-world example. On a sunny day you decide to fill a paddling pool, but you cannot find your garden hose, so you must fill the pool with buckets. You fill one bucket (or "buffer") from the tap, turn the tap off, carry the bucket to the pool, pour the water in, and walk back to the tap to repeat the process. This is analogous to single buffering: the tap has to be turned off while you "process" the bucket of water.

Now consider how you would do it with two buckets. You would fill the first bucket and then swap the second in under the running tap. You then have the length of time it takes the second bucket to fill in which to empty the first into the paddling pool. When you return, you simply swap the buckets so that the first is filling again, during which time you empty the second into the pool. This can be repeated until the pool is full. The technique fills the pool far faster, as much less time is spent waiting idly for buckets to fill. This is analogous to double buffering: the tap can run all the time and never has to wait for the processing to be done.

If you employed another person to carry a bucket to the pool while one is being filled and another is being emptied, this would be analogous to triple buffering. If carrying took long enough, you could employ even more buckets, so that the tap runs continuously, always filling a bucket.

In computer science, the situation of a running tap that cannot, or should not, be turned off is common (for example, an incoming audio stream). In addition, computers typically prefer to deal with chunks of data rather than streams. In such situations double buffering is often employed.
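
As a concrete illustration, the following C sketch processes one block of a stream while the next block is captured into a second buffer, then swaps the two. The functions capture_block_in_background(), wait_for_capture() and process_block() are hypothetical placeholders, not part of any real API.

    #include <stddef.h>

    #define BLOCK_SIZE 1024

    static short buffer_a[BLOCK_SIZE];
    static short buffer_b[BLOCK_SIZE];

    /* Hypothetical placeholders for the "tap" and the "pool". */
    void capture_block_in_background(short *dst, size_t n); /* start filling dst */
    void wait_for_capture(void);                            /* block until the capture is done */
    void process_block(const short *src, size_t n);         /* consume a full block */

    void stream_loop(void)
    {
        short *full_buf  = buffer_a;  /* block ready to be processed  */
        short *empty_buf = buffer_b;  /* block currently being filled */

        /* Prime the first buffer before entering the steady state. */
        capture_block_in_background(full_buf, BLOCK_SIZE);
        wait_for_capture();

        for (;;) {
            /* The "tap" keeps running into one buffer... */
            capture_block_in_background(empty_buf, BLOCK_SIZE);
            /* ...while the other, already full buffer is processed. */
            process_block(full_buf, BLOCK_SIZE);
            wait_for_capture();

            /* Swap roles for the next iteration. */
            short *tmp = full_buf;
            full_buf  = empty_buf;
            empty_buf = tmp;
        }
    }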

Double buffering Petri net

Double Buffering Petri Net

The Petri net in the illustration shows how double buffering works. Transitions W1 and W2 represent writing to buffer 1 and buffer 2 respectively, while R1 and R2 represent reading from buffer 1 and buffer 2. At the beginning only transition W1 is enabled. After W1 fires, R1 and W2 are both enabled and can proceed in parallel. When they finish, R2 and W1 proceed in parallel, and so on.

After the initial transient in which W1 fires alone, the system is periodic and the transitions are always enabled in pairs (R1 with W2, and R2 with W1, respectively).

This Petri net is live and safe.
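
The same alternating protocol can be expressed in code. The following sketch uses POSIX threads and semaphores, with one semaphore pair per buffer playing the role of the Petri net's places; write_block() and read_block() are hypothetical stand-ins for the actual producer and consumer.

    #include <pthread.h>
    #include <semaphore.h>

    #define NBUF       2
    #define BLOCK_SIZE 1024

    static char  buf[NBUF][BLOCK_SIZE];
    static sem_t can_write[NBUF];  /* tokens enabling W1 and W2 */
    static sem_t can_read[NBUF];   /* tokens enabling R1 and R2 */

    void write_block(char *dst);       /* hypothetical producer */
    void read_block(const char *src);  /* hypothetical consumer */

    static void *writer(void *arg)
    {
        for (int i = 0; ; i ^= 1) {    /* fire W1, W2, W1, W2, ... */
            sem_wait(&can_write[i]);
            write_block(buf[i]);
            sem_post(&can_read[i]);
        }
        return arg;
    }

    static void *reader(void *arg)
    {
        for (int i = 0; ; i ^= 1) {    /* fire R1, R2, R1, R2, ... */
            sem_wait(&can_read[i]);
            read_block(buf[i]);
            sem_post(&can_write[i]);
        }
        return arg;
    }

    void run_double_buffering(void)
    {
        pthread_t w, r;
        for (int i = 0; i < NBUF; i++) {
            sem_init(&can_write[i], 0, 1);  /* both buffers start empty */
            sem_init(&can_read[i], 0, 0);   /* neither holds data yet   */
        }
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
    }

As in the Petri net, only W1 can fire at first; once it posts can_read[0], R1 and W2 run in parallel, and the two pairs then alternate indefinitely.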

Double buffering in computer graphics

In computer graphics, double buffering is a technique for drawing graphics that reduces or eliminates flicker, tearing, and other visual artifacts.

It is difficult for a program to draw a display so that no pixel changes more than once. For instance, to update a page of text it is much easier to clear the entire page and then draw the letters than to somehow erase only the pixels that are not shared by the old and new letters. However, the intermediate image is seen by the user as flickering. In addition, computer monitors constantly redraw the visible video page (typically around 60 times a second), so even a perfect update may be momentarily visible as a horizontal divider between the "new" image and the un-redrawn "old" image, an artifact known as tearing.

A software implementation of double buffering uses a video page stored in system RAM to which all drawing operations are written. When a drawing operation is considered complete, the whole page (or only the changed portion) is copied into video RAM (VRAM) in one operation to avoid flicker. The copy is generally synchronised so that it stays ahead of the monitor's raster beam; ideally (if the copy is faster than the beam), tearing is avoided as well. This software method is not always flawless and has higher overhead than the page-flipping method.
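
A minimal sketch of this approach in C, assuming a hypothetical VRAM address, a hypothetical wait_for_vertical_blank() call, and a draw_scene() placeholder; a real implementation would obtain these from the graphics driver.

    #include <stdint.h>
    #include <string.h>

    #define WIDTH  640
    #define HEIGHT 480

    /* Off-screen page in system RAM; all drawing goes here first. */
    static uint32_t back_buffer[WIDTH * HEIGHT];

    /* Hypothetical address of the visible page in video memory. */
    static volatile uint32_t *video_ram = (volatile uint32_t *)0xA0000000u;

    void draw_scene(uint32_t *target);   /* hypothetical drawing code */
    void wait_for_vertical_blank(void);  /* hypothetical synchronisation call */

    void render_frame(void)
    {
        /* 1. Draw off-screen: intermediate states are never visible. */
        memset(back_buffer, 0, sizeof back_buffer);
        draw_scene(back_buffer);

        /* 2. Copy the finished page to VRAM in one pass, staying ahead
         *    of the raster beam to avoid tearing. */
        wait_for_vertical_blank();
        memcpy((void *)video_ram, back_buffer, sizeof back_buffer);
    }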

Double buffering necessarily requires more video memory and CPU time than single buffering, because of the memory allocated for the buffer itself, the time spent on the copy operation, and the time spent waiting for synchronization. The first software implementation of double buffering in computer graphics appeared in The Rainbow, a Color Computer magazine, in an article titled "Amnotron Animation" by Archor Wright.

Compositing window managers often combine the "copying" operation from the "back buffer" with the "compositing" used to position windows, transform them with scaling or warping effects, and make portions of them transparent. Thus the "front buffer" may contain only the composited image at the size of the screen, while each window has its own "back buffer" holding the non-composited image of that window's entire contents.

Page flipping

In this method (sometimes called ping-pong buffering), instead of copying the data, both buffers are capable of being displayed (both are in VRAM). At any one time, one buffer is actively being displayed by the monitor while the other, background buffer is being drawn to. When drawing is complete, the roles of the two are switched. The page flip is typically accomplished by modifying the value of a pointer to the beginning of the display data in video memory.

The page flip is much faster than copying the data and can guarantee that tearing will not be seen, as long as the pages are switched during the monitor's vertical blanking interval, when no video data is being drawn. The currently active and visible buffer is called the front buffer, while the background page is called the back buffer.
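
A sketch of page flipping, assuming hypothetical set_display_start_address(), wait_for_vertical_blank() and draw_scene() routines; in practice these would be provided by the display hardware or driver.

    #include <stdint.h>

    void set_display_start_address(uint32_t vram_offset); /* hypothetical register write */
    void wait_for_vertical_blank(void);                   /* hypothetical synchronisation call */
    void draw_scene(uint32_t vram_offset);                /* hypothetical drawing code */

    /* Two pages, both resident in video memory. */
    static uint32_t front_page = 0x000000;  /* page being scanned out */
    static uint32_t back_page  = 0x100000;  /* page being drawn to    */

    void render_frame_flipping(void)
    {
        /* Draw into the page that is not currently visible. */
        draw_scene(back_page);

        /* Flip during vertical blank so no tearing is visible. */
        wait_for_vertical_blank();
        set_display_start_address(back_page);

        /* The old front page becomes the new drawing target. */
        uint32_t tmp = front_page;
        front_page = back_page;
        back_page  = tmp;
    }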

Triple buffering

In computer graphics, triple buffering is a variant of double buffering that provides a speed improvement. In double buffering the program must wait until the finished drawing has been copied or swapped before starting the next drawing; this waiting period can be several milliseconds, during which neither buffer can be touched. At 60 frames per second, whatever time it takes to draw an image is in effect rounded up to the next multiple of 16.67 milliseconds by this delay.

In triple buffering the program has two back buffers and can immediately start drawing into whichever one is not involved in the current copy or flip. The third buffer, the front buffer, is read by the graphics card to display the image on the monitor. Once a frame has been sent to the monitor, the front buffer is flipped with (or copied from) the back buffer holding the most recent complete frame. Since one of the back buffers is always complete, the graphics card never has to wait for the software to finish. Consequently, the software and the graphics card are completely independent and can run at their own pace. Finally, the displayed image was started without waiting for synchronization and thus with minimum lag.[1]
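
The bookkeeping can be sketched as follows, with three buffer indices standing in for real video pages; display_flip_to() and draw_scene() are hypothetical, and real code would also need to guard these variables against concurrent access.

    /* Buffer roles: one displayed, one holding the newest complete
     * frame (if any), and one free for drawing. */
    static int displayed = 0;   /* page the monitor is scanning out       */
    static int ready     = -1;  /* newest complete frame, -1 if none      */
    static int drawing   = 1;   /* page the renderer is currently filling */

    void draw_scene(int page);      /* hypothetical drawing code */
    void display_flip_to(int page); /* hypothetical flip, takes effect at vertical blank */

    /* Called by the renderer whenever it finishes a frame; it never waits. */
    void on_frame_finished(void)
    {
        ready = drawing;  /* publish the newest complete frame */

        /* Start drawing into the page that is neither displayed nor ready. */
        for (int i = 0; i < 3; i++) {
            if (i != displayed && i != ready) {
                drawing = i;
                break;
            }
        }
        draw_scene(drawing);
    }

    /* Called once per monitor refresh (e.g. from a vertical-blank handler). */
    void on_vertical_blank(void)
    {
        if (ready >= 0) {
            display_flip_to(ready);  /* show the newest complete frame */
            displayed = ready;
            ready = -1;
        }
        /* Otherwise the previous frame simply stays on screen. */
    }

Because the renderer may finish several frames between two refreshes, some frames are overwritten and never shown, which is the behaviour described in the next paragraph.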

Because the software does not have to poll the graphics hardware for monitor refresh events, it is free to run as fast as possible. This can mean that several frames are drawn into the back buffers without ever being displayed. This is not the only method of triple buffering available, but it is the most prevalent on the PC architecture, where the speed of the target machine is highly variable.

Another method of triple buffering involves synchronizing with the monitor's frame rate: drawing is not done if both back buffers already contain finished images that have not yet been displayed. This avoids wasting CPU time drawing undisplayed images and also results in a more constant frame rate (smoother movement of moving objects), but with increased latency.[2] This is the case when using triple buffering in DirectX, where a chain of three buffers is rendered and always displayed.

Triple buffering implies three buffers, but the method can be extended to as many buffers as is practical for the application. Usually, there is no advantage to using more than three buffers.

Other uses

The term double buffering is also used for copying data between two buffers for direct memory access (DMA) transfers, not to enhance performance, but to meet the specific addressing requirements of a device (especially 32-bit devices on systems with wider addressing provided via Physical Address Extension).[3] Microsoft Windows device drivers are a notable place where such double buffering is likely to be used.
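
This "bounce buffer" pattern can be sketched as follows; alloc_dma_capable() and start_dma_from() are hypothetical stand-ins for driver- and platform-specific calls.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical: allocate memory within the range the device can address. */
    void *alloc_dma_capable(size_t n);
    /* Hypothetical: program the device to transfer n bytes starting at src. */
    void start_dma_from(const void *src, size_t n);

    void dma_write(const void *data, size_t n)
    {
        /* The payload may live above the range a 32-bit device can reach,
         * so it is first copied ("double buffered") into memory the
         * device can address, and the DMA runs from that copy. */
        void *bounce = alloc_dma_capable(n);
        memcpy(bounce, data, n);
        start_dma_from(bounce, n);
    }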

Double buffering is also used as a technique to facilitate interlacing or deinterlacing of video signals.

References

  1. ^ "Triple Buffering: Why We Love It". AnandTech. June 26th, 2009. Retrieved 2009-07-16. {{cite web}}: Check date values in: |date= (help)
  2. ^ "Triple Buffering: Why We Love It". AnandTech. June 26th, 2009. Retrieved 2009-07-16. {{cite web}}: Check date values in: |date= (help)
  3. ^ "Physical Address Extension - PAE Memory and Windows". Microsoft Windows Hardware Development Central. 2005. Retrieved 2008-04-07.