Out-of-core algorithm

From Wikipedia, the free encyclopedia

Out-of-core or external memory algorithms are algorithms that are designed to process data that is too large to fit into a computer's main memory at one time. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory such as hard drives or tape drives.[1]
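A classic out-of-core technique is external merge sort: data is read in chunks small enough to fit in main memory, each chunk is sorted and written back to bulk storage as a sorted "run", and the runs are then merged with a k-way merge that only keeps one element per run in memory. A minimal sketch in Python (the function names and chunking scheme here are illustrative, not a standard implementation):

```python
import heapq
import os
import tempfile

def _write_run(sorted_chunk):
    # Write one sorted run to a temporary file on disk, one value per line.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        for v in sorted_chunk:
            f.write(f"{v}\n")
    return path

def _read_run(path):
    # Lazily stream a sorted run back from disk.
    with open(path) as f:
        for line in f:
            yield int(line)

def external_sort(values, chunk_size):
    """Sort integers that need not fit in memory (illustrative sketch).

    Only chunk_size items are held in RAM at once; sorted runs live on
    disk and are combined with a k-way merge.
    """
    run_files = []
    chunk = []
    for v in values:
        chunk.append(v)
        if len(chunk) == chunk_size:
            run_files.append(_write_run(sorted(chunk)))
            chunk = []
    if chunk:
        run_files.append(_write_run(sorted(chunk)))

    # heapq.merge consumes the run iterators lazily, so peak memory
    # stays proportional to chunk_size plus one element per run.
    result = list(heapq.merge(*(_read_run(p) for p in run_files)))
    for p in run_files:
        os.unlink(p)
    return result
```

The same pattern (sort runs in memory, merge from disk) underlies external sorting in database systems and Unix `sort`.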

A typical example is geographic information systems, especially digital elevation models, where the full data set easily exceeds several gigabytes or even terabytes of data.

The notion extends naturally to settings where a data server is connected over a network to a processing or visualization workstation. Popular data-intensive web applications such as Google Maps or Google Earth fall into this category.

This extends beyond general-purpose CPUs to GPU computing, as well as classical digital signal processing. In GPGPU-based computing, powerful graphics cards (GPUs) have little memory compared with the more familiar system memory (most often referred to simply as RAM), and CPU-to-GPU memory transfer is slow relative to the GPU's computational bandwidth.
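Because device memory is small and transfers are slow, GPU workloads on large data are typically tiled: only one tile at a time is staged into device memory, processed, and copied back. The following is a hardware-free sketch of that pattern, where `device_op` stands in for a GPU kernel (the names and tiling scheme are illustrative assumptions):

```python
def process_in_tiles(data, tile_size, device_op):
    """Sketch of tiled out-of-core processing under a small memory budget.

    At most tile_size elements are staged at once, mimicking
    host-to-device transfers of a dataset larger than device memory.
    """
    out = []
    for start in range(0, len(data), tile_size):
        tile = data[start:start + tile_size]  # "transfer": stage one tile
        out.extend(device_op(tile))           # "compute": run the kernel
    return out
```

Real GPU implementations refine this by overlapping the transfer of the next tile with computation on the current one, hiding part of the transfer latency.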

References


  1. ^ Vitter, J. S. (2001). "External Memory Algorithms and Data Structures: Dealing with Massive Data". ACM Computing Surveys 33 (2): 209–271. doi:10.1145/384192.384193.
