Model of computation
In computability theory and computational complexity theory, a model of computation is the definition of the set of allowable operations used in computation and their respective costs. Only by assuming a particular model of computation is it possible to analyze the computational resources required, such as execution time or memory space, or to discuss the limitations of algorithms or computers.
In model-driven engineering, the model of computation explains how the behaviour of the whole system arises from the behaviour of each of its components.
In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of the primitive operations that are allowed and have unit cost, or simply unit-cost operations. A commonly used example is the random access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs from the Turing machine model, where the cost of accessing a cell depends on how far the tape head must move to reach it.
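Unit-cost accounting under the RAM model can be illustrated with a short sketch (not part of the original article): a linear search whose cost counter is incremented once per memory read, so the returned cost directly reflects the model's notion of running time.

```python
# Sketch of unit-cost accounting in the random access machine model:
# each read of a memory cell counts as exactly one primitive operation.

def linear_search(memory, target):
    """Return (index, cost), where cost counts unit-cost memory reads."""
    cost = 0
    for i in range(len(memory)):
        cost += 1                    # one unit-cost read of memory[i]
        if memory[i] == target:
            return i, cost
    return -1, cost                  # target absent: every cell was read once

index, cost = linear_search([7, 3, 9, 3], 9)   # index 2, after 3 reads
```

Under this model the worst-case cost of searching n cells is exactly n unit operations, which is why runtime bounds stated for the RAM model count primitive operations rather than, say, tape-head movements.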
There are many models of computation, differing in the set of admissible operations and their computational costs. They fall into the following broad categories: abstract machines, used in proofs of computability and of upper bounds on the computational complexity of algorithms, and decision tree models, used in proofs of lower bounds on the computational complexity of algorithmic problems.
- Stack machine (0-operand machine)
- Accumulator machine (1-operand machine)
- Register machine (2,3,... operand machine)
- Random access machine
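The operand counts above refer to how many operands an instruction names explicitly. A 0-operand (stack) machine, for example, takes its arguments implicitly from a stack. A minimal Python sketch, assuming a small hypothetical instruction set (PUSH, ADD, MUL), shows the idea:

```python
# Hypothetical 0-operand (stack) machine: arithmetic instructions name no
# operands; they pop their arguments from, and push results onto, a stack.

def run_stack_machine(program):
    stack = []
    for instr in program:
        if instr == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:                        # ("PUSH", n): loads a constant
            stack.append(instr[1])
    return stack[-1]

# (2 + 3) * 4 expressed as a stack program:
result = run_stack_machine([("PUSH", 2), ("PUSH", 3), "ADD",
                            ("PUSH", 4), "MUL"])   # 20
```

By contrast, an accumulator machine would name one operand per instruction (the other being the implicit accumulator), and a register machine would name two or three.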