Separation of mechanism and policy
The separation of mechanism and policy is a design principle in computer science. It states that mechanisms (those parts of a system implementation that control the authorization of operations and the allocation of resources) should not dictate (or overly restrict) the policies according to which decisions are made about which operations to authorize, and which resources to allocate.
This is most commonly discussed in the context of security mechanisms (authentication and authorization), but is actually applicable to a much wider range of resource allocation problems (e.g. CPU scheduling, memory allocation, Quality of Service), and the general question of good object abstraction.
In a 2000 article, Chervenak et al. described the principles of mechanism neutrality and policy neutrality.
Rationale and implications
The separation of mechanism and policy is the fundamental design approach that distinguishes a microkernel from a monolithic kernel. In a microkernel, the majority of operating system services are provided by user-level server processes. The designers of systems such as Hydra and Flask argued that an operating system should have the flexibility to provide mechanisms capable of supporting the broadest possible spectrum of real-world security policies.
It is almost impossible to envision all of the different ways a system might be used by different types of users over the life of the product, so any hard-coded policy is likely to be inadequate or inappropriate for some (or perhaps even most) users. Decoupling mechanism implementations from policy specifications allows different applications to use the same mechanism implementations with different policies, which means those mechanisms are likely to meet the needs of a wider range of users, for a longer period of time.
If new policies can be enabled without changing the implementing mechanisms, the costs and risks of such policy changes are greatly reduced. At the simplest level, this can be accomplished by segregating mechanisms and their policies into distinct modules: replacing the module that dictates a policy (e.g. a CPU scheduling policy) without changing the module that executes it (e.g. the scheduling mechanism) changes the behaviour of the system. Further, in cases where a wide or variable range of policies is anticipated, depending on applications' needs, it makes sense to create some non-code means of specifying policies, i.e. policies are not hardcoded into executable code but can be stated as an independent description. For instance, file protection policies (e.g. Unix's user/group/other read/write/execute bits) might be parametrized, or an implementing mechanism could be designed to include an interpreter for a policy specification language. In both cases, the system is typically accompanied by a mechanism (e.g. configuration files, or APIs) that permits policy specifications to be incorporated into the system, or replaced by another, after it has been delivered to the customer.
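The module segregation described above can be sketched in a few lines. This is a hypothetical illustration, not code from any real scheduler: a run-queue "mechanism" defers every ordering decision to a pluggable "policy" object, so swapping the policy changes system behaviour without touching the mechanism.

```python
class FifoPolicy:
    """Policy: pick the task that has waited longest."""
    def choose(self, tasks):
        return min(tasks, key=lambda t: t["arrived"])

class PriorityPolicy:
    """Policy: pick the highest-priority task."""
    def choose(self, tasks):
        return max(tasks, key=lambda t: t["priority"])

class Scheduler:
    """Mechanism: maintains the task list and dispatches work,
    but never itself decides *which* task runs next."""
    def __init__(self, policy):
        self.policy = policy   # swappable without changing this class
        self.tasks = []

    def add(self, task):
        self.tasks.append(task)

    def next_task(self):
        task = self.policy.choose(self.tasks)
        self.tasks.remove(task)
        return task

sched = Scheduler(FifoPolicy())
sched.add({"name": "a", "arrived": 1, "priority": 5})
sched.add({"name": "b", "arrived": 2, "priority": 9})
print(sched.next_task()["name"])   # FIFO policy picks "a"

sched.policy = PriorityPolicy()    # new policy; mechanism untouched
sched.add({"name": "a", "arrived": 1, "priority": 5})
print(sched.next_task()["name"])   # priority policy picks "b"
```

The same idea extends to non-code policy descriptions: the rule tables consumed by a policy object could equally be loaded from a configuration file supplied after delivery.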
An everyday example of mechanism/policy separation is the use of card keys to gain access to locked doors. The mechanisms (magnetic card readers, remote controlled locks, connections to a security server) do not impose any limitations on entrance policy (which people should be allowed to enter which doors, at which times). These decisions are made by a centralized security server, which (in turn) probably makes its decisions by consulting a database of room access rules. Specific authorization decisions can be changed by updating a room access database. If the rule schema of that database proved too limiting, the entire security server could be replaced while leaving the fundamental mechanisms (readers, locks, and connections) unchanged.
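The card-key example can be sketched the same way, under the assumption (all names invented here) that the door controller only reads cards and drives the lock, while a replaceable rules table decides who may enter when:

```python
class RuleTablePolicy:
    """Policy: a replaceable table mapping (badge, door)
    to the hours during which entry is permitted."""
    def __init__(self, rules):
        self.rules = rules  # {(badge, door): (open_hour, close_hour)}

    def may_enter(self, badge, door, hour):
        window = self.rules.get((badge, door))
        return window is not None and window[0] <= hour < window[1]

class DoorController:
    """Mechanism: reads the card and drives the lock; it stays
    unchanged no matter how the policy reaches its decision."""
    def __init__(self, door, policy):
        self.door = door
        self.policy = policy

    def swipe(self, badge, hour):
        if self.policy.may_enter(badge, self.door, hour):
            return "unlock"
        return "deny"

policy = RuleTablePolicy({("alice", "lab"): (9, 17)})
door = DoorController("lab", policy)
print(door.swipe("alice", 10))  # unlock: within permitted hours
print(door.swipe("alice", 20))  # deny: outside permitted hours

# Updating access rules never touches the reader/lock mechanism:
policy.rules[("bob", "lab")] = (0, 24)
print(door.swipe("bob", 3))     # unlock: new rule, same mechanism
```

Replacing `RuleTablePolicy` with a richer decision object would correspond to replacing the security server while leaving the readers, locks, and connections in place.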
- Butler W. Lampson and Howard E. Sturgis. Reflections on an Operating System Design. Communications of the ACM 19(5):251-265 (May 1976)
- Wulf 74 pp.337-345
- Brinch Hansen 70 pp.238-241
- Miller, M. S., & Drexler, K. E. (1988). "Markets and computation: Agoric open systems". In Huberman, B. A. (Ed.). (1988), pp. 133–176. The Ecology of Computation. North-Holland.
- Artsy, Yeshayahu et al., 1987.
- Chervenak 2000 p.2
- Raphael Finkel, Michael L. Scott, Y. Artsy and H. Chang. Experience with Charlotte: simplicity and function in a distributed operating system. IEEE Trans. Software Engng 15:676-685; 1989. Extended abstract presented at the IEEE Workshop on Design Principles for Experimental Distributed Systems, Purdue University; 1986.
- R. Spencer, S. Smalley, P. Loscocco, M. Hibler, D. Andersen, and J. Lepreau. The Flask Security Architecture: System Support for Diverse Security Policies. In Proceedings of the Eighth USENIX Security Symposium, pages 123–139, Aug. 1999.
- Per Brinch Hansen (2001). The evolution of operating systems (pdf). Retrieved 2006-10-24. Included in: Per Brinch Hansen, ed. (2001). Classic operating systems: from batch processing to distributed systems. New York: Springer-Verlag. pp. 1–36. ISBN 0-387-95113-X. (p.18)
- Wulf, W.; E. Cohen, W. Corwin, A. Jones, R. Levin, C. Pierson, F. Pollack (June 1974). "HYDRA: the kernel of a multiprocessor operating system". Communications of the ACM 17 (6): 337–345. doi:10.1145/355616.364017. ISSN 0001-0782.
- Hansen, Per Brinch (April 1970). "The nucleus of a Multiprogramming System". Communications of the ACM 13 (4): 238–241. doi:10.1145/362258.362278. ISSN 0001-0782. (pp.238–241)
- Levin, R.; E. Cohen, W. Corwin, F. Pollack, W. Wulf (1975). "Policy/mechanism separation in Hydra". ACM Symposium on Operating Systems Principles / Proceedings of the fifth ACM symposium on Operating systems principles: 132–140. doi:10.1145/800213.806531.
- Chervenak et al. The data grid Journal of Network and Computer Applications, Volume 23, Issue 3, July 2000, Pages 187-200
- Artsy, Yeshayahu, and Livny, Miron, An Approach to the Design of Fully Open Computing Systems (University of Wisconsin / Madison, March 1987) Computer Sciences Technical Report #689.