Denotational semantics

From Wikipedia, the free encyclopedia
Revision as of 10:54, 14 September 2009

In computer science, denotational semantics is an approach to formalizing the meanings of programming languages by constructing mathematical objects (called denotations) which describe the meanings of expressions from the languages. Other approaches to providing a formal semantics of programming languages include axiomatic semantics and operational semantics.

Denotational semantics originated in the work of Christopher Strachey and Dana Scott in the 1960s. As originally developed by Strachey and Scott, denotational semantics provided the denotation (meaning) of a computer program as a function that mapped input into output.[1] To give denotations to recursively defined programs, Scott proposed working with continuous functions between domains, specifically complete partial orders. Work continues in the present day in investigating appropriate denotational semantics for aspects of programming languages such as sequentiality, concurrency, non-determinism and local state.

Broadly speaking, denotational semantics is concerned with finding mathematical objects that represent what programs do. Collections of such objects are called domains. For example, programs (or program phrases) might be represented by partial functions, or by Actor event diagram scenarios, or by games between the environment and the system: these are all general examples of domains.

An important tenet of denotational semantics is that semantics should be compositional: the denotation of a program phrase should be built out of the denotations of its subphrases. A simple example: the meaning of "3 + 4" is determined by the meanings of "3", "4", and "+". See the discussion of Compositionality in Programming Languages below for how this can be done.

Denotational semantics was first developed as a framework for functional and sequential programs modeled as mathematical functions mapping input to output. The first section of this article describes denotational semantics developed within this framework followed by a more general computational framework. Later sections deal with issues of polymorphism, etc.

Denotations of recursive programs

In this section we review the semantics of functional recursive programs, which were the initial subject matter of denotational semantics.

The problem is as follows. We need to give a semantics to programs such as the definition of the factorial function as

function factorial(n:Nat):Nat ≡ if (n == 0) then 1 else n * factorial(n - 1).

The meaning of this factorial program should be a function on the natural numbers, but, because of its recursive definition, it is not clear how to understand this in a compositional way.

In the semantics of recursion, a domain is typically a partial order, which can be understood as an order of definedness. For instance, the set of partial functions on the natural numbers can be given an order as follows:

given partial functions f and g, let "f≤g" mean that "f agrees with g on all values for which f is defined".

The core idea behind using partial orders is that each recursive step broadens the domain on which a function is defined. Thus, for example, if f was the factorial function, recursively defined for only n iterations, then g could be the factorial function, recursively defined for n+1 (or more) iterations.
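As a concrete illustration (an informal sketch, not part of the formal theory), a partial function on the naturals can be represented as a Python dict that is defined only at its keys; the order of definedness is then easy to state:

```python
def leq(f, g):
    """f <= g: f agrees with g on all arguments at which f is defined.

    Partial functions on the naturals are represented as dicts; an
    argument absent from the dict is one at which the function is
    undefined.
    """
    return all(n in g and g[n] == f[n] for n in f)
```

For example, the totally undefined function (the empty dict) lies below every partial function, and a function defined only at 0 lies below any extension of it.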

It is usual to assume some properties of the domain, such as the existence of limits of chains (see cpo) and a bottom element. The partial order of partial functions has a bottom element, the totally undefined function. It also has least upper bounds of chains. Various additional properties are often reasonable and helpful: the article on domain theory has more details. One particularly important property is that of Scott continuity: one is interested in the continuous functions between domains (in the Scott topology). These are functions that preserve the order structure, and that preserve least upper bounds.

In this setting, types are denoted by domains, with the elements of the domains roughly capturing the elements of the types. A denotational semantics is given to a program phrase with free variables in terms of a continuous function from the denotation of its environment type to the denotation of its type. For example, the phrase n*g(n-1) has type Nat, and it has two free variables: n, of type Nat, and g, of type Nat -> Nat. Thus its denotation will be a continuous function

〚Nat〛 × 〚Nat -> Nat〛 → 〚Nat〛, that is, ℕ⊥ × (ℕ⊥ → ℕ⊥) → ℕ⊥.

Under this order on the partial functions, the denotation of the factorial program can be given as follows. First, we must develop denotations (as Scott-continuous functions) for the basic constructions such as if-then-else, ==, and multiplication. One must also develop a denotational semantics for function abstraction and application. The program phrase

λ n:N. if (n==0)then 1 else n*g(n-1)

can then be given a denotation as a continuous function between the domains of partial functions

F : (ℕ⊥ → ℕ⊥) → (ℕ⊥ → ℕ⊥).

The denotation of the factorial program is defined to be the least fixed point of this function F. It is thus an element of the domain ℕ⊥ → ℕ⊥. The reason that such a fixed point exists is that F is a continuous function. A version of Tarski's fixed point theorem says that continuous functions on domains have least fixed points.

One way of proving the fixed point theorem gives an intuition as to why it is appropriate to give a semantics for recursion in this way. Every continuous function F:D→D on a domain D with bottom element ⊥ has a fixed point given by

⊔_{i∈ℕ} F^i(⊥).

Here, the notation F^i indicates the i-fold application of F, and the symbol "⊔" means "least upper bound".

It is instructive to think of the components of the chain as "iterates". In the case of the factorial program above, we had a function F on the domain of partial functions.

  • F^0(⊥) is the totally undefined partial function ℕ→ℕ;
  • F^1(⊥) is the function that is defined at 0 (with value 1) and undefined elsewhere;
  • F^5(⊥) is the factorial function defined up to 4: F^5(⊥)(4) = 24. It is undefined for arguments greater than 4.

The least upper bound of this chain, then, is the full factorial function (which happens to be a total function).
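The chain of iterates can be sketched in the same dict-as-partial-function style (an informal illustration; the function F below is hand-written for factorial, not derived from any general machinery):

```python
def F(g):
    """One unfolding of the recursive definition of factorial.

    g is a partial function (a dict); the result is defined at 0, and at
    n+1 whenever g is defined at n.
    """
    h = {0: 1}
    for n, v in g.items():
        h[n + 1] = (n + 1) * v
    return h

def iterate(i):
    """Compute F^i(bottom), where bottom is the totally undefined function."""
    g = {}  # bottom: defined nowhere
    for _ in range(i):
        g = F(g)
    return g

# iterate(5) is factorial on arguments 0..4 and undefined above 4,
# matching F^5(bottom) in the text.
```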

Compositionality in Programming Languages

An important aspect of denotational semantics of programming languages is compositionality, by which the denotation of a program is constructed from the denotations of its parts. For example, consider the expression "8 / 4". Compositionality in this case means providing a meaning for "8 / 4" in terms of the meanings of "8", "4", and "/".

A basic denotational semantics in domain theory is compositional because it is given as follows. We start by considering program fragments, i.e. programs with free variables. A typing context assigns a type to each free variable. For instance, the expression (x / y) might be considered in a typing context (x:nat, y:nat). We now give a denotational semantics to program fragments, using the following scheme.

  1. We begin by describing the meaning of the types of our language: the meaning of each type must be a domain. We write 〚τ〛 for the domain denoting the type τ. For instance, the meaning of type nat should be the domain of natural numbers: 〚nat〛 = ℕ⊥.
  2. From the meaning of types we derive a meaning for typing contexts. We set 〚x₁:τ₁, …, xₙ:τₙ〛 = 〚τ₁〛 × … × 〚τₙ〛. For instance, 〚x:nat, y:nat〛 = ℕ⊥ × ℕ⊥. As a special case, the meaning of the empty typing context, with no variables, is the domain with one element, denoted 1.
  3. Finally, we must give a meaning to each program-fragment-in-typing-context. Suppose that P is a program fragment of type σ in typing context Γ, often written Γ ⊢ P : σ. Then the meaning of this program-in-typing-context must be a continuous function 〚Γ ⊢ P : σ〛 : 〚Γ〛 → 〚σ〛. For instance, 〚⊢ 8 : nat〛 : 1 → ℕ⊥ is the constantly "8" function, while 〚x:nat, y:nat ⊢ x/y : nat〛 : ℕ⊥ × ℕ⊥ → ℕ⊥ is the function that divides two numbers.

Now, the meaning of the compound expression (8/4) is determined by composing the three functions 〚⊢ 8 : nat〛 : 1 → ℕ⊥, 〚⊢ 4 : nat〛 : 1 → ℕ⊥, and 〚x:nat, y:nat ⊢ x/y : nat〛 : ℕ⊥ × ℕ⊥ → ℕ⊥.
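The scheme can be mimicked loosely in Python, with None standing in for ⊥ and the empty tuple for the one-element domain 1 (an informal sketch; the function names are illustrative only):

```python
def den_const(k):
    """Denotation of the literal k : the constantly-k function 1 -> N_bot."""
    return lambda unit: k

def den_div(pair):
    """Denotation of x/y in context (x:nat, y:nat) : N_bot x N_bot -> N_bot.

    Strict in both arguments; here division by zero is sent to bottom
    (None), one possible design choice.
    """
    x, y = pair
    if x is None or y is None or y == 0:
        return None
    return x // y

def den_compound(unit):
    """Denotation of 8/4 : 1 -> N_bot, obtained by composing the pieces."""
    return den_div((den_const(8)(unit), den_const(4)(unit)))
```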

There is nothing specific about domains and continuous functions here. One can work with a different category instead. For example, in game semantics, the category of games has games as objects and strategies as morphisms: we can interpret types as games, and programs as strategies. For a simple language without recursion, we can make do with the category of sets and functions. For a language with side-effects, we can work in the Kleisli category for a monad. For a language with state, we can work in a functor category.

Unfortunately, the above is not a general scheme for compositional denotational semantics that is adequate for modern software engineering. An initial difficulty comes with expressions like 8/0. One early approach was to designate a special element of the domain as an "error element" and have 8/0 denote it. However, the error element approach is unsatisfactory for software engineering. One problem is that programs then need to test continually for the error element. Another is that the approach does not automatically propagate uncaught exceptions upward.
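A sketch of the monadic alternative (the Kleisli-category idea mentioned above): a bind operation threads the error element through, so individual program steps need not test for it. The names here are illustrative, not from any particular library.

```python
ERROR = object()  # the adjoined error element

def bind(m, f):
    """Apply f to m, unless m is the error element, which propagates."""
    return ERROR if m is ERROR else f(m)

def div(x, y):
    return ERROR if y == 0 else x // y

ok = bind(div(8, 4), lambda n: n + 1)   # the error-free case
bad = bind(div(8, 0), lambda n: n + 1)  # the error propagates untested
```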

Scott and Strachey [1971] proposed that the semantics of programming languages be reduced to the semantics of the lambda calculus, thus inheriting the denotational semantics of the lambda calculus. However, it turned out that general computation could not be implemented in the lambda calculus (see Indeterminacy in concurrent computation). Thus there arose the problem of how to provide modular denotational semantics for general programming languages. One solution is to use Eval messages with an environment, so that programs obtain their denotational semantics from the methods explained earlier in this article.

Environments

Environments hold the bindings of identifiers. When an environment is sent a Lookup message for an identifier x, it returns the latest (lexical) binding of x.

As an example of how this works, consider the lambda expression <L> below, which implements a tree data structure when supplied with parameters for a leftSubTree and rightSubTree. When such a tree is given a parameter message "getLeft", it returns leftSubTree; likewise, when given the message "getRight", it returns rightSubTree.

 λ(leftSubTree, rightSubTree)
   λ(message)
     if (message == "getLeft") then leftSubTree
     else if (message == "getRight") then rightSubTree

Consider what happens when an expression of the form "(<L> 1 2)" is sent an Eval message with environment E. One semantics for application expressions such as this one is the following: <L>, 1 and 2 are each sent Eval messages with environment E. The integers 1 and 2 immediately reply to the Eval message with themselves.

However, <L> responds to the Eval message by creating a closure C that has a body <L> and environment E. "(<L> 1 2)" then sends C the message [1 2].

When C receives the message [1 2], it creates a new environment F which behaves as follows:

  1. When it receives a Lookup message for the identifier leftSubTree, it responds with 1
  2. When it receives a Lookup message for the identifier rightSubTree, it responds with 2
  3. When it receives a Lookup message for any other identifier, it forwards the Lookup message to E

C then sends an Eval message with environment F to the following:

   λ(message)
     if (message == "getLeft") then leftSubTree
     else if (message == "getRight") then rightSubTree
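The Lookup-forwarding behaviour of F, and the evaluation of the closure body in it, can be sketched as follows (a hypothetical Python rendering; the class and method names are illustrative only):

```python
class Env:
    """An environment: local bindings plus a parent to forward to."""
    def __init__(self, bindings, parent=None):
        self.bindings = bindings
        self.parent = parent

    def lookup(self, name):
        if name in self.bindings:
            return self.bindings[name]
        if self.parent is not None:
            return self.parent.lookup(name)  # forward to the outer environment
        raise NameError(name)

E = Env({})  # the outer environment
F = Env({"leftSubTree": 1, "rightSubTree": 2}, parent=E)

def body(message, env):
    """The body of the closure C, evaluated in environment F."""
    if message == "getLeft":
        return env.lookup("leftSubTree")
    elif message == "getRight":
        return env.lookup("rightSubTree")
```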

Arithmetic expressions

For another example, consider the expression "<expression1> / <expression2>", which has the subexpressions <expression1> and <expression2>. When the composite expression receives an Eval message with an environment E and a customer C, it sends Eval messages to <expression1> and <expression2> with environment E and a newly created customer C0. When C0 has received back two values N1 and N2, it sends "/" the message [N1 N2] with customer C to perform the division.
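This customer discipline resembles continuation passing, and can be sketched as follows (an informal illustration; the helper names are invented):

```python
def eval_const(k):
    """A literal replies to Eval by sending its value to the customer."""
    def run(env, customer):
        customer(k)
    return run

def eval_div(e1, e2):
    """A division expression: Eval both subexpressions, collect the two
    replies in a fresh customer C0, then send the quotient onward."""
    def run(env, customer):
        received = {}

        def c0(tag, n):
            received[tag] = n
            if len(received) == 2:  # both N1 and N2 have arrived
                customer(received["N1"] // received["N2"])

        e1(env, lambda n: c0("N1", n))
        e2(env, lambda n: c0("N2", n))
    return run

replies = []
eval_div(eval_const(8), eval_const(4))(None, replies.append)
```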

Other programming language constructs

The compositional denotational semantics presented above is very general and can be used for functional, imperative, concurrent, logic, etc. programs.[2] For example, it easily provides denotational semantics for constructs that are difficult to formalize using other approaches, such as delays and futures.

Development of denotational semantics

Denotational semantics has developed by investigating more elaborate constructions of programming languages and different models of computation.

Denotational semantics of non-deterministic programs

The concept of power domains has been developed to give a denotational semantics to non-deterministic sequential programs. Writing P for a power domain constructor, the domain P(D) is the domain of non-deterministic computations of type denoted by D.

There are difficulties with fairness and unboundedness in domain-theoretic models of non-determinism.[3] See Power domains for nondeterminism.

Denotational semantics of concurrency

Many researchers have argued that the domain theoretic models given above do not suffice for the more general case of concurrent computation. For this reason various new models have been introduced. In the early 1980s, people began using the style of denotational semantics to give semantics for concurrent languages. Examples include Will Clinger's work with the actor model; Glynn Winskel's work with event structures and Petri nets[4]; and the work by Francez, Hoare, Lehmann, and de Roever (1979) on trace semantics for CSP.[5] All these lines of enquiry remain under investigation (see e.g. Hewitt's Timed Diagrams Model,[6] or the various denotational models for CSP[7]).

Recently, Winskel and others have proposed the category of profunctors as a domain theory for concurrency.[8][9]

Denotational semantics of state

State (such as a heap) and imperative features can be straightforwardly modeled in the denotational semantics described above. All the textbooks below have the details. The key idea is to consider a command as a partial function on some domain of states. The denotation of "x:=3" is then the function that takes a state to the state with 3 assigned to x. The sequencing operator ";" is denoted by composition of functions. Fixed-point constructions are then used to give a semantics to looping constructs, such as "while".
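A minimal sketch of this idea, with states as Python dicts and commands as functions on states ("while" is given its operational unfolding here rather than an explicit fixed-point construction):

```python
def assign(var, expr):
    """Denotation of "var := expr": map a state to the updated state."""
    def cmd(state):
        s = dict(state)
        s[var] = expr(state)
        return s
    return cmd

def seq(c1, c2):
    """Denotation of "c1 ; c2" is composition of the state functions."""
    return lambda state: c2(c1(state))

def while_(cond, body):
    """Operational shortcut for the least fixed point of the unfolding."""
    def cmd(state):
        while cond(state):
            state = body(state)
        return state
    return cmd

# x := 3 ; while x > 0 do x := x - 1
prog = seq(assign("x", lambda s: 3),
           while_(lambda s: s["x"] > 0,
                  assign("x", lambda s: s["x"] - 1)))
```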

Things become more difficult in modelling programs with local variables. One approach is to no longer work with domains, but instead to interpret types as functors from some category of worlds to a category of domains. Programs are then denoted by natural continuous functions between these functors.[10][11]

Denotations of data types

Many programming languages allow users to define recursive data types. For example, the type of lists of numbers can be specified by

datatype list = Cons of (Nat, list) | Empty.

This section deals only with functional data structures that cannot change. Conventional programming languages would typically allow the elements of such a recursive list to be changed.

For another example: the type of denotations of the untyped lambda calculus is

datatype D = (D → D)

The problem of solving domain equations is concerned with finding domains that model these kinds of datatypes. One approach, roughly speaking, is to consider the collection of all domains as a domain itself, and then solve the recursive definition there. The textbooks below give more details.

Polymorphic data types are data types that are defined with a parameter. For example, the type of α lists is defined by

datatype α list = Cons of (α, α list) | Empty.

Lists of numbers, then, are of type Nat list, while lists of strings are of type String list.
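A hypothetical Python rendering of the list datatype (illustrative only; None plays the role of Empty):

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

A = TypeVar("A")

@dataclass
class Cons(Generic[A]):
    """A non-empty list: a head element and a tail list."""
    head: A
    tail: "Optional[Cons[A]]"  # None is Empty

def length(xs: "Optional[Cons[A]]") -> int:
    """Structural recursion over the datatype."""
    return 0 if xs is None else 1 + length(xs.tail)

nats = Cons(1, Cons(2, None))  # a Nat list
```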

Some researchers have developed domain theoretic models of polymorphism. Other researchers have also modeled parametric polymorphism within constructive set theories. Details are found in the textbooks listed below.

A recent research area has involved denotational semantics for object and class based programming languages.[12]

Denotational semantics for programs of restricted complexity

Following the development of programming languages based on linear logic, denotational semantics have been given to languages for linear usage (see e.g. proof nets, coherence spaces) and also polynomial time complexity[13].

Denotational semantics of sequentiality

The problem of full abstraction (see below) for the sequential programming language PCF was, for a long time, a big open question in denotational semantics. The difficulty with PCF is that it is a very sequential language. For example, there is no way to define the parallel-or function in PCF. It is for this reason that the approach using domains, as introduced above, yields a denotational semantics that is not fully abstract.

This open question was mostly resolved in the 1990s with the development of game semantics and also with techniques involving logical relations.[14] For more details, see the page on PCF.

Denotational semantics as source-to-source translation

It is often useful to translate one programming language into another. For example, a concurrent programming language might be translated into a process calculus; a high-level programming language might be translated into byte-code. (Indeed, conventional denotational semantics can be seen as the interpretation of programming languages into the internal language of the category of domains.)

In this context, notions from denotational semantics, such as full abstraction, help to satisfy security concerns.[15][16]

Abstraction

It is often considered important to connect denotational semantics with operational semantics. This is especially important when the denotational semantics is rather mathematical and abstract, and the operational semantics is more concrete or closer to the computational intuitions. The following properties of a denotational semantics are often of interest.

  1. Syntax independence: The denotations of programs should not involve the syntax of the source language.
  2. Soundness: All observably distinct programs have distinct denotations.
  3. Full abstraction: Two programs have the same denotations precisely when they are observationally equivalent. For semantics in the traditional style, full abstraction may be understood roughly as the requirement that "operational equivalence coincides with denotational equality". For denotational semantics in more intensional models, such as the Actor model and process calculi, there are different notions of equivalence within each model, and so the concept of full abstraction is a matter of debate, and harder to pin down. Also the mathematical structure of operational semantics and denotational semantics can become very close.

Additional properties that are often desirable to hold between the operational and denotational semantics are:

  1. Constructivism: Domain elements should be shown to exist by constructive methods.
  2. Independence of denotational and operational semantics: The denotational semantics should be formalized using mathematical structures that are independent of the operational semantics of a programming language; however, the underlying concepts can be closely related. See the section on Compositionality below.
  3. Full completeness or definability: Every morphism of the semantic model should be the denotation of a program.[17]

Semantics versus implementation

According to Dana Scott [1980]:

It is not necessary for the semantics to determine an implementation, but it should provide criteria for showing that an implementation is correct.

According to Clinger [1981]:

Usually, however, the formal semantics of a conventional sequential programming language may itself be interpreted to provide an (inefficient) implementation of the language. A formal semantics need not always provide such an implementation, though, and to believe that semantics must provide an implementation leads to confusion about the formal semantics of concurrent languages. Such confusion is painfully evident when the presence of unbounded nondeterminism in a programming language's semantics is said to imply that the programming language cannot be implemented.

Connections to other areas of computer science

Some work in denotational semantics has interpreted types as domains in the sense of domain theory which can be seen as a branch of model theory, leading to connections with type theory and category theory. Within computer science, there are connections with abstract interpretation, program verification, and model checking.

Monads were introduced to denotational semantics as a way of organising semantics, and these ideas have had a big impact in functional programming (see monads in functional programming).
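For instance, a monad packages a notion of computational effect so that the semantic clauses remain compositional. The sketch below (illustrative names; a simplified option monad with `None` standing for failure) gives a tiny expression language a semantics in which division by zero denotes failure:

```python
# A Maybe-style monad, simplified: a successful result is the value
# itself, failure is None.  (This sketch conflates "Just None" with
# failure, which is harmless for this example.)

def unit(x):
    """Monadic return: inject a pure value."""
    return x

def bind(m, f):
    """Monadic bind: propagate failure, otherwise continue with f."""
    return None if m is None else f(m)

def ev(e):
    """Compositional semantics: each clause is built from the
    denotations of the subphrases, with failure threaded by bind."""
    if e[0] == "lit":
        return unit(e[1])
    _, a, b = e  # e[0] == "div"
    return bind(ev(a), lambda x:
           bind(ev(b), lambda y:
           None if y == 0 else unit(x // y)))
```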

References

  1. ^ Dana Scott and Christopher Strachey. Toward a mathematical semantics for computer languages Oxford Programming Research Group Technical Monograph. PRG-6. 1971.
  2. ^ Carl Hewitt. Common sense for concurrency and strong paraconsistency using unstratified inference and reflection Arxiv 0812.4852. 2008.
  3. ^ Paul Blain Levy: Amb Breaks Well-Pointedness, Ground Amb Doesn't. Electr. Notes Theor. Comput. Sci. 173: 221-239 (2007)
  4. ^ Event Structure Semantics for CCS and Related Languages. DAIMI Research Report, University of Aarhus, 67 pp., April 1983.
  5. ^ Nissim Francez, C.A.R. Hoare, Daniel Lehmann, and Willem-Paul de Roever. Semantics of nondeterminism, concurrency, and communication Journal of Computer and System Sciences. December 1979.
  6. ^ Carl Hewitt (2006) What is Commitment? Physical, Organizational, and Social COIN@AAMAS. 2006.
  7. ^ A. W. Roscoe: The Theory and Practice of Concurrency, Prentice Hall, ISBN 0-13-674409-5. Revised 2005.
  8. ^ Gian Luca Cattani, Glynn Winskel: Profunctors, open maps and bisimulation. Mathematical Structures in Computer Science 15(3): 553-614 (2005)
  9. ^ Mikkel Nygaard, Glynn Winskel: Domain theory for concurrency. Theor. Comput. Sci. 316(1): 153-190 (2004)
  10. ^ Peter W. O'Hearn, John Power, Robert D. Tennent, Makoto Takeyama: Syntactic control of interference revisited. Electr. Notes Theor. Comput. Sci. 1. 1995.
  11. ^ Frank J. Oles: A Category-Theoretic Approach to the Semantics of Programming. PhD thesis, Syracuse University. 1982.
  12. ^ Bernhard Reus, Thomas Streicher: Semantics and logic of object calculi. Theor. Comput. Sci. 316(1): 191-213 (2004)
  13. ^ P. Baillot. Stratified coherence spaces: a denotational semantics for Light Linear Logic ( ps.gz) Theoretical Computer Science , 318 (1-2), pp.29-55, 2004.
  14. ^ P. W. O'Hearn and J. G. Riecke, Kripke Logical Relations and PCF, Information and Computation, Volume 120, Issue 1, July 1995, Pages 107-116.
  15. ^ Martin Abadi. Protection in programming-language translations. Proc. of ICALP'98. LNCS 1443. 1998.
  16. ^ Andrew Kennedy. Securing the .NET programming model. Theoretical Computer Science, 364(3). 2006
  17. ^ Curien, Pierre-Louis. "Definability and Full Abstraction". Electronic Notes in Theoretical Computer Science 172 (Papers in honour of Gordon Plotkin), Elsevier: 301–310. doi:10.1016/j.entcs.2007.02.011.

Further reading

Textbooks

  • Joseph E. Stoy, Denotational Semantics: The Scott-Strachey Approach to Programming Language Semantics. MIT Press, Cambridge, Massachusetts, 1977. (A classic if dated textbook.)
  • Carl Gunter, "Semantics of Programming Languages: Structures and Techniques". MIT Press, Cambridge, Massachusetts, 1992. (ISBN 978-0262071437)
  • Glynn Winskel, Formal Semantics of Programming Languages. MIT Press, Cambridge, Massachusetts, 1993. (ISBN 978-0262731034)
  • R. D. Tennent, Denotational semantics. Handbook of logic in computer science, vol. 3 pp 169—322. Oxford University Press, 1994. (ISBN 0-19-853762-X)
  • S. Abramsky and A. Jung: Domain theory. In S. Abramsky, D. M. Gabbay, T. S. E. Maibaum, editors, Handbook of Logic in Computer Science, vol. III. Oxford University Press, 1994. (ISBN 0-19-853762-X)
  • David A. Schmidt, Denotational semantics: a methodology for language development, Allyn and Bacon, 1986, ISBN 0205104509 (out of print; free electronic version available)

Other references

  • Irene Greif. Semantics of Communicating Parallel Processes MIT EECS Doctoral Dissertation. August 1975.
  • Gordon Plotkin. A powerdomain construction SIAM Journal on Computing September 1976.
  • Edsger Dijkstra. A Discipline of Programming Prentice Hall. 1976.
  • Krzysztof R. Apt, J. W. de Bakker. Exercises in Denotational Semantics MFCS 1976: 1-11
  • J. W. de Bakker. Least Fixed Points Revisited Theoretical Computer Science 2(2): 155-181 (1976)
  • Carl Hewitt and Henry Baker Actors and Continuous Functionals Proceeding of IFIP Working Conference on Formal Description of Programming Concepts. August 1-5, 1977.
  • Henry Baker. Actor Systems for Real-Time Computation MIT EECS Doctoral Dissertation. January 1978.
  • Michael Smyth. Power domains Journal of Computer and System Sciences. 1978.
  • George Milne and Robin Milner. Concurrent processes and their syntax JACM. April, 1979.
  • Nancy Lynch and Michael J. Fischer. On describing the behavior of distributed systems in Semantics of Concurrent Computation. Springer-Verlag. 1979.
  • Jerald Schwartz Denotational semantics of parallelism in Semantics of Concurrent Computation. Springer-Verlag. 1979.
  • William Wadge. An extensional treatment of dataflow deadlock Semantics of Concurrent Computation. Springer-Verlag. 1979.
  • Ralph-Johan Back. Semantics of Unbounded Nondeterminism ICALP 1980.
  • David Park. On the semantics of fair parallelism Proceedings of the Winter School on Formal Software Specification. Springer-Verlag. 1980.
  • Will Clinger, Foundations of Actor Semantics. MIT Mathematics Doctoral Dissertation, June 1981.
  • Lloyd Allison, A Practical Introduction to Denotational Semantics Cambridge University Press. 1987.
  • P. America, J. de Bakker, J. N. Kok and J. Rutten. Denotational semantics of a parallel object-oriented language Information and Computation, 83(2): 152 - 205 (1989)
  • David A. Schmidt, The Structure of Typed Programming Languages. MIT Press, Cambridge, Massachusetts, 1994. ISBN 0-262-69171-X.
  • M. Korff True concurrency semantics for single pushout graph transformations with applications to actor systems Working papers of the Int. Workshop on Information Systems - Correctness and Reusability. World Scientific. 1995.
  • M. Korff and L. Ribeiro Concurrent derivations as single pushout graph grammar processes Proceedings of the Joint COMPUGRAPH/SEMAGRAPH Workshop on Graph Rewriting and Computation. ENTCS Vol 2, Elsevier. 1995.
  • Thati, Prasanna, Carolyn Talcott, and Gul Agha. Techniques for Executing and Reasoning About Specification Diagrams International Conference on Algebraic Methodology and Software Technology (AMAST), 2004.
  • J.C.M. Baeten, T. Basten, and M.A. Reniers. Algebra of Communicating Processes Cambridge University Press. 2005.
  • He Jifeng and C.A.R. Hoare. Linking Theories of Concurrency United Nations University International Institute for Software Technology UNU-IIST Report No. 328. July, 2005.
  • Luca Aceto and Andrew D. Gordon (editors). Algebraic Process Calculi: The First Twenty Five Years and Beyond Process Algebra. Bertinoro, Forlì, Italy, August 1–5, 2005.