Predicate transformer semantics


Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each statement in this language a corresponding predicate transformer: a total function mapping one predicate on the state space of the statement to another. In this sense, predicate transformer semantics are a kind of denotational semantics. Actually, in guarded commands, Dijkstra uses only one kind of predicate transformer: the well-known weakest preconditions (see below).

Moreover, predicate transformer semantics are a reformulation of Floyd–Hoare logic. Whereas Hoare logic is presented as a deductive system, predicate transformer semantics (either by weakest-preconditions or by strongest-postconditions; see below) are complete strategies to build valid deductions of Hoare logic. In other words, they provide an effective algorithm to reduce the problem of verifying a Hoare triple to the problem of proving a first-order formula. Technically, predicate transformer semantics perform a kind of symbolic execution of statements into predicates: execution runs backward in the case of weakest-preconditions, and forward in the case of strongest-postconditions.
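As a finite-domain illustration of this reduction (a sketch, not part of the original presentation: states are modeled as Python dicts, statements as state-to-state functions, and predicates as boolean-valued functions), a Hoare triple can be checked by testing that every state satisfying the precondition is mapped to a state satisfying the postcondition:

```python
# Finite-domain illustration (a sketch, not a proof): a Hoare triple
# {P} S {R} holds on the sampled states iff every state satisfying P is
# mapped by S to a state satisfying R.
def holds(pre, stmt, post, states):
    return all(post(stmt(s)) for s in states if pre(s))

states = [{'x': v} for v in range(-10, 10)]
# {x >= 0} x := x + 1 {x >= 1}
assert holds(lambda s: s['x'] >= 0,
             lambda s: {**s, 'x': s['x'] + 1},
             lambda s: s['x'] >= 1,
             states)
```

Of course, this only samples the state space; the point of predicate transformers is to discharge such obligations symbolically, for all states at once.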

Weakest preconditions[edit]


For a statement S and a postcondition R, a weakest precondition is a predicate Q such that for any precondition P, {P} S {R} holds if and only if P ⇒ Q. In other words, it is the "loosest" or least restrictive requirement needed to guarantee that R holds after S. Uniqueness follows easily from the definition: if both Q and Q' are weakest preconditions, then by the definition {Q'} S {R}, so Q' ⇒ Q, and {Q} S {R}, so Q ⇒ Q'; thus Q = Q'. We often use wp(S, R) to denote the weakest precondition for statement S with respect to a postcondition R.


We use T to denote the predicate that is everywhere true and F to denote the one that is everywhere false. These should not be confused, at least conceptually, with the Boolean scalars true and false that a language's expression syntax may also provide. For such scalars a type coercion is needed, so that T = predicate(true) and F = predicate(false). Since this promotion is usually carried out silently, people tend to write T as true and F as false.




Assignment[edit]

We give below two equivalent weakest-preconditions for the assignment statement x := E. In these formulas, R[x ← E] is a copy of R where free occurrences of x are replaced by E. Hence, here, the expression E is implicitly coerced into a valid term of the underlying logic: it is thus a pure expression, totally defined, terminating and without side effect.

  • version 1: wp(x := E, R) = ∀y. (y = E ⇒ R[x ← y])

where y is a fresh variable, not free in E and R (representing the final value of variable x)

  • version 2: wp(x := E, R) = R[x ← E]

Provided that E is well defined, version 2 is obtained from version 1 by applying the so-called one-point rule.

The first version avoids a potential duplication of x in R, whereas the second version is simpler when there is at most a single occurrence of x in R. The first version also reveals a deep duality between weakest-precondition and strongest-postcondition (see below).

An example of a valid calculation of wp (using version 2) for the assignment x := x − 5 with an integer-valued variable x is:

wp(x := x − 5, x > 10) = (x − 5 > 10) = (x > 15)

This means that in order for the postcondition x > 10 to be true after the assignment, the precondition x > 15 must be true before the assignment. This is also the "weakest precondition", in that it is the "weakest" restriction on the value of x which makes x > 10 true after the assignment.
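The substitution rule for assignment can be sketched in code (a minimal illustration in our own representation: predicates are boolean functions on state dicts, and `wp_assign` is a hypothetical helper, not the article's notation):

```python
# A minimal sketch (our representation, not the article's): wp for an
# assignment substitutes the assigned expression into the postcondition,
# i.e. version 2 above: wp(x := E, R) = R[x <- E].
def wp_assign(var, expr, post):
    return lambda s: post({**s, var: expr(s)})

# wp(x := x - 5, x > 10) should behave exactly like x > 15.
pre = wp_assign('x', lambda s: s['x'] - 5, lambda s: s['x'] > 10)
assert all(pre({'x': v}) == (v > 15) for v in range(-20, 40))
```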


Sequence[edit]

wp(S1; S2, R) = wp(S1, wp(S2, R))

For example,

wp(x := x − 5; x := x × 2, x > 20) = wp(x := x − 5, x × 2 > 20) = ((x − 5) × 2 > 20) = (x > 15)


Conditional[edit]

wp(if E then S1 else S2 end, R) = (E ⇒ wp(S1, R)) ∧ (¬E ⇒ wp(S2, R))

As an example:

wp(if x < y then x := y else skip end, x ≥ y) = (x < y ⇒ y ≥ y) ∧ (¬(x < y) ⇒ x ≥ y) = T

While loop[edit]

Partial Correctness[edit]

Ignoring termination for a moment, we can define the rule for the weakest liberal precondition, denoted wlp, using a predicate INV, called the loop invariant, typically supplied by the programmer:

wlp(while E do S done, R) = INV ∧ ∀y. ((E ∧ INV ⇒ wlp(S, INV)) ∧ (¬E ∧ INV ⇒ R))[x ← y]

where y is a fresh tuple of variables.

Total Correctness[edit]

To show total correctness, we also have to show that the loop terminates. For this we define a well-founded relation on the state space, denoted (wfs, <), and define a variant function vf, such that we have:

vf ∈ wfs
∧ ∀v. ((E ∧ INV ∧ vf = v) ⇒ wp(S, INV ∧ vf < v))
∧ (¬E ∧ INV ⇒ R)

where v is a fresh tuple of variables

Informally, in the above conjunction of three formulas:

  • the first one means that the variant must be part of the well-founded relation before entering the loop;
  • the second one means that the body of the loop (i.e. statement S) must preserve the invariant and reduce the variant;
  • the last one means that the loop postcondition R must be established when the loop finishes.

However, the conjunction of those three formulas is only a sufficient condition, not a necessary one: together with the invariant INV holding initially, it implies wp(while E do S done, R), but the converse implication does not hold in general.
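A hypothetical finite-domain check of the three obligations, for the loop while x > 0 do x := x − 1 od with invariant x ≥ 0, variant vf = x over the naturals, and postcondition x = 0 (a sketch; the representation and names are ours, not the article's):

```python
# Finite-domain check of the three proof obligations for
# "while x > 0 do x := x - 1 od", INV: x >= 0, vf = x, post: x = 0.
guard = lambda s: s['x'] > 0
body  = lambda s: {**s, 'x': s['x'] - 1}
inv   = lambda s: s['x'] >= 0
vf    = lambda s: s['x']
post  = lambda s: s['x'] == 0

states = [{'x': v} for v in range(0, 50)]
# 1. the variant lies in the well-founded set (here: the naturals)
assert all(vf(s) >= 0 for s in states if inv(s))
# 2. the body preserves INV and strictly decreases the variant
assert all(inv(body(s)) and vf(body(s)) < vf(s)
           for s in states if inv(s) and guard(s))
# 3. INV and the negated guard establish the postcondition
assert all(post(s) for s in states if inv(s) and not guard(s))
```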

Non-deterministic guarded commands[edit]

Actually, Dijkstra's Guarded Command Language (GCL) is an extension of the simple imperative language given so far with non-deterministic statements. Indeed, GCL aims to be a formal notation to define algorithms. Non-deterministic statements represent choices left to the actual implementation (in an effective programming language): properties proved of non-deterministic statements are ensured for all possible choices of implementation. In other words, weakest-preconditions of non-deterministic statements ensure

  • that there exists a terminating execution (i.e. there exists an implementation),
  • and that the final state of every terminating execution satisfies the postcondition.

Notice that the definitions of weakest-precondition given above (in particular for while-loop) preserve this property.


Selection is a generalization of the if statement:

if g1 → S1 ⫿ … ⫿ gn → Sn fi

with weakest precondition

wp(if g1 → S1 ⫿ … ⫿ gn → Sn fi, R) = (g1 ∨ … ∨ gn) ∧ (g1 ⇒ wp(S1, R)) ∧ … ∧ (gn ⇒ wp(Sn, R))

Here, when two guards gi and gj are simultaneously true, execution of this statement can run any of the associated statements Si or Sj.
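This demonic reading of selection can be sketched over a finite domain (our representation; `wp_select` is an illustrative helper, not standard notation): some guard must hold, and every enabled branch must establish the postcondition.

```python
# Demonic reading of Dijkstra's selection (a sketch; names are ours):
# at least one guard must be enabled, and every enabled branch must
# establish the postcondition R.
def wp_select(branches, post):
    def pre(s):
        enabled = [stmt for guard, stmt in branches if guard(s)]
        return bool(enabled) and all(post(stmt(s)) for stmt in enabled)
    return pre

branches = [(lambda s: s['x'] >= 0, lambda s: s),                       # skip
            (lambda s: s['x'] <= 0, lambda s: {**s, 'x': -s['x']})]     # x := -x
# if x >= 0 -> skip ⫿ x <= 0 -> x := -x fi establishes x >= 0 everywhere
pre = wp_select(branches, lambda s: s['x'] >= 0)
assert all(pre({'x': v}) for v in range(-5, 6))
# at x = 0 both guards are enabled; postcondition x > 0 then fails on skip
assert not wp_select(branches, lambda s: s['x'] > 0)({'x': 0})
```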


Repetition is a generalization of the while statement in a similar way.

Specification statement[edit]

Refinement calculus extends GCL with the notion of a specification statement. Syntactically, we prefer to write a specification statement as

x:[pre, post]

which specifies a computation that starts in a state satisfying pre and is guaranteed to end in a state satisfying post by changing only x. We call x₀ a logical constant employed to aid in a specification. For example, we can specify a computation that increments x by 1 as

x:[x = x₀, x = x₀ + 1]

Another example is a computation of a square root of an integer:

x:[x = x₀ ∧ x₀ ≥ 0, x² ≤ x₀ < (x + 1)²]


The specification statement appears to be primitive in the sense that it does not contain other statements. However, it is very expressive, as pre and post are arbitrary predicates. Its weakest precondition is as follows:

wp(x:[pre, post], R) = pre ∧ ∀s. ((∀x₀. pre ⇒ post[x ← s]) ⇒ R[x ← s])

where s is fresh.

It combines Morgan's syntactic idea with the sharpness idea of Bijlsma, Matthews and Wiltink.[1] The main advantage of this is its ability to define the wp of goto L and other jump statements.[2]

Goto statement[edit]

Formalizing jump statements like goto L has been a long and bumpy process. A common belief seems to be that the goto statement can only be argued operationally. This is probably due to a failure to recognize that goto L, taken by itself, is actually miraculous (i.e. non-strict) and so does not obey Dijkstra's Law of the Excluded Miracle. But it admits an extremely simple treatment from the weakest precondition perspective, which was unexpected. We define

wp(goto L, R) = wpL

where wpL is the weakest precondition at label L.

For goto L, execution transfers control to label L, at which the weakest precondition has to hold. The way wpL is referred to in the rule should come as no surprise: it is just wp(L:S, Q) for some Q computed up to that point. Like any wp rule, it uses constituent statements to give wp definitions, even though goto L appears primitive. The rule does not require uniqueness for locations where wpL holds within a program, so theoretically the same label may appear in multiple locations as long as the weakest precondition at each location is the same wpL; the goto statement can then jump to any such location. This also justifies placing the same label at the same location multiple times, as in S(L:L: S1), which is the same as S(L: S1). Furthermore, the rule does not imply any scoping restriction, thus allowing a jump into a loop body, for example. Let us calculate wp of the following program S, which has a jump into the loop body.

     wp(do x > 0 → L: x := x-1 od;  if x < 0 → x := -x; goto L ⫿ x ≥ 0 → skip fi,  post) 
   =   { sequential composition and alternation rules }
     wp(do x > 0 → L: x := x-1 od, (x<0 ∧ wp(x := -x; goto L, post)) ∨ (x ≥ 0 ∧ post))
   =   { sequential composition, goto, assignment rules }
     wp(do x > 0 → L: x := x-1 od, x<0 ∧ wpL(x ← -x) ∨ x≥0 ∧ post)
   =   { repetition rule }
     the strongest solution of 
              Z: [ Z ≡ x > 0 ∧ wp(L: x := x-1, Z) ∨ x < 0 ∧ wpL(x ← -x) ∨ x=0 ∧ post ]    
   =  { assignment rule, found wpL = Z(x ← x-1) }
     the strongest solution of 
              Z: [ Z ≡ x > 0 ∧ Z(x ← x-1) ∨ x < 0 ∧ Z(x ← x-1) (x ← -x) ∨ x=0 ∧ post]
   =  { substitution }
     the strongest solution of 
              Z:[ Z ≡ x > 0 ∧ Z(x ← x-1) ∨ x < 0 ∧ Z(x ← -x-1) ∨ x=0 ∧ post ]
   =  { solve the equation by approximation }
     post(x ← 0)

Therefore, wp(S, post) = post(x ← 0).
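This result can be cross-checked operationally (a sketch in our own representation; the goto is simulated with a flag that re-enters the loop body at label L, i.e. just before the decrement): every run of the program should terminate with x = 0, matching wp(S, post) = post(x ← 0).

```python
# Operational simulation of the goto example above.
def run(x):
    at_L = False
    while True:
        while x > 0 or at_L:      # do x > 0 -> L: x := x - 1 od
            at_L = False          # label L sits just before the decrement
            x = x - 1
        if x < 0:                 # if x < 0 -> x := -x; goto L
            x = -x
            at_L = True           # goto L: jump back into the loop body
        else:                     # ⫿ x >= 0 -> skip fi
            return x

# wp(S, post) = post(x <- 0): every run ends with x = 0
assert all(run(v) == 0 for v in range(-10, 11))
```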

Other predicate transformers[edit]

Weakest liberal precondition [edit]

An important variant of the weakest precondition is the weakest liberal precondition wlp(S, R), which yields the weakest condition under which S either does not terminate or establishes R. It therefore differs from wp in not guaranteeing termination. Hence it corresponds to Hoare logic in partial correctness: for the statement language given above, wlp differs from wp only on the while-loop, in not requiring a variant (see above).

Strongest postcondition[edit]

Given a statement S and a precondition R (a predicate on the initial state), sp(S, R) is their strongest-postcondition: it implies any postcondition satisfied by the final state of any execution of S, for any initial state satisfying R. In other words, a Hoare triple {R} S {P} is provable in Hoare logic (for partial correctness) if and only if the predicate below holds:

∀x. (sp(S, R) ⇒ P)

Usually, strongest-postconditions are used in partial correctness. Hence, we have the following relation between weakest-liberal-preconditions and strongest-postconditions:

(∀x. (sp(S, R) ⇒ P))  ⇔  (∀x. (R ⇒ wlp(S, P)))

For example, on assignment we have:

sp(x := E, R) = ∃y. (x = E[x ← y]) ∧ R[x ← y]

where y is fresh.

Above, the logical variable y represents the initial value of variable x.

On sequence, it appears that sp runs forward (whereas wp runs backward):

sp(S1; S2, R) = sp(S2, sp(S1, R))
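The forward flavour of sp can be sketched over finite state sets (representation choices are ours: a predicate is identified with the set of states satisfying it, and sp maps a pre-set to the set of reachable post-states):

```python
# Forward-image sketch: sp of a deterministic statement over a finite
# set of pre-states is simply the image of that set.
def sp(stmt, pre_states):
    return {stmt(s) for s in pre_states}

def seq(s1, s2):
    return lambda s: s2(s1(s))

inc = lambda x: x + 1     # states are plain integers here
dbl = lambda x: x * 2

pre = set(range(0, 5))
# sp(S1; S2, R) = sp(S2, sp(S1, R)): sp composes forward
assert sp(seq(inc, dbl), pre) == sp(dbl, sp(inc, pre))
```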

Win and sin predicate transformers[edit]

Leslie Lamport has suggested win and sin as predicate transformers for concurrent programming.[3]

Predicate transformers properties[edit]

This section presents some characteristic properties of predicate transformers.[4] Below, S denotes a predicate transformer (a function from predicates on the state space to predicates on the state space) and P a predicate. For instance, S(P) may denote wp(S, P) or sp(S, P). We keep x as the variable of the state space.


Predicate transformers of interest (wp, wlp, and sp) are monotonic. A predicate transformer S is monotonic if and only if:

(P ⇒ Q) ⇒ (S(P) ⇒ S(Q))

This property is related to the consequence rule of Hoare logic.
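A small sanity check of monotonicity for wp on an assignment (a sketch; `wp_assign` is our illustrative substitution helper, as in the assignment section): whenever P ⇒ Q pointwise, wp(S, P) ⇒ wp(S, Q) pointwise.

```python
# Monotonicity check on a finite domain for wp of x := x + 1.
def wp_assign(var, expr, post):
    return lambda s: post({**s, var: expr(s)})

P = lambda s: s['x'] > 10          # P is stronger than Q
Q = lambda s: s['x'] > 5
inc = lambda s: s['x'] + 1
wpP, wpQ = wp_assign('x', inc, P), wp_assign('x', inc, Q)

states = [{'x': v} for v in range(-20, 20)]
assert all((not P(s)) or Q(s) for s in states)        # P => Q
assert all((not wpP(s)) or wpQ(s) for s in states)    # wp(S,P) => wp(S,Q)
```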


A predicate transformer S is strict iff:

S(F) ≡ F

For instance, wp is artificially made strict, whereas wlp is generally not. In particular, if statement S may not terminate then wlp(S, F) is satisfiable. We have

wlp(while true do skip done, F) = T

Indeed, T is a valid invariant of that loop.

The non-strict but monotonic or conjunctive predicate transformers are called miraculous and can also be used to define a class of programming constructs, in particular jump statements, which Dijkstra cared less about. Those jump statements include the straight goto L, break and continue in a loop, return statements in a procedure body, exception handling, etc. It turns out that all jump statements are executable miracles,[5] i.e. they can be implemented but are not strict.


A predicate transformer S is terminating iff:

S(T) ≡ T

Actually, this terminology makes sense only for strict predicate transformers: indeed, wp(S, T) is the weakest-precondition ensuring termination of S.

It seems that naming this property non-aborting would be more appropriate: in total correctness, non-termination is abortion, whereas in partial correctness, it is not.


A predicate transformer S is conjunctive iff:

S(P ∧ Q) ≡ S(P) ∧ S(Q)

This is the case for wp(S, ·), even if statement S is non-deterministic, such as a selection statement or a specification statement.


A predicate transformer S is disjunctive iff:

S(P ∨ Q) ≡ S(P) ∨ S(Q)

This is generally not the case of wp(S, ·) when S is non-deterministic. Indeed, consider a non-deterministic statement S choosing an arbitrary boolean. This statement is given here as the following selection statement:

S = if true → x := 0 ⫿ true → x := 1 fi

Then, wp(S, R) reduces to the formula R[x ← 0] ∧ R[x ← 1].

Hence, wp(S, x = 0 ∨ x = 1) reduces to the tautology (0 = 0 ∨ 0 = 1) ∧ (1 = 0 ∨ 1 = 1).

Whereas, the formula wp(S, x = 0) ∨ wp(S, x = 1) reduces to the wrong proposition (0 = 0 ∧ 1 = 0) ∨ (0 = 1 ∧ 1 = 1).
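The counterexample can be replayed with a demonic interpretation of non-determinism (a sketch in our own representation: a statement maps a state to the set of possible successor values of x, and wp requires every successor to satisfy the postcondition):

```python
# Demonic wp over a finite domain of outcomes.
def wp(ndet_stmt, post):
    return lambda s: all(post(t) for t in ndet_stmt(s))

choose = lambda s: {0, 1}      # if true -> x := 0 ⫿ true -> x := 1 fi
is0 = lambda x: x == 0
is1 = lambda x: x == 1
either = lambda x: is0(x) or is1(x)

s0 = None  # the initial state is irrelevant for this statement
assert wp(choose, either)(s0)                            # wp(S, P ∨ Q) holds
assert not (wp(choose, is0)(s0) or wp(choose, is1)(s0))  # wp(S,P) ∨ wp(S,Q) fails
# conjunctivity, by contrast, does hold here:
assert wp(choose, lambda x: is0(x) and either(x))(s0) == \
       (wp(choose, is0)(s0) and wp(choose, either)(s0))
```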


Beyond predicate transformers[edit]

Weakest-preconditions and strongest-postconditions of imperative expressions[edit]

In predicate transformer semantics, expressions are restricted to terms of the logic (see above). However, this restriction seems too strong for most existing programming languages, where expressions may have side effects (calls to a function having a side effect), may not terminate, or may abort (like division by zero). There are many proposals to extend weakest-preconditions or strongest-postconditions to imperative expression languages, and in particular to monads.

Among them, Hoare Type Theory combines Hoare logic for a Haskell-like language, separation logic and type theory.[9] This system is currently implemented as a Coq library called Ynot.[10] In this language, evaluation of expressions corresponds to computations of strongest-postconditions.

Probabilistic Predicate Transformers[edit]

Probabilistic predicate transformers are an extension of predicate transformers for probabilistic programs. Indeed, such programs have many applications in cryptography (hiding information using randomized noise) and distributed systems (symmetry breaking).[11]

References[edit]

  1. ^ Chen, Wei and Udding, Jan Tijmen, "The Specification Statement Refined" WUCS-89-37 (1989).
  2. ^ Chen, Wei, "A wp Characterization of Jump Statements," 2021 International Symposium on Theoretical Aspects of Software Engineering (TASE), 2021, pp. 15-22. doi: 10.1109/TASE52547.2021.00019.
  3. ^ Lamport, Leslie (July 1990). "win and sin: Predicate Transformers for Concurrency". ACM Trans. Program. Lang. Syst. 12 (3): 396–428. doi:10.1145/78969.78970. S2CID 209901.
  4. ^ Back, Ralph-Johan; von Wright, Joakim (2012) [1978]. Refinement Calculus: A Systematic Introduction. Texts in Computer Science. Springer. ISBN 978-1-4612-1674-2.
  5. ^ Chen, Wei, "Exit Statements are Executable Miracles" WUCS-91-53 (1991).
  6. ^ Dijkstra, Edsger W. (1968). "A Constructive Approach to the Problem of Program Correctness". BIT Numerical Mathematics. 8 (3): 174–186. doi:10.1007/bf01933419. S2CID 62224342.
  7. ^ Wirth, N. (April 1971). "Program development by stepwise refinement" (PDF). Comm. ACM. 14 (4): 221–7. doi:10.1145/362575.362577. hdl:20.500.11850/80846. S2CID 13214445.
  8. ^ Tutorial on Hoare Logic: a Coq library, giving a simple but formal proof that Hoare logic is sound and complete with respect to an operational semantics.
  9. ^ Nanevski, Aleksandar; Morrisett, Greg; Birkedal, Lars (September 2008). "Hoare Type Theory, Polymorphism and Separation" (PDF). Journal of Functional Programming. 18 (5–6): 865–911. doi:10.1017/S0956796808006953.
  10. ^ Ynot a Coq library implementing Hoare Type Theory.
  11. ^ Morgan, Carroll; McIver, Annabelle; Seidel, Karen (May 1996). "Probabilistic Predicate Transformers" (PDF). ACM Trans. Program. Lang. Syst. 18 (3): 325–353. doi:10.1145/229542.229547. S2CID 5812195.