In functional programming, a monad is an abstraction that allows structuring programs generically. Supporting languages may use monads to abstract away boilerplate code needed by the program logic. Monads achieve this by providing their own data type (a particular type for each type of monad), which represents a specific form of computation, along with one procedure to wrap values of any basic type within the monad (yielding a monadic value) and another to compose functions that output monadic values (called monadic functions).

This allows monads to simplify a wide range of problems, like handling potential undefined values (with the Maybe monad), or keeping values within a flexible, well-formed list (using the List monad). With a monad, a programmer can turn a complicated sequence of functions into a succinct pipeline that abstracts away auxiliary data management, control flow, or side-effects.

Both the concept of a monad and the term originally come from category theory, where a monad is defined as a functor with additional structure.[a] Research beginning in the late 1980s and early 1990s established that monads could bring seemingly disparate computer-science problems under a unified, functional model. Category theory also provides a few formal requirements, known as the monad laws, which should be satisfied by any monad and can be used to verify monadic code.

Since monads make semantics explicit for a kind of computation, they can also be used to implement convenient language features. Some languages, such as Haskell, even offer pre-built definitions in their core libraries for the general monad structure and common instances.

## Overview

"For a monad M, a value of type M a represents having access to a value of type a within the context of the monad." —C. A. McCann

A monad is defined by a type constructor M along with two operations:

• return :: a -> M a (also called unit) which wraps any value of type a into a monadic value of type M a,
• join :: M (M a) -> M a (also called composition) which unwraps any wrapped monadic value of type M (M a) to a simple monadic value of type M a.

The type constructor M is moreover assumed to be a functor, i.e. to provide fmap :: (a -> b) -> M a -> M b, while return and join should satisfy natural compatibility relations specified below.

In practice, monads are generally defined through their bind operator instead, written with the infix notation >>=:

• bind :: M a -> (a -> M b) -> M b which maps the monadic type M a to the monadic type M b given a monadic function of type a -> M b.

Note that providing either bind or join to define a monad is rigorously equivalent:

• Mapping a -> M b functorially to M a -> M (M b) and joining M (M b) to M b, the binding ma >>= f is recovered as join (fmap f ma).
• Conversely, taking a = M b, the identity map id :: M b -> M b is itself a monadic function, so join mmb is recovered as mmb >>= id.
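Both constructions can be played out concretely with ordinary lists; a Python sketch (fmap maps over a list, join flattens one level of nesting; names mirror the text above):

```python
def fmap(f, ma):
    # functorial map: apply f to every element of the list
    return [f(x) for x in ma]

def join(mma):
    # flatten one level of nesting: M (M a) -> M a
    return [x for ma in mma for x in ma]

def bind(ma, f):
    # bind recovered from join and fmap: ma >>= f = join (fmap f ma)
    return join(fmap(f, ma))

def join_via_bind(mma):
    # join recovered from bind with the identity function: mmb >>= id
    return bind(mma, lambda mb: mb)

print(bind([1, 2], lambda x: [x, -x]))   # [1, -1, 2, -2]
print(join_via_bind([[1, 2], [3]]))      # [1, 2, 3]
```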

### Usage

Monads allow the programmer to compose a sequence of operations, similar to a pipeline in which bind operators chain expressions together. Such sequences naturally mirror the behaviour of imperative languages, where each line of code is an instruction that, through variable declarations and assignments, passes a possibly modified execution context on to the next.

The do notation in Haskell, for instance, allows one to write monadic code blocks as:

main :: IO ()
main = do
  name <- getLine                    -- getLine :: IO String
  putStrLn ("Hi " ++ name ++ "!")    -- putStrLn :: String -> IO ()


In the IO monad, the above is equivalent to the binding getLine >>= (putStrLn . hi), where hi name = "Hi " ++ name ++ "!". Writing do blocks borrows the readability of imperative code, while using bind directly encourages writing pipelines of pure functions.

Although sequences of operations seem to be the most powerful application of monads to computer science, in handling states, side-effects and I/O operations, the concept is far more general and there are plenty of simpler examples given for instance by the List and Maybe monads. The more subtle Reader monad, describing all function types s -> a from a common source s, is another canonical example and a structural model for many other monads, such as State and IO.

Lists provide the first example of a monad: in spite of the abstract definition, monads are embodied by a very simple datatype common to most programming languages.

With [a] = List a denoting the type of lists with elements of type a, unit and composition in List are defined by:

• unit :: a -> [a] assigns to any value x of type a the singleton list [x] containing this element,
• join :: [[a]] -> [a] concatenates a list of lists of type [[a]] by joining them into a flattened list of type [a].

Due to its great simplicity, the list monad is somehow universal: it may represent any sequence of terms awaiting composition by an associative operation.

Consider for instance the function pipe :: [b -> b] -> b -> b, composing a list of functions viewed as a pipeline. Thinking of nested lists as a metaphor for spawned subprocesses, there are two equivalent ways to form the resulting chained process:

• pipe . (map pipe) :: [[b -> b]] -> b -> b
• pipe . join :: [[b -> b]] -> b -> b

This equivalence reflects the associativity of (.): parentheses may be omitted in (f . g) . h = f . (g . h) = f . g . h, as the order in which functions are composed does not matter. Working with the List monad lets one exploit this property directly and handle only values of the monadic type List (b -> b), to which nested lists are naturally reduced by monadic composition.
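The two formations can be checked directly; a Python sketch, where pipe composes a list of functions applied left to right (an ordering the text leaves open):

```python
from functools import reduce

def pipe(fs):
    # compose a list of functions into a single function, applied left to right
    return lambda x: reduce(lambda acc, f: f(acc), fs, x)

def join(ffss):
    # flatten a nested list of functions
    return [f for fs in ffss for f in fs]

nested = [[lambda x: x + 1, lambda x: x * 2], [lambda x: x - 3]]

via_map  = pipe([pipe(fs) for fs in nested])  # pipe . (map pipe)
via_join = pipe(join(nested))                 # pipe . join

print(via_map(10), via_join(10))   # 19 19, i.e. ((10 + 1) * 2) - 3
```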

Note that arithmetic operations such as sum :: [Num] -> Num and prod :: [Num] -> Num provide simple variations of the example above. This is a consequence of the fact that (Num, (+), 0), (Num, (*), 1), and (b -> b, (.), id) are all common examples of monoids, i.e. spaces with an associative composition law that has a unit. These simple structures should be kept in mind, as monads themselves abstract the monoid concept; in the oft-quoted (if esoteric) phrase, a monad is a monoid in the category of endofunctors.

Another widespread interpretation of the List monad is as a representation of alternatives, viewing [a] as collections of possible outcomes for a value chosen in a. Although less natural, this gives a convenient point of view on the bind operator, which simply joins the lists obtained from elementwise function application:

• bind :: [a] -> (a -> [b]) -> [b]

In this picture, a process associates to any single element of a a list of possible outcomes in b, which the binding simply enumerates from a list of possible values in a. This point of view uses language more adapted to sets and misses the structural ordering of lists. It may however be helpful in understanding monadic operations.

Consider for instance the following powerset function, returning all sublists of a given list:

powerset :: [a] -> [[a]]
powerset []     = [[]]
powerset (x:xs) = do
  ys <- powerset xs       -- ys :: [a]
  [x:ys, ys]              -- either put x or don't


Its do block implementation, equivalent to powerset (x:xs) = powerset xs >>= (\ys -> [x:ys, ys]), illustrates the expressiveness of bind.
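The same recursion carries over to other languages; a Python sketch using the list-monad bind:

```python
def bind(ma, f):
    # list-monad bind: apply f elementwise and concatenate the result lists
    return [y for x in ma for y in f(x)]

def powerset(xs):
    # all sublists of xs, via the list monad
    if not xs:
        return [[]]
    x, rest = xs[0], xs[1:]
    # for each sublist ys of the tail, either put x in front or don't
    return bind(powerset(rest), lambda ys: [[x] + ys, ys])

print(powerset([1, 2]))   # [[1, 2], [2], [1], []]
```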

### An example: Maybe

To motivate programming with monads, here is a quick pseudocode example. Undefined values or operations are one particular problem that robust software should prepare for and handle gracefully.

The first step toward this goal might be to create an option type that will mark a value as either carrying a value of some type T (T can be any type) or carrying no value. The new type will be called Maybe T and values of that type can either contain a value of type T, or be the empty value Nothing. A value X of type T which is defined but used in the context of Maybe will be called Just X. This is done to avoid confusion by differentiating between cases where a variable carries a defined value and those where it does not.

data Maybe T  =  Just T or Nothing


Maybe T can be understood as a "wrapping" type, wrapping the type T into a new type with built-in exception handling, though carrying no information about the cause of an exception.

In the following code, variables prefixed with m have the type Maybe T for some type T. For example, if a variable mx contains a value, it is Just x, where the variable x has the type T. λx → ... is an anonymous function with the parameter x whose type is inferred, and ∘ is the function composition operator.

Another improvement would be if a function could manage simple checked exceptions with a Maybe type, short-circuiting and returning Nothing once a step fails, but returning the correct value without comment if a calculation succeeds.

An addition function add, which does exactly this when adding two Maybe values, mx and my, can be defined like so:

add : (Maybe Number, Maybe Number) → Maybe Number
add(mx,my) = ...
    if mx is Nothing then
        ... Nothing
    else if my is Nothing then
        ... Nothing
    else
        ... Just (x + y)
    end if


Writing functions that process Maybe values case-by-case can be tedious though, and will only become more so as more functions are defined. An operation to chain steps together is one way to alleviate this, and with an infix operator like x >>= y, it can even intuitively represent feeding the (possibly undefined) result from each step into the next. Since each result is technically inserted into another function, however, the operator will take a function for a parameter. As add already specifies the type of its output value, it should not hurt to keep the operator flexible and accept functions that output different types from their input:

>>= : (Maybe T, T → Maybe U) → Maybe U
(mx >>= f) = ...
    if mx is (Just x) then
        ... f(x)    -- f returns a defined value of type Maybe U
    else
        ... Nothing -- f returns no value
    end if


With >>= available, add can now be redefined as something much more compact:

 add(mx,my)  =  mx >>= λx -> (my >>= λy -> Just (x + y))


This is more concise, but a little extra analysis reveals something even more powerful. For one, the only role that Just plays in add is to tag an underlying value as also being a Maybe value. To emphasize how Just acts on the underlying value by wrapping it, it can be redefined as a function too, called eta for now:

 eta : T → Maybe T
eta(x)  =  Just x


The big picture is that these two functions >>= and eta were designed to simplify add, but they clearly do not depend on the specifics of add in any way, just the Maybe type. These functions can, in fact, apply to any values and functions of the Maybe type, regardless of the underlying values' types. For example, here is a concise NOT operator from (Kleene's) trinary logic that uses the same functions to automate undefined values too:

trinot : Maybe Boolean → Maybe Boolean
trinot(mp)  =  mp >>= λp -> (eta ∘ not) p


It turns out the Maybe type, together with >>= and eta, forms a monad. While other monads will embody different logical processes, and some may have extra properties, all of them will have three similar components (directly or indirectly) that follow the basic outline of this example.
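The complete example fits in a few lines of Python; the encoding below (None for Nothing, a one-element tuple for Just x) is an arbitrary choice for illustration:

```python
Nothing = None

def eta(x):
    # wrap a plain value: Just x
    return (x,)

def bind(mx, f):
    # short-circuit on Nothing; otherwise unwrap and apply f
    return Nothing if mx is Nothing else f(mx[0])

def add(mx, my):
    return bind(mx, lambda x: bind(my, lambda y: eta(x + y)))

def trinot(mp):
    # Kleene's trinary NOT: eta after not, fed through bind
    return bind(mp, lambda p: eta(not p))

print(add(eta(1), eta(2)))      # (3,) i.e. Just 3
print(add(eta(1), Nothing))     # None i.e. Nothing
print(trinot(eta(True)))        # (False,) i.e. Just False
```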

### Definition

The more common definition for a monad in functional programming, used in the above example, is actually based on a Kleisli triple rather than category theory's standard definition. The two constructs turn out to be mathematically equivalent, however, so either definition will yield a valid monad. Given any well-defined, basic types T, U, a monad consists of three parts:

• A type constructor M that builds up a monadic type M T[b]
• A type converter, often called unit or return, that embeds an object x in the monad:
unit(x) : T → M T[c]
• A combinator, typically called bind (as in binding a variable) and represented with an infix operator >>=, that unwraps a monadic variable, then inserts it into a monadic function/expression, resulting in a new monadic value:
(mx >>= f) : (M T, T → M U) → M U[d]

To fully qualify as a monad though, these three parts must also respect a few laws:

• unit is a left-identity for bind:
unit(a) >>= λx → f(x) ↔ f(a)
• unit is also a right-identity for bind:
ma >>= λx → unit(x) ↔ ma
• bind is essentially associative:[e]
ma >>= λx → (f(x) >>= λy → g(y)) ↔ (ma >>= λx → f(x)) >>= λy → g(y)

Algebraically, this means any monad both gives rise to a category (called the Kleisli category) and a monoid in the category of functors (from values to computations), with monadic composition as a binary operator and unit as identity.[citation needed]

### Usage

The value of the monad pattern goes beyond merely condensing code and providing a link to mathematical reasoning. Whatever language or default programming paradigm a developer uses, following the monad pattern brings many of the benefits of purely functional programming. By reifying a specific kind of computation, a monad not only encapsulates the tedious details of that computational pattern, but it does so in a declarative way, improving the code's clarity. As monadic values explicitly represent not only computed values, but computed effects, a monadic expression can be substituted with its value in referentially transparent positions, much like pure expressions can be, allowing for many techniques and optimizations based on rewriting.

Typically, programmers will use bind to chain monadic functions into a sequence, which has led some to describe monads as "programmable semicolons", a reference to how many imperative languages use semicolons to separate statements. However, it should be stressed that monads do not actually order computations; even in languages that use them as central features, simpler function composition can arrange steps within a program. A monad's general utility rather lies in simplifying a program's structure and improving separation of concerns through abstraction.

The monad structure can also be seen as a uniquely mathematical, compile-time variation on the decorator pattern. Some monads can pass along extra data that is inaccessible to functions, and some even exert finer control over execution, for example only calling a function under certain conditions. Because they let application programmers implement domain logic while offloading boilerplate code onto pre-developed modules, monads can even be considered a tool for aspect-oriented programming.

One other noteworthy use for monads is isolating side-effects, like input/output or mutable state, in otherwise purely functional code. Even purely functional languages can still implement these "impure" computations without monads, via an intricate mix of function composition and continuation-passing style (CPS) in particular. With monads though, much of this scaffolding can be abstracted away, essentially by taking each recurring pattern in CPS code and bundling it into a distinct monad.

If a language does not support monads by default, it is still possible to implement the pattern, often without much difficulty. When translated from category-theory to programming terms, the monad structure is a generic concept and can be defined directly in any language that supports an equivalent feature for bounded polymorphism. A concept's ability to remain agnostic about operational details while working on underlying types is powerful, but the unique features and stringent behavior of monads set them apart from other concepts.

## Applications

Discussions of specific monads will typically focus on solving a narrow implementation problem since a given monad represents a specific computational form. In some situations though, an application can even meet its high-level goals by using appropriate monads within its core logic.

A few applications that have monads at the heart of their designs include parser combinator libraries such as Parsec and property-based testing frameworks such as QuickCheck.

## History

The term "monad" in programming actually goes all the way back to the APL and J programming languages, which do tend toward being purely functional. However, in those languages, "monad" is only shorthand for a function taking one parameter (a function with two parameters being a "dyad", and so on).

The mathematician Roger Godement was the first to formulate the concept of a monad (dubbing it a "standard construction") in the late 1950s, though the term "monad" that came to dominate is due to category-theorist Saunders Mac Lane.[citation needed] The form defined above using bind, however, was originally described in 1965 by mathematician Heinrich Kleisli in order to prove that any monad could be characterized as an adjunction between two (covariant) functors.

Starting in the 1980s, a vague notion of the monad pattern began to surface in the computer science community. According to programming language researcher Philip Wadler, computer scientist John C. Reynolds anticipated several facets of it in the 1970s and early 1980s, when he discussed the value of continuation-passing style, category theory as a rich source for formal semantics, and the type distinction between values and computations. The research language Opal, which was actively designed up until 1990, also effectively based I/O on a monadic type, but the connection was not realized at the time.

The computer scientist Eugenio Moggi was the first to explicitly link the monad of category theory to functional programming, in a conference paper in 1989, followed by a more refined journal submission in 1991. In earlier work, several computer scientists had advanced using category theory to provide semantics for the lambda calculus. Moggi's key insight was that a real-world program is not just a function from values to other values, but rather a transformation that forms computations on those values. When formalized in category-theoretic terms, this leads to the conclusion that monads are the structure to represent these computations.

Several others popularized and built on this idea, including Philip Wadler and Simon Peyton Jones, both of whom were involved in the specification of Haskell. In particular, Haskell used a problematic "lazy stream" model up through v1.2 to reconcile I/O with lazy evaluation, until switching over to a more flexible monadic interface. The Haskell community would go on to apply monads to many problems in functional programming, and researchers working with Haskell eventually generalized the monad pattern into a wider hierarchy of structures, including applicative functors and arrows.[citation needed]

At first, programming with monads was largely confined to Haskell and its derivatives, but as functional programming has influenced other paradigms, many languages have incorporated a monad pattern (in spirit if not in name). Formulations now exist in Scheme, Perl, Python, Racket, Clojure, Scala, F#, and have also been considered for a new ML standard.[citation needed]

## Analysis

One benefit of the monad pattern is bringing mathematical precision to bear on program logic. Not only can the monad laws be used to check an instance's validity, but features from related structures (like functors) can be used through subtyping.

Returning to the Maybe example, its components were declared to make up a monad, but no proof was given that it satisfies the monad laws.

This can be rectified by plugging the specifics of Maybe into one side of the general laws, then algebraically building a chain of equalities to reach the other side:

Law 1:  eta(a) >>= f(x)  ⇔  (Just a) >>= f(x)  ⇔  f(a)

Law 2:  ma >>= eta(x)  ⇔  ma

        if ma is (Just a) then
            eta(a)       ⇔  Just a
        else
            Nothing      ⇔  Nothing
        end if

Law 3:  (ma >>= f(x)) >>= g(y)  ⇔  ma >>= (f(x) >>= g(y))

        if (ma >>= f(x)) is (Just b) then        if ma is (Just a) then
            g(ma >>= f(x))                           (f(x) >>= g(y)) a
        else                                     else
            Nothing                                  Nothing
        end if                                   end if

        ⇔  if ma is (Just a) and f(a) is (Just b) then
               (g ∘ f) a
           else if ma is (Just a) and f(a) is Nothing then
               Nothing
           else
               Nothing
           end if
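The same equalities can be spot-checked mechanically. A Python sketch, encoding Nothing as None and Just x as a one-element tuple (an arbitrary choice), with f and g as arbitrary monadic test functions:

```python
Nothing = None
eta  = lambda x: (x,)                                      # Just x
bind = lambda mx, f: Nothing if mx is Nothing else f(mx[0])

f = lambda x: eta(x + 1)                    # an arbitrary monadic function
g = lambda x: Nothing if x > 5 else eta(x * 2)

for ma in (Nothing, eta(3), eta(9)):
    assert bind(eta(4), f) == f(4)                                    # law 1
    assert bind(ma, eta) == ma                                        # law 2
    assert bind(bind(ma, f), g) == bind(ma, lambda x: bind(f(x), g))  # law 3
print("all three laws hold on the samples")
```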


### Derivation from functors

Though rarer in computer science, one can use category theory directly, which defines a monad as a functor with two additional natural transformations. So to begin, a structure requires a higher-order function (or "functional") named map to qualify as a functor:

map φ : (a → b) → (ma → mb)

This is not always a major issue, however, especially when a monad is derived from a pre-existing functor, whereupon the monad inherits map automatically. (For historical reasons, this map is instead called fmap in Haskell.)

A monad's first transformation is actually the same unit from the Kleisli triple, but following the hierarchy of structures closely, it turns out unit characterizes an applicative functor, an intermediate structure between a monad and a basic functor. In the applicative context, unit is sometimes referred to as pure but is still the same function. What does differ in this construction is the law unit must satisfy; as bind is not defined, the constraint is given in terms of map instead:

(unit ∘ φ) x ↔ ((map φ) ∘ unit) x

The final leap from applicative functor to monad comes with the second transformation, the join function (in category theory this is a natural transformation usually called μ), which "flattens" nested applications of the monad:

join(mma) : M (M T) → M T

As the characteristic function, join must also satisfy three variations on the monad laws:[citation needed]

(join ∘ (map join)) mmma  ↔  (join ∘ join) mmma    ↔  ma
(join ∘ (map unit)) ma    ↔  (join ∘ unit) ma      ↔  ma
(join ∘ (map map φ)) mma  ↔  ((map φ) ∘ join) mma  ↔  mb

Regardless of whether a developer defines a direct monad or a Kleisli triple, the underlying structure will be the same, and the forms can be derived from each other easily:

(map φ) ma ↔ ma >>= (unit ∘ φ)
join(mma) ↔ mma >>= id
ma >>= f ↔ (join ∘ (map f)) ma
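These conversion formulas can be verified for the list monad; a Python sketch with fmap playing the role of map:

```python
fmap = lambda f, ma: [f(x) for x in ma]            # (map φ) ma
join = lambda mma: [x for ma in mma for x in ma]
unit = lambda x: [x]
bind = lambda ma, f: join(fmap(f, ma))

phi = lambda x: x * 10
f   = lambda x: [x, x + 1]

assert fmap(phi, [1, 2]) == bind([1, 2], lambda x: unit(phi(x)))  # map from bind
assert join([[1], [2, 3]]) == bind([[1], [2, 3]], lambda m: m)    # join as >>= id
assert bind([1, 2], f) == join(fmap(f, [1, 2]))                   # bind from join
```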

### Another example: List

The complex multivalued square and cube root functions may be composed so they produce the sixth root function. The structure that governs the type of input and output and the structure that composes the different actions are, together, the List monad. The bullet symbol • denotes the bind operator, z is a complex number, square brackets denote an array, and := means "is defined as":

(f • g)(z)  :=  append(map(f, g(z)))

lift(f) = f°  :=  unit • f = f • unit

sqrt°(z)  ==  append(map(unit, sqrt(z)))  ==  append(map(sqrt, unit(z)))

sxrt(z)  =  (cbrt° • sqrt°)(z)  ==  append(map(cbrt°, sqrt°(z)))

The List monad naturally demonstrates how deriving a monad from a simpler functor can come in handy. In many languages, a list structure comes pre-defined along with some basic features, so a List type constructor and append operator (represented with ++ for infix notation) are assumed as already given here.

Embedding a plain value in a list is also trivial in most languages:

unit(x)  =  [x]


From here, applying a function iteratively with a list comprehension may seem like an easy choice for bind and converting lists to a full monad. The difficulty with this approach is that bind expects monadic functions, which in this case will output lists themselves; as more functions are applied, layers of nested lists will accumulate, requiring more than a basic comprehension.

However, a procedure to apply any simple function over the whole list, in other words map, is straightforward:

(map φ) xlist  =  [ φ(x1), φ(x2), ..., φ(xn) ]


Now, these two procedures already promote List to an applicative functor. To fully qualify as a monad, only a correct notion of join to flatten repeated structure is needed, but for lists, that just means unwrapping an outer list to append the inner ones that contain values:

join(xlistlist)  =  join([xlist1, xlist2, ..., xlistn])
                 =  xlist1 ++ xlist2 ++ ... ++ xlistn


The resulting monad is not only a list, but one that automatically resizes and condenses itself as functions are applied. bind can now also be derived with just a formula, then used to feed List values through a pipeline of monadic functions:

(xlist >>= f)  =  (join ∘ (map f)) xlist


One application for this monadic list is representing nondeterministic computation. List can hold results for all execution paths in an algorithm, then condense itself at each step to "forget" which paths led to which results (a sometimes important distinction from deterministic, exhaustive algorithms). Another benefit is that checks can be embedded in the monad; specific paths can be pruned transparently at their first point of failure, with no need to rewrite functions in the pipeline.

A second situation where List shines is composing multivalued functions. For instance, the nth complex root of a number should yield n distinct complex numbers, but if another mth root is then taken of those results, the final m•n values should be identical to the output of the m•nth root. List completely automates this issue away, condensing the results from each step into a flat, mathematically correct list.
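The sixth-root example can be played out numerically. A Python sketch, where nth_roots is a hypothetical helper returning all n complex nth roots; round-off aside, chaining the square and cube roots through bind yields exactly the six sixth roots:

```python
import cmath

def nth_roots(n):
    # multivalued nth root: all n complex roots of z
    def roots(z):
        r, theta = cmath.polar(z)
        return [cmath.rect(r ** (1.0 / n), (theta + 2 * cmath.pi * k) / n)
                for k in range(n)]
    return roots

def bind(ma, f):
    # list-monad bind: append(map(f, ma)) in the text's notation
    return [y for x in ma for y in f(x)]

sqrt, cbrt = nth_roots(2), nth_roots(3)
sxrt = lambda z: bind(sqrt(z), cbrt)    # (cbrt° • sqrt°)(z)

vals = sxrt(64)
print(len(vals))                                   # 6
print(all(abs(v ** 6 - 64) < 1e-6 for v in vals))  # True
```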

## Techniques

Monads present opportunities for interesting techniques beyond just organizing program logic. Monads can lay the groundwork for useful syntactic features while their high-level and mathematical nature enable significant abstraction.

### Syntactic sugar: do-notation

Although using bind openly often makes sense, many programmers prefer a syntax that mimics imperative statements (called do-notation in Haskell, perform-notation in OCaml, computation expressions in F#, and for-comprehensions in Scala). This is only syntactic sugar that disguises a monadic pipeline as a code block; the compiler will then quietly translate these expressions into underlying functional code.

Translating the add function from the Maybe example into Haskell can show this feature in action. A non-monadic version of add in Haskell looks like this:

add mx my =
  case mx of
    Nothing -> Nothing
    Just x  -> case my of
                 Nothing -> Nothing
                 Just y  -> Just (x + y)


In monadic Haskell, return is the standard name for unit, plus lambda expressions must be handled explicitly, but even with these technicalities, the Maybe monad makes for a cleaner definition:

add mx my =
  mx >>= (\x ->
    my >>= (\y ->
      return (x + y)))


With do-notation though, this can be distilled even further into a very intuitive sequence:

add mx my = do
  x <- mx
  y <- my
  return (x + y)


A second example shows how Maybe can be used in an entirely different language: F#. With computation expressions, a "safe division" function that returns None for an undefined operand or division by zero can be written as:

let readNum () =
  let s = Console.ReadLine()
  let succ,v = Int32.TryParse(s)
  if (succ) then Some(v) else None

let secure_div =
  maybe {
    let! x = readNum()
    let! y = readNum()
    if (y = 0)
    then None
    else return (x / y)
  }


At build-time, the compiler will internally "de-sugar" this function into a denser chain of bind calls:

maybe.Delay(fun () ->
  maybe.Bind(readNum(), fun x ->
    maybe.Bind(readNum(), fun y ->
      if (y=0) then None else maybe.Return(x / y))))


For a last example, even the general monad laws themselves can be expressed in do-notation:

do { x <- return v; f x }            ==  do { f v }
do { x <- m; return x }              ==  do { m }
do { y <- do { x <- m; f x }; g y }  ==  do { x <- m; y <- f x; g y }


While convenient, a developer should always remember that this block style is purely syntactic and can be replaced with outwardly monadic (or even non-monadic CPS) expressions. Using bind to express the monadic pipeline can still be clearer in many cases, and some functional programming advocates even argue that since block-style allows novices to carry over habits from imperative programming, it should be avoided by default and only used when obviously superior.

### General interface

Every monad needs a specific implementation that meets the monad laws, but other aspects like the relation to other structures or standard idioms within a language are shared by all monads. As a result, a language or library may provide a general Monad interface with function prototypes, subtyping relationships, and other general facts. Besides providing a head-start to development and guaranteeing a new monad inherits features from a supertype (such as functors), checking a monad's design against the interface adds another layer of quality control.[citation needed]

### Operators

Monadic code can often be simplified even further through the judicious use of operators. The map functional can be especially helpful since it works on more than just ad-hoc monadic functions; so long as a monadic function should work analogously to a predefined operator, map can be used to instantly "lift" the simpler operator into a monadic one.[f] With this technique, the definition of add from the Maybe example could be distilled into:

add(mx,my)  =  map (+)


The process could be taken even one step further by defining add not just for Maybe, but for the whole Monad interface. By doing this, any new monad that matches the structure interface and implements its own map will immediately inherit a lifted version of add too. The only change to the function needed is generalizing the type signature:

add : (Monad Number, Monad Number)  →  Monad Number


Another monadic operator that is also useful for analysis is monadic composition (represented as infix >=> here), which allows chaining monadic functions in a more mathematical style:

(f >=> g) x  =  (f(x) → mb) >>= g(y = b)


With this operator, the monad laws can be written in terms of functions alone, highlighting the correspondence to associativity and existence of an identity:

(unit >=> g)     ↔  g
(f >=> unit)     ↔  f
(f >=> g) >=> h  ↔  f >=> (g >=> h)
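Monadic composition is a one-liner on top of bind; a Python sketch for the list monad, with all three laws checked on sample inputs:

```python
bind = lambda ma, f: [y for x in ma for y in f(x)]   # list-monad bind
unit = lambda x: [x]

def kleisli(f, g):
    # monadic composition: (f >=> g) x = f(x) >>= g
    return lambda x: bind(f(x), g)

f = lambda x: [x, x + 1]
g = lambda x: [x * 2]
h = lambda x: [x, -x]

for x in range(3):
    assert kleisli(unit, g)(x) == g(x)        # left identity
    assert kleisli(f, unit)(x) == f(x)        # right identity
    assert kleisli(kleisli(f, g), h)(x) == kleisli(f, kleisli(g, h))(x)  # associativity
```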


## Variations

At a mathematical level, some monads have particularly nice properties and are uniquely fitted to certain problems.

An additive monad is a monad endowed with an additional closed, associative, binary operator mplus and an identity element under mplus, called mzero. The Maybe monad can be considered additive, with Nothing as mzero and a variation on the OR operator as mplus. List is also an additive monad, with the empty list [] acting as mzero and the concatenation operator ++ as mplus.

Intuitively, mzero represents a monadic wrapper with no value from an underlying type, but is also considered a "zero" (rather than a "one") since it acts as an absorber for bind, returning mzero whenever bound to a monadic function. This property is two-sided, and bind will also return mzero when any value is bound to a monadic zero function.
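Both absorbing directions are easy to observe with the List monad, sketched here in Python ([] as mzero, concatenation as mplus):

```python
bind  = lambda ma, f: [y for x in ma for y in f(x)]
mzero = []
mplus = lambda ma, mb: ma + mb      # list concatenation, ++ in Haskell

grow   = lambda x: [x, x + 1]       # an ordinary monadic function
zerofn = lambda x: mzero            # a monadic zero function

assert bind(mzero, grow) == mzero     # mzero absorbs on the left of bind
assert bind([1, 2], zerofn) == mzero  # any value bound to a zero function
assert mplus([1], mzero) == [1]       # mzero is the identity for mplus
assert mplus(mzero, [2]) == [2]
```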

Sometimes, the general outline of a monad may be useful, but no simple pattern recommends one monad or another. This is where a free monad comes in; as a free object in the category of monads, it can represent monadic structure without any specific constraints beyond the monad laws themselves. Just as a free monoid concatenates elements without evaluation, a free monad allows chaining computations with markers to satisfy the type system, but otherwise imposes no deeper semantics itself.

For example, by working entirely through the Just and Nothing markers, the Maybe monad is in fact a free monad. The List monad, on the other hand, is not a free monad since it brings extra, specific facts about lists (like append) into its definition. One last example is an abstract free monad, using type markers for unit and bind and a recursive, linear type constructor:

newtype Free F(T)  =  Unit T or Bind (F, Free F(T))

unit(x)   =  Unit x
mx >>= f  =  ...
    if mx is Unit x then
        ... f(x)
    else
        ... Bind (f, mx)
    end if


Free monads, however, are not restricted to a linked-list like in this example, and can be built around other structures like trees.

Using free monads intentionally may seem impractical at first, but their formal nature is particularly well-suited for syntactic problems. A free monad can be used to track syntax and type while leaving semantics for later, and has found use in parsers and interpreters as a result. Others have applied them to more dynamic, operational problems too, such as providing iteratees within a language.

### Comonads

Besides generating monads with extra properties, for any given monad, one can also define a comonad. Conceptually, if monads represent computations built up from underlying values, then comonads can be seen as reductions back down to values. Monadic code, in a sense, cannot be fully "unpacked"; once a value is wrapped within a monad, it remains quarantined there along with any side-effects (a good thing in purely functional programming). Sometimes though, a problem is more about consuming contextual data, which comonads can model explicitly.

Technically, a comonad is the categorical dual of a monad, which loosely means that it will have the same required components, only with the direction of the type signatures reversed. Starting from the bind-centric monad definition, a comonad consists of:

• A type constructor W that marks the higher-order type W T
• The dual of unit, called counit here, extracts the underlying value from the comonad:
counit(wa) : W T → T

• A reversal of bind (also represented with =>>) that extends a chain of reducing functions:
(wa =>> f) : (W U, W U → T) → W T[h]


extend and counit must also satisfy duals of the monad laws:

counit ∘ ( (wa =>> f) → wb )  ↔  f(wa) → b
wa =>> counit  ↔  wa
wa ( (=>> f(wx = wa)) → wb (=>> g(wy = wb)) → wc )  ↔  ( wa (=>> f(wx = wa)) → wb ) (=>> g(wy = wb)) → wc


Analogous to monads, comonads can also be derived from functors using a dual of join:

duplicate(wa) : W T → W (W T)


While operations like extend are reversed, however, a comonad does not reverse functions it acts on, and consequently, comonads are still functors with map, not cofunctors. The alternate definition with duplicate, counit, and map must also respect its own comonad laws:

((map duplicate) ∘ duplicate) wa  ↔  (duplicate ∘ duplicate) wa  ↔  wwwa
((map counit) ∘ duplicate)    wa  ↔  (counit ∘ duplicate)    wa  ↔  wa
((map map φ) ∘ duplicate)     wa  ↔  (duplicate ∘ (map φ))   wa  ↔  wwb


And as with monads, the two forms can be converted automatically:

(map φ) wa    ↔  wa =>> (φ ∘ counit) wx
duplicate wa  ↔  wa =>> wx

wa =>> f(wx)  ↔  ((map f) ∘ duplicate) wa


A simple example is the Product comonad, which outputs values based on an input value and shared environment data. In fact, the Product comonad is just the dual of the Writer monad and effectively the same as the Reader monad (both discussed below). Product and Reader differ only in which function signatures they accept, and how they complement those functions by wrapping or unwrapping values.
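One common presentation of the Product comonad pairs each value with the environment, so a comonadic value is a tuple (environment, value). A sketch in Python (names illustrative; counit extracts the value and extend threads the environment through a reducing function):

```python
# Sketch of the Product comonad: a value paired with shared environment data.
def counit(wa):
    # Extract the underlying value, discarding the environment.
    env, value = wa
    return value

def extend(wa, f):
    # f consumes the whole (env, value) pair and produces a new value;
    # the environment is carried along unchanged.
    env, _ = wa
    return (env, f(wa))

wa = ({"scale": 10}, 4)
wb = extend(wa, lambda w: w[1] * w[0]["scale"])  # scale the value by the env
```

Note how `extend` lets the reducing function read the shared environment at every step, while `counit(wb)` recovers a plain value at the end.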

A less trivial example is the Stream comonad, which can be used to represent data streams and attach filters to the incoming signals with extend. In fact, while not as popular as monads, researchers have found comonads particularly useful for stream processing and modeling dataflow programming.

Due to their strict definitions, however, one cannot simply move objects back and forth between monads and comonads. As an even higher abstraction, arrows can subsume both structures, but finding more granular ways to combine monadic and comonadic code is an active area of research.

## More examples

The simplest monad is the Identity monad, which just annotates plain values and functions to satisfy the monad laws:

newtype Id T  =  T

unit(x)    =  x
(x >>= f)  =  f(x)


Identity does actually have valid uses though, such as providing a base case for recursive monad transformers. It can also be used to perform basic variable assignment within an imperative-style block.[i][citation needed]

### Collections

Any collection with a proper append is already a free monoid, but it turns out that List is not the only collection that also has a well-defined join and qualifies as a monad. One can even mutate List into these other monadic collections by simply imposing special properties on append:[j][citation needed]

| Collection | Monoid properties |
| --- | --- |
| List | Free |
| Finite multiset | Commutative |
| Finite set | Commutative and idempotent |
| Finite permutations | Non-commutative and idempotent |
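A sketch of how bind differs across two of these collections (in Python, using comprehensions; a "concat-map" in both cases): the list version preserves order and duplicates, reflecting the free monoid, while the set version collapses duplicates, reflecting an idempotent, commutative append:

```python
# Illustrative "concat-map" style bind for two collection monads.
def bind_list(xs, f):
    # List monad: order and duplicates are preserved (free monoid).
    return [y for x in xs for y in f(x)]

def bind_set(xs, f):
    # Finite-set monad: duplicates collapse (commutative and idempotent).
    return {y for x in xs for y in f(x)}

doubled = bind_list([1, 2], lambda x: [x, x])       # [1, 1, 2, 2]
neighbors = bind_set({1, 2}, lambda x: {x, x + 1})  # {1, 2, 3}
```

The two definitions are textually almost identical; only the choice of collection, and hence the monoid properties of its append, differs.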

### IO monad

As already mentioned, pure code should not have unmanaged side effects, but that does not preclude a program from explicitly describing and managing effects. This idea is central to Haskell's IO monad, where an object of type IO a can be seen as containing the current state of the world outside the program, and computing a value of type a. A computation that computes no value – i.e., a procedure – has the type IO (), "computing" the dummy value (). When a programmer binds an IO value to a function, the function makes decisions based on that view of the world (input from users, files, etc.), then yields a monadic value reflecting the new world-state (program output).

For example, Haskell has several functions for acting on the wider file system, including one that checks whether a file exists and another that deletes a file. Their two type signatures are:

doesFileExist :: FilePath -> IO Bool
removeFile :: FilePath -> IO ()


The first is interested in whether a given file really exists, and as a result, outputs a Boolean value within the IO monad. The second function, on the other hand, is only concerned with acting on the file system so the IO container it outputs is empty.

IO is not limited just to file I/O though; it even allows for user I/O, and along with imperative syntax sugar, can mimic a typical "Hello, World!" program:

main :: IO ()
main = do
  putStrLn "Hello, world!"
  putStrLn "What is your name, user?"
  name <- getLine
  putStrLn ("Nice to meet you, " ++ name ++ "!")


Desugared, this translates into the following monadic pipeline (>> in Haskell is just a variant of bind for when only monadic effects matter and the underlying result can be discarded):

main :: IO ()
main =
  putStrLn "Hello, world!" >>
  putStrLn "What is your name, user?" >>
  getLine >>= (\name ->
    putStrLn ("Nice to meet you, " ++ name ++ "!"))


### Writer monad

Another common situation is keeping a log file or otherwise reporting a program's progress. Sometimes, a programmer may want to log even more specific, technical data for later profiling or debugging. The Writer monad can handle these tasks by generating auxiliary output that accumulates step-by-step.

To show how the monad pattern is not restricted to primarily functional languages, this example implements a Writer monad in JavaScript. First, an array (with nested tails) allows constructing the Writer type as a linked list. The underlying output value will live in position 0 of the array, and position 1 will implicitly hold a chain of auxiliary notes:

const writer = [value, []];


Defining unit is also very simple:

const unit = value => [value, []];


Only unit is needed to define simple functions that output Writer objects with debugging notes:

const squared = x => [x * x, [`${x} was squared.`]];
const halved = x => [x / 2, [`${x} was halved.`]];


A true monad still requires bind, but for Writer, this amounts simply to appending a function's output to the monad's linked list:

const bind = (writer, transform) => {
  const [value, log] = writer;
  const [result, updates] = transform(value);
  return [result, log.concat(updates)];
};


The sample functions can now be chained together using bind, but defining a version of monadic composition (called pipelog here) allows applying these functions even more succinctly:

const pipelog = (writer, ...transforms) =>
transforms.reduce(bind, writer);


The final result is a clean separation of concerns between stepping through computations and logging them to audit later:

pipelog(unit(4), squared, halved);
// Resulting writer object = [8, ['4 was squared.', '16 was halved.']]


### Environment monad

An environment monad (also called a reader monad and a function monad) allows a computation to depend on values from a shared environment. The monad type constructor maps a type T to functions of type E → T, where E is the type of the shared environment. The monad functions are:

${\begin{array}{ll}{\text{return}}\colon &T\rightarrow E\rightarrow T=t\mapsto e\mapsto t\\{\text{bind}}\colon &(E\rightarrow T)\rightarrow (T\rightarrow E\rightarrow T')\rightarrow E\rightarrow T'=r\mapsto f\mapsto e\mapsto f\,(r\,e)\,e\end{array}}$

The following monadic operations are useful:

${\begin{array}{ll}{\text{ask}}\colon &E\rightarrow E={\text{id}}_{E}\\{\text{local}}\colon &(E\rightarrow E)\rightarrow (E\rightarrow T)\rightarrow E\rightarrow T=f\mapsto c\mapsto e\mapsto c\,(f\,e)\end{array}}$

The ask operation is used to retrieve the current context, while local executes a computation in a modified subcontext. As in a state monad, computations in the environment monad may be invoked by simply providing an environment value and applying it to an instance of the monad.

Formally, a value in an environment monad is equivalent to a function with an additional, anonymous argument; return and bind are equivalent to the K and S combinators, respectively, in the SKI combinator calculus.
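The definitions above can be sketched in Python (names illustrative; `unit` stands in for the reserved word `return`), where a monadic value is literally a function from the environment to a result:

```python
# Sketch of the environment (reader) monad: a monadic value is a function
# from a shared environment to a result.
def unit(t):
    return lambda env: t              # ignore the environment entirely

def bind(r, f):
    # Run r under env, then run the computation f produces under the SAME env.
    return lambda env: f(r(env))(env)

# ask is itself a monadic value: it returns the whole environment.
ask = lambda env: env

def local(modify, comp):
    # Run comp in a modified subcontext, leaving the outer env untouched.
    return lambda env: comp(modify(env))

greet = bind(ask, lambda e: unit("Hello, " + e["name"]))
```

Invoking the computation is just function application: `greet({"name": "Ada"})` reads the name out of the supplied environment, and wrapping it with `local` reruns it under a modified one.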

### State monad

A state monad allows a programmer to attach state information of any type to a calculation. Given any value type, the corresponding type in the state monad is a function which accepts a state, then outputs a new state (of type s) along with a return value (of type t). This is similar to an environment monad, except that it also returns a new state, and thus allows modeling a mutable environment.

type State s t = s -> (t, s)


Note that this monad takes a type parameter, the type of the state information. The monad operations are defined as follows:

-- "return" produces the given value without changing the state.
return x = \s -> (x, s)
-- "bind" modifies m so that it applies f to its result.
m >>= f = \r -> let (x, s) = m r in (f x) s


Useful state operations include:

get = \s -> (s, s) -- Examine the state at this point in the computation.
put s = \_ -> ((), s) -- Replace the state.
modify f = \s -> ((), f s) -- Update the state.


Another operation applies a state monad to a given initial state:

runState :: State s a -> s -> (a, s)
runState t s = t s


do-blocks in a state monad are sequences of operations that can examine and update the state data.

Informally, a state monad of state type S maps the type of return values T into functions of type $S\rightarrow T\times S$ , where S is the underlying state. The return and bind functions are:

${\begin{array}{ll}{\text{return}}\colon &T\rightarrow S\rightarrow T\times S=t\mapsto s\mapsto (t,s)\\{\text{bind}}\colon &(S\rightarrow T\times S)\rightarrow (T\rightarrow S\rightarrow T'\times S)\rightarrow S\rightarrow T'\times S\ =m\mapsto k\mapsto s\mapsto (k\ t\ s')\quad {\text{where}}\;(t,s')=m\,s\end{array}}$ .

From the category theory point of view, a state monad is derived from the adjunction between the product functor and the exponential functor, which exists in any cartesian closed category by definition.
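The state operations above can be sketched in Python (names illustrative; `unit` stands in for `return` and `run_state` mirrors `runState`), with a stateful computation represented as a function from a state to a (value, new state) pair:

```python
# Sketch of the state monad: a computation is a function s -> (value, state).
def unit(x):
    return lambda s: (x, s)           # produce x, leave the state unchanged

def bind(m, f):
    def stateful(s):
        x, s1 = m(s)                  # run m, threading the state through
        return f(x)(s1)               # feed its result and new state to f
    return stateful

def get(s):
    return (s, s)                     # examine the state

def put(new):
    return lambda s: ((), new)        # replace the state

def run_state(m, s):
    return m(s)                       # apply the monad to an initial state

# Increment a counter and report its old value.
step = bind(get, lambda n: bind(put(n + 1), lambda _: unit(n)))
```

Running `run_state(step, 41)` yields the old value 41 together with the updated state 42, showing how state is threaded invisibly through the binds.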

### Continuation monad

A continuation monad with return type R maps type T into functions of type $\left(T\rightarrow R\right)\rightarrow R$ . It is used to model continuation-passing style. The return and bind functions are as follows:

${\begin{array}{ll}{\text{return}}\colon &T\rightarrow \left(T\rightarrow R\right)\rightarrow R=t\mapsto f\mapsto f\,t\\{\text{bind}}\colon &\left(\left(T\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T\rightarrow \left(T'\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T'\rightarrow R\right)\rightarrow R=c\mapsto f\mapsto k\mapsto c\,\left(t\mapsto f\,t\,k\right)\end{array}}$ The call-with-current-continuation function is defined as follows:

${\text{call/cc}}\colon \ \left(\left(T\rightarrow \left(T'\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T\rightarrow R\right)\rightarrow R=f\mapsto k\mapsto \left(f\left(t\mapsto x\mapsto k\,t\right)\,k\right)$

### Program logging

The following code is pseudocode. Suppose we have two functions foo and bar, with types

foo : int -> int
bar : int -> int


That is, both functions take in an integer and return another integer. Then we can apply the functions in succession like so:

foo (bar x)


Where the result is the result of foo applied to the result of bar applied to x.

But suppose we are debugging our program, and we would like to add logging messages to foo and bar. So we change the types as so:

foo : int -> int * string
bar : int -> int * string


So that both functions return a tuple, with the result of the application as the integer, and a logging message with information about the applied function and all the previously applied functions as the string.

Unfortunately, this means we can no longer compose foo and bar, as their input type int is not compatible with their output type int * string. And although we could regain composability by modifying the types of each function to be int * string -> int * string, this would require us to add boilerplate code to each function to extract the integer from the tuple, which would get tedious as the number of such functions increases.

Instead, let us define a helper function to abstract away this boilerplate for us:

bind : int * string -> (int -> int * string) -> int * string


bind takes in an integer and string tuple, then takes in a function (like foo) that maps from an integer to an integer and string tuple. Its output is an integer and string tuple, which is the result of applying the input function to the integer within the input integer and string tuple. In this way, we only need to write boilerplate code to extract the integer from the tuple once, in bind.

Now we have regained some composability. For example:

bind (bind (x,s) bar) foo


Where (x,s) is an integer and string tuple.

To make the benefits even clearer, let us define an infix operator as an alias for bind:

(>>=) : int * string -> (int -> int * string) -> int * string


So that t >>= f is the same as bind t f.

Then the above example becomes:

((x,s) >>= bar) >>= foo


Finally, it would be nice to not have to write (x, "") every time we wish to create an empty logging message, where "" is the empty string. So let us define a new function:

return : int -> int * string


Which wraps x in the tuple described above.

Now we have a nice pipeline for logging messages:

((return x) >>= bar) >>= foo


That allows us to more easily log the effects of bar and foo on x.

int * string is analogous to a monadic value. bind and return are analogous to the corresponding functions of the same name. In fact, int * string, bind, and return form a monad.
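The pseudocode above can be made concrete in Python (names illustrative; `unit` stands in for the reserved word `return`), with the monadic value represented as an (int, str) tuple:

```python
# The logging pseudocode made concrete: a monadic value is an (int, str)
# tuple, and bind contains the tuple-unpacking boilerplate exactly once.
def unit(x):
    return (x, "")                    # wrap x with an empty logging message

def bind(t, f):
    x, log = t                        # extract the integer from the tuple
    y, msg = f(x)                     # f : int -> (int, str)
    return (y, log + msg)             # accumulate the logging messages

def foo(x):
    return (x + 1, f"foo({x}); ")     # compute, plus a note about the step

def bar(x):
    return (x * 2, f"bar({x}); ")

result = bind(bind(unit(5), bar), foo)
```

Here `result` is `(11, "bar(5); foo(10); ")`: the pipeline computes foo(bar(5)) while the log of both applications accumulates automatically in the second component.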

## See also

Alternatives for modeling computations:

• Effect systems are a different way to describe side effects as types
• Uniqueness types are a third approach to handling side-effects in functional languages

Related design concepts:

• Aspect-oriented programming emphasizes separating out ancillary bookkeeping code to improve modularity and simplicity
• Inversion of control is the abstract principle of calling specific functions from an overarching framework
• Type classes are a specific language feature used to implement monads and other structures in Haskell
• The decorator pattern is a more concrete, ad-hoc way to achieve similar benefits in object-oriented programming