
In functional programming, a monad is a design pattern that defines how functions, actions, inputs, and outputs can be used together to build generic types,[1] with the following organization:

1. Define a data type, and how values of that data type are combined.
2. Create functions that use the data type, and compose them together into actions, following the rules defined in the first step.

A monad may encapsulate values of a particular data type, creating a new type associated with a specific additional computation, typically to handle special cases of the type. For example, the simple Maybe monad encapsulates variables which may have a null value, representing an option type, and automatically ensures that null values are not passed as arguments to functions that cannot handle them, serving as an alternative programming technique to throwing and catching exceptions when null values arise. Another example is the List monad, where the empty list is a constant value of type List, and the cons operator binds a plain value as the head of a previous list.

The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad.[2] A monad is defined by a return operator that creates values, and a bind operator used to link the actions in the pipeline; this definition must follow a set of axioms called monad laws, which are needed for the composition of actions in the pipeline to work properly.

Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. As such, monads have been described as "programmable semicolons"; a semicolon is the operator used to chain together individual statements in many imperative programming languages,[2] thus the expression implies that extra code will be executed between the actions in the pipeline. Monads have also been explained with a physical metaphor as assembly lines, where a conveyor belt transports data between functional units that transform it one step at a time.[3]

Purely functional programs can use monads to structure procedures that include sequenced operations like those found in structured programming.[4][5] Many common programming concepts can be described in terms of a monad structure without losing the beneficial property of referential transparency, including side effects such as input/output, variable assignment, exception handling, parsing, nondeterminism, concurrency, continuations, or domain-specific languages. This allows these concepts to be defined in a purely functional manner, without major extensions to the language's semantics. Languages like Haskell provide monads in the standard core, allowing programmers to reuse large parts of their formal definition and apply in many different libraries the same interfaces for combining functions.[6]

The name and concept come from category theory, where monads are one particular kind of functor, a mapping between categories. In functional programming contexts, thanks to the specifics of the category used, the term monad denotes what is known as a strong monad in category theory.[7]

## Overview

A monad is created by defining a type constructor M and two operations, bind and return (where return is often also called unit):

• The unary return operation takes a value from a plain type (a) and puts it into a container using the constructor, creating a monadic value (with type M a).
• The binary bind operation ">>=" takes as its arguments a monadic value with type M a and a function (a → M b) that can transform the value.
• The bind operator unwraps the plain value with type a embedded in its input monadic value with type M a, and feeds it to the function.
• The function then creates a new monadic value, with type M b, that can be fed to the next bind operators composed in the pipeline.

With these elements, the programmer composes a sequence of function calls (the "pipeline") with several bind operators chained together in an expression. Each function call transforms its input plain type value, and the bind operator handles the returned monadic value, which is fed into the next step in the sequence. Between each pair of composed function calls, the bind operator can inject into the monadic value some additional information that is not accessible within the function, and pass it along. It can also exert finer control of the flow of execution, for example by calling the function only under some conditions, or executing the function calls in a particular order.

For example, the following code defines a binary operator x//y as safe division that avoids dividing by zero, using the Maybe monad and its constructors Nothing and Just.[a] The monadic values x and y may have the form Nothing or Just i, where i is an integer. The integer values contained within x and y are extracted into the plain values a and b, which are processed by the plain division operator "/" only when b is not zero.

-- Defining a safe division operator "//" (a "monadic function") using the Maybe monad
x // y =                                          -- The expression x // y is defined using the
  x >>= (\a ->                                    -- ''bind'' binary operator (>>=) of the ''Maybe'' monad,
    y >>= (\b ->                                  -- which takes as parameters 'x', 'y' and anonymous functions
      if b == 0 then Nothing else Just (a / b)))  -- that divide x by y. The result value has type Maybe Number

-- Example usages ("pipelines")
Nothing // Just 10 -- This expression returns Nothing (by definition of the ''bind'' operator (>>=) in the Maybe monad)
Just 10 // Just 5 -- This expression returns Just 2 (by definition of the // operator)
Just 10 // Just 0 -- This expression returns Nothing (by definition of the // operator)
(Just 10 // Just 0) // Just 2 -- This expression, a pipeline of two composed "//" calls, returns Nothing


In the fourth example expression, a pipeline that chains together two safe divisions, the result of the first division is Nothing, which is fed as the input value to the second "//" operator; therefore the result is Nothing as well. Notice how the definition of the "//" operator doesn't need to check whether any of its input values is Nothing, as the bind operator of the Maybe monad already handles this concern: by definition of bind, when either the x or the y monadic parameter is Nothing (instead of matching the pattern Just value), the "if b == 0 ..." expression is not evaluated.
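The pipeline above can be made directly runnable. A minimal sketch: the name safeDiv is ours, and it uses integer division (`div`) on Int, since Haskell's "/" is only defined for Fractional types.

```haskell
-- Safe division in the Maybe monad, as a runnable sketch.
-- `div` replaces the article's "/" so the code works on Int.
safeDiv :: Maybe Int -> Maybe Int -> Maybe Int
safeDiv x y =
  x >>= (\a ->
    y >>= (\b ->
      if b == 0 then Nothing else Just (a `div` b)))

-- safeDiv Nothing   (Just 10)                     evaluates to Nothing
-- safeDiv (Just 10) (Just 5)                      evaluates to Just 2
-- safeDiv (safeDiv (Just 10) (Just 0)) (Just 2)   evaluates to Nothing
```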

The operations that define the monad must fulfil several properties to allow the correct composition of monadic functions (i.e. functions that use values from the monad as their arguments or return value). Because a monad can insert additional operations around a program's domain logic, monads can be considered a sort of aspect-oriented programming.[8] The domain logic can be defined by the application programmer in the pipeline, while the required bookkeeping operations are handled by a pre-defined monad built in advance.

## History

The APL and J programming languages, which tend toward being purely functional, use the term monad as a shorthand for "function taking a single parameter" (as opposed to "dyad", or "function taking two parameters"). This predates the use of the term in Haskell by nearly thirty years, and the meaning is entirely different.

The concept of monad programming appeared in the 1980s in the programming language Opal even though it was called "commands" and never formally specified.[citation needed]

Eugenio Moggi first described the general use of monads to structure programs in 1991.[7] Several people built on his work, including programming language researchers Philip Wadler and Simon Peyton Jones (both of whom were involved in the specification of Haskell). Early versions of Haskell used a problematic "lazy list" model for I/O, and Haskell 1.3 introduced monads as a more flexible way to combine I/O with lazy evaluation.

In addition to I/O, programming language researchers and Haskell library designers have successfully applied monads to topics including parsers and programming language interpreters. The concept of monads along with the Haskell do-notation for them has also been generalized to form applicative functors and arrows.

For a long time, Haskell and its derivatives have been the only major users of monads in programming. There also exist formulations in Scheme, Perl, Python, Racket, Clojure and Scala, and monads have been an option in the design of a new ML standard. Recently F# has included a feature called computation expressions or workflows, which are an attempt to introduce monadic constructs within a syntax more palatable to those programmers whose only prior experience has been with imperative languages.[9]

## Motivating examples

The Haskell programming language is a functional language that makes heavy use of monads, and includes syntactic sugar to make monadic composition more convenient. All of the code samples in this article are written in Haskell unless noted otherwise.

Two examples are often given when introducing monads: the Maybe monad, which represents computations whose expressions can contain null values, and the I/O monad, which represents computations that interact with input/output effects. Of course, monads are not restricted to the Haskell language. The examples section below shows in JavaScript the Writer monad, which accumulates a separate log alongside the main chain of values in a computation.

Consider the option type Maybe t, representing a value that is either a single value of type t, or no value at all. To distinguish these, there are two algebraic data type constructors: Just t, containing the value t, or Nothing, containing no value.

data Maybe t = Just t | Nothing


We would like to be able to use this type as a simple sort of checked exception: at any point in a computation, the computation may fail, which causes the rest of the computation to be skipped and the final result to be Nothing. If all steps of the calculation succeed, the final result is Just x for some value x.

In the following example, add is a function that takes two arguments of type Maybe Int, and returns a result of the same type. If both mx and my have Just values, we want to return Just their sum; but if either mx or my is Nothing, we want to return Nothing. If we naively attempt to write functions with this kind of behavior, we'll end up with a nested series of "if Nothing then Nothing else do something with the x in Just x" cases that will quickly become unwieldy:[2]

add :: Maybe Int -> Maybe Int -> Maybe Int
add mx my =
  case mx of
    Nothing -> Nothing
    Just x  -> case my of
                 Nothing -> Nothing
                 Just y  -> Just (x + y)


To alleviate this, we can define operations for chaining these computations together. The bind binary operator (>>=) chains the results of one computation that could fail, into a function that chooses another computation that could fail. If the first argument is Nothing, the second argument (the function) is ignored and the entire operation simply fails. If the first argument is Just x, we pass x to the function to get a new Maybe value, which may or may not result in a Just value.

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing  >>= _    =    Nothing    -- A failed computation returns Nothing
(Just x) >>= f    =    f x        -- Applies function f to value x


We already have a value constructor that returns a value without affecting the computation's additional state: Just.

return :: a -> Maybe a
return x   =   Just x       -- Wraps value x, returning a value of type (Maybe a)


We can then write the example as:

add :: Maybe Int -> Maybe Int -> Maybe Int
add mx my =               -- Adds two values of type (Maybe Int), where each input value can be Nothing
  mx >>= (\x ->           -- Extracts value x if mx is not Nothing
    my >>= (\y ->         -- Extracts value y if my is not Nothing
      return (x + y)))    -- Wraps value (x+y), returning the sum as a value of type (Maybe Int)


Using some additional syntactic sugar known as do-notation, the example can be written as:

add :: Maybe Int -> Maybe Int -> Maybe Int
add mx my = do
  x <- mx
  y <- my
  return (x + y)


The above do expression resembles an imperative program, which assigns the Int values contained in the monadic values 'mx' and 'my' to temporary Int variables 'x' and 'y', and short-circuits, as an exception would, if either 'mx' or 'my' has the value Nothing. In that case, the whole do expression evaluates to Nothing. The exception-like handling is declared simply by giving the input parameters the type Maybe Int: rather than having to declare an error case in a try ... catch block, the behavior defined in the bind operator of the Maybe monad (returning Nothing) is executed in that case.

Since this type of operation is quite common, there is a standard function in Haskell (liftM2) to take two monadic values (here: two Maybes) and combine their contents (two numbers) using another function (addition), making it possible to write the previous example as

add :: Maybe Int -> Maybe Int -> Maybe Int
add mx my = liftM2 (+) mx my


(Writing out the definition of liftM2 yields the code presented above in do-notation.)
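The lifted version can be tried directly; a small sketch, where addM is our name (chosen to avoid clashing with the add defined above) and liftM2 comes from Control.Monad:

```haskell
import Control.Monad (liftM2)

-- liftM2 lifts the binary function (+) into a monad, here Maybe.
addM :: Maybe Int -> Maybe Int -> Maybe Int
addM = liftM2 (+)

-- addM (Just 1) (Just 2)  evaluates to Just 3
-- addM Nothing  (Just 2)  evaluates to Nothing
```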

In purely functional languages such as Haskell, functions cannot have any externally visible side effects as part of the function semantics. Although a function cannot directly cause a side effect, it can construct a value describing a desired side effect that the caller should apply at a convenient time.[10] In the Haskell notation, a value of type IO a represents an action that, when performed, produces a value of type a.

We can think of a value of type IO a as an action that takes as its argument the current state of the world, and returns a new world where the state has been changed according to the action, together with a result of type a. For example, the functions doesFileExist and removeFile in the standard Haskell library have the following types

doesFileExist :: FilePath -> IO Bool
removeFile :: FilePath -> IO ()


So, one can think of removeFile as a function that, given a FilePath, returns an IO action; this action will ensure that the world, in this case the underlying file system, won't have a file named by that FilePath when it gets executed. Here, the IO internal value is of type () which means that the caller does not care about any other outcomes. On the other hand, in doesFileExist, the function returns an IO action which wraps a boolean value, True or False; this conceptually represents a new state of the world where the caller knows for certain whether that FilePath is present in the file system at the time the action is performed. The state of the world managed in this way can be passed from action to action, thus defining a series of actions which will be applied in order as steps of state changes. This process is similar to how a temporal logic represents the passage of time using only declarative propositions. The following example clarifies in detail how this chaining of actions occurs in a program, again using Haskell.

We would like to be able to describe all of the basic types of I/O operations, e.g. write text to standard output, read text from standard input, read and write files, send data over networks, etc. In addition, we need to be able to compose these primitives to form larger programs. For example, we would like to be able to write:

main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Nice to meet you, " ++ name ++ "!")


How can we formalize this intuitive notation? To do this, we need to be able to perform some basic operations with I/O actions:

• We should be able to sequence two I/O operations together. In Haskell, this is written as an infix operator >>, so that putStrLn "abc" >> putStrLn "def" is an I/O action that prints two lines of text to the console. The type of >> is IO a → IO b → IO b, meaning that the operator takes two I/O operations and returns a third that sequences the two together and returns the value of the second.
• We should have an I/O action which does nothing. That is, it returns a value but has no side effects. In Haskell, this action constructor is called return; it has type a → IO a.
• More subtly, we should be able to determine our next action based on the results of previous actions. To do this, Haskell has an operator >>= (pronounced bind) with type IO a → (a → IO b) → IO b. That is, the operand on the left is an I/O action that returns a value of type a; the operand on the right is a function that can pick an I/O action based on the value produced by the action on the left. The resulting combined action, when performed, performs the first action, then evaluates the function with the first action's return value, then performs the second action, and finally returns the second action's value.
An example of the use of this operator in Haskell would be getLine >>= putStrLn, which reads a single line of text from standard input and echoes it to standard output. Note that the first operator, >>, is just a special case of this operator in which the return value of the first action is ignored and the selected second action is always the same.
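The claim that >> is a special case of >>= can be written out directly; a sketch, where the operator name thenOp is ours, chosen to avoid shadowing the Prelude's (>>):

```haskell
-- ">>" as a special case of ">>=": perform the first action, ignore
-- its result, and always select the same second action.
thenOp :: Monad m => m a -> m b -> m b
thenOp m k = m >>= \_ -> k

-- In IO:    putStrLn "abc" `thenOp` putStrLn "def" prints two lines.
-- In Maybe: Just 1 `thenOp` Just 2 evaluates to Just 2.
```

Because the definition is polymorphic in the monad m, the same operator works for IO, Maybe, lists, and every other monad.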

It is not necessarily obvious that the three preceding operations, along with a suitable primitive set of I/O operations, allow us to define any program action whatsoever, including data transformations (using lambda expressions), if/then control flow, and looping control flows (using recursion). We can write the above example as one long expression:

main =
  putStrLn "What is your name?" >>
  getLine >>= \name ->
  putStrLn ("Nice to meet you, " ++ name ++ "!")


The pipeline structure of the bind operator ensures that the getLine and putStrLn operations get evaluated only once and in the given order, so that the side-effects of extracting text from the input stream and writing to the output stream are correctly handled in the functional pipeline. This remains true even if the language performs out-of-order or lazy evaluation of functions.

Clearly, there is some common structure between the I/O definitions and the Maybe definitions, even though they are different in many ways. Monads are an abstraction upon the structures described above, and many similar structures, that finds and exploits the commonalities. The general monad concept includes any situation where the programmer wants to carry out a purely functional computation while a related computation is carried out on the side.

## Formal definition

A monad is a construction that, given an underlying type system, embeds a corresponding type system (called the monadic type system) into it (that is, each underlying type gains a corresponding monadic type). This monadic type system preserves all significant aspects of the underlying type system, while adding features particular to the monad.[b]

The usual formulation of a monad for programming is known as a Kleisli triple, and has the following components:

1. A type constructor that defines, for every underlying type, how to obtain a corresponding monadic type. In Haskell's notation, the name of the monad represents the type constructor. If M is the name of the monad and t is a data type, then M t is the corresponding type in the monad.
2. A unit function that injects a value of an underlying type into the corresponding monadic type. The unit function has the polymorphic type t → M t. The result is normally the "simplest" value in the corresponding type that completely preserves the original value (simplicity being understood appropriately to the monad). In Haskell, this function is called return due to the way it is used in the do-notation described later.
3. A binding operation of polymorphic type (M t) → (t → M u) → (M u), which Haskell represents by the infix operator >>=. Its first argument is a value in a monadic type, its second argument is a function that maps from the underlying type of the first argument to another monadic type, and its result is in that other monadic type. Typically, the binding operation can be understood as having four stages:
1. The monad-related structure on the first argument is "pierced" to expose any number of values in the underlying type t.
2. The given function is applied to all of those values to obtain values of type (M u).
3. The monad-related structure on those values is also pierced, exposing values of type u.
4. Finally, the monad-related structure is reassembled over all of the results, giving a single value of type (M u).

Given a type constructor M, in most contexts, a value of type M a can be thought of as an action that returns a value of type a. The return operation takes a value from a plain type a and puts it into a monadic container of type M a; the bind operation chains a monadic value of type M a with a function of type a → M b to create a monadic value of type M b.
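The three components of the Kleisli triple can be spelled out from scratch; a sketch using a fresh Option type (a renamed Maybe, so it does not clash with the Prelude's built-in instance), with unit and bindO as our illustrative names:

```haskell
-- 1. The type constructor: Option maps each type a to Option a.
data Option a = None | Some a deriving (Eq, Show)

-- 2. The unit function injects a plain value into the monadic type.
unit :: a -> Option a
unit = Some

-- 3. The binding operation pierces the monadic structure and
--    applies the given function to any exposed value.
bindO :: Option a -> (a -> Option b) -> Option b
bindO None     _ = None
bindO (Some x) f = f x
```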

For a monad to behave correctly, the definitions must obey a few axioms, together called the monad laws.[11] The ≡ symbol indicates equivalence between two Haskell expressions in the following text.

• return acts approximately as a neutral element of >>=, in that:
• (return v) >>= f ≡ f v
• m >>= return ≡ m
• Binding with two functions in succession is the same as binding with their monadic-style composition:
• (m >>= f) >>= g ≡ m >>= ( \x -> (f x >>= g) )

The axioms can also be expressed using expressions in do-block style:

• do { x <- return v; f x } ≡ do { f v }
• do { x <- m; return x } ≡ do { m }
• do { y <- do { x <- m; f x }; g y } ≡ do { x <- m; y <- f x; g y }

or using the monadic composition operator, (f >=> g) x = (f x) >>= g:

• return >=> g ≡ g
• f >=> return ≡ f
• (f >=> g) >=> h ≡ f >=> (g >=> h)
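The laws can be spot-checked for the Maybe monad at particular values; a small sketch, where f and g are arbitrary illustrative choices (a real proof requires equational reasoning over all inputs):

```haskell
-- Two arbitrary monadic functions for checking the laws in Maybe.
f :: Int -> Maybe Int
f x = if x == 0 then Nothing else Just (x + 1)

g :: Int -> Maybe Int
g x = Just (x * 2)

leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = (return 3 >>= f) == f 3
rightIdentity = (Just 3 >>= return) == Just 3
associativity = ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g))
-- All three evaluate to True.
```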

### fmap and join

Although Haskell defines monads in terms of the return and bind functions, it is also possible to define a monad in terms of return and two other operations, join and fmap. This formulation fits more closely with the original definition of monads in category theory. The fmap operation, with type (t → u) → M t → M u,[12] takes a function between two types and produces a function that does the "same thing" to values in the monad. The join operation, with type M (M t) → M t, "flattens" two layers of monadic information into one.

The two formulations are related as follows:

fmap f m = m >>= (return . f)
join n = n >>= id

m >>= g   ≡   join (fmap g m)


Here, m has the type M t, n has the type M (M r), f has the type t → u, and g has the type t → M v, where t, r, u and v are underlying types.

The fmap function is defined for any functor in the category of types and functions, not just for monads. It is expected to satisfy the functor laws:

fmap id   ≡   id
fmap (f . g)   ≡   (fmap f) . (fmap g)


The return function characterizes pointed functors in the same category, by accounting for the ability to "lift" values into the functor. It should satisfy the following law:

return . f ≡ fmap f . return


This is equivalent to stating that return is a natural transformation from the identity "id" to "fmap".

join . fmap join     ≡ join . join
join . fmap return   ≡ join . return = id
join . fmap (fmap f) ≡ fmap f . join


The first two laws correspond to the monad axioms, while the third states that join is a natural transformation from "fmap . fmap" to "fmap".
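The relation m >>= g ≡ join (fmap g m) can be checked concretely in the list monad; a sketch, with g and m as arbitrary illustrative choices and join imported from Control.Monad:

```haskell
import Control.Monad (join)

g :: Int -> [Int]
g x = [x, x * 10]

m :: [Int]
m = [1, 2]

viaBind, viaJoin :: [Int]
viaBind = m >>= g          -- same as concatMap g m
viaJoin = join (fmap g m)  -- fmap g m is [[1,10],[2,20]]; join flattens it
-- Both evaluate to [1,10,2,20].
```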

### Additive monads

An additive monad is a monad endowed with a monadic zero mzero and a binary operator mplus satisfying the monoid laws, with the monadic zero as unit. The operator mplus has type M t → M t → M t (where M is the monad constructor and t is the underlying data type): it satisfies the associative law and has the zero as both left and right identity. That is:

(a mplus b) mplus c   ≡   a mplus (b mplus c)
m mplus mzero           ≡   m
mzero mplus m           ≡   m


Thus, an additive monad is also a monoid. For >>=, on the other hand, mzero acts as a null-element. Just as multiplying a number by 0 results in 0, binding mzero with any function produces the zero for the result type:

mzero >>= f               ≡   mzero


Similarly, binding any m with a function that always returns a zero results in a zero

m >>= (\x -> mzero)       ≡   mzero


Intuitively, the zero represents a value in the monad that has only monad-related structure and no values from the underlying type. In the Maybe monad, "Nothing" is a zero. In the List monad, "[]" (the empty list) is a zero.
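These zero laws can be observed for both Maybe and lists with the standard mzero and mplus from Control.Monad; a sketch:

```haskell
import Control.Monad (mzero, mplus)

-- Maybe: Nothing is the zero; mplus keeps the first non-Nothing value.
ex1, ex2 :: Maybe Int
ex1 = Nothing `mplus` Just 5        -- zero as left identity: Just 5
ex2 = mzero >>= \x -> Just (x + 1)  -- binding the zero yields the zero: Nothing

-- Lists: [] is the zero; mplus is concatenation.
ex3, ex4 :: [Int]
ex3 = [1, 2] `mplus` [3]            -- [1,2,3]
ex4 = [1, 2] >>= \_ -> mzero        -- a function that always returns the zero: []
```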

## Syntactic sugar: do-notation

Although there are times when it makes sense to use the bind operator >>= directly in a program, it is more typical to use a format called do-notation (perform-notation in OCaml, computation expressions in F#), that mimics the appearance of imperative languages. The compiler translates do-notation to expressions involving >>=.

For example, both following definitions for safe division for values in the Maybe monad are equivalent:

x // y = do
  a <- x   -- Extract the values "inside" x and y, if there are any.
  b <- y
  if b == 0 then Nothing else Just (a / b)

x // y = x >>= (\a -> y >>= (\b -> if b == 0 then Nothing else Just (a / b)))


As another example, the following code:

a = do x <- [3..4]
       [1..2]
       return (x, 42)


is transformed during compilation into:

a = [3..4] >>= (\x -> [1..2] >>= (\_ -> return (x, 42)))


It is helpful to see the implementation of the list monad, and to know that concatMap maps a function over a list and concatenates (flattens) the resulting lists:

instance Monad [] where
  m >>= f  = concat (map f m)
  return x = [x]
  fail s   = []

concatMap f = concat . map f


Therefore, the following transformations hold and all the following expressions are equivalent:

a = [3..4] >>= (\x -> [1..2] >>= (\_ -> return (x, 42)))
a = [3..4] >>= (\x -> concatMap (\_ -> return (x, 42)) [1..2] )
a = [3..4] >>= (\x -> [(x,42),(x,42)] )
a = concatMap (\x -> [(x,42),(x,42)] ) [3..4]
a = [(3,42),(3,42),(4,42),(4,42)]


Notice that the values in the list [1..2] are not used. The lack of a left-pointing arrow, translated into a binding to a function that ignores its argument, indicates that only the monadic structure is of interest, not the values inside it, e.g. for a state monad this might be used for changing the state without producing any more result values. The do-block notation can be used with any monad as it is simply syntactic sugar for >>=.

A similar example in F# using a computation expression:

let readNum () =
  let s = Console.ReadLine()
  let succ,v = Int32.TryParse(s)
  if (succ) then Some(v) else None

let secure_div =
  maybe {
    let! x = readNum()
    let! y = readNum()
    if (y = 0)
    then None
    else return (x / y)
  }


The syntactic sugar of the maybe block would get translated internally to the following expression:

maybe.Delay(fun () ->
  maybe.Bind(readNum(), fun x ->
    maybe.Bind(readNum(), fun y ->
      if (y=0) then None else maybe.Return(x / y))))


Given values produced by safe division, we might want to carry on doing calculations without having to check manually if they are Nothing (i.e. resulted from an attempted division by zero). We can do this using a "lifting" function, which we can define not only for Maybe but for arbitrary monads. In Haskell this is called liftM2:

liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
liftM2 op mx my = do
  x <- mx
  y <- my
  return (op x y)


Recall that arrows in a type associate to the right, so liftM2 is a function that takes a binary function as an argument and returns another binary function. The type signature says: If m is a monad, we can "lift" any binary function into it. For example:

(.*.) :: (Monad m, Num a) => m a -> m a -> m a
x .*. y = liftM2 (*) x y


defines an operator (.*.) which multiplies two numbers, unless one of them is Nothing (in which case it again returns Nothing). The advantage here is that we need not dive into the details of the implementation of the monad; if we need to do the same kind of thing with another function, or in another monad, using liftM2 makes it immediately clear what is meant (see Code reuse).
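Using the liftM2 that ships in Control.Monad, the (.*.) operator can be tried in several monads at once; a sketch:

```haskell
import Control.Monad (liftM2)

-- Lift multiplication into any monad.
(.*.) :: (Monad m, Num a) => m a -> m a -> m a
x .*. y = liftM2 (*) x y

-- Just 3 .*. Just 4   evaluates to Just 12
-- Nothing .*. Just 4  evaluates to Nothing
-- [1,2] .*. [10]      evaluates to [10,20] (the same lifting, in the list monad)
```

The last line shows the reuse the paragraph describes: nothing about (.*.) mentions Maybe, so it works unchanged for lists and every other monad.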

Mathematically, the liftM2 operator is defined by:

${\displaystyle {\text{liftM2}}\colon \forall M\colon {\text{monad}},\;(A_{1}\to A_{2}\to R)\to M\,A_{1}\to M\,A_{2}\to M\,R\;=\;op\mapsto m_{1}\mapsto m_{2}\mapsto {\text{bind}}\;m_{1}\;(a_{1}\mapsto {\text{bind}}\;m_{2}\;(a_{2}\mapsto {\text{return}}\;(op\,a_{1}\,a_{2})))}$

## Other examples

### Identity monad

The simplest monad is the identity monad, which attaches no information to values.

Id t = t
return x = x
x >>= f = f x


A do-block in this monad performs variable substitution; do {x <- 2; return (3*x)} results in 6.

From the category theory point of view, the identity monad is derived from the adjunction between identity functors.
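In Haskell the identity monad is available as Identity from Data.Functor.Identity in base; a sketch of the do-block above:

```haskell
import Data.Functor.Identity (Identity (..))

-- The do-block performs plain variable substitution in the
-- identity monad; runIdentity unwraps the final value.
example :: Int
example = runIdentity $ do
  x <- Identity 2
  return (3 * x)
-- example evaluates to 6
```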

### Collections

Some familiar collection types, including lists, sets, and multisets, are monads. The definition for lists is given here.

-- "return" constructs a one-item list.
return x = [x]
-- "bind" concatenates the lists obtained by applying f to each item in list xs.
xs >>= f = concat (map f xs)
-- The zero object is an empty list.
mzero = []


List comprehensions are a special application of the list monad. For example, the list comprehension [ 2*x | x <- [1..n], isOkay x] corresponds to the computation in the list monad do {x <- [1..n]; if isOkay x then return (2*x) else mzero;}.
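The correspondence can be run directly; a sketch using `even` as a stand-in for the article's isOkay predicate, with [] playing the role of mzero:

```haskell
-- A list comprehension and its list-monad translation.
doubledEvens :: Int -> [Int]
doubledEvens n = [ 2 * x | x <- [1 .. n], even x ]

doubledEvensM :: Int -> [Int]
doubledEvensM n = do
  x <- [1 .. n]
  if even x then return (2 * x) else []
-- doubledEvens 5 and doubledEvensM 5 both evaluate to [4,8]
```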

The notation of list comprehensions is similar to set-builder notation, but sets cannot be made into a monad, since a set requires that its elements be comparable for equality, whereas a monad places no constraints on the types of its computations; strictly speaking, Set forms a restricted monad.[13] The monads for collections naturally represent nondeterministic computation. The list (or other collection) represents all the possible results from different nondeterministic paths of computation at that given time. For example, when one executes x <- [1,2,3,4,5], one is saying that the variable x can non-deterministically take on any of the values of that list. If one were to return x, it would evaluate to a list of the results from each path of computation. Notice that the bind operator above follows this theme by performing f on each of the current possible results, and then concatenating the result lists together.

Statements like if condition x y then return () else mzero are also often seen: if the condition is true, the nondeterministic choice proceeds along one dummy path of computation, which yields a value we do not bind to anything; if the condition is false, the mzero = [] monad value nondeterministically chooses from 0 values, effectively terminating that path of computation. Other paths of computation might still succeed. This effectively serves as a "guard", enforcing that only paths of computation satisfying certain conditions can continue. So collection monads are very useful for solving logic puzzles, Sudoku, and similar problems.

In a language with lazy evaluation, like Haskell, a list is evaluated only to the degree that its elements are requested: for example, if one asks for the first element of a list, only the first element will be computed. With respect to usage of the list monad for non-deterministic computation that means that we can non-deterministically generate a lazy list of all results of the computation and ask for the first of them, and only as much work will be performed as is needed to get that first result. The process roughly corresponds to backtracking: a path of computation is chosen, and then if it fails at some point (if it evaluates mzero), then it backtracks to the last branching point, and follows the next path, and so on. If the second element is then requested, it again does just enough work to get the second solution, and so on. So the list monad is a simple way to implement a backtracking algorithm in a lazy language.
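The lazy backtracking described above can be demonstrated with a classic search; a sketch, where triples is our illustrative name, enumerating Pythagorean triples in the list monad:

```haskell
-- Backtracking search in the lazy list monad: each (<-) is a
-- nondeterministic choice, and [] (the zero) prunes a failed path.
triples :: [(Int, Int, Int)]
triples = do
  c <- [1 ..]
  b <- [1 .. c]
  a <- [1 .. b]
  if a * a + b * b == c * c then return (a, b, c) else []

-- take 1 triples evaluates to [(3,4,5)]; thanks to laziness, only
-- enough candidate paths are explored to reach that first solution,
-- even though the outer choice ranges over an infinite list.
```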

From the category theory point of view, collection monads are derived from adjunctions between a free functor and an underlying functor between the category of sets and a category of monoids. Taking different types of monoids, we obtain different types of collections, as in the table below:

| Type of collection | Type of monoid |
| --- | --- |
| list | monoid |
| finite multiset | commutative monoid |
| finite set | idempotent commutative monoid |
| finite permutation | idempotent non-commutative monoid |

An environment monad (also called a reader monad and a function monad) allows a computation to depend on values from a shared environment. The monad type constructor maps a type T to functions of type E → T, where E is the type of the shared environment. The monad functions are:

${\displaystyle {\begin{array}{ll}{\text{return}}\colon &T\rightarrow E\rightarrow T=t\mapsto e\mapsto t\\{\text{bind}}\colon &(E\rightarrow T)\rightarrow (T\rightarrow E\rightarrow T')\rightarrow E\rightarrow T'=r\mapsto f\mapsto e\mapsto f\,(r\,e)\,e\end{array}}}$

The following monadic operations are useful:

${\displaystyle {\begin{array}{ll}{\text{ask}}\colon &E\rightarrow E={\text{id}}_{E}\\{\text{local}}\colon &(E\rightarrow E)\rightarrow (E\rightarrow T)\rightarrow E\rightarrow T=f\mapsto c\mapsto e\mapsto c\,(f\,e)\end{array}}}$

The ask operation is used to retrieve the current context, while local executes a computation in a modified subcontext. As in a state monad, computations in the environment monad may be invoked by simply providing an environment value and applying it to an instance of the monad.
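The definitions above can be rendered as a self-contained Haskell sketch; the names Reader, ask and local mirror the operations described here but are illustrative, not the standard library's Control.Monad.Reader:

```haskell
-- A computation of type t that reads from an environment of type e.
newtype Reader e t = Reader { runReader :: e -> t }

instance Functor (Reader e) where
  fmap f (Reader r) = Reader (f . r)

instance Applicative (Reader e) where
  pure t = Reader (\_ -> t)                 -- "return": ignore the environment
  Reader rf <*> Reader rx = Reader (\e -> rf e (rx e))

instance Monad (Reader e) where
  -- "bind": run r, then pass its result and the same environment to f.
  Reader r >>= f = Reader (\e -> runReader (f (r e)) e)

-- Retrieve the current environment.
ask :: Reader e e
ask = Reader id

-- Run a computation in a modified subcontext.
local :: (e -> e) -> Reader e t -> Reader e t
local f (Reader c) = Reader (c . f)

-- Usage: read the environment directly, then again in a doubled subcontext.
example :: Reader Int (Int, Int)
example = do
  x <- ask
  y <- local (* 2) ask
  return (x, y)
```

Invoking the monad is just function application: runReader example 10 yields (10, 20).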

A state monad allows a programmer to attach state information of any type to a calculation. Given any value type, the corresponding type in the state monad is a function which accepts a state, then outputs a new state (of type s) along with a return value (of type t). This is similar to an environment monad, except that it also returns a new state, and thus allows modeling a mutable environment.

type State s t = s -> (t, s)


Note that this monad, unlike those already seen, takes a type parameter, the type of the state information. The monad operations are defined as follows:

-- "return" produces the given value without changing the state.
return x = \s -> (x, s)
-- "bind" threads the state: it runs m, then applies f to m's result.
m >>= f = \s -> let (x, s') = m s in (f x) s'


Useful state operations include:

get = \s -> (s, s) -- Examine the state at this point in the computation.
put s = \_ -> ((), s) -- Replace the state.
modify f = \s -> ((), f s) -- Update the state.


Another operation applies a state monad to a given initial state:

runState :: State s a -> s -> (a, s)
runState t s = t s


do-blocks in a state monad are sequences of operations that can examine and update the state data.
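As a self-contained sketch, the State type above can be wrapped in a newtype so that it can be given a Monad instance; the names are illustrative, not the standard library's Control.Monad.State:

```haskell
-- A stateful computation: a function from a state to (result, new state).
newtype State s t = State { runState :: s -> (t, s) }

instance Functor (State s) where
  fmap f (State m) = State (\s -> let (x, s') = m s in (f x, s'))

instance Applicative (State s) where
  pure x = State (\s -> (x, s))             -- "return": leave the state alone
  State mf <*> State mx =
    State (\s -> let (f, s')  = mf s
                     (x, s'') = mx s'
                 in (f x, s''))

instance Monad (State s) where
  -- "bind": run m, then feed its result and new state into f.
  State m >>= f = State (\s -> let (x, s') = m s in runState (f x) s')

-- Examine the state at this point in the computation.
get :: State s s
get = State (\s -> (s, s))

-- Replace the state.
put :: s -> State s ()
put s = State (\_ -> ((), s))

-- A do-block that examines and updates the state.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n
```

For example, runState tick 5 returns the old counter value together with the incremented state, (5, 6).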

Informally, a state monad of state type S maps the type of return values T into functions of type ${\displaystyle S\rightarrow T\times S}$, where S is the underlying state. The return and bind function are:

${\displaystyle {\begin{array}{ll}{\text{return}}\colon &T\rightarrow S\rightarrow T\times S=t\mapsto s\mapsto (t,s)\\{\text{bind}}\colon &(S\rightarrow T\times S)\rightarrow (T\rightarrow S\rightarrow T'\times S)\rightarrow S\rightarrow T'\times S\ =m\mapsto k\mapsto s\mapsto (k\ t\ s')\quad {\text{where}}\;(t,s')=m\,s\end{array}}}$.

From the category theory point of view, a state monad is derived from the adjunction between the product functor and the exponential functor, which exists in any cartesian closed category by definition.

The writer monad allows a program to compute various kinds of auxiliary output which can be "composed" or "accumulated" step-by-step, in addition to the main result of a computation. It is often used for logging or profiling. Given the underlying type T, a value in the writer monad has type W × T, where W is a type endowed with an operation satisfying the monoid laws. Namely, W has a binary operation, (a,b) → a*b, which is associative, (a*b)*c = a*(b*c), and has a neutral element ε with the property x*ε = ε*x = x for all x.

${\displaystyle {\text{return}}\colon T\rightarrow W\times T=t\mapsto (\epsilon ,t)}$
${\displaystyle {\text{bind}}\colon (W\times T)\rightarrow (T\rightarrow W\times T')\rightarrow W\times T'=(w,t)\mapsto f\mapsto (w*w',\,t')\quad {\text{where}}\;(w',t')=f\,t}$

where ε and * are the identity element of the monoid W and its associative operation, respectively.

In JavaScript, a writer instance can be expressed as a two-element array, in which the first element is an arbitrary value and the second element is an array that collects extra information.

const writer = [value, []];


The array brackets work here as the monad's type constructor, creating a value of the monadic type for the Writer monad from simpler components (the value in position 0 of the array, and the log array in position 1).

unit (used in place of return, which is a reserved word in JavaScript) creates a new writer instance from a basic value, with an empty accumulator array attached to it.

const unit = value => [value, []];


bind applies a function to a writer instance, and returns a new writer instance composed of the result of the application, and the algebraic sum of the initial and new accumulators.

const bind = (writer, transform) => {
    const [value, log] = writer;
    const [result, updates] = transform(value);
    return [result, log.concat(updates)];
};


pipeline is an auxiliary function that concatenates a sequence of binds applied to a list of functions.

const pipeline = (writer, ...transforms) =>
transforms.reduce(bind, writer);


Examples of functions that return values of the type expected by the above writer monad:

const squared = x => [x * x, [`${x} was squared.`]];
const halved = x => [x / 2, [`${x} was halved.`]];


Finally, an example of using the monad to build a pipeline of mathematical functions with debug information on the side (that is, an array of debug information is concatenated, and returned with the result, as well):

pipeline(unit(4), squared, halved); // [8, ['4 was squared.', '16 was halved.']]


A continuation monad with return type ${\displaystyle R}$ maps type ${\displaystyle T}$ into functions of type ${\displaystyle \left(T\rightarrow R\right)\rightarrow R}$. It is used to model continuation-passing style. The return and bind functions are as follows:

${\displaystyle {\begin{array}{ll}{\text{return}}\colon &T\rightarrow \left(T\rightarrow R\right)\rightarrow R=t\mapsto f\mapsto f\,t\\{\text{bind}}\colon &\left(\left(T\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T\rightarrow \left(T'\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T'\rightarrow R\right)\rightarrow R=c\mapsto f\mapsto k\mapsto c\,\left(t\mapsto f\,t\,k\right)\end{array}}}$

The call-with-current-continuation function is defined as follows:

${\displaystyle {\text{call/cc}}\colon \ \left(\left(T\rightarrow \left(T'\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T\rightarrow R\right)\rightarrow R\right)\rightarrow \left(T\rightarrow R\right)\rightarrow R=f\mapsto k\mapsto \left(f\left(t\mapsto x\mapsto k\,t\right)\,k\right)}$
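The continuation monad and call/cc above translate into a self-contained Haskell sketch; the names Cont, callCC and safeDouble are illustrative, not the standard library's Control.Monad.Cont:

```haskell
-- A computation that, given a continuation (t -> r), produces a result r.
newtype Cont r t = Cont { runCont :: (t -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure t = Cont (\k -> k t)                 -- "return": pass t to the continuation
  Cont cf <*> Cont cx = Cont (\k -> cf (\f -> cx (k . f)))

instance Monad (Cont r) where
  -- "bind": run c with a continuation that feeds its value into f.
  Cont c >>= f = Cont (\k -> c (\t -> runCont (f t) k))

-- call/cc: f receives an "escape" function; invoking it discards the
-- rest of the computation and jumps straight to the captured continuation k.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)

-- Usage: exit early with 0 when the argument is negative.
safeDouble :: Int -> Cont r Int
safeDouble x = callCC (\exit -> do
  if x < 0 then exit 0 else return ()
  return (x * 2))
```

Running the computation supplies the final continuation: runCont (safeDouble 21) id yields 42, while runCont (safeDouble (-3)) id escapes early and yields 0.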

### Others

Other concepts that researchers have expressed as monads include:

Free monads are similar to free monoids, in that they, intuitively speaking, are generic structures that fulfill the monad (monoid) laws without depending on the type in question.

For any type t, the free monoid of t is [t], with ++ as the associative binary operation and [] as the unit element. In Haskell, we can write this as:

instance Functor [] where
fmap _ [] = []
fmap fun (x:xs) = fun x : fmap fun xs

instance Monoid [t] where
mappend xs ys = xs ++ ys
mempty = []


Whereas in a concrete monoid, one could add the values t1,t2,...,tn with its binary operation; in [], they are simply concatenated into [t1,t2,...,tn], signifying that they "belong together". What that "belonging together" means, however, is left unspecified.

The free monad is based on the same idea. If we take List t = Nil | Cons t (List t) and insert a Functor into it, we get the free monad:

data Free f a = Return a | Bind (f (Free f a))

instance Functor f => Functor (Free f) where
fmap fun (Return x) = Return (fun x)
fmap fun (Bind x) = Bind (fmap (fmap fun) x)

instance Functor f => Monad (Free f) where
return x = Return x
Return x >>= fun = fun x
Bind x >>= fun = Bind (fmap (>>= fun) x)


Unlike List, which stores a list of values, Free stores a list of functors, wrapped around an initial value. Accordingly, the Functor and Monad instances of Free do nothing other than handing a given function down that list with fmap.
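A minimal sketch of putting the free monad to work as a tiny DSL: the Say functor and the run interpreter are illustrative names, and the Applicative instance (required by modern GHC, and omitted in the listing above) is filled in:

```haskell
data Free f a = Return a | Bind (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap fun (Return x) = Return (fun x)
  fmap fun (Bind x)   = Bind (fmap (fmap fun) x)

instance Functor f => Applicative (Free f) where
  pure = Return
  Return f <*> x = fmap f x
  Bind f   <*> x = Bind (fmap (<*> x) f)

instance Functor f => Monad (Free f) where
  Return x >>= fun = fun x
  Bind x   >>= fun = Bind (fmap (>>= fun) x)

-- One instruction: emit a string, then continue with the rest of the program.
data Say next = Say String next

instance Functor Say where
  fmap f (Say s n) = Say s (f n)

say :: String -> Free Say ()
say s = Bind (Say s (Return ()))

-- A program is just a data structure; it does nothing until interpreted.
program :: Free Say ()
program = do
  say "hello"
  say "world"

-- An interpreter that walks the list of functors, collecting the strings.
run :: Free Say a -> [String]
run (Return _)       = []
run (Bind (Say s k)) = s : run k
```

Here run program evaluates to ["hello", "world"]; a different interpreter could give the same program entirely different behavior, which is the point of leaving "belonging together" unspecified.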

Comonads are the categorical dual of monads. They are defined by a type constructor W T and two operations: extract with type W T → T for any T, and extend with type (W T → T′) → W T → W T′. The operations extend and extract are expected to satisfy these laws:

${\displaystyle {\text{extend}}\,\,{\text{extract}}={\text{id}}}$
${\displaystyle {\text{extract}}\circ ({\text{extend}}\,f)=f}$
${\displaystyle ({\text{extend}}\,f)\circ ({\text{extend}}\,g)={\text{extend}}\,(f\circ ({\text{extend}}\,g))}$

Alternatively, comonads may be defined in terms of operations fmap, extract and duplicate. The fmap and extract operations define W as a copointed functor. The duplicate operation characterizes comonads: it has type W T → W (W T) and satisfies the following laws:

${\displaystyle {\text{extract}}\circ {\text{duplicate}}={\text{id}}}$
${\displaystyle {\text{fmap}}\,{\text{extract}}\circ {\text{duplicate}}={\text{id}}}$
${\displaystyle {\text{duplicate}}\circ {\text{duplicate}}={\text{fmap}}\,{\text{duplicate}}\circ {\text{duplicate}}}$

The two formulations are related as follows:

${\displaystyle {\begin{array}{ll}{\text{fmap}}:&(A\rightarrow B)\rightarrow \mathrm {W} \,A\rightarrow \mathrm {W} \,B=f\mapsto {\text{extend}}\,(f\circ {\text{extract}})\\{\text{duplicate}}:&\mathrm {W} \,A\rightarrow \mathrm {W} \,\mathrm {W} \,A={\text{extend}}\,{\text{id}}\\{\text{extend}}:&(\mathrm {W} \,A\rightarrow B)\rightarrow \mathrm {W} \,A\rightarrow \mathrm {W} \,B=f\mapsto ({\text{fmap}}\,f)\circ {\text{duplicate}}\end{array}}}$

Whereas monads could be said to represent side-effects, a comonad W represents a kind of context. The extract function extracts a value from its context, while the extend function may be used to compose a pipeline of "context-dependent functions" of type W A → B.

The identity comonad is the simplest comonad: it maps type T to itself. The extract operator is the identity and the extend operator is function application.

The product comonad maps type T into tuples of type ${\displaystyle C\times T}$, where C is the context type of the comonad. The comonad operations are:

${\displaystyle {\begin{array}{ll}{\text{extract}}:&(C\times T)\rightarrow T=(c,t)\mapsto t\\{\text{extend}}:&((C\times A)\rightarrow B)\rightarrow C\times A\rightarrow C\times B=f\mapsto (c,a)\mapsto (c,f\,(c,a))\\{\text{fmap}}:&(A\rightarrow B)\rightarrow (C\times A)\rightarrow (C\times B)=f\mapsto (c,a)\mapsto (c,f\,a)\\{\text{duplicate}}:&(C\times A)\rightarrow (C\times (C\times A))=(c,a)\mapsto (c,(c,a))\end{array}}}$
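The product comonad's operations translate directly into Haskell as plain functions over pairs (a standalone sketch, not a type-class instance; scale is an illustrative context-dependent function):

```haskell
-- Extract the value, discarding its context (the first component).
extract :: (c, t) -> t
extract (_, t) = t

-- Apply a context-dependent function, keeping the context alongside its result.
extend :: ((c, a) -> b) -> (c, a) -> (c, b)
extend f (c, a) = (c, f (c, a))

-- Pair the whole value-in-context with its own context.
duplicate :: (c, a) -> (c, (c, a))
duplicate (c, a) = (c, (c, a))

-- A "context-dependent function": scale a value by its context.
scale :: (Int, Int) -> Int
scale (c, a) = c * a
```

For example, extract (extend scale (3, 4)) evaluates scale in context 3 and yields 12, while the context itself is preserved for further extend steps in a pipeline.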

The function comonad maps type T into functions of type ${\displaystyle M\rightarrow T}$, where M is a type endowed with a monoid structure. The comonad operations are:

${\displaystyle {\begin{array}{ll}{\text{extract}}:&(M\rightarrow T)\rightarrow T=f\mapsto f\,\varepsilon \\{\text{extend}}:&((M\rightarrow A)\rightarrow B)\rightarrow (M\rightarrow A)\rightarrow M\rightarrow B=f\mapsto g\mapsto m\mapsto f\,(m'\mapsto g\,(m*m'))\\{\text{fmap}}:&(A\rightarrow B)\rightarrow (M\rightarrow A)\rightarrow M\rightarrow B=f\mapsto g\mapsto (f\circ g)\\{\text{duplicate}}:&(M\rightarrow A)\rightarrow M\rightarrow M\rightarrow A=f\mapsto m\mapsto m'\mapsto f\,(m*m')\end{array}}}$

where ε is the identity element of ${\displaystyle M}$ and * is its associative operation.

The costate comonad maps a type T into type ${\displaystyle (S\rightarrow T)\times S}$, where S is the base type of the store. The comonad operations are:

${\displaystyle {\begin{array}{ll}{\text{extract}}:&((S\rightarrow T)\times S)\rightarrow T=(f,s)\mapsto f\,s\\{\text{extend}}:&(((S\rightarrow A)\times S)\rightarrow B)\rightarrow ((S\rightarrow A)\times S)\rightarrow (S\rightarrow B)\times S\,=f\mapsto (g,s)\mapsto ((s'\mapsto f\,(g,s')),s)\\{\text{fmap}}:&(A\rightarrow B)\rightarrow ((S\rightarrow A)\times S)\rightarrow (S\rightarrow B)\times S=f\mapsto (f',s)\mapsto (f\circ f',s)\\{\text{duplicate}}:&((S\rightarrow A)\times S)\rightarrow (S\rightarrow ((S\rightarrow A)\times S))\times S=(f,s)\mapsto ((s'\mapsto (f,s')),s)\end{array}}}$

• Arrows in functional programming — whereas monads generalize the results of a computation to effects, arrows further generalize the inputs similarly.
• Aspect-oriented programming, a paradigm to increase modularity by isolating secondary or supporting functionality.
• Effect system, an alternative way of describing side effects as types.
• Inversion of control — the abstract principle of calling specific functions from a reusable software entity.
• Monad transformers — which allow monads to be composed in a modular and convenient way.
• Uniqueness types — an alternative way of dealing with side effects in functional languages.

## Notes

1. ^ In this example the monad constructors are used directly to create monadic values instead of using the return operation.
2. ^ Technically, the monad is not required to preserve the underlying type. For example, the trivial monad in which there is only one polymorphic value which is produced by all operations satisfies all of the axioms for a monad. Conversely, the monad is not required to add any additional structure; the identity monad, which simply preserves the original type unchanged, also satisfies the monad axioms and is useful as a recursive base for monad transformers.

## References

1. ^ Lippert, Eric. "Monads, part one". Retrieved 6 September 2013.
2. ^ a b c O'Sullivan, Bryan; Goerzen, John; Stewart, Don (2009). Real World Haskell. O'Reilly. chapter 14.
3. ^ "A physical analogy for monads". Archived from the original on 10 September 2010.
4. ^ Wadler, Philip (1990). Comprehending Monads. Proceedings of the 1990 ACM Conference on LISP and Functional Programming. Nice.
5. ^ Wadler, Philip (1992). The Essence of Functional Programming. Conference Record of the Nineteenth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages.
6. ^ Hughes, J. (2005). "Programming with arrows". Advanced Functional Programming. Springer Berlin Heidelberg. pp. 73–129.
7. ^ a b Moggi, Eugenio (1991). "Notions of computation and monads" (PDF). Information and Computation. 93 (1). doi:10.1016/0890-5401(91)90052-4.
8. ^ De Meuter, Wolfgang. "Monads as a theoretical foundation for AOP". Workshop on Aspect Oriented Programming, ECOOP 1997.
9. ^ "Some Details on F# Computation Expressions". Retrieved 2007-12-14.
10. ^ Peyton Jones, Simon L.; Wadler, Philip. Imperative Functional Programming. Conference record of the Twentieth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Charleston, South Carolina. 1993