In Unix-like computer operating systems (and, to some extent, Microsoft Windows), a pipeline is the original software pipeline: a set of processes chained by their standard streams, so that the output of each process (stdout) feeds directly as input (stdin) to the next one. Each connection is implemented by an anonymous pipe. Filter programs are often used in this configuration.
% program1 | program2 | program3
% ls -l | grep key | less
A Unix pipeline can be thought of as a left-associative infix operation whose operands are programs with parameters. Programmatically, all programs in a pipeline run at the same time (in parallel), although the syntax suggests that one runs after another. The arrangement amounts to function composition, as in functional programming, where data is passed from one function to the next.
Pipelines in command line interfaces
All widely used Unix and Windows shells have a special syntax construct for the creation of pipelines. In all cases, one writes the filter commands in sequence, separated by the ASCII vertical bar character "|" (which, for this reason, is often called the "pipe character"). The shell starts the processes and arranges for the necessary connections between their standard streams (including some amount of buffer storage).
By default, the standard error streams ("stderr") of the processes in a pipeline are not passed on through the pipe; instead, they are merged and directed to the console. However, many shells have additional syntax for changing this behaviour. In the csh shell, for instance, using "|&" instead of "|" signifies that the standard error stream should also be merged with the standard output and fed to the next process. The Bourne shell can likewise merge standard error into the pipe using 2>&1, as well as redirect it to a different file.
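At the file descriptor level, 2>&1 is simply a duplication: descriptor 2 (stderr) is made a copy of descriptor 1 (stdout), so both streams travel to the same destination. A minimal C sketch of that operation (an illustration of the mechanism, not code from the source):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Equivalent of the shell's "2>&1": make fd 2 a duplicate of fd 1,
       so anything written to stderr goes wherever stdout currently points. */
    if (dup2(STDOUT_FILENO, STDERR_FILENO) == -1) {
        perror("dup2");
        return 1;
    }
    fprintf(stderr, "this message now follows stdout\n");
    return 0;
}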
In the most commonly used simple pipelines the shell connects a series of sub-processes via pipes, and executes external commands within each sub-process. Thus the shell itself is doing no direct processing of the data flowing through the pipeline.
However, it is possible for the shell to perform processing directly, using a so-called "mill" or "pipemill" (since a while command is used to "mill" over the results from the initial command). This construct generally looks something like:
command | while read var1 var2 ...; do
    # process each line, using variables as parsed into $var1, $var2, etc.
    # (note that this is a subshell: var1, var2, etc. will not be available
    # after the while loop terminates)
done
Such a pipemill may not perform as intended if the body of the loop includes commands, such as cat and ssh, that read from stdin: on the loop's first iteration, such a program (call it the drain) will read the remaining output from command, and the loop will then terminate (with results depending on the specifics of the drain). There are a couple of ways to avoid this behavior. First, some drains support an option to disable reading from stdin (e.g. ssh -n). Alternatively, if the drain does not need to read any input from stdin to do something useful, it can be given < /dev/null as input.
Creating pipelines programmatically
Pipelines can be created under program control. The Unix pipe() system call asks the operating system to construct a new anonymous pipe object. This results in two new, open file descriptors in the process: the read-only end of the pipe and the write-only end. The pipe ends appear to be normal, anonymous file descriptors, except that they have no ability to seek.
To avoid deadlock and exploit parallelism, the Unix process with one or more new pipes will then, generally, call fork() to create new processes. Each process will then close the end(s) of the pipe that it will not be using, before producing or consuming any data. Alternatively, a process might create a new thread and use the pipe to communicate between them.
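As a concrete illustration, the following minimal C sketch wires up the equivalent of ls -l | wc -l by hand, following the sequence just described: create the pipe, fork, close the unused ends, and attach the remaining ends to the standard streams. (This is an example built from the usual POSIX calls, not code from the source.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fd[2]; /* fd[0] is the read end, fd[1] the write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {
        /* child: the producer, "ls -l" */
        close(fd[0]);                  /* close the unused read end */
        dup2(fd[1], STDOUT_FILENO);    /* stdout now feeds the pipe */
        close(fd[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp"); exit(1);
    }

    /* parent: the consumer, "wc -l" */
    close(fd[1]);                      /* close the unused write end */
    dup2(fd[0], STDIN_FILENO);         /* stdin now drains the pipe */
    close(fd[0]);
    execlp("wc", "wc", "-l", (char *)NULL);
    perror("execlp"); exit(1);
}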
Named pipes may also be created using mknod() and then presented as the input or output file to programs as they are invoked. They allow multi-path pipes to be created, and are especially effective when combined with standard error redirection, or with tee.
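A minimal C sketch of a named pipe in use (illustrative only; it uses mkfifo(), the modern POSIX wrapper for creating a FIFO, rather than calling mknod() directly, and /tmp/demo_fifo is a hypothetical path chosen for the demo):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";    /* hypothetical path for this demo */
    if (mkfifo(path, 0600) == -1) { perror("mkfifo"); exit(1); }

    if (fork() == 0) {
        /* child: the writer; open() blocks until a reader appears */
        int wfd = open(path, O_WRONLY);
        write(wfd, "hello through the fifo\n", 23);
        close(wfd);
        _exit(0);
    }

    /* parent: the reader; open() blocks until a writer appears */
    int rfd = open(path, O_RDONLY);
    char buf[64];
    ssize_t n = read(rfd, buf, sizeof buf);
    if (n > 0) write(STDOUT_FILENO, buf, (size_t)n);
    close(rfd);
    wait(NULL);
    unlink(path);                            /* remove the pipe from the filesystem */
    return 0;
}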
In most Unix-like systems, all processes of a pipeline are started at the same time, with their streams appropriately connected, and managed by the scheduler together with all other processes running on the machine. An important aspect of this, setting Unix pipes apart from other pipe implementations, is the concept of buffering: for example, a sending program may produce 5000 bytes per second while a receiving program may only be able to accept 100 bytes per second, but no data is lost. Instead, the output of the sending program is held in a queue. When the receiving program is ready to read, the operating system delivers data from the queue and removes it. If the queue buffer fills up, the sending program is suspended (blocked) until the receiving program has had a chance to read some data and make room in the buffer. In Linux, the size of the buffer is 65536 bytes. An open-source third-party filter called bfr is available to provide larger buffers if required.
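On Linux, the capacity of a given pipe can be inspected (and changed) via fcntl(); the following minimal sketch prints it and will typically report the 65536-byte default. (F_GETPIPE_SZ is Linux-specific, available since kernel 2.6.35; this is an illustration, not code from the source.)

#define _GNU_SOURCE        /* exposes F_GETPIPE_SZ */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }
    int cap = fcntl(fd[1], F_GETPIPE_SZ); /* query the pipe's buffer size */
    if (cap == -1) { perror("fcntl"); return 1; }
    printf("pipe capacity: %d bytes\n", cap);
    return 0;
}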
The pipeline concept and the vertical-bar notation were invented by Douglas McIlroy, one of the authors of the early command shells, after he noticed that much of the time they were processing the output of one program as the input to another. His ideas were implemented in 1973 when Ken Thompson added pipes to the UNIX operating system. The idea was eventually ported to other operating systems, such as DOS, OS/2, Microsoft Windows, and BeOS, often with the same notation.
See also
- Anonymous pipe, a FIFO structure used for interprocess communication
- GStreamer, a pipeline-based multimedia framework
- Hartmann pipeline
- Named pipe, persistent pipes used for interprocess communication
- Pipeline (computing), for other computer-related pipelines
- Pipeline (software), for the general software engineering concept
- Redirection (computing)
- Tee (command), a general command for tapping data from a pipeline
- XML pipeline, for processing of XML files