Function (computer programming): Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 10:28, 5 March 2009

In computer science, a subroutine or subprogram (also called procedure, function, method, or routine) is a portion of code within a larger program, which performs a specific task and is relatively independent of the remaining code. (The name "method" is mostly used in object-oriented programming, specifically for subroutines that are part of objects or object classes.)

As the name suggests, a subprogram or subroutine behaves in much the same way as a complete computer program that is used as a step in a larger program. A subroutine is often coded so that it can be executed (called) several times and/or from several places during a single execution of the program, including from other subroutines, and branch back (return) to the point after the call once its task is done.

Subroutines are a powerful programming tool [1], and the syntax of many programming languages includes support for writing and using them. Judicious use of subroutines (for example, through the structured programming approach) will often substantially reduce the cost of developing and maintaining a large program, while increasing its quality and reliability [2]. Subroutines, often collected into libraries, are an important mechanism for sharing and trading software.

Maurice Wilkes, David Wheeler, and Stanley Gill are credited with the invention of this concept, which they referred to as the closed subroutine [3].

Main concepts

The main component of a subroutine is its body, the piece of program that is executed when the subprogram is called, and defines its effect.

A subroutine may be coded so as to obtain a specific set of data values from the calling program (its parameters), and eventually provide a specific set of values to it (its return values). Indeed, a common use of subroutines is to implement mathematical functions, such as the logarithm of a number or the determinant of a matrix, for which the return values are entirely determined by the parameters. However, a subroutine call may also have other side effects, such as reading from or writing to a peripheral device, examining or modifying data structures in the computer's memory, halting the program or the machine, or even delaying the program's execution for a specified time.

A subroutine can be coded so that it may call itself recursively, at one or more places, in order to perform its task. This technique allows direct implementation of functions defined by mathematical induction and recursive divide and conquer algorithms.

Language support

High-level programming languages usually include specific constructs for

  • delimiting the part of the program (body) that comprises the subroutine,
  • assigning a name to the subroutine,
  • specifying the names and/or types of its parameters and/or return values,
  • providing a private naming scope for its temporary variables,
  • identifying variables outside the subroutine which are accessible within it,
  • calling the subroutine,
  • providing values to its parameters,
  • specifying the return values from within its body,
  • returning to the calling program,
  • disposing of the values returned by a call,
  • handling any exceptional conditions encountered during the call,
  • packaging subroutines into a module, library, object, class, etc.

Some programming languages, such as Pascal, Fortran, and Ada, distinguish between functions or function subprograms, which provide an explicit return value to the calling program, and subroutines or procedures, which do not. Other languages, such as C and Lisp, do not make this distinction, and treat those terms as synonymous.

Advantages

The advantages of breaking a program into subroutines include:

  • reducing the duplication of code within a program,
  • enabling the reuse of code across multiple programs,
  • improving the program's readability, maintainability, and extensibility,
  • decomposing a complex programming task into simpler steps
  • dividing a large programming task among various programmers, or various stages of a project,
  • hiding implementation details from users of the subroutine, and,
  • limiting the consequences of program errors or hardware failures.

History

Self-modifying code

The first use of subprograms was on early computers that were programmed in machine code or assembly language, and did not have a specific call instruction [citation needed]. On these computers, subroutines had to be called by a sequence of lower level machine instructions, possibly implemented as a macro. These instructions typically modified the program itself, replacing the operand of a branch instruction at the end of the procedure's body so that it would return to the proper location in the calling program (the return address, usually just after the instruction that jumped into the subroutine).

Subroutine libraries

Even with this cumbersome approach, subroutines proved very useful. For one thing, they allowed the same code to be used in many different programs. Moreover, memory was a very scarce resource on early computers, and subroutines allowed significant savings in program size.

In many early computers, the program instructions were entered into memory from a punched paper tape. Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program; and the same subroutine tape could then be used by many different programs. A similar approach was used in computers whose main input was through punched cards. The name "subroutine library" originally meant a library, in the literal sense, which kept indexed collections of such tapes or card decks for collective use.

Return by indirect jump

To remove the need for self-modifying code, computer designers eventually provided an "indirect jump" instruction, whose operand, instead of being the return address, was the location of a variable that contained said address.

In those computers, instead of modifying the subroutine's return jump, the calling program would store the return address in some predefined location. When done, the subroutine had only to execute an indirect jump through that location.

In the IBM System/360, for example, the return address was typically saved in a processor register (usually as part of the BAL or BALR branch instructions). If necessary, the subroutine would save the contents of that register to a private memory location such as a register stack (or, for simple subroutines, simply avoid altering that designated register while executing). Subroutines could therefore often be coded without the need for a stack, using just a single machine instruction (BAL) to call the subroutine and another single machine instruction (BR) to return, thereby minimizing overhead significantly. Additionally, where a return code was expected from the subroutine, it was often designed to be a multiple of 4, so that it could be used as a direct branch table index, with the branch table often located immediately after the call instruction to avoid extra conditional tests, further improving efficiency.

Example:   BAL  14,SUBRTN01    go to sub-routine , using reg 14 as save register  (sets reg 15 to 0,4,8  as return value)
           B    TABLE(15)      use returned value in reg 15 to index the branch table, branching to the appropriate branch instr. 
TABLE      B    OK             (return code =00   GOOD                      )
           B    BAD            (return code =04   Invalid input             } Branch table
           B    ERROR          (return code =08   Unexpected condition      }

Jump to subroutine

Another advance was the "jump to subroutine" instruction, which combined the saving of the return address with the calling jump. In the HP 2100 assembly language, for example, one would write

       ...
       JSB MYSUB    (Calls subroutine MYSUB.)
 BB    ...          (Will return here after MYSUB is done.)

to call a subroutine called MYSUB from the main program. The subroutine would be coded as

 MYSUB NOP          (Storage for MYSUB's return address.)
 AA    ...          (Start of MYSUB's body.)
       ...
       JMP MYSUB,I  (Returns to the calling program.)

The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB,I which branched to the location stored at location MYSUB.

Compilers for FORTRAN and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls.

Incidentally, a similar technique was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the "return" address. Since circular references are not allowed for natural recalculation order, this allowed a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC.

Call stack

Most modern implementations use a call stack, a special case of the stack data structure, to implement subroutine calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address.

The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in RISC and VLIW architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose.

The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter [citation needed].

Some designs, notably some Forth implementations, used two separate call stacks, one mainly for control information (like return addresses and loop counters) and the other for data.

When stack-based procedure calls were first introduced, an important motivation was to save precious memory [citation needed]. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, which have been called but haven't returned yet). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs which include thousands of subroutines, of which only a handful are active at any given moment [citation needed]. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management.

However, another advantage of the call stack method is that it allows recursive subroutine calls, since each nested call to the same procedure gets a separate instance of its private data.

Delayed stacking

One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both.

This overhead is most obvious and objectionable in leaf procedures which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. If procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) which will be needed after Q returns.

The parts of the program which are responsible for the entry into and exit out of the subroutine (and hence, the setting up and removal of each stack frame) are called the function prologue and epilogue.

If the procedure or function itself uses stack handling commands, outside of the prologue and epilogue, e.g. to store intermediate calculation values, the programmer needs to keep track of the number of 'push' and 'pop' instructions so as not to corrupt the original return address.

Side-effects

In most imperative programming languages, subprograms may have so-called side-effects; that is, they may cause changes that remain after the subprogram has returned. It can be technically very difficult to predict whether a subprogram has a side-effect or not. In imperative programming, compilers usually assume every subprogram has a side-effect to avoid complex analysis of execution paths. Because of its side-effects, a subprogram may return different results each time it is called, even if it is called with the same arguments. A simple example is a subprogram that implements a pseudorandom number generator; that is, a subprogram that returns a random number each time it is called.

In pure functional programming languages such as Haskell, subprograms can have no side effects, and will always return the same result if repeatedly called with the same arguments. Such languages typically only support functions, since subroutines that do not return a value have no use unless they can cause a side effect. (In functional programming, writing to a file counts as a side effect.)

C and C++ examples

In the C and C++ programming languages, subprograms are referred to as "functions" (or "methods" when associated with a class). These languages use the special keyword void to indicate that a function takes no parameters (especially in C) and/or does not return any value. C and C++ functions can have side-effects, including modifying any variables whose addresses are passed as parameters (i.e., "passed by reference"). Examples:

 void function1(void) { /* some code */ }

The function does not return a value and has to be called as a stand-alone function, e.g., function1();

 int function2(void)
   { return 5; }

This function returns a result (the number 5), and the call can be part of an expression, e.g., x + function2()

 char function3 (int number)
   { char selection[] = {'S','M','T','W','T','F','S'}; 
     return selection[number];
   }

This function converts a number from 0 to 6 into the initial letter of the corresponding day of the week, namely 0 → 'S', 1 → 'M', ..., 6 → 'S'. The result of calling it might be assigned to a variable, e.g., num_day = function3(number);.

 void function4 (int* pointer_to_var)
   { (*pointer_to_var)++; }

This function does not return a value but modifies the variable whose address is passed as the parameter; it would be called with "function4(&variable_to_increment);".

Local variables, recursion and re-entrancy

A subprogram may find it useful to make use of a certain amount of "scratch" space; that is, memory used during the execution of that subprogram to hold intermediate results. Variables stored in this scratch space are referred to as local variables, and the scratch space itself is referred to as an activation record. An activation record typically has a return address that tells it where to pass control back to when the subprogram finishes.

A subprogram may have any number and nature of call sites. If recursion is supported, a subprogram may even call itself, causing its execution to suspend while another nested execution of the same subprogram occurs. Recursion is a useful technique for simplifying some complex algorithms, and breaking down complex problems. Recursive languages generally provide a new copy of local variables on each call. If the programmer desires the value of local variables to stay the same between calls, they can be declared "static" in some languages, or global values or common areas can be used.

Early languages like Fortran did not initially support recursion because variables, as well as the location for the return address, were statically allocated. Most computers before the late 1960s, such as the PDP-8, did not have support for hardware stack registers.[citation needed]

Modern languages after ALGOL, such as PL/I and C, almost invariably use a call stack, usually supported by the instruction sets of most modern computers, to provide a fresh activation record for every execution of a subprogram. That way, the nested execution is free to modify its local variables without concern for the effect on other suspended executions in progress. As nested calls accumulate, a call stack structure is formed, consisting of one activation record for each suspended subprogram. In fact, this stack structure is virtually ubiquitous, and so activation records are commonly referred to as stack frames.

Some languages such as Pascal and Ada also support nested subroutines, which are subroutines callable only within the scope of an outer (parent) subroutine. Inner subroutines have access to the local variables of the outer subroutine which called them. This is accomplished by storing extra context information within the activation record, also known as a display.

If a subprogram can function properly even when called while another execution is already in progress, that subprogram is said to be re-entrant. A recursive subprogram must be re-entrant. Re-entrant subprograms are also useful in multi-threaded situations, since multiple threads can call the same subprogram without fear of interfering with each other.

In a multi-threaded environment, there is generally more than one stack. An environment which fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records.

Overloading

It is sometimes desirable to have a number of functions with the same name, but operating on different types of data, or with different parameter profiles. For example, a square root function might be defined to operate on reals, complex values, or matrices. The algorithm to be used in each case is different, and the return result may be different. By writing three separate functions with the same name, the programmer has the convenience of not having to remember different names for each type of data. Further, if a subtype can be defined for the reals, to separate positive and negative reals, two functions can be written for the reals, one to return a real when the parameter is positive, and another to return a complex value when the parameter is negative.

When a series of functions with the same name can accept different parameter profiles or parameters of different types, each of the functions is said to be overloaded.

As another example, a subroutine might construct an object that will accept directions, and trace its path to these points on screen. There is a plethora of parameters that could be passed in to the constructor (colour of the trace, starting x and y co-ordinates, trace speed). If the programmer wanted the constructor to be able to accept only the colour parameter, they could provide another constructor that accepts only colour, which in turn calls the constructor with all the parameters, passing in a set of default values for all the others (x and y would generally be centered on screen or placed at the origin, and the speed would be set to another value of the coder's choosing).

Conventions

A number of conventions for the coding of subprograms have been developed. A common preference is that the name of a subprogram should be a verb when it performs a task, an adjective when it answers a yes/no inquiry, and a noun when it stands in for a computed value.

Experienced programmers recommend that a subprogram perform only one task. If a subprogram performs more than one task, it should be split up into more subprograms. They argue that subprograms are key components in maintaining code and their roles in the program must be distinct.

Some advocate that each subprogram should have minimal dependency on other pieces of code. For example, they see the use of global variables as unwise, because it introduces tight coupling between subprograms and global variables. If such coupling is not necessary, they advise refactoring subprograms to take parameters instead. This practice is controversial because it tends to increase the number of parameters passed to subprograms.

Efficiency and inlining

There is a runtime overhead associated with:

  • passing parameters (which now frequently also involves copying existing data values if pointers are not used)
  • calling the subprogram
  • returning to the caller

and frequently

  • testing a return code passed by the subroutine (although the test can be eliminated in Assembler - see earlier example)

The actual overhead for each invocation depends on the local context at the point of call and the requirements specified in the architecture's application binary interface.

One technique used to eliminate this overhead is inline expansion of the subprogram call site (literally, not using a subroutine). However, "inlining" often increases code size (if the subroutine is used more than once) and can introduce cache misses into a previously optimised block of code.

Dynamic dispatch can introduce further overhead, although the performance difference between indirect and direct calls on commodity CPUs has narrowed since the 1980s, thanks to research and work done by CPU designers (driven by the increasing popularity of object-oriented programming, which uses dynamic dispatch extensively). Software techniques have also been developed to make dynamic dispatch more efficient.

Related terms and clarification

Different programming languages and methodologies possess notions and mechanisms related to subprograms:

  • Subroutine is practically synonymous with "subprogram." The former term may derive from the terminology of assembly languages and Fortran.
  • Function and Procedure are also synonymous with "subprogram", with the distinction (in some programming languages) that "functions" produce return values and appear in expressions, whereas "procedures" produce no return values and appear in statements. Hence, a subprogram that calculates the square root of a number would be a "function" (e.g., y = sqrt(x)), while a subprogram to print out a number might be a "procedure" (e.g., print(x)). This distinction is not found in all programming languages; notably, the C family of programming languages treats the two terms interchangeably. See also: Command-Query Separation.
  • Predicate is, in general, a boolean-valued function (a function that returns a boolean). In logic programming languages, often all subroutines are called "predicates", since they primarily determine success or failure.
  • Method or Member function is a special kind of subprogram used in object-oriented programming that describes some behaviour of an object.
  • Closure is a subprogram together with the values of some of its variables captured from the environment in which it was created.
  • Coroutine is a subprogram that can suspend itself and return control to its caller before completing, to be resumed later from the point where it left off.
  • Event handler, or simply "handler," is a subprogram that is called in response to an "event", such as a computer user moving the mouse or typing on the keyboard. The AppleScript scripting language simply uses the term "handler" as a synonym for subprogram. Event handlers are often used to respond to an Interrupt - in which case they may be termed an Interrupt handler.

People who write compilers, and people who write in assembly language, deal with the low-level details involved in implementing subroutines:

Most subprograms typically implement the idea of an algorithm: they are given a finite amount of information when they start, they are given no more information while they run, and they give out no information until they end. However, each process, job, task, or thread is a program or subprogram that typically sends and receives information many times before it ends.

References

  1. ^ Donald E. Knuth. The Art of Computer Programming, Volume I: Fundamental Algorithms. Addison-Wesley. ISBN 0-201-89683-4.
  2. ^ O.-J. Dahl; E. W. Dijkstra; C. A. R. Hoare (1972). Structured Programming. Academic Press. ISBN 0-12-200550-3.
  3. ^ Wilkes, M. V.; Wheeler, D. J.; Gill, S. (1951). Preparation of Programs for an Electronic Digital Computer. Addison-Wesley.