A surrogate model is an engineering method used when an outcome of interest cannot easily be measured directly, so a model of the outcome is used instead. Most engineering design problems require experiments and/or simulations to evaluate design objective and constraint functions as functions of design variables. For example, in order to find the optimal airfoil shape for an aircraft wing, an engineer simulates the air flow around the wing for different shape variables (length, curvature, material, ...). For many real-world problems, however, a single simulation can take many minutes, hours, or even days to complete. As a result, routine tasks such as design optimization, design space exploration, sensitivity analysis and what-if analysis become impossible, since they require thousands or even millions of simulation evaluations.
One way of alleviating this burden is by constructing approximation models, known as surrogate models, response surface models, metamodels or emulators, that mimic the behavior of the simulation model as closely as possible while being computationally cheap(er) to evaluate. Surrogate models are constructed using a data-driven, bottom-up approach. The exact inner workings of the simulation code are not assumed to be known (or even understood); solely the input-output behavior is important. A model is constructed based on modeling the response of the simulator to a limited number of intelligently chosen data points. This approach is also known as behavioral modeling or black-box modeling, though the terminology is not always consistent. When only a single design variable is involved, the process is known as curve fitting.
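The single-variable curve-fitting case can be sketched in a few lines: evaluate an "expensive" function (here a cheap analytic stand-in, chosen only for illustration) at a handful of inputs, and fit a polynomial surrogate to the observed input-output pairs.

```python
import numpy as np

# Stand-in for an expensive simulation (in practice, minutes to days per call).
def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x

# Evaluate the "simulation" at a small number of chosen inputs.
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_simulation(x_train)

# Fit a degree-5 polynomial surrogate to the observed input-output pairs.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# The surrogate is now cheap to evaluate anywhere in the design space.
print(surrogate(1.25), expensive_simulation(1.25))
```

Only the input-output pairs enter the fit; nothing about the inner workings of `expensive_simulation` is used, which is the black-box character described above.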
Though surrogate models are most commonly used in lieu of experiments and simulations in engineering design, surrogate modeling may be applied in many other areas of science that involve expensive experiments and/or function evaluations.
The scientific challenge of surrogate modeling is the generation of a surrogate that is as accurate as possible, using as few simulation evaluations as possible. The process comprises three major steps which may be interleaved iteratively:
- Sample selection (also known as sequential design, optimal experimental design (OED) or active learning)
- Construction of the surrogate model and optimization of the model parameters (bias-variance trade-off)
- Appraisal of the accuracy of the surrogate.
The accuracy of the surrogate depends on the number and location of samples (expensive experiments or simulations) in the design space. Various design of experiments (DOE) techniques cater to different sources of errors, in particular errors due to noise in the data or errors due to an improper surrogate model.
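One widely used DOE technique for choosing sample locations is Latin hypercube sampling, which spreads points so that each equal-width bin in every dimension receives exactly one sample. A minimal numpy sketch (the helper name is an illustrative assumption, not a standard API):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube design on the unit cube: each of the n_samples
    equal-width bins per dimension receives exactly one point."""
    # One uniform draw inside each bin, per dimension.
    pts = (np.arange(n_samples) + rng.random((n_dims, n_samples))) / n_samples
    # Shuffle the bin order independently per dimension to decorrelate axes.
    for d in range(n_dims):
        rng.shuffle(pts[d])
    return pts.T  # shape (n_samples, n_dims)

rng = np.random.default_rng(0)
X = latin_hypercube(10, 2, rng)
print(X.shape)
```

Compared with a plain random sample, such a design guarantees coverage along each axis, which matters when each point costs an expensive simulation.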
Types of surrogate models
The most popular surrogate models are polynomial response surfaces, Kriging, support vector machines and artificial neural networks. For most problems, the nature of the true function is not known a priori, so it is not clear which surrogate model will be most accurate. In addition, there is no consensus on how to obtain the most reliable estimates of the accuracy of a given surrogate.
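As an illustration of a kernel-based surrogate (related in spirit to Kriging), the sketch below fits a Gaussian radial-basis-function interpolant with plain numpy; the test function, grid, and length scale are illustrative assumptions only.

```python
import numpy as np

def fit_rbf(X, y, length_scale=0.5):
    """Fit a Gaussian RBF interpolant by solving K w = y
    (no regularization; assumes distinct training points)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * length_scale**2))
    return X, np.linalg.solve(K, y), length_scale

def predict_rbf(model, X_new):
    X, w, ls = model
    sq_dists = np.sum((X_new[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * ls**2)) @ w

# A cheap 2-D stand-in for an expensive simulation response.
def response(X):
    return (X[:, 1] - 0.1 * X[:, 0] ** 2) ** 2 + 0.1 * np.cos(X[:, 0])

gx, gy = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 4))
X_train = np.column_stack([gx.ravel(), gy.ravel()])
model = fit_rbf(X_train, response(X_train))

# An interpolating surrogate reproduces the training responses exactly.
print(np.max(np.abs(predict_rbf(model, X_train) - response(X_train))))
```

Full Kriging additionally provides a predictive variance at each point, which is what makes it popular for deciding where to place the next expensive sample.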
A recent survey of surrogate-assisted evolutionary optimization techniques can be found in Jin (2011).
Recently proposed comparison-based surrogate models (e.g. Ranking Support Vector Machines) for evolutionary algorithms, such as CMA-ES, make it possible to preserve some invariance properties of surrogate-assisted optimizers:
- 1. Invariance with respect to monotonic transformations of the function (scaling)
- 2. Invariance with respect to orthogonal transformations of the search space (rotation).
Applications
An important distinction can be made between two different applications of surrogate models: design optimization and design space approximation (also known as emulation).
In surrogate-model-based optimization, an initial surrogate is constructed using some of the available budget of expensive experiments and/or simulations. The remaining experiments/simulations are run for designs which the surrogate model predicts may have promising performance. The process usually takes the form of the following search/update procedure.
- 1. Initial sample selection (the experiments and/or simulations to be run)
- 2. Construct surrogate model
- 3. Search surrogate model (the model can be searched extensively, e.g. using a genetic algorithm, as it is cheap to evaluate)
- 4. Run and update experiment/simulation at new location(s) found by search and add to sample
- 5. Iterate steps 2 to 4 until out of time or design 'good enough'
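A minimal sketch of this loop, assuming a one-dimensional objective and a simple polynomial surrogate (both chosen purely for illustration):

```python
import numpy as np

# Stand-in "expensive" objective; assume a budget of ~15 evaluations.
def expensive_objective(x):
    return (x - 0.7) ** 2 + 0.05 * (x - 0.2) ** 4

# Step 1: initial sample selection.
X = list(np.linspace(0.0, 1.5, 5))
Y = [expensive_objective(x) for x in X]

for _ in range(10):  # Step 5: iterate until the budget is spent.
    # Step 2: construct the surrogate model.
    surrogate = np.poly1d(np.polyfit(X, Y, deg=4))
    # Step 3: search the surrogate extensively; it is cheap to evaluate.
    grid = np.linspace(0.0, 1.5, 2001)
    x_new = grid[np.argmin(surrogate(grid))]
    # Step 4: run the expensive simulation at the promising design
    # and add the result to the sample.
    X.append(x_new)
    Y.append(expensive_objective(x_new))

best = X[int(np.argmin(Y))]
print(best)
```

In practice the search in step 3 often uses a genetic algorithm or an infill criterion such as expected improvement rather than a grid, but the structure of the loop is the same.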
Depending on the type of surrogate used and the complexity of the problem, the process may converge on a local or global optimum, or perhaps none at all.
In design space approximation, one is not interested in finding the optimal parameter vector but rather in the global behavior of the system. Here the surrogate is tuned to mimic the underlying model as closely as needed over the complete design space. Such surrogates are a useful, cheap way to gain insight into the global behavior of the system. Optimization can still occur as a post-processing step, although with no update procedure (see above) the optimum found cannot be validated.
See also
- Linear approximation
- Response surface methodology
- Space mapping
- Surrogate endpoint
- Surrogate data
- Fitness approximation
- Computer experiment
References
- Jin, Y. (2011). "Surrogate-assisted evolutionary computation: Recent advances and future challenges." Swarm and Evolutionary Computation, 1(2):61-70.
- Loshchilov, I., Schoenauer, M., Sebag, M. (2010). "Comparison-Based Optimizers Need Comparison-Based Surrogates." Parallel Problem Solving from Nature (PPSN XI), Springer, pp. 364-373.
- Jones, D.R. (2001). "A taxonomy of global optimization methods based on response surfaces." Journal of Global Optimization, 21:345-383.
- Queipo, N.V., Haftka, R.T., Shyy, W., Goel, T., Vaidyanathan, R., Tucker, P.K. (2005). "Surrogate-based analysis and optimization." Progress in Aerospace Sciences, 41:1-28.
- Gorissen, D., Couckuyt, I., Demeester, P., Dhaene, T., Crombecq, K. (2010). "A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design." Journal of Machine Learning Research, 11:2051-2055.
- Pham, T-Q., Kamusella, A., Neubert, H. (2011). "Auto-Extraction of Modelica Code from Finite Element Analysis or Measurement Data." 8th International Modelica Conference, Dresden, 20-22 March 2011.