What isn't Inverse Problems research about?
For a general (non-scientist, non-mathematician) reader, the topic of inverse problems comes across as quite striking. But it is described in ways sometimes so general (the aim of obtaining parameters from data) that it's a little hard to see why, for instance, statistical theory, at least in its inferential aspects, isn't regarded as a subfield of inverse problems research. Is that in fact the implication? Somehow I get the feeling that such an implication is not intended. Is the phrase "inverse problems" short for something? It doesn't seem to be short for "inverse variational or optimizational problems," but, even after visiting various of the links, I'm unsure. (Still, thank you to whoever wrote the article, I was glad to find it). - Ben Udell
- In my opinion, as someone who works in the field, it is not a very well defined term. "Ill-posed problem" is a better term, but I think perhaps the use of "inverse problem" arose so as not to seem that we were doing something impossible. However, among those who work in the area, there is definitely a collection of problems we all agree are inverse problems, and I expect some that we would not all agree on as well. It overlaps with statistical inference, and indeed there is a statistical approach to IP, but there are many who treat IP from a non-statistical point of view. I'm sorry, but there it is. It is a vague term. Perhaps User:Tarantola can explain better? Billlion 17:49, 6 June 2006 (UTC)
- Thank you. The picture I'm getting is that it's a question of going from incomplete data to estimate or adjust parameters of a larger set. That's so general that it seems to include statistical inference without necessarily being limited to it. I see nothing objectionable in the idea of something's including statistical inference and inverse optimization as well, in that way. After all, statistical theory deals with a problem inverse to that of probability theory. One seems to discern a whole family of areas dealing with problems inverse to those of deductive maths of optimization, probability, information, even logic. But I just get the feeling from the article that such a general sense of the phrase "inverse problem" would be considered too general for what the article is referring to -- for instance, you speak of "overlap with" statistical inference rather than "inclusion" or "encompassment" of it. Anyway, I've rambled enough, if my question isn't clear at this point, then it's my fault, I'll need to do more than surf around the 'Net, maybe go out and buy a book or something! - Ben Udell
- Update: Well, I haven't bought a book, though I have read some of Tarantola's online discussion. As far as I can discern through the mists of my own ignorance, there are basically two issues involved in distinguishing the inverse problem as discussed in the Wikipedia article (hereinafter capitalized "the Inverse Problem") from the seemingly more general terrain of problems which are "inverse" to those studied in deductive math theories of optimization, probability, information, & logic.
- 1. The Inverse Problem pertains to an actually developing interdisciplinary discipline of research for which it could be counterproductive to mount too ambitiously general a definition, e.g., such as to encompass among its subdivisions the whole of statistical theory, statistical theory of stochastic processes, etc.
- 2. Inverse Problem research is not always carried out in a "general" way; instead it often takes the form of research into specific physical questions, where the generation of the explanatory content of hypotheses looms larger as a desideratum than it does, say, in general research in statistics. A field which understands itself as dealing with a problem inverse to that of probability or deductive logic will tend to understand itself as drawing conclusions usually in the form of inductive generalizations (statistically based or otherwise; to be sure, I don't mean mathematical induction). Inverse Problem research is not prepared to limit its conclusions to such inductive generalizations, which tend to answer problems inverse to strictly deductive math problems of optimization, probability, information, and logic. Instead, Inverse Problem research is concerned with "ill-posed" problems generally (or with some cross-section thereof), such that it includes the generation of the explanatory content of hypotheses.
- Are 1. & 2., more or less, the issues? - Ben Udell
Bibliography too long
I suggest we need a severe cull of references; there are simply too many. It might be appropriate to put them back in more specific articles (e.g. Inverse problems in geophysics)? Please discuss Billlion 09:48, 20 August 2006 (UTC)
To predict the result of a measurement requires (1) a model of the system under investigation, and (2) a physical theory linking the parameters of the model to the parameters being measured. This prediction of observations, given the values of the parameters defining the model, constitutes the "normal problem," or, in the jargon of inverse problem theory, the forward problem. The "inverse problem" consists in using the results of actual observations to infer the values of the parameters characterizing the system under investigation. Inverse problems may be difficult to solve for at least two different reasons: (1) different values of the model parameters may be consistent with the data (knowing the height of the main-mast is not sufficient for calculating the age of the captain), and (2) discovering the values of the model parameters may require the exploration of a huge parameter space (finding a needle in a 100-dimensional haystack is difficult). Although most formulations of inverse problems proceed directly to the setting of an optimization problem, it is actually best to start from a probabilistic formulation, the optimization formulation then appearing as a by-product. Consider a manifold $M$ with a notion of volume. Then for any $A \subseteq M$, $V(A) = \int_A dv$ (1)
A volumetric probability is a function $f$ that to any $A \subseteq M$ associates its probability $P(A) = \int_A dv\, f$ (2)

If $M$ is a metric manifold endowed with some coordinates $x = \{x^1, \ldots, x^n\}$, then $dv = \sqrt{\det g}\; dx$ (3)

and $P(A) = \int_A dv(x)\, f(x)$ (4) $= \int_A dx\, \sqrt{\det g(x)}\, f(x)$ (5)

(Note that the volumetric probability $f$ is an invariant, but the probability density $\bar{f}(x) = \sqrt{\det g(x)}\, f(x)$ is not; it is a density.) A basic operation with volumetric probabilities is their product, $(f_1 \cdot f_2)(P) = \frac{1}{\nu}\, f_1(P)\, f_2(P)$ (6)

where $\nu = \int_M dv\, f_1 f_2$. This corresponds to a "combination of probabilities" well suited to many basic inference problems. Consider an example in which two planes make two estimations of the geographical coordinates of a shipwrecked man. Let the probabilities be represented by the two volumetric probabilities $f_1$ and $f_2$. The volumetric probability that combines these two pieces of information is $f(P) = \frac{1}{\nu}\, f_1(P)\, f_2(P)$ (7)
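The "combination of probabilities" of equations (6) and (7) can be illustrated numerically. This is a hedged sketch, not part of the original text: it uses a single 1-D coordinate rather than two geographic coordinates, and all positions and widths below are made up for illustration.

```python
# Sketch of eq. (6)-(7): two Gaussian position estimates of the shipwrecked
# man (one per plane) are multiplied and renormalized on a grid.
import numpy as np

def gaussian(x, mu, sigma):
    """Unnormalized 1-D Gaussian volumetric probability."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-10.0, 10.0, 2001)    # one geographic coordinate (grid)
dx = x[1] - x[0]
f1 = gaussian(x, mu=-1.0, sigma=2.0)  # estimate from the first plane
f2 = gaussian(x, mu=3.0, sigma=1.0)   # estimate from the second plane

f = f1 * f2
f /= f.sum() * dx                     # the 1/nu normalization of eq. (6)

x_best = x[np.argmax(f)]
# For Gaussians the combined peak is the precision-weighted mean:
# (mu1/s1^2 + mu2/s2^2) / (1/s1^2 + 1/s2^2) = (-0.25 + 3.0) / 1.25 = 2.2
```

Note how the combined estimate sits closer to the second plane's estimate, which is the more precise of the two; this is the usual behavior of a product of Gaussians.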
This operation of product of volumetric probabilities extends to the following case: 1. There is a volumetric probability $f$ defined on a first manifold $\mathcal{M}$. 2. There is another volumetric probability $\rho$ defined on a second manifold $\mathcal{O}$. 3. There is a mapping $Q = Q(P)$ from $\mathcal{M}$ into $\mathcal{O}$. Then, the basic operation introduced above becomes $\tilde{f}(P) = \frac{1}{\nu}\, f(P)\, \rho(Q(P))$ (8)

where $\nu$ is a normalization constant. In a typical inverse problem, there is: 1. A set of model parameters $\mathbf{m} = \{m^1, m^2, \ldots\}$. 2. A set of observable parameters $\mathbf{o} = \{o^1, o^2, \ldots\}$. 3. A relation $\mathbf{o} = \mathbf{o}(\mathbf{m})$ predicting the outcome of the possible observations. The model parameters are coordinates on the model parameter manifold $\mathcal{M}$, while the observable parameters are coordinates over the observable parameter manifold $\mathcal{O}$. When the points on $\mathcal{M}$ are denoted $M$, $M'$, ... and the points on $\mathcal{O}$ are denoted $O$, $O'$, ..., the relation between the model parameters and the observable parameters is written $O = O(M)$. The three basic elements of a typical inverse problem are: 1. Some a priori information on the model parameters, represented by a volumetric probability $\rho_{\mathcal{M}}$ defined over $\mathcal{M}$. 2. Some experimental information obtained on the observable parameters, represented by a volumetric probability $\rho_{\mathcal{O}}$ defined over $\mathcal{O}$. 3. The "forward modeling" relation $O = O(M)$ that we have just seen. The use of equation (8) leads to $\sigma_{\mathcal{M}}(M) = \frac{1}{\nu}\, \rho_{\mathcal{M}}(M)\, \rho_{\mathcal{O}}(O(M))$ (9)

where $\nu$ is a normalization constant. This volumetric probability represents the resulting information one has on the model parameters (obtained by combining the available information). Equation (9) provides the most general solution to the inverse problem. Common methods (Monte Carlo, optimization, etc.) can be seen as particular uses of this equation. Considering an example from sampling, sample the a priori volumetric probability $\rho_{\mathcal{M}}$ to obtain (many) random models $M_1$, $M_2$, .... For each model $M_i$, solve the forward modeling problem, $O_i = O(M_i)$. Give to each model $M_i$ a probability of 'survival' proportional to $\rho_{\mathcal{O}}(O_i)$. The surviving models $M_j$, $M_k$, ... are samples of the a posteriori volumetric probability $\sigma_{\mathcal{M}}(M) = \frac{1}{\nu}\, \rho_{\mathcal{M}}(M)\, \rho_{\mathcal{O}}(O(M))$ (10)
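The survival-of-the-fittest sampling scheme around equation (9) can be sketched in a few lines. This is a minimal, hypothetical illustration: the forward relation (o = 2m), the observation, and the prior width are all invented, chosen linear-Gaussian so the answer can be checked against the exact posterior.

```python
# Sketch of the sampling scheme: draw models from the prior, forward-model
# each one, and keep it with probability proportional to the data likelihood.
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    return 2.0 * m                       # hypothetical forward relation o = o(m)

def rho_O(o, o_obs=4.0, sigma=0.5):
    """Unnormalized likelihood of the observation (values <= 1)."""
    return np.exp(-0.5 * ((o - o_obs) / sigma) ** 2)

# 1. Sample the a priori volumetric probability (a broad Gaussian prior).
m_prior_samples = rng.normal(loc=0.0, scale=3.0, size=200_000)

# 2.-3. Forward-model each sample and let it "survive" with probability
# proportional to rho_O(o(m)); rho_O is already <= 1, so it is a valid
# acceptance probability.
survive = rng.random(m_prior_samples.size) < rho_O(forward(m_prior_samples))
m_post = m_prior_samples[survive]

# The survivors sample the posterior. For this linear-Gaussian toy case the
# exact posterior mean is (0/9 + 2*16) / (1/9 + 16) ~= 1.986.
post_mean = m_post.mean()
```

In realistic problems the forward model is expensive and nonlinear, which is why more elaborate Monte Carlo schemes (e.g. Metropolis-style walks) are usually preferred over this plain rejection step, but the accept/reject logic is the same.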
Considering an example from least-squares fitting, the model parameter manifold may be a linear space, with vectors denoted $\mathbf{m}$, $\mathbf{m}'$, ..., and the a priori information may have the Gaussian form $\rho_{\mathcal{M}}(\mathbf{m}) = k \exp\left(-\tfrac{1}{2}(\mathbf{m} - \mathbf{m}_{\mathrm{prior}})^T C_M^{-1} (\mathbf{m} - \mathbf{m}_{\mathrm{prior}})\right)$ (11)

The observable parameter manifold may be a linear space, with vectors denoted $\mathbf{o}$, $\mathbf{o}'$, ..., and the information brought by measurements may have the Gaussian form $\rho_{\mathcal{O}}(\mathbf{o}) = k \exp\left(-\tfrac{1}{2}(\mathbf{o} - \mathbf{o}_{\mathrm{obs}})^T C_O^{-1} (\mathbf{o} - \mathbf{o}_{\mathrm{obs}})\right)$ (12)

The forward modeling relation becomes, with these notations, $\mathbf{o} = \mathbf{o}(\mathbf{m})$ (13)

Then, the posterior volumetric probability for the model parameters is $\sigma_{\mathcal{M}}(\mathbf{m}) = k \exp(-S(\mathbf{m}))$ (14)

where the misfit function $S(\mathbf{m})$ is the sum of squares $S(\mathbf{m}) = \tfrac{1}{2}(\mathbf{o}(\mathbf{m}) - \mathbf{o}_{\mathrm{obs}})^T C_O^{-1} (\mathbf{o}(\mathbf{m}) - \mathbf{o}_{\mathrm{obs}}) + \tfrac{1}{2}(\mathbf{m} - \mathbf{m}_{\mathrm{prior}})^T C_M^{-1} (\mathbf{m} - \mathbf{m}_{\mathrm{prior}})$ (15)

The maximum likelihood model is the model maximizing $\sigma_{\mathcal{M}}(\mathbf{m})$. It is also the model minimizing $S(\mathbf{m})$. It can be obtained using a quasi-Newton algorithm, $\mathbf{m}_{k+1} = \mathbf{m}_k - H_k^{-1} \gamma_k$ (16)

where the Hessian of $S$ is $H_k = O_k^T C_O^{-1} O_k + C_M^{-1}$ (17)

and the gradient of $S$ is $\gamma_k = O_k^T C_O^{-1} (\mathbf{o}(\mathbf{m}_k) - \mathbf{o}_{\mathrm{obs}}) + C_M^{-1} (\mathbf{m}_k - \mathbf{m}_{\mathrm{prior}})$ (18)

Here, the tangent linear operator $O_k$ is defined via $\mathbf{o}(\mathbf{m}_k + \delta\mathbf{m}) = \mathbf{o}(\mathbf{m}_k) + O_k\, \delta\mathbf{m} + \ldots$ (19)
As we have seen, the model $\mathbf{m}_\infty$ at which the algorithm converges maximizes the posterior volumetric probability $\sigma_{\mathcal{M}}(\mathbf{m})$. To estimate the posterior uncertainties, one can demonstrate that the covariance operator of the Gaussian volumetric probability that is tangent to $\sigma_{\mathcal{M}}$ at $\mathbf{m}_\infty$ is $\tilde{C}_M = (O_\infty^T C_O^{-1} O_\infty + C_M^{-1})^{-1}$. Chadman8000 (talk) 02:37, 5 March 2009 (UTC) chad miller
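The quasi-Newton iteration of equations (16)-(18) can be sketched numerically. The sketch below is hypothetical in every detail: the forward model, its Jacobian, the observations, and the covariances are all invented, and chosen so the iteration has an obvious exact answer (m = (2, 3)) to check against.

```python
# Gauss-Newton style iteration per eq. (16)-(18), with the tangent linear
# operator of eq. (19) supplied analytically.
import numpy as np

def forward(m):
    # hypothetical nonlinear forward model: o = [m1^2, m1*m2]
    return np.array([m[0] ** 2, m[0] * m[1]])

def jacobian(m):
    # tangent linear operator O_k of eq. (19)
    return np.array([[2 * m[0], 0.0],
                     [m[1],     m[0]]])

o_obs = np.array([4.0, 6.0])        # synthetic observations (exact root: m = (2, 3))
m_prior = np.array([1.5, 2.5])
C_O_inv = np.eye(2) / 0.01          # inverse data covariance (precise data)
C_M_inv = np.eye(2) / 100.0         # inverse prior covariance (loose prior)

m = m_prior.copy()
for _ in range(20):
    Ok = jacobian(m)
    gamma = Ok.T @ C_O_inv @ (forward(m) - o_obs) + C_M_inv @ (m - m_prior)  # eq. (18)
    H = Ok.T @ C_O_inv @ Ok + C_M_inv                                        # eq. (17)
    m = m - np.linalg.solve(H, gamma)                                        # eq. (16)

# Posterior covariance estimate: inverse Hessian at convergence,
# the tangent-Gaussian covariance mentioned above.
C_post = np.linalg.inv(jacobian(m).T @ C_O_inv @ jacobian(m) + C_M_inv)
```

Because the prior is loose and the data precise, the converged model sits essentially at the data-fitting solution, with only a tiny pull toward the prior; tightening `C_M_inv` would shift the answer toward `m_prior`, exactly as equation (15) suggests.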
This article needs a new introduction including plain English language that ordinary people might understand, to have some idea what this concept is. -126.96.36.199 22:23, 6 November 2007 (UTC)
- From the article:-
- An inverse problem is the task that often occurs in many branches of science and mathematics where the values of some model parameter(s) must be obtained from the observed data.
- That seems pretty plain to me. What did you have in mind? Billlion 23:49, 6 November 2007 (UTC)
Too vague and too specific?
This article contains some interesting material, but I think it has some rather serious flaws. First, it is quite difficult to follow for everyone except mathematicians—and most people who do inverse modeling are physicists or engineers, not mathematicians. (For example, the discussion of compactness is not going to be relevant to >95% of people who are interested in inverse modeling.)
In the section on nonlinear inverse problems, sweeping—and wrong—generalizations are made. It is not essential to answer the Hadamard questions, and I can't see how answering them "solves" the problem from "the theoretical point of view". Additionally, more useful (i.e., understandable) examples could be given, although there isn't anything wrong with the examples already there.
A confusing statement in the "Conceptual understanding" section
The "Conceptual understanding" section says,
- The forward problem [...]:
- Data → Model parameters
- The inverse problem [...]:
- Model parameters → Data
and that seems opposite to that which the articles says elsewhere, that the inverse problem is to find or adjust model parameters on the basis of observed data, i.e., to move from data to ("→") model parameters. Would it be possible to clarify the "Conceptual understanding" section in regard to that for non-experts like me? The Tetrast (talk) 16:26, 27 January 2013 (UTC).
- I agree with you; I came across this today and also found it made no sense. I think it's the wrong way around: in the forward problem, we predict data from parameters, and in the inverse problem, we determine parameters from the data. I think it is most likely a mistake. Astrofrog (talk) 12:35, 13 February 2013 (UTC)