Credibility theory


Credibility theory is a branch of actuarial science used to quantify how unique a particular outcome will be compared to an outcome deemed as typical. It was developed originally as a method to calculate the risk premium by combining the individual risk experience with the class risk experience.

A non-technical example

Suppose we have a large box of identical cars. We roll 10 of them down a ramp, and one after another they all turn left, so we would expect the next car to turn left as well. Say we then roll car number 11 and it goes right: car 11 has now earned some credibility, or credible experience. We might decide this car is broken, or special, but it is definitely worth noticing. Note, however, that we tried only 11 cars. What if we had rolled 100 or 10,000 cars before we found the odd one? That occurrence would seem even stranger.

A likely next step for the person rolling cars is to try that car again, just to make sure it really is broken or special. So we take our special car and roll it down the ramp once more to find out how unique it is. If it goes left, we might decide the first run was a one-time fluke and there is nothing special about this car. If it goes right again, we might convince ourselves we have found a genuine deviation from the norm. It is advantageous for insurers to look for such deviations. Quantifying the difference between what we expect to see, given similarity, and what we actually observe, given uniqueness, uses the statistical tools of credibility theory.

Actuarial credibility

Actuarial credibility describes an approach used by actuaries to improve statistical estimates. Although the approach can be formulated in either a frequentist or Bayesian statistical setting, the latter is often preferred because of the ease of recognizing more than one source of randomness through both "sampling" and "prior" information. In a typical application, the actuary has an estimate X based on a small set of data, and an estimate M based on a larger but less relevant set of data. The credibility estimate is ZX + (1-Z)M,[1] where Z is a number between 0 and 1 (called the "credibility weight" or "credibility factor") calculated to balance the sampling error of X against the possible lack of relevance (and therefore modeling error) of M.
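The blend ZX + (1-Z)M can be sketched in a few lines of Python; the function name and sample inputs below are illustrative, not taken from any actuarial library:

```python
def credibility_estimate(x, m, z):
    """Blend an individual estimate x with a collective estimate m.

    z is the credibility weight in [0, 1]: z = 1 trusts the
    individual data fully, z = 0 falls back on the collective data.
    """
    if not 0.0 <= z <= 1.0:
        raise ValueError("credibility weight z must lie in [0, 1]")
    return z * x + (1.0 - z) * m

# Individual experience x = 2.0, collective experience m = 5.0:
print(credibility_estimate(2.0, 5.0, 0.25))  # 0.25*2 + 0.75*5 = 4.25
```

As the individual data set grows, the sampling error of x shrinks, so a larger z is justified.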

When an insurance company calculates the premium it will charge, it divides the policyholders into groups. For example, it might divide motorists by age, sex, and type of car, a young man driving a fast car being considered a high risk and an old woman driving a small car being considered a low risk. The division balances two requirements: the risks in each group must be sufficiently similar, and the group must be sufficiently large that a meaningful statistical analysis of the claims experience can be done to calculate the premium. This compromise means that none of the groups contains only identical risks. The problem is then to devise a way of combining the experience of the group with the experience of the individual risk so as to calculate the premium more accurately. Credibility theory provides a solution to this problem.

For actuaries, it is important to know credibility theory in order to calculate a premium for a group of insurance contracts. The goal is to set up an experience rating system to determine next year's premium, taking into account not only the individual experience of the group, but also the collective experience.

There are two extreme positions. One is to charge everyone the same premium, estimated by the overall mean \overline{X} of the data. This makes sense only if the portfolio is homogeneous, meaning that all risk cells have identical mean claims. If the portfolio is not homogeneous, charging a premium in this way overcharges the "good" risks and undercharges the "bad" risks, so the "good" risks take their business elsewhere, leaving the insurer with only bad risks. This is an example of adverse selection.

The other extreme is to charge group j its own average claims, \overline{X_j}, as the premium for the insured. This method is appropriate if the portfolio is heterogeneous, provided the claims experience is fairly large. To compromise between these two extreme positions, we take a weighted average of the two:

C = z_j\overline{X_j} + (1 - z_j) \overline{X}\,

Here z_j has an intuitive meaning: it expresses how "credible" the experience of cell j is. If it is high, a larger weight is attached to the individual mean \overline{X_j}. In this context, z_j is called a credibility factor, and the premium so charged is called a credibility premium.

If the group were completely homogeneous then it would be reasonable to set z_j=0, while if the group were completely heterogeneous then it would be reasonable to set z_j=1. Using intermediate values is reasonable to the extent that both individual and group history are useful in inferring future individual behavior.
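A sketch of the credibility premium C = z_j\overline{X_j} + (1 - z_j)\overline{X} over several cells, with made-up claims data and an assumed fixed credibility factor for every cell (how z_j is actually estimated is beyond this example):

```python
# Hypothetical claims data: each cell j maps to its observed claims.
cells = {
    "young_fast_car": [900.0, 1100.0, 1300.0],
    "older_small_car": [300.0, 200.0, 400.0],
}

# Overall mean across all observations (the collective premium).
all_claims = [c for claims in cells.values() for c in claims]
overall_mean = sum(all_claims) / len(all_claims)

# Assumed credibility factor z_j = 0.6 for every cell.
z = 0.6
premiums = {}
for name, claims in cells.items():
    cell_mean = sum(claims) / len(claims)
    premiums[name] = z * cell_mean + (1 - z) * overall_mean
    print(name, round(premiums[name], 2))
```

With z = 0.6, each cell's premium sits between its own mean and the overall mean of 700, pulled 60% of the way toward the cell's own experience.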

For example, an actuary has accident and payroll history data for a shoe factory suggesting an accident rate of 3.1 accidents per million dollars of payroll. She also has industry statistics (based on all shoe factories) suggesting that the rate is 7.4 accidents per million. With a credibility factor Z of 30%, she would estimate the rate for the factory as 0.30(3.1) + 0.70(7.4) = 6.1 accidents per million.
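The arithmetic of the shoe-factory example can be checked directly (same numbers as in the text):

```python
# Rates from the shoe-factory example, in accidents per $1M payroll.
x = 3.1   # individual (factory) rate
m = 7.4   # collective (industry) rate
z = 0.30  # credibility weight

estimate = z * x + (1 - z) * m
print(round(estimate, 1))  # 6.1
```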


Further reading

  • Behan, Donald F. (2009). "Statistical Credibility Theory". Southeastern Actuarial Conference, June 18, 2009.
  • Whitney, A.W. (1918). "The Theory of Experience Rating". Proceedings of the Casualty Actuarial Society, 4, 274-292. (One of the original casualty actuarial papers dealing with credibility. It uses Bayesian techniques, although the author uses the now-archaic "inverse probability" terminology.)
  • Longley-Cook, L.H. (1962). "An Introduction to Credibility Theory". Proceedings of the Casualty Actuarial Society, 49, 194-221.