Jenks natural breaks optimization

From Wikipedia, the free encyclopedia

The Jenks optimization method, also called the Jenks natural breaks classification method, is a data classification method designed to determine the best arrangement of values into different classes. This is done by seeking to minimize each class’s average deviation from the class mean, while maximizing each class’s deviation from the means of the other groups. In other words, the method seeks to reduce the variance within classes and maximize the variance between classes.[1][2]

History

George Jenks

George Frederick Jenks was an American cartographer of the 20th century. After earning his Ph.D. in agricultural geography from Syracuse University in 1947, Jenks began his career under the tutelage of Richard Harrison, cartographer for TIME and Fortune magazines.[3] He joined the faculty of the University of Kansas in 1949 and began to build its cartography program. During his 37-year tenure at KU, Jenks developed the cartography program into one of three renowned for graduate education in the field, the others being those of the University of Wisconsin and the University of Washington. Much of his time was spent developing and promoting improved cartographic training techniques and programs. He also spent significant time investigating three-dimensional maps, eye-movement research, thematic map communication, and geostatistics.[2][3][4]

Development

Jenks was a cartographer by profession. His work with statistics grew out of a desire to make choropleth maps more visually accurate for the viewer. In his paper, The Data Model Concept in Statistical Mapping, he claims that by visualizing data in a three-dimensional model, cartographers could devise a "systematic and rational method for preparing choroplethic maps".[1] Jenks used the analogy of a "blanket of error" to describe the need to use elements other than the mean to generalize data. The three-dimensional models were created to help Jenks visualize the difference between data classes. His aim was to generalize the data using as few planes as possible while maintaining a constant "blanket of error".

The method requires an iterative process. That is, calculations must be repeated using different breaks in the dataset to determine which set of breaks has the smallest within-class variance. The process starts by dividing the ordered data into groups; the initial group divisions can be arbitrary. Four steps are then repeated:

  1. Calculate the sum of squared deviations between classes (SDBC).
  2. Calculate the sum of squared deviations from the array mean (SDAM).
  3. Subtract the SDBC from the SDAM (SDAM − SDBC). This equals the sum of the squared deviations from the class means.
  4. Inspect the SDBC for each class, and move one unit from the class with the largest SDBC toward the class with the lowest SDBC.

New class deviations are then calculated, and the process is repeated until the sum of the within-class deviations reaches a minimal value.[1][5]
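The iterative process above can be sketched in Python. This is an illustrative simplification, not Jenks's original implementation: starting from an arbitrary partition of the sorted data, it greedily shifts one value at a time across adjacent class boundaries whenever doing so lowers the total within-class squared deviation, stopping when no single move helps. All function names are hypothetical.

```python
def ssd(values):
    """Sum of squared deviations of a list of numbers from its mean."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def natural_breaks(data, k):
    """Greedy sketch of the iterative break-refinement process.

    Partitions sorted `data` into `k` classes, repeatedly moving one
    boundary value between neighboring classes while that reduces the
    total within-class squared deviation.
    """
    data = sorted(data)
    n = len(data)
    # Arbitrary initial breaks: indices that split the data into k groups.
    breaks = [round(i * n / k) for i in range(1, k)]

    def within(brks):
        # Total within-class squared deviation for a given set of breaks.
        bounds = [0] + brks + [n]
        return sum(ssd(data[bounds[i]:bounds[i + 1]]) for i in range(k))

    improved = True
    while improved:
        improved = False
        for i in range(len(breaks)):
            for delta in (-1, 1):
                trial = breaks.copy()
                trial[i] += delta  # move one unit across a class boundary
                bounds = [0] + trial + [n]
                # Keep every class non-empty and the breaks ordered.
                if all(bounds[j] < bounds[j + 1] for j in range(k)):
                    if within(trial) < within(breaks):
                        breaks = trial
                        improved = True
    bounds = [0] + breaks + [n]
    return [data[bounds[i]:bounds[i + 1]] for i in range(k)]
```

Like the process Jenks describes, this only guarantees a local minimum of the within-class deviation; the result depends on the (arbitrary) initial division.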

Uses

Choropleth mapping

Jenks's goal in developing this method was to create a map that was absolutely accurate in its representation of the data's spatial attributes. By following this process, Jenks claims, the "blanket of error" can be uniformly distributed across the mapped surface. He developed the method with the intention of using relatively few data classes, fewer than seven, because that was the limit when using monochromatic shading on a choroplethic map.[1]

Alternative methods

Other methods of data classification include head/tail breaks, natural breaks (without Jenks optimization), equal interval, quantile, and standard deviation.

Recently, the natural breaks method has been challenged by Bin Jiang for its failure to capture the underlying scaling hierarchy of data with a heavy-tailed distribution. Instead, head/tail breaks was claimed to be more natural than natural breaks, and has found useful applications in map generalization and cognitive mapping.[6][7][8]
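A minimal sketch of the head/tail breaks idea, under the usual formulation: split the data at the arithmetic mean, keep the "head" (values above the mean), and recurse while the head remains a minority of the values. The 40% head-size threshold used here is an assumption for illustration, not prescribed by this article.

```python
def head_tail_breaks(data, head_ratio=0.4):
    """Sketch of head/tail breaks for heavy-tailed data.

    Repeatedly splits the values at their mean and recurses on the
    head (values above the mean) while the head stays below
    `head_ratio` of the current values. Returns the class breaks.
    """
    breaks = []
    values = sorted(data)
    while len(values) > 1:
        mean = sum(values) / len(values)
        head = [v for v in values if v > mean]
        # Stop when the head is empty or no longer a clear minority.
        if not head or len(head) / len(values) > head_ratio:
            break
        breaks.append(mean)
        values = head
    return breaks
```

For heavy-tailed data the recursion yields a hierarchy of breaks, each separating a small head from a large tail; for uniform-like data it stops almost immediately, reflecting the absence of a scaling hierarchy.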

References

  1. Jenks, George F. 1967. "The Data Model Concept in Statistical Mapping", International Yearbook of Cartography 7: 186–190.
  2. McMaster, Robert. "In Memoriam: George F. Jenks (1916–1996)", Cartography and Geographic Information Science 24(1): 56–59.
  3. McMaster, Robert and McMaster, Susanna. 2002. "A History of Twentieth-Century American Academic Cartography", Cartography and Geographic Information Science 29(3): 312–315.
  4. CSUN Cartography Specialty Group, Winter 1997 Newsletter.
  5. ESRI FAQ, What is the Jenks Optimization method.
  6. Jiang, Bin 2012a. "Head/Tail Breaks: A New Classification Scheme for Data with a Heavy-Tailed Distribution", Professional Geographer, x: xx–xx. DOI: 10.1080/00330124.2012.700499. Preprint: http://arxiv.org/abs/1209.2801
  7. Jiang, Bin, Liu, Xintao, and Jia, Tao 2012. "Scaling of Geographic Space as a Universal Rule for Map Generalization", Preprint: http://arxiv.org/abs/1102.1561
  8. Jiang, Bin 2012b. "The Image of the City Out of the Underlying Scaling of City Artifacts or Locations", Preprint: http://arxiv.org/abs/1209.1112
