Structural information theory

From Wikipedia, the free encyclopedia

Structural information theory (SIT) is a theory about human perception and in particular about visual perceptual organization, which is the neuro-cognitive process that enables us to perceive scenes as structured wholes consisting of objects arranged in space. SIT was initiated, in the 1960s, by Emanuel Leeuwenberg[1][2][3] and has been developed further mainly by Peter van der Helm. It has been applied to a wide range of research topics,[4] mostly in visual form perception but also in, for instance, visual ergonomics, data visualization, and music perception.

SIT began as a quantitative model of visual pattern classification. Nowadays, it includes quantitative models of symmetry perception and amodal completion, and is theoretically sustained by a perceptually adequate formalization of visual regularity, a quantitative account of viewpoint dependencies, and a powerful form of neurocomputation.[5] SIT has been argued to be the best defined and most successful extension of Gestalt ideas.[6] It is the only Gestalt approach providing a formal calculus that generates plausible perceptual interpretations.

The simplicity principle

Although visual stimuli are fundamentally multi-interpretable, the human visual system usually has a clear preference for only one interpretation. To explain this preference, SIT introduced a formal coding model starting from the assumption that the perceptually preferred interpretation of a stimulus is the one with the simplest code. A simplest code is a code with minimum information load, that is, a code that enables a reconstruction of the stimulus using a minimum number of descriptive parameters. Such a code is obtained by capturing a maximum amount of visual regularity and yields a hierarchical organization of the stimulus in terms of wholes and parts.

The assumption that the visual system prefers simplest interpretations is called the simplicity principle.[7] Historically, the simplicity principle is an information-theoretical translation of the Gestalt law of Prägnanz,[8] which was inspired by the natural tendency of physical systems to settle into relatively stable states defined by a minimum of free energy. Furthermore, like the later-proposed minimum description length principle in algorithmic information theory (AIT), a.k.a. the theory of Kolmogorov complexity, it can be seen as a formalization of Occam's razor, according to which the simplest interpretation of data is the best one.

Structural versus algorithmic information theory

Since the 1960s, SIT (in psychology) and AIT (in computer science) evolved independently as viable alternatives to Shannon's classical information theory, which had been developed in communication theory.[9] In Shannon's approach, things are assigned codes with lengths based on their probability in terms of frequencies of occurrence (as, e.g., in the Morse code). However, in many domains, including perception, such probabilities are hardly quantifiable, if at all. Both SIT and AIT circumvent this problem by turning to descriptive complexities of individual things.
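Shannon's idea can be illustrated with a small sketch (the symbol frequencies below are invented for the example): the optimal code length for an item with probability p is about -log2(p) bits, so frequent items get short codes and rare items get long ones.

```python
import math

def shannon_code_lengths(freqs):
    """Optimal code length in bits for each symbol: -log2(p),
    where p is the symbol's relative frequency."""
    total = sum(freqs.values())
    return {s: -math.log2(n / total) for s, n in freqs.items()}

# Hypothetical frequencies: frequent symbols get short codes,
# rare ones get long codes (the intuition behind Morse code).
lengths = shannon_code_lengths({"e": 12, "t": 9, "q": 1})
```

The point of the contrast drawn above is that such code lengths require known frequencies, which perception typically does not have; SIT and AIT instead measure the descriptive complexity of each individual stimulus.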

Although SIT and AIT share many starting points and objectives, there are also several relevant differences:

  • SIT makes the perceptually relevant distinction between structural and metrical information, whereas AIT does not.
  • SIT encodes for a restricted set of perceptually relevant kinds of regularities, whereas AIT encodes for any imaginable regularity.
  • In SIT, the relevant outcome of an encoding is a hierarchical organization, whereas in AIT, it is only a complexity value.

Simplicity versus likelihood

In visual perception research, the simplicity principle contrasts with the Helmholtzian likelihood principle,[10] which assumes that the preferred interpretation of a stimulus is the one most likely to be true in this world. As shown within a Bayesian framework and using AIT findings, the simplicity principle would imply that perceptual interpretations are fairly veridical (i.e., truthful) in many worlds rather than, as assumed by the likelihood principle, highly veridical in only one world.[11] In other words, whereas the likelihood principle suggests that the visual system is a special-purpose system (i.e., adapted to one specific world), the simplicity principle suggests that it is a general-purpose system (i.e., adaptive to many different worlds).

Crucial to the latter finding is the distinction between, and integration of, viewpoint-independent and viewpoint-dependent factors in vision, as proposed in SIT's empirically successful model of amodal completion.[12] In the Bayesian framework, these factors correspond to prior probabilities and conditional probabilities, respectively. In SIT's model, however, both factors are quantified in terms of complexities, that is, complexities of objects and of their spatial relationships, respectively. This approach is consistent with neuroscientific ideas about the distinction and interaction between the ventral ("what") and dorsal ("where") streams in the brain.[13]

SIT versus connectionism and dynamic systems theory

A representational theory like SIT seems opposed to dynamic systems theory (DST), while connectionism can be seen as something in between. That is, connectionism flirts with DST when it comes to the use of differential equations, and flirts with theories like SIT when it comes to the representation of information. In fact, the different operating bases of SIT, connectionism, and DST correspond to what Marr called the computational, the algorithmic, and the implementational levels of description, respectively. According to Marr, these levels of description are complementary rather than opposite, thus reflecting epistemological pluralism.

What SIT, connectionism, and DST have in common is that they describe nonlinear system behavior, that is, a minor change in the input may yield a major change in the output. Their complementarity expresses itself in that they focus on different aspects:

  • Whereas DST focuses primarily on how the state of a physical system as a whole (in this case, the brain) develops over time, both SIT and connectionism focus primarily on what a system does in terms of information processing (which, in this case, can be said to constitute cognition) and both assume that this information processing relies on interactions between pieces of information in distributed representations, that is, in networks of connected pieces of information.
  • Whereas connectionism focuses on concrete interaction mechanisms (in this case, activation spreading) in a prefixed network that is assumed to be suited for many inputs, SIT focuses on the nature of the outcome of interactions that are assumed to take place in transient, input-dependent networks.

Modeling principles

In SIT's formal coding model, candidate interpretations of a stimulus are represented by symbol strings, in which identical symbols refer to identical perceptual primitives (e.g., blobs or edges). Every substring of such a string represents a spatially contiguous part of an interpretation, so that the entire string can be read as a reconstruction recipe for the interpretation and, thereby, for the stimulus. These strings are then encoded (i.e., they are searched for visual regularities) to find the interpretation with the simplest code.
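As a toy illustration of the reconstruction-recipe idea (the line-and-turn primitive alphabet below is invented for this example and is not SIT's actual primitive set), each symbol in a string can stand for a drawing primitive, so that reading the string left to right reconstructs a contour, and any substring reconstructs a contiguous part of it:

```python
import math

# Hypothetical primitives: each symbol draws a unit line,
# then turns by the given angle (degrees).
PRIMITIVES = {"a": 90.0, "b": -90.0}

def reconstruct(code):
    """Read a symbol string as a reconstruction recipe:
    draw a unit line per symbol, then apply its turn."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for symbol in code:
        x += math.cos(math.radians(heading))
        y += math.sin(math.radians(heading))
        points.append((round(x, 9), round(y, 9)))
        heading += PRIMITIVES[symbol]
    return points

# "aaaa": four unit lines with 90-degree left turns trace a unit square.
square = reconstruct("aaaa")
```

Here the full string "aaaa" is the recipe for the whole square, while a substring such as "aa" reconstructs one contiguous corner of it, mirroring the role of substrings described above.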

This encoding is performed by way of symbol manipulation, which, in psychology, has led to critical statements of the sort "SIT assumes that the brain performs symbol manipulation". Such statements, however, fall in the same category as statements such as "physics assumes that nature applies formulas such as Einstein's E = mc² or Newton's F = ma" and "DST models assume that dynamic systems apply differential equations". That is, these statements ignore that the very concept of formalization means that potentially relevant things are represented by symbols — not as a goal in itself but as a means to capture potentially relevant relationships between these things.

Visual regularity

To obtain simplest codes, SIT applies coding rules that capture the kinds of regularity called iteration, symmetry, and alternation. These have been shown to be the only regularities that satisfy the formal criteria of (a) being holographic regularities that (b) allow for hierarchically transparent codes.[14]
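A minimal sketch of how such a coding rule compresses a string, covering only the iteration rule and using a toy load measure that counts the pattern symbols left in the code (SIT's actual rule set also includes symmetry and alternation, and its load metric is richer):

```python
def iteration_code(s):
    """Try an iteration-style rewrite: express s as n*(chunk) when s
    consists of n repeats of a chunk. Returns the code and its load,
    here simply the number of pattern symbols in the code.
    Symmetry and alternation rules are omitted for brevity."""
    best_code, best_load = s, len(s)  # fallback: the string itself
    for size in range(1, len(s) // 2 + 1):
        chunk = s[:size]
        n, rem = divmod(len(s), size)
        if rem == 0 and chunk * n == s and size < best_load:
            best_code, best_load = f"{n}*({chunk})", size
    return best_code, best_load

# "ababab" compresses to 3*(ab): load drops from 6 symbols to 2.
```

Under the simplicity principle described earlier, the interpretation whose code has the lowest load would be the preferred one; a real SIT encoder searches over all rule combinations and hierarchical groupings, not just this single rewrite.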

A crucial difference with respect to the traditionally considered transformational formalization of visual regularity is that, holographically, mirror symmetry is composed of many relationships between symmetry pairs rather than one relationship between symmetry halves. Whereas the transformational characterization may be better suited to object recognition, the holographic characterization seems more consistent with the buildup of mental representations in object perception.

The perceptual relevance of the criteria of holography and transparency has been verified in the holographic approach to visual regularity.[15] This approach provides an empirically successful model of the detectability of single and combined visual regularities, whether or not perturbed by noise. For instance, it explains that mirror symmetries and Glass patterns are about equally detectable and usually better detectable than repetitions. It also explains that the detectability of mirror symmetries and Glass patterns in the presence of noise follows a psychophysical law that improves on Weber's law.[16]

Cognitive architecture

The transparent holographic regularities have been shown to lend themselves to transparallel processing on single-processor classical computers.[17] Whereas parallel processing means that items are processed simultaneously by many processors (each processor dealing with one item at a time), transparallel processing means that items are processed simultaneously by one processor (i.e., in one go, as if only one item were concerned). To enable transparallel processing, SIT's formal process model gathers regularities (in only the input at hand) in special distributed representations, called hyperstrings, which allow their codes to be processed in a transparallel fashion. Thus, during the process of selecting a simplest code from among all candidate codes, sets of O(2^N) candidate codes can be taken into account as if only one code of length N were concerned. This supports the computational tractability of simplest codes and, thereby, the plausibility of the simplicity principle in human perceptual organization.
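To see where the exponential number of candidates comes from, note that merely chunking a string of length N into contiguous parts already yields 2^(N-1) candidates, one binary choice (cut or no cut) at each of the N-1 internal positions. A toy enumeration of that combinatorial space (this illustrates only the size of the search space, not hyperstrings themselves):

```python
from itertools import product

def chunkings(s):
    """All ways to cut s into contiguous chunks: at each of the
    len(s)-1 internal positions we either cut or not, so there
    are 2**(len(s)-1) candidates."""
    result = []
    for cuts in product([False, True], repeat=len(s) - 1):
        chunks, start = [], 0
        for i, cut in enumerate(cuts, start=1):
            if cut:
                chunks.append(s[start:i])
                start = i
        chunks.append(s[start:])
        result.append(chunks)
    return result

# A length-5 string already has 2**4 == 16 candidate chunkings.
candidates = chunkings("abcba")
```

Brute-force enumeration like this grows exponentially; the claim above is that hyperstrings let a single processor handle such exponentially large sets of candidate codes as if they were one code.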

Transparallel processing by hyperstrings has the same extraordinary computing power as that promised by quantum computers.[18] In fact, quantum computers can be said to promise a hardware solution for transparallel processing (for some computing problems), whereas hyperstrings provide an already feasible software solution for transparallel processing on single-processor classical computers (for at least one computing problem). Furthermore, hyperstrings reflect a form of feature binding and can therefore be seen as formal counterparts of transient neural assemblies that mediate feature binding and, moreover, signal their presence by synchronization of the neurons involved. This gave rise to the transparallel mind hypothesis, which — unlike the quantum mind hypothesis — presents a concrete picture of a flexible cognitive architecture implemented in the relatively rigid neural architecture of the brain.[18][19] That is, those temporarily synchronized neural assemblies can be called gnosons (i.e., fundamental particles of cognition), whose synchronization — which is not required for parallel processing — might well be a manifestation of transparallel feature processing.

References

  1. ^ Leeuwenberg, E. L. J. (1968). Structural information of visual patterns: An efficient coding system in perception. The Hague: Mouton.
  2. ^ Leeuwenberg, E. L. J. (1969). Quantitative specification of information in sequential patterns. Psychological Review, 76, 216–220.
  3. ^ Leeuwenberg, E. L. J. (1971). A perceptual coding language for visual and auditory patterns. American Journal of Psychology, 84, 307–349.
  4. ^ Leeuwenberg, E. L. J., & van der Helm, P. A. (2013). Structural information theory: The simplicity of visual form. Cambridge, UK: Cambridge University Press.
  5. ^ van der Helm, P. A. (2014). Simplicity in vision: A multidisciplinary account of perceptual organization. Cambridge, UK: Cambridge University Press.
  6. ^ Palmer, S. E. (1999). Vision science: Photons to phenomenology. Cambridge, MA: MIT Press.
  7. ^ Hochberg, J. E., & McAlister, E. (1953). A quantitative approach to figural "goodness". Journal of Experimental Psychology, 46, 361–364.
  8. ^ Koffka, K. (1935). Principles of Gestalt psychology. London: Routledge & Kegan Paul.
  9. ^ Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656.
  10. ^ von Helmholtz, H. L. F. (1962). Treatise on physiological optics (J. P. C. Southall, Trans.). New York: Dover. (Original work published 1909)
  11. ^ van der Helm, P. A. (2000). Simplicity versus likelihood in visual perception: From surprisals to precisals. Psychological Bulletin, 126, 770–800. doi:10.1037//0033-2909.126.5.770.
  12. ^ van Lier, R. J., van der Helm, P. A., & Leeuwenberg, E. L. J. (1994). Integrating global and local aspects of visual occlusion. Perception, 23, 883–903. doi:10.1068/p230883.
  13. ^ Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
  14. ^ van der Helm, P. A., & Leeuwenberg, E. L. J. (1991). Accessibility, a criterion for regularity and hierarchy in visual pattern codes. Journal of Mathematical Psychology, 35, 151–213. doi:10.1016/0022-2496(91)90025-O.
  15. ^ van der Helm, P. A., & Leeuwenberg, E. L. J. (1996). Goodness of visual regularities: A nontransformational approach. Psychological Review, 103, 429–456. doi:10.1037/0033-295X.103.3.429.
  16. ^ van der Helm, P. A. (2010). Weber-Fechner behaviour in symmetry perception? Attention, Perception, & Psychophysics, 72, 1854–1864. doi:10.3758/APP.72.7.1854.
  17. ^ van der Helm, P. A. (2004). Transparallel processing by hyperstrings. Proceedings of the National Academy of Sciences USA, 101(30), 10862–10867. doi:10.1073/pnas.0403402101.
  18. ^ a b van der Helm, P. A. (2015). Transparallel mind: Classical computing with quantum power. Artificial Intelligence Review, 44, 341–363. doi:10.1007/s10462-015-9429-7.
  19. ^ van der Helm, P. A. (2012). Cognitive architecture of perceptual organization: From neurons to gnosons. Cognitive Processing, 13, 13–40. doi:10.1007/s10339-011-0425-9.