Neural coding

From Wikipedia, the free encyclopedia

Neural coding is the transduction of environmental signals and internal signals of the body into patterns of neural activity that form representations: a model of reality suitable for purposeful action and adaptation, preserving the integrity and normal functioning of the body. The term also refers to the study of how neurons process information: what the information is used for and how it is transformed as it passes from one neuron to another.

Deciphering the neural code is considered a large-scale task because of its complexity and significance. It is difficult to overestimate the prospects that the ability to read and write the neural code would open. They concern the treatment of all diseases of the body, from the main three (cardiovascular disease, cancer and diabetes) to less significant conditions that nevertheless affect quality of life. Last but not least, it would give a new perspective on the development of artificial intelligence technologies and their integration with the natural information technologies of the brain.


Neurons are remarkable among the cells of the body in their ability to process signals (light, sound, taste, smell, touch, and others) rapidly and to transmit information about them over large distances and across vast neural populations. In terms of speed and efficiency, the brain is the highest achievement in the evolution of natural information technologies. It follows that, of all coding schemes, the most likely candidate for the neural code is the one that produces information (code patterns) most efficiently.

Neurons generate voltage oscillations called action potentials. All models consider the action potential to be a fundamental element of the brain's language; the critical issue is the approach to this phenomenon. Physically, action potentials are continuous oscillatory processes that vary in duration, amplitude and shape. Neurons demonstrate graded potentials that could provide high capacity and efficiency of the code.[1] Nevertheless, most models regard neural activity as identical discrete events (spikes). If the internal parameters of an action potential are ignored, a spike train can be characterized simply as a series of all-or-none point events in time.[2] The lengths of interspike intervals can also vary,[3] but they are usually ignored in the currently prevailing models of the neural code.

Such theories assume that the information is contained in the number of spikes within a particular time window (rate code) or in their precise timing (temporal code). Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean. In any case, all these theories are variations of a spiking neuron model.[4] Statistical methods, probability theory and stochastic point processes are widely applied to describe and analyze neuronal firing. Some studies claim to have cracked the neural code,[5][6][7] and there are several large-scale brain decoding projects.[8][9] But the actual reading and writing of the neural code remain a challenge facing neuroscience. The problem is that spiking neuron models run counter to the actual efficiency and speed of the brain: at best, they cover only a part of the observed phenomena and cannot explain others. Recently, models have appeared that answer questions unsolvable within the framework of paradigms that treat action potentials as identical spikes.[citation needed] As technology has advanced, new architectures have been proposed that consist of neurons carrying a potentially larger number of synapses; these synapses not only make connections but can also compute their own excitation levels and adjust those connections.[10]

Encoding and decoding

The normal approach for studying the neural code is to look for correspondences between the incoming signal and the neuronal response, and for the reverse process of recovering the signal from the observed neuronal activity. However, without a code model, such analysis is like trying to read or write a text without knowing its grammar. It is a kind of vicious circle: to read the code, we need to know it, but to learn it, we need to read it. Nevertheless, any process of deciphering an unknown code is based on searching for specific patterns and identifying their correlation with the encoded message. In other words, to read the neural code, we need to find the correspondence between patterns of signal parameters and patterns of neural activity.

Any signal from the environment is an oscillatory energy process with a certain amplitude, frequency and development of phases in time. These form the two main axes of signal measurement: spatial and temporal. Accordingly, the neural code must also have spatial and temporal characteristics that create a model of the encoded signal. They may be locked to an external stimulus[11] or be generated intrinsically by the neural circuitry.[12] As we move along the hierarchy of the nervous system from sensors at the periphery to the integrative structures of the cerebral cortex, neural activity is less and less directly associated with the original signal. This is natural, since neurons do not reflect signals but encode them, i.e., create representations. Consciousness is not a mirror of reality but a compact model of it. A representation, however, should still contain the same axes of parameter measurement. Thus, the neural code must be a complex multidimensional structure, in which information density combines with efficiency and speed.

Do the proposed coding models reflect these requirements? This question should be a "litmus test" for their adequacy to actual processes in the nervous system.

Hypothesized coding schemes

Spiking neuron models

Rate coding

The rate coding model hypothesizes that information about a signal is contained in the spike firing rate. It is sometimes called frequency coding, though, strictly speaking, the rate of discrete events is not a frequency but a tempo; calling this model a tempo code would be physically more correct.

The model originated in experiments by Edgar Adrian and Yngve Zotterman in 1926.[13] In this simple experiment, different weights were hung from a muscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased. The authors concluded that action potentials were discrete events and that their tempo, rather than their individual parameters, was the basis of neural communication. In the following decades, the measurement of firing rates became a standard tool for describing the properties of all types of neurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes, the interspike intervals and the internal parameters of each action potential. In recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity.[3] Even at the peripheral level (sensors and effectors), the firing rate increases non-linearly with stimulus intensity, and there is no direct connection between the spike rate and the signal.[14] In addition, the sequence of action potentials generated by a given stimulus varies from trial to trial, so neuronal responses are typically treated statistically or probabilistically. Even the term "firing rate" has various definitions, which refer to different averaging procedures, such as an average over time or an average over several repetitions of an experiment.[citation needed]

Spike-count rate (average over time)

The spike-count rate, also referred to as the temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of the trial.[4] The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and on the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models'[4]).
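In code, the spike-count rate is a single division; a minimal Python sketch (the spike times below are hypothetical illustration data, not recordings):

```python
# Spike-count rate: number of spikes in a trial divided by the trial
# duration T. Spike times are in seconds, measured from trial onset.

def spike_count_rate(spike_times, t_window):
    """Average firing rate (spikes/s) over a window of length t_window (s)."""
    return len(spike_times) / t_window

# A hypothetical trial: 12 spikes recorded in a T = 500 ms window.
spikes = [0.012, 0.048, 0.081, 0.130, 0.162, 0.201,
          0.255, 0.290, 0.344, 0.398, 0.430, 0.488]
rate = spike_count_rate(spikes, t_window=0.5)  # 12 spikes / 0.5 s = 24 spikes/s
```

Note that, as the text emphasizes, this single number discards all information about when within the window the spikes occurred.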

This procedure stems from the assumption that neurons average their rates. If we accept this hypothesis, we must understand that neurons compute the average relative to a time window that has meaning for them, not for the experimenter. If we analyse an activity that repeats with strict periodicity, it is not difficult to determine its period and calculate the average value. But neurons do not exhibit monotonous spiking. So we do not know whether the neural code is actually an average rate, and it is difficult to confirm or refute this without knowing the system clock of the brain. Additional theory and biological evidence are needed to support this hypothesis of neural coding.

The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in the neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of the organism; this is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary and often changes on a fast time scale. For example, even when viewing a static image, humans perform saccades, rapid changes of the direction of gaze. The image projected onto the retinal photoreceptors therefore changes every few hundred milliseconds (Chapter 1.5 in [4]). More generally, whenever a rapid response of an organism is required, a firing rate defined as a spike count over a few hundred milliseconds is simply too slow.

Time-dependent firing rate (averaging over several trials)

The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval.[4] It works for stationary as well as for time-dependent stimuli. To measure the time-dependent firing rate experimentally, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a peri-stimulus time histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The interval Δt must be large enough (typically in the range of one or a few milliseconds) that there is a sufficient number of spikes within it to obtain a reliable estimate of the average. The number of spike occurrences nK(t; t+Δt), summed over all repetitions of the experiment and divided by the number K of repetitions, is a measure of the typical activity of the neuron between times t and t+Δt. A further division by the interval length Δt yields the time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of the PSTH (Chapter 1.5 in [4]).

For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval.
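The PSTH-based estimate described above can be sketched directly in Python (the trial data are hypothetical illustration values):

```python
# Time-dependent firing rate r(t): spikes are binned into intervals of
# width dt, counts are summed over K repeated trials, then divided by
# K * dt, exactly as in the PSTH procedure described in the text.

def time_dependent_rate(trials, t_max, dt):
    """PSTH-based rate estimate; trials is a list of spike-time lists (s).
    Returns r(t) per bin, in spikes/s. Binning uses floor(t / dt)."""
    n_bins = int(round(t_max / dt))
    counts = [0] * n_bins
    for spike_times in trials:
        for t in spike_times:
            b = int(t / dt)           # floor binning of the spike time
            if 0 <= b < n_bins:
                counts[b] += 1
    k = len(trials)
    return [c / (k * dt) for c in counts]

# Three hypothetical repetitions of the same stimulation sequence:
trials = [[0.002, 0.017], [0.003, 0.014], [0.001, 0.016]]
rates = time_dependent_rate(trials, t_max=0.02, dt=0.005)
```

The early bin, where all three trials spiked, gets a high r(t); empty bins get zero, which is exactly the spike-density reading of the PSTH.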

As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. The obvious problem with this approach is that it cannot be the coding scheme used by neurons in the brain: neurons cannot wait for a stimulus to be presented repeatedly in exactly the same manner before generating a response.[4] Moreover, the dynamics of many environmental signals unfold over milliseconds, during which neurons can fire only once or twice. With so few spikes, it is impossible to encode the signal by their average rate. And there are even faster signals: for example, a bat is capable of echolocation with a resolution of microseconds,[15] so the signal measurement window lies within a single spike. This is completely contrary to the average rate paradigm.

Can a code consisting of identical spikes provide the nervous system's observable information density, speed, and efficiency? Unfortunately for the adherents of the rate code paradigm, the answer is negative. Such a code is ineffective in all respects: tempo variation does not carry enough information to represent a complex multi-parameter signal, and encoding even simple parameters requires the creation of many spikes. It is therefore too slow and energetically expensive, and it does not correspond to the reality of the brain. Nevertheless, this model is still widely used, not only in experiments but also in neural network models. As a result, over the past decades a vast amount of data has accumulated, but it has not brought us any closer to deciphering the meaning of the code.

Temporal coding

Temporal code models assume that precise timing of spikes and interspike intervals carries information.[4][16] There is a growing body of evidence supporting this hypothesis.[17][18][19][20][21][22]

Rate coding models suggest that the irregularities of neuronal firing are noise and average them out. Temporal coding supplies an alternative explanation for the "noise," suggesting that it actually encodes information and affects neural processing.[23] To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences.[24] Thus, the model can be called a digital code.
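The rate/pattern distinction can be illustrated directly with the two binary sequences mentioned above:

```python
# Two spike sequences with identical mean firing rate but different
# temporal structure, written as binary strings (1 = spike, 0 = silence).

a = "000111000111"
b = "001100110011"

rate_a = a.count("1") / len(a)   # fraction of bins containing a spike
rate_b = b.count("1") / len(b)

same_rate = (rate_a == rate_b)   # both fire in 6 of 12 bins
same_pattern = (a == b)          # the temporal codes nevertheless differ
```

A rate-code reader sees two identical messages; a temporal-code reader sees two different ones.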

Until recently, scientists had put the most emphasis on rate encoding as an explanation for post-synaptic potential patterns. However, functions of the brain are more temporally precise than rate encoding allows. In addition, responses to similar stimuli differ enough to suggest that distinct patterns of spikes contain a higher volume of information than can be included in a rate code.[25] The temporal structure of a spike train evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes.[26] Temporal codes (also called spike codes[4]) employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the interspike interval (ISI) probability distribution, spike randomness, and precisely timed groups of spikes (temporal patterns) are candidates for temporal codes.[27] As there is no absolute time reference in the nervous system, the information is carried either in the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing).[21][3] One possible mechanism of the temporal code is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron.[28] In temporal coding, learning can be explained by activity-dependent synaptic delay modifications.[29] The modifications can themselves depend on spike timing patterns, i.e., they can be a special case of spike-timing-dependent plasticity.[30]

For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Due to the density of information contained in this single spike, it would seem that the timing of the spike itself conveys more information than simply the average rate of action potentials over a given period. This model is especially important for sound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information from a relatively short neural response. Additionally, if low firing rates on the order of ten spikes per second must be distinguished from arbitrarily close rates encoding different stimuli, then a neuron trying to discriminate between the two stimuli may need to wait for a second or more to accumulate enough information. This is not consistent with the numerous organisms that are able to discriminate between stimuli within milliseconds or less.[24]

To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency between stimulus onset and the first action potential, also called the latency to first spike or time-to-first-spike.[31] This type of temporal coding has been shown also in the auditory and somatosensory systems. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations.[32] In the primary visual cortex of macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions.[33]

As with the visual system, in mitral/tufted cells in the olfactory bulb of mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system.[24]

The mammalian gustatory system is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism.[34] Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system: rate for the basic tastant type, temporal for more specific differentiation.[35] Research on the mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and that this information is different from that determined by rate coding schemes. In studies dealing with the frontal cortex of primates, precise patterns with short time scales, only a few milliseconds in length, were found across small populations of neurons that correlated with certain behaviors. However, little information could be determined from the patterns. One explanation is that the activity of cortical neurons does not correspond linearly to the dynamics of the incoming signal parameters, as the processing chain consists of primary signal converters (sensors), modulators (subcortical structures), and integrators (cortical populations) that do not 'reflect' the signal but transduce it and create representations.[36]

The assumption that the neural code is binary (spikes and interspike intervals as 1s and 0s) significantly increases the capacity of the code and makes the model more plausible. But the same question arises of reconciling the information capacity of the code with the real speed of the brain, which manages to encode a complex multi-parameter signal within one or two spikes. The brain does not have time to build a long binary chain that could contain all the information; in this it is fundamentally different from artificial digital systems. For all the tremendous speed of their processors, whose clock rates are orders of magnitude higher than the frequencies of the brain, such systems cannot match it in performance, speed and energy efficiency, because they need to handle long binary code chains. The brain must be using some additional capacity in its code.

In addition, the question of the system clock arises again. Two zeros of the code correspond to a pause twice as long as one zero. But how can we determine whether an interspike pause means two zeros or one if we do not know the time scale of the system under study? Measuring the pause with an external clock gives a lot of data but says nothing about how many zeros a particular pause contains or how they relate to the spike units. In other words, we cannot determine whether a neuron's activity means 0001 or 001. For a real qualitative analysis, it is necessary to normalize the system's data by its own time; then we can express the analysis in any unit of measurement. Finding this fundamental frequency as a basis for normalisation is probably of paramount importance when trying to decipher the brain's code, no matter which code model we are testing, since the time parameter remains in any case.

Phase-of-firing code

Phase-of-firing code is a neural coding scheme that combines the spike count code with a time reference based on oscillations. This type of code assigns a time label to each spike according to the phase of local ongoing oscillations at low[37] or high frequencies.[38] It has been shown that neurons in some cortical sensory areas encode complex natural signals in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count.[37][39] The phase-of-firing code is often categorised as a temporal code, although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values of the phase are enough to represent all the information content of this kind of code with respect to the phase of low-frequency oscillations. Phase-of-firing code is loosely based on the phase precession phenomena observed in place cells of the hippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking among a group of sensory neurons, resulting in a firing sequence.[40] Phase coding has also been shown in the visual cortex to involve high-frequency oscillations.[40] Within a cycle of gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence with a duration of up to about 15 ms.[40]
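A phase-of-firing label can be sketched as follows, assuming a sinusoidal reference oscillation of known frequency and coarse-graining the phase into four discrete values as described above (frequency and spike times are hypothetical):

```python
# Phase-of-firing sketch: each spike is labelled with the phase of an
# ongoing reference oscillation at the moment it occurs, and the phase
# is then coarse-grained into four discrete labels (quadrants).
import math

def spike_phases(spike_times, osc_freq):
    """Phase (rad, in [0, 2*pi)) of each spike w.r.t. a sinusoidal
    oscillation of frequency osc_freq (Hz) starting at t = 0."""
    return [(2 * math.pi * osc_freq * t) % (2 * math.pi) for t in spike_times]

def quantize_phase(phase, n_levels=4):
    """Coarse-grain a phase into one of n_levels discrete labels."""
    return int(phase / (2 * math.pi) * n_levels) % n_levels

spikes = [0.010, 0.035, 0.060, 0.085]        # spike times in seconds
phases = spike_phases(spikes, osc_freq=10)   # 10 Hz reference oscillation
labels = [quantize_phase(p) for p in phases] # one of {0, 1, 2, 3} per spike
```

Each spike thus carries a low-resolution time label (its quadrant of the oscillation cycle) in addition to its mere occurrence.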

This version of the code aims to overcome the limitations of the previous models. It shows that spike counting requires a frame of reference and suggests searching for it in the frequencies of the brain. But this model continues to consider action potentials to be identical impulses and looks for information only in the rhythmic structure of neuronal activation. Thus, it faces the same question: how can neurons encode signals that change within the time frame of a single spike? Placing a discrete event on an exact timescale is an essential part of the encoding process, but it is not enough to represent all the parameters of a signal within the tight temporal limits that the natural environment sets for the brain.

Population coding

Population coding is a method of representing signals by the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to estimate the value of the input.

For example, in the middle temporal visual area (MT), neurons are tuned to the direction of motion.[41] Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus. Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for the neural representation of auditory acoustic stimuli. This exploits both the place, or tuning, within the auditory nerve and the phase-locking within each auditory nerve fiber. The first ALSR representation was for steady-state vowels;[42] ALSR representations of pitch and formant frequencies in complex, non-steady-state stimuli were later demonstrated for voiced pitch[43] and for formant representations in consonant-vowel syllables.[44]

In general, the population version of the code simply indicates that signal representations result from the activity of many neurons. It cannot be called a separate coding model, as the question of how individual neurons encode their part of the signal representation remains.

Some models try to circumvent this difficulty by claiming that individual activity does not contain any information and that meaning should be sought in the combined patterns. In such models, neurons are considered to fire in random order with a Poisson distribution, and such chaos creates order in the form of a population code.[45] This hypothesis can be seen as a reaction to the fact that decades of attempts to decipher the neural code by counting spikes and searching for meaning in the rate or temporal structure of their sequences have not led to a meaningful result.

But such population models say nothing about the mechanism of operation and the rules of such a code. Moreover, they contradict the reality of neural activity. Precise measurements using implantable electrodes and detailed study of the temporal structure of spikes and interspike intervals show that neural activity does not follow a Poisson distribution, and that each stimulus attribute changes not only the absolute number of spikes but also their temporal pattern.[46]

Despite the enormous variability in neuronal activity, spike sequences are very accurate. This accuracy is essential for the transmission of information using a high-resolution code. Each neuron has its place in the formation of meaning, with a specialisation as a filter processing specific signal parameters. However, the question arises of how the activity patterns of individual neurons integrate into a general representation of a signal with all its parameters, and how representations of individual signals merge into a single, coherent model of reality while maintaining their individuality. In neuroscience, this is called the "binding problem."

Some population code models describe this process mathematically as the sum of the vectors of all neurons involved in encoding a given signal. This particular population code is referred to as population vector coding and is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of maximum likelihood based on a multivariate distribution of the neuronal responses.[47] These models can assume independence, second order correlations,[48] or even more detailed dependencies such as higher order maximum entropy models,[49] or copulas.[50]
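The population vector reconstruction described above, i.e. simple averaging of preferred-direction unit vectors weighted by firing rates, can be sketched as follows (tuning and rates are hypothetical):

```python
# Population-vector sketch: each neuron has a preferred direction; the
# decoded direction is the firing-rate-weighted sum of unit vectors.
import math

def population_vector(preferred_dirs, rates):
    """Decode a direction (rad, in [0, 2*pi)) as the rate-weighted
    vector sum of the neurons' preferred directions."""
    x = sum(r * math.cos(d) for d, r in zip(preferred_dirs, rates))
    y = sum(r * math.sin(d) for d, r in zip(preferred_dirs, rates))
    return math.atan2(y, x) % (2 * math.pi)

# Four neurons tuned to 0, 90, 180 and 270 degrees:
dirs = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
rates = [10.0, 10.0, 2.0, 2.0]  # strongest responses between 0 and 90 deg
decoded = population_vector(dirs, rates)  # lands at pi/4 (45 degrees)
```

With symmetric rates on the 0 and 90 degree neurons, the decoded direction falls exactly halfway between their preferred directions, which is the "simple averaging" the text refers to.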

However, a common problem with such mathematical models is the lack of an explanation of the physical mechanism that could implement the observed unity of the model of reality created by the brain while preserving the individuality of signal representations.

Correlation coding

The correlation coding model of neuronal firing claims that correlations between action potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total mutual information present in the two spike trains about a stimulus feature.[51] However, this was later demonstrated to be incorrect: correlation structure can increase information content if noise and signal correlations are of opposite sign.[52] Correlations can also carry information not present in the average firing rates of pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.[53]
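The effect described for the marmoset data, a change in coincident spiking without a change in mean rate, can be illustrated with a toy coincidence count (the spike trains are hypothetical):

```python
# Correlation-coding sketch: two hypothetical neurons keep the same
# spike counts (hence the same mean rates) across conditions, but the
# number of near-coincident spikes changes.

def coincidences(train_a, train_b, window=0.002):
    """Spikes in train_a that have a partner in train_b within +/- window (s)."""
    return sum(any(abs(a - b) <= window for b in train_b) for a in train_a)

# Condition 1: independent firing, no coincidences.
a1 = [0.010, 0.050, 0.090]
b1 = [0.030, 0.070, 0.110]
# Condition 2: identical spike counts, but synchronized firing.
a2 = [0.010, 0.050, 0.090]
b2 = [0.011, 0.051, 0.091]

c1 = coincidences(a1, b1)  # no spike pairs within 2 ms
c2 = coincidences(a2, b2)  # every spike has a 1 ms partner
```

A rate-based readout cannot distinguish the two conditions; a coincidence-based readout separates them cleanly.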

The idea of correlations between action potentials can be seen as a movement away from the average rate code towards a more adequate model, one that points to the information density of the spatial-temporal patterns of neuronal activity. However, it cannot be called a neural code per se.

Independent-spike coding

The independent-spike coding model of neuronal firing claims that each individual action potential, or "spike", is independent of every other spike within the spike train.[54][55]

Position coding

[Figure: plot of a typical position code]

A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response.
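A minimal sketch of such a position code, with hypothetical Gaussian tuning curves and decoding by the maximally responding neuron:

```python
# Position-coding sketch: a population of neurons with Gaussian tuning
# curves whose means tile the stimulus axis; the stimulus is recovered
# as the preferred value (mean) of the most active neuron.
import math

def gaussian_response(stimulus, mean, width=1.0, peak_rate=50.0):
    """Firing rate (spikes/s) of a neuron with a Gaussian tuning curve."""
    return peak_rate * math.exp(-((stimulus - mean) ** 2) / (2 * width ** 2))

means = [0.0, 1.0, 2.0, 3.0, 4.0]   # preferred stimulus values of 5 neurons
stimulus = 2.2                       # true (hidden) stimulus value
rates = [gaussian_response(stimulus, m) for m in means]
decoded = means[rates.index(max(rates))]   # neuron tuned nearest to 2.2 wins
```

The decoding here is a crude winner-take-all; the maximum likelihood methods mentioned in the text refine the same idea by weighing all responses.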

For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in grid cells that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.[56]

This coding scheme tries to overcome the problems of the rate coding model by stating that even if any individual neuron is too noisy to faithfully encode a variable using rate coding, an entire population ensures greater fidelity and precision, since the maximum likelihood estimate is more accurate. The question remains: if individual neurons are too slow to encode the signals, how can the population be fast enough? We are back to the issue of the essence of the neural code.

Sparse coding

Code sparseness may refer to temporal sparseness ("a relatively small number of time periods are active") or to sparseness in an activated population of neurons. In the latter case, it may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population.[57] For each item to be encoded, this is a different subset of all available neurons. This seems to be a hallmark of neural computations since, compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces wavelet-like oriented filters that resemble the receptive fields of simple cells in the visual cortex.[58] The capacity of sparse codes may be increased by the simultaneous use of temporal coding, as found in the locust olfactory system.[59]
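Population sparseness in one time period, as defined above, is simply the fraction of neurons active; a toy sketch (the rate vector is hypothetical):

```python
# Population sparseness sketch: fraction of neurons active (firing above
# a threshold rate) in a single time window.

def population_sparseness(rates, threshold=1.0):
    """Fraction of neurons whose rate exceeds threshold in one window."""
    active = sum(1 for r in rates if r > threshold)
    return active / len(rates)

# Hypothetical rates (spikes/s) of 10 neurons in one window:
rates = [0.0, 0.2, 12.5, 0.0, 0.1, 30.0, 0.0, 0.4, 0.0, 7.9]
s = population_sparseness(rates)   # 3 of 10 neurons active -> 0.3
```

A different encoded item would activate a different small subset, yielding a similarly low fraction.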

Code sparseness may also refer to a small number of basic patterns used to encode the signals. Given a potentially large set of input patterns, sparse coding algorithms (e.g. sparse autoencoder) use a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.

Mathematical modelling[edit]

Most models of sparse coding are based on the linear generative model.[60] In this model, the symbols are combined in a linear fashion to approximate the input.

More formally, given a set of k-dimensional real-valued input vectors ξ ∈ ℝ^k, the goal of sparse coding is to determine n k-dimensional basis vectors b₁, …, b_n ∈ ℝ^k along with a sparse n-dimensional vector of weights or coefficients s ∈ ℝ^n for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector: ξ ≈ Σⱼ sⱼbⱼ.[61]
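The linear generative model can be sketched concretely as follows; the dimensions, basis matrix and coefficient values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 20, 50                      # input dimension k, number of basis vectors n
B = rng.normal(size=(k, n))        # columns of B are the basis vectors b_j

# A sparse coefficient vector: only 3 of the 50 entries are non-zero.
s = np.zeros(n)
s[[3, 17, 41]] = [1.5, -2.0, 0.7]

# The linear generative model: the input is a linear combination
# of the basis vectors weighted by the sparse coefficients.
x = B @ s
print(np.allclose(x, 1.5 * B[:, 3] - 2.0 * B[:, 17] + 0.7 * B[:, 41]))  # True
```

A sparse coding algorithm works in the opposite direction: given many input vectors x, it must learn both the basis B and, for each input, a coefficient vector s with few non-zero entries.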

The codings generated by algorithms implementing a linear generative model can be classified into codings with soft sparseness and those with hard sparseness.[60] These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, no or hardly any small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.[60]
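The distinction between soft and hard sparseness can be illustrated numerically. In this sketch (distribution, scale, and threshold are arbitrary illustrative choices), a heavy-tailed Laplacian coefficient distribution stands in for soft sparseness, and clamping its small values to exactly zero stands in for hard sparseness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# "Soft" sparseness: heavy-tailed (Laplacian) coefficients -- many small
# but non-zero values, so many basis vectors are at least weakly active.
soft = rng.laplace(scale=1.0, size=n)

# "Hard" sparseness: the same coefficients with small values clamped to
# exactly zero, so few basis vectors are active at all.
hard = np.where(np.abs(soft) > 2.0, soft, 0.0)

def active_fraction(coeffs):
    """Fraction of coefficients that are exactly non-zero."""
    return np.count_nonzero(coeffs != 0) / coeffs.size

print(active_fraction(soft))   # essentially 1.0: nearly every coefficient non-zero
print(active_fraction(hard))   # roughly exp(-2) ~ 0.135 of coefficients survive
```

The hard-sparse vector activates far fewer "neurons", which is what makes it attractive from a metabolic perspective.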

Another measure of coding is whether it is critically complete or overcomplete. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is overcomplete. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise.[62] The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 x 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.[60]

Other models are based on matching pursuit, a sparse approximation algorithm which finds the "best matching" projections of multidimensional data, and dictionary learning, a representation learning method which aims to find a sparse matrix representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.[63][64][65]
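The greedy core of matching pursuit can be sketched in a few lines: repeatedly pick the dictionary column ("atom") that best matches the current residual and subtract its projection. The dictionary and signal below are synthetic illustrations, not data from the cited work:

```python
import numpy as np

def matching_pursuit(x, D, n_iters=10):
    """Greedy matching pursuit over a dictionary D whose columns
    ("atoms") are assumed to have unit norm. Returns the sparse
    coefficients and the final residual."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iters):
        corr = D.T @ residual          # correlation of residual with each atom
        j = np.argmax(np.abs(corr))    # best-matching atom
        coeffs[j] += corr[j]           # accumulate its projection coefficient
        residual -= corr[j] * D[:, j]  # remove that component from the residual
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(30, 100))
D /= np.linalg.norm(D, axis=0)         # normalise atoms to unit length
x = 2.0 * D[:, 5] - 1.0 * D[:, 42]     # a signal built from just 2 atoms
coeffs, residual = matching_pursuit(x, D)
print(np.linalg.norm(residual), np.linalg.norm(x))  # residual shrinks below the signal norm
```

Each iteration removes a non-negative amount of residual energy, so the approximation improves monotonically; orthogonal matching pursuit refines this by re-projecting onto all selected atoms at each step.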

Overall, despite their rigorous mathematical descriptions, the above models do not explain the physical mechanism by which neurons could perform such algorithms.

Biological evidence[edit]

Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific associative memories in which only a few neurons out of a population respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli. Theoretical work on sparse distributed memory has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations.[66] Experimentally, sparse representations of sensory information have been observed in many systems, including vision,[67] audition,[68] touch,[69] and olfaction.[70] In the Drosophila olfactory system, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories.[71] Sparseness is controlled by a negative feedback circuit between Kenyon cells and GABAergic anterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.[72]

However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.

Implications of spiking neuron models[edit]

What are the tasks of neurons when processing signals from the external and internal environment? First, neurons must create an information-efficient code. Second, they must create an energy-efficient code. These requirements lead to code sparseness in the sense of a small number of elements in a fast time window and a small set of basic code units whose combinations can encode complex information. It follows that each component should be information-rich. In other words, the neural code must combine sparseness and richness; these are not mutually exclusive but complementary requirements.

The question arises: can a neuron's spike be information-rich if it is a discrete event with no internal characteristics? In this formulation the question becomes rhetorical, and the answer is negative. Unfortunately, all of the above models are based on the assumption that the action potentials of neurons are identical. But is that really so? Moreover, are action potentials really spikes? The question may sound strange, as most studies use the two words as synonyms. The reason is a long tradition of portraying action potentials as sharp points distributed along the time axis with varying density:

[Image: examples of bursting spike trains, drawn as sharp point events along the time axis]

Maybe the action potentials are actually sharp points? Not really. They are simplified this way to make them convenient for the model. "The spike is added manually for aesthetic purposes and to fool the reader into believing that this is a spiking neuron …All spikes are implicitly assumed to be identical in size and duration … Despite all these drawbacks, the integrate-and-fire model is an acceptable sacrifice for a mathematician who wants to prove theorems and derive analytical expressions. However, using the model might be a waste of time."[73]

To understand whether any spiking neuron model reflects reality, we must turn to the temporal level of the neuron itself. If we increase the resolution along the time axis, the picture changes dramatically and shows that neurons do not fire with sharp spikes but vibrate with soft waves. After decades of looking at discrete units where there actually are waves, we come back to the question: "What is the structure of a neural code that allows such high rates of information transmission? ... Nature has built computing machinery of surprising precision and adaptability ... Our story began, more or less, with Adrian’s discovery that spikes are the units out of which our perceptions must be built. We end with the idea that each of these units makes a definite and measurable contribution to those perceptions. The individual spike, so often averaged in with its neighbors, deserves more respect."[74]

Symphonic Neural Code[edit]

The symphonic neural code hypothesis assumes that the neural code is not digital but analog-digital: each action potential, while being a discrete unit of code, also contains internal parameters as a continuous phenomenon.[36]

This model uses an analogy with a musical code (musical notation). In this sense, each action potential is a note of the music of the brain, i.e., it has individual characteristics of the waveform (period, amplitude, phase). These notes form a pattern of activity of a given neuron with a precise spatial-temporal organisation, which allows it to be part of the overall brain symphony with its melodies (frequency patterns), rhythms (phase patterns) and harmonies (the simultaneous existence of different patterns). The information density of each note (action potential) and each pause (resting potential) is very high. Thus, complex information can be encoded in a short activation/pause sequence and even within a single action potential. As a result, the system as a whole has tremendous computing power, efficiency and speed.

In some sense, this model combines many previous hypotheses. It shows that firing rate has a place in the overall code structure, but, as in music, tempo does not have an independent meaning. The model stresses the importance of the temporal (rhythmic) structure, which carries a heavy information load. It is certainly a phase-of-firing model, as the dynamics of activation and deactivation and the internal structure of each action potential's phase portrait allow signals to be encoded even within the time window of a phase shift within one activation cycle.

It is a population code model that considers neurons not as senseless or noisy components of a system that somehow produces order out of chaos, but as members of a brain orchestra of billions of musicians playing a unified symphony in which each part has its own meaning. It is certainly a correlation coding model, as each part has a position within the context of the whole structure. And it is a sparse coding model, since a symphony can consist of potentially endless combinations of a small set of base notes (code elements), and a small number of musicians (neurons) can take part in creating a complex combination. At the same time, it is a dense coding model, since each note has high informational content.

The use of musical terminology is not a metaphor but a physical analogy: the physics of the neural coding process is based on the same oscillatory and wave phenomena as the creation of the sounds we call music. The cardinal paradigm shift is that a neuron is viewed not as a producer of identical shots but as an oscillator with a complex phase portrait, and each action potential is paid due respect. All the fine logistics of organisation and the kinetics of processes at the intracellular and intercellular levels serve to create the parameters of neural oscillations. The model provides a detailed physical, mathematical and technological description of the neural coding process that elucidates the brain's informational, temporal and energy efficiency.

Moreover, treating the coding process as an interaction of oscillators with different parameters allows the model to approach the binding problem in a completely different way: it points to an actual physical mechanism of frequency and phase coupling, which creates a symphony of the mind as a harmonic structure while preserving the individual characteristics of each representation.[75] It also allows us to view pathologies currently considered enigmatic "mental disorders" (for example, autism and schizophrenia) as specific disturbances in encoding the world's signals and creating a coherent reality model.[76]

Traditional instruments (for example, EEG, MEG, fMRI) are not suited to deciphering the symphonic code, as they currently lack the necessary spatial-temporal resolution and do not measure neural activity directly. Other technologies (for example, microelectrode arrays and the patch clamp technique) are better suited to the task but have their own drawbacks. Some newer technologies, such as optogenetics, allow measuring and even controlling the activity of individual neurons.[77][78] The technologies will follow once the theoretical paradigm pays due respect to the individual action potential and to all the delicate dynamics of the spatial-temporal patterns of neural activity, which constitute the essence of the coding process.

See also[edit]

References[edit]

  1. ^ Sengupta, Biswa; Laughlin, Simon Barry; Niven, Jeremy Edward (23 January 2014). "Consequences of Converting Graded to Action Potentials upon Neural Information Coding and Energy Efficiency". PLOS Computational Biology. 10 (1): e1003439. Bibcode:2014PLSCB..10E3439S. doi:10.1371/journal.pcbi.1003439. PMC 3900385. PMID 24465197.
  2. ^ Gerstner, Wulfram; Kistler, Werner M. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. ISBN 978-0-521-89079-3.
  3. ^ a b c Stein RB, Gossen ER, Jones KE (May 2005). "Neuronal variability: noise or part of the signal?". Nat. Rev. Neurosci. 6 (5): 389–97. doi:10.1038/nrn1668. PMID 15861181. S2CID 205500218.
  4. ^ a b c d e f g h i Gerstner, Wulfram. (2002). Spiking neuron models : single neurons, populations, plasticity. Kistler, Werner M., 1969-. Cambridge, U.K.: Cambridge University Press. ISBN 0-511-07817-X. OCLC 57417395.
  5. ^ The Memory Code.
  6. ^ Chen, G; Wang, LP; Tsien, JZ (2009). "Neural population-level memory traces in the mouse hippocampus". PLOS ONE. 4 (12): e8256. Bibcode:2009PLoSO...4.8256C. doi:10.1371/journal.pone.0008256. PMC 2788416. PMID 20016843.
  7. ^ Zhang, H; Chen, G; Kuang, H; Tsien, JZ (Nov 2013). "Mapping and deciphering neural codes of NMDA receptor-dependent fear memory engrams in the hippocampus". PLOS ONE. 8 (11): e79454. Bibcode:2013PLoSO...879454Z. doi:10.1371/journal.pone.0079454. PMC 3841182. PMID 24302990.
  8. ^ Brain Decoding Project.
  9. ^ The Simons Collaboration on the Global Brain.
  10. ^ Fernando, Subha; Yamada, Koichi; Marasinghe, Ashu (July 2011). "Observed Stent's anti-Hebbian postulate on dynamic stochastic computational synapses". The 2011 International Joint Conference on Neural Networks. IEEE: 1336–1343. doi:10.1109/ijcnn.2011.6033379. ISBN 978-1-4244-9635-8. S2CID 14983385.
  11. ^ Burcas G.T & Albright T.D. Gauging sensory representations in the brain.
  12. ^ Gerstner W, Kreiter AK, Markram H, Herz AV (November 1997). "Neural codes: firing rates and beyond". Proc. Natl. Acad. Sci. U.S.A. 94 (24): 12740–1. Bibcode:1997PNAS...9412740G. doi:10.1073/pnas.94.24.12740. PMC 34168. PMID 9398065.
  13. ^ Adrian ED, Zotterman Y (1926). "The impulses produced by sensory nerve endings: Part II: The response of a single end organ". J Physiol. 61 (2): 151–171. doi:10.1113/jphysiol.1926.sp002281. PMC 1514782. PMID 16993780.
  14. ^ Kandel, E.; Schwartz, J.; Jessel, T.M. (1991). Principles of Neural Science (3rd ed.). Elsevier. ISBN 978-0444015624.
  15. ^ McKenna, T.M.; McMullen, T.A.; Shlesinger, M.F. (1994). "The brain as a dynamic physical system". Neuroscience. 60 (3): 587–605. doi:10.1016/0306-4522(94)90489-8. PMID 7936189. S2CID 20711473.
  16. ^ Dayan, Peter; Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Massachusetts Institute of Technology Press. ISBN 978-0-262-04199-7.
  17. ^ Gollisch, T.; Meister, M. (2008-02-22). "Rapid Neural Coding in the Retina with Relative Spike Latencies". Science. 319 (5866): 1108–1111. Bibcode:2008Sci...319.1108G. doi:10.1126/science.1149639. ISSN 0036-8075. PMID 18292344. S2CID 1032537.
  18. ^ Forrest MD (2014). "Intracellular Calcium Dynamics Permit a Purkinje Neuron Model to Perform Toggle and Gain Computations Upon its Inputs". Frontiers in Computational Neuroscience. 8: 86. doi:10.3389/fncom.2014.00086. PMC 4138505. PMID 25191262.
  19. ^ Forrest MD (December 2014). "The sodium-potassium pump is an information processing element in brain computation". Frontiers in Physiology. 5 (472): 472. doi:10.3389/fphys.2014.00472. PMC 4274886. PMID 25566080.
  20. ^ Singh & Levy, "A consensus layer V pyramidal neuron can sustain interpulse-interval coding ", PLoS ONE, 2017
  21. ^ a b Thorpe, S.J. (1990). "Spike arrival times: A highly efficient coding scheme for neural networks". In Eckmiller, R.; Hartmann, G.; Hauske, G. (eds.). Parallel processing in neural systems and computers (PDF). North-Holland. pp. 91–94. ISBN 978-0-444-88390-2.
  22. ^ Butts DA, Weng C, Jin J, et al. (September 2007). "Temporal precision in the neural code and the timescales of natural vision". Nature. 449 (7158): 92–5. Bibcode:2007Natur.449...92B. doi:10.1038/nature06105. PMID 17805296. S2CID 4402057.
  23. ^ J. Leo van Hemmen, TJ Sejnowski. 23 Problems in Systems Neuroscience. Oxford Univ. Press, 2006. p.143-158.
  24. ^ a b c Theunissen, F; Miller, JP (1995). "Temporal Encoding in Nervous Systems: A Rigorous Definition". Journal of Computational Neuroscience. 2 (2): 149–162. doi:10.1007/bf00961885. PMID 8521284. S2CID 206786736.
  25. ^ Stevens, Charles; Zador, Anthony (1995). "The enigma of the brain". Current Biology. 5 (12). Retrieved August 4, 2012.
  26. ^ Jolivet, Renaud; Rauch, Alexander; Lüscher, Hans-Rudolf; Gerstner, Wulfram (2006-08-01). "Predicting spike timing of neocortical pyramidal neurons by simple threshold models". Journal of Computational Neuroscience. 21 (1): 35–49. doi:10.1007/s10827-006-7074-5. ISSN 1573-6873. PMID 16633938. S2CID 8911457.
  27. ^ Kostal L, Lansky P, Rospars JP (November 2007). "Neuronal coding and spiking randomness". Eur. J. Neurosci. 26 (10): 2693–701. doi:10.1111/j.1460-9568.2007.05880.x. PMID 18001270. S2CID 15367988.
  28. ^ Gupta, Nitin; Singh, Swikriti Saran; Stopfer, Mark (2016-12-15). "Oscillatory integration windows in neurons". Nature Communications. 7: 13808. Bibcode:2016NatCo...713808G. doi:10.1038/ncomms13808. ISSN 2041-1723. PMC 5171764. PMID 27976720.
  29. ^ Geoffrois, E.; Edeline, J.M.; Vibert, J.F. (1994). "Learning by Delay Modifications". In Eeckman, Frank H. (ed.). Computation in Neurons and Neural Systems. Springer. pp. 133–8. ISBN 978-0-7923-9465-5.
  30. ^ Sjöström, Jesper, and Wulfram Gerstner. "Spike-timing dependent plasticity." Spike-timing dependent plasticity 35 (2010).
  31. ^ Gollisch, T.; Meister, M. (22 February 2008). "Rapid Neural Coding in the Retina with Relative Spike Latencies". Science. 319 (5866): 1108–1111. Bibcode:2008Sci...319.1108G. doi:10.1126/science.1149639. PMID 18292344. S2CID 1032537.
  32. ^ Wainrib, Gilles; Michèle, Thieullen; Khashayar, Pakdaman (7 April 2010). "Intrinsic variability of latency to first-spike". Biological Cybernetics. 103 (1): 43–56. doi:10.1007/s00422-010-0384-8. PMID 20372920. S2CID 7121609.
  33. ^ Victor, Johnathan D (2005). "Spike train metrics". Current Opinion in Neurobiology. 15 (5): 585–592. doi:10.1016/j.conb.2005.08.002. PMC 2713191. PMID 16140522.
  34. ^ Hallock, Robert M.; Di Lorenzo, Patricia M. (2006). "Temporal coding in the gustatory system". Neuroscience & Biobehavioral Reviews. 30 (8): 1145–1160. doi:10.1016/j.neubiorev.2006.07.005. PMID 16979239. S2CID 14739301.
  35. ^ Carleton, Alan; Accolla, Riccardo; Simon, Sidney A. (2010). "Coding in the mammalian gustatory system". Trends in Neurosciences. 33 (7): 326–334. doi:10.1016/j.tins.2010.04.002. PMC 2902637. PMID 20493563.
  36. ^ a b Tregub, S. (2021). "Algorithm of the Mind: Teleological Transduction Theory." in Symphony of Matter and Mind. ISBN 9785604473948
  37. ^ a b Montemurro, Marcelo A.; Rasch, Malte J.; Murayama, Yusuke; Logothetis, Nikos K.; Panzeri, Stefano (2008). "Phase-of-Firing Coding of Natural Visual Stimuli in Primary Visual Cortex". Current Biology. 18 (5): 375–380. doi:10.1016/j.cub.2008.02.023. PMID 18328702.
  38. ^ Fries P, Nikolić D, Singer W (July 2007). "The gamma cycle". Trends Neurosci. 30 (7): 309–16. doi:10.1016/j.tins.2007.05.005. PMID 17555828. S2CID 3070167.
  39. ^ Spike arrival times: A highly efficient coding scheme for neural networks Archived 2012-02-15 at the Wayback Machine, SJ Thorpe - Parallel processing in neural systems, 1990
  40. ^ a b c Havenith MN, Yu S, Biederlack J, Chen NH, Singer W, Nikolić D (June 2011). "Synchrony makes neurons fire in sequence, and stimulus properties determine who is ahead". J. Neurosci. 31 (23): 8570–84. doi:10.1523/JNEUROSCI.2817-10.2011. PMC 6623348. PMID 21653861.
  41. ^ Maunsell JH, Van Essen DC (May 1983). "Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation". J. Neurophysiol. 49 (5): 1127–47. doi:10.1152/jn.1983.49.5.1127. PMID 6864242. S2CID 8708245.
  42. ^ Sachs, Murray B.; Young, Eric D. (November 1979). "Representation of steady-state vowels in the temporal aspects of the discharge patterns of populations of auditory-nerve fibers". The Journal of the Acoustical Society of America. 66 (5): 1381–1403. Bibcode:1979ASAJ...66.1381Y. doi:10.1121/1.383532. PMID 500976.
  43. ^ Miller, M.I.; Sachs, M.B. (June 1984). "Representation of voice pitch in discharge patterns of auditory-nerve fibers". Hearing Research. 14 (3): 257–279. doi:10.1016/0378-5955(84)90054-6. PMID 6480513. S2CID 4704044.
  44. ^ Miller, M.I.; Sachs, M.B. (1983). "Representation of stop consonants in the discharge patterns of auditory-nerve fibrers". The Journal of the Acoustical Society of America. 74 (2): 502–517. Bibcode:1983ASAJ...74..502M. doi:10.1121/1.389816. PMID 6619427.
  45. ^ Freeman, Walter J. (1992). "Tutorial on Neurobiology: From Single Neurons to Brain Chaos". International Journal of Bifurcation and Chaos. 02 (3): 451–482. Bibcode:1992IJBC....2..451F. doi:10.1142/S0218127492000653. ISSN 0218-1274.
  46. ^ Victor, J. D.; Purpura, K. P. (1996). "Nature and precision of temporal coding in visual cortex: a metric-space analysis". Journal of Neurophysiology. 76 (2): 1310–1326. doi:10.1152/jn.1996.76.2.1310. ISSN 0022-3077. PMID 8871238.
  47. ^ Wu S, Amari S, Nakahara H (May 2002). "Population coding and decoding in a neural field: a computational study". Neural Comput. 14 (5): 999–1026. doi:10.1162/089976602753633367. PMID 11972905. S2CID 1122223.
  48. ^ Schneidman, E; Berry, MJ; Segev, R; Bialek, W (2006), "Weak Pairwise Correlations Imply Strongly Correlated Network States in a Neural Population", Nature, 440 (7087): 1007–1012, arXiv:q-bio/0512013, Bibcode:2006Natur.440.1007S, doi:10.1038/nature04701, PMC 1785327, PMID 16625187
  49. ^ Amari, SL (2001), "Information Geometry on Hierarchy of Probability Distributions", IEEE Transactions on Information Theory, 47 (5): 1701–1711, CiteSeerX, doi:10.1109/18.930911
  50. ^ Onken, A; Grünewälder, S; Munk, MHJ; Obermayer, K (2009), "Analyzing Short-Term Noise Dependencies of Spike-Counts in Macaque Prefrontal Cortex Using Copulas and the Flashlight Transformation", PLOS Comput Biol, 5 (11): e1000577, Bibcode:2009PLSCB...5E0577O, doi:10.1371/journal.pcbi.1000577, PMC 2776173, PMID 19956759
  51. ^ Johnson, KO (Jun 1980). "Sensory discrimination: neural processes preceding discrimination decision". J Neurophysiol. 43 (6): 1793–815. doi:10.1152/jn.1980.43.6.1793. PMID 7411183.
  52. ^ Panzeri; Schultz; Treves; Rolls (1999). "Correlations and the encoding of information in the nervous system". Proc Biol Sci. 266 (1423): 1001–12. doi:10.1098/rspb.1999.0736. PMC 1689940. PMID 10610508.
  53. ^ deCharms, RC; Merzenich, MM (Jun 1996). "Primary cortical representation of sounds by the coordination of action-potential timing". Nature. 381 (6583): 610–3. Bibcode:1996Natur.381..610D. doi:10.1038/381610a0. PMID 8637597. S2CID 4258853.
  54. ^ Dayan P & Abbott LF. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, Massachusetts: The MIT Press; 2001. ISBN 0-262-04199-5
  55. ^ Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W. Spikes: Exploring the Neural Code. Cambridge, Massachusetts: The MIT Press; 1999. ISBN 0-262-68108-0
  56. ^ Mathis A, Herz AV, Stemmler MB (July 2012). "Resolution of nested neuronal representations can be exponential in the number of neurons". Phys. Rev. Lett. 109 (1): 018103. Bibcode:2012PhRvL.109a8103M. doi:10.1103/PhysRevLett.109.018103. PMID 23031134.
  57. ^ Földiák P, Endres D, Sparse coding, Scholarpedia, 3(1):2984, 2008.
  58. ^ Olshausen, Bruno A; Field, David J (1996). "Emergence of simple-cell receptive field properties by learning a sparse code for natural images" (PDF). Nature. 381 (6583): 607–609. Bibcode:1996Natur.381..607O. doi:10.1038/381607a0. PMID 8637596. S2CID 4358477. Archived from the original (PDF) on 2015-11-23. Retrieved 2016-03-29.
  59. ^ Gupta, N; Stopfer, M (6 October 2014). "A temporal channel for information in sparse sensory coding". Current Biology. 24 (19): 2247–56. doi:10.1016/j.cub.2014.08.021. PMC 4189991. PMID 25264257.
  60. ^ a b c d Rehn, Martin; Sommer, Friedrich T. (2007). "A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields" (PDF). Journal of Computational Neuroscience. 22 (2): 135–146. doi:10.1007/s10827-006-0003-9. PMID 17053994. S2CID 294586.
  61. ^ Lee, Honglak; Battle, Alexis; Raina, Rajat; Ng, Andrew Y. (2006). "Efficient sparse coding algorithms" (PDF). Advances in Neural Information Processing Systems.
  62. ^ Olshausen, Bruno A.; Field, David J. (1997). "Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?" (PDF). Vision Research. 37 (23): 3311–3325. doi:10.1016/s0042-6989(97)00169-7. PMID 9425546.
  63. ^ Zhang, Zhifeng; Mallat, Stephane G.; Davis, Geoffrey M. (July 1994). "Adaptive time-frequency decompositions". Optical Engineering. 33 (7): 2183–2192. Bibcode:1994OptEn..33.2183D. doi:10.1117/12.173207. ISSN 1560-2303.
  64. ^ Pati, Y. C.; Rezaiifar, R.; Krishnaprasad, P. S. (November 1993). Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. Proceedings of 27th Asilomar Conference on Signals, Systems and Computers. pp. 40–44 vol.1. CiteSeerX doi:10.1109/ACSSC.1993.342465. ISBN 978-0-8186-4120-6. S2CID 16513805.
  65. ^ Needell, D.; Tropp, J.A. (2009-05-01). "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples". Applied and Computational Harmonic Analysis. 26 (3): 301–321. arXiv:0803.2392. doi:10.1016/j.acha.2008.07.002. ISSN 1063-5203.
  66. ^ Kanerva, Pentti. Sparse distributed memory. MIT press, 1988
  67. ^ Vinje, WE; Gallant, JL (2000). "Sparse coding and decorrelation in primary visual cortex during natural vision". Science. 287 (5456): 1273–1276. Bibcode:2000Sci...287.1273V. CiteSeerX doi:10.1126/science.287.5456.1273. PMID 10678835.
  68. ^ Hromádka, T; Deweese, MR; Zador, AM (2008). "Sparse representation of sounds in the unanesthetized auditory cortex". PLOS Biol. 6 (1): e16. doi:10.1371/journal.pbio.0060016. PMC 2214813. PMID 18232737.
  69. ^ Crochet, S; Poulet, JFA; Kremer, Y; Petersen, CCH (2011). "Synaptic mechanisms underlying sparse coding of active touch". Neuron. 69 (6): 1160–1175. doi:10.1016/j.neuron.2011.02.022. PMID 21435560.
  70. ^ Ito, I; Ong, RCY; Raman, B; Stopfer, M (2008). "Sparse odor representation and olfactory learning". Nat Neurosci. 11 (10): 1177–1184. doi:10.1038/nn.2192. PMC 3124899. PMID 18794840.
  71. ^ A sparse memory is a precise memory. Oxford Science blog. 28 Feb 2014.
  72. ^ Lin, Andrew C., et al. "Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination." Nature Neuroscience 17.4 (2014): 559-568.
  73. ^ Izhikevich, Eugene M. (2010). Dynamical systems in neuroscience : the geometry of excitability and bursting. Cambridge, Mass.: MIT Press. ISBN 978-0-262-51420-0. OCLC 457159828.
  74. ^ Rieke, F. (1999). Spikes : exploring the neural code. Cambridge, Mass.: MIT. ISBN 0-262-68108-0. OCLC 42274482.
  75. ^ Tregub, S. (2021). "Harmonies of the Mind: Physics and Physiology of Self." in Symphony of Matter and Mind. ISBN 9785604473962
  76. ^ Tregub, S. (2021). Dissonances of the Mind: Psychopathology as Disturbance of the Brain Technology. in Symphony of Matter and Mind. ISBN 9785604473986
  77. ^ Karl Deisseroth, Lecture. "Personal Growth Series: Karl Deisseroth on Cracking the Neural Code." Google Tech Talks. November 21, 2008.
  78. ^ Han X, Qian X, Stern P, Chuong AS, Boyden ES. "Informational lesions: optical perturbations of spike timing and neural synchrony via microbial opsin gene fusions." Cambridge, Massachusetts: MIT Media Lab, 2009.