Spreading-activation model

In psycholinguistics and neurolinguistics, the spreading-activation model is a computational model of sentence and lexical production first proposed by Gary S. Dell in the late 1970s.[1] The model describes word and sentence production as a mapping from a conceptual (semantic) representation to a phonological representation, accomplished via spreading activation within a lexical network. This lexical network consists of roughly three layers of nodes, representing semantic features, words, and phonemes (morpheme nodes are present in some versions). Nodes in the three layers (semantic, lexical, and phonological) are connected by excitatory, bidirectional links, making processing interactive. In addition to describing the mental processes associated with (spoken) word and sentence production, the model attempts to account for speech errors (e.g., spoonerisms, Freudian slips, and malapropisms) in both normal and brain-lesioned individuals.

Spreading Activation

According to the model, the resting level of activation for a node is zero. Beyond this, three other components of spreading activation are important (a short simulation after the list illustrates all three):

  1. The concept of spreading specifies that when a node has an activation level greater than zero, it will send some proportion of its activation level to all nodes connected to it.
  2. Summation specifies that when the activation that was sent out reaches its destination node, it adds to that node's current activation level.
  3. Decay specifies that the activation level of a node will decrease exponentially toward zero over time.
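
Below is a minimal numerical sketch of these three components in Python. The three-node chain, connection weights, decay rate, and step count are illustrative assumptions, not values from Dell's published parameterizations.

    import numpy as np

    # Toy network: node 0 = semantic feature, node 1 = word, node 2 = phoneme.
    # W[i, j] is the weight of the excitatory, bidirectional link from i to j.
    W = np.array([
        [0.0, 0.1, 0.0],
        [0.1, 0.0, 0.1],
        [0.0, 0.1, 0.0],
    ])

    q = 0.5              # decay rate: activation decays exponentially toward zero
    a = np.zeros(3)      # resting activation level of every node is zero
    a[0] = 1.0           # external input to the semantic node

    for t in range(1, 6):
        incoming = W.T @ a           # spreading: each node sends a proportion of
                                     # its activation to every connected node
        a = a * (1 - q) + incoming   # decay, then summation at the receiving nodes
        print(f"t={t}: {np.round(a, 4)}")

Running the loop shows activation flowing from the semantic node through the word node to the phoneme node, while every node simultaneously decays toward its resting level of zero.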

Linguistic Assumptions

The model rests on two general linguistic assumptions:

  1. Knowledge about language is organized by the brain at different levels, with each level having its own generative (or combinatorial) rules that govern how units at that level may be combined. For example, generative rules at the syntactic level dictate how sentences are formed from words, rules at the morphological level determine how words are formed from morphemes, and rules at the phonological level determine how words are formed from sounds.
  2. There is a clear distinction between the aforementioned generative rules, which code productive knowledge, and the lexical network, which codes nonproductive knowledge.

According to the model, generative rules are stated in terms of the categories at each unit level. For instance, syntactic rules are stated in terms of syntactic categories (noun, verb, adjective, etc.), morphological rules are stated in terms of morphological categories (stem, derivational affix, inflectional affix, etc.), and phonological rules in terms of phonological categories (consonant, vowel, etc.). For example, in the sentence 'The boy hit the ball', the word 'boy' would be encoded in the lexicon in the following manner: at the syntactic level, the categorical unit of noun would be activated at the word node, with generative rules defining with what words 'boy' may occur; at the morphological level, the categorical unit of stem would be activated at the morpheme node, with generative rules specifying how other morphemes may be added to the word; and at the phonological level, the categorical units of initial consonant and vowel would be activated at the phoneme node, with generative rules defining how the sounds may combine. As such, the model proposes that these two linguistic knowledge stores - namely, productive knowledge and nonproductive knowledge - are interactive.
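
As a rough illustration of this division of labor, the entry for 'boy' might be pictured as below. The data layout and category names are explanatory conventions, not Dell's notation.

    # The lexical network stores nonproductive knowledge: individual units
    # tagged with their categories at each level.
    lexicon = {
        "boy": {
            "syntactic_category": "noun",                # word-level unit
            "morphemes": [("boy", "stem")],              # morpheme-level unit
            "phonemes": [("b", "initial consonant"),     # phoneme-level units
                         ("ɔɪ", "vowel")],
        }
    }

    # Generative rules code productive knowledge and mention only categories,
    # never individual words: any noun can fill the noun slot.
    np_rule = ("NP", ["determiner", "noun"])             # e.g., "the boy"

The point of the separation is that the rule above licenses 'the boy' without knowing anything about 'boy' beyond its syntactic category.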

Model

According to previous research, semantic information about the to-be-retrieved word is strongly activated early in the word production process; in contrast, there is no evidence that phonological information about the sought-after word is retrieved this early.[2] In accordance with these findings, the spreading-activation model proposes a two-stage, interactive model of word production, with both stages driven by a retrieval mechanism based on interactive spreading activation. The two steps are as follows:

  1. In the first step, the conceptual representation (which is represented at the semantic level) is mapped onto a lexical representation. During this stage, the semantic features of the target concept are given a "jolt of activation," which spreads throughout the network. Dell et al.[2] developed a mathematical function, referred to as the "activation function," for determining the activation level (A) of a particular node (j) at a particular time (t) as a result of spreading activation. The function is as follows:

     A_j(t) = A_j(t-1)(1 - q) + \sum_i w_{ij} A_i(t-1)

     where A_j(t) is the activation of node j at time t, q is the decay rate, and w_{ij} is the connection weight from the source node i to the receiving node j. According to the model's activation function, each node's activation level is perturbed by noise with two components: (1) intrinsic noise and (2) activation noise.[3] After n time steps of spreading activation, the most activated word node of the proper syntactic category is selected.
  2. During the second stage, the word's sound pattern is retrieved; Dell refers to this as "phonological encoding." Phonological encoding begins with a "jolt of activation" to the word node selected in the previous stage. Activation then spreads in the same manner as in the first stage, in accordance with the activation function. After n time steps, the most highly activated phoneme nodes are selected and linked to slots in a phonological frame, which specifies the order in which they are to occur. (Both stages are sketched in the code below.)
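
A compact sketch of the two stages, using the activation function above, is given below. The node layout, weights, jolt size, noise level, and step counts are all assumptions chosen for illustration rather than fitted values from the model.

    import numpy as np

    rng = np.random.default_rng(1)

    nodes = ["feat_animal", "feat_feline", "word_cat", "word_dog",
             "ph_k", "ph_ae", "ph_t", "ph_d"]
    n = len(nodes)
    W = np.zeros((n, n))

    def link(i, j, w=0.1):               # excitatory, bidirectional connection
        W[i, j] = W[j, i] = w

    link(0, 2); link(1, 2)               # cat's shared + distinctive features
    link(0, 3)                           # dog shares only the "animal" feature
    link(2, 4); link(2, 5); link(2, 6)   # "cat" -> /k/, /ae/, /t/
    link(3, 7)                           # "dog" -> /d/ (other phonemes omitted)

    q = 0.5                              # decay rate

    def spread(a, n_steps, noise_sd=0.005):
        """n time steps of noisy, interactive spreading activation."""
        for _ in range(n_steps):
            a = a * (1 - q) + W.T @ a + rng.normal(0.0, noise_sd, n)
        return a

    # Stage 1: jolt the target's semantic features, then select the most
    # activated word node of the proper syntactic category (both are nouns).
    a = np.zeros(n)
    a[[0, 1]] = 1.0                      # "jolt of activation" to cat's features
    a = spread(a, n_steps=4)
    word = max([2, 3], key=lambda j: a[j])
    print("selected word:", nodes[word])

    # Stage 2 (phonological encoding): jolt the selected word node, spread
    # again, and link the most activated phonemes to phonological frame slots.
    a = np.zeros(n)
    a[word] = 1.0
    a = spread(a, n_steps=4)
    top3 = sorted(range(4, 8), key=lambda j: -a[j])[:3]
    print("selected phonemes:", [nodes[j] for j in top3])

Because links run in both directions, activation also feeds back from phonemes to words during stage 2; this is the interactivity that the error account below relies on.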

The construction of a representation at each level (i.e., the semantic, syntactic, and phonological levels) occurs simultaneously with that of the other levels; however, processing at lower levels (e.g., the phonological level) is more dependent on higher levels (e.g., the syntactic level) than the reverse. Thus, higher-level information is likely to be processed to a greater degree earlier in word retrieval than lower-level information. Despite this greater dependency of lower levels on higher levels, all levels still interact in both top-down and bottom-up directions.

Speech Errors

According to the model, speech errors occur when an incorrect word, morpheme, or phoneme unit is more activated than the correct one, resulting in the selection of the incorrect unit. Dell attributes speech errors largely to the spreading of activation and the construction of multiple simultaneous representations. For an incorrect item to be selected, it must be a member of the same category as the target item. Semantic errors, such as saying "dog" when one meant to say "cat," occur because semantically related words share features: the word node for 'dog' receives activation from the target features of 'cat,' and if the activation level for 'dog' exceeds that for 'cat,' it will be mistakenly selected. Form errors (e.g., saying "mat" instead of "cat") are thought to occur because of the interactive flow of activation between the word and phoneme nodes; when producing the word 'cat,' its phonemes (/k/, /æ/, and /t/) are activated, and these in turn send activation back to words that share phonemes with 'cat.' Mixed errors (e.g., saying "rat" when one meant to say "cat") reflect both form and meaning similarity with the target, and the model predicts them to occur often. Unrelated errors (e.g., saying "fog" instead of "cat") involve the substitution of words that are unrelated to the target, both semantically and phonologically; this type of error is unlikely unless the system is degraded enough for noise to influence selection. Lastly, non-word errors (e.g., saying "zat" instead of "cat") occur when noise and interference from activated non-target words make incorrect phonemes more activated than correct ones.
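
This taxonomy can be restated as a small classifier. The two similarity tests below are deliberately crude stand-ins for the model's graded feature and phoneme overlap, and the word lists are invented for the 'cat' example.

    # Toy classifier for the error categories described above.
    SEMANTIC_NEIGHBORS = {"cat": {"dog", "rat"}}     # illustrative neighbor sets
    LEXICON = {"cat", "dog", "rat", "mat", "fog"}

    def shares_form(a, b):
        # Crude form-similarity test: any shared segment in the same position.
        return any(x == y for x, y in zip(a, b))

    def classify_error(target, response):
        if response == target:
            return "correct"
        if response not in LEXICON:
            return "non-word error"                  # "zat" for "cat"
        semantic = response in SEMANTIC_NEIGHBORS.get(target, set())
        formal = shares_form(target, response)
        if semantic and formal:
            return "mixed error"                     # "rat" for "cat"
        if semantic:
            return "semantic error"                  # "dog" for "cat"
        if formal:
            return "form error"                      # "mat" for "cat"
        return "unrelated error"                     # "fog" for "cat"

    for response in ["dog", "mat", "rat", "fog", "zat"]:
        print(response, "->", classify_error("cat", response))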

Experimental Simulation Variations

Weight-Decay Model

In one study, Foygel and Dell[2] adjusted the model's parameters (global connection weight and decay rate) and network structure to fit the error patterns of normal control speakers who had been given the Philadelphia Naming Test (PNT). The fitting process involved selecting weight and decay parameters that minimized the difference between the patient and model error patterns. The normal value for the global weight (i.e., the weight of the connections between nodes) was 0.1, and the normal value for the decay rate was 0.5.[2] The model was then fit to the performance of twenty-one fluent aphasic patients on the PNT. Aphasic participants were divided into two groups based on which of two dimensions - global weight or decay rate - their brain damage was associated with. Patients whose weight value was below the normal value tended to produce more errors that were unrelated to the target word, both semantically and phonologically. Such results are consistent with the prediction that weakened connection weights produce errors that are less related to the target word. In contrast, patients with a decay rate higher than the normal rate (and relatively normal global weight values) tended to make word errors that were related to the target word - specifically, these patients made more semantic and mixed errors (relative to patients with roughly normal decay rates and lower-than-average global weight values).
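
The fitting procedure described above can be sketched as a grid search that minimizes the distance between the model's and a patient's error proportions. Everything below is invented for illustration: simulate_error_pattern is a hypothetical stand-in for running the production model over many PNT trials, and the grid values merely bracket the reported normal values (weight 0.1, decay 0.5).

    import numpy as np

    def simulate_error_pattern(weight, decay):
        # Toy stand-in: correct responses fall off as weight drops below the
        # normal 0.1 and as decay rises above the normal 0.5. A real version
        # would run the spreading-activation model itself.
        correct = max(0.0, 1.0 - (0.1 - weight) * 8.0 - (decay - 0.5) * 1.5)
        related = min(1.0 - correct, (decay - 0.5) * 1.2)  # semantic/mixed-like
        other = 1.0 - correct - related                    # unrelated/non-word-like
        return np.array([correct, related, other])

    def fit_weight_decay(patient_pattern, weights, decays):
        """Grid search minimizing squared distance between error patterns."""
        best, best_err = None, float("inf")
        for w in weights:
            for q in decays:
                err = float(np.sum((simulate_error_pattern(w, q)
                                    - patient_pattern) ** 2))
                if err < best_err:
                    best, best_err = (w, q), err
        return best

    patient = np.array([0.65, 0.25, 0.10])   # hypothetical patient proportions
    grid_w = np.linspace(0.02, 0.10, 9)      # weights at or below the normal 0.1
    grid_q = np.linspace(0.50, 0.90, 9)      # decay at or above the normal 0.5
    print("best-fitting (weight, decay):",
          fit_weight_decay(patient, grid_w, grid_q))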

Semantic-Phonological Model

In this version of the model, brain damage is also associated with values on two parameters; however, the parameters differ from those of the weight-decay model. The first parameter represents the weight of the connections between semantic and lexical units, while the second represents the weight of the connections between lexical and phonological units. This model thus differs from the weight-decay model in that the global weight parameter is split into two components - namely, a semantic-lexical component and a lexical-phonological component - and the decay rate is no longer varied. Foygel and Dell suggest that tasks involving the use of word meaning activate the brain differently than tasks involving the acoustic or articulatory properties of words, implying that semantic knowledge and phonological knowledge are separable.[2] This separability is supported by a study by Levelt et al.,[4] which found that lexical selection is associated with activity in the occipital and parietal lobes, while phonological encoding is associated with activation around what is traditionally referred to as Wernicke's area. According to the results obtained from fitting aphasic patients to the model, patients with semantic-lexical lesions tend to make semantic, form, mixed, and unrelated word errors (i.e., semantic lesions are analogous to decay-rate lesions), whereas aphasics with lexical-phonological lesions tend to produce non-word errors (i.e., lexical-phonological lesions are analogous to global-weight lesions).
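
The structural contrast with the weight-decay model amounts to which parameters are free to be lesioned. A schematic comparison follows, with invented names rather than anything from Foygel and Dell's implementation:

    # Weight-decay model: one global weight plus a decay rate; a lesion
    # lowers the weight and/or raises the decay relative to normal values.
    WEIGHT_DECAY_PARAMS = {"weight": 0.1, "decay": 0.5}

    # Semantic-phonological model: the global weight splits into two
    # independently lesionable weights; decay is held at its normal value.
    SEM_PHON_PARAMS = {
        "s_weight": 0.1,   # semantic-to-lexical connection weight
        "p_weight": 0.1,   # lexical-to-phonological connection weight
        "decay": 0.5,      # fixed; no longer a lesionable dimension
    }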

References

  1. ^ Dell, G. S. (1986). "A spreading-activation theory of retrieval in sentence production." Psychological Review, 93(3), 283–321.
  2. ^ a b c d e Foygel, D., & Dell, G. S. (2000). "Models of impaired lexical access in speech production." Journal of Memory and Language, 43(2), 182–216.
  3. ^ Dell, G. S., Lawler, E. N., Harris, H. D., & Gordon, J. K. (2004). "Models of errors of omission in aphasic naming." Cognitive Neuropsychology, 21(2–4), 125–145.
  4. ^ Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). "An MEG study of picture naming." Journal of Cognitive Neuroscience, 10, 553–567.