Artificial grammar learning
Artificial grammar learning (AGL) is a paradigm of study within cognitive psychology and linguistics. Its goal is to investigate the processes that underlie human language learning by testing subjects' ability to learn a made-up grammar in a laboratory setting. It was developed to evaluate the processes of human language learning but has also been used to study implicit learning more generally. The area of interest is typically the subjects' ability to detect patterns and statistical regularities during a training phase and then to apply their new knowledge of those patterns in a testing phase. The testing phase can either use the symbols or sounds from the training phase or transfer those patterns to a new surface structure, i.e., a different set of symbols or sounds.
Many researchers propose that the rules of the artificial grammar are learned on an implicit level since the rules of the grammar are never explicitly presented to the participants. The paradigm has also recently been utilized for other areas of research such as language learning aptitude and to investigate which brain structures are involved in syntax acquisition and implicit learning.
More than half a century ago, George A. Miller established the paradigm of artificial grammar learning in order to investigate the influence of explicit grammar structures on human learning; he designed a grammar model of letters with different sequences. His research demonstrated that it was easier to remember a structured grammar sequence than a random sequence of letters. His explanation was that learners could identify the common characteristics between learned sequences and encode them into a memory set. He predicted that subjects could identify which letters would be most likely to appear together repeatedly as a sequence and which would not, and that subjects would use this information to form memory sets. Those memory sets then served participants as a strategy during later memory tests.
Reber doubted Miller's explanation. He claimed that if participants could encode the grammar rules as productive memory sets, then they should be able to verbalize their strategy in detail. He conducted research that led to the development of the modern AGL paradigm. This research used a synthetic-grammar learning model to test implicit learning, and AGL became the most widely used and tested model in the field. As in the original paradigm developed by Miller, participants were asked to memorize a list of letter strings created from an artificial grammar rule model. Only during the test phase were participants told that the letter sequences they had memorized were based on a set of rules. They were then instructed to categorize new letter strings, to which they had not previously been exposed, as "grammatical" (constructed from the grammar rules) or "randomly constructed". If subjects sorted the new strings correctly above chance level, it could be inferred that they had acquired the grammatical rule structure without any explicit instruction in the rules. Reber found that participants sorted new strings above chance level. While they reported using strategies during the sorting task, they could not actually verbalize those strategies: subjects could identify which strings were grammatically correct but could not identify the rules that composed grammatical strings.
This research was replicated and expanded upon by many others. The conclusions of most of these studies were congruent with Reber's hypothesis: the implicit learning process was done with no intentional learning strategies. These studies also identified common characteristics for the implicitly acquired knowledge:
1. An abstract representation of the rule set.
2. Unconscious strategies that are expressed through performance rather than verbal report.
The Modern AGL Paradigm
The modern AGL paradigm can be used to investigate explicit and implicit learning, although it is most often used to test implicit learning. In a typical AGL experiment, participants are required to memorize strings of letters previously generated by a specific grammar. The length of the strings usually ranges from 2 to 9 letters per string. An example of such a grammar is shown in figure 1.
Figure 1: Example of an artificial grammar. Ruleful strings: VXVS, TPTXVS. Unruleful strings: VXXXS, TPTPS.
To compose a grammatically "ruleful" string of letters according to the predetermined grammar rule, the letters must be paired as the model permits (figure 1). A string that violates the grammatical rule system is considered "unruleful", i.e., randomly constructed.
In a standard AGL implicit learning task, subjects are not told that the strings are based on a specific grammar. Instead, they are simply asked to memorize the letter strings for a later memory test. After the learning phase, subjects are told that the letter strings presented during the learning phase were based on specific rules, but they are not explicitly told what the rules are. During the test phase, subjects are instructed to categorize new letter strings as "ruleful" or "unruleful". The dependent variable usually measured is the percentage of correctly categorized strings. Implicit learning is considered successful when the percentage of correctly sorted strings is significantly higher than chance level; such a difference indicates a learning process that involves more than memorizing the presented letter strings.
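The generation and classification logic of such a task can be sketched in a few lines of Python. The finite-state grammar below is hypothetical (the actual transition diagram of figure 1 is not reproduced in this article), but it is consistent with the example strings given there: it accepts VXVS and TPTXVS and rejects VXXXS and TPTPS.

```python
import random

# Hypothetical finite-state grammar consistent with the example strings in
# figure 1 (the figure's actual transition diagram is not reproduced here).
# Each state maps to (letter, next_state) transitions; None marks a legal end.
GRAMMAR = {
    0: [("V", 1), ("T", 2)],
    1: [("X", 3)],
    2: [("P", 4)],
    3: [("V", 5), ("S", None)],
    4: [("T", 1)],
    5: [("S", None), ("T", 2)],
}

def generate_string(max_len=9):
    """Random walk through the grammar, returning a 'ruleful' string."""
    while True:
        state, letters = 0, []
        while state is not None and len(letters) < max_len:
            letter, state = random.choice(GRAMMAR[state])
            letters.append(letter)
        if state is None:  # keep only walks that reached a legal end state
            return "".join(letters)

def is_ruleful(string):
    """True if the string can be produced by walking the grammar."""
    def parse(state, rest):
        if not rest:
            return state is None
        if state is None:
            return False
        return any(letter == rest[0] and parse(nxt, rest[1:])
                   for letter, nxt in GRAMMAR[state])
    return parse(0, string)
```

In a simulated test phase, one would present a mixture of generated strings and foils and score the percentage classified correctly against the 50% chance level.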
The mechanism behind the implicit learning that is hypothesized to occur while people engage in artificial grammar learning is statistical learning or, more specifically, Bayesian learning. Bayesian learning takes into account the biases or "prior probability distributions" individuals have that contribute to the outcome of implicit learning tasks. These biases can be thought of as a probability distribution over the possible hypotheses, giving the probability that each hypothesis is correct. Because of the structure of the Bayesian model, the inferences output by the model take the form of a probability distribution rather than a single most probable event. This output distribution is the "posterior probability distribution": the posterior probability of each hypothesis is proportional to the probability of the observed data given that hypothesis (the likelihood), multiplied by the hypothesis's prior probability. This Bayesian model of learning is fundamental for understanding the pattern detection process involved in implicit learning and, therefore, the mechanisms that underlie the acquisition of artificial grammar rules.

It is hypothesized that the implicit learning of grammar involves predicting co-occurrences of certain words in a certain order. For example, "the dog chased the ball" is a sentence that can be learned as grammatically correct on an implicit level due to the high co-occurrence of "chased" among the words that follow "dog". A sentence like "the dog cat the ball" is implicitly recognized as grammatically incorrect due to the lack of utterances that contain those words paired in that specific order. This process is important for teasing apart thematic roles and parts of speech in grammatical processing (see grammar). While the labeling of thematic roles and parts of speech is explicit, the identification of words and parts of speech is implicit.
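The Bayesian updating described above can be illustrated with a toy model (purely illustrative; the hypotheses and probabilities below are invented, not drawn from the AGL literature). Each hypothesis specifies which letter bigrams a hidden grammar allows, and observing strings turns a uniform prior into a posterior via Bayes' rule:

```python
# Two invented hypotheses about which letter bigrams the hidden grammar
# allows, with a uniform prior over them.
HYPOTHESES = {
    "allows_VX_XV_VS": {"VX", "XV", "VS"},
    "allows_TP_PT_PS": {"TP", "PT", "PS"},
}
PRIOR = {h: 0.5 for h in HYPOTHESES}

def likelihood(string, allowed, eps=0.01):
    """P(string | hypothesis): each bigram in the string occurs with
    probability 1 - eps if the hypothesis allows it, eps otherwise."""
    p = 1.0
    for a, b in zip(string, string[1:]):
        p *= (1 - eps) if a + b in allowed else eps
    return p

def posterior(strings):
    """Posterior over hypotheses: posterior ∝ likelihood × prior."""
    post = dict(PRIOR)
    for s in strings:
        post = {h: p * likelihood(s, HYPOTHESES[h]) for h, p in post.items()}
        norm = sum(post.values())
        post = {h: p / norm for h, p in post.items()}
    return post
```

After observing a single string such as "VXVS" (bigrams VX, XV, VS), nearly all the posterior mass shifts to the first hypothesis, mirroring how a learner's beliefs about the grammar sharpen with exposure.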
Traditional approaches to AGL claim that the stored knowledge obtained during the learning phase is abstract. Other approaches argue that this stored knowledge is concrete and consists of exemplars of strings encountered during the learning phase, or "chunks" of these exemplars. In any case, it is assumed that the information stored in memory is retrieved in the test phase and used to aid decisions about letter strings. Three main approaches attempt to explain the AGL phenomena:
- Abstract Approach: According to this traditional approach, participants acquire an abstract representation of the artificial grammar rule in the learning stage. That abstract structure helps them to decide if the new string presented during the test phase is grammatical or randomly constructed.
- Concrete knowledge approach: This approach proposes that during the learning stage participants learn specific examples of strings and store them in memory. During the testing stage, participants do not sort the new strings according to an abstract rule; instead, they sort them according to their similarity to the examples stored in memory from the learning stage. There are multiple opinions concerning how concrete the learned knowledge really is. Brooks & Vokey argue that all of the knowledge stored in memory is represented as concrete, full examples studied during the learning stage, and that strings are sorted during the testing stage according to full representations of those examples. On the other hand, Perruchet & Pacteau claimed that the knowledge of the strings from the learning stage is stored in the form of "memory chunks", where two to three letters are learned as a sequence along with knowledge about their permitted location in the full string.
- Dual-factor approach: This approach, a dual-process learning model, combines the two approaches described above. It proposes that a person relies on concrete knowledge whenever possible; when concrete knowledge is unavailable (for example, in a transfer-of-learning task), the person falls back on abstract knowledge of the rules.
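The chunk-based view attributed to Perruchet & Pacteau above can be sketched as a simple familiarity score (an illustrative toy, not their actual model): test strings are rated by how many of their two- and three-letter fragments were seen during training.

```python
from collections import Counter

def chunks(string, sizes=(2, 3)):
    """All contiguous two- and three-letter fragments of a string."""
    return Counter(string[i:i + n] for n in sizes
                   for i in range(len(string) - n + 1))

def chunk_familiarity(test_string, training_strings):
    """Fraction of the test string's chunks that appeared in training --
    a crude stand-in for exemplar-based chunk strength."""
    seen = Counter()
    for s in training_strings:
        seen.update(chunks(s))
    test_chunks = chunks(test_string)
    hits = sum(n for chunk, n in test_chunks.items() if chunk in seen)
    return hits / sum(test_chunks.values())
```

With the training set ["VXVS", "TPTXVS"], the ruleful string "VXVS" scores higher than the unruleful "TPTPS", so a familiarity threshold alone could push classification above chance without any abstract rule.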
Research with amnesic patients suggests that the dual-factor approach may be the most accurate model. A series of experiments with amnesic patients supports the idea that AGL involves both abstract concepts and concrete exemplars. Amnesics classified stimuli as "grammatical" vs. "randomly constructed" just as well as participants in the control group. While able to complete the task successfully, amnesics could not explicitly recall grammatical "chunks" of the letter sequences, whereas the control group could. When performing the task with the same grammar rules but a different set of letters than those on which they had been trained, both amnesics and the control group completed the task (although performance was better when the task used the same letters as training). These results support the dual-factor approach to artificial grammar learning: people use abstract information to learn rules for grammars and concrete, exemplar-specific memory for chunks. Since the amnesics could not store specific chunks in memory, they completed the task using an abstract set of rules; the control group could store these chunks and (as evidenced by recall) did store these examples in memory for later reference.
AGL research has been criticized over the "automaticity question": is AGL an automatic process? During encoding (see encoding (memory)), performance can be automatic in the sense of occurring without conscious monitoring, that is, without conscious guidance by the performer's intentions. In the case of AGL, it has been claimed that implicit learning is an automatic process because it occurs with no intention of learning a specific grammar rule. This complies with the classic definition of an "automatic process" as a fast, unconscious, effortless process that may start unintentionally and, once triggered, continues to completion without the ability to stop it or ignore its consequences. That definition has been challenged many times, and alternative definitions of automatic processing have been proposed. Reber's presumption that AGL is automatic can be problematic in implying that any unintentional process is automatic in its essence.

When focusing on AGL tests, a few issues need to be addressed. The process is complex and involves both encoding and recall or retrieval. Both encoding and retrieval could be interpreted as automatic, since what was encoded during the learning stage is not intentionally required for the task performed during the test stage. Researchers need to differentiate between implicitness as it refers to the process of learning (knowledge encoding) and as it refers to performance during the test phase (knowledge retrieval). Knowledge encoded during training may include many aspects of the presented stimuli (whole strings, relations among elements, etc.). The contribution of the various components to performance depends on both the specific instructions in the acquisition phase and the requirements of the retrieval task. Therefore, the instructions at each phase determine whether that stage requires automatic processing, and each phase should be evaluated for automaticity separately.
One hypothesis that contradicts the automaticity of AGL involves the "mere-exposure effect": increased affect toward a stimulus resulting from nonreinforced, repeated exposure to it. Results from over 200 experiments on this effect indicate a positive relationship between mean "goodness" rating and frequency of stimulus exposure. Stimuli in these experiments included line drawings, polygons, and nonsense words (the types of stimuli used in AGL research). The experiments exposed participants to each stimulus up to 25 times; following each exposure, participants rated the degree to which the stimulus suggested "good" vs. "bad" affect on a 7-point scale. In addition to the main pattern of results, several experiments found that participants rated previously exposed items as having higher positive affect than novel items. Since implicit cognition should not reference previous study episodes, these effects on affect ratings should not have been observed if the processing of these stimuli were truly implicit. The results suggest that the differing categorization of strings may arise from differences in affect associated with the strings, not from implicitly learned grammar rules.
AI and Artificial Grammar Learning
Since the advent of computers and artificial intelligence, computer programs have been developed to simulate the implicit learning process observed in the AGL paradigm. The first AI programs adapted to simulate both natural and artificial grammar learning used the following basic structure:
Given: A set of grammatical sentences from some language.
Find: A procedure for recognizing and/or generating all grammatical sentences in that language.
An early model of AI grammar learning is Wolff's SNPR system. The program acquires a series of letters with no pauses or punctuation between words and sentences. It then examines the string in subsets, looks for common sequences of symbols, and defines "chunks" in terms of these sequences (these chunks are akin to the exemplar-specific chunks described for AGL). As the model acquires chunks through exposure, the chunks begin to replace the sequences of unbroken letters. When different chunks precede or follow a common chunk, the model groups them into a disjunctive class. For example, on encountering "the-dog-chased" and "the-cat-chased", it classifies "dog" and "cat" as members of the same class, since both precede "chased". While the model sorts chunks into classes, it does not explicitly label those classes (e.g., as noun or verb). Early AI models of grammar learning such as these ignored the effect of negative instances on grammar acquisition and lacked the ability to connect grammatical rules to pragmatics and semantics.

Newer models have attempted to factor in these details. The Unified Model attempts to take both factors into account. The model breaks grammar down according to "cues". Languages mark case roles using five possible cue types: word order, case marking, agreement, intonation, and verb-based expectation (see grammar). The influence each cue has over a language's grammar is determined by its "cue strength" and "cue validity". Both values are computed using the same formula, except that cue strength is estimated from experimental results while cue validity is estimated from corpus counts in language databases. The formula for cue strength/validity is as follows:
cue strength (or cue validity) = cue availability × cue reliability
Cue availability is the proportion of times the cue is available out of the times it is needed. Cue reliability is the proportion of times the cue is correct out of the total occurrences of the cue. By incorporating cue reliability along with cue availability, the Unified Model can account for the effects of negative instances of grammar, since it takes accuracy and not just frequency into account. As a result, it also accounts for semantic and pragmatic information, since cues that do not produce grammatical utterances in the appropriate context will have low cue strength and low cue validity. While MacWhinney's model simulates natural grammar learning, it also attempts to model the implicit learning processes observed in the AGL paradigm.
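With hypothetical counts (the numbers below are invented for illustration, not MacWhinney's data), the cue formula works out as follows for, say, the word-order cue:

```python
# Invented counts for a single cue (e.g., word order) over a sample of
# sentences in which a case role must be identified.
sentences_needing_cue = 1000  # times the cue is needed
cue_available = 900           # times the cue is present and usable
cue_correct = 810             # times the cue points to the right interpretation

cue_availability = cue_available / sentences_needing_cue  # 0.9
cue_reliability = cue_correct / cue_available             # 0.9

# cue strength / cue validity = cue availability * cue reliability
cue_validity = cue_availability * cue_reliability         # 0.81
```

A cue that is frequent but often misleading (high availability, low reliability) thus ends up with low validity, which is how the model penalizes cues that are contradicted by negative instances.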
Cognitive Neuroscience and the AGL Paradigm
Contemporary studies with AGL have attempted to identify which brain structures are involved in the acquisition of grammar and implicit learning. Agrammatic aphasic patients (see agrammatism) have been tested with the AGL paradigm. The results show that the breakdown of language in agrammatic aphasia is associated with an impairment in artificial grammar learning, indicating damage to domain-general neural mechanisms subserving both language and sequential learning. De Vries, Barth, Maiworm, Knecht, Zwitserlood & Flöel found that electrical stimulation of Broca's area enhances implicit learning of an artificial grammar. Direct current stimulation may therefore facilitate the acquisition of grammatical knowledge, a finding of potential interest for the rehabilitation of aphasia. Petersson, Folia & Hagoort examined the neurobiological correlates of syntax, the processing of structured sequences, by comparing fMRI results on artificial and natural language syntax. Based on AGL testing, they argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.
References
- Miller, G.A. (1958). "Free recall of redundant strings of letters". Journal of Experimental Psychology. 56 (6): 485–491. doi:10.1037/h0044933.
- Reber, A.S. (1967). "Implicit learning of artificial grammars". Journal of Verbal Learning and Verbal Behavior. 5 (6): 855–863. doi:10.1016/s0022-5371(67)80149-x.
- Mathews, R.C.; Buss, R.R.; Stanley, W.B.; Blanchard-Fields, F.; Cho, J.R.; Druhan, B. (1989). "Role of implicit and explicit processes in learning from examples: A synergistic effect". Journal of Experimental Psychology: Learning, Memory, and Cognition. 15: 1083–1100.
- Brooks, L.R.; Vokey, J.R. (1991). "Abstract analogies and abstracted grammars: Comments on Reber (1989) and Mathews et al. (1989)". Journal of Experimental Psychology: General. 120: 316–323.
- Perruchet, P.; Pacteau, C. (1990). "Synthetic grammar learning: Implicit rule abstraction or explicit fragmentary knowledge". Journal of Experimental Psychology: General. 119: 264–275.
- Altmann, G.M.T.; Dienes, Z.; Goode, A. (1995). "Modality independence of implicitly learned grammatical knowledge". Journal of Experimental Psychology: Learning, Memory & Cognition. 21 (4): 899–912.
- Seger, C.A. (1994). "Implicit learning.". Psychological Bulletin. 115 (2): 163–196. doi:10.1037/0033-2909.115.2.163. PMID 8165269.
- Kapatsinski, V. (2009). "The Architecture of Grammar in Artificial Grammar Learning: Formal Biases in the Acquisition of Morphophonology and the Nature of the Learning Task.". Indiana University: 1–260.
- Vokey, J.R.; Brooks, L.R. (1992). "Salience of item knowledge in learning artificial grammar". Journal of Experimental Psychology: Learning, Memory, and Cognition. 18: 328–344.
- Servan-Schreiber, E.; Anderson, J.R. (1990). "Chunking as a mechanism of implicit learning". Journal of Experimental Psychology: Learning, Memory & Cognition. 16: 592–608.
- Pothos, E.M. (2007). "Theories of artificial grammar learning". Psychological Bulletin. 133 (2): 227–244. doi:10.1037/0033-2909.133.2.227.
- Poznanski, Y.; Tzelgov, J. (2010). "What is implicit in implicit artificial grammar learning?". Quarterly Journal of Experimental Psychology. 63: 1495–2015. doi:10.1080/17470210903398121.
- Reber, A.S. (1969). "Transfer of syntactic structure in synthetic languages". Journal of Experimental Psychology. 81: 115–119. doi:10.1037/h0027454.
- McAndrews, M.P.; Moscovitch, M. (1985). "Rule-based and exemplar-based classification in artificial grammar learning". Memory & Cognition. 13: 469–475. doi:10.3758/bf03198460.
- Reber, A.S. (1989). "Implicit learning and tacit knowledge". Journal of Experimental Psychology: General. 118: 219–235.
- Reber, A.S.; Allen, R. (1978). "Analogic abstraction strategies in synthetic grammar learning: A functionalist interpretation". Cognition. 6: 189–221. doi:10.1016/0010-0277(78)90013-6.
- Knowlton, B.J.; Squire, L.R. (1996). "Artificial grammar learning depends on implicit acquisition of both abstract and exemplar-specific information". Journal of Experimental Psychology: Learning, Memory, and Cognition. 22 (1): 169–181.
- Hasher, L.; Zacks, R. (1979). "Automatic and effortful processes in memory". Journal of Experimental Psychology: General. 108: 356–388.
- Schneider, W.; Dumais, S. T.; Shiffrin, R. M. (1984). "Automatic and controlled processing and attention". In R. Parasuraman & D. Davies (Eds.), Varieties of attention. New York: Academic press: 1–17.
- Logan, G.D. (1988). "Automaticity, resources and memory: Theoretical controversies and practical implications". Human factors. 30: 583–598.
- Tzelgov, J. (1999). "Automaticity and processing without awareness". Psyche. 5.
- Logan, G.D. (1980). "Attention and automaticity in Stroop and priming tasks: Theory and data". Cognitive Psychology. 12: 523–553. doi:10.1016/0010-0285(80)90019-5.
- Logan, G.D. (1985). "Executive control of thought and action". Acta Psychologica. 60: 193–210. doi:10.1016/0001-6918(85)90055-1.
- Perlman, A.; Tzelgov, J. (2006). "Interaction between encoding and retrieval in the domain of sequence learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 32: 118–130.
- Manza, L.; Zizak, D.; Reber, A.S. (1998). "Artificial grammar learning and the mere exposure effect: Emotional preference tasks and the implicit learning process". In Stadler, M.A. & Frensch, P.A. (Eds.), Handbook of implicit learning. Thousand Oaks, CA: Sage Publications, Inc.: 201–222.
- MacWhinney, B. (1987). Mechanisms of language acquisition. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
- MacWhinney, B. (2008). "A Unified Model". In Robinson, P. & Ellis, N. (Eds.), Handbook of Cognitive Linguistics and Second Language Acquisition. Mahwah, NJ: Lawrence Erlbaum Associates.
- Christiansen, M.H.; Kelly, M.L.; Shillcock, R.C.; Greenfield, K. (2010). "Impaired artificial grammar learning in agrammatism". Cognition. 116 (3): 383–393. doi:10.1016/j.cognition.2010.05.015.
- De Vries, M.H.; Barth, A.C.R.; Maiworm, S.; Knecht, S.; Zwitserlood, P.; Flöel, A. (2010). "Electrical stimulation of Broca's area enhances implicit learning of artificial grammar". Journal of Cognitive Neuroscience. 22 (11): 2427–2436. doi:10.1162/jocn.2009.21385.
- Petersson, K.M.; Folia, V.; Hagoort, P. (2010). "What artificial grammar learning reveals about the neurobiology of syntax". Brain & Language: 340–353.