Conceptual blending

In cognitive linguistics and artificial intelligence, conceptual blending, also called conceptual integration or view application, is a theory of cognition developed by Gilles Fauconnier and Mark Turner. According to this theory, elements and vital relations from diverse scenarios are "blended" in a subconscious process that is assumed to be ubiquitous in everyday thought and language. Much like memetics, it is an attempt to create a unitary account of the cultural transmission of ideas.[1]

History

The development of this theory began in 1993 and a representative early formulation is found in the online article "Conceptual Integration and Formal Expression".[2] Turner and Fauconnier cite Arthur Koestler's 1964 book The Act of Creation as an early forerunner of conceptual blending: Koestler had identified a common pattern in creative achievements in the arts, sciences and humor that he had termed "bisociation of matrices."[3] A newer version of blending theory, with somewhat different terminology, was presented in Turner and Fauconnier's 2002 book, The Way We Think.[4] Conceptual blending, in the Fauconnier and Turner formulation, is one of the theoretical tools used in George Lakoff and Rafael Núñez's Where Mathematics Comes From, in which the authors assert that "understanding mathematics requires the mastering of extensive networks of metaphorical blends."[5]

Computational models

Conceptual blending is closely related to frame-based theories, but goes beyond them primarily in that it is a theory of how to combine frames (or frame-like objects). An early computational model of a process called "view application", closely related to conceptual blending (which had not yet been formulated), was implemented in the 1980s by Shrager at Carnegie Mellon University and PARC, and applied to causal reasoning about complex devices[6] and to scientific reasoning.[7] More recent computational accounts of blending have been developed in areas such as mathematics.[8] Some later models are based upon structure mapping, which likewise did not exist at the time of the earlier implementations. More recently, within the context of non-monotonic extensions of AI reasoning systems (and in line with frame-based theories), a general framework able to account for both complex human-like concept combinations (such as the PET-FISH problem) and conceptual blending[9] has been developed and tested in cognitive modelling[10] and computational creativity applications.[11][12]

Philosophical status of the theory

In his book The Literary Mind,[13][page needed] conceptual blending theorist Mark Turner states that

Conceptual blending is a fundamental instrument of the everyday mind, used in our basic construal of all our realities, from the social to the scientific.

Insights obtained from conceptual blends constitute the products of creative thinking; however, conceptual blending theory is not itself a complete theory of creativity, inasmuch as it does not illuminate the issue of where the inputs to a blend originate. In other words, conceptual blending provides a terminology for describing creative products but has little to say on the matter of inspiration.[citation needed]

Network model

Characteristics of blending

As described by Fauconnier and Turner, mental spaces are small conceptual containers used to structure the processes behind human reasoning and communication. They are constantly constructed as people think and talk, each serving a specific purpose that depends on the context.[14] The basic form of an integration network consists of at least four separate but interconnected spaces, which can be modified at any moment as a discourse progresses.[14][15] Fauconnier and Turner also suggest that mental spaces are generated in working memory and are connected to knowledge stored in long-term memory. Elements present in mental spaces are said to correspond to the activation of particular groups of neurons.[15][16]

[Figure: the network model]

Different types of mental spaces proposed are:

  • Generic space – captures a common structure which is present in all input spaces
  • Input space – provides the contents of a specific situation or idea
  • Blended space – contains the general structure from the generic space, as well as selected elements from the input spaces mapped onto it through selective projection[14]

Cross-space mapping of counterparts represents various types of connections, such as metaphoric connections, between matching structures in the input spaces.[14]

In some of the more complex cases of integration networks, there are multiple input and blended spaces.[14][15]
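
This four-space arrangement can be illustrated with a small data structure. The sketch below is not part of Fauconnier and Turner's formalism; the class names, fields and the example network are hypothetical and only indicate how generic, input and blended spaces, together with the cross-space mapping between counterparts, might be represented.

```python
from dataclasses import dataclass, field

@dataclass
class MentalSpace:
    """A small conceptual container holding named elements."""
    name: str
    elements: set[str] = field(default_factory=set)

@dataclass
class IntegrationNetwork:
    """Basic integration network: a generic space, two or more input
    spaces, a blended space, and a cross-space mapping of counterparts."""
    generic: MentalSpace
    inputs: list[MentalSpace]
    blend: MentalSpace
    counterparts: list[tuple[str, str]] = field(default_factory=list)

# Hypothetical example: the basic network behind the Buddhist monk riddle.
network = IntegrationNetwork(
    generic=MentalSpace("generic", {"path", "traveller", "day"}),
    inputs=[
        MentalSpace("ascent", {"path", "monk going up", "day of ascent"}),
        MentalSpace("descent", {"path", "monk going down", "day of descent"}),
    ],
    blend=MentalSpace("blend"),
    counterparts=[("monk going up", "monk going down"),
                  ("day of ascent", "day of descent")],
)
```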

Blending

The process of blending results in the creation of an emergent structure in the blended space. This new structure, which is not found directly in any of the input spaces, is necessary to achieve a particular goal. The emergent structure is generated through the following three operations:

  • Composition – establishes relations between elements that become observable only when elements from the separate input spaces are brought together
  • Completion – brings into the blended space additional meaning associated with the elements in the input spaces
  • Elaboration – represents the idea of dynamically running the blend as if it were a simulation[14]

Selective projection refers to the observation that not everything from the input spaces is projected to the blend.[14]
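
As a rough sketch, selective projection and the three operations can be written as functions over sets of elements. None of this comes from Fauconnier and Turner; the function names, the background frame and the example values are hypothetical and only indicate the role each operation plays.

```python
def selective_projection(space: set[str], selected: set[str]) -> set[str]:
    """Only some elements of an input space are projected to the blend."""
    return space & selected

def composition(projections: list[set[str]]) -> set[str]:
    """Bring the projected elements of the separate inputs together so that
    relations between them become available in a single space."""
    blend: set[str] = set()
    for elements in projections:
        blend |= elements
    return blend

def completion(blend: set[str], background: dict[str, set[str]]) -> set[str]:
    """Recruit additional background meaning associated with elements
    already present in the blend."""
    completed = set(blend)
    for element in blend:
        completed |= background.get(element, set())
    return completed

def elaboration(blend: set[str], run) -> set[str]:
    """'Run' the blend as if it were a simulation, yielding new structure."""
    return run(blend)

# Hypothetical run-through for the two journeys of the monk riddle:
ascent = selective_projection({"path", "monk going up", "days of meditation"},
                              {"path", "monk going up"})
descent = selective_projection({"path", "monk going down", "days of meditation"},
                               {"path", "monk going down"})
blend = composition([ascent, descent])
blend = completion(blend, {"path": {"two travellers on one path can meet"}})
blend = elaboration(blend, lambda b: b | {"the two monks meet"})
```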

Example of a blend – Buddhist monk

To illustrate how the blend works, Fauconnier and Turner present the riddle of the Buddhist monk, which was originally discussed by Arthur Koestler in his book The Act of Creation (1964):

A Buddhist monk begins at dawn one day walking up a mountain, reaches the top at sunset, meditates at the top for several days until one dawn when he begins to walk back to the foot of the mountain, which he reaches at sunset. Making no assumptions about his starting or stopping or about his pace during the trips, prove that there is a place on the path which he occupies at the same hour of the day on the two separate journeys.

Solving the problem requires imagining a scenario in which the monk simultaneously walks up and down the mountain on the same day. Although this situation is fictional and improbable, it still leads to the solution. Described in this new way, the problem makes it easy to see that there must be a place and a time at which the monk meets himself on his journey. This "meeting" provides the proof that there is a place on the path of the kind asked for in the riddle. The scenario in which the monk walks up the mountain is represented as one input space, while the day he walks down is the second input. The connection between the monk in one input space and the monk in the other is an example of cross-space mapping. The generic space includes, for instance, the mountain path, as it is a common element present in both inputs. The blended space is where the integration happens. Whereas some elements, such as the day and the mountain path, are combined and mapped onto the blended space as one, other elements, such as the monks, are projected separately. Because the projection preserves the time of day and the direction of the monk's motion, there are two separate monks in the blend. In this space, it is also possible to "run" the new structure, leading to the monk's meeting with himself.[14]
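
The elaboration step, "running" the blend, can also be checked numerically. The following sketch is not from the source: the two pace profiles are invented, and the code merely demonstrates that, whatever the monk's pace on each day, the ascending and the descending positions must coincide at some time of day.

```python
def meeting_time(up_position, down_position, steps: int = 1000) -> float:
    """Return a time of day at which the ascending monk and the descending
    monk of the blend occupy the same point on the path.
    Positions are fractions of the path: 0.0 = foot, 1.0 = summit."""
    for i in range(steps + 1):
        t = i / steps  # fraction of the day between dawn and sunset
        if up_position(t) >= down_position(t):
            return t
    raise ValueError("no crossing found")  # unreachable for continuous journeys

# Invented pace profiles; any profiles running 0 -> 1 and 1 -> 0 will do.
going_up = lambda t: t ** 2            # slow start, fast finish
going_down = lambda t: (1 - t) ** 0.5  # fast start, slow finish

t = meeting_time(going_up, going_down)
print(f"The monks meet at t = {t:.3f} of the day, "
      f"{going_up(t):.3f} of the way up the path.")
```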

Four main types of integration network

Simplex

In a simplex network, one of the input spaces contains an organising frame, and the other includes specific elements.[15] In this type of integration network, the roles associated with the frame from the first input space are projected onto the blended space together with the values (the elements) from the other input space, where they are integrated into a new structure.[16]

Mirror

A mirror network is characterised by a shared organising frame present in each of the mental spaces. The Buddhist Monk riddle is an example of this network.

Single-scope

A single-scope network consists of two input spaces which have different organising frames. In this situation, only one frame is projected into the blended space.

Double-scope

In a double-scope network, there are two different organising frames in input spaces, and the blended space contains parts of each of those frames from both input spaces.[16]
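
The four types can be summarised by which organising frames the input spaces carry and which of them end up organising the blend. The following sketch is an interpretation rather than part of the theory's formal apparatus; the function and its example inputs are hypothetical.

```python
from typing import Optional

def network_type(frame_a: Optional[str], frame_b: Optional[str],
                 frames_in_blend: set[str]) -> str:
    """Classify an integration network from the organising frames of its two
    input spaces and the frames that organise the blended space."""
    if frame_a is None or frame_b is None:
        # One input supplies the frame, the other only specific elements.
        return "simplex"
    if frame_a == frame_b:
        # Both inputs share a single organising frame.
        return "mirror"
    if len(frames_in_blend) == 1:
        # The inputs have different frames, but only one organises the blend.
        return "single-scope"
    # Parts of both organising frames are projected into the blend.
    return "double-scope"

print(network_type("journey", "journey", {"journey"}))   # mirror (Buddhist monk)
print(network_type("buying", None, {"buying"}))          # simplex
print(network_type("boxing", "business", {"boxing"}))    # single-scope
print(network_type("office", "computing", {"office", "computing"}))  # double-scope
```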

Vital relations

Vital relations describe some of the connections between the elements of the different input spaces. For example, in the Buddhist Monk riddle, time is treated as a vital relation which is compressed in the blended space, and as a result, the monk can simultaneously walk up and down the mountain. Some of the other types of vital relations include cause-effect, change, space, identity, role and part-whole.[16]

Criticism

The main criticism of conceptual blending theory was put forward by Raymond W. Gibbs Jr. (2000), who pointed out the lack of the testable hypotheses necessary if the theory is to predict any behaviour. He explained that blending theory cannot be treated as a single theory but rather as a framework; because there is no single fundamental hypothesis to test, many different hypotheses must be tested instead, which can be problematic for the theory. Gibbs also suggested that inferring information about language processes from an analysis of the products of those processes may not be a sound approach. Furthermore, he proposed that other linguistic theories are equally effective in explaining the same cognitive phenomena.[17] These criticisms were answered directly by Fauconnier.[18]

The theory has also been criticised for unnecessary complexity. The minimal network model requires at least four mental spaces; however, David Ritchie (2004) argues that many of the proposed blends could be explained by simpler integration processes. He has also demonstrated that some examples of blends, such as the Buddhist Monk, admit alternative interpretations.[1]

Notes

  1. ^ a b Ritchie, L. David (2004). "Lost in "conceptual space": Metaphors of conceptual integration". Metaphor and Symbol. 19: 31–50. doi:10.1207/S15327868MS1901_2. S2CID 144183373. Retrieved 2020-06-14.
  2. ^ Turner, Mark; Fauconnier, Gilles (1995). "Conceptual Integration and Formal Expression". University of California San Diego. Archived from the original on 2006-05-16.
  3. ^ Turner, Mark; Fauconnier, Gilles (2002). The Way We Think. Conceptual Blending and the Mind's Hidden Complexities. New York: Basic Books. p. 37.
  4. ^ Fauconnier, Gilles; Turner, Mark (2008). The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books.
  5. ^ Lakoff, George; Núñez, Rafael (2003). Where Mathematics Comes From. Basic Books. p. 48. ISBN 9780465037704 – via Archive.org.
  6. ^ Shrager, Jeff (1987). "Theory Change via View Application in Instructionless Learning". Machine Learning. 2 (3): 247–276. doi:10.1007/bf00058681.
  7. ^ Shrager, Jeff (1990). "Commonsense perception and the psychology of theory formation". In Shrager, Jeff; Langley, Pat (eds.). Computational models of scientific discovery and theory formation. Morgan Kaufmann series in machine learning. San Mateo, Calif.: Morgan Kaufmann. ISBN 9781558601314.
  8. ^ Guhe, Markus; Pease, Alison; Smaill, Alan; Martinez, Maricarmen; Schmidt, Martin; Gust, Helmar; Kühnberger, Kai-Uwe; Krumnack, Ulf (September 2011). "A computational account of conceptual blending in basic mathematics". Cognitive Systems Research. 12 (3–4): 249–265. doi:10.1016/j.cogsys.2011.01.004.
  9. ^ Lieto, Antonio; Pozzato, Gian Luca (2020). "A description logic framework for commonsense conceptual combination integrating typicality, probabilities and cognitive heuristics". Journal of Experimental and Theoretical Artificial Intelligence. 32 (5): 769–804. arXiv:1811.02366. Bibcode:2020JETAI..32..769L. doi:10.1080/0952813X.2019.1672799. S2CID 53224988.
  10. ^ Lieto, Antonio; Perrone, Federico; Pozzato, Gian Luca; Chiodino, Eleonora (2019). "Beyond subgoaling: A dynamic knowledge generation framework for creative problem solving in cognitive architectures". Cognitive Systems Research. 58: 305–316. doi:10.1016/j.cogsys.2019.08.005. hdl:2318/1726157. S2CID 201127492.
  11. ^ Lieto, Antonio; Pozzato, Gian Luca (2019). "Applying a description logic of typicality as a generative tool for concept combination in computational creativity". Intelligenza Artificiale. 13: 93–106. doi:10.3233/IA-180016. hdl:2318/1726158. S2CID 201827292.
  12. ^ Chiodino, Eleonora; Di Luccio, Davide; Lieto, Antonio; Messina, Alberto; Pozzato, Gian Luca; Rubinetti, Davide (2020). A Knowledge-based System for the Dynamic Generation and Classification of Novel Contents in Multimedia Broadcasting (PDF). ECAI 2020, 24th European Conference on Artificial Intelligence. doi:10.3233/FAIA200154.
  13. ^ Turner, Mark (1997). The Literary Mind. New York: Oxford University Press. p. 93. ISBN 9781602561120.
  14. ^ a b c d e f g h Fauconnier, Gilles; Turner, Mark (1998). "Conceptual Integration Networks". Cognitive Science. 22 (2): 133–187. doi:10.1207/s15516709cog2202_1.
  15. ^ a b c d Fauconnier, Gilles; Turner, Mark (2003). "Conceptual Blending, Form and Meaning". Recherches en Communication. 19. doi:10.14428/rec.v19i19.48413.
  16. ^ a b c d Birdsell, Brian J. (2014). "Fauconnier's theory of mental spaces and conceptual blending". In Littlemore, Jeannette; Taylor, John R. (eds.). The Bloomsbury Companion to Cognitive Linguistics. Bloomsbury companions. London: Bloomsbury. pp. 72–90. doi:10.5040/9781472593689.ch-005. ISBN 9781441195098.
  17. ^ Gibbs, W. Raymond (2000). "Making good psychology out of blending theory". Cognitive Linguistics. 11 (3–4): 347–358. doi:10.1515/cogl.2001.020.
  18. ^ Fauconnier, Gilles (2021). "Semantics and Cognition" (PDF). Revista Diadorim. 22 (3). doi:10.35520/diadorim.2020.v22n2a38222. Archived from the original (PDF) on 2021-07-21. Retrieved 2021-07-18.