Mental Models: how do our minds work?
Given that nerves conduct signals to and fro, there is still a lot to be done to explain how we think, how
ideas work, what goes on in our minds (vs brains). Enter cognitive science, a brand-new field that
straddles biology, anthropology, psychology, computer science and whatever other fields might offer
insights. UCSD has one of the first and best CogSci departments in the country; the anthro department is
a leader in cognitive anthropology. This section discusses a set of ideas about how we use models,
analogies, and metaphors when we think (here, moral reasoning); indeed, maybe those are thinking.
SOURCE:
Brian Derfer (cognitive anthro graduate student), 1995 manuscript, used by permission
Schema theory
Rosch's (1977) view of category structure, together with the implications for reasoning that Lakoff and Johnson
(1980) have identified, has had a significant impact on the emergence of schema theory. Schema theory
is a theoretical framework which has emerged in the cognitive sciences over the last two decades and
which holds particular promise for enriching our knowledge of moral reasoning.1 What follows is a brief
description of schema theory and its central concept -- the schema -- in order to pave the way for a
discussion of the implications of schema theory for research into moral reasoning.
Schemas are conceptual structures and processes which enable human beings to store perceptual and
conceptual information about the world and make interpretations of events through abstraction.2 For
purposes of this discussion, a schema can be thought of as a relatively flexible, schematic representation
or "template" that helps to guide our interpretation of events. Schemas permit quite different experiences
to be understood as experiences of the same thing (D'Andrade, 1992, 1993). A "dog" schema, for
instance, can be activated by many different instances of a dog -- collies, Great Danes, dachshunds, etc.
Schemas can do this because they provide a fairly abstract representation which leaves unspecified a
number of "slots" to be filled in by particular experiences or contexts (D'Andrade, 1993). In the event
that some slot is not filled in, a schema can fill in missing information with "default values." For
instance, in the event that I see a collie from the front, and do not happen to directly perceive its tail, the
"dog" schema which is activated provides me with the knowledge that, in all probability, the dog that I
see has a tail.
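To make the "slots and default values" idea concrete, here is a minimal sketch in Python (my own illustration, not part of Derfer's manuscript; the schema name, slot names, and default values are hypothetical):

    # A schema as a template with named slots; slots left unfilled by an
    # observation fall back on default values.
    from dataclasses import dataclass, field

    @dataclass
    class Schema:
        name: str
        defaults: dict = field(default_factory=dict)  # default values for slots

        def interpret(self, observed: dict) -> dict:
            """Fill slots from an observation; missing slots get default values."""
            filled = dict(self.defaults)  # start from the defaults
            filled.update(observed)       # observed features override them
            return filled

    # Hypothetical "dog" schema: a collie seen from the front still gets a tail.
    dog = Schema("dog", defaults={"has_tail": True, "legs": 4, "barks": True})
    print(dog.interpret({"breed": "collie", "size": "large"}))
    # -> {'has_tail': True, 'legs': 4, 'barks': True, 'breed': 'collie', 'size': 'large'}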
Default values emerge naturally during the learning of a given schema -- through repeated experiences
and interactions with specific instances of events. Each time the "dog" schema, for instance, is
successfully used to make an interpretation of a dog, it adapts slightly to conform to the specific
properties of that dog. In this way, a person who has repeated interactions with dogs that bite will
probably have a "dog" schema in which "biting" is a default value. Schemas, then, tend to adapt
somewhat to the regularities of experience with particular objects or events.3 Donald Norman has
summed up this property of schemas nicely:
Schemas are not fixed structures. Schemas are flexible configurations, mirroring the regularities of
experience, providing automatic completion of missing components, automatically generalizing
from the past, but also continually in modification, continually adapting to reflect the current state
of affairs. Schemas are not fixed, immutable data structures. Schemas are flexible interpretive
states that reflect the mixture of past experience and present circumstances (Norman, 1986: 536 --
cited in D'Andrade, 1993:142)
One result of this "adaptive" property of schemas is that schemas produce prototypes as default outputs
which tend to mirror or blend the particular properties of the types of objects or events which are
experienced most frequently (D'Andrade, 1993). One person's prototypical dog might, for instance, be a
medium-sized brownish dog that is moderately furry, barks at strangers, is loyal, scratches frequently, has
a wet nose, etc. It is not very likely that many Americans' idea of a prototypical dog resembles a
dachshund or a greyhound very closely, even though most Americans can still identify these as dogs.
This prototype structure of schemas resonates quite well with a body of research into the structure of
categories carried out by several linguists and psychologists (cf. Lakoff, 1987; Rosch, 1975). In the
mid-1970s, Rosch discovered that categories are not defined by a list of necessary and distinctive
features that all members share, but are constructed around best examples or prototypes. For this reason,
people can rank items in terms of their goodness as examples of a given category. She calls these
judgments "prototypicality ratings." She found that, for instance, Berkeley undergraduates gave robins
and bluebirds high typicality ratings for the bird category, and gave emus and penguins low ratings
(Rosch, 1975 -- cited in D'Andrade, 1993).
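The way prototypes blend frequently experienced properties can be sketched in a few lines of Python (again my own illustration; the features and numbers are invented, not Rosch's materials): a "prototype" is simply the average of the exemplars encountered, and a typicality rating falls out as closeness to that average.

    # Prototype formation as blending of exemplars; typicality as similarity
    # to the blend (here, negative squared distance over numeric features).
    def learn_prototype(exemplars):
        """Average the feature values of the exemplars (dicts of feature -> number)."""
        features = exemplars[0].keys()
        return {f: sum(e[f] for e in exemplars) / len(exemplars) for f in features}

    def typicality(item, prototype):
        """Higher (closer to zero) = more typical of the category."""
        return -sum((item[f] - prototype[f]) ** 2 for f in prototype)

    # Invented bird exemplars with features scaled 0-1.
    robin   = {"size": 0.2, "flies": 1.0, "sings": 1.0}
    bluejay = {"size": 0.2, "flies": 1.0, "sings": 0.8}
    penguin = {"size": 0.6, "flies": 0.0, "sings": 0.0}

    bird_prototype = learn_prototype([robin, bluejay, robin, bluejay])  # mostly small fliers
    print(typicality(robin, bird_prototype))    # close to 0: highly typical
    print(typicality(penguin, bird_prototype))  # strongly negative: atypical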
Prototypes and default values provide many abstract schemas with extremely rich content. Just hearing
the word "dog," for instance, in a story evokes a fairly rich set of expectations about the physical
appearance and behavior of the animal. Schema theorists hold the view that the activation of a schema
involves, in some sense, the activation of a world -- extremely rich in sensory, emotional, and conceptual
associations. Holland and Quinn (1987) describe schemas as "story-like chains of prototypical events that
unfold in simplified worlds." Reasoning, on this view, is not a formal, abstract process of algorithmically
applying rules to well-specified symbolic inputs -- a view implicit in the digital computer metaphor of
the mind that has held sway in cognitive science for much of the past four decades. Rather, reasoning
involves the activation of "simplified worlds," and the mental "running through" of typical sequences of
events in order to understand the nature of some problem, or provide clues to its solution.
Schemas thus provide the presupposed, taken-for-granted knowledge of the world that renders much of
what we and others do intelligible. A whole interrelated set of cognitive schemas must be known, for
example, in order to make sense of the statement "He swung and missed" as an explanation of the
outcome of a baseball game, or of why someone is feeling depressed. Unpacking the schemas that
underlie even a small piece of discourse can be a very complex process.
Reasoning also involves the opportunistic and imaginative extension of cognitive schemas, through
metaphor, to the novel situations that one confronts. As a result, metaphor plays a huge role in our
understanding and reasoning about the world (cf. Lakoff and Johnson, 1980; Lakoff, 1987; Johnson,
1987). Examples are ubiquitous. To take just one example, much of our understanding of objects and
events, according to Lakoff and Johnson (1980), is accomplished through the "image-schema" of a
CONTAINER. Human beings have a rich CONTAINER schema -- in a "literal" sense -- based on our
frequent experience of our bodies and the world as bounded entities. We metaphorically extend this
schema to a wide variety of concepts and events that do not have any literal boundaries, and our
understanding of these concepts and events is grounded in these metaphorical extensions. For instance,
English speakers understand such abstract ideas as "mind" and "argument" through the CONTAINER
metaphor, among other metaphors. Hence we understand someone who is preoccupied as having
something "in the back of his mind," someone who is very clever as being "full of ideas," etc. We also
understand an argument as something that can be "water-tight" or "full of holes" so that it "can't hold
water," etc. etc. (Lakoff and Johnson, 1980). Lakoff and Johnson deny that metaphor plays merely a
"superficial" role in communication among people. Rather, they claim, and have argued persuasively in
several books, that our very experience of the world is fundamentally and pervasively structured by
metaphor (Lakoff and Johnson, 1980; Lakoff, 1987, n. d.; Johnson, 1987, 1993).
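One crude way to picture a metaphorical extension is as a projection of the slots of a source schema onto an abstract target domain. The following toy sketch (my construction, not Lakoff and Johnson's formalism; the slot names and fillers are hypothetical) maps CONTAINER slots onto MIND:

    # A metaphorical mapping as a projection of CONTAINER slots onto a target domain.
    CONTAINER_SLOTS = ["interior", "boundary", "contents"]

    def extend_metaphorically(target, fillers):
        """Pair each CONTAINER slot with its counterpart in the target domain."""
        return {"target": target,
                "mapping": {slot: fillers.get(slot) for slot in CONTAINER_SLOTS}}

    # "Full of ideas", "in the back of his mind": MIND understood as a container.
    print(extend_metaphorically(
        "MIND", {"interior": "mental space", "boundary": "attention", "contents": "ideas"}))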
At this point, one may get the impression that schema theorists hold the view that the creation of
schemas, and the extension of them through metaphor, is something that occurs entirely in the heads of
individual human beings. While computational modelers and psychologists have focused primarily on the
individual as the unit of analysis for understanding the development of schemas, anthropologists and
linguists have focused on the immense role that culture plays in the development of widely shared schemas
that are transmitted across people and through generations. A number of anthropologists have adopted the
term "cultural model" to refer to cognitive schemas which are widely and intersubjectively shared among
members of a cultural group. It should be pointed out that what makes a cultural model "cultural" is its
sharedness, not its uniqueness to or origins within a particular cultural milieu. Not all cultural models are
unique to a particular group (the container schema, for instance, is probably shared by humans
everywhere); however, many are. (Consider the "touchdown" schema known by most Americans.)
One can also look at a particular metaphor as something that evolves in a particular cultural context, and
which becomes strongly embedded in the language that people use to communicate with one another.
Again, metaphors through which Americans understand the mind provide a nice example of this
phenomenon: We understand the mind as a machine ("My mind isn't working right now." "Her wheels
are always turning."), and, very recently among cognitive scientists, as a computer ("cognition" is
"information processing"; sensory experiences are "input" and "data," etc. etc.).
Two other properties of schemas are important to this discussion: The first is that schemas tend to be
hierarchically organized such that certain schemas are "embedded" in other schemas. For instance, the
COMMERCIAL TRANSACTION schema cannot be understood without understanding the "simpler"
schemas which compose it, such as BUYER, SELLER, and MONEY schemas, which themselves cannot
be understood without understanding even "simpler" schemas which compose them, such as COIN,
BILL or DENOMINATION in the case of the MONEY schema. The hierarchical relations among
schemas may also function as "means-ends goal linkages," whereby one schema may function as a
means relative to a more general schema which acts as an end, but which in turn acts as a means relative
to an even more general schema which acts as an end, and so on (D'Andrade, 1992:30). The following
example nicely illustrates this property:
One recognizes some chair as part of the "finding a seat" schema, which is part of the "attending
the lecture" schema, which is part of the "finding out what's going on" schema, which may be for
some people part of the "doing anthropology" schema, or perhaps a "meeting friends" schema, or
whatever (D'Andrade, 1992:30)
The "top-level" schema in a schema hierarchy -- the most general interpretation of what is going on -- is
likely to function as a goal. That is, the most general interpretation of events is likely to correspond to
the "end" in terms of which actions can be understood as means.
For this reason, a researcher can gain insight into the motivations underlying a person's behavior by
gaining insight into the schemas, and the relationships among the schemas, in terms of which that person
interprets the world. Claudia Strauss has shown that a plausible account of individual motivation can be
gained by showing how widely shared cultural models such as ACHIEVEMENT or BUSINESS are
linked with emotionally salient personal experiences (such as childhood asthma) and schemas about one's
self ("I'm different") to form what she calls "personal semantic networks." Specifying the links among
the various schemas in these networks provides a detailed and persuasive picture of the sources of
motivation for an individual (Strauss, 1992).
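A personal semantic network can be thought of as a small labeled graph; the sketch below uses the schema labels mentioned above, but the particular links among them are invented for illustration, not taken from Strauss's data.

    # A personal semantic network as an adjacency list: cultural models,
    # self-schemas, and salient personal experiences, plus the links among them.
    personal_network = {
        "ACHIEVEMENT (cultural model)": ["BUSINESS (cultural model)", "I'm different (self-schema)"],
        "BUSINESS (cultural model)":    ["childhood asthma (personal experience)"],
        "I'm different (self-schema)":  ["childhood asthma (personal experience)"],
        "childhood asthma (personal experience)": [],
    }

    for schema, links in personal_network.items():
        for linked in links:
            print(f"{schema}  --linked to-->  {linked}")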
A final important property of schemas is that they can usually be identified using fairly ordinary means
of ethnographic investigation -- namely, through interviews and in-depth analyses of transcribed pieces
of discourse. Over the last decade, a growing body of research has repeatedly shown this to be the case
(cf. D'Andrade, 1987; Gentner and Gentner, 1983; Holland and Skinner, 1987; Hutchins, 1980; Quinn,
1987; Strauss, 1992).
FOOTNOTES
1 Summaries can be found in D'Andrade (1992, 1993) and Rumelhart et al. (1986).
The central concept, "schema," has gone by a number of other names, including "cultural model"
(D'Andrade and Strauss, 1992; Holland and Quinn, 1987), "mental model" (Johnson-Laird, 1983),
"idealized cognitive model" (Lakoff, 1987), "folk model" (D'Andrade, 1987), "script" (Schank and
Abelson, 1977), "scene" (Fillmore, 1975), and "frame" (Minsky, 1975).
2 The reason why a schema can be thought of both as a structure and a process is that the concept of a
"schema" fits nicely with a computational model of cognition called parallel distributed processing,
wherein the process of arriving at some interpretation of inputs is actually a result of the physical
structure of the interpreting device. In the case of a physically realized PDP model, "interpretations" or
outputs corresponding to different inputs are generated by the movement of electrical signals over
multiple layers of massively interconnected neuron-like elements. The "strengths" of the excitatory or
inhibitory connections among the nodes determine the form of the output; and the form of the output
may, in turn, cause the strength of many of the connections to change. In this way, the process by which
a PDP model recognizes an input is one and the same as the structure of the PDP model itself.
PDP models are sometimes referred to as "neural nets" to capture the similarities that are thought to hold
between the computational models and the physical structure of actual brains, both of which are thought
to process information rapidly and in massively parallel fashion over a large number of simple elements.
Remarkably enough, PDP models exhibit processing behaviors that are in many ways like the cognitive
capacities of human beings -- including the ability to "learn," the advancement of learning abilities
through a sequence of "developmental stages," the ability to make correct interpretations of entirely
novel inputs, priming effects, graceful degradation of performance when "lesioned," a relatively high
level of performance when given degraded information as input, the formation of prototypes, and
systematic distortion in memory (D'Andrade, 1992; Elman, n.d.; Farah and McClelland, 1991; Hinton
and Shallice, 1991; Rumelhart et al., 1986).
The concept of a schema, then, captures both the conceptual structure of information and the
micro-processing of human brains which is thought to create this conceptual structure.
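For readers unfamiliar with connectionism, the following bare-bones sketch (standard textbook material, not the specific models cited above) shows the core PDP idea in a few lines: the output of a simple unit is determined by the strengths of its incoming connections, and repeated experience in turn adjusts those strengths.

    # One sigmoid unit trained by error-driven weight updates: the connection
    # strengths determine the output, and the output error feeds back to
    # change the connection strengths.
    import math
    import random

    random.seed(0)
    weights = [random.uniform(-0.5, 0.5) for _ in range(3)]  # connection strengths

    def unit_output(inputs):
        """Activation: squashed weighted sum of the inputs."""
        net = sum(w * x for w, x in zip(weights, inputs))
        return 1 / (1 + math.exp(-net))

    def learn(inputs, target, rate=0.5):
        """Nudge each connection strength so as to reduce the output error."""
        error = target - unit_output(inputs)
        for i, x in enumerate(inputs):
            weights[i] += rate * error * x

    # Repeated experience tunes the connections (here, toward an AND-like pattern;
    # the third input is a constant bias).
    patterns = [([1, 1, 1], 1), ([1, 0, 1], 0), ([0, 1, 1], 0), ([0, 0, 1], 0)]
    for _ in range(2000):
        for inputs, target in patterns:
            learn(inputs, target)

    print(round(unit_output([1, 1, 1]), 2), round(unit_output([1, 0, 1]), 2))  # roughly 1 vs 0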
3 Other factors may be involved as well, such as the level of emotional arousal during a particular
experience. Hence, it may only take one dog bite to make "biting" a default value of the "dog" schema.
References
D'Andrade, R. G. (1987) A folk model of the mind. In D. Holland and N. Quinn (Eds.) Cultural
Models in Language and Thought. Cambridge: Cambridge University Press.
D'Andrade, R. G. (1992) Schemas and motivation. In R. G. D'Andrade and C. Strauss (Eds.)
Human Motives and Cultural Models. Cambridge: Cambridge University Press.
D'Andrade, R. G. (1993) The Development of Cognitive Anthropology. Cambridge: Cambridge
University Press.
D'Andrade, R. G. and C. Strauss (Eds.) (1992) Human Motives and Cultural Models. Cambridge:
Cambridge University Press.
Elman, J. L. (n.d.) Learning and Development in Neural Networks: the importance of starting
small. Unpublished manuscript. UCSD, Department of Cognitive Science.
Farah, M. J. and J. L. McClelland (1991) A Computational Model of Semantic Memory
Impairment: Modality Specificity and Emergent Category Specificity. Journal of Experimental
Psychology, 120(4): 339-357.
Fillmore, C. (1975) An Alternative to Checklist Theories of Meaning. Proceedings of the 1st
Annual Meeting of the Berkeley Linguistics Society 1:123-131.
Gentner, D. and D. R. Gentner (1983) Flowing waters or teeming crowds: mental models of
electricity. In D. Gentner and A. L. Stevens (Eds.) Mental Models. Englewood Cliffs: Lawrence
Erlbaum Associates, Inc.
Hinton, G. E. and T. Shallice (1991) Lesioning an Attractor Network: Investigations of Acquired
Dyslexia. Psychological Review, 98(1): 74-95.
Holland, D. and N. Quinn (Eds.) (1987) Cultural Models in Language and Thought. Cambridge:
Cambridge University Press.
Holland, D. and D. Skinner (1987) Prestige and intimacy: the cultural models behind Americans'
talk about gender types. In D. Holland and N. Quinn (Eds.) Cultural Models in Language and
Thought. Cambridge: Cambridge University Press.
Hutchins, E. (1980) Culture and Inference. Cambridge: Harvard University Press.
Johnson, M. (1987) The Body in the Mind. Chicago: University of Chicago Press.
Johnson, M. (1993) Moral Imagination: Implications of Cognitive Science for Ethics. Chicago: The
University of Chicago Press.
Johnson-Laird, P. N. (1983) Mental Models. Cambridge: Harvard University Press.
Lakoff, G. (1987) Women, Fire, and Dangerous Things. Chicago: The University of Chicago
Press.
Lakoff, G. (n. d.) The Moral Agenda: What Conservatives Know that Liberals Don't. Department
of Linguistics, University of California at Berkeley.
Lakoff, G., and M. Johnson (1980) Metaphors We Live By. Chicago: University of Chicago Press.
Minsky, M. (1975) A Framework for Representing Knowledge. In Winston, P. H. (ed.) The
Psychology of Computer Vision. New York: McGraw-Hill
Norman, D. A. (1986) Reflections on Cognition and Parallel Distributed Processing. In J.L.
McClelland, D.E. Rumelhart and the PDP Research Group (eds.) Parallel Distributed
Processing, Volume 2: Psychological and Biological Models. Cambridge: MIT Press.
Quinn, N. (1987) Convergent evidence for a cultural model of American marriage. In D. Holland
and N. Quinn (Eds.) Cultural Models in Language and Thought. Cambridge: Cambridge
University Press.
Rosch, E. (1975) Cognitive Representations of Semantic Categories. Journal of Experimental
Psychology, 104: 192-233.
Rumelhart, D. E., P. Smolensky, J.L. McClelland and G.E. Hinton (1986) "Sequential Thought
Processes in PDP Models" in McClelland, J.L. and D.E. Rumelhart and the PDP Research Group
(eds.) Parallel Distributed Processing, Volume 2: Psychological and Biological Models. Cambridge:
MIT Press.
Schank, R. C. and R. P. Abelson (1977) Scripts, Plans, Goals, and Understanding: An Inquiry into
Human Knowledge Structures. Hillsdale: Erlbaum.
Strauss, C. (1992) What makes Tony run? Schemas as motives reconsidered. In R. G. D'Andrade
and C. Strauss (Eds.) Human Motives and Cultural Models. Cambridge: Cambridge University
Press.
Last update: 5 Jan 1999