
Mental representation


A mental representation (or cognitive representation), in philosophy of mind, cognitive psychology, neuroscience, and cognitive science, is a hypothetical internal cognitive symbol that represents external reality or its abstractions.[1][2]

Mental representation is the mental imagery of things that are not actually present to the senses.[3] In contemporary philosophy, specifically in fields of metaphysics such as philosophy of mind and ontology, a mental representation is one of the prevailing ways of explaining and describing the nature of ideas and concepts.

Mental representations (or mental imagery) enable representing things that have never been experienced as well as things that do not exist.[4] Our brains and mental imagery allow us to imagine things that have never happened or that are impossible and do not exist. Although visual imagery is more likely to be recalled, mental imagery may involve representations in any of the sensory modalities, such as hearing, smell, or taste. Stephen Kosslyn proposes that images are used to help solve certain types of problems: we are able to visualize the objects in question and mentally represent the images in order to solve them.[4]

Mental representations also allow people to experience things that are right in front of them; however, how the brain interprets and stores this representational content is debated.[5]

Representational theories of mind


Representationalism (also known as indirect realism) is the view that representations are the main way we access external reality.

The representational theory of mind attempts to explain the nature of ideas, concepts and other mental content in contemporary philosophy of mind, cognitive science and experimental psychology. In contrast to theories of naïve or direct realism, the representational theory of mind postulates the actual existence of mental representations which act as intermediaries between the observing subject and the objects, processes or other entities observed in the external world. These intermediaries stand for or represent to the mind the objects of that world.

The original or "classical" representational theory probably can be traced back to Thomas Hobbes and was a dominant theme in classical empiricism in general. According to this version of the theory, the mental representations were images (often called "ideas") of the objects or states of affairs represented. For modern adherents, such as Jerry Fodor and Steven Pinker, the representational system consists rather of an internal language of thought (i.e., mentalese). The contents of thoughts are represented in symbolic structures (the formulas of mentalese) which, analogously to natural languages but on a much more abstract level, possess a syntax and semantics very much like those of natural languages. For the Portuguese logician and cognitive scientist Luis M. Augusto, at this abstract, formal level, the syntax of thought is the set of symbol rules (i.e., operations, processes, etc. on and with symbol structures) and the semantics of thought is the set of symbol structures (concepts and propositions). Content (i.e., thought) emerges from the meaningful co-occurrence of both sets of symbols. For instance, "8 x 9" is a meaningful co-occurrence, whereas "CAT x §" is not; "x" is a symbol rule called for by symbol structures such as "8" and "9", but not by "CAT" and "§".[6]
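Augusto's distinction can be illustrated with a small sketch (a hypothetical toy model, not drawn from the cited work): a symbol rule such as multiplication is "called for" only by certain symbol structures, so its co-occurrence with them is meaningful, while its co-occurrence with arbitrary symbols is not.

```python
# Toy illustration of "syntax of thought" (symbol rules) vs. "semantics of
# thought" (symbol structures). All names and rules here are hypothetical.

# Symbol structures: the concepts/propositions that rules can operate on.
NUMERIC_STRUCTURES = {"8", "9", "72"}

# Symbol rules: operations, each declaring which structures it is "called for" by.
SYMBOL_RULES = {
    "x": lambda s: s in NUMERIC_STRUCTURES,  # multiplication applies to numbers only
}

def meaningful(left: str, rule: str, right: str) -> bool:
    """A co-occurrence is meaningful when the rule is called for by both structures."""
    applies = SYMBOL_RULES.get(rule)
    return applies is not None and applies(left) and applies(right)

print(meaningful("8", "x", "9"))    # True  -> "8 x 9" is a meaningful co-occurrence
print(meaningful("CAT", "x", "§"))  # False -> "CAT x §" is not
```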

The Canadian philosopher Paul Thagard noted in his Mind: An Introduction to Cognitive Science that "most cognitive scientists agree that knowledge in the human mind consists of mental representations" and that "cognitive science asserts that people have mental procedures that operate by means of mental representations for the implementation of thinking and action".[7]

Strong vs weak, restricted vs unrestricted


There are two types of representationalism: strong and weak. Strong representationalism attempts to reduce phenomenal character to intentional content, whereas weak representationalism claims only that phenomenal character supervenes on intentional content. Strong representationalism aims to provide a theory of the nature of phenomenal character and offers a solution to the hard problem of consciousness. Weak representationalism, in contrast, does not aim to provide a theory of consciousness, nor does it offer a solution to the hard problem.

Strong representationalism can be further broken down into restricted and unrestricted versions. The restricted version deals only with certain kinds of phenomenal states, e.g., visual perception. Most representationalists endorse an unrestricted version of representationalism. According to the unrestricted version, for any state with phenomenal character, that state's phenomenal character reduces to its intentional content. Only this unrestricted version of representationalism is able to provide a general theory about the nature of phenomenal character, as well as offer a potential solution to the hard problem of consciousness. The successful reduction of the phenomenal character of a state to its intentional content would provide a solution to the hard problem of consciousness once a physicalist account of intentionality is worked out.

Problems for the unrestricted version


When arguing against the unrestricted version of representationalism people will often bring up phenomenal mental states that appear to lack intentional content. The unrestricted version seeks to account for all phenomenal states. Thus, for it to be true, all states with phenomenal character must have intentional content to which that character is reduced. Phenomenal states without intentional content therefore serve as a counterexample to the unrestricted version. If the state has no intentional content its phenomenal character will not be reducible to that state's intentional content, for it has none to begin with.

A common example of this kind of state is mood. Moods are states with phenomenal character that are generally thought not to be directed at anything in particular; unlike emotions, which are typically thought to be directed at particular things, moods are thought to lack directedness. From this it is concluded that because moods are undirected they are also nonintentional, i.e., they lack intentionality or aboutness: because they are not directed at anything, they are not about anything. Because they lack intentionality, they lack any intentional content, and so their phenomenal character cannot be reduced to intentional content, refuting the representational doctrine.

Though emotions are typically considered to have directedness and intentionality, this idea has also been called into question. One might point to emotions that a person suddenly experiences which do not appear to be directed at or about anything in particular. Emotions elicited by listening to music are another potential example of undirected, nonintentional emotions: emotions aroused in this way do not seem to necessarily be about anything, including the music that arouses them.[8]

Responses


In response to this objection, a proponent of representationalism might reject the undirected non-intentionality of moods, and attempt to identify some intentional content they might plausibly be thought to possess. The proponent of representationalism might also reject the narrow conception of intentionality as being directed at a particular thing, arguing instead for a broader kind of intentionality.

There are three alternative kinds of directedness/intentionality one might posit for moods.[8]

  • Outward directedness: What it is like to be in mood M is to have a certain kind of outwardly focused representational content.
  • Inward directedness: What it is like to be in mood M is to have a certain kind of inwardly focused representational content.
  • Hybrid directedness: What it is like to be in mood M is to have both a certain kind of outwardly focused representational content and a certain kind of inwardly focused representational content.

In the case of outward directedness, moods might be directed at the world as a whole, at a changing series of objects in the world, or at unbound emotion properties projected by people onto things in the world. In the case of inward directedness, moods are directed at the overall state of a person's body. In the case of hybrid directedness, moods are directed at some combination of inward and outward things.

Further objections


Even if one can identify some possible intentional content for moods, we might still question whether that content can sufficiently capture the phenomenal character of the mood states it is part of. Amy Kind contends that for all the previously mentioned kinds of directedness (outward, inward, and hybrid), the intentional content supplied to the mood state is not capable of sufficiently capturing its phenomenal aspects.[8] In the case of inward directedness, the phenomenology of the mood does not seem tied to the state of one's body, and even if one's mood is reflected by the overall state of one's body, that person will not necessarily be aware of it, demonstrating that the intentional content is insufficient to capture the phenomenal aspects of the mood. In the case of outward directedness, the phenomenology of the mood and its intentional content do not seem to stand in the corresponding relation they should, given that the phenomenal character is supposed to reduce to the intentional content. Hybrid directedness, if it can even get off the ground, faces the same objection.


Philosophers


There is a wide debate over what kinds of representations exist. Several philosophers address different aspects of this debate, including Alex Morgan, Gualtiero Piccinini, and Uriah Kriegel.

Alex Morgan


There are "job description" representations.[1] That is representations that represent something—have intentionality, have a special relation—the represented object does not need to exist, and content plays a causal role in what gets represented:.

Structural representations are also important.[1] These are, roughly, mental maps whose structure corresponds to the objects in the world they represent (the intentional content). According to Morgan, structural representations are not the same as mental representations; there is nothing necessarily mental about them, since plants can have structural representations.

There are also internal representations.[1] These types of representations include those that involve future decisions, episodic memories, or any type of projection into the future.

Gualtiero Piccinini


In his forthcoming work, Gualtiero Piccinini discusses natural and nonnatural mental representations. He relies on the notion of natural meaning given by Grice (1957),[9] where "x means that P" entails P: for example, "those spots mean measles" entails that the patient has measles. Nonnatural representations, by contrast, do not carry this entailment: three rings on the bell of a bus may mean that the bus is full, but the rings on the bell are independent of the fullness of the bus; something else, equally arbitrary, could have been assigned to signify that the bus is full.

Uriah Kriegel


There are also objective and subjective mental representations.[10] Objective representations are closest to tracking theories, in which the brain simply tracks what is in the environment. Subjective representations can vary from person to person. The relationship between these two types of representation can vary, as in the following cases:

  1. The objective varies while the subjective does not: e.g., a brain in a vat
  2. The subjective varies while the objective does not: e.g., a color-inverted world
  3. Representation is entirely objective with no subjective component: e.g., a thermometer
  4. Representation is entirely subjective with no objective component: e.g., an agent that has experiences in a void

Eliminativists think that subjective representations do not exist. Reductivists think that subjective representations are reducible to objective ones. Non-reductivists think that subjective representations are real and distinct.[10]

Decoding mental representation in cognitive psychology


In the field of cognitive psychology, mental representations refer to patterns of neural activity that encode abstract concepts or representational "copies" of sensory information from the outside world.[11] For example, iconic memory can store a brief sensory copy of visual information, lasting a fraction of a second, which allows the brain to process visual details about a brief visual event, like another car driving past on the highway. Other mental representations are more abstract, like goals, conceptual representations, or verbal labels ("car").

To understand how humans process information, cognitive psychologists use Posner's letter-matching task, which probes how individuals process visual information by measuring reaction time when viewing pairs of letters.[12] Such experiments reveal that different representations of the same stimulus take different amounts of time to activate: a stimulus such as "a" can be represented in multiple ways by a visual input, including the physical form of the letter, its letter category, and its phonetic representation.
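A minimal sketch of the task's logic follows; the match categories mirror the physical-versus-name distinction described above, while the reaction times are illustrative placeholders rather than values from the cited study.

```python
# Minimal sketch of the logic of a Posner-style letter-matching task.
# The reaction times below are illustrative placeholders, not measured data.

def match_level(pair: str) -> str:
    """Classify a letter pair by the level of representation needed to match it."""
    a, b = pair
    if a == b:
        return "physical match"   # e.g. "AA": same visual form
    if a.lower() == b.lower():
        return "name match"       # e.g. "Aa": same letter category
    return "mismatch"             # e.g. "Ab"

# Typical ordering reported for such tasks: physical matches are verified faster
# than name matches, because name matches require accessing a more abstract
# (categorical/phonetic) representation of the letter.
illustrative_rt_ms = {"physical match": 450, "name match": 530, "mismatch": 560}

for pair in ["AA", "Aa", "Ab"]:
    level = match_level(pair)
    print(pair, level, illustrative_rt_ms[level], "ms (illustrative)")
```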

fMRI


Functional magnetic resonance imaging (fMRI) is a powerful tool in cognitive science for exploring the neural correlates of mental representations. "A powerful feature of event-related fMRI is that the experimenter can choose to combine the data from completed scans in many different ways."[13]

By recording patterns of brain activity, fMRI can be used to quantify and decode different kinds of mental representations. Certain ideas, perceptions, or mental images may be associated with these patterns, which reflect underlying neurological processes. For example, one study tested whether fMRI could accurately measure the mental representations that are triggered when viewing a simple image. Participants were shown 1,200 images of natural objects and printed letters while brain activity was recorded from multiple regions of visual cortex (V1–V4 and the lateral occipital complex). Using deep neural networks (DNNs), the authors were then able to "recreate" the original images based only on the brain data. These reconstructed images were remarkably similar to the originals, preserving important elements like texture, shape, and color. A new group of participants was able to correctly identify the original image from the reconstructed image 95 percent of the time.
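A rough computational analogue of such an identification test can be sketched as follows, assuming each original and reconstructed image is summarized by a feature vector and that a reconstruction is matched to the most correlated original; the cited study instead had human raters make the judgment, so this is purely an illustration of the logic.

```python
# Sketch of a correlation-based identification test for evaluating reconstructions.
# Simulated feature vectors stand in for images; not the cited study's pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for 50 original images and their reconstructions
# (a reconstruction is modeled as the original plus noise).
originals = rng.normal(size=(50, 256))
reconstructions = originals + rng.normal(scale=0.8, size=originals.shape)

def identify(recon: np.ndarray, candidates: np.ndarray) -> int:
    """Return the index of the candidate original most correlated with the reconstruction."""
    corrs = [np.corrcoef(recon, c)[0, 1] for c in candidates]
    return int(np.argmax(corrs))

correct = sum(identify(reconstructions[i], originals) == i for i in range(len(originals)))
print(f"identification accuracy: {correct / len(originals):.0%}")
```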

For instance, if participants are instructed to visualize a certain object or scene, fMRI can determine the engaged brain regions (the primary visual cortex for visual imagery; the hippocampus for episodic memory). Such patterns provide a glimpse into the neural encoding of mental states and act as bridges between neural activity and subjective experience. Cognitive scientists consider fMRI research critical to revealing how mental representations are distributed and overlap. These methods have demonstrated that conceptual representations, such as "tools" versus "animals", are not limited to discrete brain regions but rather span networks encompassing associative, motor, and sensory regions. This illustrates how mental representations combine semantic and perceptual aspects, providing a more complex and dynamic view of cognition. Furthermore, by showing how experience gradually alters mental representations, fMRI research has advanced our understanding of brain plasticity. By mapping these processes, fMRI offers a glimpse into the neural underpinnings of thought and its organization.[14]

Multi-voxel pattern analysis


Multi-voxel pattern analysis (MVPA) is a data-processing method used to analyze multiple activity patterns simultaneously. It is commonly used in cognitive psychology to examine brain-imaging data collected with fMRI, and it allows researchers to test whether a particular mental representation is active within a particular brain region. From patterns of fMRI activation, visual perception can be analyzed and decoded. In certain regions of the brain, such as retinotopic visual cortex, researchers can predict features of a visual percept, such as lines or patterns, the awareness of the individual, features that were not originally analyzed, and the identity of the images an individual perceives. Studies have shown that shared patterns of imagery and perception are more evident in the ventral temporal cortex than in retinotopic regions of the brain. These results show that, without new information entering the brain, it can reactivate patterns of neural activity that have been active before.[15] With this analysis, researchers can better understand how the brain decodes information and identify the ways in which that information is represented.
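In practice, an MVPA-style decoding analysis often amounts to training a classifier on voxel patterns and estimating its accuracy by cross-validation. The sketch below uses simulated data and scikit-learn purely as an illustration of that logic; real analyses operate on preprocessed fMRI estimates and involve many additional choices.

```python
# Minimal MVPA-style sketch: decode a stimulus category from (synthetic) voxel
# patterns with a linear classifier and cross-validation. The data are simulated.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials_per_class, n_voxels = 40, 200
labels = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)  # e.g. faces vs. houses

# Simulated voxel patterns: each class has a slightly different mean activity pattern.
class_means = rng.normal(scale=0.5, size=(2, n_voxels))
patterns = class_means[labels] + rng.normal(size=(len(labels), n_voxels))

# 5-fold cross-validation estimates how well the region's pattern encodes the category.
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```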

Restricted vs. unrestricted decoding of mental representations


When scientists study the brain, they aim to understand how thoughts, feelings, and perceptions are represented in brain activity. One way they do this is through neural decoding, in which they attempt to infer mental content by analyzing patterns of brain activity. There are two main approaches: restricted decoding and unrestricted decoding.

Restricted decoding


Restricted decoding focuses on brain activity tied to a specific task or stimulus: a person recognizes an object, solves a problem, or looks at a picture, and researchers track the brain activity related to that task. For example, if a person looks at a picture of a face, certain areas of the brain respond in a predictable way. Researchers can then study these patterns and "decode" the brain activity to infer what the person is seeing or thinking.[16]

Restricted decoding is therefore narrowly focused: the brain activity is tied to one thing, such as a specific object or task, and the goal is to work out how the brain represents specific things (such as seeing a face or recognizing a word) while a person is actively engaging with a stimulus.

For example, with fMRI scans, researchers can track brain activity while people look at different objects or images and use these data to predict what a person is seeing, since the neural patterns are relatively consistent when someone is exposed to the same stimulus (such as a particular image or object).
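One simple form such restricted decoding can take is template matching: estimate an average response pattern for each known stimulus, then assign a new pattern to the stimulus whose template it most resembles. The sketch below uses simulated patterns purely for illustration.

```python
# Sketch of restricted decoding as template matching: average the brain response
# to each known stimulus, then assign a new response to the most similar template.
# Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
stimuli = ["face", "house", "tool"]

# Hypothetical "template" patterns estimated from training trials for each stimulus.
templates = {s: rng.normal(size=100) for s in stimuli}

def decode(test_pattern: np.ndarray) -> str:
    """Predict which stimulus evoked the pattern via correlation with each template."""
    return max(stimuli, key=lambda s: np.corrcoef(test_pattern, templates[s])[0, 1])

# A new, noisy response to a face should be decoded as "face".
test = templates["face"] + rng.normal(scale=0.5, size=100)
print(decode(test))  # expected: "face"
```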

Unrestricted decoding


Unrestricted decoding is less constrained. Instead of focusing on a task, researchers examine brain activity when people are not doing anything in particular, for example when they are resting or thinking freely. This approach aims to characterize general mental states or spontaneous thoughts that are not linked to a specific task or stimulus.[17]

For example, a person might simply be asked to relax and think about whatever comes to mind, and researchers then try to decode the resulting brain patterns to infer what is going on in that person's mind, whether they are feeling happy or sad, or are daydreaming. Because the brain is in a more free-flowing state, the patterns are much less predictable, and researchers often rely on tools such as machine learning to help interpret the data.

In other words, unrestricted decoding attempts to determine what is happening in the brain when it is not responding to a defined task, including which emotions, memories, or spontaneous thoughts are occurring.
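Because there are no task labels in the unrestricted case, one common strategy (sketched below with simulated data; this is not a specific published pipeline) is unsupervised analysis, for example clustering resting-state activity into recurring patterns that researchers then try to interpret.

```python
# Toy sketch of the unrestricted case: with no task labels to predict, cluster
# resting-state activity patterns into recurring "brain states". Simulated data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Simulated resting-state patterns drawn from three hidden "states".
hidden_states = rng.normal(scale=1.5, size=(3, 50))
timepoints = np.vstack([s + rng.normal(size=(60, 50)) for s in hidden_states])

# Cluster the timepoints; the cluster labels are candidate recurring mental states,
# which researchers would then try to interpret (e.g. relate to self-reports).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(timepoints)
print(np.bincount(labels))  # roughly 60 timepoints per recovered state
```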

See also


References

  1. ^ a b c d Morgan, Alex (2014). "Representations Gone Mental" (PDF). Synthese. 191 (2): 213–44. doi:10.1007/s11229-013-0328-7. S2CID 18194442.
  2. ^ Marr, David (2010). Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. The MIT Press. ISBN 978-0262514620.
  3. ^ McKellar, Peter (1957). Imagination and Thinking: A Psychological Analysis. Oxford, England.
  4. ^ a b Robert J. Sternberg (2009). Cognitive Psychology. Cengage Learning. ISBN 9780495506294.
  5. ^ Pearson, Joel; Kosslyn, Stephen M. (2015-08-18). "The heterogeneity of mental representation: Ending the imagery debate". Proceedings of the National Academy of Sciences. 112 (33): 10089–10092. doi:10.1073/pnas.1504933112. ISSN 0027-8424. PMC 4547292. PMID 26175024.
  6. ^ Augusto, Luis M. (2014). "Unconscious representations 2: Towards an integrated cognitive architecture". Axiomathes. 24: 19–43. doi:10.1007/s10516-012-9207-y. S2CID 122896502.
  7. ^ Thagard, P. (1996). Mind: An Introduction to Cognitive Science. MIT Press.
  8. ^ a b c Kind, Amy (2014). Current Controversies in Philosophy of Mind. New York: Routledge. p. 118.
  9. ^ Grice, H.P. (1957). "Meaning". Philosophical Review. 66 (3): 377–388. doi:10.2307/2182440. JSTOR 2182440.
  10. ^ a b Kriegel, Uriah (2014). Current Controversies in Philosophy of Mind. Routledge. pp. 161–79.
  11. ^ Gazzaniga, Michael; Ivry, Richard; Mangun, George. Cognitive Neuroscience: The Biology of the Mind (Fifth ed.). W. W. Norton & Company. pp. 74–76.
  12. ^ Meyers, Lawrence; Schoenborn, Don; Clark, Gail (1975). "Memory and encoding in a letter-matching reaction time task" (PDF). Bulletin of the Psychonomic Society. 5 (1): 41–42. doi:10.3758/BF03336695.
  13. ^ Milner, David (November 1998). "Cognitive Neuroscience: The Biology of the Mind and Findings and Current Opinion in Cognitive Neuroscience". Trends in Cognitive Sciences. 2 (11): 463. doi:10.1016/s1364-6613(98)01226-1. ISSN 1364-6613. PMID 21227278.
  14. ^ Shen, Guohua; Dwivedi, Kshitij; Majima, Kei; Horikawa, Tomoyasu; Kamitani, Yukiyasu (2019-04-12). "End-to-End Deep Image Reconstruction From Human Brain Activity". Frontiers in Computational Neuroscience. 13: 21. doi:10.3389/fncom.2019.00021. ISSN 1662-5188. PMC 6474395. PMID 31031613.
  15. ^ Reddy, Leila; Tsuchiya, Naotsugu; Serre, Thomas (2010-04-01). "Reading the mind's eye: Decoding category information during mental imagery". NeuroImage. 50 (2): 818–825. doi:10.1016/j.neuroimage.2009.11.084. ISSN 1053-8119. PMC 2823980.
  16. ^ "Cognitive Neuroscience". wwnorton.com. Retrieved 2024-11-25.
  17. ^ "Cognitive Neuroscience". wwnorton.com. Retrieved 2024-11-25.

Further reading
