Adventures in Cognitive Science 6: How Language Interferes with Actions

punctuation-marks-2999583_640.jpg
Source: pixabay.com

After writing about mirror neurons, the common coding theory and the neural simulation of actions, which all point in a similar direction (simulating actions to help us perceive the actions of others), I will try in this post to connect these theories with embodiment, the topic of my first two posts. This time I am going to present the paper “Grounding language in action”, published in 2002 by Arthur Glenberg, a professor at the department of psychology at the University of Wisconsin-Madison and quite an influential figure in cognitive science (at least in my studies his name pops up every once in a while), and Michael Kaschak, a professor of psychology at Florida State University.
In this paper they report a new phenomenon associated with language comprehension, called the action-sentence compatibility effect (ACE). When a sentence implied action in one direction (e.g. “close the drawer” implies an action away from the body), participants had difficulty making a judgement with a response that required movement in the opposite direction. The data collected from this experiment support an embodied theory of meaning that relates the meaning of sentences to human action, in contrast to classical theories of language comprehension, which suggest that meaning is represented as a set of relations among abstract nodes.

Introduction

How language conveys meaning is still an open question (I would say this is as true today as it was in 2002, when the article was written). The traditional theory claims that language conveys meaning using abstract, amodal and arbitrary symbols, such as words. Words are abstract, since “chair” refers to both big and small chairs; they are amodal, since the same word “chair” is used whether one writes or speaks about a chair; and they are arbitrary, since the word “chair” has no direct relationship to the object it represents. An alternative account (the one the authors hold) is that linguistic meaning is grounded in bodily activity. In this study the authors show that comprehending an action in one direction (e.g. “close the drawer”, which implies an action away from the body) interferes with a real action in the opposing direction (e.g. a movement toward the body). The authors argue that the meaning of words cannot be fully abstract, because then, in order to explain words, you would have to rely on other words (again abstract symbols), which leads to a circular argument; so the meaning of words must be grounded in the world. They propose a different account, the so-called indexical hypothesis (IH), which says that meaning is based on action, so it is embodied. The meaning of a sentence comes from the set of actions available in a situation, and this set of available actions comes from meshing affordances (potential interactions between bodies and objects) to accomplish action-based goals. A chair, for example, affords sitting for (adult) humans, but not for elephants, since they have the wrong kind of body for sitting on a chair.
According to the indexical hypothesis, three processes transform words and syntax into an action-based meaning. First, words and phrases are mapped (indexed) to perceptual symbols, which, unlike abstract symbols, are modal and non-arbitrary. Then affordances are derived from those perceptual symbols. Finally, the affordances are meshed under the guidance of syntactic constructions: the grammatical form of the sentence directs a cognitive simulation (just like in my other posts, we are running a simulation) that combines the affordances. If the set of meshed affordances corresponds to a doable action, the sentence is understood.
The authors test this hypothesis with two experiments in which the action implied by a sentence interferes with the actions of the participants.
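To make the three-step pipeline concrete, here is a deliberately toy Python sketch of it. All names, numbers and data structures are my own invention for illustration; the paper itself contains no such model. Perceptual symbols are stood in for by plain dictionaries, and "meshing under syntax" is reduced to a single verb check:

```python
# Illustrative toy model of the indexical hypothesis pipeline.
# All symbols, numbers and names here are invented for this sketch.

# Step 1: index words to (stand-ins for) perceptual symbols.
PERCEPTUAL_SYMBOLS = {
    "chair": {"supports_kg": 120},   # what the chair can bear
    "human": {"mass_kg": 80},
    "elephant": {"mass_kg": 4000},
}

# Step 2: derive affordances -- potential body/object interactions.
def affords_sitting(agent: str, obj: str) -> bool:
    a = PERCEPTUAL_SYMBOLS[agent]
    o = PERCEPTUAL_SYMBOLS[obj]
    # the object affords sitting only if it can bear the agent's weight
    return a["mass_kg"] <= o.get("supports_kg", 0)

# Step 3: mesh affordances under syntactic guidance -- the sentence
# is "understood" only if the meshed affordances yield a doable action.
def comprehend(agent: str, verb: str, obj: str) -> bool:
    if verb == "sit_on":
        return affords_sitting(agent, obj)
    return False

print(comprehend("human", "sit_on", "chair"))     # doable action
print(comprehend("elephant", "sit_on", "chair"))  # not doable
```

The point of the sketch is only the shape of the process: indexing, deriving affordances, then checking whether the meshed result is a doable action, exactly mirroring the chair/elephant example above.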

Experiment 1

A series of sensible and non-sensible sentences (e.g. “boil the air”) was shown to the participants, and they had to decide as fast as possible whether each sentence made sense. For the sensible sentences, the implied direction (toward/away) was manipulated: e.g. “put your finger under your nose” was a toward sentence, “close the drawer” an away sentence. The participants were never instructed to consider the implied direction. Responses were collected with a box that had a middle button, a near button and a far button; the near and far buttons meant either “yes” or “no”, depending on whether a “yes-is-near” or “yes-is-far” condition was tested, so in order to press them the participants had to move their fingers either toward or away from their bodies. Pressing the middle button displayed a sentence on the screen, and the major dependent variable was the time until the “yes” or “no” button was pressed. According to the indexical hypothesis, reading a sentence that implies an action away from the body runs a simulation including a movement away from the body, and vice versa for a sentence implying an action toward the body. The prediction was a statistical interaction between the direction implied by the sentence and the direction of the actual movement required to press the button; this interaction is the action-sentence compatibility effect (ACE). Half of the sentences used the double-object construction (“Courtney handed you the notebook”) and the other half the dative form (“Andy delivered the pizza to you”).
The results showed that the ACE arises when performing an action, and that it does so for multiple sentence structures (e.g. the dative form). Finally, the experiment also showed that the ACE can be found for both concrete and more abstract actions, which rules out one alternative explanation: that a sentence is first understood by other means and only afterwards translated into action patterns. If the translation into action happened only after understanding, the implied direction could not have influenced the response; instead, as the indexical hypothesis claims, understanding a sentence uses the same cognitive mechanisms as planning and executing an action.
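What "a statistical interaction" means here can be shown with a small numerical sketch. The reaction times below are invented for illustration (they are not the paper's data); the ACE is the difference between the mean of the incompatible cells and the mean of the compatible cells of the 2x2 design:

```python
# Hypothetical mean reaction times (ms) illustrating the shape of an
# ACE interaction -- these numbers are invented, not the paper's data.
# Keys: (implied sentence direction, actual response direction).
mean_rt = {
    ("away", "away"): 1600,      # compatible: fast
    ("toward", "toward"): 1610,  # compatible: fast
    ("away", "toward"): 1750,    # incompatible: slow
    ("toward", "away"): 1740,    # incompatible: slow
}

# The ACE is the interaction contrast: incompatible minus compatible.
compatible = (mean_rt[("away", "away")] + mean_rt[("toward", "toward")]) / 2
incompatible = (mean_rt[("away", "toward")] + mean_rt[("toward", "away")]) / 2
ace = incompatible - compatible
print(f"ACE = {ace:.0f} ms")  # positive -> compatibility speeds responses
```

A positive value means responses were faster when the implied and actual movement directions matched, which is exactly the pattern the indexical hypothesis predicts.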

Experiment 2A and 2B

Experiment 2A was designed to replicate and extend the findings from Experiment 1: the response-direction variable was manipulated within participants, and the participants had to use their left hand (all participants were right-handed and had used their right hand in Experiment 1). Experiment 2B was designed to test a spatial-location alternative to the indexical hypothesis: instead of having to move a finger, one finger rested on the “yes” button and one on the “no” button (which were either near to or far from the body). According to the indexical hypothesis, the ACE should be eliminated in this setup, since no movement toward or away from the body is required.
These experiments demonstrated that the ACE is replicable, and that the phenomenon also exists when using the non-dominant hand, so the ACE is unlikely to reflect detailed action planning. The difference between Experiments 2A and 2B showed that the ACE depends on the actions themselves and not on the spatial location of the responses.

Conclusion

The ACE follows the predictions of the indexical hypothesis and supports the notion that language understanding is heavily grounded in bodily action: the meaning of a sentence is given by an understanding of how the actions described by the sentence can be accomplished and of how the sentence changes the possibilities for action. The ACE shows that real bodily action is at the root of the meaning conveyed by language.
