The September SMART Lecture was presented by Wendy Sandler, Professor of Linguistics at the University of Haifa and Founding Director of the Sign Language Research Lab.
The talk addressed the relation between sign language and the body that conveys it. Some take the view that sign languages are just like spoken languages, distinguished only trivially by the medium of production, while others hold that sign languages are derived directly from natural gestures. The work presented suggests that we are confronted with this puzzling dichotomy because we have often been looking in the wrong places in our quest to understand the linguistic properties of sign language and what it has to tell us about language in general. Wendy Sandler began by isolating gestures of different parts of the body that are designated to manifest grammatical structure in established sign languages. Turning to a young sign language in a Bedouin village, she showed that the body begins as a nonsegmented whole, with only the hands designated to create symbolic images. Across signers of four age groups in this preliminary study, she showed not a magical and sudden appearance of grammatical structure, but instead a gradual activation of different parts of the body to create increasingly complex grammatical form. Through this process, the designated gestures of visual language illuminate the emergence of grammar in a way that could not be observed in a newly emerging spoken language, even if it were possible to encounter one.
The October SMART Lecture was presented by Luc Steels, scientist and artist, and Director of the Artificial Intelligence Laboratory of the Vrije Universiteit Brussel.
The talk discussed the research on language evolution that Luc Steels has conducted with his students over the past decade. After a general introduction to the methodology and its achievements, illustrated with video clips of language game experiments on real robots, he focused on one spin-off of this research: a new computational formalism for the parsing, production and learning of construction-based grammars, known as Fluid Construction Grammar (FCG). FCG tries to combine the best ideas that have circulated in computational linguistics over the past decades. The result is a new synthesis that integrates the constructional perspective while offering great flexibility and robustness. The presentation was illustrated with a live demonstration of the FCG system in action.
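The core idea behind construction-based processing can be sketched in a few lines: a construction pairs a form pattern with a meaning template, and the same inventory of constructions serves both comprehension and production. The toy below is only an illustration of that idea, not the actual FCG system (which is a far richer, feature-structure-based formalism implemented in Common Lisp); the example constructions, variable notation, and function names are invented for this sketch.

```python
# Toy illustration of construction-based parsing: each construction pairs a
# form pattern with a meaning template. Variables are marked with '?'.
# This is NOT the actual FCG formalism, just a minimal sketch of the idea.
CONSTRUCTIONS = [
    (["the", "?noun"], {"referent": "?noun", "definite": True}),
    (["?agent", "kicks", "?patient"],
     {"event": "kick", "agent": "?agent", "patient": "?patient"}),
]

def match(pattern, tokens):
    """Match a form pattern against a token sequence, returning variable bindings."""
    if len(pattern) != len(tokens):
        return None
    bindings = {}
    for p, t in zip(pattern, tokens):
        if p.startswith("?"):
            bindings[p] = t          # bind the variable to the token
        elif p != t:
            return None              # literal mismatch: construction does not apply
    return bindings

def instantiate(template, bindings):
    """Fill a meaning template with the bindings found during matching."""
    return {k: bindings.get(v, v) if isinstance(v, str) else v
            for k, v in template.items()}

def parse(tokens):
    """Comprehension: map a form to a meaning via the first matching construction."""
    for pattern, meaning in CONSTRUCTIONS:
        bindings = match(pattern, tokens)
        if bindings is not None:
            return instantiate(meaning, bindings)
    return None

print(parse(["mary", "kicks", "ball"]))
# -> {'event': 'kick', 'agent': 'mary', 'patient': 'ball'}
```

In the real system, matching is done by unification over feature structures rather than flat token lists, which is what gives FCG its flexibility and robustness to ungrammatical or novel input.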
The November SMART Lecture was presented by Max Louwerse, Professor of Cognitive Psychology and Artificial Intelligence at the Tilburg Center for Cognition and Communication, Tilburg University.
A vast amount of literature has demonstrated that cognitive processes can be explained by a perceptual simulation account. Oftentimes such studies treat perceptual simulation as the only possible explanation of the observed effects. This talk discussed whether there are alternative candidates. One alternative is provided by the Symbol Interdependency Hypothesis. According to this hypothesis, prelinguistic conceptual knowledge used when speakers formulate utterances is translated into linguistic conceptualizations, so that, as a function of language use, perceptual relations become encoded in language. The talk presented evidence of language encoding conceptual, geographical and social information, and argued that language users exploit these regularities in their cognitive processes.
The debate, organized together with the Amsterdam Colloquium, addressed the Future of Semantics: Armchair Theorizing or Data Fetishism?
The March 2014 SMART Lecture was presented by Andrew Nevins, Professor of Linguistics at University College London.
Almost every song lyric can be misunderstood: famously, the Jimi Hendrix lyric ‘kiss the sky’ (from Purple Haze) is often heard as ‘kiss this guy’. Why does this happen? The answer lies in understanding the phenomenon of ‘mondegreens’. ‘Slips of the ear’ of this sort occur in everyday spoken conversation on a regular basis, even in Dutch. The party game Chinese Whispers plays on our tendency to transform the intended speech wave into a different perceptual experience, and its transformations can be seen as the instantaneous mutations that lead to large-scale language change. While slips of the tongue are well known from Freud’s analysis of speakers’ ‘accidental’ revelations, slips of the ear have received far less attention. Andrew Nevins and colleagues have developed a database of 4,000 naturally collected English examples in which the hearer is the source of miscommunication, whether in the noisy pub, over the mobile phone, or in the clash of dialects heard when an Australian visiting Arizona asks ‘Where’s a basin?’ and a local replies ‘Bison? They’re rare around here these days’. Looking into recurrent slips reveals that our expectations can indeed bias what we mis-hear, but within limits: the intended utterance and the misheard message must be phonetically close enough for our ears to deceive us, such that speaker-dependent bottom-up perceptual ambiguity proposes, and listener-dependent top-down processing disposes.
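The constraint that the intended and misheard utterances must be phonetically close can be made concrete with a simple edit-distance computation over phone sequences: a slip like ‘kiss the sky’ → ‘kiss this guy’ involves only a couple of minimal substitutions. The sketch below uses the standard Levenshtein algorithm; the phone transcriptions are rough, invented approximations for illustration, not careful phonetic analyses.

```python
# Levenshtein distance over phone sequences, as a rough proxy for the
# "phonetically close enough" constraint on slips of the ear.
# The transcriptions below are illustrative approximations, not IPA.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance over two sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # cost of deleting i phones
    for j in range(n + 1):
        d[0][j] = j                     # cost of inserting j phones
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution (or match)
    return d[m][n]

# "kiss the sky" vs. "kiss this guy", as rough phone sequences
intended = ["k", "ih", "s", "dh", "ax", "s", "k", "ay"]
misheard = ["k", "ih", "s", "dh", "ih", "s", "g", "ay"]
print(edit_distance(intended, misheard))  # -> 2 (two minimal substitutions)
```

Only two phone substitutions separate the two utterances, which is why the listener's top-down expectations can so easily tip the percept from one to the other.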
The May 2014 SMART Lecture was presented by Roel Willems, senior researcher at the Donders Institute for Brain, Cognition and Behaviour and at the Max Planck Institute for Psycholinguistics.
A debated issue in language understanding is whether language is understood via sensori-motor simulation of its semantic content. The available evidence is mixed, and it is fair to say that we have little understanding of how and when simulation plays a role in language understanding. Roel Willems presented the results of some recent neuroimaging studies in which mental simulation was investigated in the context of narratives. Issues of interest were individual differences in mental simulation during story comprehension, and how mental simulation is driven by literary techniques such as narrative perspective.
The neuroimaging work using narratives contrasts with earlier work that used single, isolated verbs to probe simulation, and Roel Willems argued that the study of language will benefit from complementing what we know from experiments with de-contextualized stimuli with findings obtained from richer stimuli such as narratives.
“Linguistic creativity is made possible by a division of labor between an inventory of stored, reusable units and a set of computational processes which compose those units into an unbounded number of new expressions—allowing us to produce and comprehend previously unthought concepts and ideas with an ease that we usually take for granted. This book has addressed the deceptively simple-sounding questions: What are the stored units and which computational processes can give rise to new expressions? […] In this book, we have proposed a precise, mathematical theory of productivity and reuse, and studied a specific instantiation of that theory: the Fragment Grammars model. The model is based on the idea that the correct pattern of storage and computation in a language can be determined (at least partially) as the result of a probabilistic inference based on the distribution of forms in the input data. We have shown that this idea, combined with standard assumptions from linguistics and psychology, leads to a model under which the productivity of a word-formation process is evidenced by the productive use of that process. Put another way, the model, like the linguist—and, by hypothesis, the child learner—concludes that a word-formation process is productive by observing it used to generate new forms.” – Tim O’Donnell, Productivity and Reuse in Language: A Theory of Linguistic Computation and Storage, MIT Press (2014, in production).
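The storage-versus-computation trade-off described in the quote can be illustrated with a toy Pitman-Yor-style "adaptor": frequently observed forms become cheap to retrieve from storage, while a reserved slice of probability mass keeps the productive process alive for novel forms. This is a drastic simplification of the Fragment Grammars model (among other things, it treats each stored type as a single restaurant table and ignores tree fragments entirely), and the parameter values and example words are invented for illustration.

```python
# Drastically simplified sketch of the storage-vs-computation trade-off
# behind adaptor-based models such as Fragment Grammars. Each stored type
# is treated as one "table" (a real Pitman-Yor CRP tracks tables per type);
# parameters and example forms are invented for illustration.
from collections import Counter

class Adaptor:
    def __init__(self, discount=0.5, concentration=1.0):
        self.d = discount           # Pitman-Yor discount parameter
        self.a = concentration      # concentration parameter
        self.counts = Counter()     # stored forms and their reuse counts
        self.total = 0

    def prob_reuse(self, form):
        """Probability of retrieving `form` from storage."""
        if self.counts[form] == 0:
            return 0.0
        return (self.counts[form] - self.d) / (self.total + self.a)

    def prob_new(self):
        """Probability mass reserved for productive generation of a new form."""
        return (self.a + self.d * len(self.counts)) / (self.total + self.a)

    def observe(self, form):
        self.counts[form] += 1
        self.total += 1

adaptor = Adaptor()
for _ in range(10):
    adaptor.observe("walked")   # a frequent form: storage pays off
adaptor.observe("glorped")      # a novel coinage, seen only once

print(round(adaptor.prob_reuse("walked"), 3))   # high: cheap to reuse
print(round(adaptor.prob_reuse("glorped"), 3))  # low: barely entrenched
print(round(adaptor.prob_new(), 3))             # mass left for productivity
```

The reuse and novelty probabilities sum to one, so every observation of a reused form shifts mass away from productive generation, which is exactly the sense in which the input distribution lets the learner infer which processes remain productive.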
Levels, levels, and levels: Unpacking the senses relevant to cognitive science (an unorthodox view of Marr’s levels of analysis)