
SMART Debate on Defining Cognition

With panellists Jelle Zuidema (UvA), Fred Weerman (UvA), Jeannette Schaeffer (UvA), Rens Bod (UvA), Julian Kiverstein (UvA), Patricia Pisters (UvA)

 

Eugene S. Hunn (University of Washington): Ethnobiological Classification and Nomenclature: Cognitive aspects of an anthropological research program

The February SMART lecture was presented by Eugene S. Hunn, Professor Emeritus, Department of Anthropology of the University of Washington. His talk was introduced by Konrad Rybka.

Eugene S. Hunn reviewed the contributions of ethnobiological studies to our understanding of human cognition. Ethnobiology is a complex and evolving interdisciplinary field dedicated to documenting how people in diverse places and languages make sense of their natural environment, largely by means of human language. It relies primarily on ethnographic methods: participant observation and careful documentation of natural language, supplemented by some simple psychological experimentation. An early phase of ethnobiological research focused on biological classification and nomenclature, and was most prominent from ca. 1953 (Conklin) through the 1970s (Berlin, Bulmer, et al.). Attention has since shifted to more pragmatic concerns, though interest in principles of language and cognition continues. Hunn was deeply involved in these early explorations in cognitive ethnobiology, and here he highlighted three aspects of their contribution to understanding human cognition, in roughly reverse chronological order.

1) “Precocious acquisition”: evidence that in small-scale, land-based communities dependent on oral tradition, children master an extensive vocabulary of plant and animal names (several hundred of each) before the age of 10. They acquire these names, and much associated ecological and cultural knowledge, with minimal explicit tutoring. This suggests that humans are “programmed” to appreciate local biodiversity.

2) “The Magic Number 500”: a rough “quota” of terms for certain classificatory domains, specifically plants, animals, places, and people. Hunn has argued that this suggests human memory is “formatted” to admit a certain quantity of terms for domains in which the terms are “nodes” in complex webs of information.

3) A “perceptual/taxonomic” model of how knowledge of biodiversity might be stored and manipulated conceptually in an efficient and ecologically adaptive manner.
Though these ideas were developed more or less in academic isolation (within cultural anthropological circles), Hunn hopes they may resonate with more current cognitive research within contrasting methodological paradigms.

 

Niels Taatgen (University of Groningen): The Distracted Mind

The March SMART lecture was presented by Niels Taatgen, Professor of Cognitive Modeling at the Institute of Artificial Intelligence and Cognitive Engineering of the University of Groningen. His talk was introduced by Jakub Szymanik.

One of the challenges of modern work is not getting distracted. Outside stimuli continuously vie for our attention, and our internal thought processes may also derail our thinking away from productive work. In his talk, Niels Taatgen presented a model of distraction based on the PRIMs cognitive architecture (Taatgen, 2013). The idea in this model is that goals activate mental operators to achieve those goals, but that sources of distraction (perceptual or mental) can also recruit operators. He demonstrated the model in two experiments, one on visual distraction and one on mental distraction.
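The core mechanism described here, goal-driven operators competing with distractor-recruited ones for control, can be caricatured in a few lines of code. This is only an illustrative toy, not the PRIMs architecture itself: the activation values, the Gaussian noise model, and the parameter names are invented for the sketch.

```python
import random

def simulate(steps, goal_strength, distractor_strength, noise=0.3, seed=1):
    """Toy operator competition: at each step, the operator with the
    highest (base activation + noise) fires. Goal-relevant operators
    draw activation from the current goal; distractor operators are
    recruited by perceptual or mental sources of distraction."""
    rng = random.Random(seed)
    on_task = 0
    for _ in range(steps):
        goal_act = goal_strength + rng.gauss(0, noise)
        distract_act = distractor_strength + rng.gauss(0, noise)
        if goal_act >= distract_act:
            on_task += 1  # a goal-relevant operator fired this step
    return on_task / steps  # proportion of steps spent on task

# A stronger goal (or a weaker distractor) keeps more steps on task.
focused = simulate(10_000, goal_strength=1.0, distractor_strength=0.3)
distracted = simulate(10_000, goal_strength=1.0, distractor_strength=0.9)
```

Even this crude competition reproduces the qualitative pattern: as distractor activation approaches goal activation, the share of on-task processing drops.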

 

Jubin Abutalebi (University Vita-Salute San Raffaele): Neural Consequences of Bilingualism

The May SMART lecture was presented by Jubin Abutalebi, Cognitive Neurologist and Associate Professor of Neuropsychology at the Faculty of Psychology, University Vita-Salute San Raffaele in Milan. His talk was introduced by Enoch O. Aboh.

Culture, education and other acquired capacities act on individual differences in skill to shape how individuals perform cognitive tasks such as attentional and executive control. Notably, bilingualism also appears to be such a factor: it expresses itself not only as the ability to speak more than one language, but also in individual performance on tests of cognitive functioning. The cognitive processes most likely to be affected by bilingualism are those orchestrating the resources assigned to attentional and executive control. However, debate is raging over whether bilingualism actually confers a cognitive advantage.

Beyond behavioural differences, bilingualism seems to affect brain structure as well. Indeed, bilingualism induces experience-related structural changes (i.e., increased grey or white matter density) in areas that are part of the executive control network, such as the frontal lobes, the left inferior parietal lobule and the anterior cingulate cortex, and in subcortical structures such as the left caudate and left putamen. The goal of his presentation was, first, to provide an overview of the functional and structural changes induced by bilingualism (i.e., the neural consequences of bilingualism) and, second, to illustrate how these brain changes may eventually protect the human brain from cognitive decline during aging.

 

Karin Kukkonen (University of Oslo): Probability Designs: Literature and Predictive Processing

The June SMART lecture was presented by Karin Kukkonen, Associate Professor in Comparative Literature at the University of Oslo and Academy of Finland Postdoctoral Research Fellow. Her talk was introduced by Stephan Besser.

How predictable are narratives? If we consider the situation at the beginning of the fairy tale, it seems rather unlikely that Cinderella will marry the prince. Only once the narrative has run its course, after a number of rather unexpected events (such as the intervention of a fairy godmother), does the outcome of the tale become actually probable. At the same time, however, the generic predictions of the fairy tale would lead readers to expect – as a matter of course — that the heroine will marry her prince.

These issues of predictability and probability tie in with the cognitive approach known as “predictive processing”. Predictive processing suggests that the human mind works through predictive, probabilistic models of the world which are constantly revised in light of new observations, in a process called “Bayesian inference”. The approach has taken hold in neuroscience (in the work of Karl Friston and Chris Frith), in developmental psychology (in the work of Alison Gopnik) and in philosophy of mind (in the work of Jakob Hohwy and Andy Clark). Suppose that literature, too, trades in Bayesian inference. What would be the basic features of its designs on the probabilistic thinking of readers? And how can we distinguish between a sound guess about what is likely to happen next in a tale and the predictions that arise from generic expectations? In this talk, Karin Kukkonen outlined how a probabilistic approach to cognition can shed light on these complex constellations of prediction and probability in literary narrative.
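The Bayesian updating that predictive processing appeals to can be made concrete with the Cinderella example from above. The sketch below is purely illustrative: the prior and the likelihoods attached to each narrative event are invented numbers, not anything measured about readers.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule: P(H|E) = P(E|H) P(H) / P(E),
    where P(E) = P(E|H) P(H) + P(E|not H) P(not H)."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Hypothesis H: Cinderella marries the prince.
p = 0.05  # at the story's opening, the outcome looks unlikely

# Each narrative event is evidence that revises the reader's model:
# (event, P(event | H), P(event | not H)) -- all values invented.
for event, l_true, l_false in [
    ("fairy godmother intervenes", 0.9, 0.2),
    ("prince searches with the glass slipper", 0.95, 0.1),
]:
    p = bayes_update(p, l_true, l_false)

# After the "rather unexpected" events, the once-improbable ending
# has become the probable one.
```

The point of the toy is structural: an initially improbable outcome becomes probable only through the accumulation of evidence the narrative supplies, which is exactly the shape of the fairy-tale example.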

 

Frank Keller (University of Edinburgh): Performance in a Collaborative Search Task: The Role of Feedback and Alignment

The July SMART lecture was presented by Frank Keller, Professor of Computational Cognitive Science in the School of Informatics at the University of Edinburgh. He presented joint work with Moreno Coco and Rick Dale.

When people communicate, they synchronize a wide range of behaviors. This includes linguistic aspects of communication (e.g., which referential expressions and syntactic structures are used), as well as non-linguistic behaviors such as gaze and posture. This process of synchronization is called alignment and is assumed to be fundamental to successful communication. In this work, the authors questioned that assumption and investigated whether disalignment is a more successful strategy in some cases. More specifically, they hypothesized that alignment leads to task success only when communication is interactive.

Frank Keller presented results from a spot-the-difference task in which a speaker describes a naturalistic scene to a listener, who has to decide whether they are looking at the same scene. Interactivity was manipulated by allowing or disallowing feedback from the listener.
Using recurrence quantification analysis, they measured the alignment between the scan-patterns of the interlocutors, and the re-use of syntactic structures by the speaker. They found that interlocutors who could not exchange feedback aligned their gaze more, and that excessive gaze alignment correlated with decreased task success. When feedback was possible, it was utilized to better organize the interlocutors’ joint search strategy by diversifying visual attention. Furthermore, speakers’ re-use of syntactic structures increased task success, even in the no-feedback condition. These results suggest that alignment per se does not imply communicative success, as most models of dialogue assume. Rather, the effect of alignment depends on the type of alignment (gaze vs. syntactic structure), on the goals of the task, and on the presence of feedback.
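In its simplest form, measuring gaze alignment by cross-recurrence amounts to counting how often two interlocutors fixate the same scene region at similar times. The sketch below is a bare-bones version of that idea, assuming gaze has been coded as sequences of area-of-interest labels; the actual analysis in the talk is considerably richer (lagged recurrence profiles, recurrence structure), and the labels and window size here are invented.

```python
def cross_recurrence_rate(gaze_a, gaze_b, max_lag=2):
    """Fraction of time-point pairs (i, j) with |i - j| <= max_lag at
    which the two gaze streams fixate the same area of interest.
    Higher values indicate stronger gaze alignment."""
    matches = total = 0
    for i, a in enumerate(gaze_a):
        for j, b in enumerate(gaze_b):
            if abs(i - j) <= max_lag:
                total += 1
                matches += (a == b)
    return matches / total

# Invented example: gaze coded as area-of-interest labels per fixation.
speaker = ["door", "cat", "cat", "window", "door"]
listener = ["door", "cat", "window", "window", "cat"]
rate = cross_recurrence_rate(speaker, listener)
```

A rate near 1 would mean the two scan-patterns track each other closely; the finding reported above is that more gaze alignment of this kind is not always better for the task.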

 

Michael Wheeler (University of Stirling): The Reappearing Tool: Transparency, Smart Technology and Embodied Cognitive Science

The August SMART Lecture was presented by Michael Wheeler, Professor of Philosophy at the University of Stirling.

Skill-based accounts of intentionality, or of intelligence more generally, sometimes claim that expert performance with technology is characterized by a kind of disappearance of that technology from conscious experience, that is, by the transparency of the tools and equipment through which we sense and manipulate the world. This claim, which may be traced to phenomenological philosophers such as Heidegger, Merleau-Ponty and, more recently, Dreyfus, has been influential in and around embodied cognitive science. For example, in the debate over the nature and status of representational accounts of intelligence, it has been a driver for antirepresentationalism. In user interface design, the transparency of technology has often been adopted as a mark of good design. And such transparency has been advanced as necessary and/or sufficient for extended cognition (the situation in which the technology with which we couple genuinely counts as a constitutive part of our cognitive machinery). Through a series of reflections on concrete examples of our contemporary engagement with technology, Michael Wheeler argued that many of our intuitions that seem to be robust in the generic tool-use case run aground on the challenges posed by smart artefacts (those that come equipped with artificial-intelligence-based applications). This result should prompt a reassessment of the transparency idea in at least some cases of technology-involving embodied cognition. As Wheeler argues, this reassessment has significant implications for phenomenologically inspired skill-based accounts of intentionality/intelligence and also for the hypothesis of extended cognition.