The September SMART Lecture was presented by Charles Yang (UPenn), with an introduction by Barend Beekhuizen (Leiden, ILLC/UvA)
It is easy, almost trivial, to claim the superiority of a combinatorial linguistic system over a finite repository of fixed expressions. The traditional problem has always been to explain how the finite-to-infinite transition might have taken place. For a gradualist approach to the origin of language, a common answer points to the recapitulation of phylogeny in ontogeny: the developmental stages in child language acquisition are said to trace similar steps in language evolution, as both processes are envisioned to move from formulaic expressions to increasingly abstract grammatical processes.
Charles Yang presented evidence suggesting that a formulaic stage in child language does not exist. Rather, language use is detectably compositional, employing abstract categories and rules from the earliest stage of language acquisition. A gradualist account of language evolution thus loses its starting point, never mind the transitional steps toward the terminal point, and we are left with the saltationist alternative: that the key component of Language (compositionality, a.k.a. "merge") evolved only once. Along the way, he discussed the implications of child language for comparative cognition and for linguistic theories in the age of big data.
The October SMART Lecture was presented by William Foley, Professor of Linguistics at the University of Sydney.
What is the relationship between the ontological categories of the world, the epistemological and cognitive categories of our minds, and the semantic and grammatical categories of our language? This fundamental question has been given essentially two types of answers, both foreshadowed in Plato's discussion in the Cratylus of the natural versus conventional basis of word meanings. Essentialism holds that the nature of the world imposes a necessary structure on our cognitive and linguistic categories. Nominalism, these days better known as constructivism, is the view that our linguistic categories, often taken as equivalent to cognitive categories, provide the scaffolding for our understanding of the world; the seemingly largely conventional nature of these linguistic categories is taken as particularly salient, hence the name constructivism. But just how essentialist or constructivist are linguistic categories like noun and verb? Language is an essential pan-human, species-specific trait: barring serious neurological pathology, any child born anywhere can learn any human language regardless of genetic background. It is a biological essential. But what children learn is a constructed human language, passed on in a particular community through socialization, with particular linguistic traits that constitute that language, the sum total of which is shared by no other language. It is a cultural construct. Language is therefore an obvious domain in which to study models of co-evolution: the interplay of biological constraints and cultural learning in human evolution, specifically here in linguistic evolution, more commonly known as language change over time.
In a co-evolutionary model of language change, investigating the extent and distribution of variation becomes the critical research question. We would expect the patterns of language to reflect both biocognitive constraints, due to limits on possible brain systems and on the oral-aural tract of spoken language, and cultural constraints, from valorized or rejected items or activities of cultural production. Linguistic diversity is firstly the result of selection in the cultural track, in which traits evolve and are taken up through the standard processes of hybridization, inheritance, and so on. Biocognitive constraints act as a further check on which variants produced by cultural evolution are selected. Language universals, then, would need to be viewed as biocognitive primes both for what the possible linguistic variants are and for which innovated linguistic variants produced by cultural evolution are favored in selection. The mechanism by which such biological constraints would most likely operate is cognition: e.g., the kinds of linguistic categories that can be learned and the relationships forged between them.
Distinct and contrasting word classes of nouns and verbs have widely been claimed to be among the most robust universal categories of language. This contrast is said to rest on a universal cognitive contrast between objects and events, but developmental psychological experiments argue for a marked asymmetry between the two: a salient cognitive category of object emerges very early, while that of event emerges much later. While it is a crosslinguistic fact that most languages in most areas of the world have clearly contrasting grammatical categories of noun and verb, in a few areas and language families this is not the case. Words are largely flexible, like rain in English, functioning as either noun or verb depending on context. In the Austronesian languages of insular Southeast Asia, rates of such flexibility are extraordinarily high, but this family has diversified and moved out of the region into areas like New Guinea, where Papuan languages with a robust noun-verb distinction are found. The rate of flexibility within Austronesian languages decays largely in proportion to distance from Southeast Asia, and is particularly low in coastal New Guinea, where intensive interaction with Papuan speakers has occurred over millennia. But not entirely: the rate of flexibility in Polynesia is again high. William Foley showed that there is a close correlation between flexibility rates and the presence of retained Southeast Asian genetic markers in the local genome, so that cultural factors of trade and, particularly, mate choice have played a central role in selection for or against this trait of flexibility. Further, not all types of ontological categories show equal rates of flexibility, even in languages with very high rates overall: words denoting natural kinds, like bear, show low rates, while those for artifacts, like spear, show high rates. This is strongly indicative of robust biocognitive universals in how certain categories can be construed.
The November SMART Lecture was presented by Professor Rineke Verbrugge, who has held the chair of Logic and Cognition at the University of Groningen's Institute of Artificial Intelligence (ALICE) since May 2009.
Computational agents often reason about other agents' beliefs, knowledge, goals and plans, based on formal logics. Usually they are capable of an arbitrary depth of recursion when reasoning about their interlocutors: "Alice believes that I believe that Alice believes that I wrote a novel under a pseudonym", and so on. People, however, lose track of such 'theory of mind' reasoning after a few levels. If software agents work together with human teammates, it is very important that they take into account the limits of their human counterparts' social cognition. Otherwise an international negotiation, for example, may fail even when it has the potential for a win-win solution.
In this talk Rineke Verbrugge discussed several strands of research related to recursive theory of mind: children's development, between the ages of 5 and 7, from first-order to second-order theory of mind, in story tasks and linguistic tasks; adults' limitations and strengths in higher-order social reasoning in games; and the question of why higher-order theory of mind may have evolved in the first place. To investigate these questions, her group takes logic into the lab, combining computational cognitive models, agent-based models, complexity analysis and empirical research with adults and children.
With panelists Yoad Winter (Utrecht), Paula Roncaglia-Denissen (Amsterdam), Henkjan Honing (Amsterdam) and Richard Kunert (MPI Nijmegen)