
‘Logic has tremendous predictive power’

An interview with Michiel van Lambalgen

“When I came to the university it was because I had read Wittgenstein, and in particular the Tractatus, which of course is full of logic. But then, for my second logic exam, I scored a two. I considered that to be such an insult that I believed it should be my field.”

“What really interested me at the time was philosophy and foundations of mathematics and there logic plays a big role. But when you do mathematics and logic in the Netherlands, you quickly get to know alternative ways of doing mathematics, like for instance intuitionism. The idea that there is only one logic, which is kind of sacrosanct, quickly disappears. That predisposed me to take a broader take on logic in cognitive science.”

“Later, when I was lecturer in AI and teaching introductory courses in logic to AI students, I read Vision by David Marr, a book that made a deep impression on me. He showed how you can apply mathematical methods to the study of visual perception. When you look at it carefully you see that he is applying some highly non-standard logic in order to describe visual processes.”

 

Autistic children

“Logic has a tremendous predictive power. Together with people in Nijmegen we studied autistic children. We had a prediction on autistic reasoning behavior, based on the logical structure of a particular reasoning task and the observation that on a certain non-verbal task they were known to score very differently from more typical people. Based on the logical commonality between the reasoning task and the non-verbal task we made a prediction of the scoring patterns that we would see when autistic children do that reasoning task. The predictions were marvelously confirmed.”

“In a second instance I did an EEG study with people in Nijmegen. I had previously worked with a linguist on how to formalize the semantics of tense and aspect in a particular logical calculus. One of the constructions that interested us was the English progressive: ‘I am going to the market’ and so on. The formalism predicted the following: if you have a sentence with a main clause like ‘the girl was writing a letter’ followed by a subordinate clause ‘when her friend spilled coffee on the paper’, then people would first, after the main clause, draw the inference that there was a finished letter. After the subordinate clause that conclusion would have to be retracted. We predicted that a special EEG signal would appear at the moments where the conclusion had to be retracted. That turned out to be the case.”

Van Lambalgen often refers to David Marr’s three levels of explanation: the computational level, where the computational task a cognitive agent must solve is described; the algorithmic level, where algorithms that can solve the task are studied; and the implementational level, where possible neural implementations of those algorithms are considered.

“With the logics that we have been studying the route from logic to neural implementation is fairly direct, unlike classical logic. Roughly speaking, the idea is that what the logic does is compute so-called discourse models. You model a discourse, a number of coherent sentences in the calculus and then the calculus allows you to compute a model which is a faithful representation of that discourse. Now that model turns out to be completely isomorphic to a stable state of a suitable neural network. Because of this correspondence we were confident that we would see something on the EEG.”

 

Peculiar beings

“The general picture of cognition is this: we are very peculiar beings. On the one hand we always lack information, but on the other hand we are continually subjected to a sensory bombardment out of which the essential elements have to be isolated. The solution for that paradoxical situation is “going beyond the information given” by means of coding systems. We apply higher level descriptions in order to organize our sensory data. Some of these higher level descriptions can be profitably taken to be logical formalisms, but not all of them.”

“To represent language semantically you need non-monotonic logic. When I say ‘I started the novel in December’, you might think that I am reading a book. When I subsequently say ‘The pupils listened very attentively’, you might now think that I am reading the novel to my pupils. But when I go on and say ‘However, some found dictation extremely boring’, it turns out that I was actually dictating the novel to my pupils. Here you see that because there is incomplete information in all these sentences – you have an aspectual verb which should take another verb as an argument, but that verb is missing – you run ahead of yourself and therefore make a non-monotonic inference which may have to be retracted.”

“There are Bayesians who also claim that their formalism does the job. I do think that non-monotonic logic performs better than the Bayesian formalism. There are several tasks related to planning for which Keith Stenning and I have a mathematical proof that Bayesian reasoning cannot work. More controversially, we also have some EEG data which seem to show that processing is much closer to what can be described by a non-monotonic logic than by Bayesian reasoning. For each of the two kinds of computation you can identify at which stages the computation uses the most resources and try to correlate that with the intensity of the EEG signal. It then comes out wrong for the Bayesian analysis.”

In recent years Van Lambalgen has been exploring connections between the 18th-century philosopher Immanuel Kant and 21st-century cognitive science.

“The general idea that I sketched earlier — that we need to go beyond the information given in order to come up with an account of the world around us — is very Kantian. There have to be structures in place in our mind in order to interpret sensory data. Kant was very much aware that our cognition is very partial; we always view objects from a certain perspective and have to combine the information from the various perspectives into one coherent picture. In modern cognitive science that goes by the name of the binding problem.”

 

Terribly narrow-minded

“What spurred my interest was Kant’s logic. This part of Kant’s work is mostly ridiculed as being, as someone said, ‘terribly narrow-minded and mathematically trivial’. Stenning and I had written a book called Human Reasoning and Cognitive Science where we roughly came to the conclusion that logic is not analytic but synthetic a priori. I wanted to look at what Kant said about the status of logic. Then I noticed that the traditional view of what his logic is, and of why it is trivial, is totally off the wall. The reason is that his logic is interpreted in the wrong semantics, the semantics of classical logic, whereas you should interpret it in a semantics which takes into account that objects are always partial and that you can always gain more knowledge, unlike the objects of classical mathematics, which classical logic was designed for.”

“When you look at that semantics then the logic takes on a very different aspect. It turns out to be a logic that was discovered 25 years ago and goes by the name of geometric logic, which is used in topology and, as the name implies, also in geometry. It is the logic in which you can formulate the axioms of Euclidean geometry. The whole thing turned out to be much deeper than traditionally thought. It allows you to read the Critique of Pure Reason as Kant intended it, namely starting from a logic and deriving from that logic the categories – the structures that we need in order to interpret the data. My PhD student Dora Achourioti and I are now writing this down. We intend to give a re-reading of the Critique as Kant meant it, but then in a heavily formalized version.”

“The interest for cognitive science is, I believe, that Kant’s picture of the mind is a good way to think about the mind even in these days. It puts so much emphasis on the binding problem, his notion of synthesis. He has a very subtle analysis of the various domains of cognition where binding occurs and of what the prerequisites are for it to be successful. What I find personally attractive is that you can describe the binding problem in logical terms that have been well studied in other branches of mathematics. One bonus that you get for free is that it has often been said that Kant is a constructivist. He does not believe in classical mathematics – although he did not have the notion – but he believes in construction. The logic that comes out is precisely the logic that is suited for those constructions.”

“For Hume causality is constant conjunction and a habit formed on that basis. For Kant it has to do with judgment. You impose a rule upon the phenomena which guides your expectations. That seems almost like what Hume says but there is a subtle difference. Kant would say of Hume that he leaves the mechanism of association unexplained. According to Kant, rules and their top-down inference establish the binding. Hume would say that this association is merely Hebbian learning: neurons that fire together wire together.”

“There are some studies in developmental psychology which have a bearing on this. One rule which seems to play an important role in infant cognition is that causality arises through contact. If infants see one object impacting upon another, then that is taken to be an instance of a rule which says something like: if there is contact there is causality, and therefore binding of two events. There are stronger forms of these views. Some people claim that we have built-in structures called ‘Bayesian nets’ or ‘causal graphs’ which form the schema in which to represent all our causal interactions. The schema is given and you need some learning to fill out the parameters. This is a combination of a Kantian and a Humean way of looking at things, not purely Humean. The bottom line of this is that Kant would say: ‘association is all well and good, but what gets associated to what, and why?’”

 

Hopes and fears

“I have hope and fears for cognitive science. My hope is that it develops in a more model-based direction and my fear is that it does not. Cognitive science has a tremendous appeal because of the pretty pictures it generates, to put it bluntly. It appears as if research can locate particular brain functions in particular areas of the brain. I tend to think that, at least in my field, such claims are highly questionable and that without good models, mathematical or logical, you cannot interpret these data.”