Dieuwke Hupkes is a PhD student at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam. After finishing her bachelor’s degree in Physics and Astrophysics, she switched to the Master of Logic. We talked to her about her research career, current projects, and future plans.
D. Hupkes

From physics to logic

“I have always been very interested in language and logic-like things. One of the main reasons to study physics was that I thought it would be very hard and that I would learn a lot from it. From there, I could maybe go somewhere else. At some point I discovered that, even though I was a good student, the thing I was good at was not really physics but understanding the underlying structure of problems. And then, at the right moment, I met someone who was doing logic, and thought that that would be perfect; logic is just about the structure of problems. But initially the switch was not very easy. Physics involves a lot of thinking that you may classify as logical but that has absolutely nothing to do with logic. At the beginning of my master’s, I remember being in classes and not understanding what a negation was: what do you mean, not p? In physics, we “prove” things all the time by rewriting equations, but this is really a very different way of proving things.”


Learning to program

“Now I’m not really doing logic anymore, but rather artificial intelligence and computational linguistics, which is exactly what I’m interested in. That I managed to make this switch so smoothly can for a very large part be attributed to Jelle Zuidema and Khalil Sima’an. When I started my first course on language processing, which is also part of the AI master’s, we had to do a final project that involved implementing a syntactic parser. I partnered up with someone who could program very well, but after finishing the project, I felt guilty and went to the professor of the course, Khalil Sima’an. I told him that although I believed I understood the underlying theory, I hadn’t actually done any of the programming. Khalil said that he did not mind, as long as I understood the general principle and promised to implement a parser myself at some later moment, which I did later that year. This was the first step toward my shift to AI and computational linguistics. In a later course, Unsupervised Language Learning, taught by Jelle Zuidema, I ran into a similar problem. At that point I had caught up with programming but still was not very fluent with it. I found the course very interesting but was not able to do the first assignment. I sent Jelle an email to tell him it would be better if I dropped the course. He asked for a meeting and convinced me to continue the course. From then on, I picked up and did all the language processing courses, which is now really my field of expertise. I’m very grateful that Jelle and Khalil saw the things that I could do and did not just focus on the things I couldn’t.”


Language

“For as long as I can remember, I have been interested in language, and I have always been intrigued by its complexity. Still, everyone can acquire it. For instance, I can speak with whomever I want, and we can learn to do so very quickly. I can make many mistakes and you still understand what I mean. We all have similar notions of what is grammatical and what is not. I find language one of the biggest mysteries of humans, and it relates to almost everything. Language describes how we think, it reflects some properties of our brains, and it reflects what the world around us looks like. If the world were completely different, our languages would be different. For me, it encompasses it all.”


Modeling as a tool

“I believe in understanding things by trying to model them. For me, being able to make a model that does what you want means that you need to understand what you want. You need to understand the underlying problem. In that sense, I see modeling as a tool. Investigating the brain to understand language is interesting, and I hope to integrate the knowledge that we gain from that into a model at some point. All these different ways of looking at a similar problem should come together. We can evaluate new models based on what people know about linguistics, or we can use psycholinguistics to understand whether they are behaving well. Hopefully the fields will be combined. Usually it is not easy to integrate these findings, because here I think people sometimes do speak different languages. Initiatives like SMART can help bring the different communities together, by making sure they are not staying on their islands and by helping to build bridges. I believe we are on the right track. The further everyone gets with understanding what types of problems they can and cannot answer with their approaches, the easier it becomes to communicate.”


Current research

“I’m employed within the Language in Interaction project. Interaction refers to the aim of making different disciplines interact. The title of my project is Building a neural parser, but when I started I quickly came to the conclusion that it may be a bit overambitious to build a neural network-based parser if we don’t yet understand what neural models are doing. I spent quite some time trying to get a better understanding of how these models can encode different things. They need to represent compositionality and hierarchy like we have in language, and this needs to be represented in the hidden state space. This was very instructive, because now we have a toolbox to gain a better understanding of what models are doing. At the moment, I’m focusing on trying to understand how we can give these models the right learning biases to generalize in more human-like ways. It’s hard to miss that neural networks these days appear to be able to do almost everything. However, when you look more closely, it seems that they are very good at many things, but they don’t do them in the same way humans do. From an applied perspective, this is perhaps not so much of a problem. It becomes challenging if you want them to do human tasks and they do them in a completely different way. As I said before, I like modeling as a tool to add to the understanding of the underlying problem. If these models generalize in a different way than humans do, they are also not as useful as explanatory models. So now I’m trying to understand how we can make them more human-like, which is very well in line with the original title of the project.”
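The “toolbox” she mentions refers, in general terms, to techniques for inspecting what information a network encodes in its hidden states. As a purely illustrative sketch of that idea, not Hupkes’ actual code or experimental setup, and with all names, data, and dimensions invented for the example, one can run sequences through a recurrent network, collect its hidden states, and train a small linear probe to test whether a structural property such as bracket nesting depth can be read out of them:

```python
# Illustrative sketch only: probe the hidden states of a (here untrained,
# randomly initialised) simple RNN for a structural property of the input.
# In practice one would probe a trained network; the random network here
# just keeps the example self-contained.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy vocabulary: opening and closing brackets, encoded as one-hot vectors.
VOCAB = {"(": 0, ")": 1}
INPUT_DIM, HIDDEN_DIM = len(VOCAB), 32

# Random simple-RNN weights, standing in for a trained model.
W_in = rng.normal(scale=0.5, size=(HIDDEN_DIM, INPUT_DIM))
W_rec = rng.normal(scale=0.5, size=(HIDDEN_DIM, HIDDEN_DIM))

def hidden_states(sequence):
    """Return the hidden state after each token of a bracket sequence."""
    h = np.zeros(HIDDEN_DIM)
    states = []
    for token in sequence:
        x = np.zeros(INPUT_DIM)
        x[VOCAB[token]] = 1.0
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h.copy())
    return states

def random_bracket_sequence(length=20):
    """Generate a random (not necessarily balanced) bracket string."""
    return "".join(rng.choice(["(", ")"], size=length))

# Collect (hidden state, nesting depth) pairs from many random sequences.
X, y = [], []
for _ in range(200):
    seq = random_bracket_sequence()
    depth = 0
    for token, h in zip(seq, hidden_states(seq)):
        depth += 1 if token == "(" else -1
        X.append(h)
        y.append(max(min(depth, 3), -3))  # clip depth to a few classes

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0
)

# The probe: a simple linear classifier trained on the hidden states.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy on held-out states: {probe.score(X_test, y_test):.2f}")
```

If such a probe decodes the property well above chance, that suggests the information is linearly available in the hidden states; comparing probe accuracy on a trained network against a random baseline like the one above is one common way to make that comparison concrete.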


Future plans

“Compositionality, hierarchical compositionality, is a very fundamental issue. If you understand how to do that in a neural network, it will solve many of the problems that neural networks are having in many fields right now. With regard to the practical application of my research, this is at the very core of what we should be understanding. It does not only solve issues in language, although there are many applications imaginable in language too; maybe Siri will understand you better, or you can interact with a computer instead of a human. This is one of the advantages of the field I’m working in. I feel very lucky to be working in a dynamic field that is not only interesting from a theoretical perspective, but also in a more applied setting. Understanding how to bias neural networks to find solutions that generalize in a more human-like way is a really big challenge and it would be great if I could figure out how to do that. I’m now making little steps towards that goal. Doing research allows me to interact with the things I want to know all the time and this way, it can fulfill some kind of intrinsic goal that I have.”