Slips of the hand
An interview with Roland Pfau
Roland Pfau received his MA (1995) as well as his PhD degree (2001) from the University of Frankfurt. He joined the University of Amsterdam in May 2001 as assistant professor in General Linguistics. In his research, he focuses on the morphological and syntactic structure of sign languages as well as on processes of grammaticalization, often taking a typological and cross-linguistic perspective on these issues.
“There are a couple of things I remember I wanted to be when I was young. Amongst my earliest recollections, when I was like 5 years old, are garbage collector and skiing instructor. A few years later, I remember that I wanted to become a teacher. Then, in my teenage years, my dream was to be a shoemaker. I have a bit of a shoe fetish, and I always thought that it was so difficult for men to find nice shoes. I mean, nice shoes that are not ridiculously fancy, just a decent but slightly original shoe. Shoemaker was actually the last thing I wanted to be before things got serious and it turned out that I would proceed towards an academic career.”
“During my PhD, which was on slips of the tongue, my professor got interested in sign language. She had been approached by the state government of Hessen in the context of an ongoing debate in the state parliament about acknowledging German Sign Language as a minority language. She had worked on sign language a little bit before, but following this request, she got enthusiastic about the topic and found some money to offer sign language classes. A couple of colleagues and I took these classes, and at some point we decided to set up a research group in order to study the structure of German Sign Language. This really had an impact on my research. I still think that the topic of my PhD is highly interesting, but what started as a hobby, the sign language research, soon really became the number one topic for me.”
“Later on, as a member of the research group in Frankfurt, I also did some work on speech errors in sign language. Slips of the hand, as we call them, are very intriguing and highly informative when it comes to the nature of language and how it is processed. Just like words, signs consist of smaller parts, such as a handshape, a location, and a movement. The smaller parts of a spoken language, the consonants and vowels, are sequentially organized. In a sign language, however, at least to some extent, this is different. If you have a location and a handshape, then it is certainly not the case that you first articulate the handshape, and then the location. The question is whether these building blocks could still be separately affected in speech errors, despite their simultaneous nature. And the answer is yes; although these sub-lexical elements are organized differently, they are treated in a similar way. Similar to a consonant exchange like ‘redding wing’ instead of ‘wedding ring’, you can also get a handshape exchange, and just as in spoken languages, a slip may result in an existing sign, or in a non-existing but possible sign. However, slips of the hand, just like slips of the tongue, (almost) never result in an impossible sign, one that would violate the phonotactic constraints of the language. Thus, when it comes to error types and error units, sign languages behave pretty much the same as spoken languages.”
“When you tell people what you do, sign language always triggers reactions. ‘Oh that is so nice, with sign language everyone can communicate in the same language’. Then, of course, I cannot hold back, and I have to point out that this is not true, as there are many different sign languages. Yet, in general, in my private life, I try not to annoy people too much with longish lectures about linguistic stuff.”
“In the early years of sign language research, the 1970s and 80s, linguists tended to stress the similarities between sign languages and spoken languages. They did that because it was important at the time to demonstrate that sign languages are fully-fledged natural languages, and in order to do so, it was crucial to show that they behave similarly to spoken languages when it comes to structural organization. Once this had been established, linguists began to pay more attention to the differences between sign languages and spoken languages. Most obviously, sign languages and spoken languages use different modalities of signal transmission: the visual-gestural modality for sign language versus the oral-auditive modality for spoken language.
The difference in modality leads to three important structural differences. The first one has to do with the lexicon: signs are more likely to be iconic than words. It is simply much easier to express an object or an action in an iconic way with your hands than with your voice. The second difference is phonological in nature. For the articulation of signs, you have two identical articulators: your two hands. There is nothing comparable in spoken language. The availability of two articulators would in principle allow a signer to use them simultaneously and fully independently of each other in the articulation of signs. Interestingly, however, it has been shown that this full range of possibilities is not exploited in sign languages. For example, if both hands move in a two-handed sign, they have to be specified for the same handshape. This phonological rule has been shown to constrain the form of signs in all sign languages investigated to date. There are exceptions, but then we are not dealing with lexical forms, but rather with morphologically complex forms. For example, if my right hand is flat, representing a car, and the left one has the index finger extended, representing a person, then I can move them in parallel, yielding a morphologically complex expression.
The third difference is the use of the space in front of you. This space is used in most, if not all, sign languages for grammatical purposes, for instance, for the realization of pronouns. If I want to talk about my brother, who is not present, I would sign ‘brother’, and then associate an arbitrary locus in the signing space with my brother, by pointing. Later in the discourse, by pointing to the same location, I can refer back to the brother; that is, the pointing is interpreted as a pronoun meaning ‘he’. Once again, the abstract phenomenon is not different from spoken language pronominalization, but the surface form is clearly different, because the spatial strategy allows for a more concrete way of expressing this grammatical category.”
“When researchers first started to compare sign language structures, that is, when sign language typology was put on the research agenda in the late 1990s, it was really as if a light went on. Before, everyone had been investigating his or her own sign language, but putting the patterns into a bigger picture is of course super fascinating. If you find structural similarities among sign languages, it is interesting, and if you find differences, it is maybe even more exciting. It is a bit of a research philosophy of mine that I strive to put whatever I write about into a typological perspective.”
“Most of my research focuses on morphosyntax and syntax. To give a few examples, I have done work on agreement in sign languages and on question formation in Indo-Pakistani Sign Language. One of my favorite topics for many years has been negation. Sign language negation is intriguing, I think, because it involves a manual element, a manual negative particle, but also non-manuals. In this context, the non-manual element is a headshake, like the one we also use in spoken language as a co-speech gesture. It has been demonstrated that in sign language, the headshake is really an integral part of the grammar of the language, not just a gesture. While the distribution of headshakes accompanying spoken utterances is quite random, in sign languages it is systematic and rule-governed, and it differs from sign language to sign language.”
“Of course, I want to contribute to the body of knowledge about sign language. But ultimately, I hope that I can also give something back to the deaf community. Sign languages are threatened languages, and I therefore find it very important to make an effort to share research findings with the community. In a way, I do this as editor of the journal Sign Language & Linguistics, where we publish papers on sign language structure. Something that I have not done so much in the past, but that I hope to accomplish in a research project that started in February, is to also publish in a format that is more accessible to deaf people – not just scientific articles, but also papers for a lay audience, hopefully in sign language as well. Actually, in this project, which investigates argument structure in sign languages, we have put money in the budget to produce summaries in sign language, which will be made available online.”