Makiko Sadakata is assistant professor of music cognition at the University of Amsterdam and Radboud University Nijmegen. She studied Music Composition and Musicology at the Kyoto City University of Arts, and obtained a Ph.D. in Social Science at Radboud University Nijmegen. Her research focuses on auditory perceptual learning, music and language, and neurofeedback.

“Before I got interested in music cognition, I studied music composition. Then I became more and more interested in how people listen to sounds. As a music creator, I was really interested in what music does to people. For example, which music makes people happy? There was a course on music cognition at the university where I studied, taught by one of the leading researchers in the Japanese music cognition field, Kengo Ohgushi. The course was very inspiring, so I decided to study the topic further… and ended up switching my focus completely from music composition to music cognition. That was a big change! I needed to learn a lot of new things, like statistics, experimental methods, and psychology.”

“While I was doing my master’s in Japan, a researcher from Nijmegen, Peter Desain, visited my university for a sabbatical. He was working on categorical rhythm perception with Henkjan Honing, which I found very interesting. Back then, I was also keen on speaking English (which is not easy to do in Japan) and on getting to know a new culture. Combining all these “curiosities”, I strongly felt that I should talk to Peter Desain as much as I could. I think I kept asking him to join me for lunch! For my master’s thesis project, I proposed a cross-cultural study on rhythm production, and Peter kindly invited me to come to Nijmegen for data collection. I remember being very excited about this project.

When I first visited Radboud University Nijmegen, I was quite amazed by the research environment. Since Radboud University does not have a music department, music research was carried out as a collaboration between researchers from different disciplines, e.g., psychology, mathematics, informatics, and musicology. I also met Henkjan Honing during this first visit. I was amazed at how easily people from different fields communicate with each other in the Netherlands; at least, this has been my experience in both Nijmegen and Amsterdam. I had a really good time in the Netherlands and immediately thought that I would like to do my PhD with Peter Desain and Henkjan Honing.”

“The keywords I have been interested in for a long time are learning and gaining fluency in the processing of auditory signals. It does not really matter whether this is in speech or in music. Of course, speech and music are different, but the level I am dealing with at the moment is that of abstract categories (phonemes or notes), and I see a marked similarity in how we process them. The way in which we form sound categories depends on where you grew up and on what kind of language or music background you have. A lot of people are looking for differences between language and music, but I am more interested in looking for overlapping mechanisms.”

“For my PhD research, I investigated cultural influences on how people perceive, produce, and compose musical rhythms. I was interested in looking at any dimension of music, but I was in a group with expertise in rhythm, so I decided to focus on the temporal aspect of music. The Japanese language is known to be quite unique with regard to its speech rhythm: each syllable (mora) has a similar duration. What we found in our study is that this characteristic is reflected in music: Japanese music tends to be “flatter” with regard to its rhythm, using notes of similar durations. Japanese musicians also tend to perform rhythms in a flatter manner than Dutch musicians. We found a correlation between the rhythm of the language and the performance and perception strategies for musical rhythms. Of course, we cannot say too much about causality based on correlational evidence.

The same notated rhythm can be heard or performed in different ways. Towards the end of my PhD project, we tried to address how rhythm perception and production are associated, and how prior experience plays a role there. We applied a Bayesian formulation to explain the relationship between rhythm perception and production. It is not yet advanced enough to explain complex variability, such as cultural bias, but it has the potential to be extended in that direction.”
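The core of such a Bayesian formulation can be sketched in a few lines. The snippet below is a minimal illustration, not the model from the thesis: the rhythm categories, the Gaussian noise model, and the prior values are all hypothetical. A performed rhythm is treated as a vector of relative note durations, each score category predicts durations with some expressive noise, and a prior over categories can encode listening experience; culture-specific bias then enters through the prior.

```python
import numpy as np

# Hypothetical three-note rhythm categories, as duration ratios summing to 1.
CATEGORIES = {
    "1-1-1": np.array([1/3, 1/3, 1/3]),    # "flat" rhythm
    "2-1-1": np.array([0.50, 0.25, 0.25]),
    "1-2-1": np.array([0.25, 0.50, 0.25]),
}

def likelihood(performance, prototype, sigma=0.05):
    """P(performance | category): Gaussian expressive noise around the prototype."""
    return np.exp(-np.sum((performance - prototype) ** 2) / (2 * sigma ** 2))

def perceive(performance, prior):
    """Posterior over score-rhythm categories for a performed duration pattern."""
    post = {c: prior[c] * likelihood(performance, p) for c, p in CATEGORIES.items()}
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

# An ambiguous performance between "flat" and "2-1-1".
played = np.array([0.42, 0.29, 0.29])
uniform = {c: 1/3 for c in CATEGORIES}
flat_bias = {"1-1-1": 0.6, "2-1-1": 0.2, "1-2-1": 0.2}  # a culturally "flat" prior

print(perceive(played, uniform))    # leans toward "2-1-1"
print(perceive(played, flat_bias))  # the prior pulls the percept toward "1-1-1"
```

The same ambiguous performance is categorized differently under the two priors, which is the kind of experience-dependent perception the formulation aims to capture.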

“After my PhD, I looked into various aspects of learning. The first project I was involved in was “PracticeSpace”, which aimed at developing a visual feedback system for learning to play musical instruments. The system could be used to learn to play the piano and drums, using different visualizations of sounds. When you learn to play an instrument, teachers can give different instructions, for example: ‘play as if you are floating’ or ‘play as if you are singing’. There are many such metaphors, but their interpretations can vary from person to person. The idea of our project was to capture the acoustic signals associated with a performance and project them onto a visual display. In this way, people do not have to deal with potentially vague metaphors. But there is a big question of which visual features are best suited to represent acoustic signals. Learning is known to depend on the skill level of the learner: for beginners, presenting a lot of information may hinder learning, but an advanced player can incorporate more information. By testing advanced players, we found that people like to see note-by-note representations, such as the timing and loudness of each note, but in fact they improved most when presented with a higher-level interpretation of note relationships, such as “different feelings of drum grooves”.”
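As a rough illustration of the note-by-note display idea, the sketch below is invented rather than taken from PracticeSpace: the onset times and levels are made-up stand-ins for an onset detector’s output, plotted as each note’s inter-onset interval and loudness.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented stand-ins for an onset detector's output:
# note onset times (s) and per-note peak levels (dB).
onsets = np.array([0.00, 0.52, 1.01, 1.55, 2.02])
levels = np.array([-12.0, -15.5, -11.0, -18.0, -13.5])

iois = np.diff(onsets)  # inter-onset intervals: the note-by-note timing

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.stem(onsets[1:], iois)    # timing of each note relative to the previous one
ax1.set_ylabel("IOI (s)")
ax2.stem(onsets, levels)      # loudness of each note
ax2.set_ylabel("level (dB)")
ax2.set_xlabel("time (s)")
fig.suptitle("Note-by-note feedback (sketch)")
plt.show()
```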

“Recently, we submitted a paper about applying this visualization technology to speech therapy for Parkinson’s patients. These patients have difficulty perceiving aspects of their own speech, such as loudness and pitch height. Research has shown that presenting their pitch and loudness visually helps their speech therapy; however, the same study indicated that interpreting and integrating two separate graphs (of pitch and loudness) is sometimes hard for them. In collaboration with the Sint Maartenskliniek in Nijmegen, we tried to find the best way to combine these two dimensions into a single visual feedback display for this patient population. I find this project very exciting because it gives me the feeling that researchers can give something back to society.”
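One way to fuse the two dimensions into a single display, offered purely as a hypothetical sketch and not the design developed with the Sint Maartenskliniek, is to let pitch set the height of one trace while loudness sets its thickness, so the patient reads a single curve instead of two graphs.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

# Invented per-frame speech features: time (s), pitch (Hz), loudness (dB).
t = np.linspace(0, 2, 200)
pitch = 120 + 20 * np.sin(2 * np.pi * 0.8 * t)      # fake F0 contour
loud = 55 + 10 * np.sin(2 * np.pi * 0.5 * t + 1.0)  # fake intensity contour

# Build one trace whose height is pitch and whose thickness is loudness.
points = np.array([t, pitch]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
widths = np.interp(loud, (loud.min(), loud.max()), (0.5, 6.0))

fig, ax = plt.subplots()
ax.add_collection(LineCollection(segments, linewidths=widths[:-1]))
ax.set_xlim(t.min(), t.max())
ax.set_ylim(pitch.min() - 10, pitch.max() + 10)
ax.set_xlabel("time (s)")
ax.set_ylabel("pitch (Hz); line thickness = loudness")
plt.show()
```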

“I am currently involved in the EarOpener project, which tries to provide neurofeedback based on categorical perception and to see if this helps the learning of sound categories, such as phonemes or tone patterns. There have been many studies on auditory categorical perception and its neural correlates (e.g., event-related responses), and we are trying to apply this knowledge to design a new language-learning system. For this, we use a so-called oddball paradigm. This paradigm presents a long sequence of sounds consisting of, let’s say, ‘ra’ and ‘la’. When ‘ra’ is presented 85% of the time and ‘la’ only 15% of the time, our brain is known to produce a signal in response to the odd sound, ‘la’. This response is like a marker, showing that the brain has detected a mismatch in the sound sequence. The response is also known to depend on one’s categorical knowledge. For example, the difference between ‘ra’ and ‘la’ is very difficult for Japanese native speakers like myself. If we presented this sequence to a Japanese native listener, I would expect the mismatch response to be much smaller than that of a Dutch native listener, because Japanese does not make a distinction between r and l while Dutch does. We know that this type of response correlates with learning: while you are learning new categories, your mismatch response typically increases, and it sometimes even precedes your behavioral response. This means that you may not yet be able to consciously hear the difference between the sounds, but your brain already knows the distinction!”
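The stimulus side of an oddball paradigm is straightforward to sketch. The following hypothetical snippet builds a ‘ra’/‘la’ sequence with roughly an 85/15 standard-to-deviant ratio, avoiding back-to-back deviants, a common constraint in such designs.

```python
import random

def oddball_sequence(n_trials=400, p_deviant=0.15,
                     standard="ra", deviant="la", seed=1):
    """Build an oddball stimulus list: ~85% standards, ~15% deviants,
    never two deviants in a row."""
    rng = random.Random(seed)
    seq = [standard]                      # always start on a standard
    while len(seq) < n_trials:
        if seq[-1] != deviant and rng.random() < p_deviant:
            seq.append(deviant)
        else:
            seq.append(standard)
    return seq

seq = oddball_sequence()
print(seq[:20])
print("deviant proportion:", seq.count("la") / len(seq))
```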

“That is fascinating, and it makes sense too: our brain starts to pick up the signal before we can consciously deal with it. In the EarOpener project we applied classification methods to capture this brain response online. The responses are very small, but with a sophisticated classification algorithm we can tell the absence or presence of the detection signal with an accuracy of about 65–75%. The idea is to present this absence/presence information to learners as feedback during learning and see if it influences their learning behavior. This project is quite exploratory, but very fascinating, and we do have some evidence showing that neurofeedback has an impact on learners’ brain responses during learning.”
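Schematically, the online detection step looks like the sketch below. Everything here is synthetic: the data simulate a small deflection on half of the trials, and a generic logistic-regression classifier stands in for the project’s actual algorithm, which is not described in the interview. The point is only the shape of the pipeline: epoch the EEG around each stimulus, reduce it to features, and classify absence versus presence trial by trial.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 120   # single-trial epochs around stimulus onset

# Synthetic "EEG": noise everywhere, plus a small negative deflection
# on half of the trials (a crude stand-in for a mismatch response).
X = rng.normal(0.0, 1.0, (n_trials, n_samples))
y = np.zeros(n_trials, dtype=int)
y[: n_trials // 2] = 1           # trials where the response is present
X[y == 1, 40:80] -= 0.25         # the deflection, buried in noise

# Trial-by-trial absence/presence classification; chance level is 50%.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"detection accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```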

“I am trained as a classical musician – piano is my main instrument – and I went to the conservatory to study composition. I was very good at grasping musical structure right away: I could sight-read well, and I was quite active as an accompanist when I was in Japan. Now all that has faded a bit, which is a pity. Every now and then I try to revitalize my musical activity by planning little concerts with a friend. But research also brings me new musical knowledge, especially through teaching and research at the UvA and the ILLC. I have learned a lot of nice musical pieces from musicology students. I have also come to like electronic dance music, which I had never listened to before, through a research project with Aline Honingh. It is very nice that I am developing a new range of musical habits through my work.”