Tuesday, December 20, 2011

A case of beat deafness?

Below is the full episode of the Dutch Labyrint TV program on music and neuroscience, broadcast on 14 December 2011. Unfortunately there are no English subtitles, but parts of it are spoken in English.



Thursday, December 15, 2011

Want to know more about beat deafness? [Dutch]

Mathieu: the man without rhythm.


Olaf Oudheusden (the director of ‘De man zonder ritme’) and a whole team of enthusiastic programme makers, including Wiesje Kuijpers and Eef Grob, put a great deal of energy into the episode of Labyrint about music and our brains that was broadcast last night on Nederland 2. In my opinion, it shows: watch the full episode at www.labyrint.nl. For a report of the online follow-up discussion, see www.labyrint.nl.



A lot of material is inevitably lost in the editing. Hence, below, a fragment from the teleconference of a month or so ago between the CSCA in Amsterdam and BRAMS in Montréal, which offers additional insight into Mathieu’s lack of beat perception:



Sunday, December 11, 2011

A case of congenital beat deafness? [Part 2]

Isabelle Peretz, co-director of the International Laboratory for Brain, Music and Sound Research (BRAMS), told me about Mathieu during a workshop at the Université Libre de Bruxelles in November 2009. She was very excited, and I couldn’t help but share her enthusiasm: she was pretty sure she had found a beat-deaf person.
'Mathieu was discovered through a recruitment of subjects who felt they could not keep the beat in music, such as in clapping in time at a concert or dancing in a club. Mathieu was the only clear-cut case among volunteers who reported these problems. Despite a lifelong love of music and dancing, and musical training including lessons over several years in various instruments, voice, dance and choreography, Mathieu complained that he was unable to find the beat in music. Participation in music and dance activities, while pleasurable, had been difficult for him.' (from Phillips-Silver et al., 2011)
About one year later her group published a journal paper presenting some behavioral evidence that Mathieu was a case of congenital beat deafness.

The questions posted in a blog entry just after the publication of that study resulted in a collaboration in which, in addition to behavioral methods, direct electrophysiological methods were also used. Pascale Lidji (also associated with BRAMS) ran an EEG/ERP experiment, modeled after our earlier Amsterdam experiments, to directly probe Mathieu’s apparent beat deafness.

Last week we had a teleconference discussing the first experimental results (filmed by a Dutch TV crew following our work). These suggest that Mathieu’s brain did pick up the beat, but that it did not reach his conscious perception, as several behavioral experiments confirmed. Intriguing, to say the least.

See below for some fragments from the teleconference:


And the trailer announcing the TV program to be broadcast next week:


For more information, see the Labyrint TV website.

N.B. There will be a live-broadcast follow-up discussion that can be viewed at www.labyrint.nl.

Phillips-Silver, J., Toiviainen, P., Gosselin, N., Piché, O., Nozaradan, S., Palmer, C., & Peretz, I. (2011). Born to dance but beat deaf: A new form of congenital amusia. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2011.02.002

Saturday, December 10, 2011

Is musicality innate or learned? [Dutch]

In the NTR series Pavlov, eight well-known Dutch people each put a question to science. In this episode, Fleur Bouwer (UvA) tests the musicality of Lavinia Meijer.



Want to take the listening test yourself? It takes about twenty minutes. Click here.

For the full episode, see the Pavlov website.

Tuesday, December 06, 2011

Which brain areas are involved in listening?

It is a persistent myth that music is processed solely in the right hemisphere. This week yet another study showed that, even when processing is restricted to listening alone, virtually the whole brain is involved.

A Finnish research group led by Petri Toiviainen found that music listening recruits not only the auditory areas of the brain but also large-scale neural networks. They showed that the processing of musical pulse recruits motor areas of the brain, supporting the idea that music and movement are closely intertwined. Limbic areas, known to be associated with emotions, were found to be involved in rhythm and tonality processing. The processing of timbre was associated with activations in the so-called default mode network, which is assumed to be associated with mind-wandering and creativity.

Adapted from Stewart et al. (2009) Oxford Handbook of Music Psychology. 

As noted, this study is not alone in this respect. In a recent chapter Lauren Stewart and colleagues made a similar claim based on a review of a vast amount of literature. In the figure above (redrawn from the original), the circles indicate areas on whose involvement more than 50% of the existing literature agrees. (N.B. it is good to realize that these areas are actually part of whole networks, not just single locations.) And here again, if you look at the brain networks involved in listening, you’ll notice that virtually the whole brain is involved.

Alluri, V., Toiviainen, P., Jääskeläinen, I., Glerean, E., Sams, M., & Brattico, E. (2011). Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. NeuroImage. DOI: 10.1016/j.neuroimage.2011.11.019

Stewart, L., von Kriegstein, K., Warren, J. D., & Griffiths, T. D. (2006). Music and the brain: disorders of musical listening. Brain, 129 (Pt 10), 2533-2553. PMID: 16845129

Wednesday, November 30, 2011

Is beat induction species-specific? [Part 2]

It is a slowly but steadily unfolding story, with more and more evidence in its support: the story of which other species share with us beat induction, a skill argued to be fundamental to music.

The ability to synchronize to the beat of the music has been demonstrated in several parrot species and, apparently, one elephant species, supporting the vocal learning and rhythmic synchronization hypothesis, which posits that vocal learning provides a neurobiological foundation for auditory–motor entrainment.

While earlier experiments with parrots and related animals were criticized mainly for their relatively informal setup (e.g. using existing YouTube videos or analyzing home-made videos), a few weeks ago an elegant and systematic study appeared in Scientific Reports in which budgerigars (Melopsittacus undulatus), a vocal-learning parrot species, were trained to synchronize to a metronome. This study can be considered an important first step towards understanding the timing-control mechanism in vocal learners.



Video example of budgerigar doing a tapping task (Source).

Unfortunately, the birds were trained only to a (visual and auditory) metronome, not to a rhythmically varying acoustic signal (read: music), so we are still not sure this is indeed a case of beat induction. And is the bird in the video not simply reacting, rather than anticipating (predicting with a negative phase), as humans do?
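One standard way to tell the two apart, given raw tap times, is the sign of the mean asynchrony: human synchronizers typically tap a few tens of milliseconds ahead of the beat, while a purely reactive tapper lags it by at least a simple reaction time. A minimal sketch, with invented tap data for illustration:

```python
def mean_asynchrony(beat_onsets, tap_onsets):
    """Mean signed difference (ms) between each tap and its nearest beat.
    Negative values indicate anticipation; positive values, reaction."""
    asynchronies = [min((tap - beat for beat in beat_onsets), key=abs)
                    for tap in tap_onsets]
    return sum(asynchronies) / len(asynchronies)

beats = [0, 500, 1000, 1500, 2000]          # metronome onsets (ms)
anticipating = [-30, 470, 985, 1480, 1975]  # taps slightly ahead of the beat
reacting = [160, 655, 1150, 1660, 2155]     # taps lagging by a reaction time

print(mean_asynchrony(beats, anticipating))  # negative: predictive tapping
print(mean_asynchrony(beats, reacting))      # positive: reactive tapping
```

Whether the budgerigars' asynchronies are negative (as in humans) is exactly what the published tapping data would have to show.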

Also, to provide real support for the vocal learning (or vocal mimicking) hypothesis, additional experiments are still needed, most notably an experiment testing whether related species that are not vocal learners, such as doves, are incapable of the learning that the budgerigars show. (I know that at least one cognitive biologist is willing to take up the gauntlet :-)

Hasegawa, A., Okanoya, K., Hasegawa, T., & Seki, Y. (2011). Rhythmic synchronization tapping to an audio–visual metronome in budgerigars. Scientific Reports, 1. DOI: 10.1038/srep00120

Saturday, November 26, 2011

TEDxAmsterdam: What makes us musical animals?




Research and references mentioned in the talk can be found in the book cited below. More video reports can be found at the TEDxChannel.

Honing, H. (2011). Musical Cognition: A Science of Listening. New Brunswick, N.J.: Transaction Publishers. ISBN 978-1-4128-4228-0.

Zarco, W., Merchant, H., Prado, L., & Mendez, J. (2009). Subsecond timing in primates: Comparison of interval production between human subjects and rhesus monkeys. Journal of Neurophysiology, 102 (6), 3191-3202. DOI: 10.1152/jn.00066.2009


Wednesday, November 09, 2011

What is the role of consciousness in auditory perception?

István Winkler
On Tuesday 15 November 2011 prof. dr István Winkler (Hungarian Academy of Sciences) will give the monthly CSCA lecture in Amsterdam. He is visiting the Music Cognition Group for two days.

Winkler will talk about his recent research on auditory perception and its role and functioning in the newborn brain. He will argue that the representation of a sound organization in the brain is a coalition of auditory regularity representations that produce compatible predictions for the continuation of the sound input. Competition between alternative sound organizations relies on comparing the regularity representations in terms of how reliably they predict incoming sounds and how much of the total variance of the acoustic input they jointly explain. Results obtained in perceptual studies using the auditory streaming paradigm will be interpreted in support of the hypothesis that regularity representations underlie auditory stream segregation.
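The competition between sound organizations can be caricatured in a few lines of code: each candidate organization is a regularity representation that predicts the next sound, and the organization whose predictions are more reliable wins. Everything below (the ABA- tone sequence and both toy predictors) is my own invented illustration, not Winkler's actual model:

```python
# Toy illustration: alternative organizations of an ABA- streaming sequence
# compete on how reliably their regularity representations predict input.
sequence = ['A', 'B', 'A', 'A', 'B', 'A', 'A', 'B', 'A']

def predict_one_stream(history):
    # Integrated organization: hears one stream, expects strict alternation.
    return 'B' if history[-1] == 'A' else 'A'

def predict_two_streams(history):
    # Segregated organization: a fast A-stream against a slower B-stream,
    # which amounts to predicting the repeating A-B-A cycle by position.
    cycle = ['A', 'B', 'A']
    return cycle[len(history) % 3]

def reliability(predictor, sequence):
    # Fraction of tones (after the first) that the organization predicts.
    hits = sum(predictor(sequence[:i]) == tone
               for i, tone in enumerate(sequence) if i > 0)
    return hits / (len(sequence) - 1)

print(reliability(predict_one_stream, sequence))   # alternation is often wrong
print(reliability(predict_two_streams, sequence))  # the cycle-based one wins
```

In this caricature the segregated organization out-predicts the integrated one, so it would win the competition and determine what is heard.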

Furthermore, Winkler will argue that the same regularity representations are involved in the deviance-detection process reflected by the mismatch negativity (MMN) event-related potential (ERP).

Finally, based on the hypothesized link between auditory scene analysis and deviance detection, Winkler will propose a functional model of sound organization and discuss how it can be implemented in a computational model.


For more information (time and location), see the CSCA website.

Näätänen, R., Kujala, T., & Winkler, I. (2011). Auditory processing that leads to conscious perception: a unique window to central auditory processing opened by the mismatch negativity and related responses. Psychophysiology, 48 (1), 4-22. PMID: 20880261

Winkler, I., Denham, S. L., & Nelken, I. (2009). Modeling the auditory scene: predictive regularity representations and perceptual objects. Trends in Cognitive Sciences, 13 (12), 532-540. PMID: 19828357

Wednesday, November 02, 2011

Is beat induction species-specific? [Part 1]

Beat induction (BI) is the cognitive skill that allows us to hear a regular pulse in music to which we can then synchronize. Perceiving this regularity in music allows us to dance and make music together. As such it can be considered a fundamental musical trait that, arguably, played a decisive role in the origin of music (see also earlier entries of this blog). Furthermore, BI has been argued to be a spontaneously developing, domain-specific and species-specific skill.

With regard to the first aspect, recent studies with infants and newborns provide some evidence for such an early bias (Honing et al., 2009). With regard to the second aspect, convincing evidence is still lacking, although it was recently argued that BI does not play a role (or is even avoided) in spoken language (Patel, 2008). And with regard to the last aspect, it was recently suggested that we might share BI with a select group of bird species (Fitch, 2009) but not with more closely related species such as nonhuman primates (Zarco et al., 2009). This is surprising if one assumes a close mapping between specific genotypes and specific cognitive traits. However, more and more studies show that genetically distantly related species can exhibit similar cognitive skills, and this offers a rich basis for comparative studies of this specific cognitive function.

Most animal studies have used behavioral methods to probe the presence (or absence) of BI, such as tapping tasks or measuring head bobs. It might well be that, when more direct electrophysiological measures are used (such as analogs of the MMN), nonhuman primates turn out to show BI as well.

It is this hypothesis that is the topic of a new and exciting collaboration between our group and that of Hugo Merchant at the Institute of Neurobiology in Querétaro, Mexico. This week we started a series of experiments with rhesus macaques using the same paradigm we used in our earlier newborn studies.



Fitch, W. (2009). Biology of music: Another one bites the dust. Current Biology, 19 (10). DOI: 10.1016/j.cub.2009.04.004

Honing, H., Ladinig, O., Háden, G. P., & Winkler, I. (2009). Is beat induction innate or learned? Probing emergent meter perception in adults and newborns using event-related brain potentials. Annals of the New York Academy of Sciences, 1169, 93-96. PMID: 19673760

Patel, A. D. (2008). Music, Language, and the Brain. Oxford: Oxford University Press.

Zarco, W., Merchant, H., Prado, L., & Mendez, J. (2009). Subsecond timing in primates: Comparison of interval production between human subjects and rhesus monkeys. Journal of Neurophysiology, 102 (6), 3191-3202. DOI: 10.1152/jn.00066.2009

Friday, October 28, 2011

Who won the Creatieve Geest Prijs? [Dutch]

Researcher Shanti Ganesh of Radboud University is the winner of the 2011 Creatieve Geest Prijs (Creative Mind Prize), a new prize at the UvA initiated by the Freek & Hella de Jonge Foundation and the Cognitive Science Center Amsterdam (CSCA). Ganesh receives the prize for her research proposal ‘Can creativity switch domains?’ With the €10,000 prize she can develop her proposal further, supported by the UvA research priority area Brain and Cognitive Sciences.



By establishing the Creatieve Geest Prijs, Freek and Hella de Jonge want to stimulate research into creativity and the brain activity involved in it, such as the perceptual abilities of painters, the spatial insight of architects, and the associative talent of comedians.

More information about the 2011 winner.
More information about the prize.

Tuesday, October 11, 2011

Interested in human nature?

During a partner meeting yesterday evening at the residence of the Amsterdam municipality, most of the speaker list for the 2011 edition of TEDxAmsterdam was released. Speakers and audience will explore the theme ‘Human Nature’ in an expedition to find out what it means to be human in a society increasingly dominated by technology and economic issues.

More than ever, there is a need for a vision based on human virtues, one that deserves the mark of ‘practical wisdom’. Dutch speakers confirmed for TEDxAmsterdam 2011 include prof. dr Eveline Crone (Leiden University), prof. dr Henkjan Honing (University of Amsterdam), Chief of the Netherlands Defence Staff Peter van Uhm, dr David Lentink (bird and swift flight expert), and journalist Joris Luyendijk (who observes bankers in the City of London). Het Nationale Ballet will perform the opening act.

For the full speaker list see here.

Crone, E., & van der Molen, M. (2004). Developmental changes in real life decision making: Performance on a gambling task previously shown to depend on the ventromedial prefrontal cortex. Developmental Neuropsychology, 25 (3), 251-279. DOI: 10.1207/s15326942dn2503_2

Lentink, D., Müller, U., Stamhuis, E., de Kat, R., van Gestel, W., Veldhuis, L., Henningsson, P., Hedenström, A., Videler, J., & van Leeuwen, J. (2007). How swifts control their glide performance with morphing wings. Nature, 446 (7139), 1082-1085. DOI: 10.1038/nature05733

Monday, October 10, 2011

A history of music cognition?

One of the pioneers in the field that would come to be called music cognition was H. Christopher Longuet-Higgins (1923-2004). Not only was Longuet-Higgins one of the founders of the cognitive sciences (he coined the term in 1973), but as early as 1971 he formulated, together with Mark Steedman, the first computer model of musical perception. That early work was followed in 1976 with a full-fledged alternative in the journal Nature, seven years earlier than the more widely known, but, according to Longuet-Higgins, less precisely formulated, Generative Theory of Tonal Music of Lerdahl and Jackendoff. In a review in Nature in 1983 he wrote somewhat sourly:
‘Lerdahl and Jackendoff are, it seems, in favor of constructing a formally precise theory of music, in principle but not in practice.’
Although Lerdahl and Jackendoff’s book was far more precise than any musicological discussion found in the leading journals, the importance of formalization cannot be overestimated. Notwithstanding all our musicological knowledge, many fundamental concepts are in fact treated as axioms; musicologists are, after all, anxious to tackle far more interesting matters than basic notions like tempo, meter or syncopation, to name a few. But these axioms are not in actual fact understood, in the sense that we are not (yet) able to formalize them sufficiently to explain them to a computer. This is still the challenge of ‘computer modelling’ (and of recent initiatives such as computational humanities), a challenge that Longuet-Higgins was one of the first to take up [Excerpt from Honing, 2011].
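To get a feel for what ‘explaining syncopation to a computer’ involves, here is a toy measure in the spirit of Longuet-Higgins and Lee’s later work on syncopation: every position in the bar gets a metrical weight, and a rest falling on a position metrically stronger than the preceding sounding note counts as a syncopation, scored by the weight difference. This is a deliberately simplified sketch of my own, not their actual model:

```python
def metrical_weights(n=16):
    """Weights for a 4/4 bar at sixteenth-note resolution: the downbeat is
    strongest (0), and each deeper metrical level loses one unit."""
    weights = []
    for i in range(n):
        w = 0
        level = n
        while level > 1 and i % level != 0:
            level //= 2
            w -= 1
        weights.append(w)
    return weights

def syncopation(onsets, n=16):
    """onsets: set of positions (0..n-1) where a note starts.
    Total syncopation: sum of weight differences for every rest that is
    metrically stronger than the last onset before it."""
    w = metrical_weights(n)
    total = 0
    for pos in range(n):
        if pos not in onsets:                      # a rest (or tie) here...
            prev = max((o for o in onsets if o < pos), default=None)
            if prev is not None and w[pos] > w[prev]:
                total += w[pos] - w[prev]          # ...after a weaker onset
    return total

print(syncopation({0, 4, 8, 12}))  # notes on every beat: no syncopation
print(syncopation({0, 3, 6, 10}))  # off-beat onsets: clearly syncopated
```

Crude as it is, even this toy forces one to make the axioms explicit: what counts as a strong position, what counts as a rest, and how strength differences accumulate.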

Longuet-Higgins, H. C. (1983). All in theory — the analysis of music. Nature, 304 (5921), 93. DOI: 10.1038/304093a0

Longuet-Higgins, H. C. (1976). Perception of melodies. Nature, 263 (5579), 646-653. DOI: 10.1038/263646a0
 
Honing, H. (2011). The Illiterate Listener: On Music Cognition, Musicality and Methodology. Amsterdam: Amsterdam University Press.

Saturday, September 17, 2011

Do music lessons make you smarter?

As a follow-up to an earlier entry announcing a lecture by Glenn Schellenberg at the CSCA in Amsterdam, see this link for a recording of that event (UvA streaming video; sorry, only viewable for UvA students and employees).

Thursday, September 15, 2011

Cleese explains it all



Zatorre, R. (2005). Music, the food of neuroscience? Nature, 434 (7031), 312-315. PMID: 15772648

Tuesday, September 13, 2011

Interested in doing research in cognitive and computational musicology?

A postdoc and PhD position are currently vacant in two collaborative research projects that cut across the boundaries between music cognition, musicology and computer science.

For more information, see here. Deadline for applications is 14 October 2011.


(For related vacancies at the e-Laboratory, see here.)

Friday, August 12, 2011

Dirk Jan Povel

Today I received the sad news that one of the Dutch pioneers in rhythm perception research, Dirk Jan Povel, has passed away after an incurable illness.

Povel made an important contribution to our understanding of the perception of rhythmic patterns, reported in a number of highly cited studies. He retired from Radboud University and the Nijmegen Institute for Cognition and Information (NICI) in November 2005. He taught a few thousand students and was deeply involved in theoretical and applied research in a number of fields, most notably research related to speech perception and production, the perception of temporal patterns and musical rhythms, and the production of serial motor patterns. More recently he investigated the on-line processes of music perception, aiming to discover the perceptual mechanisms listeners use in coding music (see here for more information).

Povel, D. (1981). Internal representation of simple temporal patterns. Journal of Experimental Psychology: Human Perception and Performance, 7 (1), 3-18. DOI: 10.1037/0096-1523.7.1.3

Tuesday, August 09, 2011

What makes us musical animals? [Part 2]

This week a new essay came out in which I try to make a case for ‘illiterate listening’, the human ability to discern, interpret and appreciate musical nuances. We have known for some time that babies possess a keen perceptual sensitivity for the melodic, rhythmic and dynamic aspects of speech and music: aspects that linguists are inclined to categorize under the term ‘prosody’, but which are in fact the building blocks of music. Only much later in a child’s development does s/he make use of this ‘musical prosody’, for instance in delineating and subsequently recognizing word boundaries. We all share these musical skills, from day one, and long before a single word has been uttered, let alone conceived. It is the preverbal and preliterate stage of our development that is dominated by musical listening.

The Illiterate Listener has been available online since August 9, 2011.

Honing, H. (2011). The Illiterate Listener: On Music Cognition, Musicality and Methodology. Amsterdam: Amsterdam University Press.


Sunday, July 31, 2011

What makes us musical animals?

This week a plug for my new book that just came out: Musical Cognition: A Science of Listening (Read fragments of it online at Google Books; currently available with more than 30% discount on the hardcover at Amazon and Barnes & Noble).

From the cover:
"Musical Cognition suggests that music is a game (or 'beneficial play'). In music, our cognitive functions such as perception, memory, attention, and expectation are challenged; yet as listeners we often do not realize that the listener plays an active role in reaching the awareness that makes music so exhilarating, soothing, and inspiring. In reality, the author contends, listening does not happen in the outer world of audible sound but in the inner world of our minds and brains.

Recent research in the areas of psychology and neuro-cognition allows Honing to be explicit in a way that many of his predecessors could not. His lucid, evocative writing style guides the reader through what is known about listening to music while avoiding jargon and technical diagrams. With clear examples, the book concentrates on underappreciated musical skills — “sense of rhythm” and “relative pitch” — skills that make us musical creatures. Research on how living creatures respond to music supports the conviction that all humans have a unique, instinctive attraction to music.

Musical Cognition includes a selection of intriguing examples from recent literature exploring the role that an implicit or explicit knowledge of music plays when one listens to it. The scope of the topics discussed ranges from the ability of newborns to perceive the beat, to the unexpected musical expertise of ordinary listeners. The evidence shows that music is second nature to most human beings — biologically and socially."



Honing, H. (2011). Musical Cognition: A Science of Listening. New Brunswick, N.J.: Transaction Publishers. ISBN 978-1-4128-4228-0.

Winkler, I., Háden, G., Ladinig, O., Sziller, I., & Honing, H. (2009). Newborn infants detect the beat in music. Proceedings of the National Academy of Sciences, 106 (7), 2468-2471. DOI: 10.1073/pnas.0809035106


Wednesday, July 27, 2011

Why would anyone listen to sad music?


See also the San Francisco Classical Voice.

Huron, D. (2011). Why is sad music pleasurable? A possible role for prolactin. Musicae Scientiae, 15 (2), 146-158. DOI: 10.1177/1029864911401171

Tuesday, June 21, 2011

Are we ‘illiterate listeners’? [Part 2]


This week a fragment from The Illiterate Listener that will be published later this year at Amsterdam University Press:
"French babies cry differently than German babies. That was the conclusion of a study published at the end of 2009 in the scientific journal Current Biology. German babies were found to cry with a descending pitch; French babies, on the other hand, with an ascending pitch, descending slightly only at the end. It was a surprising observation, particularly in light of the currently accepted theory that when one cries, the pitch contour will always descend, as a physiological consequence of the rapidly decreasing pressure during the production of sound. Apparently, babies only a few days old can influence not only the dynamics, but also the pitch contour of their crying. Why would they do this?

The researchers interpreted it as the first steps in the development of language: in spoken French, the average intonation contour is ascending, while in German it is just the opposite. This, combined with the fact that human hearing is already functional during the last trimester of pregnancy, led the researchers to conclude that these babies absorbed the intonation patterns of the spoken language in their environment in the last months of pregnancy and consequently imitated it when they cried.

This observation was also surprising because until now one generally assumed that infants only develop an awareness for their mother tongue between six and eighteen months, and imitate it in their babbling. Could this indeed be unique evidence, as the researchers emphasized, that language sensitivity is already present at a very early stage? Or are other interpretations possible?

Although the facts are clear, this interpretation is a typical example of what one could call a language bias: the linguist’s understandable enthusiasm to interpret many of nature’s phenomena as linguistic. There is, however, much more to be said for the notion that these newborn babies exhibit an aptitude whose origins are found not in language but in music.

We have known for some time that babies possess a keen perceptual sensitivity for the melodic, rhythmic and dynamic aspects of speech and music: aspects that linguists are inclined to categorize under the term ‘prosody’, but which are in fact the building blocks of music. Only much later in a child’s development does he make use of this ‘musical prosody’, for instance in delineating and subsequently recognizing word boundaries. But let me emphasize that these very early indications of musical aptitude are not in essence linguistic."

Honing, H. (2011, in press). The Illiterate Listener: On Music Cognition, Musicality and Methodology. Amsterdam: Amsterdam University Press.

Sunday, June 19, 2011

Is blogging outdated?

Yesterday an article by Carola Houtekamer appeared in NRC Handelsblad reviewing the state of blogging. She wrote an enthusiastic article a few years ago and it was about time for a re-evaluation.

A round of telephone calls made her realize, though, that blogging is out of date, replaced by newer activities like Twitter and Facebook. Except, that is, in the world of science! There, apparently, 140 characters are too few, and Facebook is considered too shallow.

The most remarkable revitalization of blog activity mentioned in Houtekamer's article is the new network set up by Bora Zivkovic of Scientific American. But The Guardian, Wired and the scientific journal PLOS also recently started new blog networks (see, e.g., researchblogging.org, blogs.discovermagazine.com, scientopia.org or occamstypewriter.org).

Personally, I like the scale of a blog. Over time it builds up into an archive of smaller and larger ideas, and turns out to be a reference for topics that come up with a certain regularity in my classes; it is not uncommon for a blog entry to turn out to be useful as a starting point for a larger project.

Nevertheless, let's see how all this develops over the next five years. New technology will surely suggest novel ways of doing and disseminating the doubts, failures and insights of science.

Saturday, June 18, 2011

Is musicality special? [Dutch]

Today in de Volkskrant (in the Opinie & Debat section): a piece by Dick Swaab, Erik Scherder and yours truly, entitled ‘Amuzikaal zijn is de grote uitzondering’ (‘Being unmusical is the great exception’), on why music is not a luxury.

Join the discussion at opinie.volkskrant.nl.

Tuesday, June 07, 2011

Interested in doing a postdoc in music cognition?

The University of Amsterdam offers three new postdoc positions, one of which is in the field of music cognition.

Detailed information on the project, and instructions on how to apply, can be found here. Deadline for applications: 23 June 2011.

ResearchBlogging.orgHoning, H., Ladinig, O., Háden, G., & Winkler, I. (2009). Is Beat Induction Innate or Learned? Annals of the New York Academy of Sciences, 1169 (1), 93-96 DOI: 10.1111/j.1749-6632.2009.04761.x

Thursday, June 02, 2011

Is music a luxury? [Dutch]


On Wednesday 22 June, Muziek telt! organizes the symposium Muziek en het Brein (Music and the Brain). Questions such as: what are the positive effects of music on the brain? When do they occur? And how far along are scientists in their research into this? will be answered by the keynote speakers:
  • prof. dr. Erik Scherder (professor of Clinical Neuropsychology)
  • prof. dr. Henkjan Honing (professor of Music Cognition)
  • prof. dr. Dick Swaab (professor of Neurobiology)
The symposium is intended for everyone working in education, science, politics, health care and music.

The afternoon will be presented by Paul Witteman.


To register, click here (possible until 31 May).

Thursday, May 26, 2011

A creative mind? [Dutch]

On the initiative of the Freek & Hella de Jonge Foundation and the Cognitive Science Center Amsterdam (CSCA), the Creatieve Geest Prijs (Creative Mind Prize) has been established. The prize is awarded to a young postdoctoral scientist with an original and sparkling idea for their own research, centered on creativity and the workings of the brain. The winner gets the chance to realize her/his research plan within the UvA Brain & Cognition program and thereby build a profile as a promising researcher.

The prize is intended to draw more attention to research into creativity and the brain processes involved, such as the perception of painters, the design work of architects, or the associative abilities of comedians. This research can be conducted from multiple scientific disciplines, as long as the theme ‘brain and cognition’ is central. The main assessment criterion is that it concerns a new, unexpected idea about how the brain drives creativity, coupled with a plan for investigating it further.


The deadline for applications is 15 August 2011.

More information can be found here.




Wednesday, May 18, 2011

Does music make you smarter?

Since the publication of Music and spatial task performance in Nature in 1993, numerous researchers have tried to replicate the so-called ‘Mozart effect’: the idea that listening to Mozart's music would make you smarter.

There is now quite some evidence that music listening indeed leads to enhanced performance on a variety of cognitive tests, but also that such effects are short-term and stem from the impact of music on arousal level and mood, which, in turn, affect cognitive performance. This, however, is not specific to music: experiences other than music listening have similar effects.

Music lessons in childhood, however, tell a different story. They are associated with small but general and long-lasting intellectual benefits that cannot be attributed to obvious confounding variables such as family income and parents' education (Schellenberg, 2004). The mechanisms underlying this association have yet to be determined (Schellenberg & Peretz, 2008). Other controversial issues related to these findings are, for example, the direction of causation (does music influence cognitive skills, or is it the other way around?) and the reason why "real musicians" often fail to exhibit enhanced performance on measures of intelligence (if music makes you smarter, why aren't musicians generally smarter?).

On Wednesday 15 June 2011 Glenn Schellenberg will give a lecture on this topic at the Cognitive Science Center Amsterdam (CSCA) of the University of Amsterdam. See here for more information on the lecture and location.

Schellenberg, E. G., & Peretz, I. (2008). Music, language and cognition: unresolved issues. Trends in Cognitive Sciences, 12(2), 45-46. DOI: 10.1016/j.tics.2007.11.005

Wednesday, April 20, 2011

A 2006 live recording of Glenn Gould?

[Repeated entry from 17 August 2007] Sony Music recently released a new recording (made in 2006) of Glenn Gould performing the Goldberg Variations. Curious, no? The recording was made by taking measurements from the old recordings and then regenerating the performance on a computer-controlled grand piano: a modern pianola.

This technology dates from the early nineties, when several piano companies (including Yamaha and Bösendorfer) combined MIDI and modern solenoid technology with the older idea of the pianola. Old paper piano rolls with recordings by Rachmaninoff, Bartók, Stravinsky and others were translated to MIDI and could be reproduced 'live' on modern instruments like the Yamaha Disklavier. The remaining challenge was to do the same for recordings for which no piano rolls were available.

Technicalities aside, for most people the real surprise (or perhaps disillusion) might well be the realization that a piano performance can be reduced to the 'when', 'what' and 'how fast' of the piano keys being pressed. Three numbers per note fully capture a piano performance, and they allow any performance to be replicated on a grand piano(-la). The moment a pianist hits a key with a certain velocity, the hammer is released, and any gesture made after that can be considered merely dramatic: it has no effect on the sound. This realization puts all theories about the magic of a pianist's touch in a different perspective.
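Such a three-number representation is easy to sketch in code. The fragment below is purely illustrative (the note values are invented, not taken from any Gould recording); it only shows how little information a full piano performance reduces to:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    onset: float    # the 'when': key-press time in seconds
    pitch: int      # the 'what': MIDI note number (60 = middle C)
    velocity: int   # the 'how fast': MIDI key velocity, 1-127

# An invented three-note fragment. In this view, any piano
# performance is just a list of such triplets.
performance = [
    NoteEvent(0.00, 67, 54),
    NoteEvent(0.92, 67, 48),   # same note, slightly softer
    NoteEvent(1.85, 69, 60),
]
```

Replaying such a list on a Disklavier-style instrument amounts to scheduling each key press at its `onset` with the given `velocity`; everything else a listener hears follows from the instrument itself.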

Nevertheless, while it is relatively easy to translate audio (say, a 1955 Glenn Gould recording) into the what (which notes) and the when (timing) of a MIDI-like representation, the real problem is the 'reverse engineering' of key velocity: what was the speed of Gould's finger presses on the specific piano he used? Zenph Studios claims to have solved this for at least a few recordings. Just trust your ears :-)
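To make the reverse-engineering problem concrete, here is a deliberately naive sketch of the missing step: mapping a note's measured peak amplitude to a MIDI velocity with a linear-in-decibels rule. The function name, the -60 dB floor and the mapping itself are my own assumptions for illustration; Zenph's actual method is proprietary and surely far more sophisticated:

```python
import math

def estimate_velocity(peak_amplitude: float, db_floor: float = -60.0) -> int:
    """Naively map a note's peak amplitude (linear, 0..1 = full scale)
    to a MIDI key velocity (1..127) via a linear-in-dB rule."""
    if peak_amplitude <= 0.0:
        return 1
    db = 20.0 * math.log10(peak_amplitude)          # level in dBFS (<= 0)
    frac = max(0.0, min(1.0, 1.0 - db / db_floor))  # 0 at the floor, 1 at full scale
    return max(1, min(127, round(1 + frac * 126)))
```

A full-scale peak maps to velocity 127 and anything at or below the floor to 1. The real difficulty, of course, is that the amplitude picked up by a microphone in 1955 depends on the piano, the room and the recording chain, not on key speed alone.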

See also last week's column on Radio 4 (in Dutch):



(see also earlier comments)


Thursday, April 07, 2011

Familiar with pop music?

This week a short entry to promote a PopQuiz made by MSc student Tom Aizenberg, designed to find out more about music and memory. Feel free to share it with your Facebook friends...

Click here to do the listening test (just 10 questions).

Friday, April 01, 2011

Can music be funny?

In the spirit of today, a fragment from New Horizons in Music Appreciation, a program from Radio Station WOOF at the University of Southern North Dakota at Hoople: an early example of how to attract a wider audience to classical music:



With regard to today's question: David Huron (2004) studied audience laughter in live recordings of Peter Schickele's music (one of the presenters in the broadcast above). In that paper he offers a physiological explanation for why listeners respond to specific musical fragments by producing the distinctive "ha-ha-ha" vocalization.

Huron, D. (2004). Music-engendered laughter: an analysis of humor devices in P.D.Q. Bach. Proceedings of the 8th International Conference on Music Perception and Cognition, 700-704.

Saturday, March 05, 2011

A case of congenital beat deafness?

Of the many people who claim things like 'Oh, but I'm not musical at all', 'I'm hopeless at keeping a tune' or 'I have no sense of rhythm', only a small percentage turn out to be truly unmusical. The condition is known as amusia, and those who suffer from it are literally music-deficient. It is a rather exceptional, mostly inherited condition that comprises a range of handicaps in recognising or reproducing melodies and rhythms. It has been estimated that about 4 per cent of people in Western Europe and North America have problems of this kind, to a greater or lesser degree. The most common handicap is tone deafness or dysmelodia: the inability, or difficulty, in hearing the difference between two separate melodies.

To diagnose amusia, the Montreal Battery of Evaluation of Amusia (MBEA) has been developed. This test is available online, but wait a while before trying it out :-) People who say 'I can't hold a note', 'I sing out of tune' or 'I have no sense of rhythm' are not necessarily suffering from amusia. They often confuse poor singing or dancing skills with the inability to hear differences between melodies and rhythms. Clapping a complex rhythm or dancing to the music, for instance, requires quite some practice. Nevertheless, almost all of us can hear the differences between rhythms. It has been established that, even among people diagnosed as tone-deaf, about half have a normal sense of rhythm (Peretz & Hyde, 2003).

Jessica Phillips-Silver (Université de Montréal, Canada) and a dream team of music cognition experts found a person who claims to have truly no sense of rhythm or, more precisely, who appears to be deaf to the regularity in music. They describe their results in an upcoming issue of Neuropsychologia.

All tests presented in this intriguing study indeed point to a person with a true deficit in picking up the regularity in music (the 'beat' or regular pulse).

However, as with other studies on beat induction, it has proven very difficult to establish the presence or absence of this skill by judging overt behavior such as dancing (see earlier entries on, e.g., Snowball). The study presents one (non-standard) perceptual test of beat perception, and I am surprised the researchers did not use a relatively simple and far more direct test of whether beat induction is present or absent in this participant, such as the MMN paradigm used in work with newborns (e.g., Honing et al., 2009) or other recent studies using brain imaging methods. It would make a great follow-up paper.*

Phillips-Silver, J., Toiviainen, P., Gosselin, N., Piché, O., Nozaradan, S., Palmer, C., & Peretz, I. (2011). Born to dance but beat deaf: A new form of congenital amusia. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2011.02.002

Peretz, I., & Hyde, K. (2003). What is specific to music processing? Insights from congenital amusia. Trends in Cognitive Sciences, 7(8), 362-367. DOI: 10.1016/S1364-6613(03)00150-5

Honing, H., Ladinig, O., Háden, G., & Winkler, I. (2009). Is beat induction innate or learned? Annals of the New York Academy of Sciences, 1169(1), 93-96. DOI: 10.1111/j.1749-6632.2009.04761.x

* In fact, we started working on this last summer (Lidji, Palmer, Honing & Peretz, in preparation).