Melissa Baese-Berk studies speech perception and production, with special attention to nonnative speakers and listeners. She looks closely at variation in speech production and how that variation influences listeners in perception. She has also worked extensively on how various aspects of the perception and production systems interact, including the often-challenging task of listening to speech in adverse conditions. Baese-Berk also serves as director of the UO's Second Language Acquisition and Teaching Certificate Program and as an undergraduate advisor. Before joining the UO, she was a postdoctoral researcher at both the Basque Center on Cognition, Brain, and Language in Spain and Michigan State University.
Media Appearances
Foreign Accents May Depend as Much on Your Eyes as Your Ears
Baese-Berk says communication is a two-way street: there's a speaker, and then there's the listener. And listeners' ability to understand can depend on their eyes as much as their ears, which means racial bias plays a role.
Learn a New Lingo While Doing Something Else
Scientific American online
In one study, published in 2015 in the Journal of the Acoustical Society of America, linguists found that people who took breaks from learning new sounds performed just as well as those who took no breaks, as long as the sounds continued to play in the background. The researchers trained two groups of people to distinguish among trios of similar sounds—for instance, Hindi has “p,” “b” and a third sound English speakers mistake for “b.” One group practiced telling these apart one hour a day for two days. Another group alternated between 10 minutes of the task and 10 minutes of a “distractor” task that involved matching symbols on a worksheet while the sounds continued to play in the background. Remarkably, the group that switched between tasks improved just as much as the one that focused on the distinguishing task the entire time. “There's something about our brains that makes it possible to take advantage of the things you've already paid attention to and to keep paying attention to them,” even when you are focused on something else, suggests Melissa Baese-Berk, a linguist at the University of Oregon and a co-author of the study...
How to Learn a New Language While Making Dinner, Running Errands, or Paying Bills
Of course, none of this is to say that you shouldn't take a language course or hire a tutor to help you. "You need to come to class and pay attention," one of the authors on both studies, Melissa Baese-Berk, told Greenwood. After class, though, you can listen to a foreign-language radio station and not pay full attention. It will still help you...
What a Difference an 'a' Makes
Around the O
When these audio recordings left Armstrong’s lunar locution an unsolved mystery, the UO’s Melissa Baese-Berk and scholars from Michigan State University and Ohio State University stepped in with their linguistics expertise to investigate the matter. Their analyses are detailed in a paper published Sept. 7 in PLOS ONE.
“Linguistic analysis gives us tools to examine Armstrong's claim,” said Baese-Berk, who specializes in speech production and perception. She focuses her research on issues such as the challenge of listening to speech in adverse conditions, like noisy galaxies, and the cognitive processes listeners use to decode messages with ambiguous language, like “for” versus “for a.”...
How to Teach Old Ears New Tricks
In a study published in 2013, for example, linguist Melissa M. Baese-Berk, then at Michigan State University, and her colleagues showed that an hour of training over two days on five different varieties of accented English improved understanding of all types of accented English, even totally novel accents. These findings gel with the research about learning foreign sounds—in general, listening to a broad array of speakers will train your brain faster and let you more reliably transfer that knowledge to the real world...
Good News: You Can Learn A New Language Without Even Thinking About It
So, is it time to fire your tutor and invest in a box set of Corazón Salvaje? Not quite. While this growing body of research suggests inactive language learning is an awesome tool in your language-learning arsenal, the value of focused practice cannot be discounted. “You need to come to class and pay attention,” says Melissa Baese-Berk, a linguist and co-author of one of the studies. “But when you go home, turn on the TV or turn on the radio in that language while you’re cooking dinner, and even if you’re not paying total attention to it, it’s going to help you.”...
Proposal: Armstrong Flubbed His Big Moon Speech Because of Ohio
Dilley and her colleagues, including MSU linguist Melissa Baese-Berk and OSU psychologist Mark Pitt, conducted a statistical analysis of the duration of the "r" sound as spoken by native central Ohioans saying both "for" and "for a" in natural conversation. To do this, the Acoustical Society of America explains, the researchers used a collection of recordings of conversational speech from 40 people raised in Columbus -- which is near Wapakoneta. Within that body of recordings, the researchers found 191 use cases of "for a." They then matched each of those instances to an instance of "for," sans "a," as said by the same speaker. They then compared the relative duration of the two...
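The matching procedure described above can be sketched in a few lines. This is a minimal illustration of the idea (pair each "for a" token with a bare "for" token from the same speaker and compare "r" durations), not the researchers' actual code; the token records and durations here are invented for demonstration.

```python
# Illustrative sketch only: within-speaker pairing of "for a" vs. bare "for"
# tokens and comparison of their durations. All data values are hypothetical.
from statistics import mean

# Hypothetical token records: (speaker, phrase_type, duration_ms)
tokens = [
    ("s01", "for_a", 52.0), ("s01", "for", 71.5),
    ("s02", "for_a", 48.3), ("s02", "for", 65.0),
]

# Group durations by speaker and phrase type.
by_speaker = {}
for speaker, phrase, dur in tokens:
    by_speaker.setdefault(speaker, {}).setdefault(phrase, []).append(dur)

# For each speaker with both token types, compute the relative duration
# of "for a" tokens to bare "for" tokens (matched within speaker).
ratios = []
for speaker, groups in by_speaker.items():
    if "for_a" in groups and "for" in groups:
        ratios.append(mean(groups["for_a"]) / mean(groups["for"]))

print(f"mean within-speaker duration ratio (for a / for): {mean(ratios):.2f}")
```

The within-speaker matching is the key design choice: it controls for overall speech rate differences between talkers, so any systematic duration difference reflects the phrase type rather than the speaker.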
Many international students at U.S. universities study in intensive English courses to improve their language skills before taking content-oriented courses toward their degrees. English for Academic Purposes (EAP) instructors in intensive courses and university faculty in content courses both listen to international student speech, but it is unclear whether they perceive it similarly or differently. In the present study, two groups (Content Faculty and EAP Instructors) provided comprehensibility ratings and transcribed an excerpt of speech from international students. Both groups of participants answered questions about their experience with the English of international students and other non-native speakers and their attitudes towards the English proficiency of international students. Comprehensibility ratings and intelligibility scores for both groups were similar, but EAP Instructors were able to transcribe more accurately for less-intelligible speakers. Content Faculty with negative attitudes towards international students’ language abilities gave lower comprehensibility ratings than those with positive attitudes, even though their transcription accuracy was equivalent. These results strengthen our understanding of the relationship between comprehensibility and intelligibility and have implications for university EAP curricula.
Speech perception abilities vary substantially across listeners, particularly in adverse conditions including those stemming from environmental degradation (e.g., noise) or from talker-related challenges (e.g., nonnative or disordered speech). This study examined adult listeners' recognition of words in phrases produced by six talkers representing three speech varieties: a nonnative accent (Spanish-accented English), a regional dialect (Irish English), and a disordered variety (ataxic dysarthria). Semantically anomalous phrases from these talkers were presented in a transcription task and intelligibility scores, percent words correct, were compared across the three speech varieties. Three cognitive-linguistic areas—receptive vocabulary, cognitive flexibility, and inhibitory control of attention—were assessed as possible predictors of individual word recognition performance. Intelligibility scores for the Spanish accent were significantly correlated with scores for the Irish English and ataxic dysarthria. Scores for the Irish English and dysarthric speech, in contrast, were not correlated. Furthermore, receptive vocabulary was the only cognitive-linguistic assessment that significantly predicted intelligibility scores. These results suggest that, rather than a global skill of perceiving speech that deviates from native dialect norms, listeners may possess specific abilities to overcome particular types of acoustic-phonetic deviation. Furthermore, vocabulary size offers performance benefits for intelligibility of speech that deviates from one's typical dialect norms.
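The intelligibility measure used above, percent words correct in a transcription task, can be illustrated with a small sketch. This is an assumed, simplified scorer (order-free word overlap), not the study's scoring protocol, and the example phrase is invented.

```python
# Illustrative sketch only: scoring a listener's transcription as percent
# words correct. A real protocol would define word matching more carefully
# (e.g., handling homophones and morphological variants).
def percent_words_correct(target: str, transcription: str) -> float:
    """Score a transcription against the target phrase (order-free word overlap)."""
    target_words = target.lower().split()
    heard = transcription.lower().split()
    correct = 0
    for word in target_words:
        if word in heard:
            heard.remove(word)  # each transcribed word can credit one target word
            correct += 1
    return 100 * correct / len(target_words)

# Hypothetical semantically anomalous phrase and listener response.
score = percent_words_correct("the boast can rest the name",
                              "the boat can rest a name")
print(f"{score:.0f}% words correct")
```

Using semantically anomalous phrases, as in the study, prevents listeners from guessing words from sentence meaning, so the score reflects acoustic-phonetic recognition rather than top-down prediction.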
It is frequently assumed that perception and production develop in tandem during non-native speech sound acquisition. However, several previous studies have suggested that simply producing tokens during training can disrupt perceptual learning (e.g., Leach & Samuel, 2009). Here, I present several experiments examining some sources of such a disruption. Further, I ask how learning in production progresses when listeners fail to learn in perception. In the first set of experiments, I examine performance on a discrimination task after training in perception alone, or in perception + production, in which listeners repeat tokens on every trial. In follow-up experiments, rather than repeating the training tokens, listeners read an unrelated letter aloud between each perceptual training trial, or respond to the unrelated letter with a button press rather than reading it aloud. In a second set of experiments, I examine how production learning progresses when learners produce tokens during training. The critical factor examined is whether listeners are able to differentiate between tokens in perception and whether this ability correlates with learning in production. Taken together, the results of these studies suggest that the relationship between perception and production is complex, especially during learning.
This paper describes the development of the Wildcat Corpus of native- and foreign-accented English, a corpus containing scripted and spontaneous speech recordings from 24 native speakers of American English and 52 non-native speakers of English. The core element of this corpus is a set of spontaneous speech recordings, for which a new method of eliciting dialogue-based, laboratory-quality speech recordings was developed (the Diapix task). Dialogues between two native speakers of English, between two non-native speakers of English (with either shared or different L1s), and between one native and one non-native speaker of English are included and analyzed in terms of general measures of communicative efficiency. The overall finding was that pairs of native talkers were most efficient, followed by mixed native/non-native pairs and non-native pairs with shared L1. Non-native pairs with different L1s were least efficient. These results support the hypothesis that successful speech communication depends both on the alignment of talkers to the target language and on the alignment of talkers to one another in terms of native language background.
Many theories predict the presence of interactive effects involving information represented by distinct cognitive processes in speech production. There is considerably less agreement regarding the precise cognitive mechanisms that underlie these interactive effects. For example, are they driven by purely production-internal mechanisms (e.g., Dell, 1986), or do they reflect the influence of perceptual monitoring mechanisms on production processes (e.g., Roelofs, 2004)? Acoustic analyses reveal that the phonetic realisation of words is influenced by their word-specific properties, supporting the presence of interaction between lexical-level and phonetic information in speech production. A second experiment examines which mechanisms are responsible for this interactive effect. The results suggest the effect occurs online and is not purely driven by listener modelling. These findings are consistent with the presence of an interactive mechanism that is online and internal to the production system.