Research in Brief  |   March 01, 2015
Crowdsourcing Could Bolster Speech Sounds Research
The ASHA Leader, March 2015, Vol. 20, 14. doi:10.1044/leader.RIB1.20032015.14
Crowdsourcing—aggregating responses to an online task across a large number of people—may be an effective way to rate sounds in speech disorders research, according to a study in the Journal of Communication Disorders.
Traditional use of speech-language pathologists and other trained professionals to rate research participants’ progress can be costly and time-consuming. It can also be a challenge to find raters who are not part of the research and, therefore, unbiased.
Modeling studies have shown that even when individual responses to a task are not highly accurate, aggregated or crowdsourced responses from a large number of people generally converge with those of experts. In this study, New York University researchers—led by Tara McAllister Byun, assistant professor of communicative sciences and disorders—compared the speech sound ratings of experienced listeners with those of listeners recruited through Amazon’s Mechanical Turk online crowdsourcing platform.
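The convergence effect described above can be illustrated with a minimal simulation sketch (not drawn from the study itself): if each simulated listener is assumed to rate an item correctly with some fixed probability, the accuracy of the majority vote rises quickly as the panel grows. The listener accuracy of 0.75 and the panel sizes below are assumed values chosen only for illustration.

```python
# Illustrative simulation (assumed parameters, not data from the study):
# majority votes from many moderately accurate listeners approach
# expert-level accuracy as the panel grows.
import random

random.seed(0)

def majority_vote_accuracy(n_listeners, listener_accuracy=0.75, n_items=1000):
    """Fraction of items on which the panel's majority vote is correct."""
    correct = 0
    for _ in range(n_items):
        votes = sum(random.random() < listener_accuracy for _ in range(n_listeners))
        if votes > n_listeners / 2:
            correct += 1
    return correct / n_items

for n in (1, 3, 9, 25):
    print(f"{n:>2} listeners: {majority_vote_accuracy(n):.2%} of items rated correctly")
```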
Twenty-five experienced listeners and 153 online listeners rated recordings of 100 words containing /r/, collected from children receiving speech-language treatment to correct pronunciation of the sound. Data collection from the experienced listeners took three months; the online data collection took 23 hours.
The researchers found that when items were classified as correct or incorrect based on the majority vote across all listeners in a group, the two groups agreed on all but seven of the 100 items. In further analyses, they found that samples of nine or more crowdsourced listeners demonstrated a level of performance consistent with expectations for experienced listeners.
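The aggregation step described here can be sketched as follows. This is a hypothetical illustration of majority-vote classification and group comparison, not the study's actual analysis code; the data structures, item names, and the `count_disagreements` helper are assumptions introduced for the example.

```python
# Hypothetical sketch: classify each item as correct/incorrect by majority vote
# within each listener group, then count items on which the two groups disagree.
from collections import Counter

def majority_label(ratings):
    """Return the most common rating ('correct' or 'incorrect') for one item."""
    return Counter(ratings).most_common(1)[0][0]

def group_labels(ratings_by_item):
    """ratings_by_item: {item_id: [rating, rating, ...]} for one listener group."""
    return {item: majority_label(r) for item, r in ratings_by_item.items()}

def count_disagreements(expert_ratings, crowd_ratings):
    """Number of items where the expert and crowd majority labels differ."""
    expert = group_labels(expert_ratings)
    crowd = group_labels(crowd_ratings)
    return sum(expert[item] != crowd[item] for item in expert)

# Toy data for illustration only (two items, three raters per group):
experts = {"word_01": ["correct", "correct", "incorrect"],
           "word_02": ["incorrect", "incorrect", "incorrect"]}
crowd = {"word_01": ["correct", "correct", "correct"],
         "word_02": ["correct", "incorrect", "incorrect"]}
print(count_disagreements(experts, crowd))  # -> 0
```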
“Because large crowdsourced samples can be obtained quickly, easily and inexpensively, speech researchers could find it beneficial to use crowdsourcing technology in place of traditional methods of collecting speech ratings,” McAllister Byun says.
The researchers acknowledge that using crowdsourcing for speech ratings poses some limitations, including a lack of control over sound quality and the possibility of inattentive or uncooperative raters, but suggest that the method could nonetheless improve how speech ratings are gathered.
“A key advantage of using crowdsourcing to recruit listeners for speech rating tasks is the speed and ease with which ratings can be obtained,” McAllister Byun says. “However, using crowdsourcing for speech data rating is not merely a question of convenience; it also has the potential to improve speech research by expanding access to independent listeners, thereby reducing bias.”