Features   |   October 01, 2014
Rewarding Research
Grant agencies award millions to support groundbreaking research in communication sciences and disorders. To find out what’s around the bend in CSD, sample some of the largest of these grant-funded investigations.
The ASHA Leader, October 2014, Vol. 19, 32-45. doi:10.1044/leader.FTR1.19102014.32


Watch researcher Mabel Rice describe how children with specific language impairment fare over the years.


Watch researcher Susan Ellis Weismer describe her investigations of how executive function affects language development.


Watch researcher Cynthia K. Thompson describe her quest to find out what most affects and helps language recovery in aphasia.
Ah, research—such a powerful engine of discovery and innovation in the professions. Women and men devoting years, if not decades, to investigating, testing and learning; trails being blazed along CSD frontiers that help professionals better understand and treat communication disorders.
Each year, grant agencies award millions of dollars in support of CSD-related research. Some grants continue to fund long-term research programs; others jumpstart new projects. The pages that follow showcase current research programs supported by some of the largest grants from major funding agencies: exciting, relevant research, spearheaded by scientists on multiple fronts. These include using computer games to make auditory training more palatable, developing measures to predict the persistence of stuttering, assessing the long-term effects of specific language impairment on older children, studying how language disruption in SLI and autism spectrum disorder affects executive function, working to understand how the brain recovers from stroke in three language domains, and improving diagnostic methods, interventions and prevention strategies for age-related hearing loss.
As you’re peeking under the hood and enjoying this sampling of diverse research in action, keep two essential questions in mind. First, which agencies are common sources of substantive funding for the professions? Funding agencies are in many ways the silent partner in the dance of CSD research. Second, which research topics do those agencies consider priorities today? The topics they support help shape the future of client care and further research.
So…what’s around the bend? Read on.
Gary Dunham, PhD, is ASHA director of publications and editor-in-chief of The ASHA Leader.
Chicks, Zebrafish—and a Cure for Hearing Loss?
Hair cells—tiny cells in the inner ear that look hairy at one end under a powerful microscope—play a central role in hearing, and when they die, there has traditionally been no way to restore them. But at the Virginia Merrill Bloedel Hearing Research Center at the University of Washington, Ed Rubel and his colleagues are working to prevent hair cell death … and even regenerate hair cells in adults with hearing loss.
For Rubel, curing hearing loss is the ultimate goal but not his only motivation. He’s just as interested in creating the foundation upon which the next generation’s scientists—and the ones who follow them—will base their work. “Discovering new knowledge is the only permanent thing we can do as human beings,” he says, “besides having children.”
The recipients of a National Institute on Deafness and Other Communication Disorders “Core Center P30” grant, Rubel and other scientists are studying tiny zebrafish to find ways to prevent hair cell loss due to environmental toxins, ototoxic medications and aging. Larval zebrafish—only 2 mm long—use “hair cells” on the outside of their bodies to detect water currents in much the same way humans use them to detect sounds. Using a unique screening process, Rubel hopes to discover the genes that directly affect hair cells’ responses to ototoxic agents and create new drugs to prevent hearing loss and balance disorders.
Another research project stems from the discovery that baby chicks can regenerate lost hair cells following noise- or drug-induced hearing loss. Rubel and his co-investigators seek to isolate the specific cellular and molecular events that cause hair cells to regrow in birds and other animals—and apply them to humans. An actual product to spur hair cell growth—which would most likely be applied during a surgical procedure—could be as many as 20 to 50 years away, Rubel says. But an agent that can prevent hair cell loss from antibiotics (such as gentamicin) or chemotherapy drugs (like cisplatin) may be imminent. Pending U.S. Food and Drug Administration approval, Rubel’s team is just about ready to begin clinical trials in humans.
But audiologists needn’t worry about hair cell advances putting them out of business; quite the opposite. Rubel points out that the need for diagnosis won’t go away, nor will these procedures and products be effective for all patients. And as with the advent of cochlear implants, the need for audiologists will continue to grow alongside scientific advances. “That’s why Mr. Bloedel and I created the Virginia Merrill Bloedel Hearing Research Center,” Rubel says. “Researchers aren’t in competition with clinicians. We’re partners, and always will be.”
Matthew Cutter is a writer/editor for The ASHA Leader.
Reversing Age-Related Hearing Loss
To meet the challenges of a common, chronic condition of aging—age-related hearing loss, or presbycusis—improved diagnostic methods, interventions and prevention strategies are critical. At the Medical University of South Carolina, clinical and basic science researchers have been seeking solutions to these vital needs. With the renewal of the National Institutes of Health/National Institute on Deafness and Other Communication Disorders P50 Clinical Research Center grant, its research program—“Experimental and Clinical Studies of Presbycusis”—begins its 26th year of study.
Age-related hearing loss is one of the most common health concerns in the United States, one that contributes to poor communication abilities and reduced quality of life for millions of older adults. But many do not seek treatment. This research program—led by Judy R. Dubno, professor in the Department of Otolaryngology–Head and Neck Surgery—is focused on altering that trend.
The MUSC program is unique in several respects: It includes a 25-year longitudinal study of hearing in older persons; an extensive database of auditory, cognitive, health and other outcome measures from more than 1,400 participants; unique interdisciplinary collaborations of basic, translational and clinical scientists; and new approaches to the study of age-related hearing loss, including neuroimaging, human genetics and otopathology of human temporal bones.
— Matthew Cutter
The Forgotten Group: Older Children With SLI
Most children with specific language impairment receive a rush of services after the initial diagnosis up until about third grade. At that point, their language performance tends to improve, and services drop off.
What’s not clear is whether they’ve truly caught up to their peers on key language aspects like vocabulary, grammar and syntax—and how they fare through the rest of their school years and beyond. Enter Mabel Rice, a distinguished professor and speech-language researcher at the University of Kansas—with a big research agenda to trace just what happens to them.
Rice and co-investigators Shelly Smith, a geneticist at the University of Nebraska, and Lisa Hoffman, a quantitative expert on growth curve modeling (also at the University of Kansas), are tracking the language and life achievement of these children. With 25 years of funding from the National Institute on Deafness and Other Communication Disorders—most recently a $3 million-plus, renewable five-year grant—the team has followed a group of children with SLI for 18 years, from age 4 on. The team is also analyzing the SLI history and genetics of their families.
It’s too early to tell long-term effects. But many with SLI consistently lag behind controls on language achievement. And while most of them find jobs, they’re less likely to pursue higher education. This work has big implications for how we think about intervention, Rice says.
“Kids with SLI drop out of SLP caseloads around 8 or 9 years old, in part because they make some progress, but the problem is it doesn’t happen fast enough to close the gap with their peers,” she explains. “So we think they’re fine but they remain seriously behind and fade into the background as they move into adolescence. They need our help during this period.”
Rice acknowledges that some with SLI grow out of it. “But that’s not the main story by any stretch,” she says. “We need to think about how we can continue to help these kids into adolescence, in ways that expand our views on what intervention is.”
Bridget Murray Law is managing editor of The ASHA Leader.
Mabel Rice is an invited presenter at ASHA’s 2014 Research Symposium, “Primary Language Impairment in Children With Concomitant Health Conditions or Nonmainstream Language Backgrounds.” The symposium, on Nov. 22 in Orlando, Florida, is open to all 2014 ASHA Convention registrants.
Language Disorders May Disrupt Executive Function
The special education community is all abuzz about “executive function,” which, loosely defined, is self-regulation. Why the hubbub? Because, when executive function works poorly, children take a hit socially and academically.
And what fascinates researcher Susan Ellis Weismer is the possibility that people use inner language to manage that executive function process—a possibility past research supports. Building on that work, Ellis Weismer theorizes that when language is disrupted in disorders such as specific language impairment and autism spectrum disorder, so is executive function, in three main areas:
  • Inhibition: The ability to complete the task at hand. “I need to ignore distractions and focus.”

  • Updating working memory: The ability to store and retrieve practical knowledge. “Tomorrow I need a packed lunch because there’s a field trip.”

  • Task shifting: The ability to shift easily from one activity to another. “The bell is ringing so I need to put away my work and go out to recess.”

With a five-year, $564,177-a-year grant from the National Institutes of Health, Ellis Weismer and co-investigator Margarita Kaushanskaya, both of the University of Wisconsin-Madison, are testing the relationship between these executive-function components and language in children with SLI and ASD. For example, as children perform nonverbal problem-solving tasks, the researchers interrupt them to see whether they can use inner language to mediate their thoughts.
To add another wrinkle, the researchers compare the SLI and ASD groups’ performance with that of developmentally normal bilingual children, who (in other research) show an executive function advantage over monolingual children.
Ellis Weismer suspects inhibition will be tough for participants with SLI and ASD—in which case she hopes to develop inhibition-building computerized games to help. She anticipates task-shifting will prove most difficult for children with ASD, and working memory most difficult for those with SLI.
“If we find clear relationships between aspects of executive function and language, then we might try some experimental treatments to improve task-shifting in kids with autism,” she says. “If we find that in SLI it’s more working memory problems, we could possibly help these kids improve the ways they use general cognitive processes in memory.”
— Bridget Murray Law
Probing the Language Breakdowns of ASD
Children with specific language impairment struggle with two major aspects of language learning: phonological working memory and syntax (grammar). But is the same true for children with autism spectrum disorder who encounter language-learning problems?
And if the two groups of kids share these particular language difficulties, are similar areas of their brains affected?
These are the fundamental questions John Gabrieli and co-investigators Kenneth Wexler and Helen Tager-Flusberg seek to answer in a five-year, $583,471-a-year NIH grant.
“Very little is known about how these two core language abilities develop and how in the brain they may vary among normally developing kids and kids with ASD and SLI,” says Gabrieli, a cognitive neuroscience researcher at the Massachusetts Institute of Technology. Gabrieli notes that these two core areas have long been known among researchers as the SLI trouble spots, but less is known about where language breaks down in autism spectrum disorder.
“In ASD, some kids have parallel sorts of language difficulties as kids with SLI,” Gabrieli says. “But we don’t know if these syntactical presentation similarities look the same or different in the brain.”
To find out, Gabrieli, Wexler, also of MIT, and Tager-Flusberg, of Boston University, are recruiting boys and girls, ages 5 to 17—matched for intellectual ability, and with and without ASD and SLI—and conducting functional magnetic resonance imaging on their brains as they perform two tasks. In one they hear and then repeat a non-word. In another, they listen to sentences that vary in syntactic structure and gauge how correct they sound.
The researchers are still recruiting participants, so it’s too early for results. But all signs point to differences in the way children with SLI and ASD process language, Gabrieli says. Those differences could ultimately mean insights for speech-language treatment of children on the spectrum, especially in the all-important area of early intervention.
“In autism research, the social difficulties have dominated, but we believe we can also understand and intervene better in the language problems associated with the disorder,” Gabrieli says. “And once we better understand what’s going on, that could lead to improved early intervention in language for these kids.”
— Bridget Murray Law
Seeking an Elusive Goal in Severe Autism: Spoken Language


A participant in Tager-Flusberg’s study undergoes an EEG.
Why do about a quarter of children with autism spectrum disorder fail to acquire spoken language?
The Autism Center of Excellence at Boston University, led by Helen Tager-Flusberg, has a $9 million NIDCD grant to look for answers to that question and to test a new intervention method for nonverbal children with ASD, a population that has been largely ignored in research efforts.
Tager-Flusberg, along with researchers at Harvard Medical School, Northeastern University and Albert Einstein School of Medicine, wants to identify differences in the brains of nonverbal children and adolescents with ASD. They are exploring brain mechanisms they hypothesize may be implicated in a child’s inability to speak.
“Maybe the brains of nonverbal children process signals differently,” Tager-Flusberg says, “and maybe they can’t distinguish speech from other sounds. We know that in children with ASD, there are differences in how their brains are wired—is the wiring different among the speech areas?”
For some of these children, Tager-Flusberg says, the answer may be the input—the auditory processing—and for others, it might be the output—the brain connections in the speech motor system. For some, it might be both, and for some, it could be neither. Identifying where the wiring is different is essential to developing effective treatments.
The researchers also will collect data on participants’ behavior: cognitive and social functions, imitation, joint attention, ability to use alternative communication systems, oral-motor function, and other measures.
“We’re trying to understand who these kids are and why they are different from other kids with autism,” Tager-Flusberg says.
Preparing the children for the brain scans and EEG is enormously time consuming, but the decision not to use sedation is ethical and appropriate, she says. “Minimally verbal children with ASD are hugely challenging. We spend time developing specialized and individualized programs to train the kids to tolerate the study activities and scans.”
All participants will undergo the functional and behavioral assessments. In addition, the younger children (5- to 10-year-olds) will receive a novel treatment—auditory motor mapping training—and the assessments will be repeated post-treatment.
In the training, the child is encouraged to imitate the sounds, words and phrases the speech-language pathologist articulates in a sing-song tone. The child also beats the rhythm of the sounds on a drum.
“This is a neurologically based intervention,” Tager-Flusberg explains. “The music and motor systems are very close to the language areas of the brain. If a child engages in an activity that’s enjoyable that involves the manual motor system and music, we think it will stimulate speech and language.”
The researchers hope the study will eventually reveal the best ways to assess nonverbal children with ASD; identify which children are most receptive to this particular intervention; and lead to the development of other interventions that help children with ASD acquire speech.
Carol Polovoy is assistant managing editor of The ASHA Leader.
Helen Tager-Flusberg is an invited presenter at ASHA’s 2014 Research Symposium, “Primary Language Impairment in Children With Concomitant Health Conditions or Nonmainstream Language Backgrounds.” The symposium, on Nov. 22 in Orlando, Florida, is open to all 2014 ASHA Convention registrants.
Attacking Aphasia With More Targeted Diagnosis and Treatment


Neural networks engaged for language processing in cognitively healthy individuals.
People who have aphasia from a stroke almost always have two questions: How much language will I recover? And how long will it take?
Today, speech-language pathologists don’t have the answers. But a team of researchers, led by Northwestern University’s Cynthia K. Thompson, has a $12 million NIH grant to research what variables affect an individual’s language recovery—and, given those variables, what the most effective treatments will be.
“We’re looking to understand how the brain recovers from stroke in three different language domains,” says Thompson, who is studying the domain of sentence processing. Swathi Kiran at Boston University and David Caplan at Harvard are investigating spoken naming, and Brenda Rapp at Johns Hopkins is looking at spelling and writing.
Using identical imaging methods, researchers at the three sites will examine the brains of more than 200 people with post-stroke aphasia to identify which variables in post-stroke brains may affect recovery of language and the associated neural networks—and how. These variables include the site and size of the stroke lesion, the extent of hypoperfused (underperfused) tissue, whether white matter is affected, and participants’ resting-state neuroactivity. Northwestern neurophysicist Todd Parrish, also part of the research team, is developing state-of-the-art automated systems to analyze the data.
“With this information, we can make better prognostic statements,” Thompson explains. “We can say, ‘With this size lesion at this site, with this hypoperfusion pattern and white matter damage, and considering the type and extent of language impairment, we anticipate recovery with this particular type of treatment.’”
Research shows that treatment affects brain function, Thompson says. “If we explicitly train particular language functions, the neural networks engaged for that function become active. And if we can identify regions of the brain most likely to be recruited into the language network, we can potentially push recovery by using noninvasive stimulation on those regions,” such as repetitive transcranial magnetic stimulation and transcranial direct current stimulation.
The behavioral treatments that correlate with imaging results will focus on what to treat, not necessarily how to treat, Thompson says. The researchers are using psycholinguistically motivated behavioral treatments that focus on complex words and sentences—an approach that seems counterintuitive to the practice of starting with simple, typical words and sentences and moving to the more complex. “We want to maximize the benefits of treatment by stimulating novel and/or existing neural pathways to access language that existed prior to brain damage,” Thompson says. “By facilitating pathways to more complex items in a domain, we simultaneously stimulate less complex items or structures in the same domain that rely on the same neural mechanisms.”
A control group that receives no treatment for six months will provide information about brain changes over time in chronic stroke-induced aphasia. “If people receive no treatment, are there changes in perfusion, white matter connections or activation patterns?” Thompson asks. “We think not, but we will have the data for comparison to those who receive treatment.”
— Carol Polovoy
Why Stuttering Sticks for Some—and How to Help
It’s a statistic speech-language pathologists well know. But it can be surprising to many outside the professions: The vast majority—80 percent—of children will recover from stuttering with or without treatment. But for the remaining 20 percent, the condition persists.
The question that drives Anne Smith and Christine Weber-Fox, co-directors of the Purdue Stuttering Project, is why?
“In our past research we’ve been looking at adults, investigating the physiological signature of stuttering, which we know begins between 2 and 5 years old,” says Smith, a neuroscientist at Purdue University. “Now our focus is what it is physiologically and behaviorally that will predict its persistence.”
Smith and Weber-Fox are teasing out that answer with support from a five-year, $3.1 million grant from NIH—part of Smith’s research that has been continuously funded since 1988. They’ve recruited 80 4-year-olds who stutter and 50 who don’t, and they’re following all for five years to identify factors associated with persistent stuttering.
The children complete behavioral and physiological tests that measure emotional factors and language and motor abilities. Children and their caregivers complete a standard questionnaire on temperament, and researchers use electroencephalography to see how children’s brains process language. Skin conductance and heart rate measures gauge their emotional responses during speech.
Smith and her team will pool these results with data from a previous cohort of 90 children. They’ll then use the results to design a more clinical phase of their research: a multisite project that uses these earlier results to design and test a multifactorial diagnostic tool to predict the probability of stuttering persistence.
The idea, says Smith, is that SLPs can one day assess preschool children and begin delivering treatment to those likely to continue stuttering. “This is the translational aspect—that we ultimately develop a battery of tests SLPs can use to do a risk analysis for stuttering, much as doctors do a risk analysis for heart disease,” Smith explains. “Early treatment is really critical, so we need a tool SLPs can give kids at young ages to project risk and need for treatment.”
— Bridget Murray Law
It’s All in the Genes


SLP Lisa Freebairn, a research assistant in Barbara Lewis’s lab, assesses a child’s speech sounds.
If a child has a speech-sound disorder, how will it affect the child’s social, academic and emotional life and behavior? The answer, according to Barbara Lewis and her research team, depends on a host of factors, some of them genetic.
Lewis, professor of psychological sciences and adjunct professor of pediatrics at Case Western Reserve University, has been working on this question for more than 25 years with continuous NIH funding. The original effort—to examine a group of people with speech-sound disorders of unknown cause to determine whether or not there was a genetic basis to the disorder—morphed into a longitudinal study of 4- to 6-year-olds, some of whom now have children of their own.
The study has become more complex with every new iteration of gene-identifying technology. But the researchers have identified common genes for speech-sound disorders, as well as genes with rare variants, and also found that some of those same genes are also associated with reading disabilities, attention-deficit hyperactivity disorder, and other language and learning problems. “What that means,” Lewis says, “is that children with speech-sound disorders may be at higher risk for other neurodevelopmental difficulties.”
People with co-morbidities—other neurodevelopmental difficulties in addition to the speech-sound disorder—are more at risk for problems with reading, language, speech, intelligence and behavior than those with a speech-sound impairment alone. And if the speech-sound disorder lasts beyond 7 or 8 years old, Lewis says, there is a greater risk for literacy and language-learning disorders, as well as for emotional and psychosocial difficulties.
The information researchers glean from this study will have implications for treatment. “Ultimately, we can tailor treatment for a kid with a speech-sound disorder to what’s influencing it,” Lewis says. “We can provide early, intense intervention to children who have biological risk factors.”
There are still many unanswered questions, Lewis says. For instance, do people with childhood apraxia of speech have a higher genetic load for speech-sound disorders than those with other types of speech-sound disorders? Is it a rare variant caused by a single gene? Why do more males have CAS than females?
Relating the genetics of neurodevelopment to behavior and outcomes is “much more complex than I ever thought it would be,” Lewis says. “But what we can say is that speech-sound problems are in part genetic. Now we’re trying to find out why.”
— Carol Polovoy
Correcting Speech by ‘Seeing’ the Sound
When people hear speech, do their brains also “see” the accompanying articulatory gesture—or linguistically meaningful movement of the vocal tract—and use that information to aid understanding? Researchers have been debating the question for 30 years, and most agree that the issue is settled. But they don’t agree on exactly how it’s been settled: Some believe the auditory signal suffices, while others, like Doug Whalen, believe human speech contains more than just sound.
At Haskins Laboratories in New Haven, Connecticut, Whalen’s team has worked for 14 years to unravel the tangled relationship between speech production and perception. “When you listen to speech and record speech, it’s an audio signal—so you think, how can there be articulation in the audio signal?” says Whalen, a distinguished professor in the Speech-Language-Hearing Sciences program at the Graduate Center of the City University of New York and Haskins vice president of research. “But there’s a fair amount of behavioral research and some neuroimaging research that show that people do, in fact, treat these signals as indicating what the gestures should have been in order to understand what was going on linguistically.”
And these longstanding theoretical questions also have clinical applications. In the current iteration of an NIDCD grant—funded since 1996, and totaling $736,771 for fiscal year 2014—Whalen’s team is exploring the practical use of feedback in speech-language treatment aimed at accent modification: If they show people what they should be doing with their articulators, will they do a better job? “The experiment we were designing yesterday was trying to use images of the tongue to show Spanish speakers what their tongue should look like for an /æ/ sound, because Spanish doesn’t have an /æ/,” Whalen says.
Prior research suggests that ultrasound feedback of the tongue for /r/ and /l/ can help Japanese speakers better understand the English distinction of /r/ and /l/.
A newly awarded NIDCD multisite (CUNY, University of Cincinnati, Haskins, New York University and Syracuse University) translational grant focuses on direct applications in the clinic—the use of ultrasound feedback for children who misarticulate /r/. “If you provide feedback about articulation of /r/ to kids with ultrasound images of their tongue, many of them improve,” Whalen says. “It’s a truly astonishing result. And so the purpose ... is to take these very solid theoretical results and make it possible for clinicians to apply them.”
— Matthew Cutter
Auditory Playing?
Auditory training for people with hearing aids can be excruciatingly boring, admits Nancy Tye-Murray, professor in the Department of Otolaryngology at the Washington University School of Medicine: “People will do the repetitive exercises, some with nonsense syllables, but they blank out and they’re not really learning.”
So it’s no wonder that so many hearing aids end up in people’s dresser drawers, rather than in their ears.
Tye-Murray and her multidisciplinary team of WU researchers (second-language expert Joe Barcroft, cognitive psychologist Mitchell Sommers and research audiologist Brent Spehar) have a four-year, $1 million NIH grant to develop effective auditory training that people want to perform and that helps them understand speech in their specific listening situations.
And they’re doing it with computer games.
The team is breaking down auditory training piece by piece to determine the best approach: Spaced or massed training? Synthetic speech or the recorded voices of communication partners? Meaning-based, engaging activities or nonsense syllables? One talker or many voices? Treatment in the lab or at the patient’s home?
The team has developed three computer-based games. In “Running Man”—developed with the assistance of Dennis Barbour in the university’s bioengineering department—users make word discriminations to send a runner through a changing landscape, increasing the runner’s speed with each correct response. In “Build a Paragraph,” users listen to a paragraph and then rearrange five sentences to correspond to what they have heard. “Murder Mystery” challenges users to listen carefully to dialog and then answer questions correctly to receive clues to solving a murder.
The games train at the word, sentence and comprehension levels, Tye-Murray explains. “We think improvement comes from a combination of all three, but we’ll see.”
In their first four-year grant cycle, the researchers showed that meaning-based auditory training—based on principles of second-language learning—led to measurable, sustained improvement in listening. Now they are looking to extend the length of the benefits and to include school-aged children.
Eventually, Tye-Murray says, this research will give audiologists an additional tool to help patients. “Audiologists will be able to provide not just hearing aids, but also auditory training easily and at a low cost,” she says. And speech-language pathologists, especially those in schools, will be able to provide children with exercises they want to do, rather than ones they have to do.
Tye-Murray also envisions connecting clients online so that they compete against one another in the games and so that clinicians can track their progress.
“This is not your father’s auditory training,” she says. “And it’s just the beginning of what’s possible.”
— Carol Polovoy