Academic Edge | April 01, 2012
Multidimensional Student Assessment
Author Notes
  • Holly L. Storkel, PhD, CCC-SLP, is an associate professor in the Intercampus Program in Communicative Disorders at the University of Kansas, Lawrence. She is an affiliate of ASHA Special Interest Groups 1, Language Learning and Education, and 10, Issues in Higher Education. Contact her at hstorkel@ku.edu.
  • Mary Beth Woodson, MA, is a doctoral candidate in Film and Media Studies at the University of Kansas and a graduate program assistant at the University’s Center for Teaching Excellence. Contact her at mbwoodso@ku.edu.
  • Jane R. Wegner, PhD, CCC-SLP, is a clinical professor in the Intercampus Program in Communicative Disorders at the University of Kansas, Lawrence, and director of the Schiefelbusch Speech-Language Hearing Clinic. She is an affiliate of Special Interest Groups 10, 11, Supervision and Administration, 12, Augmentative and Alternative Communication, and 16, School-Based Issues. Contact her at jwegner@ku.edu.
  • Debora B. Daniels, PhD, CCC-SLP, is a clinical associate professor in the Intercampus Program in Communicative Disorders at the University of Kansas Medical Center. She is an affiliate of Special Interest Groups 11 and 16. Contact her at ddaniels@kumc.edu.
The ASHA Leader, April 2012, Vol. 17, online only. doi:10.1044/leader.AE.17052012.np
Does a student’s high grade-point average in her speech-language pathology graduate program mean that the student has the skills and knowledge to be an effective clinician?
Not necessarily, because it likely doesn’t take into account such important competencies as diagnostic, treatment, and oral presentation skills. To address this issue, our program—the University of Kansas master’s program in speech-language pathology—requires students to keep portfolios, online repositories of their graded course assignments and critiqued materials from clinical experiences.
Using these portfolios, we can assess students’ knowledge and skills more fully and improve their learning in response to the Council on Academic Accreditation in Audiology and Speech-Language Pathology (CAA) standards requiring graduate programs to use formative assessment (see sidebar). The CAA defines formative assessment as “ongoing measurement during educational preparation for the purpose of improving student learning.” Formative assessment provides qualitative data that complement the quantitative data provided by the grade-point average. Moreover, it offers a means of identifying areas for improvement that are common across diverse experiences, even when those areas might not strongly affect the final grade in any one experience.
Getting Started
Although the CAA standards change was the original impetus, the faculty also wanted to find ways to identify and measure learning goals as a more effective means of evaluating program success and identifying areas for revision. This motivation is common at the program level, according to the National Institute for Learning Outcomes Assessment. We identified four broad skill areas (as well as more specific skills) related to evidence-based practice for evaluation:
  • Foundational knowledge (e.g., understanding basic concepts, terminology, and theory; ability to find evidence).

  • Application and use (e.g., developing assessment and treatment plans).

  • Analytical processes (e.g., analyzing and integrating assessment findings; monitoring treatment progress).

  • Communication skills.

We identified where in our curriculum we taught these skills [PDF], and found a variety of ways for students to acquire skills in different areas of practice. Although the offerings seemed sufficient, the question of whether students were actually learning the skills still remained.
To address this question, we adopted a student portfolio system in which students archive graded course assignments and critiqued materials from clinical experiences (with client names removed) in an individual online repository each semester.
We conducted two pilot studies to develop the portfolio guidelines [PDF], which have been in use since 2009.
Developing New Assessments
Each student’s portfolio serves as the foundation for formative and summative assessment. Formative assessment takes place after the student has completed two to three semesters of graduate work; summative assessment takes place in the student’s last semester.
Both types of assessment begin with student self-reflection and self-evaluation. We developed two self-reflection rubrics, one for diagnostic skills [DOC] and one for treatment skills [DOC], with four levels of performance identified for each skill. Students rate themselves on each rubric and also complete an initial action plan [DOC] to reflect on their progress to date.
The student’s advisor reviews these self-reflections and the contents of the student’s portfolio. The advisor adds his or her reflections on the student’s progress to the action plan.
The formative assessment process culminates with a meeting between the student and advisor to discuss the student’s progress and ways to enhance learning during the remainder of the program. The formative assessment chart [PDF] shows frequent strengths, weaknesses, and recommended action plans from assessments conducted in 2010.
The summative assessment process culminates in a formal oral exam conducted by a three-person faculty committee. For this final assessment, the student chooses three artifacts to present [PDF] and responds to faculty questions related to the skill areas. We designed a rubric [XLS] to prompt questions and evaluate student performance in each area.
At the conclusion of the summative exam, the committee reaches a consensus rating of the student on the rubric and discusses this rating with the student. The committee completes the action plan, with particular focus on recommending continuing education activities and discussion topics for the student’s clinical fellowship supervisor to facilitate the student’s successful transition to the workforce.
Feeding Continuous Program Improvement
Copies of each student’s summative rubric (with names removed) are archived for review by the program’s Curriculum Committee. Every year, the data are summarized by computing the percentage of students who earned a particular rating in each skill area (see chart [PDF]). Because only one round of summative exams has been completed to date, the data have been used primarily to refine the portfolio and assessment procedures.
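As a purely illustrative sketch (not part of our actual procedures or software), the yearly summary could be produced by a short script such as the one below, which tallies the percentage of students earning each rating in each skill area. The rating labels and data layout shown are assumptions made for the example, not the program’s actual rating scale.

from collections import Counter, defaultdict

# Hypothetical, anonymized rubric records: (skill area, consensus rating), one per student.
# The rating labels below are illustrative assumptions, not the program's actual scale.
ratings = [
    ("Foundational knowledge", "Meets expectations"),
    ("Foundational knowledge", "Developing"),
    ("Application and use", "Meets expectations"),
    ("Analytical processes", "Exceeds expectations"),
    ("Communication skills", "Meets expectations"),
]

# Count how often each rating was earned within each skill area.
counts = defaultdict(Counter)
for skill_area, rating in ratings:
    counts[skill_area][rating] += 1

# Report the percentage of students earning each rating in each skill area.
for skill_area, tally in counts.items():
    total = sum(tally.values())
    for rating, n in sorted(tally.items()):
        print(f"{skill_area}: {rating} = {100 * n / total:.0f}%")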
We already have noticed benefits to our program. The summative assessment has given us a shared body of student work, which has fostered faculty discussions about program goals, pedagogical practices, and ways to support student success. In addition, we’ve evaluated faculty and student satisfaction with this system of accountability through anonymous online surveys. Results generally have been positive:
  • 89% of faculty felt that the portfolio and assessment process captured student learning and that the summative exam was an efficient and accurate means of evaluating student learning.

  • 82% of students felt that the portfolios helped them learn about their own skills and set goals for their studies.

Student impressions of the summative exam, however, were more mixed. Although some found it stressful, others found it useful. One student noted, “I found the experience to be very rewarding and [it] gave me additional confidence going into the interview process. I think it is beneficial for us as students to gain the experience in public speaking and thinking on our feet.”
Assessing student learning is challenging, and it has taken us several years to develop our current process. The rewards, however, are worth the effort. As educators of the next generation of clinicians, we find it gratifying to see evidence that students are mastering the skills important for success as clinicians.
Calls for More Comprehensive Academic Assessment

In 2001, ASHA announced changes to the standards for the Certificate of Clinical Competence that would take effect in speech-language pathology in 2005 and in audiology in 2007.

The new standards require graduate programs to use formative assessments to measure students’ knowledge and skills. To help academic programs prepare their students to meet these new standards, the Council on Academic Accreditation in Audiology and Speech-Language Pathology (CAA) and the Council for Clinical Certification (CFCC) released guidelines for developing assessment plans.

The guidelines call for:

  • A “Teaching/Learning/Retention/Application” process to assess student progress.

  • Students to identify, analyze, synthesize, and apply cumulative knowledge and skills.

  • Remediation, where warranted, until the student masters knowledge and/or skills.

  • Incorporation and integration of academic and clinical education.

  • Assessment beyond course-by-course exams.

  • All faculty to be involved in assessing students.

  • Assessments to be conducted in the context of ASHA’s scopes of practice in audiology and speech-language pathology and the ASHA Code of Ethics.

  • Assessments to determine student competence as defined by the certification standards.

Department of Education

In 2005, U.S. Secretary of Education Margaret Spellings convened a 19-member Commission on the Future of Higher Education to recommend a national strategy for reforming post-secondary education, with a particular focus on how well colleges and universities prepare students for the 21st-century workplace.

Heralded as “No Child Left Behind” for universities, the 2006 Spellings Commission report called for greater accountability in higher education, with a particular emphasis on documenting student learning outcomes. Although not all of the commission’s recommendations have been realized (see The Chronicle of Higher Education), universities have made efforts to document learner outcomes.

Higher Learning Associations

The Association of Public and Land-grant Universities and the American Association of State Colleges and Universities have developed a voluntary accountability system; the Association of American Colleges and Universities has developed a set of rubrics for assessing a variety of learner outcomes.

According to the National Institute for Learning Outcomes Assessment (NILOA), more assessment activity is taking place at the program—rather than the institutional—level, and most importantly, those assessment activities are being used to improve student learning (see Ewell, Paulson, & Kinzie, 2011; Kuh & Ikenberry, 2009). However, NILOA also suggests that accountability at the program level would be facilitated by “more substantive information about assessment techniques and experiences elsewhere” and calls for “more program level case studies” (Ewell, Paulson, & Kinzie, 2011, p. 20).

Sources
Ewell, P. T., Paulson, K., & Kinzie, J. (2011, June). Down and in: Assessment practices at the program level (Program Level Survey Report). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (www.learningoutcomesassessment.org/NILOAsurveyresults11.htm).
Kuh, G. D., & Ikenberry, S. O. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (www.learningoutcomesassessment.org/MoreThanYouThink.htm).