The Shape of Things to Come
Author Notes
  • Michael K. Wynne is an associate professor in the department of otolaryngology-head and neck surgery at Indiana University School of Medicine and serves as coordinator of clinical audiology at Riley Hospital for Children, University Hospital, and Wishard Memorial Hospital. Wynne has authored 50 publications and made more than 200 presentations on a broad range of topics, from applications in technology to hearing aids to speech and language development in children with hearing loss.
Features   |   August 01, 2001
The ASHA Leader, August 2001, Vol. 6, 6-18. doi:10.1044/leader.FTR1.06142001.6
I always face danger when writing about technology, especially if I address its future role and impact on the professions of speech-language pathology and audiology. The danger is not a failure to foresee which new devices and applications can be integrated into the clinical, research, and teaching environments. There is risk in that, but not danger.
Nor is the danger a failure to foresee which companies will survive, merge, or disappear from the radar in this very dynamic business climate. Companies that venture into the technology realm are like pioneers: they explore new territories while facing economic risks that may outstrip their resources. Some survive, but many succumb to the pitfalls of venture capitalism.
No, the danger comes from trying to foresee how audiologists and speech-language pathologists will adapt to the changes new technologies impose on their work, play, and home environments. This danger arises from the disparity in the application of technologies across groups of individual users.
Types of Users
Some users embrace new developments in technologies. They show off their new toys at meetings, extend applications into new clinical and investigative arenas, and want changes that they perceive as needs and not extravagant desires. These users are explorers, finding new and different tools and methods to accomplish goals. Over time, they will find some of these tools are cost-effective, whereas others are little more than new toys that distract rather than add to their effectiveness in daily activities.
Other users in our professions accept new technologies but, like connoisseurs of fine wine, use the technology “only in its time.” These users advance when risks are low and outcomes have been demonstrated. This group can be separated into two subgroups depending on how early or late they adopt new technologies.
The last group includes those users who simply can’t see how new technologies will make any real difference in their clinical outcomes or teaching effectiveness. They have been relatively successful in their daily activities so far, so, their thinking goes, why should anything change?
So the danger lies in targeting the audience. I could address the first audience by discussing molecular DNA processors that don’t depend on silicon technology; implantable neural and sensory interfaces that allow computers to augment cognitive functioning; artificial intelligence that will supersede human processing; the integration of the real and virtual worlds with diffuse boundaries; augmented reality (the real world overlaid with virtual imagery); and the continued integration of robotics into every facet of daily living. However appealing such notions are to us technogeeks, many audiologists and SLPs may find that they lack reality testing.
Still, if you were to ask a well-educated adult at the turn of the last century whether we would land on the moon; predict weather patterns over a five-day period; transmit bi-directional, real-time, two-dimensional audio/video images across thousands of miles for virtual meetings; or eliminate smallpox from the face of the planet, I imagine you would find the same degree of incredulity.
I could adopt a “look-what-technology-has-done-to-date” approach for the recalcitrant users. However, many readers would find the approach rather mundane and antiquated.
So, perhaps this article would be best addressed to the early and late adopters of technologies, that is, those who accept change when they see that change is inevitable. For these users, I foresee the following real changes in the integration of technologies over the next 10 years.
Going Wireless
First, the Bluetooth wireless standard promises to change how just about everything communicates. Bluetooth, named for the Viking King Harald Bluetooth of 10th century Denmark, is a short-range wireless networking standard that allows all manner of devices to communicate and transfer information. With a Bluetooth-enabled computer and personal digital assistant (PDA), synchronization between these two devices will occur on the fly and without connecting cables or aligning infrared ports. Your clinical schedule can be updated continuously while you are at work.
Because Bluetooth facilitates voice communication as well as data transmission, a Bluetooth-enabled hearing aid may be programmed on the fly to meet the needs of a particular listening environment and also receive direct transmissions from a speaker in that environment without an additional assistive listening device. Thus, during air travel, hearing aid users may have their devices programmed for aircraft noise and receive the in-flight movie directly into their hearing aids.
One sure thing I believe about the future is that it will be wireless. Everything we use, from our clothes and appliances to our computers, will be connected to each other through wireless ports (assuming that most of the security issues are worked out; keep a lookout for the final security rules of the Health Insurance Portability and Accountability Act).
Broader Bandwidth
Like everything else in data transmission, Bluetooth is limited in bandwidth; it currently operates in the 2.4 GHz radio band, like many late-model cordless telephones. Thus, increasing bandwidth is the name of the game for the future. Everything is about bandwidth. Bandwidth is an expression of how much data can be transmitted in a given period, often measured in bits per second. More data can be transferred over a broader bandwidth and, as a result, the user will have a faster connection.
The fastest bandwidth obtained with most dial-up telephone connections to the Internet is 56.6K bps. Digital subscriber line and cable Internet access can achieve a bandwidth of 1.5M bps. Multichannel Multipoint Distribution Service, a fixed wireless Internet service, and T1 or T3 hard-wired connections are also becoming more widely available at reasonable costs. Because of the increased use, and thus the increased need for transmission speed, most homes and businesses will move to a broadband provider in the next few years. Having made the switch myself over the last two years, I cannot see ever going back.
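To put those numbers in perspective, here is a rough back-of-the-envelope sketch, assuming an illustrative 10-megabyte file and ignoring protocol overhead, of how long a download would take at the two connection speeds cited above:

```python
# Rough transfer-time comparison for the connection speeds cited above.
# Assumes an illustrative 10 MB file and ignores protocol overhead.

FILE_SIZE_BITS = 10 * 8 * 1_000_000  # 10 megabytes expressed in bits

connections = {
    "56.6K dial-up": 56_600,       # bits per second
    "1.5M DSL/cable": 1_500_000,   # bits per second
}

for name, bits_per_second in connections.items():
    seconds = FILE_SIZE_BITS / bits_per_second
    print(f"{name}: about {seconds / 60:.1f} minutes")
```

By this estimate, the same file that ties up a dial-up line for roughly 24 minutes arrives over a 1.5M bps connection in under a minute, which is why the switch feels irreversible.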
A Speech Interface
Because computers are a ubiquitous feature of our landscapes, there is an increasing need for computers to become more human. That is not to say that computers will take on an android appearance within the next 10 years. Rather, computers will use speech as the interface. It is now possible to sit in a public square and shout commands at a computer and have it respond. In the next five to 10 years, computers will respond to voice commands from a variety of speakers and will provide spoken feedback with elements of a user-determined human personality.
I have always believed that one of the limitations of the infusion of technology into the clinical setting has been the reliance on the orthographic interface and video display. Using speech recognition software and a verbal interface, the clinician’s visual attention can be directed fully to the patient. For those of us who have been advocating “high tech, high touch” for years, we finally have the interface to eliminate two distractions from the clinical environment, the keyboard and monitor, while maintaining access to a computer for data input, analysis, and output.
Driving some of this technology is Voice Extensible Markup Language (VoiceXML). VoiceXML is a standard language for building interfaces between voice-recognition software and Web content. It translates voice commands and responses to and from any XML-tagged Web content that can be delivered by phone. Thus, anything that can be posted on the Web can now be accessed and retrieved using voice commands over the phone.
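As a minimal sketch of what such XML-tagged content looks like, the short Python snippet below uses the standard library’s xml.etree module to assemble a bare-bones VoiceXML document containing a single spoken prompt. The element names (vxml, form, block, prompt) come from the VoiceXML specification; the form name and greeting text are illustrative placeholders, not part of any particular clinical system.

```python
# Minimal sketch: assembling a bare-bones VoiceXML document with the
# Python standard library. Element names follow the VoiceXML spec;
# the form id and prompt text are illustrative placeholders.
import xml.etree.ElementTree as ET

vxml = ET.Element("vxml", version="1.0")
form = ET.SubElement(vxml, "form", id="greeting")
block = ET.SubElement(form, "block")
prompt = ET.SubElement(block, "prompt")
prompt.text = "Welcome to the clinic scheduling line."

# A VoiceXML gateway would render this document as speech over the phone.
print(ET.tostring(vxml, encoding="unicode"))
```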
Streaming Video and Telepractice
With advances in processing speed, memory allocation, broadband transmission, and speech processing, computer streaming video becomes the next media star. Everything from PC-to-PC videoconferencing in full screen to head-mounted displays to wearable retina projectors is available on the market today.
Everyone has a camera and that camera will be connected to the Internet. We will explore the supervision of students, assistants, and clinicians via Internet connections. Telehealth will become Internet-based. More and more of us will be “tele-living,” where we will access streams of video information through large wall monitors, head-mounted displays, and portable retina projectors. E-learning and e-commerce companies will take advantage of real-time streaming video and thus play an even larger role in our Internet economy. Display resolution, power consumption, and portability will significantly improve in the next five years.
Riding the Wave
I just have to look at my 15-year-old son and 13-year-old daughters to know where I need to be in the next five years. Although they are all engaged in activities away from the computer, they communicate, play, shop, and study electronically. They are “wired.”
Intel’s Itanium 64-bit processor chip is here now. Wireless Application Protocol (WAP), which standardizes Internet access for mobile phones, PDAs, and pagers, is here now. Microsoft Windows XP is here now. Voice portals are here now.
In this e-everything age, we need to expand our training and education to embrace the breadth of current and future technologies, determine which technological themes will emerge, and infuse those technologies into the work environment. It is a call to arms. It is a call to think differently and to understand that these technologies weave through the very fabric of our daily lives. It is a call to be early adopters and risk takers.