A showcase of developing hearing technology at the IEEE Engineering in Medicine and Biology Conference

  • Dorothy Hardy
    COG-MHEAR Research Programme Manager
The COG-MHEAR teams recently held a workshop to present their research at the IEEE Engineering in Medicine and Biology Conference (EMBC) in Glasgow. Academic and clinical researchers attended, joined by industry experts keen to hear about developments in the research programme.

An overview of the cross-disciplinary work required to develop new types of hearing aid was followed by talks on all aspects of the research, starting with a hands-on, real-time demonstration of tracking a person's lip movements as they talk. This technology enables one voice to be separated from multiple speakers and other background noise, which could make conversations much easier to follow in the many everyday environments where current hearing aid technology is known to be ineffective.

However, this type of lip tracking still requires a camera trained closely on the speaker's face, which some people might find uncomfortable. Wireless radio-frequency sensors could be used instead of cameras, so that no visual images would be needed. The same wireless sensing and machine learning technology could even be applied to British Sign Language (BSL) recognition.

Considerable signal processing, power consumption and communication bandwidth challenges must be overcome to separate speech from background noise without introducing a noticeable lag between what hearing aid users see and what they hear. The processing could be carried out, with encryption, in the cloud over the internet, or on hand-held devices. It draws on advances in the Internet of Things, machine learning, 5G communications and field-programmable gate arrays (chips whose circuitry can be reprogrammed). The teams outlined how all the processing could eventually be carried out offline, within the hearing aids themselves, including ambitious hardware implementations using skin-mounted flexible electronics.
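The latency constraint above can be illustrated with a simple end-to-end budget calculation. The stage timings and the 100 ms noticeability threshold in this sketch are illustrative assumptions, not measured figures from the programme:

```python
# Hypothetical latency budget for an audio-visual hearing aid pipeline.
# All per-stage timings and the threshold below are illustrative
# assumptions, not COG-MHEAR measurements.

SYNC_THRESHOLD_MS = 100  # assumed point at which audio/visual lag becomes noticeable

stage_latency_ms = {
    "video capture + lip tracking": 30,
    "audio capture + framing": 10,
    "speech separation (deep network)": 25,
    "radio round trip (5G / cloud)": 20,
    "playback buffering": 10,
}

def total_latency(stages):
    """Sum per-stage latencies to get the end-to-end delay in milliseconds."""
    return sum(stages.values())

def within_budget(stages, threshold_ms=SYNC_THRESHOLD_MS):
    """True if the pipeline keeps audio-visual lag below the threshold."""
    return total_latency(stages) <= threshold_ms

if __name__ == "__main__":
    print(total_latency(stage_latency_ms), within_budget(stage_latency_ms))
```

Moving the separation step on-chip would remove the radio round trip from this budget entirely, which is one motivation for the hardware work described above.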

Future on-chip processing could ensure complete privacy by keeping all data and processing local to the hearing aid user. The aim is to create brain-inspired, energy-efficient deep learning computation that runs ‘on chip’, within the hearing aid's own processing circuitry.

Video footage of a chat with a robot, against background noises such as music and a boiling kettle, showed how speech enhancement technology could help in getting instructions across not only to other humans but also to machines.

Good technology requires thorough testing. So our teams from Edinburgh Napier University and the University of Edinburgh launched the world’s first audio-visual speech enhancement challenge. More about that coming soon.

The practicalities of designing, marketing and fitting hearing aids were discussed, and the workshop ended with an overview of the challenges to be faced in making and using audio-visual hearing aids. How these new devices will look, feel and best be used is the current focus of discussion for the researchers, industry experts and end users who collaborate in the research. Select "get involved" on the COG-MHEAR website to join in: link