InnerSpeech uses multimodal machine-learning algorithms to decode inner speech from brain signals, providing a non-invasive way to overcome the limitations of current communication options for people with motor speech disorders. The technology has the potential to significantly improve communication for people with severe physical disabilities and to transform how people interact with computers and other devices. By decoding inner speech into text, it opens new possibilities for those who cannot communicate through traditional spoken or written language.
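To make the decoding idea concrete, below is a minimal sketch of the kind of pipeline such a system might build on: a classifier trained to map multichannel brain-signal recordings to words in a small vocabulary, which can then be emitted as text. Everything here is an assumption for illustration only: the synthetic data, the channel and sample counts, the word list, and the simple linear model, which stands in for the multimodal deep models a real system would likely use. It is not the team's actual method.

```python
# Illustrative sketch only: decoding a small vocabulary of imagined words
# from multichannel brain-signal trials. All data is synthetic; channel
# counts, window lengths, and the word list are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

VOCAB = ["yes", "no", "help", "water"]   # hypothetical target vocabulary
N_CHANNELS, N_SAMPLES = 32, 256          # assumed channels x time samples per trial
N_TRIALS_PER_WORD = 50

# Synthesize trials: each word gets a distinct spatial-temporal pattern plus noise.
patterns = rng.normal(size=(len(VOCAB), N_CHANNELS, N_SAMPLES))
X, y = [], []
for label, pattern in enumerate(patterns):
    for _ in range(N_TRIALS_PER_WORD):
        trial = pattern + rng.normal(scale=2.0, size=pattern.shape)
        X.append(trial.reshape(-1))      # flatten channels x time into one feature vector
        y.append(label)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A linear classifier stands in for the multimodal models a real system would use.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_train, y_train)

print("held-out accuracy:", decoder.score(X_test, y_test))
print("decoded text:", [VOCAB[i] for i in decoder.predict(X_test[:5])])
```

In practice the hard parts lie outside this sketch: acquiring clean non-invasive signals, extracting robust features, and scaling from a fixed word list to open-vocabulary text.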
Mr Li Wang-yau* (Alumnus, Department of Linguistics and Translation, City University of Hong Kong)
Miss Li Wang-yan (The Chinese University of Hong Kong)
* Person-in-charge
(Information based on the team's application form)
- CityU HK Tech 300 Seed Fund (2023)
- HKSTP IDEATION Programme (2023)