
March 2018, Volume 68, Issue 3

Pilot Study

Enabling profound hearing impaired children to articulate words using lip-reading through software application

Lozina Shoaib  ( School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad )
Sharifullah Khan  ( School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad )
Muhammad Azeem Abbas  ( Pir Mehr Ali Shah Arid Agriculture University, Rawalpindi, Pakistan )
Ahmad Salman  ( School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad )

Abstract

Objective: To mitigate the communication barriers of profound hearing-impaired children by enabling their word articulation ability.
Method: This pre-experimental pilot study was conducted from September 2016 to March 2017 at the National Special Education Centre for Hearing Impaired Children, Islamabad, Pakistan, and comprised deaf children of both genders aged 5-8 years. A specially designed software application for lip-reading was employed to help the subjects articulate words. Each participant received 125 lip-reading sessions using the application. Evaluation was performed in five steps after every 25 individual sessions by a sign-language teacher, a speech therapist and family members of the individual concerned. SPSS 23 was used for data analysis.
Results: Of the 20 children, 10(50%) each were boys and girls. All participants showed improved performance in articulating words with every passing session. The median performance score increased from the first assessment to the last (p<0.05).
Conclusion: The articulation of words by the profound hearing-impaired children after experimentation was usually comprehensible for an inexperienced or a lay listener.
Keywords: Persons with hearing impairments, Word articulation, Deaf, Software technology. (JPMA 68: 432; 2018)

Introduction

Profound hearing-impaired individuals face a number of daily life problems due to their inability to communicate. Adverse events are a common problem faced by patients with profound hearing impairment (PHI).1 An adverse event is the mistreatment of a patient caused by misleading communication from the patient or a misunderstanding by a health practitioner. Studies have reported that 3% to 17% of hospital cases involve adverse events,2,3 which affect older patients as well as children.4,5 Furthermore, communication disability also leads to depression and chronic diseases.2,6,7
It is well known that language barriers and communication disabilities have a strong correlation with poor quality of care. The reason is mainly attributed to the inability of health practitioners to understand and express themselves in sign language (SL). Moreover, SL is not uniform because it mostly includes informal or natural signs that vary from region to region, which makes it difficult to understand globally.8 Adverse events can be prevented if a patient can articulate his sickness.2,6 Modifying a patient's characteristics can mitigate the communication barrier and subsequently prevent possible adverse events.
Lip-reading is the ability to visually perceive the speech of a speaker without using any auditory equipment.9,10 Learning from facial expressions starts in infancy, and later on this learning can support the development of language. Lip-reading from a speaker's face is the traditional method of teaching spoken language to students with hearing impairment.11 It has been reported that speech awareness can be increased by watching a speaker's lip movements.11,12 On the other hand, the dedication required from a speaker (i.e., a speech therapist) makes this job cumbersome: pronouncing the same word repeatedly makes teaching difficult, if not impossible, and a child consequently loses engagement and interest in the learning process. Another factor is the financial overhead associated with the traditional lip-reading method. Commuting of a deaf person or a speech therapist for lip-reading sessions requires both human and financial support, and appointments with a speech therapist add further cost. The residential location of the deaf person also becomes a hurdle, as special training schools are mostly located in urban areas.
The current study was planned to mitigate the communication barriers of PHI children (PHIC) by enabling their word articulation ability using lip-reading through a proposed software application. The application demonstrates specially designed interactive lip-reading electronic contents, which a child can use repeatedly. Literature reveals that software applications have the potential to increase learning outcomes.13-15 However, less is known about the effectiveness of technology-oriented, self-paced lip-reading for language articulation by PHIC. A systematic literature review (SLR) of the domain showed that major literature sources, such as PubMed, ScienceDirect, IEEE and Springer, contain very few articles related to language articulation through lip-reading using a software application.16-22 This study sought to fill the gap by implementing an interactive word articulation software application; the empirical results of the interactive application constitute the novelty of the study.

Subjects and Methods

This pre-experimental pilot study was conducted from September 2016 to March 2017 at the National Special Education Centre for Hearing Impaired Children (NSEC-HIC), Islamabad, Pakistan, and comprised PHIC of either gender aged 5-8 years. The school comes under the federal government and imparts free education. As it is a school for hearing-impaired children, all the children communicate in the school through SL. The subjects selected were PHIC with hearing loss between 100dB and 120dB. They all had a single disability and had never undergone any speech therapy before the intervention, which meant they had no prior exposure to language learning using the lip-reading method. They came from low socio-economic backgrounds. Informed consent was obtained before the intervention from the institutional administration on behalf of the children's parents. The children participated in both group and individual sessions held on the proposed application.
An interactive software application was developed for articulating English words. The words selected for the intervention contained vowels at different positions, i.e., at the initial, middle and final positions of the words. The word set was designed by a speech therapist. The application demonstrated lip and mouth movements to show the pronunciations of the words. A child could choose a word of his choice and repeatedly play its associated lip and mouth movement, which gave the student a way to go through a word again. Words were depicted as text along with their associated imagery. The colourful and animated interface of the application attracted the students' attention. Eye contact during the lip-reading exercise was important for learning. A student performed lip-reading using the proposed application and articulated the learnt word. The lip-reading word set included 75 carefully selected common vocabulary words, mainly nouns and colour names, with 15 words for each vowel. The length of the words ranged from 3 to 7 characters. These training words involved one to multiple tongue placements and lip movements. A student started from a single tongue placement and lip movement and gradually progressed to multiple movements for a word's pronunciation.
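For illustration, the word set described above could be organised as in the following minimal sketch. The Word class, field names and media paths are hypothetical assumptions for exposition and do not reflect the actual LOSINA implementation.

```python
# A sketch of the study's word-set organisation: 75 common words,
# 15 per vowel, each tagged by vowel position (initial/middle/final)
# and linked to a picture and a lip/mouth movement clip.
# All names and paths below are hypothetical, not the LOSINA code.
from dataclasses import dataclass

@dataclass
class Word:
    text: str       # written form shown to the child (3-7 characters)
    vowel: str      # target vowel for this training item
    position: str   # "initial", "middle" or "final"
    image: str      # path to the associated picture (assumed layout)
    clip: str       # path to the lip/mouth movement animation (assumed)

def vowel_position(text: str, vowel: str) -> str:
    """Classify where the target vowel falls within the word."""
    i = text.lower().index(vowel)
    if i == 0:
        return "initial"
    if i == len(text) - 1:
        return "final"
    return "middle"

def make_word(text: str, vowel: str) -> Word:
    return Word(
        text=text,
        vowel=vowel,
        position=vowel_position(text, vowel),
        image=f"media/images/{text}.png",
        clip=f"media/clips/{text}.mp4",
    )

# e.g. a few items; the study used 15 such items per vowel, 75 in total
sample = [make_word("apple", "a"), make_word("red", "e"), make_word("banana", "a")]
for w in sample:
    print(w.text, w.vowel, w.position)
```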
Before the actual intervention for the articulation of words, two preparatory activities were performed as per the speech therapist's instructions. First, the children were given warm-up exercises, which included blowing out candles, blowing pieces of paper, chewing bubble gum and blowing balloons, both in group and individual sessions, to activate their mouth muscles and palate. Secondly, since the children did not have auditory memory, they were made to place their thumbs on the mentum, slightly under the chin, and near the nostril to sense the vibration of their word articulation.
Initially, group sessions were held to familiarise the children with the laptop computer and the proposed application, named Losina (Learning Application without Sign Language for Profound Hearing Impaired Children).23 Then individual sessions were conducted with each child (Figure-1).



During the study period, each student was taught how to articulate without any hearing aids. A total of 125 sessions were conducted for each child. Each session lasted approximately 40 minutes and trained one child in word articulation. One session per child took place in a day, and a total of six sessions were conducted for six different children per day; in other words, four hours a day were spent conducting sessions in this intervention. Moreover, it was observed that PHIC were often unwilling to attend sessions for multiple reasons, such as illness or other unfavourable circumstances. As a result, the gap between two successive sessions for each child was 3-4 days.
The training and testing sessions were performed separately and were administered on different days. The PHIC were assessed for word articulation, starting from vowels and moving to different words randomly, in each evaluation session. Each individual was assessed in five steps, after every 25 sessions (i.e., at sessions 25, 50, 75, 100 and 125), by three evaluators: a sign-language teacher, parents/guardian and a speech therapist. The evaluation used a Likert scale ranging from 0 to 2, where 0 stood for unintelligible, 1 for intelligible and 2 for good. The assessment contained 72 questions with a maximum total score of 144. PHIC learnt through visual aids and utilised their visual memory rather than auditory memory. To prevent the children from relying on photographic (i.e., short-term) memory during assessment, the evaluation sessions were not conducted immediately after each training session; the aim of carrying out separate evaluation sessions after every 25 sessions was to assess the long-term memory of the PHIC. The sign-language teacher was present in all the initial group sessions and individual sessions for the support and motivation of the children. The assessment forms were designed with the help and consensus of the sign-language teacher and the speech therapist to assess the articulation of words by a child and his/her behaviour. The assessments were conducted on the designed forms. SPSS 23 was used for data analysis.
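A minimal sketch of this scoring scheme follows: 72 items, each rated 0 (unintelligible), 1 (intelligible) or 2 (good), giving a maximum total of 144 per evaluator. The evaluator labels and ratings are illustrative placeholders, not the study's records.

```python
# Scoring sketch for one child's assessment, as described above.
# Ratings below are invented placeholders, not the study's data.
RATINGS = {0: "unintelligible", 1: "intelligible", 2: "good"}
N_ITEMS, MAX_SCORE = 72, 144

def total_score(ratings):
    """Sum one evaluator's 72 item ratings into a 0-144 total."""
    assert len(ratings) == N_ITEMS and all(r in RATINGS for r in ratings)
    return sum(ratings)

# hypothetical ratings from the three evaluators (each list has 72 items)
assessment = {
    "sign_language_teacher": [2] * 40 + [1] * 20 + [0] * 12,
    "speech_therapist":      [2] * 35 + [1] * 25 + [0] * 12,
    "family":                [2] * 30 + [1] * 25 + [0] * 17,
}
for evaluator, ratings in assessment.items():
    print(evaluator, total_score(ratings), "/", MAX_SCORE)
```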

Results

Of the 20 subjects, 10(50%) each were boys and girls. The performance of the children increased from the first to the last session. The box plots showed that the improvement became more pronounced as the number of sessions increased, with a gradual increase in the median score and a decrease in the spread of the boxes. The short height of the boxes at the 100th and 125th sessions indicated a uniform learning gain across participants, whereas the taller boxes at earlier sessions reflected differences between children's learning scores, although the median still followed an increasing trend (Figure-2A-C).
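The sketch below reproduces the style of this box-plot view (total scores per child, grouped by assessment point) under simulated data; the generated scores are illustrative placeholders, not the study's measurements.

```python
# Box plots of per-child total scores at the five assessment points,
# mimicking the rising median and narrowing spread described above.
# Data are simulated placeholders, not the study's results.
import random
import matplotlib.pyplot as plt

random.seed(0)
sessions = [25, 50, 75, 100, 125]
# simulated scores for 20 children: median rises, spread narrows
scores = [[random.gauss(40 + 15 * i, 12 - 2 * i) for _ in range(20)]
          for i in range(5)]

plt.boxplot(scores)
plt.xticks(range(1, 6), [str(s) for s in sessions])
plt.xlabel("Sessions completed")
plt.ylabel("Total articulation score (0-144)")
plt.title("Word articulation scores across assessment points")
plt.show()
```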



The test comparing the median score at the 25th session with that at the 125th session showed a statistically significant increase in word articulation by PHIC (p<0.05). In the early sessions (i.e., 25, 50, 75), the children did not perform well, particularly in the assessments performed by their family members compared with the other two evaluators, possibly because they were not expecting evaluation from their family members, or were shy and/or uncooperative during the evaluation. However, with every passing session, the results showed a steady increase in word articulation. The score after 125 sessions was twice that after the initial 25 sessions, reflecting a substantial gain in the ability to articulate words (Figure-2D).
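The article names SPSS 23 but not the specific test used for this paired comparison; a Wilcoxon signed-rank test is one standard choice for comparing paired scores and is assumed in the sketch below, with illustrative data in place of the study's scores.

```python
# Paired comparison of per-child scores at the 25th vs 125th session.
# The Wilcoxon signed-rank test is an assumption (the article does not
# name the test); the score lists are invented, not the study's data.
from scipy.stats import wilcoxon

scores_25  = [40, 45, 38, 50, 42, 47, 39, 44, 41, 46,
              43, 48, 37, 49, 40, 45, 42, 46, 39, 44]   # after 25 sessions
scores_125 = [85, 92, 80, 98, 88, 95, 82, 90, 86, 94,
              89, 96, 79, 99, 84, 93, 87, 95, 81, 91]   # after 125 sessions

stat, p = wilcoxon(scores_25, scores_125)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")  # p < 0.05 -> significant gain
```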

Discussion

A marked improvement was observed in the articulation ability of the participants. Even though the experimental period was short (i.e., six months), positive changes in speaking words were recorded for all participants. The articulation of words by a participant after the experimentation was easier for an inexperienced or lay listener to understand, which shows the significance of the proposed software application.
An unexpected benefit of the present work, reported by the teachers, was the improved behaviour of the participants after the intervention. An increased confidence level in several participants was also observed, which provides evidence that the proposed application conveyed the target goal to its users. Moreover, participants showed increased attention towards the lessons being taught. On inquiry, the participants with increased confidence were willing to take part in the experiment again. On several occasions, these participants spoke a word immediately after seeing the lip-reading demonstration for previously practised objects; when a new object was shown, they first tried to grasp the lip and mouth movements for it. This showed that the participants had understood well the process of learning and speaking using lip-reading through the application. Similarly, several participants spoke words loudly to show their confidence.
The results show that the problem of reworking with a student was solved by the use of the proposed application. Traditionally, after several sessions conducted in a day, students need to redo previous exercises because of the lack of practice and learning activity at home. With the proposed application, students can learn at their own pace, even at home. The improvement in natural voice quality, fluency and clear audibility of the tested words within a short time span, and with no other formal intervention, can therefore be attributed to the proposed software application used in this study.
The evaluations made by the speech therapist, teacher and family indicated that the intervention had enabled the participants to recognise the lip-reading method for language learning. This ability had a positive effect on the participants in terms of their confidence and cooperation. Moreover, knowing about lip-reading will provide a continuous learning opportunity to the participants in the future. Only a limited number of words were used in the study, but participants who can recognise lip-reading can learn new words from different sources. It is apparent from the results that increasing the number of training sessions using the application improved language learning. In the final conversation, the teacher of the deaf students showed a positive attitude towards using the proposed application for improving language learning skills.
One of the limitations of the present work is that no formal assessments were conducted for new, unseen words; only a few informal tests were carried out with a few participants, without formally recording the outcomes. Moreover, nasality was not a parameter of consideration in the present work. Although the study involved PHIC not directly associated with a hospital setting, the problem raised by the present study can be supported by the reported results.

Conclusion

The use of the software application for enabling the word articulation ability of deaf children was found to be effective. The proposed application initiated and improved the word articulation ability of the subjects. Moreover, a positive effect on the children's behaviour, such as increased confidence and cooperation, was observed.

Disclaimer: None.
Conflict of Interest: None.
Source of Funding: None.

References

1.  Baker GR, Norton PG, Flintoft V, Blais R, Brown A, Cox J, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. Can Med Assoc J. 2004;170:1678-86.
2.  Bartlett G, Blais R, Tamblyn R, Clermont RJ, MacGibbon B. Impact of patient communication problems on the risk of preventable adverse events in acute care settings. Can Med Assoc J. 2008;178:1555-62.
3.  Kovshoff H, Banaschewski T, Buitelaar JK, Carucci S, Coghill D, Danckaerts M, et al. Reports of perceived adverse events of stimulant medication on cognition, motivation, and mood: qualitative investigation and the generation of items for the medication and cognition rating scale. J Child Adolesc Psychopharmacol. 2016;26:537-47.
4.  Dalton DS, Cruickshanks KJ, Klein BEK, Klein R, Wiley TL, Nondahl DM. The impact of hearing loss on quality of life in older adults. Gerontologist. 2003;43:661-8.
5.  Djernes JK. Prevalence and predictors of depression in populations of elderly: a review. Acta Psychiatr Scand. 2006;113:372-87.
6.  Davis P, Lay-Yee R, Briant R, Scott A. Preventable in-hospital medical injury under the "no fault" system in New Zealand. Qual Saf Heal Care. 2003;12:251-6.
7.  Costello JM, Patak L, Pritchard J. Communication vulnerable patients in the pediatric ICU: Enhancing care through augmentative and alternative communication. J Pediatr Rehabil Med. 2010;3:289-301.
8.  Xu KA. Facilitating American sign language learning for hearing parents of deaf children via mobile devices. Georgia Institute of Technology. 2013;5:34-45.
9.  Ladner RE. Communication Technologies for People With Sensory Disabilities. Proc IEEE. 2012;100:957-73.
10.  Zhang Z, Qu W, Liu F. Review of the lip-reading recognition. In: 2014 IEEE 5th International Conference on Software Engineering and Service Science. 2014; pp 593-6.
11.  Heikkilä J, Lonka E, Ahola S, Meronen A, Tiippana K. Lipreading ability and its cognitive correlates in typically developing children and children with specific language impairment. J Speech Lang Hear Res. 2017;60:485-93.
12.  McCarthy RA, Warrington EK. Cognitive neuropsychology: a clinical introduction. London: Academic Press, 2013.
13.  Caudill JG. The growth of m-learning and the growth of mobile computing: Parallel developments. Int Rev Res Open Distrib Learn. 2007;8:200-23.
14.  Sharples M. Mobile learning: research, practice and challenges. Distance Educ China. 2013;3:5-11.
15.  Valk J-H, Rashid AT, Elder L. Using mobile phones to improve educational outcomes: An analysis of evidence from Asia. Int Rev Res Open Distrib Learn. 2010;11:117-40.
16.  Hammami S, Saeed F, Mathkour H, Arafah MA. Continuous Improvement of Deaf Student Learning Outcomes based on an Adaptive Learning System and an Academic Advisor Agent. [Online] 2017 [Cited 2017 July 04]. Available from: URL: http://www.sciencedirect.com/science/article/pii/S0747563217304181
17.  Yue WS, Zin NAM. Voice recognition and visualization mobile apps game for training and teaching hearing handicaps children. Procedia Technol. 2013;11:479-86.
18.  John ES, Rigo SJ, Barbosa J. Assistive robotics: adaptive multimodal interaction improving people with communication disorders. IFAC-PapersOnLine. 2016;49:175-80.
19.  Shoaib L, Iqbal MA. Learning technologies for the hearing impaired. In: 12th International Conference on Frontiers of Information Technology. 2014; pp 366-71.
20.  Middleton A, Niruban A, Girling G, Myint PK. Communicating in a healthcare setting with people who have hearing loss. BMJ. 2010; 341:c4672.
21.  Zárate S. Subtitling for deaf children: Granting accessibility to audiovisual programmes in an educational way. London: University College London, 2014.
22.  Abdulghafoor MS, Ahmad A, Huang J-Y. Survey on the use of applications for deaf and hard of hearing literacy. In: International Conference on Computer, Communications, and Control Technology. 2015;4:242-7.
23.  Lozina S, Khan S, Abbas MA, Salman A. Learning Application without Sign Language for Profound Hearing Impaired Children (LOSINA). [Online] 2017 [Cited 2017 Aug 25]. Available from: URL: https://github.com/azeemabbas/losina.
