What is the role of facial expression control input in motor and cognitive performance modification using a visual-haptic interface?

Updated: Aug 20, 2019
Author: Morris Steffin, MD; Chief Editor: Jonathan P Miller, MD


For severely motor-impaired patients (eg, quadriplegics), the extremity videospace monitor approach fails because the patient cannot produce the volitional extremity movement needed to generate a haptic input signal. As an alternative, video processing of the patient's facial expression can perform this task. [11] This method is potentially simpler and more reliable to implement than other current approaches, such as EEG-driven input, because no electrodes need be applied to the patient's head, and voice recognition may require excessive processing time. The only requirements for facial control are a video camera mounted to view the patient's face and a self-contained, freestanding single-board video digital signal processor running algorithms under development in the author's laboratory.
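The source does not specify the author's algorithms, but the general idea of turning facial video into a control signal can be illustrated with a minimal sketch. Here frames are modeled as small 2-D grayscale arrays, and inter-frame motion in the (assumed already-cropped) face region is thresholded into a binary control stream; the function names, threshold, and frame format are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch: derive a scalar control signal from facial video frames.
# Frames are 2-D lists of grayscale intensities; in a real system they would
# come from a camera plus a face-detection/cropping stage (assumed here).

def mean_abs_diff(prev, curr):
    """Mean absolute pixel difference between two equally sized frames."""
    total = 0
    count = 0
    for row_prev, row_curr in zip(prev, curr):
        for p, c in zip(row_prev, row_curr):
            total += abs(p - c)
            count += 1
    return total / count if count else 0.0

def control_signal(frames, threshold=10.0):
    """Emit 1 when inter-frame facial motion exceeds the threshold, else 0.

    Such a scalar stream could gate an assistive actuator, in the spirit
    of the video-derived haptic input described above.
    """
    return [
        1 if mean_abs_diff(prev, curr) > threshold else 0
        for prev, curr in zip(frames, frames[1:])
    ]

# Example: a still face, then a deliberate expression change.
still = [[100, 100], [100, 100]]
moved = [[140, 100], [100, 60]]
print(control_signal([still, still, moved]))  # -> [0, 1]
```

In practice the thresholding stage would be replaced by whatever facial-feature metrics the processing algorithms extract, but the output remains a low-dimensional control signal suitable for driving an interface without electrodes or voice input.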

Such techniques have been applied to the detection of behavioral states, particularly drowsiness [12] and loss of consciousness, in addition to seizure detection. [9, 10] For example, such a paradigm can detect sudden loss of consciousness, as in pilots undergoing high acceleration. [13] With these techniques, scalar processing of converted video facial input can be used to develop robotic assistance regimens. Work is proceeding in the author's laboratory on algorithms to realize this goal.
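A behavioral-state detector of the kind cited above can likewise be sketched in a few lines. This hedged example assumes an upstream facial feature extractor that reports a per-frame "eye openness" value in [0, 1]; the detector flags sustained closure while ignoring ordinary blinks. The threshold and window length are illustrative, not taken from the cited work.

```python
# Hypothetical drowsiness-style detector: flag when an assumed per-frame
# "eye openness" measure stays below a threshold for several consecutive
# frames. A brief blink resets the run and does not trigger the alarm.

def sustained_closure(openness, threshold=0.2, min_frames=3):
    """Return True if openness stays below threshold for at least
    min_frames consecutive samples (sustained eye closure)."""
    run = 0
    for value in openness:
        if value < threshold:
            run += 1
            if run >= min_frames:
                return True
        else:
            run = 0  # eyes reopened; reset the closure run
    return False

print(sustained_closure([0.9, 0.1, 0.9, 0.8]))       # brief blink -> False
print(sustained_closure([0.9, 0.1, 0.1, 0.1, 0.2]))  # sustained -> True
```

The same run-length logic extends to sudden loss of consciousness: an abrupt, persistent drop in facial activity metrics would trigger the detector rather than gradual eyelid closure.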
