Project 2011: Computer-Aided Telepathic Communications




 Introduction and Inspiration:

 Brain-Computer Interfaces (BCIs) have been used in the past few years to allow humans to communicate non-verbally.


      Dr. Santhanam et al. (2006)

 –    Implanted a micro-electrode array on the brain

 –    Patients were able to “type” 15 words per minute using thought and no other communication form

      Dr. D’Zmura (2009)

 –    Working with a team on project “Silent Soldier”

      Emotiv Wireless Headset (2006 patent)

 –    Video-game brain-computer interface

 –    Non-invasive headset

      Dr. Deng et al. (2010)

 –    Identification of syllables using non-invasive EEG

 –    “ba” and “ku” syllables identified in different cadences


  Locked-in patients who cannot speak or otherwise physically communicate could benefit greatly from having a non-invasive system to communicate their thoughts. 


  A non-invasive BCI using electroencephalogram (EEG) signals has not yet been used to read human speech.  Breaking down and detecting the components of human speech (phonemes) that subjects are thinking, but not physically speaking, may be the key to reading unspoken speech.







 Hypothesis:

Specific, unique brain patterns in the form of spectrograms are hypothesized to accurately represent individual phonemes. It is further hypothesized that these phonemes can be interpreted in real time and translated into audio using a computerized text-to-speech voice.  Brain patterns in the form of spectrograms will be acquired using a self-constructed two-channel EEG apparatus.
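The “brain patterns in the form of spectrograms” mentioned above can be sketched numerically. Below is a minimal pure-Python spectrogram built from a sliding-window DFT; the window and hop sizes, and the 8 Hz test tone, are illustrative assumptions and not values from this project:

```python
import cmath
import math

def spectrogram(signal, fs, win=64, hop=32):
    """Magnitude spectrogram via a sliding-window naive DFT.
    frames[i][k] is the magnitude at time i*hop/fs seconds and
    frequency k*fs/win Hz (bins run up to fs/2)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        row = []
        for k in range(win // 2):
            s = sum(seg[t] * cmath.exp(-2j * math.pi * k * t / win)
                    for t in range(win))
            row.append(abs(s) / win)
        frames.append(row)
    return frames

# demo: an 8 Hz test tone sampled at 256 Hz -- with win=64 the bin
# resolution is 4 Hz, so the energy should land in bin k=2
sig = [math.sin(2 * math.pi * 8 * t / 256) for t in range(256)]
frames = spectrogram(sig, fs=256)
```

A pure tone produces a single bright row of bins over time; a thought-speech spectrogram would instead show the time-varying band structure that the hypothesis proposes is phoneme-specific.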



 Procedure:

1.   Test subjects were 9 adults over the age of 18 who had given written informed consent; each was asked to imagine speaking six specific phonemes.  Each phoneme was displayed on a computer screen for one second, followed by an interval slide before the next phoneme.  Each phoneme was displayed 20 times, and a baseline of brain activity was recorded for each phoneme (training dataset).  The process was repeated to create test data to compare against the reference (test dataset).  Data acquisition took place in a quiet, dimly lit room with no nearby AC electrical appliances and no interruptions. The phonemes used in the experiment were /w/, /ah/, /t/, /s/, /uh/, and /n/.
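The presentation protocol in step 1 can be sketched as a timing schedule. This is a hypothetical reconstruction: the interval-slide duration and the randomized trial order are assumptions, not details stated above.

```python
import random

PHONEMES = ["/w/", "/ah/", "/t/", "/s/", "/uh/", "/n/"]
REPS = 20          # each phoneme shown 20 times (per the procedure)
DISPLAY_S = 1.0    # phoneme on screen for one second (per the procedure)
INTERVAL_S = 2.0   # interval-slide duration -- assumed, not stated

def build_schedule(seed=0):
    """Return (onset_time_s, event) pairs for one acquisition run."""
    trials = PHONEMES * REPS                  # 6 phonemes x 20 reps = 120 trials
    random.Random(seed).shuffle(trials)       # randomized order -- an assumption
    schedule, t = [], 0.0
    for ph in trials:
        schedule.append((t, "show " + ph))
        t += DISPLAY_S
        schedule.append((t, "interval"))
        t += INTERVAL_S
    return schedule

sched = build_schedule()
```

In the actual project this role was played by an OpenViBE acquisition scenario; the sketch only makes the trial counts and timing concrete.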

2.    Test subjects were prepared by pressing silver electrodes directly on the skin with conductive 10-20 EEG paste, held in place with sponges and an elastic cap designed for this use.  The electrodes were placed at the F7 (Broca’s Area) and Fp1 (reference electrode) locations according to the International 10-20 system of electrode placement (Thompson, 2003).

3.    Data were collected using OpenViBE, open-source brain-computer interface software.  Datasets were collected twice for each subject (training, then test) using a data acquisition procedure programmed in the OpenViBE development environment.

4.    After acquisition, the “test” dataset was compared to the “training” dataset using Linear Discriminant Analysis (LDA).  Each member of the “test” dataset was compared to the training dataset to determine the successful detection rate per phoneme. Each of the collected datasets was compared and analyzed offline (the comparisons were made in the OpenViBE environment programmed for this task after the datasets had been recorded during the acquisition phase).  Simulation-based epochs (chunks of the data stream) were analyzed using the Fast Fourier Transform (FFT) in the preprocessing step before the resulting streamed matrix was sent to the Classifier Trainer (in the training step) or the Classifier (in the testing step).
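A minimal offline sketch of the pipeline in step 4 — band-power features from a naive DFT fed to a two-class LDA — in pure Python. The sampling rate, the specific bands, and the synthetic tone-plus-noise “epochs” are illustrative assumptions; the actual analysis ran inside OpenViBE scenarios.

```python
import cmath
import math
import random

FS = 256  # sampling rate in Hz -- an assumed value for illustration

def band_power(epoch, lo, hi, fs=FS):
    """Power of one epoch (a chunk of the data stream) in a frequency band,
    via a naive DFT (standing in for the FFT preprocessing step)."""
    n = len(epoch)
    total = 0.0
    for k in range(1, n // 2):
        if lo <= k * fs / n <= hi:
            s = sum(epoch[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            total += abs(s) ** 2 / n
    return total

def features(epoch):
    # two band-power features; the bands echo ranges from the results table
    return (band_power(epoch, 3, 8), band_power(epoch, 18, 33))

def train_lda(X0, X1):
    """Two-class LDA with a pooled 2x2 covariance: w = S^-1 (m1 - m0)."""
    mean = lambda X: [sum(x[i] for x in X) / len(X) for i in (0, 1)]
    m0, m1 = mean(X0), mean(X1)
    S = [[0.0, 0.0], [0.0, 0.0]]
    for X, m in ((X0, m0), (X1, m1)):
        for x in X:
            d = (x[0] - m[0], x[1] - m[1])
            for i in (0, 1):
                for j in (0, 1):
                    S[i][j] += d[i] * d[j]
    dof = len(X0) + len(X1) - 2
    S = [[S[i][j] / dof for j in (0, 1)] for i in (0, 1)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    dm = (m1[0] - m0[0], m1[1] - m0[1])
    w = (Sinv[0][0] * dm[0] + Sinv[0][1] * dm[1],
         Sinv[1][0] * dm[0] + Sinv[1][1] * dm[1])
    b = -(w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1])) / 2
    return lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# demo on synthetic epochs: class 0 carries a 5 Hz tone, class 1 a 25 Hz tone
rng = random.Random(0)
def epoch(freq):
    return [math.sin(2 * math.pi * freq * t / FS) + rng.gauss(0, 0.3)
            for t in range(FS)]

train0 = [features(epoch(5)) for _ in range(20)]   # "training" class 0
train1 = [features(epoch(25)) for _ in range(20)]  # "training" class 1
classify = train_lda(train0, train1)
tests = [(features(epoch(5)), 0) for _ in range(10)] + \
        [(features(epoch(25)), 1) for _ in range(10)]
accuracy = sum(classify(x) == y for x, y in tests) / len(tests)
```

The synthetic classes are separable by construction; real thought-speech epochs would be far noisier, which is exactly what the recognition rates in Table 1 measure.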






Challenges in Building the EEG Appliance:

 Removing Noise

 ·    Isolated the analogue PCB using a modified aluminum hard disk drive case

 ·    Isolated the modified aluminum hard drive case by suspending it on a plastic stand and securing it with duct tape

 ·    Isolated the analogue and digital PCBs using modified PC motherboard standoffs

 ·    Switched to shielded cabling and mini-XLR connectors

 ·    Shrink-wrapped all open connections

 ·    Used a “Driven Right Leg” (DRL) connection, with an electrode placed on the wrist, to remove mains hum (60 Hz)


(Mains-hum noise effects were first conclusively detected by conducting initial device tests inside a non-running automobile 30 meters from all AC electrical activity.)
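The DRL connection above is a hardware fix; residual mains hum is also commonly removed in software. Below is a minimal second-order IIR (biquad) notch filter at 60 Hz using the standard RBJ cookbook coefficients, offered purely as an illustrative complement — it is not part of the apparatus described here, and the 256 Hz sampling rate is an assumption:

```python
import math

def notch_60hz(signal, fs=256.0, q=30.0):
    """Second-order IIR notch at 60 Hz (RBJ cookbook form).
    q=30 gives roughly a 2 Hz-wide notch at fs=256 Hz."""
    w0 = 2 * math.pi * 60.0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw0 = math.cos(w0)
    # zeros on the unit circle at +/- w0 (total rejection at 60 Hz),
    # poles just inside it (narrow notch, flat response elsewhere)
    b0, b1, b2 = 1.0, -2 * cosw0, 1.0
    a0, a1, a2 = 1.0 + alpha, -2 * cosw0, 1.0 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# demo: a pure 60 Hz tone is suppressed once the filter settles,
# while a 10 Hz tone (inside the EEG bands analyzed here) passes
sig60 = [math.sin(2 * math.pi * 60 * t / 256) for t in range(512)]
out60 = notch_60hz(sig60)
sig10 = [math.sin(2 * math.pi * 10 * t / 256) for t in range(512)]
out10 = notch_60hz(sig10)
rms = lambda xs: (sum(v * v for v in xs) / len(xs)) ** 0.5
```

Because the analyzed EEG bands top out at 48 Hz, a narrow 60 Hz notch removes hum without touching the signal of interest.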








 Results:

The results for the 9 subjects are shown in Table 1.  These results reflect the percent recognition by the Classifiers for each phoneme in each indicated frequency range, summarized for all subjects.


Values are bold and red/underlined where the rate of recognition reaches statistical significance at the 95% confidence level using a one-sided Student’s t-test.  Rates of recognition were compared against the baseline rate expected if results were entirely random (1/6 ≈ 16.7%).
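The significance test described above takes only a few lines. The per-subject recognition rates below are hypothetical stand-ins, not values from Table 1; the critical value 1.860 is the one-tailed Student’s t at α = 0.05 with 8 degrees of freedom, matching 9 subjects.

```python
import math
from statistics import mean, stdev

CHANCE = 1 / 6       # six phonemes -> 16.7% recognition by pure chance
T_CRIT_DF8 = 1.860   # one-tailed Student's t, alpha = 0.05, df = 9 - 1

def significant(rates):
    """One-sided one-sample t-test: is mean recognition above chance?
    Valid only for 9 subjects, since the critical value is for df = 8."""
    n = len(rates)
    t = (mean(rates) - CHANCE) / (stdev(rates) / math.sqrt(n))
    return t > T_CRIT_DF8

# hypothetical per-subject rates for one phoneme/band cell of Table 1
sig_rates  = [0.30, 0.25, 0.18, 0.35, 0.22, 0.28, 0.15, 0.31, 0.26]
null_rates = [0.15, 0.18, 0.16, 0.17, 0.14, 0.19, 0.16, 0.17, 0.15]
```

Running `significant` over each phoneme-by-band cell reproduces the bold/underline marking scheme described above.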

Significant values and recognition of phonemes occurred for the /n/ phoneme in nearly all the selected frequency bands (3-8 Hz, 18-33 Hz, and 38-48 Hz).  The /uh/ phoneme appeared in the 28-48 Hz range.  The /ah/ phoneme was significant in the 3-8 Hz and 13-18 Hz ranges.  /t/ was found to be significant in the 33-38 Hz range.






 Conclusions:

 ·       A system for consistently determining phonemes from a two-electrode EEG was not completely successful.  The data do not support a consistent ability to read phonemes from the F7 location in the 10-20 system by itself using the current system.  Some phonemes were significantly apparent in more than one band, but the system as presented cannot be used for speech, as only 4 of the 6 phonemes could be adequately detected.

 ·       What now needs to be investigated is whether individual phonemes are only significantly detectable in specific frequency bands, or whether they should be significant across the entire frequency spectrum known for human speech (3-48 Hz).

 ·       Whether the use of a two-channel appliance affected the results is a matter of speculation.  Future work would need to determine whether having a 6-channel (or more) system additionally covering Wernicke’s Area, also known for higher language functions (Callies, 2006), would improve the detection of phonemes.

 ·       In this study, no attempt was made to “train” the user by giving real-time feedback as to whether their thinking matched the phoneme being detected.  The study was undertaken to find a system that does not require training, as many of the locked-in patients who need such a system would find training difficult.

 ·       The use of the Fast Fourier Transform in the pre-processing of detected signals could also be investigated.  One very positive outcome of this experiment is that scenarios were designed to acquire, train on, and classify brain signals in the OpenViBE environment.  In particular, the connection to a MATLAB server, developed for testing pre-processing methods not available in OpenViBE (e.g., the Hilbert-Huang Transform), makes future work in this area less time-consuming and more productive.

 ·       This experiment was important for determining whether a simple non-invasive EEG detection device could be used for the detection of phonemes.  Such work is important in determining the requirements for building a non-invasive, inexpensive-to-deploy machine that allows for the non-verbal communication of unspoken speech.