Project 2012: Unspoken Speech Detection Using a Brain-Computer Interface

Data Acquisition:

  1. In the Linear Discriminant Analysis (LDA) training scenario, the recorded brainwaves, along with their associated labels, were split into epochs and then transformed with the Fast Fourier Transform (FFT). 
  2. The resultant spectra were separated into six different frequency ranges (8-11, 12-15, 16-19, 20-23, 24-27, and 28-31 Hz), and the average amplitude within each range was calculated.
  3. The six frequency range averages were sent to individual LDA trainers to be compared in a one-versus-all fashion (the first LDA trainer compared the letter A against all other letters, the second compared B against all others, and so on).
  4. In the Bayes training scenario, the same recorded brainwaves were split into the same six frequency ranges and classified by the trained LDA classifiers.
  5. The resultant target and non-target class labels were coupled with a “letter label” (the letter occurring at that instant) and written to a Comma-Separated Values (.csv) file, similar to an Excel spreadsheet.
  6. In the final, online scenario, incoming brainwaves were split into the same six frequency ranges and then classified using LDA.
  7. The target and non-target class labels were sent to the Bayes classification box, and the Bayes box returned a prediction for the letter that was thought of. 
  8. If the prediction matched the target letter, the user would be prompted to complete the pattern for the next letter; if not, they would be prompted to complete the last letter again.
  9. Upon reaching the final letter, the word completion time was output.
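
The band-averaging in steps 1 and 2 can be sketched as below. This is an illustrative NumPy sketch, not the project's actual signal chain; the sampling rate and single-channel epoch shape are assumptions, while the six band edges come from step 2.

```python
import numpy as np

SAMPLE_RATE = 512  # Hz -- assumed EEG sampling rate, not stated in the writeup
BANDS = [(8, 11), (12, 15), (16, 19), (20, 23), (24, 27), (28, 31)]  # Hz, from step 2

def band_averages(epoch, fs=SAMPLE_RATE):
    """Return the mean FFT amplitude in each of the six bands
    for one single-channel epoch (steps 1-2)."""
    spectrum = np.abs(np.fft.rfft(epoch))              # amplitude spectrum
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)    # frequency of each FFT bin
    feats = []
    for lo, hi in BANDS:
        mask = (freqs >= lo) & (freqs <= hi)           # bins inside this band
        feats.append(spectrum[mask].mean())            # average amplitude in the band
    return np.array(feats)
```

A one-second epoch at 512 Hz gives 1 Hz bin spacing, so each band averages exactly four FFT bins.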
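
The one-versus-all training in step 3 might look like the following. The project used dedicated LDA trainer components; this is a minimal two-class Fisher LDA written from scratch in NumPy, and the class and function names are hypothetical.

```python
import numpy as np

class BinaryLDA:
    """Minimal two-class Fisher LDA: project onto w = Sw^-1 (mu1 - mu0)
    and threshold at the midpoint of the projected class means."""
    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        # small ridge term keeps the within-class scatter matrix invertible
        self.w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)
        self.threshold = 0.5 * ((X0 @ self.w).mean() + (X1 @ self.w).mean())
        return self

    def predict(self, X):
        return (X @ self.w > self.threshold).astype(int)  # 1 = target, 0 = non-target

def train_one_vs_all(features, letter_labels, alphabet):
    """One binary LDA per letter (step 3): its own letter is the target
    class, every other letter is the non-target class."""
    return {letter: BinaryLDA().fit(features, (letter_labels == letter).astype(int))
            for letter in alphabet}
```

Each row of `features` would be one six-value band-average vector from steps 1 and 2, labelled with the letter shown at that instant.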
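
The Bayes stage in steps 5 through 7 could be sketched as a naive Bayes-style model: estimate how often each letter produced a target versus non-target label from the CSV of step 5, then pick the letter that best explains a new sequence of LDA outputs. The exact model the project's Bayes box used is not specified here, so this is an assumed formulation with hypothetical names.

```python
from collections import Counter, defaultdict
from math import log

def train_bayes(pairs):
    """pairs: (class_label, letter) tuples, as recorded in the CSV of step 5.
    Returns Laplace-smoothed log P(label | letter) for each letter."""
    counts = defaultdict(Counter)
    for label, letter in pairs:
        counts[letter][label] += 1
    model = {}
    for letter, c in counts.items():
        total = sum(c.values())
        model[letter] = {lab: log((c[lab] + 1) / (total + 2))  # +1/+2 Laplace smoothing
                         for lab in ('target', 'non-target')}
    return model

def predict_letter(model, observed):
    """Step 7: return the letter maximizing the summed log-likelihood
    of a sequence of observed target/non-target labels."""
    return max(model, key=lambda letter: sum(model[letter][lab] for lab in observed))
```

In the online loop of steps 6 through 9, the LDA outputs for each incoming epoch would be appended to `observed`, and the prediction compared against the current target letter.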