Abstract:
One aim of this article is a brain-computer interface based on silent-speech decoding from electroencephalography (EEG) signals. Such an interface can help patients with locked-in syndrome communicate with the world around them. In addition to silent-speech decoding from EEG, overt and semi-overt speech decoding were also investigated. The collected data comprise three syllables (/ka:/, /fi:/ and /su:/), six vowels (/æ/, /e/, /au/, /a:/, /i:/ and /u:/) and a resting state, all in Persian. The database was collected from 5 subjects under 3 protocols: overt speech without vibration of the vocal cords, semi-overt speech (vocal tract forming without pronouncing) and covert (silent) speech. Feature vectors, combining empirical mode decomposition with common spatial pattern (CSP) filters, were extracted from the EEG signals, and classification was performed by non-linear support vector machines. Extracting five feature vectors (energy, variance, zero-crossing rate, skewness and kurtosis) from the CSP-filtered data gave a significant improvement over using the variance feature alone (about 3% higher accuracy on average, p-value ≤ 0.05). There was no significant difference between the results on the vowel and syllable databases, nor between the three protocols, which indicates the adequacy and advantage of the covert-speech protocol.
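The pipeline summarized above (CSP spatial filtering followed by extraction of the five statistical features) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the empirical mode decomposition stage and the non-linear SVM classifier are omitted, the channel/sample counts are invented, and the data are synthetic random trials.

```python
import numpy as np

def csp_filters(X1, X2, n_components=2):
    """Compute CSP spatial filters from two classes of EEG trials.
    X1, X2: arrays of shape (trials, channels, samples)."""
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Whiten the composite covariance, then diagonalise class-1 covariance
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T      # whitening matrix
    d, B = np.linalg.eigh(P @ C1 @ P.T)
    W = (B[:, np.argsort(d)[::-1]].T @ P)             # full CSP projection
    # Keep the first and last components (most discriminative pairs)
    pick = np.r_[:n_components, -n_components:0]
    return W[pick]

def stat_features(x):
    """Five features per filtered signal: energy, variance,
    zero-crossing rate, skewness and kurtosis."""
    mu, sd = x.mean(), x.std()
    energy = np.sum(x ** 2)
    variance = x.var()
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    skewness = np.mean(((x - mu) / sd) ** 3)
    kurtosis = np.mean(((x - mu) / sd) ** 4) - 3.0
    return np.array([energy, variance, zcr, skewness, kurtosis])

def trial_features(W, trial):
    """Project one trial (channels, samples) through the CSP
    filters and stack the per-component feature vectors."""
    return np.concatenate([stat_features(row) for row in W @ trial])

# Toy example: two classes of synthetic 8-channel, 256-sample trials
rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 8, 256))
X2 = rng.standard_normal((20, 8, 256)) * 1.5
W = csp_filters(X1, X2, n_components=2)
fv = trial_features(W, X1[0])
print(fv.shape)  # 4 CSP components x 5 features -> (20,)
```

In practice each such feature vector would be fed to a non-linear (e.g. RBF-kernel) support vector machine, and the multi-class syllable/vowel problem handled with a one-vs-rest or one-vs-one scheme.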
Date of Conference: 24-25 November 2016
Date Added to IEEE Xplore: 06 April 2017