Neuroscientists at UC Berkeley conducted the study in 15 patients with epilepsy or brain tumors, using subdural electrocorticographic (ECoG) recordings to decipher speech processing in the cortex. Listening to words evoked activity in the superior and middle temporal gyri, cortical regions involved in speech comprehension. Each patient listened to the words for 5-10 minutes to collect enough data for analysis. Linear and non-linear regression algorithms were then trained to reconstruct the spectral features of the words from the patterns of brain activity. The reconstructed words were intelligible enough to be recognized, although they sounded as if spoken under water. The technology is still immature, but one day it may be able to convert ECoG activity in the auditory cortex into spoken language for patients with stroke, locked-in syndrome, or other disorders that paralyze the vocal cords and limbs (think of Stephen Hawking).
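To give a sense of the decoding approach, here is a minimal sketch of the linear-regression variant: a regularized linear model maps multichannel ECoG activity at each time point to the bins of a speech spectrogram. All data, dimensions, and the regularization strength are simulated assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_times = 500      # time samples (simulated)
n_electrodes = 32  # ECoG channels (simulated)
n_freqs = 16       # spectrogram frequency bins to reconstruct

# Simulate a "true" linear mapping from cortical activity to spectrogram,
# plus a small amount of recording noise.
W_true = rng.normal(size=(n_electrodes, n_freqs))
ecog = rng.normal(size=(n_times, n_electrodes))
spectrogram = ecog @ W_true + 0.1 * rng.normal(size=(n_times, n_freqs))

# Fit ridge regression on the first half of the data, test on the second.
split = n_times // 2
X_train, X_test = ecog[:split], ecog[split:]
Y_train, Y_test = spectrogram[:split], spectrogram[split:]

lam = 1.0  # assumed regularization strength
W_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_electrodes),
    X_train.T @ Y_train,
)

# Reconstruct the spectrogram from held-out brain activity and score it
# by the mean correlation between predicted and actual frequency bins.
Y_pred = X_test @ W_hat
r = np.mean([
    np.corrcoef(Y_pred[:, f], Y_test[:, f])[0, 1] for f in range(n_freqs)
])
print(f"mean reconstruction correlation: {r:.2f}")
```

In the actual study the reconstructed spectrogram would then be inverted back into audio, which is where the "under water" quality comes from: the fine temporal detail is lost, but enough spectral structure survives for words to be recognized.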