Improving short-term memory formation by neural recording and stimulation in the prefrontal cortex

NTZ blog has been paying close attention to neural prosthetic devices aimed at improving cognitive functions. We have presented the beneficial effects of transcranial direct-current stimulation (tDCS) of the temporal lobe and frontal cortex, DBS of the fornix, and recording in the CA3 region of the hippocampus coupled with stimulation of the CA1 region. The last-mentioned study was performed in a rat preparation in 2011 by Prof. Ted Berger at the University of Southern California and Prof. Samuel Deadwyler at Wake Forest University. Merely a year later, the same group of researchers accomplished a new feat: closed-loop recording and stimulation in the rhesus monkey's prefrontal cortex, an important location for decision-making and short-term memory processes. The same multi-input/multi-output (MIMO) nonlinear dynamic model was applied for decoding, enhancing, and re-encoding the firing patterns. To access the prefrontal cortex, ceramic-substrate multisite electrodes were chronically implanted, targeting the supragranular layer 2/3 and the infragranular layer 5. Cocaine was used to disrupt cognitive activity, simulating a brain injury. As you can see in the graph above, memory-task performance was fully restored by MIMO-patterned electrical stimulation during task execution. The far-reaching goal of the project is to replace the memory-forming process in a brain area damaged by stroke, dementia, or another disorder by using a neuroprosthetic device interfaced to a healthy decision-making area of the brain.
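At its core, the MIMO approach fits a model that predicts one layer's spiking from another's and then drives the stimulator with that prediction. The toy sketch below is my own illustration, with a plain linear least-squares fit standing in for the nonlinear Volterra-kernel model used in the actual studies; the unit counts and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: binned spike counts from an "input" layer 2/3 (8 units)
# and an "output" layer 5 (4 units), over 1000 time bins.
X = rng.poisson(2.0, size=(1000, 8)).astype(float)
W_true = rng.normal(size=(8, 4))
Y = X @ W_true + rng.normal(scale=0.5, size=(1000, 4))

# Fit the input-to-output mapping by least squares (a stand-in for
# the nonlinear dynamic model used in the actual study).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Re-encode": given new input-layer activity, predict the
# output-layer pattern that the stimulator should impose.
X_new = rng.poisson(2.0, size=(1, 8)).astype(float)
stim_pattern = X_new @ W
print(stim_pattern.shape)  # (1, 4): one stimulation value per output unit
```

The real system closes this loop in hardware: recorded layer 2/3 activity flows through the fitted model, and the predicted layer 5 pattern is delivered as electrical stimulation in real time.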

Video: futuristic use of retinal implants for enhanced visual perception

Two students, Eran May-raz and Daniel Lazo, studying at the Bezalel Academy of Arts and Design in Jerusalem, created the short film above. It offers a glimpse of near-future augmented visual perception in the form of visual overlays for the world in front of our eyes. Every aspect of our daily activities, from cutting vegetables to picking an outfit for a date, becomes a part of the game. The proposed visual overlays and the communication link to a PC/smartphone can potentially be implemented using contact lenses with an embedded OLED screen and a Bluetooth chip. However, even inclusion of motion tracking inside the contact lens would probably not provide smooth tracking of eye movements and saccades. For more natural integration of external visual information and an OLED overlay, the latter could be placed in front of the retina (epiretinally). Alternatively, a retinal stimulation chip coupled to a video camera and the smartphone could be placed subretinally or suprachoroidally. Augmented visual perception is likely to gain more interest in the near future as the retinal implant market is rapidly expanding worldwide.

Direct-to-consumer transcranial direct-current stimulation device

A few months ago, our blog post discussed the possibility of direct-to-consumer neurotech devices reaching the market in the near future. Among available brain stimulation techniques, transcranial direct-current stimulation (tDCS) is a fairly simple non-invasive method for cortical stimulation. A recent study, conducted by researchers at the University of New Mexico, seems to support the cognitive-enhancing effect of the technique. Learning and performance in a shooting video game were increased two-fold following 30-min tDCS as compared to control, even after a one-hour delay. Other studies found beneficial tDCS effects on working and visual memory. Capitalizing on the enthusiasm generated by these studies, the website GoFlow is planning to offer a DIY kit for tDCS with a price tag of $99! Understandably, the kit is very bare-bones, and includes only a battery, a few scalp electrodes, a resistor, and a potentiometer. I would venture to guess that the kit will attract avid brain-hacking enthusiasts and perhaps some students desperately trying to memorize the material before an exam. For the rest of us, let's wait for a more mature product to hit the shelves.
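For the electrically curious, the kit's parts list implies a simple series circuit, and Ohm's law tells you roughly what current reaches the head. The component values below are my own assumptions for illustration, not GoFlow's published specs:

```python
# Ohm's-law sketch of the kind of circuit the kit describes:
# a 9 V battery, a fixed safety resistor, and a potentiometer,
# all in series with the electrode-scalp impedance.
V_battery = 9.0        # volts (assumed battery)
R_fixed = 1_000.0      # ohms (assumed safety resistor)
R_scalp = 5_000.0      # ohms (typical electrode-scalp impedance)

def current_mA(R_pot):
    """Current through the head for a given potentiometer setting (ohms)."""
    return 1000.0 * V_battery / (R_fixed + R_pot + R_scalp)

# Dialing the pot from 0 to 10 kohm spans roughly 0.56 to 1.5 mA,
# bracketing the 1-2 mA range used in most tDCS studies.
print(round(current_mA(0), 2), round(current_mA(10_000), 2))
```

Note that scalp impedance varies widely between people and electrode preparations, which is exactly why commercial stimulators regulate current rather than voltage.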

Decoding individual words from human superior temporal gyrus

Neuroscientists at UC Berkeley conducted a study in 15 patients with epilepsy or a brain tumor, in which subdural electrocorticographic (ECoG) recordings were used for deciphering speech processing in the cortex. Brain activity was induced in the superior and middle temporal gyri, the cortical regions involved in speech comprehension, by listening to words. Each word was played to a patient for 5-10 minutes to collect enough data for analysis. The spectral features of the sounds were used in linear and non-linear regression algorithms in order to reconstruct the words from the pattern of brain activity. The reconstructed words were intelligible enough to be recognized, although they sounded as if spoken under water. The proposed technology is still rather immature, but one day it will hopefully be able to convert the ECoG activity in the auditory cortex into spoken language for patients with stroke, locked-in syndrome, and other disorders resulting in paralysis of the vocal cords and arms (think of Stephen Hawking).
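The regression step can be pictured as mapping a matrix of ECoG features to a matrix of spectrogram values. Here is a deliberately simplified sketch with synthetic data and a plain ridge-regression solution; the real pipeline, its feature set, and its dimensions are far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the decoding step: map ECoG features
# (n_elec electrodes x time) to an audio spectrogram (n_freq bands).
# All dimensions and data here are invented for illustration.
n_time, n_elec, n_freq = 500, 32, 16
H_true = rng.normal(size=(n_elec, n_freq))
ecog = rng.normal(size=(n_time, n_elec))
spec = ecog @ H_true + rng.normal(scale=0.1, size=(n_time, n_freq))

# Ridge regression: H = (X'X + lam*I)^-1 X'Y
lam = 1.0
H = np.linalg.solve(ecog.T @ ecog + lam * np.eye(n_elec), ecog.T @ spec)

# Reconstruct the spectrogram from the brain activity; inverting the
# spectrogram back to a waveform is what yields the "underwater" words.
recon = ecog @ H
corr = np.corrcoef(recon.ravel(), spec.ravel())[0, 1]
print(corr > 0.9)  # True for this low-noise toy example
```

In the actual study the mapping also spans multiple time lags per electrode, which is what lets the model capture the temporal dynamics of speech.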

Consumer-oriented neural interfaces for non-medical applications

As documented in other posts on this blog, multiple neural prosthetic devices are currently being developed by startup companies throughout the US, Europe, and Asia. Practically all of these startups are pursuing the well-established R&D strategy of building a device to treat a specific neurological disorder and going through a lengthy process toward eventual FDA approval and reimbursement by private and government-run health insurance companies. In keeping with this R&D strategy, the resulting device is usually fully implanted and contains only the circuitry needed for its primary function of treating a specific disorder. The device is designed for autonomous operation without user accessibility, and any software tuning or upgrade requires a physician and specialized clinical equipment. These features are aimed at limiting the manufacturer's and surgeon's liabilities.
Here, I would like to propose the possibility of developing consumer-oriented neural interfaces. Such a strategy is inspired by recent developments in the consumer electronics industry and, particularly, by the wide adoption of body-worn health monitoring gadgets (such as the sleep sensor Lark, the EEG monitoring device Mynd, and the muscle stimulator Compex). The proposed new strategy requires a fundamental shift in user attitudes toward body-worn neural interfaces. Instead of treating the neural interface as a “band-aid” for restoring a lost or damaged neurological function, users would treat the neural interface as a sensory or motor extension of their own nervous system. The following table illustrates the key attributes that differentiate a conventional neurological treatment device from a consumer-oriented device:

| Attribute | Conventional device | Consumer-oriented device |
| --- | --- | --- |
| Usage | Repair of lost/damaged neural functions | Enhanced use of existing neural functions and prevention of their decay (e.g. due to Alzheimer's) |
| Customer | Hospital, doctor | End user |
| Reimbursement | Health insurance company | End user |
| Implanted components | Electrodes, active electronics | Electrodes only (minimally invasive placement) |
| Body-worn interface (BWI) | Telemetry for battery re-charging and data input/output | User-controlled multi-purpose graphical computer interface |
| Placement of BWI | Inconspicuous or hidden from view | Prominent |
| Operation of BWI | Primarily by a physician | By the end user |
| Communication with other devices | None (standalone use) | Standard wireless protocols (Bluetooth, WiFi, 3G) |


As can be seen in the table above, the fundamental changes in the R&D strategy touch every aspect of the device, from its marketing to its configuration, operation, and user control. The reduced complexity and size of the implanted device are crucial for allowing a minimally invasive implantation that can be performed by a neurologist (rather than a neurosurgeon) in an outpatient clinic. A simple implantable device combined with a simple surgery can dramatically reduce the overall user cost (perhaps to a sub-$10,000 level) and therefore make the devices suitable for non-medical applications, such as memory improvement, cognitive training, and around-the-clock personal assistance.

Continuing the parallel with consumer electronics, let's think for a moment about our computer use just 10 years ago. The computers back then served specific functions, such as data entry, word processing, accounting, etc. Our everyday lives, however, were rather “un-tethered”, as we lived oblivious to the possibility of constant access to our email inbox or a Facebook status. There is no denying that we are evolving into a new social species, “homo twitterus”, with a reported ~60% of smartphone users waking up voluntarily during the night to check their messages.

Now let's compare that with our evolving attitude toward neural interfaces. In the classic SciFi movies Star Trek: First Contact (1996) and The Matrix (1999), the images of brain and spinal interfaces were positively repulsive. A decade later, in the movie Tron: Legacy (2010), the Identity Discs worn by the Grid inhabitants, prominently featured on their backs, appear rather attractive and stylish. The public interest in consumer-oriented neural interfaces may start initially among techno-gadget aficionados and gradually spread to the general population. A similar evolution has occurred with computer use, which has now reached the stage where pure functionality and low cost of the device are no longer as important as its esthetic, social-status, and “coolness” appeal (think of Apple's MacBook Air, iPad, and iPhone). While many Android phones are arguably more feature-rich and less expensive than the iPhone 4S, Apple Inc. is enjoying robust growth by strengthening its deep personal relationship with customers and by changing their lifestyle in a profound way. The proposed consumer uses of neural interfaces can bring such a device-user relationship to a whole new level, with the person's everyday life depending on bidirectional exchange with their body-worn personal assistant.
A rich virtual environment provided by the neural interface could be used, for example, by retired baby-boomers for muscle exercise and rehabilitation; memory improvement and cognitive fitness; and learning of visual and motor skills (e.g. golf, tennis, driving). Many other applications, perhaps even more pervasive and lifestyle-changing (such as novel sensory/motor modalities), could emerge as the neural interface technology takes hold in society.

DBS for Alzheimer’s disease

As the number of people with Alzheimer's disease (AD) rises with the aging population, there is an increasing urgency in developing an effective approach to slow its progression. Despite the efforts of pharmaceutical companies, currently approved drugs provide only modest effects and are often difficult to target to the brain without incurring systemic side effects. The possibility of using electrical stimulation to combat the disease had not been considered until a serendipitous discovery reported in 2008 by Dr. Andres Lozano, a neurosurgeon at the University of Toronto. He applied DBS to the satiety-controlling region of the brain, the fornix, in a patient with morbid obesity, hoping to reduce the sensation of hunger. Surprisingly, psychological tests showed a significant improvement in the patient's memory. The follow-up study in AD patients, published in 2010, showed that fornix stimulation can slow the memory decay. The authors of the study speculate that the possible mechanism of action involves plasticity in the limbic circuitry counteracting the AD-related neurodegeneration. As a result of these findings, a startup company called Functional Neuromodulation Inc. was formed in 2010 to commercialize the use of fornix DBS for AD patients. It recently obtained funding from Genesys Capital and Medtronic to conduct a second clinical trial in AD patients. It is worth mentioning that other companies, such as Medtronic and St. Jude Medical, hold considerable intellectual property on electrical stimulation of other limbic areas, such as the anterior thalamic nucleus, internal capsule, and subgenual cingulate cortex, which may also play an important role in the memory formation process. We will anxiously await further developments in the use of DBS to counteract the progression of AD.

Active tactile exploration using a brain-machine-brain interface

The quest for highly functional neuroprosthetics for activities of daily living has implicitly assumed that the neural interface would include both motor and sensory (i.e. tactile and proprioceptive) functionalities. It is likely that for reaching and grasping tasks, dynamic sensorimotor programs will need to be developed to enable dexterous control. Interestingly, the neural decoding, stimulation, and hardware principles for sensorimotor interfaces are often developed in isolation in motor-only or sensory-only studies. In this week's issue of the journal Nature, a new study was published by Prof. Nicolelis' group at Duke University attempting to create a bi-directional sensorimotor neural interface for reaching tasks. Primates used both direct brain motor control and artificial tactile sensory feedback delivered back to the brain to complete the task. Both the motor and sensory channels bypassed the subject's body, effectively liberating the brain from the physical constraints of slow nerve impulse propagation through the nervous system. Potential use of such bidirectional control is not limited to artificial limbs and can include fast communication with a variety of external sensors and actuators.
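The logic of such a brain-machine-brain loop can be sketched in a few lines: decode movement from motor-cortex firing rates, move the virtual hand, and encode the texture under the hand as a stimulation rate for somatosensory cortex. This is a schematic of my own in the spirit of the study, not the authors' decoder; the weights and texture map are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Schematic closed loop: 64 motor-cortex units -> 2D hand velocity,
# and virtual texture under the hand -> ICMS pulse rate (pulses/s).
W_dec = rng.normal(scale=0.1, size=(64, 2))   # invented decoder weights
textures = {0: 0.0, 1: 50.0, 2: 100.0}        # texture id -> ICMS rate

pos = np.zeros(2)
for step in range(100):
    rates = rng.poisson(5.0, size=64).astype(float)
    velocity = rates @ W_dec          # motor channel: brain -> actuator
    pos += 0.01 * velocity
    texture_id = int(np.hypot(*pos) // 1.0) % 3
    icms_rate = textures[texture_id]  # sensory channel: sensor -> brain
print(pos.shape)  # (2,)
```

The point of the sketch is that neither channel touches the body: the decoded velocity drives an avatar, and the texture signal returns directly as intracortical microstimulation.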

Hippocampus implant enhances memory formation in rats

In the DARPA-led project REMIND, Prof. Ted Berger from the University of Southern California and Prof. Samuel Deadwyler from Wake Forest University have been developing an innovative type of neural prosthetic device for restoring and enhancing the formation of long-term memories. Their strategy is to build a computational model of the information processing in the hippocampus and use it as a substitute for normal memory encoding in people with brain trauma, dementia, stroke, and other disorders affecting learning. In their new work, the scientists describe achieving an important milestone: improving memory formation in laboratory rats. In the behavioral tests, the rats were trained to remember the lever location and, after being distracted, had to recollect which lever to push. Two 16-electrode devices were implanted bilaterally for recording communication between the CA3 and CA1 sub-regions of the hippocampus. After the CA3 neuronal activity was recorded during successful recollection of the lever location, it was played back during the next recollection trial by stimulating the neurons in CA1. And the rats displayed an amazing 20% improvement in their memory recollection (see the figure). Then the scientists did something even more remarkable. They temporarily blocked the intrinsic CA1 activity (using a glutamate receptor antagonist), fully substituting it with the electrical stimulation. And the animals were able to remember the lever location equally well or even better than with their natural CA1 processing! These findings generate a lot of excitement, but the scientists are still facing a long road ahead to develop a fully functional replacement for the hippocampus. One major challenge will be to build a scaled-up device for recording the activity of thousands of neurons in the hippocampus.
Another hurdle, perhaps even more significant, will be to create a memory encoder that can go beyond replaying previously-remembered tasks and create brand-new memories. After all, learning something new is a lot more exciting than, say, reciting the Pythagorean theorem for the N-th time.

Hi-def and infrared vision in a subretinal implant from Retina Implant AG

A remarkable milestone has been reached in the resolution of retinal implants: a whopping 1520 pixels (38×40)! Following on the heels of the recent success of the Argus II retinal implant developed by Second Sight, this implant by the German company Retina Implant AG brings a 25-fold increase in resolution and several other unique features. Its subretinal placement is closer to the retinal pigment epithelium than can be achieved with epiretinal placement. This provides more selective stimulation of photoreceptors and results in further improvement in the implant's resolution. The light-sensing circuitry (silicon photodiodes) is built into the implant, allowing it to move along with the eye movements. This is beneficial for more natural cortical processing of visual information, as the visual map in the visual cortex is adjusted during each saccade. Other types of retinal implants use an external video camera (usually mounted on the glasses) that does not adjust the video information during the eye movements. The implant is 3 x 4 mm and 50 µm thick. In addition to vision restoration, the implant provides a first-ever vision-enhancing capability: sensitivity to near-infrared light. Extending the spectrum of perceived light can have some interesting implications, such as the ability to see the thermal shape of an object (its black-body radiation) even in complete darkness. The ongoing research by Prof. Eberhart Zrenner at the University of Tuebingen aims to evaluate these implants and develop strategies for further improvements in their sensitivity and targeting. According to the paper published in the November issue of Proceedings of the Royal Society B, the implants have been tested in three patients with hereditary retinal degeneration. All patients could locate bright objects on a dark table, and one patient discerned shades of grey with only 15% contrast.
An important question for the retinal implant community, so far not answered by the study, is how many pixels in the implant provide truly unique information to the retina, and whether this spatial threshold has been reached with the 70-µm spacing used in the implant. The answer has far-reaching implications for further technology development: 1) whether further improvements in the density of planar arrays will translate into more focal stimulation, and 2) whether the stimulating sites should be microfabricated to extend from the chip toward the retina in order to achieve the intended 70-µm spatial resolution.
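It is worth putting these numbers into perspective with a quick back-of-the-envelope calculation. Using the figures from the post (38×40 pixels at 70-µm pitch) and the common approximation of ~288 µm of retina per degree of visual angle (an assumed conversion factor, not from the study):

```python
# Back-of-the-envelope geometry for the chip, using the figures in the
# post (38x40 pixels, 70 um pitch) and the common estimate of ~288 um
# of retina per degree of visual angle.
pitch_um = 70.0
n_x, n_y = 38, 40
um_per_deg = 288.0

active_x_mm = n_x * pitch_um / 1000.0    # active area width, ~2.66 mm
active_y_mm = n_y * pitch_um / 1000.0    # active area height, ~2.80 mm
fov_x_deg = n_x * pitch_um / um_per_deg  # field of view, ~9.2 degrees
pitch_deg = pitch_um / um_per_deg        # angle subtended by one pixel

print(n_x * n_y, round(fov_x_deg, 1), round(pitch_deg * 60))  # pixels, deg, arcmin
```

So the 1520-pixel array covers roughly a 9-degree field, and each pixel subtends on the order of 15 arcminutes, which gives a rough feel for why grey-shade discrimination at low contrast is already a notable result.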

Neurotech for achieving supernatural memorizing skills

The Centre for the Mind at the University of Sydney, directed by Prof. Allan Snyder, in collaboration with the Neuromodulation Lab of Spaulding Rehabilitation Hospital in Boston, has undertaken an interesting study to demonstrate how neuromodulation can unleash supernatural sensory abilities hidden in normal people. Their study, published in the September issue of Brain Research, shows that a 13-minute application of transcranial direct-current stimulation (tDCS), with cathodal (inhibitory) current on the left anterior temporal lobe and anodal (excitatory) current on the right anterior temporal lobe, results in twice as accurate visual recollection as compared to sham stimulation or stimulation with reversed current polarities. Interestingly, autistic people with a deficit in the left anterior temporal lobe also have better visual memory. It is conceivable that the improved visual memory is due to diminished left-hemisphere dominance leading to right-hemisphere compensation. This hypothesis is supported by the finding that people without strong hemisphere dominance have better memory for semantically related words. The right hemisphere is important for recalling specific details without interpreting them, which is important for arts, music, mathematics, and mechanical and spatial skills, the skills that autistic people are exceptional at. Perhaps we are all capable of accessing such raw information in the right hemisphere, but our abilities are greatly inhibited by our conscious, left-dominated awareness. With the help of neuromodulation technology, we can, perhaps, unleash the savant-like mental state, the autistic genius inside of us. For more information, please see the comprehensive review by Allan Snyder, published in Phil. Trans. R. Soc. B in 2009.

Neuro-enhancing neurotechnologies: what lies ahead?

Development of new neurotechnologies is driven by the paramount goal of restoring neural functions. Presently, no commercial companies or government-funded research laboratories are actively pursuing technologies aimed at augmenting and enhancing the functions of the brain or spinal cord in able-bodied humans. Yet such technologies could readily be developed with minimal modifications of the existing neuro-restorative technologies. Let's consider, for example, the retinal implants (e.g. from Second Sight) that use an external video camera mounted on the glasses. The camera sensor can easily be replaced with a near-infrared-sensitive one to enable perception of thermal signatures in complete darkness (heat vision is actually not that unnatural, just think of snakes). A more peculiar sensory enhancement could be achieved by employing terahertz sensors to enable x-ray-like vision, similar to the full-body scanners recently deployed in airports. Other types of supernatural sensitivity may soon be possible with some creativity and engineering ingenuity, by adapting other types of sensors (e.g. narrow-band spectral detection, ultrasound, accelerometers, etc.) and by using sensors with a faster response time than the perception delay of our natural five senses. On the other end of the spectrum of neuro-restorative devices are the ones providing rehabilitation, mobility, and muscle reanimation in paralyzed patients. Novel motor control modalities are already being explored, ranging from a forthcoming powerful neurally-controlled robotic arm (the Revolutionizing Prosthetics project by DARPA) to a tongue-implanted joystick (by Prof. Ghovanloo at GATech). There is great potential for developing a range of supernatural skills using the motor-control neurotechnologies.
The researchers behind these novel sensory and motor technologies are not really trying to “play God”; rather, they are using the technical resources at their disposal to achieve the maximal clinical benefit. Philosophical and ethical concerns will inevitably arise as a result of wider adoption and acceptance of neuroprosthetic devices. I foresee a particularly sharp debate in the Christianity-dominated societies, stemming from their strong belief in the subordinate position of man relative to God. In parts of Asia, the situation is rather different. The Buddhist and Hindu religions have a less defined relationship between man and God, and, possibly as a result, the Buddhist- and Hindu-dominated societies have already become more receptive to novel forms of biotechnology, such as stem cells and genetic engineering. Countries like Singapore and China have provided heavy centralized investments to become leading innovation incubators of the 21st century (see “Biotech Without Borders” by Parag and Ayesha Khanna).