Applied Neuroscience from scratch

It has been six years since I started working at Starlab. For the first year and a half I was involved in projects in the company's Space department. Then I became a member of the Applied Neuroscience team. At that time I asked myself what I, a complete neuroscience outsider, could do in that department. Today, I can say that what I have done the most is to learn.

In this blog post, I would like to share with you the most relevant things I have learned so far. I think this compilation might be useful for anyone who is interested in applied neuroscience. Most of these concepts have already been presented in previous posts on this blog (which I will be linking to), which shows the brilliant job my colleagues who regularly write in the NE blog are doing.

It all starts with the electrical activity of the brain and the way of recording it. Among all the available techniques, electroencephalography (EEG) is the one that gives access to brain activity in a cheap and non-invasive way. In addition, it is relatively easy to set up compared to the other options.

I had the opportunity to be part of the team that developed the wireless Enobio device, which can record EEG from up to 32 channels. In general, an EEG system consists of a set of electrodes with conductive media, amplifiers with filters, and an A/D converter along with a recording device to store the data. We have used Enobio to build the EEG-based experiments and applications explained below.

When recording EEG, it is very important to pay attention to the definition of the recording protocol, as well as to the experiment set-up. This is the key to getting a good EEG signal that fits the goals of the experiment or application you are carrying out. You will find two very useful guidelines for conducting successful EEG experiments here and here.

After recording the EEG, you will want to extract as much information as possible from it by using signal processing techniques. Before processing, it is difficult to distinguish EEG from white noise with the naked eye. However, thanks to these signal processing techniques it is possible to obtain features that characterize the EEG. A statistical analysis of those features can then reveal how the EEG differs across the conditions present in the experiment or application you are building.
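To give a flavour of what such a feature could look like, here is a minimal sketch (not the actual pipeline we used at Starlab) that computes band-power features from a single EEG channel with SciPy's Welch estimator; the sampling rate, band edges, and synthetic signal are illustrative choices:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average power of a 1-D EEG trace within a frequency band (lo, hi) in Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # integrate the PSD over the band

# Synthetic "EEG": a 10 Hz alpha rhythm buried in noise
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(42)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(eeg, fs, b) for name, b in bands.items()}
```

For this synthetic signal the alpha-band feature clearly dominates; in a real experiment, vectors of such features per channel and condition are what the statistical analysis operates on.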


One thing you will inevitably face when processing EEG is dealing with the artifacts that contaminate the signal. Check out this blog post to learn more about processing techniques that might remove, or at least reduce, the contribution of such artifacts. If you succeed in the pre-processing stage you will obtain a cleaner EEG, which should allow you to get better results in your experiment or application.
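The simplest of those pre-processing steps is frequency filtering. As an illustration only (real pipelines also use techniques like ICA for ocular and muscular artifacts), this sketch band-passes a noisy trace to suppress slow electrode drift and mains interference:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, lo=1.0, hi=40.0, order=4):
    """Zero-phase Butterworth band-pass: removes slow drifts and high-frequency noise."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)  # filtfilt runs forward and backward -> no phase lag

fs = 250
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)           # 10 Hz rhythm we want to keep
drift = 0.8 * np.sin(2 * np.pi * 0.2 * t)    # slow electrode drift
mains = 0.5 * np.sin(2 * np.pi * 50 * t)     # 50 Hz line interference
filtered = bandpass(clean + drift + mains, fs)
```

The filtered trace is close to the clean 10 Hz component, while the 0.2 Hz drift and 50 Hz interference are strongly attenuated.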

When we talk about applications, without a doubt the ones that arouse the most curiosity and fascination are those based on brain-computer interfaces (BCI). These applications rely on recognizing patterns in the EEG activity, which are translated into commands that are sent to a computer. Thanks to being involved in the development of such applications, I got to know the most widely used toolboxes for building them, such as BCI2000 or OpenViBE.

Some of these applications are based on event-related potentials (ERPs). The detection of these automatic responses, which the brain produces after an external stimulus, can allow controlling a BCI application like this P300-based speller.
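A single ERP is usually invisible in the raw trace, so the classic trick is to average many epochs time-locked to the stimulus, which cancels the background activity and leaves the evoked response. A minimal sketch of that idea on synthetic data (not a real P300 detector, which would add baseline correction and a trained classifier):

```python
import numpy as np

def average_erp(eeg, events, fs, window=0.8):
    """Average EEG epochs time-locked to stimulus onsets (sample indices)."""
    n = int(window * fs)
    epochs = np.stack([eeg[e:e + n] for e in events if e + n <= eeg.size])
    return epochs.mean(axis=0)

# Synthetic example: a P300-like bump at ~300 ms buried in strong noise
fs = 250
n = int(0.8 * fs)
t = np.arange(n) / fs
erp = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # Gaussian bump at 300 ms

rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)                 # 60 s of background "EEG"
events = np.arange(fs, eeg.size - n, fs)           # one stimulus per second
for e in events:
    eeg[e:e + n] += erp                            # embed the response after each stimulus

avg = average_erp(eeg, events, fs)
peak_latency = t[np.argmax(avg)]                   # recovered peak, near 0.3 s
```

With ~60 averaged epochs the noise shrinks by roughly the square root of the number of trials, and the 300 ms peak emerges clearly.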

Apart from BCI applications, at Starlab we have also worked with ERPs for health applications. This is the case of the following study involving two-year-old babies, led by Oxford University. This study presented a double challenge for us. In the first place, we had to provide a recording system that was as easy and quick as possible to set up. The use of dry electrodes and the wireless Enobio sensor, along with integrated software that both recorded the EEG and presented the external stimuli, met this requirement. So did the design of the custom cap, full of colours and with Mickey-style ears, which helped a lot to convince the babies to wear the recording system. The second challenge my colleagues faced was to process and detect the auditory ERPs in a signal coming from a two-year-old baby who can't stop moving and touching the recording cap. If you want to go deeper into the ERPs that can be detected through EEG, I recommend this blog post, which lists up to 14 different ones.

Personally, the field I have enjoyed learning about the most all these years has been data fusion and machine learning. An example where we have applied these technologies is this project about biometrics. We built two different biometric systems: one based on the EEG signal and the other on the electrocardiogram (ECG). Each system had classifiers that delivered the probability of an EEG or ECG signal belonging to a specific subject. Those classifiers are trained in advance with the EEG and ECG signals from every subject to build a signature for each of them. The fusion operators were in charge of combining the results from the two biometric systems into a single value, which is what the application ultimately takes into account to decide whether the subject in front of the biometric sensor is who they claim to be.
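The fusion step can be as simple or as sophisticated as the application demands. As a sketch of the general shape (the weights, threshold, and operator here are illustrative, not the ones from our project), a weighted average of the two per-modality match probabilities could look like this:

```python
def fuse_scores(p_eeg, p_ecg, w_eeg=0.5):
    """Weighted-average fusion operator: combine the two match probabilities
    (each in [0, 1]) into a single decision score. One of many possible operators."""
    return w_eeg * p_eeg + (1 - w_eeg) * p_ecg

def verify(p_eeg, p_ecg, threshold=0.6):
    """Accept the identity claim if the fused score clears the threshold."""
    return fuse_scores(p_eeg, p_ecg) >= threshold

# EEG classifier fairly confident, ECG classifier very confident -> accept
accepted = verify(0.7, 0.9)   # fused score 0.8
# Both classifiers doubtful -> reject
rejected = verify(0.3, 0.4)   # fused score 0.35
```

The appeal of fusion is visible even in this toy: a mediocre score from one modality can be rescued, or vetoed, by the other, which is exactly why combining EEG and ECG beats either alone.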

Another example where we have applied machine learning and data fusion concepts is the development of our emotion recognition system, based on EEG as well as other electrophysiological signals. Thanks to the projects where we have put this system into operation, I could go deeper into the mechanisms for synchronizing the different actors that form part of it: the different sensors, the application in charge of interfacing with the sensors and monitoring the emotional response of the subjects, and the system that co-registers the external events that might be present, depending on the specific protocol of the application.

Then we come to brain stimulation. Everything so far was based on monitoring brain activity. In contrast, in a brain stimulation session current is induced into the brain, which modifies the brain activity. The StarStim device, which we developed at the same time as Enobio, was designed to deliver multichannel transcranial current stimulation (tCS) and monitor brain activity at the same time. In recent studies, this technology has proved effective in stroke rehabilitation and in the treatment of depression and pain.

I need to say that it is very rewarding to know that part of your work has a real positive impact on other people’s lives.

Other studies have explored the use of brain stimulation for cognitive enhancement, for instance of mathematical skills or memory. In this case, what I find interesting beyond the application itself is the ethical debate it launches. Will regulations for accessing this kind of technology be necessary in the near future? Will having access, or not, to these treatments create a new degree of social inequality? These are interesting questions that our societies have not yet properly addressed.

The very last thing I want to mention here is the experiment in which brain-to-brain communication was established between two people separated by 7,000 km. Although my participation was quite collateral (I just provided the tools to encode and decode the message so that it could tolerate some bit errors), it gave me firsthand experience of the experiment. I was very happy when it succeeded and made a great impact in the media and among the scientific community.
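The specific coding scheme we used is not described here, but the idea behind tolerating bit errors can be illustrated with the simplest error-correcting code there is: repeat each bit several times and let the receiver take a majority vote.

```python
def encode(bits, r=3):
    """Repeat each bit r times so the receiver can outvote isolated flips."""
    return [b for bit in bits for b in [bit] * r]

def decode(coded, r=3):
    """Majority vote over each group of r received bits."""
    return [1 if sum(coded[i:i + r]) > r // 2 else 0
            for i in range(0, len(coded), r)]

message = [1, 0, 1, 1, 0]
sent = encode(message)        # 15 bits on the wire
received = list(sent)
received[4] ^= 1              # one bit flipped in transit
recovered = decode(received)  # majority vote corrects the error
```

With r = 3 any single flip within a group is corrected at the cost of tripling the message length; real systems use far more efficient codes, but the trade-off between redundancy and robustness is the same.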

After all this time, although I keep thinking of myself as a neuroscience outsider, I feel very privileged to have had the opportunity to go in depth into this fascinating world. I hope that in the future I can keep expanding my knowledge in this field, or in another one where I have the opportunity to do my bit.
