P = positive deflection, P1 = first positive deflection
N = negative deflection, N2 = second negative deflection
P300 = positive deflection around 300 ms after the stimulus
When measuring brain potentials after a given stimulus, one usually looks for positive or negative deflections in the signal. The deflections are numbered in order of appearance, so P1 corresponds to the first positive deflection.
This holds until P3, which actually indicates a positive potential around 300 ms after the stimulus. The signal has been named the P300 wave because, when the right kind of stimulus is presented, it reliably produces a positive deflection at roughly 300 ms.
In clinical research, the P300 wave has been used in ERP studies of ADHD and schizophrenia.
In hardware, it has been applied, for example, in brain-computer interfaces: the P300 has been used to facilitate direct communication between the brain and a device, allowing individuals to control prosthetics, wheelchairs and computers. (See improvements section) [1]
The P300 represents the brain's response to recognizing something unexpected or important. This wave is thought to reflect cognitive processes related to attention and working memory.
A common way to study the P300 is through the oddball paradigm. In this setup, a person is shown a series of repetitive stimuli, like tones or images, with occasional rare or "oddball" stimuli mixed in. When the brain notices the oddball, it generates the P300 wave as a reaction to the unexpected event. This makes the P300 useful for studying how the brain processes novel or significant information.
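To make the averaging step concrete, here is a minimal sketch (with synthetic data, not real recordings) of how a P300 is typically extracted in an oddball paradigm: the EEG is cut into epochs around each stimulus, and rare ("oddball") and frequent ("standard") epochs are averaged separately. The sampling rate, epoch window and simulated signal shape below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch window: -200 ms to +800 ms

def simulate_epoch(is_oddball: bool) -> np.ndarray:
    """One synthetic single-trial EEG epoch in microvolts."""
    trial = rng.normal(0, 5, t.size)      # background EEG "noise"
    if is_oddball:
        # Oddball trials carry a positive bump peaking near 300 ms post-stimulus.
        trial += 8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return trial

is_oddball = rng.random(200) < 0.2        # ~20% rare "oddball" trials
epochs = np.array([simulate_epoch(o) for o in is_oddball])

# Averaging over trials cancels the background EEG and leaves the ERP.
erp_oddball = epochs[is_oddball].mean(axis=0)
erp_standard = epochs[~is_oddball].mean(axis=0)

window = (t > 0.2) & (t < 0.5)
peak_ms = 1000 * t[window][np.argmax(erp_oddball[window])]
print(f"Oddball ERP peaks at ~{peak_ms:.0f} ms "
      f"({erp_oddball[window].max():.1f} uV) vs "
      f"{erp_standard[window].max():.1f} uV for standards")
```

The single-trial P300 is buried in noise; only after averaging many epochs does the positive peak around 300 ms stand out for the oddball stimuli and stay near baseline for the standards.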
An interesting application of this wave is in lying studies. If someone recognises a specific stimulus, like a detail from a crime scene or a personal item, the brain reacts even if they try to hide it. This recognition triggers the P300 wave. By using recognition stimuli, researchers can test whether a person has knowledge of something they claim not to know. For example, in a guilty knowledge test, if the P300 wave appears after a crime-related image or word, it suggests that the person recognises it, making it a potential tool for detecting deception.
The P300 is a signal evoked roughly 300 ms after a stimulus and reflects recognition. Government agencies in the United States have started to use this response for lie detection: if a suspect is shown a picture of the crime scene or of the victim and the P300 appears, it suggests the person recognizes what they are seeing.
Jennifer Marsman, a Principal Engineer in the Office of the CTO at Microsoft, experimented with EEG and machine learning for lie detection. She used an Emotiv EPOC headset, an EEG with 14 channels, and measured brain activity while participants answered questions. Using Azure Machine Learning, she developed a classifier that could predict whether someone was lying or telling the truth.
The learning method used is supervised machine learning: a model is trained on 70% of the collected data, and once it is considered ready, the remaining 30% is held back as test data to check whether the model works and to evaluate it. To choose an algorithm, she consulted the machine learning algorithm cheat sheet and settled on a two-class classification model, which reached an accuracy of 0.932. The model was built around detecting the P300 wave, which appears when a person recognizes a stimulus even while trying to mask their response.
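A hedged sketch of that 70/30 supervised-learning workflow is shown below, using scikit-learn with made-up feature vectors in place of real EEG features; the classifier choice (logistic regression), the feature layout, and the labels are assumptions for illustration, not Marsman's actual Azure ML pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 14 * 5))   # e.g. 14 EEG channels x 5 features each (assumed)
y = rng.integers(0, 2, size=400)     # labels: 0 = truthful, 1 = deceptive

# 70% of the data trains the model; the remaining 30% is held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With random features this sits near chance (~0.5); real EEG features
# would be needed to approach the reported 0.932.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```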
This model was still preliminary and would need further work: no baseline EEG was recorded, subjects were not always seated with their eyes open, and the questions were not always randomized. Even so, it shows promising results for future studies. Challenges such as individual variability, the difficulty of controlling confounding factors, and the need for large datasets still have to be solved. Current research focuses on refining algorithms and on combining EEG data with other physiological signals, such as heart rate and skin conductance, to improve accuracy. [1]
The challenges have been noted, and a follow-up project could in principle be implemented; however, after these results in 2017, neither Marsman nor others appear to have published further in this field. The more I searched for future projects, the less information I found, and the more I was redirected to human rights websites pointing out privacy and ethical issues. [2]
However, one must also remember that the P300 wave does not identify lies as such; it only indicates that a person remembers or recognizes a stimulus.
In another experiment, Rosenfeld, using undergraduate subjects, showed each subject three photographs related to a terrorism scenario. Afterwards, each subject was shown a random sequence of pictures, including the three the subject had already seen, with the aim of measuring their P300 reaction. He was reportedly able to detect with about 90% accuracy which photos a particular subject had seen earlier. Nonetheless, this study did not involve speaking; it only tested whether recalling already-seen images would elicit the wave.
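P300-based recognition tests of this kind typically compare the P300 amplitude evoked by the previously seen ("probe") item against that evoked by unseen ("irrelevant") items, often with a bootstrap test on the amplitude difference. The sketch below illustrates that idea with synthetic single-trial amplitudes; the trial counts, amplitude values, and decision threshold are assumptions, not Rosenfeld's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
probe = rng.normal(loc=9.0, scale=4.0, size=30)        # P300 amplitudes (uV) for the seen photo
irrelevant = rng.normal(loc=4.0, scale=4.0, size=90)   # amplitudes for unseen photos

n_boot = 2000
diffs = np.empty(n_boot)
for i in range(n_boot):
    # Resample trials with replacement and recompute the probe-minus-irrelevant mean.
    d_probe = rng.choice(probe, size=probe.size, replace=True).mean()
    d_irr = rng.choice(irrelevant, size=irrelevant.size, replace=True).mean()
    diffs[i] = d_probe - d_irr

# If the probe amplitude exceeds the irrelevant amplitude in the vast majority
# of bootstrap resamples, the subject is classified as recognizing the probe.
print("fraction of bootstraps with probe > irrelevant:", (diffs > 0).mean())
```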
At the moment it is very hard to find significant correlations between patterns of brain activation and lying; activation seems mostly to involve attention- and creativity-related areas, but the picture is still vague and difficult to research, since laboratory experiments never fully correspond to real-life situations. [3]
EEG-based lie detection, as in brain fingerprinting, assesses whether the brain recognizes specific information related to a lie, relying mostly on the P300 wave.
Since fMRI has better spatial resolution, it has advantages for checking which areas are active during lying; however, one cannot isolate areas activated only during lies. They often correspond to creativity areas and to increased attention, but this is still too vague and not always accurate.
Concerns over accuracy and ethical issues have led to the refusal to use these techniques in real court cases. Both techniques need to improve their accuracy in distinguishing lies from other forms of cognitive or emotional stress; false positives and misinterpretation of brain data remain significant issues.
Another interesting option would be combining fNIRS with EEG, as they are both portable and do not interfere with each other. (see here for more)