Project description: Brain-computer interfaces (BCIs) based on electroencephalogram (EEG) have recently attracted increasing attention in virtual reality (VR) applications as a promising tool for controlling virtual objects or generating commands in a "hands-free" manner. Video-oculography (VOG) has frequently been used to improve BCI performance by identifying the gaze location on the screen; however, current VOG devices are generally too expensive to be embedded in practical low-cost VR head-mounted display (HMD) systems. In this study, we proposed a novel calibration-free hybrid BCI system that combines a steady-state visual evoked potential (SSVEP)-based BCI with electrooculogram (EOG)-based eye tracking to increase the information transfer rate (ITR) of a nine-target SSVEP-based BCI in a VR environment. Experiments were repeated with three different frequency configurations of pattern-reversal checkerboard stimuli arranged in a 3 × 3 matrix. When a user stared at one of the nine visual stimuli, the column containing the target stimulus was first identified from the user's horizontal eye movement direction (left, middle, or right), classified using horizontal EOG recorded from a pair of electrodes that can be readily incorporated into any existing VR-HMD system. Note that, unlike with a VOG system, the EOG can be recorded using the same amplifier as the SSVEP. The target visual stimulus was then identified among the three stimuli arranged vertically in the selected column using the extension of the multivariate synchronization index (EMSI) algorithm, one of the widely used SSVEP detection algorithms. In our experiments with 20 participants wearing a commercial VR-HMD system, both the accuracy and the ITR of the proposed hybrid BCI were significantly higher than those of a traditional SSVEP-based BCI in the same VR environment.
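The abstract does not reproduce the EMSI computation itself. As orientation, here is a minimal NumPy sketch of the base multivariate synchronization index (MSI) that EMSI extends: each candidate frequency is scored by how strongly the multichannel EEG synchronizes with sine/cosine references at that frequency and its harmonics. Function names and the harmonic count are our own choices, not the authors'.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def msi_score(eeg, freq, fs, n_harmonics=3):
    """Synchronization between EEG (channels x samples) and sine/cosine
    references at `freq` and its harmonics; 0 = none, 1 = perfect."""
    t = np.arange(eeg.shape[1]) / fs
    ref = np.vstack([f(2 * np.pi * h * freq * t)
                     for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)])
    p, q = eeg.shape[0], ref.shape[0]
    c = np.corrcoef(np.vstack([eeg, ref]))     # joint correlation matrix
    u = np.zeros_like(c)                       # block-whitening transform
    u[:p, :p] = fractional_matrix_power(c[:p, :p], -0.5)
    u[p:, p:] = fractional_matrix_power(c[p:, p:], -0.5)
    lam = np.linalg.eigvalsh(u @ c @ u.T)      # eigenvalue spectrum
    lam = np.clip(lam / lam.sum(), 1e-12, None)
    return 1 + (lam * np.log(lam)).sum() / np.log(p + q)

def classify_target(eeg, candidate_freqs, fs):
    """Pick the stimulation frequency with the highest synchronization index."""
    return int(np.argmax([msi_score(eeg, f, fs) for f in candidate_freqs]))
```

In the hybrid scheme described above, this score would only need to be compared across the three frequencies of the EOG-selected column rather than across all nine targets.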
Project description: Background and objectives: Many brain-computer interfaces (BCIs) for people with severe disabilities present stimuli in the visual modality with little consideration of the visual skills required for successful use. The primary objective of this tutorial is to present researchers and clinical professionals with basic information about the visual skills needed for functional use of visual BCIs, and to offer modifications that would render BCI technology more accessible for persons with vision impairments. Methods: First, we provide a background on BCIs that rely on a visual interface. We then describe the visual skills required for BCI technologies that are used for augmentative and alternative communication (AAC), as well as common eye conditions or impairments that can impact the user's performance. We summarize screening tools that can be administered by the non-eye-care professional in a research or clinical setting, as well as the role of the eye care professional. Finally, we explore potential BCI design modifications to compensate for identified functional impairments. Information was generated from a literature review and the clinical experience of vision experts. Results and conclusions: This in-depth description culminates in foundational information about visual skills and the functional visual impairments that affect the design and use of visual interfaces for BCI technologies. The visual interface is a critical component of successful BCI systems. With this information, clinicians can select a BCI system for potential users with visual impairments, and designers can build BCI visual interfaces grounded in sound anatomical and physiological clinical vision science. Implications for rehabilitation: As BCIs become possible access methods for people with severe motor impairments, it is critical that clinicians have a basic knowledge of the visual skills necessary for use of visual BCI interfaces. Rehabilitation providers must know how to objectively gather information about a potential BCI user's functional visual skills. Rehabilitation providers must understand how to modify BCI visual interfaces for the potential user with visual impairments. Rehabilitation scientists should understand the visual demands of BCIs as they develop and evaluate these new access methods.
Project description: Although noise has a proven beneficial role in brain function, there have been no attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI using periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance, owing to an enhancement of the periodic components in brain responses accompanied by a suppression of the higher harmonics. Offline results showed a bell-shaped, resonance-like dependence on noise level, and online performance improvements of 7-36% were achieved when identical visual noise was applied across different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.
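The stochastic resonance mechanism invoked here can be illustrated with a toy simulation (not the authors' SSMVEP pipeline): a subthreshold periodic input passed through a hard threshold, with output power at the stimulation frequency traced across noise levels, reproduces the bell-shaped, resonance-like profile reported above. All parameters are illustrative.

```python
import numpy as np

fs, f0, dur = 250.0, 12.0, 10.0               # sampling rate (Hz), stimulus freq, seconds
t = np.arange(int(fs * dur)) / fs
drive = 0.4 * np.sin(2 * np.pi * f0 * t)      # subthreshold periodic input

def output_power_at_f0(noise_sigma, threshold=1.0, n_avg=50, seed=0):
    """Power at f0 in the thresholded output, averaged over noise realizations."""
    rng = np.random.default_rng(seed)
    k = int(round(f0 * dur))                  # FFT bin of the stimulus frequency
    acc = 0.0
    for _ in range(n_avg):
        y = (drive + noise_sigma * rng.standard_normal(t.size) > threshold)
        spec = np.fft.rfft(y - y.mean())
        acc += np.abs(spec[k]) ** 2
    return acc / n_avg

# Too little noise: the input never crosses threshold; too much: the periodic
# component drowns.  Moderate noise maximizes the output at f0 (the "bell").
for sigma in (0.1, 0.3, 0.6, 1.0, 2.0):
    print(f"sigma={sigma:.1f}  P(f0)={output_power_at_f0(sigma):.1f}")
```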
Project description: Brain-computer interfaces (BCIs) allow users to perform various tasks using only the electrical activity of the brain. BCI applications often present the user with a set of stimuli and record the corresponding electrical response. The BCI algorithm must then decode the acquired brain response and perform the desired task. In rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect the brain activity associated with target images. In this work, we propose a multimodal neural network for RSVP tasks. The network operates on the brain response and on the initiating stimulus simultaneously, providing more information for the BCI application. We present two variants of the multimodal network: a supervised model, for the case when the targets are known in advance, and a semi-supervised model for when the targets are unknown. We test the neural networks on an RSVP experiment with satellite imagery carried out with two subjects. The multimodal networks achieve a significant improvement in classification metrics. We visualize what the networks have learned and discuss the advantages of using neural network models for BCI applications.
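The abstract does not give the architecture; a minimal PyTorch sketch of the general idea, one branch encoding the EEG epoch and one branch encoding the presented image, fused before a target/standard classifier, might look as follows. Layer sizes and the two-branch layout are our assumptions, not the published model.

```python
import torch
import torch.nn as nn

class MultimodalRSVPNet(nn.Module):
    """Toy two-branch network: one branch encodes the EEG epoch, the other
    the presented image; fused features predict target vs. standard."""
    def __init__(self, n_channels=64, n_samples=128):
        super().__init__()
        self.eeg_branch = nn.Sequential(           # temporal + spatial conv on EEG
            nn.Conv2d(1, 8, (1, 25), padding=(0, 12)),
            nn.Conv2d(8, 16, (n_channels, 1)), nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d((1, 4)), nn.Flatten(),
        )
        self.img_branch = nn.Sequential(           # small CNN on the stimulus image
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        eeg_dim = 16 * (n_samples // 4)
        self.head = nn.Sequential(nn.Linear(eeg_dim + 32, 64), nn.ReLU(),
                                  nn.Linear(64, 2))

    def forward(self, eeg, img):                   # eeg: (B,1,C,T), img: (B,3,H,W)
        return self.head(torch.cat([self.eeg_branch(eeg),
                                    self.img_branch(img)], dim=1))

# Usage with random tensors standing in for an EEG epoch and a stimulus image:
net = MultimodalRSVPNet()
logits = net(torch.randn(4, 1, 64, 128), torch.randn(4, 3, 64, 64))
```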
Project description: Brain-computer interfaces (BCIs) based on the steady-state visual evoked potential (SSVEP) have been widely studied because of their high information transfer rate (ITR), minimal user training, and wide subject applicability. However, there are also disadvantages, such as visual discomfort and "BCI illiteracy." To address these problems, this study proposes low-frequency stimulation (12 classes, 0.8-2.12 Hz with an interval of 0.12 Hz), which can simultaneously elicit a visual evoked potential (VEP) and a pupillary response (PR), to construct a hybrid BCI (h-BCI) system. Classification accuracy was calculated using supervised and unsupervised methods, respectively, and the hybrid accuracy was obtained using a decision-fusion method that combines the VEP and PR information. Online experimental results from 10 subjects showed an average accuracy of 94.90 ± 2.34% (data length 1.5 s) for the supervised method and 91.88 ± 3.68% (data length 4 s) for the unsupervised method, corresponding to ITRs of 64.35 ± 3.07 bits/min (bpm) and 33.19 ± 2.38 bpm, respectively. Notably, the hybrid method achieved higher accuracy and ITR than either VEP or PR alone for most subjects, especially at short data lengths. Together with the subjects' feedback on user experience, these results indicate that the proposed h-BCI with the low-frequency stimulation paradigm is more comfortable and favorable than the traditional SSVEP-BCI paradigm using the alpha frequency range.
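The reported accuracies and ITRs are consistent with the standard Wolpaw ITR formula if the selection time includes more than the raw data length (e.g., cue or gaze-shift time). A small Python check, with that back-calculated selection time marked as our assumption:

```python
import math

def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
    """Standard Wolpaw ITR in bits/min."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# 12 classes at 94.90% accuracy: the reported 64.35 bits/min implies a total
# selection time of roughly 2.9 s -- our back-calculation, i.e. the 1.5 s data
# length plus presumed cue/gaze-shift overhead.
print(wolpaw_itr(12, 0.949, 2.9))   # ~64 bits/min
```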
Project description: A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze-independent BCI paradigm that can potentially be used by such end-users because the visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with three possible answers. Online BCI experiments were conducted with twelve healthy subjects, who selected one option by attending to one of three different visual stimuli. We confirmed that typical cognitive ERPs can be clearly modulated by attention to a target stimulus in an eyes-closed, gaze-independent condition, and can be classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed visual ERP paradigm. Stimulus-specific eye movements observed during stimulation were verified to be reflex responses to the light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the feasibility of a gaze-independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.
Project description: BACKGROUND: In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet the peculiarities of peripheral vision, such as low spatial acuity and crowding. METHOD: Healthy participants (N = 13) performed a copy-spelling task in which they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (c)overt attention was investigated by means of a target-fixation condition and a central-fixation condition; in the latter, participants fixated a dot in the center of the screen and allocated their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-level speller consisting of six discs arranged on an invisible hexagon. RESULTS: We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., fewer errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. With overt attention, the P1, N1, P2, N2, and P3 components are enhanced by attention. With covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention. CONCLUSIONS: Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower for covert attention. The Hex-o-Spell outperforms the Matrix, especially when eye movements are not permitted, illustrating that performance can be increased by accounting for the peculiarities of peripheral vision.
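For readers unfamiliar with how classifiers come to "rely on" specific components: ERP-based BCIs commonly use windowed-mean amplitudes as spatio-temporal features, so the choice of early versus late time windows determines which components drive classification. A hedged scikit-learn sketch with synthetic data (window boundaries and data shapes are illustrative, not the study's):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def windowed_means(epochs, fs, windows):
    """Average each epoch (trials x channels x samples) within post-stimulus
    time windows -- the spatio-temporal features typical of ERP-based BCIs."""
    feats = [epochs[:, :, int(a * fs):int(b * fs)].mean(axis=2) for a, b in windows]
    return np.concatenate(feats, axis=1)

# Separate windows for early (P1/N1/P2) and late (N2/P3) components, in seconds:
early = [(0.08, 0.13), (0.13, 0.18), (0.18, 0.25)]
late = [(0.25, 0.35), (0.35, 0.50)]

rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, 60))   # 200 trials, 32 channels, 0.6 s @ 100 Hz
y = rng.integers(0, 2, 200)                   # target / non-target labels
X = windowed_means(epochs, fs=100, windows=early + late)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
```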
Project description: Groups have greater sensing and cognitive capabilities than individuals, which typically allows them to make better decisions. However, factors such as communication biases and time constraints can lead to less-than-optimal group decisions. In this study, we use a hybrid brain-computer interface (hBCI) to improve the performance of groups undertaking a realistic visual-search task. Our hBCI extracts neural information from EEG signals and combines it with response times to build an estimate of decision confidence. This estimate is used to weigh individual responses, resulting in improved group decisions. We compare the performance of hBCI-assisted groups with that of non-BCI groups using standard majority voting and non-BCI groups using weighted voting based on reported decision confidence. We also investigate the impact on group performance of a computer-mediated form of communication between members. Results across three experiments suggest that the hBCI provides significant advantages over non-BCI decision methods in all cases. We also found that our form of communication increases individual error rates by almost 50% compared to non-communicating observers, which in turn worsens group performance. Communication also makes reported confidence uncorrelated with decision correctness, thereby nullifying its value in weighing votes. In summary, the best decisions are achieved by hBCI-assisted, non-communicating groups.
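The confidence-weighting scheme can be sketched abstractly: each observer's binary decision is scaled by an estimated confidence, and the group takes the sign of the weighted sum. The toy estimator below, mapping a neural evidence score and a response time through a logistic function, is our illustration of the idea, not the authors' trained model.

```python
import numpy as np

def confidence(neural_score, response_time, w=(1.0, -0.5)):
    """Toy logistic confidence estimate: stronger neural evidence and faster
    responses map to higher confidence.  Weights are illustrative only."""
    return 1.0 / (1.0 + np.exp(-(w[0] * neural_score + w[1] * response_time)))

def weighted_group_decision(votes, confidences):
    """Sign of the confidence-weighted sum of individual +/-1 decisions."""
    return 1 if np.dot(confidences, votes) >= 0 else -1

# Two low-confidence observers vote -1; one confident observer votes +1
# and flips the decision that plain majority voting would have returned.
votes = [+1, -1, -1]
confs = [confidence(2.5, 0.6), confidence(0.2, 1.9), confidence(0.1, 2.1)]
print(weighted_group_decision(votes, confs))   # -> 1
```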
Project description: Visual evoked potentials (VEPs) can be measured in the EEG as the response to a visual stimulus. Commonly, VEPs are obtained by averaging multiple responses to a given stimulus, or a classifier is trained to identify the response to a given stimulus. While the traditional approach is limited to a set of predefined stimulation patterns, we present a method that models the general process of VEP generation and can therefore be used both to predict arbitrary visual stimulation patterns from the EEG and to predict how the brain will respond to arbitrary stimulation patterns. We demonstrate how this method can be used to model single-flash VEPs, steady-state VEPs (SSVEPs), and VEPs to complex stimulation patterns. We further show that the method can be used for a high-speed BCI in an online scenario, where it achieved an average information transfer rate (ITR) of 108.1 bits/min. Furthermore, in an offline analysis, we show the flexibility of the method, which allows a virtually unlimited number of targets to be modulated with any desired trial duration, resulting in a theoretically possible ITR of more than 470 bits/min.
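The abstract leaves the model unspecified; a common way to realize such a forward model is to treat VEP generation as a linear time-invariant system and ridge-regress a temporal response kernel from the stimulus sequence to the EEG. The NumPy sketch below (lag count, regularization, and the correlation-based decoder are our assumptions) shows both directions: predicting the response to an arbitrary pattern, and decoding which pattern was shown.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Design matrix holding the stimulus and its past n_lags-1 samples, so the
    EEG at time t is modeled as a weighted sum of recent stimulus values."""
    X = np.zeros((stimulus.size, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:stimulus.size - k]
    return X

def fit_vep_kernel(stimulus, eeg, n_lags=60, ridge=100.0):
    """Ridge regression of a temporal response kernel: eeg ~ lagged(stim) @ kernel."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)

def predict_response(stimulus, kernel):
    """Predicted EEG response to an arbitrary (possibly unseen) pattern."""
    return lagged_design(stimulus, kernel.size) @ kernel

def decode(eeg, candidate_patterns, kernel):
    """Pick the candidate whose predicted response best matches the EEG."""
    preds = [predict_response(c, kernel) for c in candidate_patterns]
    return int(np.argmax([np.corrcoef(eeg, p)[0, 1] for p in preds]))
```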
Project description: Visual evoked potential-based brain-computer interfaces (BCIs) have been widely investigated because of their easy system configuration and high information transfer rate (ITR). However, the uncomfortable flicker or brightness modulation of existing methods restricts the practical interactivity of BCI applications. In our study, a flicker-free steady-state motion visual evoked potential (FF-SSMVEP)-based BCI was proposed. Ring-shaped motion checkerboard patterns with oscillating expansion and contraction motions were presented on a high-refresh-rate display as visual stimuli, and the brightness of the stimuli was kept constant. Compared with SSVEPs, few harmonic responses were elicited by FF-SSMVEPs, and the frequency energy of the SSMVEPs was concentrated at the fundamental. After signal processing, these FF-SSMVEPs evoked "single fundamental peak" responses without harmonic or subharmonic peaks, so more stimulation frequencies could be selected to elicit distinct fundamental peaks that do not overlap with harmonic peaks. A 40-target online SSMVEP-based BCI system was achieved that provided an ITR of up to 1.52 bits per second (91.2 bits/min), and no user training was required. This study also demonstrated that the FF-SSMVEP-based BCI system induces low contrast and low visual fatigue, offering a better alternative to conventional SSVEP-based BCIs.
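The frequency-allocation argument can be made concrete: if each stimulus contributes only a fundamental peak (plus, at worst, weak harmonics), usable frequencies are those whose fundamentals stay clear of the harmonics of already-chosen ones. A toy greedy picker, with band, step, and gap values that are purely illustrative:

```python
import numpy as np

def pick_frequencies(candidates, n_targets, n_harmonics=3, min_gap=0.15):
    """Greedily keep frequencies whose fundamentals stay `min_gap` Hz away
    from the 2nd..n-th harmonics of every frequency already chosen."""
    chosen = []
    for f in candidates:
        harmonics = (h * g for g in chosen for h in range(2, n_harmonics + 1))
        if all(abs(f - h) >= min_gap for h in harmonics):
            chosen.append(f)
        if len(chosen) == n_targets:
            break
    return chosen

# With weak harmonics, a 0.2 Hz grid over 4-30 Hz easily yields 40 targets;
# strong harmonics (as in conventional SSVEP) would thin this set out further.
print(pick_frequencies(np.arange(4.0, 30.0, 0.2), n_targets=40))
```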