Project description:Existing electronic skin (e-skin) sensing platforms are equipped to monitor physical parameters using power from batteries or near-field communication. For e-skins to be applied in the next generation of robotics and medical devices, they must operate wirelessly and be self-powered. However, despite recent efforts to harvest energy from the human body, self-powered e-skins with the ability to perform biosensing and Bluetooth communication are limited by the lack of a continuous energy source and by limited power efficiency. Here, we report a flexible and fully perspiration-powered integrated electronic skin (PPES) for multiplexed metabolic sensing in situ. The battery-free e-skin contains multimodal sensors and highly efficient lactate biofuel cells that use a unique integration of zero- to three-dimensional nanomaterials to achieve high power intensity and long-term stability. The PPES delivered a record-breaking power density of 3.5 milliwatts per square centimeter for biofuel cells in untreated human body fluids (human sweat) and displayed very stable performance during 60 hours of continuous operation. It selectively monitored key metabolic analytes (e.g., urea, NH4+, glucose, and pH) and the skin temperature during prolonged physical activities and wirelessly transmitted the data to the user interface using Bluetooth. The PPES was also able to monitor muscle contraction and work as a human-machine interface for human-prosthesis walking.
Project description:The development of advanced technologies for wireless data collection and the analysis of quantitative data, with application to a human-machine interface (HMI), is of growing interest. In particular, various wearable devices related to HMIs are being developed. These devices require a customization process that considers the physical characteristics of each individual, such as mounting positions of electrodes, muscle masses, and so forth. Here, the authors report device and calculation concepts for flexible platforms that can measure electrical signals of muscle activity through electromyography (EMG). This soft, flexible, and lightweight EMG sensor can be attached to curved surfaces such as the forearm, biceps, back, legs, etc., and optimized biosignals can be obtained continuously through post-processing. In addition to the measurement of EMG signals, the HMI application shows stable performance and an accuracy of more than 95%, as confirmed by 50 trials per case. The results of this study show the potential for application to various fields such as entertainment, the military, robotics, and healthcare in the future.
Project description:Motor imagery offers an excellent opportunity as a stimulus-free paradigm for brain-machine interfaces. Conventional electroencephalography (EEG) for motor imagery requires a hair cap with multiple wired electrodes and messy gels, causing motion artifacts. Here, a wireless scalp electronic system with virtual reality for real-time, continuous classification of motor imagery brain signals is introduced. This low-profile, portable system integrates imperceptible microneedle electrodes and soft wireless circuits. Virtual reality addresses subject variance in detectable EEG response to motor imagery by providing clear, consistent visuals and instant biofeedback. The wearable soft system offers advantageous contact surface area and reduced electrode impedance density, resulting in significantly enhanced EEG signals and classification accuracy. The combination with convolutional neural network-machine learning provides a real-time, continuous motor imagery-based brain-machine interface. With four human subjects, the scalp electronic system offers a high classification accuracy (93.22 ± 1.33% for four classes), allowing wireless, real-time control of a virtual reality game.
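The real-time classification loop described above can be hedged into a minimal sketch. This is not the authors' convolutional network: as a stand-in for the learned feature extractor, it uses mu-band (8-13 Hz) power per channel computed with a naive DFT, followed by a nearest-centroid decision. The sampling rate, window length, channel count, and class labels are all assumptions for illustration.

```python
# Illustrative motor-imagery classification sketch (not the published CNN).
# Assumed: 250 Hz sampling, 2-second windows, 2 channels, mu-band features.
import math

FS = 250          # assumed sampling rate (Hz)
EPOCH = FS * 2    # 2-second classification window

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean squared DFT magnitude over [lo, hi] Hz (naive DFT)."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += (re * re + im * im) / n
            count += 1
    return total / max(count, 1)

def classify(features, centroids):
    """Nearest-centroid decision over per-class mean feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Demo: "left" imagery -> strong mu rhythm on channel 0, "right" -> channel 1.
def epoch(strong_ch):
    feats = []
    for c in range(2):
        amp = 5.0 if c == strong_ch else 0.5
        chan = [amp * math.sin(2 * math.pi * 10 * i / FS) for i in range(EPOCH)]
        feats.append(band_power(chan, FS))
    return feats

centroids = {"left": epoch(0), "right": epoch(1)}
```

In a real pipeline the centroids (or the CNN weights they stand in for) would be fit on calibration trials, and classification would run continuously over sliding windows to drive the virtual reality feedback.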
Project description:Untethered miniature robots have significant potential and promise in diverse minimally invasive medical applications inside the human body. For drug delivery and physical contraception applications inside tubular structures, it is desirable to have a miniature anchoring robot with a self-locking mechanism at a target tubular region. Moreover, the behavior of this robot should be tracked and feedback-controlled by a medical imaging-based system. While such a system is unavailable, we report a reversible untethered anchoring robot design based on remote magnetic actuation. The current robot prototype's dimensions are 7.5 mm in diameter and 17.8 mm in length, and it is made of soft polyurethane elastomer, photopolymer, and two tiny permanent magnets. Its relaxation and anchoring states can be maintained in a stable manner without supplying any control and actuation input. To control the robot's locomotion, we implement a two-dimensional (2D) ultrasound imaging-based tracking and control system, which automatically sweeps locally and updates the robot's position. With such a system, we demonstrate that the robot can be controlled to follow a pre-defined 1D path with a maximal position error of 0.53 ± 0.05 mm inside a tubular phantom, where the reversible anchoring could be achieved under the monitoring of ultrasound imaging.
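The imaging-in-the-loop control described above can be sketched as a simple simulation. This is not the authors' controller: the noisy ultrasound tracker, the proportional control law standing in for the magnetic actuation command, and all gains and noise levels are assumptions for illustration.

```python
# Illustrative closed-loop path following (not the published control system).
# Assumed: a noisy 1D position estimate from ultrasound and a proportional
# controller stepping the robot toward each waypoint on a pre-defined path.
import random

random.seed(1)

def track(true_pos_mm):
    """Simulated ultrasound position estimate with measurement noise."""
    return true_pos_mm + random.gauss(0.0, 0.05)

def follow_path(waypoints_mm, gain=0.5, steps_per_wp=40):
    """Drive the robot through the waypoints; return final position and
    the worst end-of-segment tracking error."""
    pos = 0.0
    errors = []
    for target in waypoints_mm:
        for _ in range(steps_per_wp):
            measured = track(pos)
            # Proportional command; stands in for the magnetic actuation input.
            pos += gain * (target - measured)
        errors.append(abs(target - pos))
    return pos, max(errors)
```

Under these assumptions the residual error is set by the measurement noise, which is consistent with the sub-millimeter tracking accuracy reported in the abstract.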
Project description:Complicated structures consisting of multi-layers with a multi-modal array of device components, i.e., so-called patterned multi-layers, and their corresponding circuit designs for signal readout and addressing are used to achieve a macroscale electronic skin (e-skin). In contrast to this common approach, we realized an extremely simple macroscale e-skin only by employing a single-layered piezoresistive MWCNT-PDMS composite film with neither nano-, micro-, nor macro-patterns. Deep machine learning made it possible for such a simple bulk material to play the role of a smart sensory device. A deep neural network (DNN) enabled us to process the electrical resistance change induced by applied pressure and thereby to instantaneously evaluate the pressure level and the exact position under pressure. The great potential of this revolutionary concept for the attainment of pressure-distribution sensing on a macroscale area could expand its use not only to e-skin applications but also to other high-end applications such as touch panels, portable flexible keyboards, sign language interpreting gloves, safety diagnosis of social infrastructures, and the diagnosis of motility and peristalsis disorders in the gastrointestinal tract.
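The resistance-to-pressure mapping above can be hedged into a minimal sketch. This is not the authors' network: the electrode count, layer sizes, and output encoding are assumptions, and the randomly initialised weights stand in for parameters that would in practice be learned from labelled indentation data on the film.

```python
# Illustrative DNN inference sketch (not the published model). Assumed: a few
# edge electrodes on the unpatterned film report relative resistance changes,
# and a small MLP maps them to a pressure level and 2D contact position.
import random

random.seed(0)

N_ELECTRODES = 8   # assumed number of edge electrodes on the film
HIDDEN = 16        # assumed hidden-layer width

def dense(x, w, b):
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

# Randomly initialised weights stand in for trained parameters.
W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_ELECTRODES)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
W2 = [[random.uniform(-0.5, 0.5) for _ in range(HIDDEN)] for _ in range(3)]
b2 = [0.0] * 3

def predict(delta_r_over_r):
    """Map normalised resistance changes -> (pressure, x, y) estimates."""
    h = relu(dense(delta_r_over_r, W1, b1))
    return dense(h, W2, b2)
```

The point of the design is that all spatial discrimination lives in the learned mapping rather than in any patterned sensor array, so the readout hardware stays trivially simple.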
Project description:We report the application of a nonvolatile ionic gel as a soft, conductive interface for electrotactile stimulation. Materials characterization reveals that, compared to a conventional ionic hydrogel, a glycerol-containing ionic gel does not dry out in air, has better adhesion to skin, and exhibits a similar impedance spectrum in the range of physiological frequencies. Moreover, psychophysical experiments reveal that the nonvolatile gel also exhibits a wider window of comfortable electrotactile stimulation. Finally, a simple pixelated device is fabricated to demonstrate spatial resolution of the haptic signal.
Project description:A barrier to practical use of electrotactile stimulation for haptic feedback has been large variability in perceived sensation intensity due to changes in the impedance of the electrode-skin interface, such as when electrodes peel or users sweat. Here, we show how to significantly reduce this variability by modulating stimulation parameters in response to measurements of impedance. Our method derives from three contributions. First, we created a model between stimulation parameters and impedance at constant perceived sensation intensity by looking at the peak pulse energy and phase charge. Our model fits experimental data better than previous models (mean R2 > 0.9) and holds over a larger set of conditions (subjects, sessions, magnitudes of sensation, stimulation locations, electrode sizes). Second, we implemented a controller that regulates perceived sensation intensity by using our model to derive a new current amplitude and pulse duration in response to changes in impedance. Our controller accurately predicts subject-chosen stimulation parameters at constant sensation intensity (mean R2 > 0.9). Third, we demonstrated as a proof-of-concept on two subjects with below-elbow amputations, using a prosthesis with electrotactile touch feedback, that our controller can regulate sensation intensity in response to large impedance changes that occur in activities of daily living. These results make electrotactile stimulation for human-machine interfaces more reliable during activities of daily living.
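The controller idea above can be sketched in closed form. The abstract does not give the published model's equations, so as an illustrative assumption suppose perceived intensity is held constant by keeping both peak pulse energy E = I²·Z·T and phase charge Q = I·T at fixed targets as the electrode-skin impedance Z changes; solving those two equations yields a direct update for current amplitude I and pulse duration T.

```python
# Illustrative impedance-compensating update (not the published controller).
# Assumed constant-sensation model: peak pulse energy E = I**2 * Z * T and
# phase charge Q = I * T are both held at fixed targets.

def regulate(Z_ohm, E_target_J, Q_target_C):
    """Return (I_amps, T_seconds) that keep E and Q at their targets."""
    # Substituting T = Q/I into E = I**2 * Z * T gives E = I * Z * Q.
    I = E_target_J / (Z_ohm * Q_target_C)
    T = Q_target_C / I
    return I, T
```

Under this assumed model, a doubling of impedance halves the commanded current and doubles the pulse duration, so the charge delivered per phase stays constant while the energy target is met.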
Project description:Brain-machine interfaces (BMIs) allow individuals to control an external device by controlling their own brain activity, without requiring bodily or muscle movements. Performing voluntary movements is associated with the experience of agency ("sense of agency") over those movements and their outcomes. When people voluntarily control a BMI, they should likewise experience a sense of agency. However, using a BMI to act presents several differences compared to normal movements. In particular, BMIs lack sensorimotor feedback, afford lower controllability, and are associated with increased cognitive fatigue. Here, we explored how these different factors influence the sense of agency across two studies in which participants learned to control a robotic hand through motor imagery decoded online through electroencephalography. We observed that the lack of sensorimotor information when using a BMI did not appear to influence the sense of agency. We further observed that experiencing lower control over the BMI reduced the sense of agency. Finally, we observed that the better participants controlled the BMI, the greater was the appropriation of the robotic hand, as measured by body-ownership and agency scores. Results are discussed based on existing theories on the sense of agency in light of the importance of BMI technology for patients using prosthetic limbs.
Project description:The efficacy of wireless intracortical brain-computer interfaces (iBCIs) is limited in part by the number of recording channels, which is constrained by the power budget of the implantable system. Designing wireless iBCIs that provide the high-quality recordings of today's wired neural interfaces may lead to inadvertent over-design at the expense of power consumption and scalability. Here, we report analyses of neural signals collected from experimental iBCI measurements in rhesus macaques and from a clinical-trial participant with implanted 96-channel Utah multielectrode arrays to understand the trade-offs between signal quality and decoder performance. Moreover, we propose an efficient hardware design for clinically viable iBCIs, and suggest that the circuit design parameters of current recording iBCIs can be relaxed considerably without loss of performance. The proposed design may allow for an order-of-magnitude power savings and lead to clinically viable iBCIs with a higher channel count.
Project description:OBJECTIVE:In this article, we investigated the effects of external human-machine interfaces (eHMIs) on pedestrians' crossing intentions. BACKGROUND:Literature suggests that the safety (i.e., not crossing when unsafe) and efficiency (i.e., crossing when safe) of pedestrians' interactions with automated vehicles could increase if automated vehicles display their intention via an eHMI. METHODS:Twenty-eight participants experienced an urban road environment from a pedestrian's perspective using a head-mounted display. The behavior of approaching vehicles (yielding, nonyielding), vehicle size (small, medium, large), eHMI type (1. baseline without eHMI, 2. front brake lights, 3. Knightrider animation, 4. smiley, 5. text [WALK]), and eHMI timing (early, intermediate, late) were varied. For yielding vehicles, the eHMI changed from a nonyielding to a yielding state, and for nonyielding vehicles, the eHMI remained in its nonyielding state. Participants continuously indicated whether they felt safe to cross using a handheld button, and "feel-safe" percentages were calculated. RESULTS:For yielding vehicles, the feel-safe percentages were higher for the front brake lights, Knightrider, smiley, and text, as compared with baseline. For nonyielding vehicles, the feel-safe percentages were equivalent regardless of the presence or type of eHMI, but larger vehicles yielded lower feel-safe percentages. The text eHMI appeared to require no learning, unlike the other three eHMIs. CONCLUSION:An eHMI increases the efficiency of pedestrian-AV interactions, and a textual display is regarded as the least ambiguous. APPLICATION:This research supports the development of automated vehicles that communicate with other road users.