Dean Krusienski, Ph.D.

Professor and Graduate Program Director, Department of Biomedical Engineering, VCU College of Engineering | B.S., M.S., Ph.D., The Pennsylvania State University

  • Richmond, VA

Focusing on neural signal processing and analysis for the development of brain-computer interfaces and neuroprosthetic devices.

Contact

VCU College of Engineering


Biography

Dean J. Krusienski received the B.S., M.S., and Ph.D. degrees in electrical engineering from The Pennsylvania State University, University Park, PA. He completed his postdoctoral research at the New York State Department of Health’s Wadsworth Center Brain-Computer Interface (BCI) Laboratory in Albany, NY. His primary research focus is on the application of advanced signal processing and pattern recognition techniques to brain-computer interfaces, which allow individuals with severe neuromuscular disabilities to communicate and interact with their environments using their brainwaves. His research interests include decoding and translation of neural signals, digital signal and image processing, machine learning, evolutionary algorithms, artificial neural networks, and biomedical and musical applications. His research has been supported by the National Science Foundation (NSF), the National Institutes of Health (NIH), the Department of Defense (DoD), and the National Institute of Aerospace (NIA)/NASA.

Areas of Expertise

Brain-Computer Interfaces
EEG Analysis
Signal Processing
Machine Learning
Neuroprosthetics
Deep-brain Stimulation

Education

The Pennsylvania State University

Doctor of Philosophy

Electrical Engineering

2004

The Pennsylvania State University

Master of Science

Electrical Engineering

2001

The Pennsylvania State University

Bachelor of Science

Electrical Engineering

1999

minor in Biomedical Engineering

Research Focus

Brain-Computer Interfaces

Focusing on neural signal processing and analysis for the development of brain-computer interfaces and neuroprosthetic devices.

Research Grants

US-German Research Proposal: ADaptive low-latency SPEEch Decoding and synthesis using intracranial signals (ADSPEED)

NSF

2021-01-01

Recent research has demonstrated that it is possible to synthesize intelligible speech sounds directly from invasive measurements of brain activity. However, these approaches have a perceptible delay between brain activity and audible speech output, preventing natural spoken communication. Furthermore, the approaches generally require pre-recorded speech and thus cannot be directly applied to people who are unable to speak and generate such recordings. This project aims to develop methods for synthesizing speech from brain activity without perceptible processing delay that do not rely on pre-recorded speech from the user. The ultimate goal is to develop a system that restores natural spoken communication to the millions of people who suffer from severe speech disorders, including those with complete loss of speech.

The project is organized into three research thrusts. The first thrust focuses on asynchronous and acoustics-free model training, where novel surrogates to the user's vocalized speech will be created using approaches based on dynamic time warping and the inference of intended inner-speech acoustics from corresponding textual representations. The second thrust focuses on online validation and user adaptation, where the existing low-latency speech decoding and synthesis scheme, which is not inherently adaptable, will be validated in a closed-loop fashion using online human-subject experiments. This will provide valuable insights into how the user responds and adapts to the artificial, synthesized speech output. The third thrust focuses on the development and testing of low-latency system-user co-adaptation schemes. Co-adaptation, where both the user and system adapt to optimize the synthesized output, is crucial for revealing the elusive representations of inner (i.e., imagined or attempted) speech in the absence of a reliable surrogate for modeling. As a result, this research will simultaneously advance the understanding of the neural representations of inner speech and, in turn, co-adaptive inner speech decoding toward the development of practical closed-loop speech neuroprosthetics.
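The first thrust's use of dynamic time warping (DTW) refers to a standard technique for aligning two sequences that unfold at different rates. A minimal illustrative sketch of classic DTW follows; the sequences and the absolute-difference local cost are hypothetical examples, not the project's actual alignment pipeline:

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic time warping between two 1-D sequences.

    Uses absolute difference as the local distance; returns the total
    alignment cost and the warping path as (i, j) index pairs.
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Backtrack from the end to recover the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Align a short reference envelope with a time-stretched version of itself.
ref = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
stretched = np.array([0.0, 0.5, 1.0, 2.0, 2.0, 1.0, 0.0])
total_cost, path = dtw_path(ref, stretched)
```

In the project's context, an analogous alignment would let a surrogate acoustic target be warped onto the timing of the user's neural activity.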

EAGER: EEG-based Cognitive-state Decoding for Interactive Virtual Reality

NSF

2019-10-01

The increasing availability of affordable, high-performance virtual reality (VR) headsets creates great potential for applications including education, training, and therapy. In many applications, being able to sense a user's mental state could provide key benefits. For instance, VR environments could use brain signals such as the electroencephalogram (EEG) to infer aspects of the user's mental workload or emotional state; this, in turn, could be used to change the difficulty of a training task to make it better-suited to each user's unique experience. Using such EEG feedback could be valuable not just for training, but in improving people's performance in real applications including aviation, healthcare, defense, and driving. This project's goal is to develop methods and algorithms for integrating EEG sensors into current VR headsets, which provide a logical and unobtrusive framework for mounting these sensors. However, there are important challenges to overcome. For instance, EEG sensors in labs are typically used with a conducting gel, but for VR headsets these sensors will need to work reliably in "dry" conditions without the gel. Further, in lab settings, motion isn't an issue, but algorithms for processing the EEG data will need to account for people's head and body motion when they are using headsets.

To address these challenges, the project team will build on recent advances in dry EEG electrode technologies and motion artifact suppression algorithms, focusing on supporting passive monitoring and cognitive state feedback. Such passive feedback is likely to be more usable in virtual environments than active EEG feedback, both because people will be using other methods to interact with the environment directly and because passive EEG sensing is more robust to slower response times and decoding errors than active control. Prior studies have demonstrated the potential of EEG for cognitive-state decoding in controlled laboratory scenarios, but practical EEG integration for closed-loop neurofeedback in interactive VR environments requires addressing three critical next questions: (1) can more-practical and convenient dry EEG sensors achieve comparable results to wet sensors?, (2) can passive EEG cognitive-state decoding be made robust to movement-related artifacts?, and (3) can these decoding schemes be generalized across a variety of cognitive tasks and to closed-loop paradigms? To address these questions, classical cognitive tasks and more-complex simulated scenarios will be investigated.
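A common passive cognitive-state feature of the kind this project targets is the ratio of theta-band to alpha-band EEG power, often used as a mental-workload proxy. The sketch below is a hedged illustration, not the project's pipeline; the sampling rate, band edges, and synthetic signal are all assumptions:

```python
import numpy as np
from scipy.signal import welch

fs = 256  # Hz, assumed headset sampling rate

def band_power(sig, fs, lo, hi):
    """Average power spectral density in [lo, hi] Hz via Welch's method."""
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

def workload_index(eeg, fs=fs):
    """Theta / alpha band-power ratio, a common passive-workload proxy."""
    theta = band_power(eeg, fs, 4.0, 7.0)
    alpha = band_power(eeg, fs, 8.0, 12.0)
    return theta / alpha

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
# Synthetic trace: strong 6 Hz theta plus weaker 10 Hz alpha and noise.
eeg = (2.0 * np.sin(2 * np.pi * 6 * t)
       + 0.5 * np.sin(2 * np.pi * 10 * t)
       + 0.1 * rng.standard_normal(t.size))
idx = workload_index(eeg)  # a theta-dominant trace should yield an index above 1
```

In a closed-loop VR application, an index like this could be smoothed over time and mapped to task difficulty.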

US-German Data Sharing Proposal: CRCNS Data Sharing: REvealing SPONtaneous Speech Processes in Electrocorticography (RESPONSE)

NSF

2016-08-01

The uniquely human capability to produce speech enables swift communication of abstract and substantive information. Currently, nearly two million people in the United States, and far more worldwide, suffer from significant speech production deficits as a result of severe neuromuscular impairments due to injury or disease. In extreme cases, individuals may be unable to speak at all. These individuals would greatly benefit from a device that could alleviate speech deficits and enable them to communicate more naturally and effectively. This project will explore aspects of decoding a user's intended speech directly from the electrical activity of the brain and converting it to synthesized speech that could be played through a loudspeaker in real-time to emulate natural speaking from thought. In particular, this project will uniquely focus on decoding continuous, spontaneous speech processes to achieve a more natural and practical communication device for the severely disabled.

The complex dynamics of brain activity and the fundamental processing units of continuous speech production and perception are largely unknown, and such dynamics make it challenging to investigate these speech processes with traditional neuroimaging techniques. Electrocorticography (ECoG) measures electrical activity directly from the brain surface and covers an area large enough to provide insights about widespread networks for speech production and understanding, while simultaneously providing localized information for decoding nuanced aspects of the underlying speech processes. Thus, ECoG is instrumental and unparalleled for investigating the detailed spatiotemporal dynamics of speech. The research team's prior work has shown for the first time the detailed spatiotemporal progression of brain activity during prompted continuous speech, and that the team's Brain-to-text system can model phonemes and decode words. However, in pursuit of the ultimate objective of developing a natural speech neuroprosthetic for the severely disabled, research must move beyond studying prompted and isolated aspects of speech. This project will extend the research team's prior experiments to investigate the neural processes of spontaneous and imagined speech production. In conjunction with in-depth analysis of the recorded neural signals, the researchers will apply customized ECoG-based automatic speech recognition (ASR) techniques to facilitate the analysis of the large number of phones occurring in continuous speech.


Courses

EGRB 603. Biomedical Signal Processing

Explores the theory and application of discrete-time signal processing techniques in biomedical data processing. Includes discrete-time signals and systems, the Discrete/Fast Fourier Transforms (DFT/FFT), digital filter design and implementation, and an introduction to the processing of discrete-time random signals.
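As a brief sketch of the filter-design topic covered in the course, the following example designs a Butterworth bandpass filter and applies it with zero-phase filtering; the sampling rate, band, and synthetic signal are illustrative choices, not course materials:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500  # Hz, assumed sampling rate for a physiological recording

# 4th-order Butterworth bandpass passing 8-12 Hz (the EEG alpha band).
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)

t = np.arange(0, 4, 1 / fs)
# Synthetic signal: in-band 10 Hz component plus out-of-band 1 Hz
# baseline drift and 60 Hz line noise.
x = (np.sin(2 * np.pi * 10 * t)
     + 2.0 * np.sin(2 * np.pi * 1 * t)
     + 0.5 * np.sin(2 * np.pi * 60 * t))
y = filtfilt(b, a, x)  # forward-backward filtering gives zero phase shift

# Similarity of the filtered trace (edges trimmed) to the pure 10 Hz component.
purity = np.corrcoef(y[fs:-fs], np.sin(2 * np.pi * 10 * t)[fs:-fs])[0, 1]
```

Zero-phase filtering via `filtfilt` is the usual choice here because phase distortion would shift physiological waveform features in time.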

EGRB 601. Numerical Methods and Modeling in Biomedical Engineering

The goal of this course is to develop an enhanced proficiency in the use of computational methods and modeling to solve realistic numerical problems in advanced biomedical engineering courses, research, and careers. The course will discuss, and students will develop, advanced technical skills in the context of numerical data analysis and modeling applications in biology and medicine. An important component of this course is developing problem-solving skills and an understanding of the strengths and weaknesses of different numerical approaches applied in biomedical engineering applications.
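A small worked example of the strengths-and-weaknesses theme: forward Euler integration of a first-order elimination model (a pharmacokinetics staple) is simple but only first-order accurate, so its error can be checked against the exact exponential solution. The parameter values below are arbitrary illustrations:

```python
import numpy as np

# First-order elimination model dC/dt = -k*C with analytic solution
# C(t) = C0 * exp(-k*t); parameters are illustrative.
k, c0, dt, t_end = 0.5, 10.0, 0.01, 5.0

steps = int(t_end / dt)
c = c0
for _ in range(steps):
    c += dt * (-k * c)        # forward Euler update

exact = c0 * np.exp(-k * t_end)
rel_err = abs(c - exact) / exact  # O(dt) global error for Euler
```

Halving `dt` roughly halves `rel_err`, which is exactly the kind of accuracy/cost trade-off the course asks students to reason about.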

EGRB 308. Biomedical Signal Processing

Explores the basic theory and application of digital signal processing techniques related to the acquisition and processing of biomedical and physiological signals, including signal modeling, A/D and D/A conversion, the Fourier transform, the Z-transform, digital filter design, and continuous and discrete systems.

Selected Articles

The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings

NeuroImage

P Zanganeh Soroush, C Herff, SK Ries, JJ Shih, T Schultz, DJ Krusienski

2023-04-01

Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. In order to continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing various degrees of decreasing behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate the existence of a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels from the higher behavioral output modes. This provides important insights for the elusive goal of developing more effective imagined speech decoding models with respect to their better-established overt speech decoding counterparts.
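A speech activity detection model of the general kind described above can be sketched as per-channel spectral features feeding a linear classifier. This is a hedged toy reconstruction on simulated data, not the study's method: the sampling rate, band, window length, and channel layout are all assumptions, and the study used real intracranial recordings rather than synthetic noise:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

fs = 1000  # Hz, assumed intracranial sampling rate
rng = np.random.default_rng(1)

def high_gamma_power(window, fs=fs):
    """Mean 70-170 Hz power per channel for a (channels x samples) window."""
    f, pxx = welch(window, fs=fs, nperseg=256, axis=-1)
    mask = (f >= 70) & (f <= 170)
    return pxx[:, mask].mean(axis=-1)

# Simulate 200 half-second windows over 8 channels: "speech" windows carry
# extra broadband high-frequency energy on a subset of channels.
n_win, n_ch, n_samp = 200, 8, fs // 2
labels = rng.integers(0, 2, n_win)
X = []
for lab in labels:
    w = rng.standard_normal((n_ch, n_samp))
    if lab:
        w[:4] += 1.5 * rng.standard_normal((4, n_samp))  # boosted channels
    X.append(high_gamma_power(w))
X = np.array(X) * 1e3  # rescale tiny PSD units for optimizer conditioning

clf = LogisticRegression(max_iter=1000).fit(X[:150], labels[:150])
acc = clf.score(X[150:], labels[150:])
```

The classifier's weights then play the role of "relevant channels": zeroing out low-weight channels is one crude way to probe the nested-subset structure the abstract describes.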

Prefrontal High Gamma in ECoG Tags Periodicity of Musical Rhythms in Perception and Imagination

eNeuro

SA Herff, C Herff, AJ Milne, GD Johnson, JJ Shih, DJ Krusienski

2020-06-25

Rhythmic auditory stimuli are known to elicit matching activity patterns in neural populations. Furthermore, recent research has established the particular importance of high-gamma brain activity in auditory processing by showing its involvement in auditory phrase segmentation and envelope tracking. Here, we use electrocorticographic (ECoG) recordings from eight human listeners to see whether periodicities in high-gamma activity track the periodicities in the envelope of musical rhythms during rhythm perception and imagination. Rhythm imagination was elicited by instructing participants to imagine the rhythm to continue during pauses of several repetitions. To identify electrodes whose periodicities in high-gamma activity track the periodicities in the musical rhythms, we compute the correlation between the autocorrelations (ACCs) of both the musical rhythms and the neural signals. A condition in which participants listened to white noise was used to establish a baseline. High-gamma autocorrelations in auditory areas in the superior temporal gyrus and in frontal areas on both hemispheres significantly matched the autocorrelations of the musical rhythms. Overall, numerous significant electrodes are observed on the right hemisphere. Of particular interest is a large cluster of electrodes in the right prefrontal cortex that is active during both rhythm perception and imagination. This indicates conscious processing of the rhythms’ structure as opposed to mere auditory phenomena. The autocorrelation approach clearly highlights that high-gamma activity measured from cortical electrodes tracks both attended and imagined rhythms.
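The autocorrelation-matching approach described above can be sketched in a few lines: compute the autocorrelation of the rhythm envelope and of each neural signal, then correlate the two autocorrelation functions. The data below are synthetic stand-ins (the study used ECoG high-gamma activity), and the sampling rate and lag range are assumptions:

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation of x for lags 1..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

fs = 100  # Hz, assumed envelope sampling rate
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(2)

# Rhythm envelope with a 2 Hz periodicity, one signal that tracks it,
# and one unrelated signal standing in for a non-responsive electrode.
rhythm = (np.sin(2 * np.pi * 2 * t) > 0.5).astype(float)
tracking = rhythm + 0.5 * rng.standard_normal(t.size)
nontracking = rng.standard_normal(t.size)

max_lag = fs  # examine periodicities up to 1 second
acc_rhythm = autocorr(rhythm, max_lag)
r_track = np.corrcoef(acc_rhythm, autocorr(tracking, max_lag))[0, 1]
r_non = np.corrcoef(acc_rhythm, autocorr(nontracking, max_lag))[0, 1]
```

Comparing ACCs rather than raw signals is what makes the method phase-insensitive: an electrode can track the rhythm's periodicity without being locked to its exact phase.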


The Potential of Stereotactic-EEG for Brain-Computer Interfaces: Current Progress and Future Directions

Frontiers in Neuroscience (Neuroprosthetics)

C Herff, DJ Krusienski, P Kubben

2020-02-27

Stereotactic electroencephalography (sEEG) utilizes localized, penetrating depth electrodes to measure electrophysiological brain activity. It is most commonly used in the identification of epileptogenic zones in cases of refractory epilepsy. The implanted electrodes generally provide a sparse sampling of a unique set of brain regions, including deeper brain structures such as the hippocampus, amygdala, and insula, that cannot be captured by superficial measurement modalities such as electrocorticography (ECoG). Despite the overlapping clinical application and recent progress in decoding of ECoG for Brain-Computer Interfaces (BCIs), sEEG has thus far received comparatively little attention for BCI decoding. Additionally, the success of the related deep-brain stimulation (DBS) implants bodes well for the potential of chronic sEEG applications. This article provides an overview of sEEG technology, BCI-related research, and prospective future directions of sEEG for long-term BCI applications.

