Dean J. Krusienski received the B.S., M.S., and Ph.D. degrees in electrical engineering from The Pennsylvania State University, University Park, PA. He completed his postdoctoral research at the New York State Department of Health’s Wadsworth Center Brain-Computer Interface (BCI) Laboratory in Albany, NY. His primary research focus is on the application of advanced signal processing and pattern recognition techniques to brain-computer interfaces, which allow individuals with severe neuromuscular disabilities to communicate and interact with their environments using their brainwaves. His research interests include decoding and translation of neural signals, digital signal and image processing, machine learning, evolutionary algorithms, artificial neural networks, and biomedical and musical applications. His research is supported by the National Science Foundation (NSF), the National Institutes of Health (NIH), and the National Institute of Aerospace (NIA)/NASA.
The Pennsylvania State University: Doctor of Philosophy, Electrical Engineering, 2004
Selected Articles (3)
T Schultz, M Wand, T Hueber, DJ Krusienski, C Herff, JS Brumberg
Speech is a complex process involving a wide range of biosignals, including but not limited to acoustics. These biosignals (stemming from the articulators, the articulator muscle activities, the neural pathways, and the brain itself) can be used to circumvent limitations of conventional speech processing in particular, and to gain insights into the process of speech production in general. Research on biosignal-based speech processing is a wide and very active field at the intersection of various disciplines, ranging from engineering, computer science, electronics, and machine learning to medicine, neuroscience, physiology, and psychology. Consequently, a variety of methods and approaches have been used to pursue the common goal of creating biosignal-based speech processing devices for communication applications in everyday situations and for speech rehabilitation, as well as gaining a deeper understanding of spoken communication. This paper gives an overview of the various modalities, research approaches, and objectives for biosignal-based spoken communication.
NR Waytowich, Y Yamani, DJ Krusienski
Steady-state visual evoked potentials (SSVEPs) are oscillations of the electroencephalogram (EEG), observed mainly over the occipital area, that exhibit a frequency corresponding to a repetitively flashing visual stimulus. SSVEPs have proven to be highly consistent and reliable signals for rapid EEG-based brain-computer interface (BCI) control. There is conflicting evidence regarding whether solid or checkerboard-patterned flashing stimuli produce superior BCI performance. Furthermore, the spatial frequency of checkerboard stimuli can be varied to optimize performance. The present study empirically evaluates the performance of a 4-class SSVEP-based BCI when the spatial frequency of the individual checkerboard stimuli is varied over a continuum ranging from a solid background to single-pixel checkerboard patterns. The results indicate that a spatial frequency of 2.4 cycles per degree can maximize the information transfer rate while reducing subjective visual irritation compared to lower spatial frequencies. This finding on stimulus design can lead to improved performance and usability of SSVEP-based BCIs.
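As a rough illustration (not drawn from the paper itself), the spatial frequency of a checkerboard stimulus in cycles per degree can be related to check size and viewing distance: one spatial cycle spans one light check plus one dark check, and the angular size of a check follows from simple trigonometry. The function name and the example values below (a ~2.2 mm check viewed at 60 cm) are assumptions chosen only to demonstrate the geometry.

```python
import math

def cycles_per_degree(check_width_cm, viewing_distance_cm):
    """Approximate spatial frequency of a checkerboard stimulus.

    One cycle = one light check + one dark check, so the spatial
    frequency is the reciprocal of twice the angular width (in
    degrees of visual angle) of a single check.
    """
    # Angular width of one check, in degrees of visual angle
    check_deg = math.degrees(
        2 * math.atan(check_width_cm / (2 * viewing_distance_cm))
    )
    return 1 / (2 * check_deg)

# Hypothetical example: ~2.2 mm checks viewed at 60 cm land near
# 2.4 cycles per degree, the optimum reported in the study.
print(round(cycles_per_degree(0.218, 60), 2))
```

Smaller checks (higher spatial frequency) or longer viewing distances both increase the cycles-per-degree value, which is why the stimulus continuum in the study runs from a solid background (zero spatial frequency) up to single-pixel checks.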
JS Brumberg, DJ Krusienski, S Chakrabarti, A Gunduz, P Brunner, AL Ritaccio, G Schalk
How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech. Specifically, we asked subjects to repeat continuous sentences aloud or silently while we recorded electrical signals directly from the surface of the brain (electrocorticography, ECoG). We then determined the relationship between cortical activity and speech output across different areas of cortex and at sub-second timescales. The results highlight a spatio-temporal progression of cortical involvement in the continuous speech process that initiates utterances in frontal-motor areas and ends with the monitoring of auditory feedback in the superior temporal gyrus. Direct comparison of cortical activity related to overt versus covert conditions revealed a common network of brain regions involved in speech that may implement orthographic and phonological processing. Our results provide one of the first characterizations of the spatio-temporal electrophysiological representations of the continuous speech process, and also highlight the common neural substrate of overt and covert speech. These results thereby contribute to a refined understanding of speech functions in the human brain.