Ryan Stables is a Senior Lecturer in Digital Audio Processing and the subject leader for Sound Technology, based in our Digital Media Technology (DMT) Lab. He currently teaches in the areas of digital audio effects, digital signal processing and audio software development.
Ryan currently leads The SAFE Project, which will provide a suite of audio plug-ins able to recognise transferable semantic terms. His work on both intelligent music production and cancer diagnostics via data sonification has received national and international news coverage.
He is a visiting researcher at the University of Strathclyde and currently supervises research students in the areas of semantic audio engineering, intelligent music production and audio source separation.
Areas of Expertise (4)
Digital Signal Processing
Education
Birmingham City University: Ph.D., Digital Music Processing

Affiliations
- Audio Engineering Society
Selected Media Appearances (4)
The Sound of Cancer
BBC Radio 4/World Service online
You'll hear the sounds of a cell with cancer and a healthy cell. Ryan Stables is a researcher at the Digital Media Technology Laboratory at Birmingham City University who's helped develop a way of detecting cancerous cells using sound...
Detecting Cancer by Sound
Scientific American online
Ryan Stables, a musician and digital media technologist at Birmingham City University in England, working with an analytic chemist and a physicist, transformed these visual signals into audio sounds...
Exclusive: Revolutionary Laser Will Allow On-the-Spot Cancer Diagnosis in Seconds
Express U.K. online
Dr Ryan Stables said: “This could change the way we approach cancer diagnosis so it is faster, potentially saving thousands of lives. This method of identifying cancerous cells is similar to that of using a metal detector. It allows you to recognise the characteristics of cancer in real-time, which we hope could have life-changing implications.”
‘Metal Detector for Cancer’ Helps GPs Spot Tumours Instantly by Listening to the Cells
Daily Mail online
‘It’s like a metal detector for cancer,’ says Dr Ryan Stables, researcher for the School of Digital Media Technology at Birmingham City University. ‘Just as a Geiger counter clicks when it picks up radiation, sonification can indicate the presence of tumours by different sounds made by different types of cells. As well as developing better diagnostic tools, we are hopeful the research could be used for treating other diseases.’
Selected Articles (5)
2017 The Web Audio API introduced native audio processing into web browsers. Audio plugin standards have since been created that allow developers to build audio processors and deploy them in media-rich websites. It is critical that these standards support flexible designs with clear host-plugin interaction, to ease integration and avoid non-standard plugins. Intelligent features should be embedded into these standards to support next-generation interfaces and designs. This paper discusses audio plugins in the Web Audio API: how they should behave, how they can leverage web technologies, and an overview of current standards.
2016 Spectroscopic diagnostics have been shown to be an effective tool for the analysis and discrimination of disease states from human tissue. Furthermore, Raman spectroscopic probes are of particular interest as they allow for in vivo spectroscopic diagnostics, for tasks such as the identification of tumour margins during surgery. In this study, we investigate a feature-driven approach to the classification of metastatic brain cancer, glioblastoma (GB) and non-cancer from tissue samples, and we provide a real-time feedback method for endoscopic diagnostics using sound. To do this, we first evaluate the sensitivity and specificity of three classifiers (SVM, KNN and LDA), when trained with both sub-band spectral features and principal components taken directly from Raman spectra. We demonstrate that the feature extraction approach provides an increase in classification accuracy of 26.25% for SVM and 25% for KNN. We then discuss the molecular assignment of the most salient sub-bands in the dataset. The most salient sub-band features are mapped to parameters of a frequency modulation (FM) synthesizer in order to generate audio clips from each tissue sample. Based on the properties of the sub-band features, the synthesizer was able to maintain similar sound timbres within the disease classes and provide different timbres between disease classes. This was reinforced via listening tests, in which participants were able to discriminate between classes with mean classification accuracy of 71.1%. Providing intuitive feedback via sound frees the surgeons' visual attention to remain on the patient, allowing for greater control over diagnostic and surgical tools during surgery, and thus promoting clinical translation of spectroscopic diagnostics.
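The core sonification idea in this abstract, mapping sub-band features to the parameters of an FM synthesizer so that different disease classes produce different timbres, can be sketched as follows. The feature-to-parameter mapping below is illustrative only (the abstract does not specify the actual mapping), and the feature values are hypothetical:

```python
import numpy as np

SAMPLE_RATE = 44100

def fm_tone(carrier_hz, mod_hz, mod_index, duration_s=1.0, sr=SAMPLE_RATE):
    """Two-operator FM synthesis: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))."""
    t = np.arange(int(duration_s * sr)) / sr
    return np.sin(2 * np.pi * carrier_hz * t
                  + mod_index * np.sin(2 * np.pi * mod_hz * t))

def sonify_sample(subband_features):
    """Map normalised sub-band intensities in [0, 1] to FM parameters.

    Hypothetical mapping: the paper maps salient Raman sub-bands to FM
    parameters, but the exact assignment is not given in the abstract.
    """
    f1, f2, f3 = subband_features[:3]
    carrier = 220.0 + 440.0 * f1  # pitch tracks the first feature
    mod = 50.0 + 200.0 * f2       # modulator frequency tracks the second
    index = 1.0 + 8.0 * f3        # modulation index (brightness) tracks the third
    return fm_tone(carrier, mod, index)

# Samples with different feature profiles yield audibly different timbres.
healthy = sonify_sample(np.array([0.2, 0.1, 0.1]))
tumour = sonify_sample(np.array([0.8, 0.7, 0.9]))
```

Because all three parameters move together with the features, samples from the same class (similar features) land in nearby regions of the synthesizer's timbre space, which is the property the listening tests relied on.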
2016 In music production, descriptive terminology is used to define perceived sound transformations. By understanding the underlying statistical features associated with these descriptions, we can aid the retrieval of contextually relevant processing parameters using natural language, and create intelligent systems capable of assisting in audio engineering. In this study, we present an analysis of a dataset containing descriptive terms gathered using a series of processing modules, embedded within a Digital Audio Workstation. By applying hierarchical clustering to the audio feature space, we show that similarity in term representations exists within and between transformation classes. Furthermore, the organisation of terms in low-dimensional timbre space can be explained using perceptual concepts such as size and dissonance. We conclude by performing Latent Semantic Indexing to show that similar groupings exist based on term frequency.
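The hierarchical clustering step described here can be sketched with SciPy. The feature vectors below are synthetic stand-ins for the paper's audio feature space (e.g. spectral changes produced by transforms labelled with two descriptive terms); the cluster structure, not the data, is the point:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical feature vectors for two descriptive terms, each forming
# a tight group in a 2-D audio feature space.
warm = rng.normal(loc=[-1.0, 0.5], scale=0.1, size=(10, 2))
bright = rng.normal(loc=[1.0, -0.5], scale=0.1, size=(10, 2))
features = np.vstack([warm, bright])

# Agglomerative clustering with Ward linkage over the feature space,
# then cut the dendrogram into two flat clusters.
Z = linkage(features, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Cutting the dendrogram at different depths exposes similarity both between and within term groups, which is how the study examines term representations across transformation classes.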
2016 In semantic equalisation, descriptions of audio transformations can be used to control low-level audio effect parameters. In this paper, we explore sub-representations of these descriptions in order to suggest more contextually relevant processing parameters to users, based on external influence. We propose a methodology for finding sub-representations, and an intuitive low-dimensional interface, which can be used to recommend equalisation curves based on proximity to cluster centroids.
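The recommendation mechanism described here, suggesting an equalisation curve based on proximity to cluster centroids in a low-dimensional space, can be sketched as a nearest-centroid lookup. The centroid positions, term names, and gain values below are all hypothetical placeholders:

```python
import numpy as np

# Hypothetical cluster centroids in a 2-D embedding, each paired with a
# representative EQ gain curve (dB per band); names and values are illustrative.
centroids = {
    "warm":   (np.array([-1.0,  0.3]), np.array([ 3.0,  1.0, -1.0, -2.0, -3.0])),
    "bright": (np.array([ 1.0, -0.2]), np.array([-3.0, -1.0,  0.0,  2.0,  4.0])),
}

def recommend_eq(point):
    """Return the EQ curve of the centroid nearest to the user's
    position in the low-dimensional interface."""
    name = min(centroids, key=lambda k: np.linalg.norm(point - centroids[k][0]))
    return name, centroids[name][1]

name, curve = recommend_eq(np.array([0.8, 0.0]))  # nearer the "bright" centroid
```

In the paper the centroids come from sub-representations of the collected descriptions, so the recommendation adapts to context rather than being hand-specified as here.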
2016 The Web Audio Evaluation Tool is an open-source, browser-based framework for creating and conducting listening tests. It allows remote deployment, GUI-guided setup, and analysis in the browser. While currently being used for listening tests in various fields, it was initially developed specifically for the study of music production practices. In this work, we highlight some of the features that facilitate evaluation of such content.