Experts Matter. Find Yours.
Connect for media, speaking, professional opportunities & more.

Chemical and Life Science Engineering Professor Michael “Pete” Peters, Ph.D., is investigating more efficient ways to manufacture biologic pharmaceuticals using a radial flow bioreactor he developed. With applications in vaccines and other personalized therapeutic treatments, biologics are versatile. Their genetic base can be manipulated to create a variety of effects, from fighting infections by stimulating an immune response to promoting weight loss by producing a specific hormone in the body.

Ozempic, Wegovy and Victoza are some of the brand names for glucagon-like peptide-1 (GLP-1) receptor agonists used to treat diabetes. These drugs mimic the GLP-1 peptide, a hormone naturally produced in the body that regulates appetite, hunger and blood sugar.

“I have a lot of experience with helical peptides like GLP-1 from my work with COVID therapeutics,” says Peters. “When it was discovered that these biologic pharmaceuticals can help with weight loss, demand spiked. These drug types were designed for people with type-2 diabetes, and those diabetic patients couldn’t get their GLP-1 treatments. We wanted to find a way for manufacturers to scale up production to meet demand, especially now that further study of GLP-1 has revealed other applications for the drug, like smoking cessation.”

Continuous Manufacturing of Biologic Pharmaceuticals

Pharmaceuticals come in two basic forms: small-molecule and biologic. Small-molecule medicines are synthetically produced via chemical reactions, while biologics are produced from microorganisms. Both types of medications are traditionally produced in a batch process, where base materials are fed into a staged system that produces “batches” of the small-molecule or biologic medication. This process is similar to a chef baking a single cake. Once these materials are exhausted, the batch is complete and the entire system needs to be reset before the next batch begins.

“The batch process can be cumbersome,” says Peters.
“Shutting the whole process down and starting it up costs time and money. And if you want a second batch, you have to go through the entire process again after sterilization. Scaling the manufacturing process up is another problem because doubling the system size doesn’t equate to doubling the product. In engineering, that’s called nonlinear phenomena.”

Continuous manufacturing improves efficiency and scalability by creating a system where production is ongoing over time rather than staged. These techniques can lead to “end-to-end” continuous manufacturing, which is ideal for producing high-demand biologic pharmaceuticals like Ozempic, Wegovy and Victoza. Virginia Commonwealth University’s Medicines for All Institute is also focused on these production innovations.

Peters’ continuous manufacturing system for biologics is called a radial flow bioreactor. A disk containing the microorganisms used for production sits on a fixture with a tube coming up through the center of the disk. As the transport fluid comes up the tube, the laminar flow created as it exits the tube spreads it evenly and continuously over the disk. The interaction between the transport medium and the microorganisms on the disk creates the biologic pharmaceutical, which is then carried away by the flow of the transport medium for continuous collection. Flowing the transport medium over a disk coated with biologic-producing microorganisms allows the radial flow bioreactor to produce biologic pharmaceuticals continuously.

“There are many advantages to a radial flow bioreactor,” says Peters. “It takes minutes to switch out the disk with the biologic-producing microorganisms. While continuously producing your biologic pharmaceutical, a manufacturer could have another disk in an incubator. Once the microorganisms in the incubator have grown to completely cover the disk, flow of the transport medium liquid to the radial flow bioreactor is shut off.
The disk is replaced and then the transport medium flow resumes. That’s minutes for a production changeover instead of the many hours it takes to reset a system in the batch flow process.”

The Building Blocks of Biologic Pharmaceuticals

Biologic pharmaceuticals are natural molecules created by genetically manipulating microorganisms, like bacteria or mammalian cells. The technology involves designing and inserting a DNA plasmid that carries genetic instructions to the cells. This genetic code is a nucleotide sequence used by the cell to create proteins capable of performing a diverse range of functions within the body. Like musical notes, each nucleotide represents specific genetic information. The arrangement of these sequences, like notes in a song, changes what the cell is instructed to do. In the same way notes can be arranged to create different musical compositions, nucleotide sequences can completely alter a cell’s behavior.

Microorganisms transcribe the inserted DNA into a much smaller mRNA molecule. The mRNA molecule’s nucleotide code is then translated into a chain of amino acids, forming a polypeptide that eventually folds into a protein that can act within the body.

“One of the disadvantages of biologic design is the wide range of molecular conformations biological molecules can adopt,” says Peters. “Small-molecule medications, on the other hand, are typically more rigid but difficult to design via first-principle engineering methods. A lot of my focus has been on helical peptides, like GLP-1, which are programmable biologic pharmaceuticals designed from first principles and have the stability of a small molecule.”

The stability Peters describes comes from the helical peptide’s structure, an alpha helix in which the amino acid chain coils into a right-handed spiral. Hydrogen bonds along the peptide’s backbone create a repeating pattern that pulls the helix tightly together to resist conformational changes.
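The transcription-and-translation flow described above can be sketched in a few lines of Python. This is a simplified illustration only: the codon table below is a small excerpt of the real 64-codon genetic code, and real translation involves much more cellular machinery.

```python
# Simplified sketch of the flow described above:
# DNA -> mRNA (transcription), then mRNA codons -> amino acids (translation).
# CODON_TABLE is a tiny excerpt of the real 64-codon genetic code.

CODON_TABLE = {
    "AUG": "Met",  # start codon
    "CAU": "His", "GCU": "Ala", "GAA": "Glu",
    "GGU": "Gly", "UUU": "Phe", "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """Transcribe a DNA coding strand into mRNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    """Read the mRNA three nucleotides at a time into amino acids."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

mrna = transcribe("ATGCATGCTTAA")   # -> "AUGCAUGCUUAA"
print(translate(mrna))              # -> ['Met', 'His', 'Ala']
```

Rearranging the nucleotides, as the musical-notes analogy suggests, changes which amino acids are chained together and therefore which protein the cell produces.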
“It’s why we used it in our COVID therapeutic, and its relative stability makes it an excellent candidate for continuous GLP-1 production,” says Peters.

Programming the Cell

Chemical and Life Science Engineering Assistant Professor Leah Spangler, Ph.D., is an expert at instructing cells to make specific things. Her materials science background employs proteins to build or manipulate products not found in nature, like purifying rare-earth elements for use in electronics.

“My lab’s function is to make proteins every day,” says Spangler. “The kind of proteins we make depends entirely on the project they are for. More specifically, I use proteins to make things that don’t occur in nature. The reason proteins don’t build things like solar cells or the quantum dots used in LCD TVs is because nature is not going to evolve a solar cell or a display surface. Nature doesn’t know what either of those things are. However, proteins can be instructed to build these items, if we code them to.”

Spangler is collaborating with Peters on the development of his radial flow bioreactor, specifically to engineer a bacterial cell capable of continuously producing biologic pharmaceuticals.

“We build proteins by leveraging bacteria to make them for us,” says Spangler. “It’s a well-known technology. For this project, we’re hypothesizing that Escherichia coli (E. coli) can be modified to make GLP-1. Personally, I like working with E. coli because it’s a simple bacterium that has been thoroughly studied, so there are lots of tools available for working with it compared to other cell types.”

Development of the process and technique to use E. coli with the radial flow bioreactor is ongoing.

“Working with Dr. Spangler has been a game changer for me,” says Peters. “She came to the College of Engineering with a background in protein engineering and an expertise with bacteria. Most of my work was in mammalian cells, so it’s been a great collaboration.
We’ve been able to work together and develop this bioreactor to produce GLP-1.”

Other Radial Flow Bioreactor Applications

Just as the GLP-1 peptide has found applications beyond diabetes treatment, the radial flow bioreactor can also be used in different roles. Peters is currently exploring the reactor’s viability for harnessing solar energy.

“One of the things we’ve done with the internal disk is to use it as a solar panel,” says Peters. “The disk can be a black body that absorbs light and gets warm. If you run water through the system, the water also absorbs the radiation’s energy. The radial flow pattern automatically optimizes energy driving forces with fluid residence time. That makes for a very effective solar heating system. This heating system is a simple proof of concept. Our next step is to determine a method that harnesses solar radiation to create electricity in a continuous manner.”

The radial flow bioreactor can also be implemented for environmental cleanup. With a disk tailored for water filtration, desalination or bioremediation, untreated water can be pushed through the system until it reaches a satisfactory level of purification.

“The continuous bioreactor design is based on first principles of engineering that our students are learning through their undergraduate education,” says Peters. “The nonlinear scaling laws and performance predictions are fundamentally based. In this day of continued emphasis on empirical AI algorithms, the diminishing understanding of the fundamental physics, chemistry, biology and mathematics that underlie engineering principles is a challenge. It’s important we not let first-principles and fundamental understanding be degraded from our educational mission, and projects like the radial flow bioreactor help students see these important fundamentals in action.”

AI-powered model predicts post-concussion injury risk in college athletes
Athletes who suffer a concussion have a serious risk of reinjury after returning to play, but identifying which athletes are most vulnerable has always been a bit of a mystery, until now. Using artificial intelligence (AI), University of Delaware researchers have developed a novel machine learning model that predicts an athlete’s risk of lower-extremity musculoskeletal (MSK) injury after concussion with 95% accuracy.

A recent study published in Sports Medicine details the development of the AI model, which builds on previously published research showing that the risk of post-concussion injury doubles, regardless of the sport. The most common post-concussive injuries include sprains, strains, or even broken bones or torn ACLs.

“This is due to brain changes we see post-concussion,” said Thomas Buckley, professor of kinesiology and applied physiology at the College of Health Sciences. These brain changes affect athletes’ balance, cognition and reaction times and can be difficult to detect in standard clinical testing. “Even a minuscule difference in balance, reaction time, or cognitive processing of what’s happening around you can make the difference between getting hurt and not,” Buckley said.

How AI is changing injury risk assessment

Recognizing the need for enhanced injury risk reduction tools, Buckley collaborated with colleagues in UD’s College of Engineering, Austin Brockmeier, assistant professor of electrical and computer engineering, and César Claros, a fourth-year doctoral student; Wei Qian, associate professor of statistics in the College of Agriculture and Natural Resources; and former KAAP postdoctoral fellow Melissa Anderson, who’s now an assistant professor at Ohio University.

To assess injury risk, Brockmeier and Claros developed a comprehensive AI model that analyzes more than 100 variables, including sports and medical histories, concussion type, and pre- and post-concussion cognitive data.
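The study's actual features and code are not described in this article. As a rough, hypothetical illustration of how pre- and post-concussion measurements might be turned into model inputs, one could express each post-concussion value as a relative change from the athlete's own baseline before feeding it to a classifier (all metric names and numbers below are invented for the example):

```python
# Illustrative sketch only (not the published model): express each
# post-concussion measurement as a relative deviation from the athlete's
# own pre-season baseline, so a downstream classifier sees individual
# change rather than absolute values.

def baseline_deviation_features(baseline: dict, post: dict) -> dict:
    """Relative change of each metric vs. the athlete's own baseline."""
    return {
        metric: (post[metric] - baseline[metric]) / baseline[metric]
        for metric in baseline
        if metric in post and baseline[metric] != 0
    }

# Hypothetical athlete: slightly slower reaction time, small balance deficit.
baseline = {"reaction_time_ms": 250.0, "balance_score": 88.0}
post = {"reaction_time_ms": 265.0, "balance_score": 84.0}

features = baseline_deviation_features(baseline, post)
print(features)  # reaction time up 6%, balance score down about 4.5%
```

Framing inputs this way matches the intuition quoted below: small individual deviations from a personal baseline can matter more than absolute numbers.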
“Every athlete is unique, especially across various sports,” said Brockmeier. “Tracking an athlete’s performance over time, rather than relying on absolute values, helps identify disturbances, deviations, or deficits that, when compared to their baseline, may signal an increased risk of injury.”

While some sports, such as football, carry higher injury risk, the model revealed that individual factors are just as important as the sport played. “We tested a version of the model that doesn’t have access to the athlete’s sport, and it still accurately predicted injury risk,” Brockmeier said. “This highlights how unique characteristics—not just the inherent risks of a sport—play a critical role in determining the likelihood of future injury.”

The research, which tracked athletes over two years, also found that the risk of MSK injury post-concussion extends well into the athlete’s return to play. “Common sense would suggest that injuries would occur early in an athlete’s return to play, but that’s simply not true,” said Buckley. “Our research shows that the risk of future injury increases over time as athletes compensate and adapt to small deficits they may not even be aware of.”

The next step for Buckley’s Concussion Research Lab is to collaborate further with UD Athletics’ strength and conditioning staff to design real-time interventions that could reduce injury risk.

Beyond sports: AI’s potential in aging research

The implications of the UD-developed machine learning model extend far beyond sports. Brockmeier believes the algorithm could be used to predict fall risk in patients with Parkinson’s disease. Claros is also exploring how the injury risk model can be applied to aging research with the Delaware Center for Cognitive Aging.
“We want to use brain measurements to investigate whether baseline lifestyle measurements such as weight, BMI, and smoking history are predictive of future mild cognitive impairment or Alzheimer’s disease,” said Claros. To arrange an interview with Buckley, email UD's media relations team at MediaRelations@udel.edu

A final disbursement of $8.8 million completes the $17.8 million grant awarded by the Department of Defense (DoD) to Virginia Commonwealth University’s (VCU) Convergence Lab Initiative (CLI). The funding allows CLI to continue advancing research in the areas of quantum and photonic devices, microelectronics, artificial intelligence, neuromorphic computing, arts and biomedical science.

“The Convergence Lab Initiative represents a unique opportunity to drive innovation at the intersection of advanced technologies, preparing our students to tackle the critical challenges of tomorrow,” said Nibir Dhar, Ph.D., electrical and computer engineering professor and CLI director. “By combining cutting-edge research in electro-optics, infrared, radio frequency and edge computing, we are equipping the next generation of engineers with the skills to shape the future of both defense and commercial industries.”

Working with Industry

Partnership is at the heart of CLI and what makes the initiative unique. CivilianCyber, Sivananthan Laboratories and the University of Connecticut are among several collaborators focusing on cutting-edge, multidisciplinary research and workforce development. The lightweight, low-power components CLI helps develop are capable of transforming military operations and also have commercial applications. The Convergence Lab Initiative has 25 collaborative projects in this area, focused on:

Electro-optic and Infrared Technologies: Enhancing thermal imaging for medical diagnostics, search-and-rescue operations and environmental monitoring. This improves military intelligence, surveillance and reconnaissance capabilities.

Radio Frequency and Beyond-5G Communication: Developing ultra-fast, low-latency communication systems for autonomous vehicles, smart cities and telemedicine. Advancements in this area also address electronic warfare challenges and security vulnerabilities.

Optical Communication in the Infrared Wavelength: Increasing data transmission rates to create more efficient networks that support cloud computing, data centers, AI research and covert military communications.

Edge Technologies: Creating low size, weight and power (SWaP) computing solutions for deployment in constrained environments, such as wearables, medical devices, internet-of-things devices and autonomous systems. These technologies enhance real-time decision-making capabilities for agriculture, healthcare, industrial automation and defense.

Benefits for Students

College of Engineering students at VCU have an opportunity to engage with cutting-edge research as part of the DoD grant. Specialized workforce development programs, like the Undergraduate CLI Scholars Program, provide hands-on experience with advanced technologies. The STEM training also includes students from a diverse range of educational backgrounds to encourage a cross-disciplinary environment.

Students can also receive industry-specific training through CLI’s Skill-Bridge Program. Unlike the DoD program for transitioning military personnel, the CLI Skill-Bridge is open to students from VCU and other local universities, creating direct connections between industry needs and academic training. This two-way relationship between academia and industry is unlike traditional academic research centers. With the College of Engineering’s focus on public-private partnerships, VCU becomes a registered partner with the participating businesses, collaborating to design individualized training programs focused on the CLI’s core research areas. This approach ensures students receive relevant, up-to-date training while companies gain access to a pipeline of skilled talent familiar with the latest industry trends and innovations.

“The significance of this grant extends beyond immediate research outcomes.
It addresses critical capability gaps for both the DoD and commercial sectors,” says Dhar. “This dual-use approach maximizes DoD investment impacts and accelerates innovation in areas that affect everyday life — from healthcare and environmental monitoring to communication networks and smart infrastructure. Breakthroughs emerging from these collaborations will strengthen national security while creating commercial spinoffs that drive economic growth and improve quality of life for communities both locally and globally. Advances in infrared technology, in particular, will position the VCU College of Engineering as a center for defense technologies and new ideas.”

Why generative AI 'hallucinates' and makes up stuff
Generative artificial intelligence tools, like OpenAI’s GPT-4, are sometimes full of bunk. Yes, they excel at tasks involving human language, like translating, writing essays, and acting as a personalized writing tutor. They even ace standardized tests. And they’re rapidly improving. But they also “hallucinate,” which is the term scientists use to describe when AI tools produce information that sounds plausible but is incorrect. Worse, they do so with such confidence that their errors are sometimes difficult to spot. Christopher Kanan, an associate professor of computer science with an appointment at the Goergen Institute for Data Science and Artificial Intelligence at the University of Rochester, explains that the reasoning and planning capabilities of AI tools are still limited compared with those of humans, who excel at continual learning. “They don’t continually learn from experience,” Kanan says of AI tools. “Their knowledge is effectively frozen after training, meaning they lack awareness of recent developments or ongoing changes in the world.” Current generative AI systems also lack what’s known as metacognition. “That means they typically don’t know what they don’t know, and they rarely ask clarifying questions when faced with uncertainty or ambiguous prompts,” Kanan says. “This absence of self-awareness limits their effectiveness in real-world interactions.” Kanan is an expert in artificial intelligence, continual learning, and brain-inspired algorithms who welcomes inquiries from journalists and knowledge seekers. He recently shared his thoughts on AI with WAMC Northeast Public Radio and with the University of Rochester News Center. Reach out to Kanan by clicking on his profile.

Decoding the Future of AI: From Disruption to Democratisation and Beyond
The global AI landscape has become a melting pot for innovation, with diverse thinking pushing the boundaries of what is possible. Its application extends beyond just technology, reshaping traditional business models and redefining how enterprises, governments, and societies operate. Advancements in model architectures, training techniques and the proliferation of open-source tools are lowering barriers to entry, enabling organisations of all sizes to develop competitive AI solutions with significantly fewer resources. As a result, the long-standing notion that AI leadership is reserved for entities with vast computational and financial resources is being challenged.

This shift is also redrawing the global AI power balance, with a decentralised approach to AI where competition and collaboration coexist across different regions. As AI development becomes more distributed, investment strategies, enterprise innovation and global technological leadership are being reshaped. However, established AI powerhouses still wield significant leverage, driving an intense competitive cycle of rapid innovation. Amid this acceleration, it is critical to distinguish true technological breakthroughs from over-hyped narratives, adopting a measured, data-driven approach that balances innovation with demonstrable business value and robust ethical AI guardrails.

Implications of the Evolving AI Landscape

The democratisation of AI advancements, intensifying competitive pressures, the critical need for efficiency and sustainability, evolving geopolitical dynamics and the global race for skilled talent are all fuelling the development of AI worldwide. These dynamics are paving the way for a global balance of technological leadership.

Democratisation of AI Potential

The ability to develop competitive AI models at lower costs is not only broadening participation but also reshaping how AI is created, deployed and controlled.
Open-source AI fosters innovation by enabling startups, researchers, and enterprises to collaborate and iterate rapidly, leading to diverse applications across industries. For example, xAI has made a significant move in the tech world by open-sourcing its Grok AI chatbot model, potentially accelerating the democratisation of AI and fostering innovation. However, greater accessibility can also introduce challenges, including risks of misuse, uneven governance, and concerns over intellectual property. Additionally, as companies strategically leverage open-source AI to influence market dynamics, questions arise about the evolving balance between open innovation and proprietary control.

Increased Competitive Pressure

The AI industry is fuelled by a relentless drive to stay ahead of the competition, a pressure felt equally by Big Tech and startups. This is accelerating the release of new AI services as companies strive to meet growing consumer demand for intelligent solutions. The risk of market disruption is significant; those who lag face being eclipsed by more agile players. To survive and thrive, differentiation is paramount. Companies are laser-focused on developing unique AI capabilities and applications, creating a marketplace where constant adaptation and strategic innovation are crucial for success.

Resource Optimisation and Sustainability

The trend toward accessible AI necessitates resource optimisation, which means developing models with significantly less computational power, energy consumption and training data. This is not just about cost; it is crucial for sustainability. Training large AI models is energy-intensive; for example, training GPT-3, a 175-billion-parameter model, is believed to have consumed 1,287 MWh of electricity, equivalent to an average American household’s use over 120 years [1]. This drives innovation in model compression, transfer learning and specialised hardware, like NVIDIA’s TensorRT.
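The household comparison is easy to verify. Assuming the commonly cited figure of roughly 10,700 kWh per year for an average US household (the exact baseline the source used is not stated), the arithmetic works out as follows:

```python
# Sanity-check the comparison in the text: GPT-3's reported training energy
# vs. average US household consumption. The ~10,700 kWh/yr household figure
# is a commonly cited approximation, assumed here for illustration.

gpt3_training_mwh = 1287           # reported estimate for training GPT-3
household_kwh_per_year = 10_700    # approximate US average (assumption)

years = gpt3_training_mwh * 1000 / household_kwh_per_year
print(f"{years:.0f} years")        # about 120 years
```

The result lands at roughly 120 household-years, consistent with the figure quoted above.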
Small language models (SLMs) are a key development, offering comparable performance to larger models with drastically reduced resource needs. This makes them ideal for edge devices and resource-constrained environments, furthering both accessibility and sustainability across the AI lifecycle.

Multifaceted Global AI Landscape

The global AI landscape is increasingly defined by regional strengths and priorities. The US, with its strength in cloud infrastructure and software ecosystems, leads in “short-chain innovation”, rapidly translating AI research into commercial products. Meanwhile, China excels in “long-chain innovation”, deeply integrating AI into its extended manufacturing and industrial processes. Europe prioritises ethical, open and collaborative AI, while APAC countries showcase a diversity of approaches. Underlying these regional variations is a shared trajectory for the evolution of AI, increasingly guided by principles of responsible AI encompassing ethics, sustainability and open innovation, although the specific implementations and stages of advancement differ across regions.

The Critical Talent Factor

The evolving AI landscape necessitates a skilled workforce. Demand for professionals with expertise in AI and machine learning, data analysis and related fields is rapidly increasing. This creates a talent gap that businesses must address through upskilling and reskilling initiatives. For example, Microsoft has launched an AI Skills Initiative, including free coursework and a grant program, to help individuals and organisations globally develop generative AI skills.

What does this mean for today’s enterprise?

New Business Horizons

AI is no longer just an efficiency tool; it is a catalyst for entirely new business models. Enterprises that rethink their value propositions through AI-driven specialisation will unlock niche opportunities and reshape industries.
In financial services, for example, AI is fundamentally transforming operations, risk management, customer interactions and product development, leading to new levels of efficiency, personalisation and innovation.

Navigating AI Integration and Adoption

Integrating AI is not just about deployment; it is about ensuring enterprises are structurally prepared. Legacy IT architectures, fragmented data ecosystems and rigid workflows can hinder the full potential of AI. Organisations must invest in cloud scalability, intelligent automation and agile operating models to make AI a seamless extension of their business. Equally critical is ensuring workforce readiness, which involves strategically embedding AI literacy across all organisational functions and proactively reskilling talent to collaborate effectively with intelligent systems.

Embracing Responsible AI

Ethical considerations, data security and privacy are no longer afterthoughts but are becoming key differentiators. Organisations that embed responsible AI principles at the core of their strategy, rather than treating them as compliance checkboxes, will build stronger customer trust and long-term resilience. This requires proactive bias mitigation, explainable AI frameworks, robust data governance and continuous monitoring for potential risks.

Call to Action: Embracing a Balanced Approach

The AI revolution is underway, and it demands a balanced and proactive response. Enterprises must invest in talent and reskilling initiatives to bridge the AI skills gap, modernise their infrastructure to support AI integration and scalability, and embed responsible AI principles at the core of their strategy, ensuring fairness, transparency and accountability. Simultaneously, researchers must continue to push the boundaries of AI’s potential while prioritising energy efficiency and minimising environmental impact; policymakers must create frameworks that foster responsible innovation and sustainable growth.
This necessitates combining innovative research with practical enterprise applications and a steadfast commitment to ethical and sustainable AI principles. The rapid evolution of AI presents both an imperative and an opportunity. The next chapter of AI will be defined by those who harness its potential responsibly while balancing technological progress with real-world impact.

Resources

Sudhir Pai: Executive Vice President and Chief Technology & Innovation Officer, Global Financial Services, Capgemini
Professor Aleks Subic: Vice-Chancellor and Chief Executive, Aston University, Birmingham, UK
Alexeis Garcia Perez: Professor of Digital Business & Society, Aston University, Birmingham, UK
Gareth Wilson: Executive Vice President | Global Banking Industry Lead, Capgemini

[1] https://www.datacenterdynamics.com/en/news/researchers-claim-they-can-cut-ai-training-energy-demands-by-75/?itm_source=Bibblio&itm_campaign=Bibblio-related&itm_medium=Bibblio-article-related

Virtual reality training tool helps nurses learn patient-centered care
University of Delaware computer science students have developed a digital interface as a two-way system that can help nurse trainees build their communication skills and learn to provide patient-centered care across a variety of situations. This virtual reality training tool would enable users to rehearse their bedside manner with expectant mothers before ever encountering a pregnant patient in person.

The digital platform was created by students in Assistant Professor Leila Barmaki’s Human-Computer Interaction Laboratory, including senior Rana Tuncer, a computer science major, and sophomore Gael Lucero-Palacios. Lucero-Palacios said the training helps aspiring nurses practice more difficult and sensitive conversations they might have with patients. “Our tool is targeted to midwifery patients,” Lucero-Palacios said. “Learners can practice these conversations in a safe environment. It’s multilingual, too. We currently offer English or Turkish, and we’re working on a Spanish demo.”

This type of judgment-free rehearsal environment has the potential to remove language barriers to care, because the language capabilities of an avatar can be changed. The idea is that on one interface the “practitioner” could speak in one language, but it would be heard on the other interface in the patient’s native language. The patient avatar can also be customized to resemble different health stages and populations to provide learners a varied experience.

Last December, Tuncer took the project on the road, piloting the virtual reality training program for faculty members in the Department of Midwifery at Ankara University in Ankara, Turkey. With technical support provided by Lucero-Palacios back in the United States, she was able to run a demo with the Ankara team, showcasing the capabilities of the UD-developed system’s interactive rehearsal environment.
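The article does not publish the system's code, but the cross-language flow it describes (practitioner speaks in one language, patient hears another) can be sketched abstractly. Everything below is hypothetical: the function and class names are invented for illustration, and a real system would plug in actual speech-to-text, machine-translation, and text-to-speech services.

```python
# Hypothetical sketch of the cross-language rehearsal flow described in the
# article: practitioner utterance -> translation -> patient-side interface.
# All names here are illustrative stubs, not the UD system's actual API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class InterfaceConfig:
    language: str  # e.g. "en" for the practitioner, "tr" for the patient

def relay_utterance(
    text: str,                               # already-transcribed speech
    translate: Callable[[str, str, str], str],
    speaker: InterfaceConfig,
    listener: InterfaceConfig,
) -> str:
    """Return the text the listener's interface should voice aloud."""
    if speaker.language == listener.language:
        return text
    return translate(text, speaker.language, listener.language)

# Toy "translator" for demonstration only; a real system would call an
# actual machine-translation service here.
def fake_translate(text: str, src: str, dst: str) -> str:
    return f"[{src}->{dst}] {text}"

out = relay_utterance(
    "How are you feeling today?",
    fake_translate,
    speaker=InterfaceConfig("en"),
    listener=InterfaceConfig("tr"),
)
print(out)  # [en->tr] How are you feeling today?
```

The point of the sketch is the separation of concerns: the rehearsal logic stays the same while the language pair on either interface is just configuration, which is what makes swapping in a new language (like the Spanish demo mentioned above) tractable.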
Last winter, University of Delaware senior Rana Tuncer (left), a computer science major, piloted the virtual reality training program for Neslihan Yilmaz Sezer (right), associate professor in the Department of Midwifery, Ankara University in Ankara, Turkey.

Meanwhile, for Tuncer, Lucero-Palacios and the other students involved in the Human-Computer Interaction Laboratory, developing the VR training tool offered the opportunity to enhance their computer science, data science and artificial intelligence skills outside the classroom. “There were lots of interesting hurdles to overcome, like figuring out a lip-sync tool to match the words to the avatar’s mouth movements and figuring out server connections and how to get the languages to switch and translate properly,” Tuncer said. Lucero-Palacios was fascinated with developing text-to-speech capabilities and the ability to use technology to impact patient care. “If a nurse is well-equipped to answer difficult questions, then that helps the patient,” said Lucero-Palacios.

The project is an ongoing research effort in the Barmaki lab that has involved many students. Significant developments occurred during the summer of 2024, when undergraduate researchers Tuncer and Lucero-Palacios contributed to the project through funding support from the National Science Foundation (NSF). However, work began before and continued well beyond that summer, involving many students over time. UD senior Gavin Caulfield provided foundational support for developing the program’s virtual environment and contributed to development of the text-to-speech/speech-to-text capabilities. CIS doctoral students Fahim Abrar and Behdokht Kiafar, along with Pinar Kullu, a postdoctoral fellow in the lab, used multimodal data collection and analytics to quantify the participant experience.

“Interestingly, we found that participants showed more positive emotions in response to patient vulnerabilities and concerns,” said Kiafar.
The work builds on previous research that Barmaki, an assistant professor of computer and information sciences and resident faculty member in the Data Science Institute, completed with colleagues at the New Jersey Institute of Technology and the University of Central Florida in an NSF-funded project focused on empathy training for healthcare professionals using a virtual elderly patient. In that project, Barmaki employed machine learning tools to analyze a nursing trainee’s body language, gaze, and verbal and nonverbal interactions to capture micro-expressions (facial expressions) and the presence or absence of empathy. “There is a huge gap in communication when it comes to caregivers working in geriatric care and maternal-fetal medicine,” said Barmaki. “Both disciplines have high turnover and challenges with lack of caregiver attention to delicate situations.” UD senior Rana Tuncer (center) met with faculty members Neslihan Yilmaz Sezer (left) and Menekse Nazli Aker (right) of Ankara University in Ankara, Turkey, to educate them about the virtual reality training tool she and her student colleagues have developed to enhance patient-centered care skills for health care professionals. When these human-human interactions go wrong, for whatever reason, the consequences can extend beyond a single patient visit. For instance, a pregnant woman who has a negative health care experience might decide not to continue routine pregnancy care. Beyond the project’s potential to improve health care professionals’ field readiness, Barmaki was keen to note the benefits of real-world workforce development for her students. “Perceptions still exist that computer scientists work in isolation with their computers and rarely interact, but this is not true,” Barmaki said, pointing to the multi-faceted team involved in this project. “Teamwork is very important.
We have a nice culture in our lab where people feel comfortable asking their peers or more established students for help.” Barmaki also pointed to the potential application of these types of training environments, enabled by virtual reality, artificial intelligence and natural language processing, beyond health care. With the framework in place, she said, the idea could be adapted for other types of training involving human-human interaction, say in education, cybersecurity, or even emerging technology such as artificial intelligence (AI). Keeping people at the center of any design or application of this work is critical, particularly as uses for AI continue to expand. “As data scientists, we see things as spreadsheets and numbers in our work, but it’s important to remember that the data is coming from humans,” Barmaki said. While this project leverages computer vision and AI as a teaching tool for nursing assistants, Barmaki explained that this type of system can also be used to train AI itself and to enable more responsible technologies down the road. She gave the example of using AI to study empathic interactions between humans and to recognize empathy. “This is the most important area where I’m trying to close the loop, in terms of responsible AI or more empathy-enabled AI,” Barmaki said. “There is a whole area of research exploring ways to make AI more natural, but we can’t work in a vacuum; we must consider the human interactions to design a good AI system.” Asked whether she has concerns about the future of artificial intelligence, Barmaki was positive. “I believe AI holds great promise for the future, and, right now, its benefits outweigh the risks,” she said.

NASA Grant Funds Research Exploring Methods of Training Vision-Based Autonomous Systems
Conducting research at 5:30 a.m. may not be everybody’s first choice. But for Siddhartha Bhattacharyya and Ph.D. students Mohammed Abdul Hafeez Khan and Parth Ganeriwala, it’s an essential part of the process for their latest endeavor. Bhattacharyya and his students are developing a more efficient framework for creating and evaluating image-based machine learning classification models for autonomous systems, such as those guiding cars and aircraft. That process involves creating new datasets of taxiway and runway images for vision-based autonomous aircraft. Just as humans need textbooks to fuel their learning, some machines are taught using thousands of photographs and images of the environment where their autonomous pupil will eventually operate. To help ensure their trained models can identify the correct course to take in a hyper-specific environment – with indicators such as centerline markings and side stripes on a runway at dawn – Bhattacharyya and his Ph.D. students chose a December morning to rise with the sun, board one of Florida Tech’s Piper Archer aircraft and photograph the views from above. Bhattacharyya, an associate professor of computer science and software engineering, is exploring the operational boundaries of efficient and effective machine-learning approaches for vision-based classification in autonomous systems. In this case, the machine learning systems are trained on video or image data collected from environments including runways, taxiways and roadways. With this kind of model, it can take more than 100,000 images to help the algorithm learn and adapt to an environment, and today’s technology demands pronounced human effort to manually label and classify each image. This can be an overwhelming process.
To combat that, Bhattacharyya was awarded funding from NASA Langley Research Center to advance existing machine learning/computer vision-based systems, such as his lab’s “Advanced Line Identification and Notation Algorithm” (ALINA), by exploring automated labeling that would enable the model to learn and classify data itself – with humans intervening only as necessary. This measure would ease the overwhelming human demand, he said. ALINA is an annotation framework that Hafeez and Parth developed under Bhattacharyya’s guidance to detect and label data for algorithms, such as taxiway line markings for autonomous aircraft. Bhattacharyya will use NASA’s funding to explore transfer learning-based approaches, led by Parth, and few-shot learning (FSL) approaches, led by Hafeez. The researchers are collecting images of runways and taxiways via GoPro at airports in Melbourne and Grant-Valkaria with help from Florida Tech’s College of Aeronautics. Bhattacharyya’s students will take the data they collect from the airports and train their models to, in theory, drive an aircraft autonomously. They are working to collect diverse images of the runways – captured from different angles and in varying weather and lighting conditions – so that the models learn to identify the patterns that determine the most accurate course regardless of environment or conditions. That includes the daybreak images captured on that December flight. “We went at sunrise, where there is glare on the camera. Now we need to see if it’s able to identify the lines at night because that’s when there are lights embedded on the taxiways,” Bhattacharyya said. “We want to collect diverse datasets and see what methods work, what methods fail and what else do we need to do to build that reliable software.” Transfer learning is a machine learning technique in which a model trained to do one task can generalize information and reuse it to complete another task.
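The transfer-learning idea can be illustrated with a deliberately tiny sketch: a "pretrained" feature extractor is kept frozen while only a small new head is trained on the second task. Everything below is a toy assumption – the backbone is a hand-written brightness/contrast function standing in for a deep network, and the data are made-up pixel lists – but the division of labor (frozen shared features, lightweight task-specific head) is the general pattern, not the researchers' specific implementation.

```python
def backbone(pixels):
    # Frozen "pretrained" feature extractor reused across tasks:
    # brightness, contrast, plus a constant bias feature.
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return [mean / 255.0, spread / 255.0, 1.0]

def train_head(examples, epochs=200, lr=0.5):
    """Train only a linear head (perceptron) on the new task's
    labeled examples; the backbone's weights never change."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for pixels, label in examples:
            feats = backbone(pixels)
            pred = 1 if sum(wi * f for wi, f in zip(w, feats)) > 0 else 0
            err = label - pred  # perceptron update rule
            w = [wi + lr * err * f for wi, f in zip(w, feats)]
    return w

def classify(w, pixels):
    return 1 if sum(wi * f for wi, f in zip(w, backbone(pixels))) > 0 else 0

# New task: bright, high-contrast line markings (1) vs. dark tarmac (0).
task = [([220, 240, 90, 230], 1), ([30, 35, 28, 40], 0),
        ([210, 250, 80, 225], 1), ([25, 45, 20, 38], 0)]
w = train_head(task)
print([classify(w, p) for p, _ in task])  # [1, 0, 1, 0]
```

The few-shot variant shrinks the head even further: class centroids are computed from only four or five embedded examples, and a query image is assigned to the nearest centroid.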
For example, a model trained to drive autonomous cars could transfer its intelligence to drive autonomous aircraft. This transfer helps explore the generalization of knowledge. It also improves efficiency by eliminating the need to build new models from scratch for different but related tasks. A car trained to operate autonomously in California, for instance, could retain generalized knowledge when learning how to drive in Florida, despite the different landscapes. “This model already knows lines and lanes, and we are going to train it on certain other types of lines hoping it generalizes and keeps the previous knowledge,” Bhattacharyya explained. “That model could do both tasks, as humans do.” FSL is a technique that teaches a model to generalize information from just a few data samples instead of the massive datasets used in transfer learning. With this type of training, a model should be able to identify an environment based on just four or five images. “That would help us reduce the time and cost of data collection as well as time spent labeling the data that we typically go through for several thousands of datasets,” Bhattacharyya said. Learning when results may or may not be reliable is a key part of this research. Bhattacharyya said identifying degradation in the autonomous system’s performance will help guide the development of online monitors that can catch errors and alert human operators to take corrective action. Ultimately, he hopes this research can help create a future where we utilize the benefits of machine learning without fear of it failing before notifying the operator, driver or user. “That’s the end goal,” Bhattacharyya said. “It motivates me to learn how the context relates to assumptions associated with these images, which helps in understanding when the autonomous system is not confident in its decision, thus sending an alert to the user.
This could apply to a future generation of autonomous systems where we don’t need to fear the unknown – when the system could fail.” Siddhartha (Sid) Bhattacharyya’s primary areas of research expertise and interest are model-based engineering, formal methods, machine learning engineering, and explainable AI applied to intelligent autonomous systems, cybersecurity, human factors, healthcare, and avionics. His research lab, ASSIST (Assured Safety, Security, and Intent with Systematic Tactics), focuses on designing innovative formal methods to assure the performance of intelligent systems, machine learning engineering to characterize intelligent systems for safety, and model-based engineering to analyze system behavior. Siddhartha Bhattacharyya is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology, at adam@fit.edu to arrange an interview today.

For autonomous machines to flourish, scalability is everything
The past decade has seen remarkable advancements in robotics and AI technologies, ushering in the era of autonomous machines. While the rise of these machines promises to revolutionize our economy, the reality has fallen short of expectations. That’s not for lack of intensive investment in research and development, says Yuhao Zhu, an associate professor of computer science at the University of Rochester. The reason we’re not seeing more service robots, autonomous drones, and self-driving vehicles, Zhu says, is that autonomy development currently scales with the size of engineering teams rather than with the amount of relevant data and computational resources. This limitation prevents the autonomy industry from fully leveraging economies of scale, Zhu says, particularly the exponentially decreasing cost of computing power and the explosion of available data. Zhu recently co-authored a report on the quest for economies of scale in autonomy in Communications of the ACM and is part of an international team of computer scientists focused on making autonomous machines more reliable and less costly. He can be reached by email at yzhu@rochester.edu.

Managing cyber risk is no longer just a technical necessity but also a strategic imperative in global business. As companies become more interconnected and reliant on artificial intelligence (AI), the Internet of Things, and the rest of the digital ecosystem, they are exposed to greater opportunities and risks. In this video, Senior Managing Director and cybersecurity expert Denis Calderone discusses topics covered in the 2025 J.S. Held Global Risk Report focused on managing cyber risk in the year ahead. The global regulatory landscape is evolving rapidly in response to the increasing severity of cyber threats. Governments and regulatory bodies, including the U.S. Securities and Exchange Commission (SEC), the European Union (EU), and the U.S. Transportation Security Administration (TSA), have introduced cybersecurity mandates that require businesses to strengthen their defenses, improve incident reporting, and ensure compliance with new industry standards. The 2025 Global Risk Report by J.S. Held provides perspectives on these regulatory shifts, helping businesses navigate the complexities of cyber risk and compliance. The growing frequency and severity of cyberattacks are reshaping how businesses approach risk management. The J.S. Held 2025 Global Risk Report explores key issues facing businesses today, including:
- Business Interruption from Cyber Incidents: High-profile cases like Change Healthcare’s 2024 breach demonstrate how cyberattacks can halt operations, lead to regulatory scrutiny, and result in massive financial losses.
- Reputational and Legal Fallout: Cyber incidents can trigger lawsuits and damage a company’s reputation, often leading to prolonged trust-recovery periods with customers and investors.
- Loss of Sensitive Data: Data breaches can expose critical information, including personal, financial, and proprietary data, amplifying the risks of identity theft and fraud.
- Tightening Regulatory Landscape: New cybersecurity laws, such as the EU’s NIS2 Directive and Cyber Resilience Act, alongside the U.S. SEC’s disclosure rules, demand stricter compliance from businesses in key sectors.
- Complexities in Cyber Insurance: Many companies lack clarity on whether their policies cover ransomware or meet legal and operational needs, leaving them exposed to potential financial risks.
- Ransomware Dilemmas and Legal Risks: Paying a ransom may violate international sanctions, creating additional legal complications for organizations already dealing with cyberattacks.
- Proactive Cybersecurity Enhancements: Companies implementing advanced cybersecurity measures like multifactor authentication (MFA), endpoint detection and response (EDR), and immutable backup systems improve their defenses and reduce the risk of disruption.
- AI-Powered Threat Detection: Artificial intelligence enables companies to identify fraud and cyberattacks faster by analyzing patterns and anomalies in real time, minimizing damage and reducing costs.
- Increased Demand for Cyber Insurance: As companies across industries seek better coverage, insurers have opportunities to innovate new products, though exclusionary clauses are becoming more common.
- Business Continuity and Resilience: Organizations with strong cyber hygiene, incident response plans, and dependency mapping are better prepared for attacks and may benefit from reduced insurance premiums.
Cybersecurity risk is just one of the five key areas analyzed in the J.S. Held 2025 Global Risk Report. Other topics include sustainability, supply chain, cryptocurrency and digital assets, and AI and data regulations. If you have any questions or would like to further discuss the risks and opportunities outlined in the report, email GlobalRiskReport@jsheld.com. To connect with Denis Calderone, simply click on his icon now. For any other media inquiries, contact: Kristi L. Stathis, J.S. Held, +1 786 833 4864, Kristi.Stathis@JSHeld.com

J.S. Held Experts Examine Crypto’s Pitfalls and Potential
The global cryptocurrency market has surged to a staggering USD 3.4 trillion. However, alongside this rapid expansion, significant challenges and risks continue to emerge. The J.S. Held 2025 Global Risk Report examines the evolving landscape of crypto and digital assets, highlighting both the potential and the pitfalls of this dynamic sector. The explosion of cryptocurrency adoption across industries – from gaming to decentralized finance (DeFi) – has led to increased regulatory scrutiny and security concerns. With the number of users in the market expected to exceed 107.3 million by 2025, every sector is looking at what crypto and blockchain technology can do to transform its business. Even the gaming industry has entered the crypto space, with bridging services offering “Play-to-Earn” (P2E) games. While anonymity remains a key factor in both the risk and the success of cryptocurrency, “Know Your Customer” requirements on centralized platforms remain in place but continue to evolve, since not all anonymity is malicious. Despite regulatory, environmental, geopolitical, and other business risks, the J.S. Held 2025 Global Risk Report reveals how the crypto industry continues to evolve, offering new opportunities for businesses and investors around:
- Enhanced Transparency & Security
- Regulatory Clarity
- Education & Compliance
- Digital Identity Solutions
“With regulatory frameworks tightening globally – from the European Union’s Markets in Crypto-Assets (MiCA) law to China’s outright ban – the future of crypto remains at a critical inflection point,” observes J.P. Brennan, Global Head of Fintech, Payments, Crypto Compliance and Investigations at J.S. Held. “As the industry matures, the balance between risk mitigation and innovation will shape the next phase of digital asset adoption,” Brennan adds. J.P. Brennan examines the crypto risks and opportunities outlined in the 2025 J.S.
Held Global Risk Report in this video: Cryptocurrency and digital asset risk is just one of the five key areas analyzed in the J.S. Held 2025 Global Risk Report. Other topics include sustainability, supply chain, artificial intelligence (AI) and data regulations, and managing cyber risk. If you have any questions or would like to further discuss the risks and opportunities outlined in the report, please email GlobalRiskReport@jsheld.com. To connect with J.P. Brennan, simply click on his icon now. For any other media inquiries, contact: Kristi L. Stathis, J.S. Held, +1 786 833 4864, Kristi.Stathis@JSHeld.com






