
First AI-powered Smart Care Home system to improve quality of residential care
- Partnership between Lee Mount Healthcare and Aston University will develop and integrate a bespoke AI system into a care home setting to elevate the quality of care for residents
- By automating administrative tasks and monitoring health metrics in real time, the smart system will support decision-making and empower care workers to focus more on people
- The project will position Lee Mount Healthcare as a pioneer of AI in the care sector, opening the door for more care homes to embrace technology.

Aston University is partnering with dementia care provider Lee Mount Healthcare to create the first ‘Smart Care Home’ system incorporating artificial intelligence. The project will use machine learning to develop an intelligent system that can automate routine tasks and compliance reporting. It will also draw on multiple sources of resident data – including health metrics, care needs and personal preferences – to inform high-quality care decisions, create individualised care plans and provide easy access to updates for residents’ next of kin.

There are nearly 17,000 care homes in the UK looking after just under half a million residents, and these numbers are expected to rise in the next two decades. Over half of social care providers still rely on manual, paper-based approaches to care management, offering a significant opportunity to harness the benefits of AI to enhance efficiency and care quality. The Smart Care Home system will allow better care to be provided at lower cost, freeing staff from administrative tasks so they can spend more time with residents.

Manjinder Boo Dhiman, director of Lee Mount Healthcare, said: “As a company, we’ve always focused on innovation and breaking barriers, and this KTP builds on many years of progress towards digitisation.
We hope by taking the next step into AI, we’ll also help to improve the image of the care sector and overcome stereotypes, to show that we are forward thinking and can attract the best talent.”

Dr Roberto Alamino, lecturer in Applied AI & Robotics in the School of Computer Science and Digital Technologies at Aston University, said: “The challenges of this KTP are both technical and human in nature. For practical applications of machine learning, it’s important to establish a common language between us as researchers and the users of the technology we are developing. We need to fully understand the problems they face so we can find feasible, practical solutions.”

For specialist AI expertise to develop the smart system, LMH is partnering with the Aston Centre for Artificial Intelligence Research and Application (ACAIRA) at Aston University, of which Dr Alamino is a member. ACAIRA is recognised internationally for high-quality research and teaching in computer science and artificial intelligence (AI) and is part of the College of Engineering and Physical Sciences. The Centre’s aim is to develop AI-based solutions to address critical social, health and environmental challenges, delivering transformational change with industry partners at regional, national and international levels.

The project is a Knowledge Transfer Partnership (KTP). Funded by Innovate UK, KTPs are collaborations between a business, a university and a highly qualified research associate. The UK-wide programme helps businesses to improve their competitiveness and productivity through the better use of knowledge, technology and skills. Aston University is a sector-leading KTP provider, ranked first for project quality and joint first for the volume of active projects. For more information on the KTP, visit the webpage.

Artificial Intelligence Makes Energy Demand More Complex — And More Achievable
In a 2024 paper, researchers from Carnegie Mellon University and machine learning company Hugging Face found that generative AI systems could use as much as 33 times more energy to complete a task than task-specific software would.

“The climate and sustainability challenge can be overwhelming in the amount of new clean technology that we have to deploy and develop, and the ways that the energy system has to evolve,” said Costa Samaras, head of the university-wide Wilton E. Scott Institute for Energy Innovation. “The scale of the challenge alone can be overwhelming to folks.”

However, Carnegie Mellon University’s standing commitment to the United Nations’ Sustainable Development Goals and its position as a nationally recognized leader in technologies like artificial intelligence mean that it is uniquely positioned to address growing concerns around energy demand, climate resilience and social good.

Why Your Experts Might Not Show Up in Google AI Overviews — And How to Fix It
The way we find expert information online is changing fast. With the rise of Google’s AI-generated overviews (formerly called Search Generative Experience), the top spot on the search page no longer goes to the highest-ranking blue link. Instead, AI now summarizes answers using a blend of machine learning, structured data, and trust signals—pulling directly from a variety of select sources across the web.

If institutions—whether academic, healthcare, corporate or others—aren't aligning their expert content with these new rules of discovery, their experts may be left out of the conversation altogether: fewer media stories, fewer invitations to speak at events, and fewer business and collaboration opportunities. This is the moment to double down on structured data and transparent authorship—because AI-first search rewards expert clarity, not just content volume. What follows is a quick breakdown of how AI search, Google’s EEAT principles, and Schema.org structured data work together—and what you can do to ensure your expert content, and your experts, get surfaced, cited, and trusted.

What Is EEAT and Why It Matters in AI Search

EEAT stands for Experience, Expertise, Authoritativeness, and Trustworthiness—the core framework Google uses to evaluate whether content is reliable and deserves to rank, especially in high-stakes areas like health, education, and finance. In AI-powered summaries, Google doesn’t just look at keywords—it looks for:
- Real people with demonstrable credentials
- Clear affiliations with reputable institutions
- Consistent authorship and transparency
- Trust signals like citations, bios, and professional history

EEAT in Action: Why Schema Markup Is Your AI SEO Power Tool

EEAT signals work best when they’re machine-readable—that’s where Schema.org structured data comes in. It acts as a translator between your content and Google’s AI.
Schema tags are pieces of structured data that help search engines understand the content and context of your web pages. They translate human-readable information—like author names, job titles, and article types—into machine-readable signals that boost visibility in AI overviews and search results. Implementing Schema helps ensure your expert content is eligible for inclusion in AI overviews. Key schema types include:
- Person – for expert bios
- ScholarlyArticle, Article, FAQPage – for authored content
- Organization, MedicalOrganization, EducationalOrganization – to establish credibility
- sameAs – to reinforce expertise by connecting external profiles (LinkedIn, ORCID, Google Scholar)

Schema in Action: AI Overviews Favor Structured, Credible Expert Content

Google’s AI overviews are designed to synthesize trustworthy sources—not just surface-level blog posts or SEO-churned pages. That means expert content that is:
- Authored by named individuals with clear credentials
- Structured for readability and machine parsing
- Linked to institutional authority and trust domains

If your experts don’t meet these criteria—or if Google’s crawlers can’t understand the relationships between person, organization, and content—your insights may never reach the surface of the AI summary box.

How ExpertFile Optimizes for AI-Driven Search

AI search is no longer just about keywords—it’s about credibility, structure, and clarity. Institutions that invest in properly structured expert content will not only rank better—they’ll become the source quoted in the next generation of search. ExpertFile is purpose-built to maximize visibility and trust in this new era of AI search. Here’s how:
- Structured Expert Profiles: Every expert has a dedicated page with rich Person schema, bios, credentials, affiliations, and publication history.
- Schema-Tagged Content: Articles, media spotlights, and FAQs are marked up using Schema.org types like ScholarlyArticle, FAQPage, and Article.
- Institutional Credibility: Profiles are embedded within .edu, .org, or corporate domains—reinforcing trust with Google’s algorithms.
- Cross-Linked Authority: Integration with Google Scholar, LinkedIn, and ORCID ensures a 360° trust profile across the web.
- Mobile-Ready & Indexed: ExpertFile content is fully indexable and distributed across web and mobile platforms—supporting discoverability everywhere AI pulls from.

With ExpertFile, your experts are not just listed—they’re positioned, structured, and ready for the AI spotlight. Learn more about how ExpertFile helps organizations benefit in the new era of AI.
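In practice, Person and sameAs markup is published as a JSON-LD snippet embedded in the expert's profile page. Here is a minimal sketch of what that might look like (every name, title, and URL below is hypothetical, not taken from any real profile), generated with Python's standard json module:

```python
import json

# Hypothetical expert profile; all names and URLs are illustrative placeholders.
person_jsonld = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Dr. Jane Doe",
    "jobTitle": "Professor of Epidemiology",
    "affiliation": {
        "@type": "EducationalOrganization",
        "name": "Example University",
        "url": "https://www.example.edu",
    },
    # sameAs links connect the profile to external trust signals.
    "sameAs": [
        "https://www.linkedin.com/in/janedoe",
        "https://orcid.org/0000-0000-0000-0000",
        "https://scholar.google.com/citations?user=EXAMPLE",
    ],
}

# The resulting JSON would be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(person_jsonld, indent=2))
```

The sameAs array is what lets a crawler reconcile the page with the same person's LinkedIn, ORCID, and Google Scholar identities, reinforcing the cross-linked authority described above.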

Hiring More Nurses Generates Revenue for Hospitals
Underfunding is driving an acute shortage of trained nurses in hospitals and care facilities in the United States. It is the worst such shortage in more than four decades. One estimate from the American Hospital Association puts the deficit north of one million. Meanwhile, a recent survey by recruitment specialist AMN Healthcare suggests that 900,000 more nurses will drop out of the workforce by 2027. American nurses are quitting in droves, thanks to low pay and burnout as understaffing increases individual workload.

This is bad news for patient outcomes. Nurses are estimated to have eight times more routine contact with patients than physicians. They shoulder the bulk of all responsibility in terms of diagnostic data collection, treatment plans, and clinical reporting. As a result, understaffing is linked to a slew of serious problems, among them increased wait times for patients in care, post-operative infections, readmission rates, and patient mortality—all of which are on the rise across the U.S.

Tackling this crisis is challenging because of how nursing services are reimbursed. Most hospitals operate a payment system where services are paid for separately. Physician services are billed as separate line items, making them a revenue generator for the hospitals that employ them. But under Medicare, nursing services are charged as part of a fixed room and board fee, meaning that hospitals charge the same fee regardless of how many nurses are employed in the patient’s care. In this model, nurses end up on the other side of hospitals’ balance sheets: a labor expense rather than a source of income. For beleaguered administrators looking to sustain quality of care while minimizing costs (and maximizing profits), hiring and retaining nursing staff has arguably become something of a zero-sum game in the U.S.

The Hidden Costs of Nurse Understaffing

But might the balance sheet in fact be skewed in some way?
Could there be potential financial losses attached to nurse understaffing that administrators should factor into their hiring and remuneration decisions? Research by Goizueta Professors Diwas KC and Donald Lee, as well as recent Goizueta PhD graduates Hao Ding 24PhD (Auburn University) and Sokol Tushe 23PhD (Muma College of Business), suggests there are. Their new peer-reviewed publication* finds that increasing a single nurse’s workload by just one patient creates a 17% service slowdown for all other patients under that nurse’s care. Looking at the data another way, having one additional nurse on duty during the busiest shift (typically between 7am and 7pm) speeds up emergency department work and frees up capacity to treat more patients, such that hospitals could be looking at a major increase in revenue. The researchers calculate that this productivity gain could equate to a net increase of $470,000 per 10,000 patient visits—and savings to the tune of $160,000 in lost earnings for the same number of patients as wait times are reduced.

“A lot of the debate around nursing in the U.S. has focused on the loss of quality in care, which is hugely important,” says Diwas KC, Goizueta Foundation Term Professor of Information Systems & Operations Management. “But looking at the crisis through a productivity lens means we’re also able to understand the very real economic value that nurses bring too: the revenue increases that come with capacity gains.”

“Our findings challenge the predominant thinking around nursing as a cost,” adds Lee. “What we see is that investing in nursing staff more than pays for itself in downstream financial benefits for hospitals. It is effectively a win-win-win for patients, nurses, and healthcare providers.”

Nurse Load: the Biggest Impact on Productivity

To get to these findings, the researchers analyzed a high-resolution dataset on patient flow through a large U.S. teaching hospital.
They looked at the real-time workloads of physicians and nurses working in the emergency department between April 2018 and March 2019, factoring in variables such as patient demographics and severity of complaint or illness. Tracking patients from admission to triage and on to treatment, the researchers were able to tease out the impact that the number of nurses and physicians on duty had on patient throughput. Using a novel machine learning technique developed at Goizueta by Lee, they were able to identify the effect of increasing or reducing the workforce.

The contrast between physicians and nursing staff is stark, says Tushe. “When you have fewer nurses on duty, capacity and patient throughput drop by an order of magnitude—far, far more than when reducing the number of doctors. Our results show that for every additional patient the nurse is responsible for, service speed falls by 17%. That compares to just 1.4% if you add one patient to the workload of an attending physician. In other words, nurses’ impact on productivity in the emergency department is more than eight times greater.”

Boosting Revenue Through Reduced Wait Times

Adding an additional nurse to the workforce, on the other hand, increases capacity appreciably. And as more patients are treated faster, hospitals can expect a concomitant uptick in revenue, says KC. “It’s well documented that cutting down wait time equates to more patients treated and more income. Previous research shows that reducing service time by 15 minutes per 30,000 patient visits translates to $1.4 million in extra revenue for a hospital. In our study, we calculate that staffing one additional nurse in the 7am to 7pm emergency department shift reduces wait time by 23 minutes, so hospitals could be looking at an increase of $2.33 million per year.”

This far eclipses the costs associated with hiring one additional nurse, says Lee. “According to 2022 U.S. Bureau of Labor Statistics data, the average nursing salary in the U.S.
is $83,000. Fringe benefits account for an additional 50% of the base salary. The total cost of adding one nurse during the 7am to 7pm shift is $310,000 (for 2.5 full-time employees). When you do the math, it is clear. The net gain for the hospital in our study is $2 million, or $470,000 per 10,000 patient visits.”

Incontrovertible Benefits to Hiring More Nurses

These findings should provide compelling food for thought both to healthcare administrators and U.S. policymakers. For too long, the latter have fixated on the upstream costs, without exploring the downstream benefits of nursing services, say the researchers. Their study, the first to quantify the economic value of nurses in the U.S., asks “better questions,” argues Tushe, exploiting newly available data and analytics to reveal incontrovertible financial benefits that attach to hiring—and compensating—more nurses in American hospitals.

“We know that a lot of nurses are leaving the profession not just because of cuts and burnout, but also because of lower pay. We would say to administrators struggling to hire talented nurses to review current wage offers, because our analysis suggests that the economic surplus from hiring more nurses could be readily applied to retention pay rises also,” says Sokol Tushe 23PhD, Muma College of Business.

The Case for Mandated Ratios

For state-level decision makers, Lee has additional words of advice. “In 2004, California mandated minimum nurse-to-patient ratios in hospitals. Since then, six more states have added some form of minimum ratio requirement. The evidence is that this has been beneficial to patient outcomes and nurse job satisfaction. Our research now adds an economic dimension to the list of benefits as well. Ipso facto, policymakers ought to consider wider adoption of minimum nurse-to-patient ratios.” However decision makers go about tackling the shortage of nurses in the U.S., they should do so quickly, says KC.
“This is a healthcare crisis that is only set to become more acute in the near future. As our demographics shift and our population ages, demand for quality care will increase. So too must the supply of care capacity. But what we are seeing is the nursing staffing situation in the U.S. moving in the opposite direction. All of this is manifesting in the emergency department. That’s where wait times are getting longer, mistakes are being made, and overworked nurses are quitting. It is creating a vicious cycle that needs to be broken.”

Diwas KC is a professor of information systems & operations management and Donald Lee is an associate professor of information systems & operations management. Both experts are available to speak about this important topic - simply click on either icon now to arrange an interview today.
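Lee's cost calculation can be checked with quick back-of-envelope arithmetic. The salary, fringe-benefit rate, staffing level, and revenue figures below are the ones quoted in the article; the script simply recombines them:

```python
# Figures as quoted in the study discussion above.
base_salary = 83_000        # average U.S. nursing salary, 2022 BLS data
fringe_rate = 0.50          # fringe benefits as a share of base salary
fte_needed = 2.5            # full-time employees to cover one 7am-7pm slot year-round

cost_per_fte = base_salary * (1 + fringe_rate)    # $124,500 per nurse
total_staffing_cost = cost_per_fte * fte_needed   # $311,250, i.e. roughly $310,000

extra_revenue = 2_330_000   # annual revenue gain from the 23-minute wait reduction
net_gain = extra_revenue - total_staffing_cost    # roughly $2 million

print(f"Cost of one added nurse slot: ${total_staffing_cost:,.0f}")
print(f"Net annual gain: ${net_gain:,.0f}")
```

The result reproduces the quoted numbers: about $310,000 in staffing cost against $2.33 million in added revenue, for a net gain of roughly $2 million.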

NASA Asks Researchers to Help Define Trustworthiness in Autonomous Systems
A Florida Tech-led group of researchers was selected to help NASA solve challenges in aviation through its prestigious University Leadership Initiative (ULI) program. Over the next three years, associate professor of computer science and software engineering Siddhartha Bhattacharyya and professor of aviation human factors Meredith Carroll will work to understand the vital role of trust in autonomy. Their project, “Trustworthy Resilient Autonomous Agents for Safe City Transportation in the Evolving New Decade” (TRANSCEND), aims to establish a common framework for engineers and human operators to determine the trustworthiness of machine-learning-enabled autonomous aviation safety systems.

Autonomous systems are those that can perform independent tasks without requiring human control. The autonomy of these systems is expected to be enhanced with intelligence gained from machine learning. As a result, intelligence-based software is expected to be increasingly used in airplanes and drones. It may also be utilized in airports and to manage air traffic in the future. Learning-enabled autonomous technology can also act as contingency management when used in safety applications, proactively addressing potential disruptions and unexpected aviation events.

TRANSCEND was one of three projects chosen for the latest ULI awards. The others hail from Embry-Riddle Aeronautical University in Daytona Beach – researching continuously updating, self-diagnostic vehicle health management to enhance the safety and reliability of Advanced Air Mobility vehicles – and the University of Colorado Boulder – investigating tools for understanding and leveraging the complex communications environment of collaborative, autonomous airspace systems.

Florida Tech’s team includes nine faculty members from five universities: Penn State, North Carolina A&T State University, the University of Florida, Stanford University and Santa Fe College.
It also involves the companies Collins Aerospace in Cedar Rapids, Iowa, and ResilienX of Syracuse, New York. Carroll and Bhattacharyya will also involve students throughout the project.

Human operators are an essential component of aviation technology – they monitor independent software systems and associated data and intervene when those systems fail. They may include flight crew members, air traffic controllers, maintenance personnel or safety staff monitoring overall system safety. A challenge in implementing independent software is that engineers and operators have different interpretations of what makes a system “trustworthy,” Carroll and Bhattacharyya explained. Engineers who develop autonomous software measure trustworthiness by the system’s ability to perform as designed. Human operators, however, trust and rely on systems to perform as they expect – they want to feel comfortable relying on a system to make an aeronautical decision in flight, such as how to avoid a traffic conflict or a weather event. Sometimes, that reliance won’t align with design specifications. Equally important, operators also need to trust that the software will alert them when it needs a human to take over. This may happen if the algorithm driving the software encounters a scenario it wasn’t trained for.

“We are looking at how we can integrate trust from different communities – from human factors, from formal methods, from autonomy, from AI…” Bhattacharyya said.
“How do we convey assumptions for trust, from design time to operation, as the intelligent systems are being deployed, so that we can trust them and know when they’re going to fail, especially those that are learning-enabled, meaning they adapt based on machine learning algorithms?”

With Bhattacharyya leading the engineering side and Carroll leading the human factors side, the research group will begin bridging the trust gap by integrating theories, principles, methods, measures, visualizations, explainability and practices from different domains – this will build the TRANSCEND framework. Then, they’ll test the framework using a diverse range of tools, flight simulators and intelligent decision-making to demonstrate trustworthiness in practice. This and other data will help them develop a safety case toolkit of guidelines for development processes, recommendations and suggested safety measures for engineers to reference when designing “trustworthy,” learning-enabled autonomous systems. Ultimately, Bhattacharyya and Carroll hope their toolkit will lay the groundwork for a future certification process for learning-enabled autonomous systems.

“The goal is to combine all our research capabilities and pull together a unified story that outputs unified products to the industry,” Carroll said. “We want products for the industry to utilize when implementing learning-enabled autonomy for more effective safety management systems.” The researchers also plan to use this toolkit to teach future engineers about the nuances of trust in the products they develop. Once developed, they will hold outreach events, such as lectures and camps, for STEM-minded students in the community.

If you’re interested in connecting with Meredith Carroll or Siddhartha Bhattacharyya, simply click on the expert’s profile or contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology, at adam@fit.edu to arrange an interview today.

Name: Adrian Peter
Title: Associate professor of mathematics and systems engineering and electrical engineering and computer science (joint appointment); director, Center for Advanced Data Analytics and Systems (CADAS)
Department/College: Department of Mathematics and Systems Engineering and Department of Electrical Engineering and Computer Science/College of Engineering and Science
Current research funding: $2.19 million
General research focus: Our Multi-domain, Multi-sensor, Cyber-physical Tactical Exploitation (M2CTE) project addresses a critical need for a robust analytic processing framework capable of supporting autonomous sensing and analytics on the edge – where devices and sensors collect data – with the ability to reach back to the cloud for additional processing. Adrian Peter's research interests are in applying advanced analytics (e.g. machine learning, statistical modeling, optimization and visualization) to solve large-scale computing problems across a variety of domain areas (signal processing, geospatial, environmental, sensor fusion and enterprise intelligence).

Q: What has you excited about your current research?
We have built our entire infrastructure with the immensely talented graduate and undergraduate students at Florida Tech. Their tireless efforts have led to us delivering practical, operational, real-world machine-learning solutions that place us among the global leaders in machine learning at the edge.

Q: Why is it important to conduct research?
The objective of all research is to advance the frontiers of knowledge in a specific discipline. In my research, we are continually pushing the state of the art in distributed sensing and edge analytics. Our results have helped transition conceptual ideas and customer requirements into operational solutions that improve situational awareness at the tactical edge.

Adrian Peter is available to speak with media.
Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology, at adam@fit.edu to arrange an interview today.

AI-powered model predicts post-concussion injury risk in college athletes
Athletes who suffer a concussion have a serious risk of reinjury after returning to play, but identifying which athletes are most vulnerable has always been a bit of a mystery—until now. Using artificial intelligence (AI), University of Delaware researchers have developed a novel machine learning model that predicts an athlete’s risk of lower-extremity musculoskeletal (MSK) injury after concussion with 95% accuracy.

A recent study published in Sports Medicine details the development of the AI model, which builds on previously published research showing that the risk of post-concussion injury doubles, regardless of the sport. The most common post-concussive injuries include sprains, strains, or even broken bones or torn ACLs. “This is due to brain changes we see post-concussion,” said Thomas Buckley, professor of kinesiology and applied physiology at the College of Health Sciences. These brain changes affect athletes’ balance, cognition, and reaction times and can be difficult to detect in standard clinical testing. “Even a minuscule difference in balance, reaction time, or cognitive processing of what’s happening around you can make the difference between getting hurt and not,” Buckley said.

How AI is changing injury risk assessment

Recognizing the need for enhanced injury risk reduction tools, Buckley collaborated with Austin Brockmeier, assistant professor of electrical and computer engineering, and César Claros, a fourth-year doctoral student, both in UD’s College of Engineering; Wei Qian, associate professor of statistics in the College of Agriculture and Natural Resources; and former KAAP postdoctoral fellow Melissa Anderson, who’s now an assistant professor at Ohio University. To assess injury risk, Brockmeier and Claros developed a comprehensive AI model that analyzes more than 100 variables, including sports and medical histories, concussion type, and pre- and post-concussion cognitive data.
“Every athlete is unique, especially across various sports,” said Brockmeier. “Tracking an athlete’s performance over time, rather than relying on absolute values, helps identify disturbances, deviations, or deficits that, when compared to their baseline, may signal an increased risk of injury.” While some sports, such as football, carry higher injury risk, the model revealed that individual factors are just as important as the sport played. “We tested a version of the model that doesn’t have access to the athlete’s sport, and it still accurately predicted injury risk,” Brockmeier said. “This highlights how unique characteristics—not just the inherent risks of a sport—play a critical role in determining the likelihood of future injury.”

The research, which tracked athletes over two years, also found that the risk of MSK injury post-concussion extends well into the athlete’s return to play. “Common sense would suggest that injuries would occur early in an athlete’s return to play, but that’s simply not true,” said Buckley. “Our research shows that the risk of future injury increases over time as athletes compensate and adapt to small deficits they may not even be aware of.” The next step for Buckley’s Concussion Research Lab is to further collaborate with UD Athletics’ strength and conditioning staff to design real-time interventions that could reduce injury risk.

Beyond sports: AI’s potential in aging research

The implications of the UD-developed machine-learning model extend far beyond sports. Brockmeier believes the algorithm could be used to predict fall risk in patients with Parkinson’s disease. Claros is also exploring how the injury risk reduction model can be applied to aging research with the Delaware Center for Cognitive Aging.
“We want to use brain measurements to investigate whether baseline lifestyle measurements such as weight, BMI, and smoking history are predictive of future mild cognitive impairment or Alzheimer’s disease,” said Claros. To arrange an interview with Buckley, email UD's media relations team at MediaRelations@udel.edu

Decoding the Future of AI: From Disruption to Democratisation and Beyond
The global AI landscape has become a melting pot for innovation, with diverse thinking pushing the boundaries of what is possible. Its application extends beyond just technology, reshaping traditional business models and redefining how enterprises, governments, and societies operate. Advancements in model architectures, training techniques and the proliferation of open-source tools are lowering barriers to entry, enabling organisations of all sizes to develop competitive AI solutions with significantly fewer resources. As a result, the long-standing notion that AI leadership is reserved for entities with vast computational and financial resources is being challenged.

This shift is also redrawing the global AI power balance, with a decentralised approach to AI where competition and collaboration coexist across different regions. As AI development becomes more distributed, investment strategies, enterprise innovation and global technological leadership are being reshaped. However, established AI powerhouses still wield significant leverage, driving an intense competitive cycle of rapid innovation. Amid this acceleration, it is critical to distinguish true technological breakthroughs from over-hyped narratives, adopting a measured, data-driven approach that balances innovation with demonstrable business value and robust ethical AI guardrails.

Implications of the Evolving AI Landscape

The democratisation of AI advancements, intensifying competitive pressures, the critical need for efficiency and sustainability, evolving geopolitical dynamics and the global race for skilled talent are all fuelling the development of AI worldwide. These dynamics are paving the way for a more distributed global balance of technological leadership.

Democratisation of AI Potential

The ability to develop competitive AI models at lower costs is not only broadening participation but also reshaping how AI is created, deployed and controlled.
Open-source AI fosters innovation by enabling startups, researchers, and enterprises to collaborate and iterate rapidly, leading to diverse applications across industries. For example, xAI has made a significant move in the tech world by open-sourcing its Grok AI chatbot model, potentially accelerating the democratisation of AI and fostering innovation. However, greater accessibility can also introduce challenges, including risks of misuse, uneven governance, and concerns over intellectual property. Additionally, as companies strategically leverage open-source AI to influence market dynamics, questions arise about the evolving balance between open innovation and proprietary control.

Increased Competitive Pressure

The AI industry is fuelled by a relentless drive to stay ahead of the competition, a pressure felt equally by Big Tech and startups. This is accelerating the release of new AI services, as companies strive to meet growing consumer demand for intelligent solutions. The risk of market disruption is significant; those who lag face being eclipsed by more agile players. To survive and thrive, differentiation is paramount. Companies are laser-focused on developing unique AI capabilities and applications, creating a marketplace where constant adaptation and strategic innovation are crucial for success.

Resource Optimisation and Sustainability

The trend toward accessible AI necessitates resource optimisation, which means developing models with significantly less computational power, energy consumption and training data. This is not just about cost; it is crucial for sustainability. Training large AI models is energy-intensive; for example, training GPT-3, a 175-billion-parameter model, is believed to have consumed 1,287 MWh of electricity, equivalent to an average American household’s use over 120 years. This drives innovation in model compression, transfer learning, and specialised hardware, like NVIDIA’s TensorRT.
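The 120-year household equivalence quoted for GPT-3 can be sanity-checked with rough arithmetic. The household consumption figure below (about 10,700 kWh per year, roughly the published U.S. average) is an assumption on my part; the article states only the 1,287 MWh training figure and the 120-year comparison:

```python
# Rough consistency check of the GPT-3 training-energy comparison.
gpt3_training_mwh = 1_287          # training energy as quoted (MWh)
household_kwh_per_year = 10_700    # assumed average U.S. household use (kWh/year)

# Convert MWh to kWh, then divide by annual household consumption.
years_of_household_use = (gpt3_training_mwh * 1_000) / household_kwh_per_year
print(f"Equivalent household-years: {years_of_household_use:.0f}")  # prints 120
```

The result lands at roughly 120 years, consistent with the figure in the text.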
Small language models (SLMs) are a key development, offering performance comparable to larger models with drastically reduced resource needs. This makes them ideal for edge devices and resource-constrained environments, furthering both accessibility and sustainability across the AI lifecycle.

Multifaceted Global AI Landscape

The global AI landscape is increasingly defined by regional strengths and priorities. The US, with its strength in cloud infrastructure and its software ecosystem, leads in “short-chain innovation”, rapidly translating AI research into commercial products. Meanwhile, China excels in “long-chain innovation”, deeply integrating AI into its extended manufacturing and industrial processes. Europe prioritises ethical, open and collaborative AI, while countries across APAC showcase a diversity of approaches. Underlying these regional variations is a shared trajectory for the evolution of AI, increasingly guided by principles of responsible AI encompassing ethics, sustainability and open innovation, although the specific implementations and stages of advancement differ across regions.

The Critical Talent Factor

The evolving AI landscape necessitates a skilled workforce. Demand for professionals with expertise in AI and machine learning, data analysis and related fields is rapidly increasing. This creates a talent gap that businesses must address through upskilling and reskilling initiatives. For example, Microsoft has launched an AI Skills Initiative, including free coursework and a grant program, to help individuals and organisations globally develop generative AI skills.

What does this mean for today’s enterprise?

New Business Horizons

AI is no longer just an efficiency tool; it is a catalyst for entirely new business models. Enterprises that rethink their value propositions through AI-driven specialisation will unlock niche opportunities and reshape industries.
In financial services, for example, AI is fundamentally transforming operations, risk management, customer interactions and product development, leading to new levels of efficiency, personalisation and innovation.

Navigating AI Integration and Adoption

Integrating AI is not just about deployment; it is about ensuring enterprises are structurally prepared. Legacy IT architectures, fragmented data ecosystems and rigid workflows can hinder the full potential of AI. Organisations must invest in cloud scalability, intelligent automation and agile operating models to make AI a seamless extension of their business. Equally critical is ensuring workforce readiness, which involves strategically embedding AI literacy across all organisational functions and proactively reskilling talent to collaborate effectively with intelligent systems.

Embracing Responsible AI

Ethical considerations, data security and privacy are no longer afterthoughts but are becoming key differentiators. Organisations that embed responsible AI principles at the core of their strategy, rather than treating them as compliance checkboxes, will build stronger customer trust and long-term resilience. This requires proactive bias mitigation, explainable AI frameworks, robust data governance and continuous monitoring for potential risks.

Call to Action: Embracing a Balanced Approach

The AI revolution is underway, and it demands a balanced and proactive response. Enterprises must invest in talent and reskilling initiatives to bridge the AI skills gap, modernise their infrastructure to support AI integration and scalability, and embed responsible AI principles at the core of their strategy, ensuring fairness, transparency and accountability. Simultaneously, researchers must continue to push the boundaries of AI’s potential while prioritising energy efficiency and minimising environmental impact, and policymakers must create frameworks that foster responsible innovation and sustainable growth.
This necessitates combining innovative research with practical enterprise applications and a steadfast commitment to ethical and sustainable AI principles. The rapid evolution of AI presents both an imperative and an opportunity. The next chapter of AI will be defined by those who harness its potential responsibly while balancing technological progress with real-world impact.

Resources

Sudhir Pai: Executive Vice President and Chief Technology & Innovation Officer, Global Financial Services, Capgemini
Professor Aleks Subic: Vice-Chancellor and Chief Executive, Aston University, Birmingham, UK
Alexeis Garcia Perez: Professor of Digital Business & Society, Aston University, Birmingham, UK
Gareth Wilson: Executive Vice President | Global Banking Industry Lead, Capgemini

[1] https://www.datacenterdynamics.com/en/news/researchers-claim-they-can-cut-ai-training-energy-demands-by-75/?itm_source=Bibblio&itm_campaign=Bibblio-related&itm_medium=Bibblio-article-related

Virtual reality training tool helps nurses learn patient-centered care
University of Delaware computer science students have developed a two-way digital interface that can help nurse trainees build their communication skills and learn to provide patient-centered care across a variety of situations. The virtual reality training tool enables users to rehearse their bedside manner with expectant mothers before ever encountering a pregnant patient in person. The digital platform was created by students in Assistant Professor Leila Barmaki’s Human-Computer Interaction Laboratory, including senior Rana Tuncer, a computer science major, and sophomore Gael Lucero-Palacios. Lucero-Palacios said the training helps aspiring nurses practice the more difficult and sensitive conversations they might have with patients. "Our tool is targeted to midwifery patients,” Lucero-Palacios said. “Learners can practice these conversations in a safe environment. It’s multilingual, too. We currently offer English or Turkish, and we’re working on a Spanish demo.” Because an avatar’s language can be changed, this judgement-free rehearsal environment also has the potential to remove language barriers to care: the idea is that the “practitioner” could speak in one language on one interface and be heard in the patient’s native language on the other. The patient avatar can also be customized to resemble different health stages and populations to give learners a varied experience. Last December, Tuncer took the project on the road, piloting the virtual reality training program for faculty members in the Department of Midwifery at Ankara University in Ankara, Turkey. With technical support provided by Lucero-Palacios back in the United States, she was able to run a demo with the Ankara team, showcasing the capabilities of the UD-developed system’s interactive rehearsal environment.
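The cross-language routing idea described above can be sketched minimally. This is an illustrative stand-in, not the UD system's actual code: the phrase table and function names are hypothetical placeholders for real speech-to-text, machine-translation and text-to-speech services.

```python
# Illustrative sketch of two-interface language routing: the practitioner
# speaks in one language and the patient's interface renders it in another.
# The phrase table below is a hypothetical stand-in for a real MT service.
EN_TO_TR = {
    "how are you feeling today?": "bugün kendinizi nasıl hissediyorsunuz?",
}

def translate(text: str, source: str, target: str) -> str:
    """Look up a canned translation; a real system would call an MT API."""
    if (source, target) == ("en", "tr"):
        return EN_TO_TR.get(text.lower(), text)
    return text  # fall back to the original utterance

def route_utterance(utterance: str, speaker_lang: str, listener_lang: str) -> str:
    """Text the listener's interface would render (before text-to-speech)."""
    if speaker_lang == listener_lang:
        return utterance
    return translate(utterance, speaker_lang, listener_lang)

print(route_utterance("How are you feeling today?", "en", "tr"))
```

In a deployed system, the routed text would then be voiced by the avatar's text-to-speech engine and lip-synced, as the students describe below.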
Last winter, University of Delaware senior Rana Tuncer (left), a computer science major, piloted the virtual reality training program for Neslihan Yilmaz Sezer (right), associate professor in the Department of Midwifery, Ankara University in Ankara, Turkey.

Meanwhile, for Tuncer, Lucero-Palacios and the other students involved in the Human-Computer Interaction Laboratory, developing the VR training tool offered the opportunity to enhance their computer science, data science and artificial intelligence skills outside the classroom. “There were lots of interesting hurdles to overcome, like figuring out a lip-sync tool to match the words to the avatar’s mouth movements and figuring out server connections and how to get the languages to switch and translate properly,” Tuncer said. Lucero-Palacios was fascinated with developing text-to-speech capabilities and the ability to use technology to impact patient care. “If a nurse is well-equipped to answer difficult questions, then that helps the patient,” said Lucero-Palacios. The project is an ongoing research effort in the Barmaki lab that has involved many students. Significant developments occurred during the summer of 2024, when undergraduate researchers Tuncer and Lucero-Palacios contributed to the project through funding support from the National Science Foundation (NSF). However, work began before and continued well beyond that summer, involving many students over time. UD senior Gavin Caulfield provided foundational support in developing the program’s virtual environment and contributed to development of the text-to-speech/speech-to-text capabilities. CIS doctoral students Fahim Abrar and Behdokht Kiafar, along with Pinar Kullu, a postdoctoral fellow in the lab, used multimodal data collection and analytics to quantify the participant experience. “Interestingly, we found that participants showed more positive emotions in response to patient vulnerabilities and concerns,” said Kiafar.
The work builds on previous research Barmaki, an assistant professor of computer and information sciences and resident faculty member in the Data Science Institute, completed with colleagues at New Jersey Institute of Technology and University of Central Florida in an NSF-funded project focused on empathy training for healthcare professionals using a virtual elderly patient. In the project, Barmaki employed machine learning tools to analyze a nursing trainee’s body language, gaze, verbal and nonverbal interactions to capture micro-expressions (facial expressions), and the presence or absence of empathy. “There is a huge gap in communication when it comes to caregivers working in geriatric care and maternal fetal medicine,” said Barmaki. “Both disciplines have high turnover and challenges with lack of caregiver attention to delicate situations.”

UD senior Rana Tuncer (center) met with faculty members Neslihan Yilmaz Sezer (left) and Menekse Nazli Aker (right) of Ankara University in Ankara, Turkey, to educate them about the virtual reality training tool she and her student colleagues have developed to enhance patient-centered care skills for health care professionals.

When these human-human interactions go wrong, for whatever reason, it can extend beyond a single patient visit. For instance, a pregnant woman who has a negative health care experience might decide not to continue routine pregnancy care. Beyond the project’s potential to improve health care professional field readiness, Barmaki was keen to note the benefits of real-world workforce development for her students. “Perceptions still exist that computer scientists work in isolation with their computers and rarely interact, but this is not true,” Barmaki said, pointing to the multi-faceted team members involved in this project. “Teamwork is very important.
We have a nice culture in our lab where people feel comfortable asking their peers or more established students for help.” Barmaki also pointed to the potential application of these types of training environments, enabled by virtual reality, artificial intelligence and natural language processing, beyond health care. With the framework in place, she said, the idea could be adapted for other types of training involving human-human interaction, whether in education, cybersecurity or even emerging technologies such as artificial intelligence (AI). Keeping people at the center of any design or application of this work is critical, particularly as uses for AI continue to expand. “As data scientists, we see things as spreadsheets and numbers in our work, but it’s important to remember that the data is coming from humans,” Barmaki said. While this project leverages computer vision and AI as a teaching tool for nursing assistants, Barmaki explained that this type of system can also be used to train AI and to enable more responsible technologies down the road. She gave the example of using AI to study empathic interactions between humans and to recognize empathy. “This is the most important area where I’m trying to close the loop, in terms of responsible AI or more empathy-enabled AI,” Barmaki said. “There is a whole area of research exploring ways to make AI more natural, but we can’t work in a vacuum; we must consider the human interactions to design a good AI system.” Asked whether she has concerns about the future of artificial intelligence, Barmaki was positive. “I believe AI holds great promise for the future, and, right now, its benefits outweigh the risks,” she said.

NASA Grant Funds Research Exploring Methods of Training Vision-Based Autonomous Systems
Conducting research at 5:30 a.m. may not be everybody’s first choice. But for Siddhartha Bhattacharyya and Ph.D. students Mohammed Abdul Hafeez Khan and Parth Ganeriwala, it’s an essential part of the process for their latest endeavor. Bhattacharyya and his students are developing a more efficient framework for creating and evaluating image-based machine learning classification models for autonomous systems, such as those guiding cars and aircraft. That process involves creating new datasets with taxiway and runway images for vision-based autonomous aircraft. Just as humans need textbooks to fuel their learning, some machines are taught using thousands of photographs and images of the environment where their autonomous pupil will eventually operate. To help ensure their trained models can identify the correct course to take in a hyper-specific environment – with indicators such as centerline markings and side stripes on a runway at dawn – Bhattacharyya and his Ph.D. students chose a December morning to rise with the sun, board one of Florida Tech’s Piper Archer aircraft and photograph the views from above. Bhattacharyya, an associate professor of computer science and software engineering, is exploring the boundaries of operation of efficient and effective machine-learning approaches for vision-based classification in autonomous systems. In this case, these machine learning systems are trained on video or image data collected from environments including runways, taxiways or roadways. With this kind of model, it can take more than 100,000 images to help the algorithm learn and adapt to an environment. Today’s technology demands a pronounced human effort to manually label and classify each image. This can be an overwhelming process.
To combat that, Bhattacharyya was awarded funding from NASA Langley Research Center to advance existing machine learning/computer vision-based systems, such as his lab’s “Advanced Line Identification and Notation Algorithm” (ALINA), by exploring automated labeling that would enable the model to learn and classify data itself – with humans intervening only as necessary. This measure would ease the overwhelming human demand, he said. ALINA is an annotation framework that Hafeez and Parth developed under Bhattacharyya’s guidance to detect and label data for algorithms, such as taxiway line markings for autonomous aircraft. Bhattacharyya will use NASA’s funding to explore transfer learning-based approaches, led by Parth, and few-shot learning (FSL) approaches, led by Hafeez. The researchers are collecting images via GoPro of runways and taxiways at airports in Melbourne and Grant-Valkaria with help from Florida Tech’s College of Aeronautics. Bhattacharyya’s students will take the data they collect from the airports and train their models to, in theory, drive an aircraft autonomously. They are working to collect diverse images of the runways – those of different angles and weather and lighting conditions – so that the model learns to identify patterns that determine the most accurate course regardless of environment or conditions. That includes the daybreak images captured on that December flight. “We went at sunrise, where there is glare on the camera. Now we need to see if it’s able to identify the lines at night because that’s when there are lights embedded on the taxiways,” Bhattacharyya said. “We want to collect diverse datasets and see what methods work, what methods fail and what else do we need to do to build that reliable software.” Transfer learning is a machine learning technique in which a model trained to do one task can generalize information and reuse it to complete another task. 
For example, a model trained to drive autonomous cars could transfer its intelligence to drive autonomous aircraft. This transfer helps explore generalization of knowledge. It also improves efficiency by eliminating the need for new models that complete different but related tasks. For example, a car trained to operate autonomously in California could retain generalized knowledge when learning how to drive in Florida, despite different landscapes. “This model already knows lines and lanes, and we are going to train it on certain other types of lines hoping it generalizes and keeps the previous knowledge,” Bhattacharyya explained. “That model could do both tasks, as humans do.” FSL is a technique that teaches a model to generalize information with just a few data samples instead of the massive datasets used in transfer learning. With this type of training, a model should be able to identify an environment based on just four or five images. “That would help us reduce the time and cost of data collection as well as time spent labeling the data that we typically go through for several thousands of datasets,” Bhattacharyya said. Learning when results may or may not be reliable is a key part of this research. Bhattacharyya said identifying degradation in the autonomous system’s performance will help guide the development of online monitors that can catch errors and alert human operators to take corrective action. Ultimately, he hopes that this research can help create a future where we utilize the benefits of machine learning without fear of it failing before notifying the operator, driver or user. “That’s the end goal,” Bhattacharyya said. “It motivates me to learn how the context relates to assumptions associated with these images, that helps in understanding when the autonomous system is not confident in its decision, thus sending an alert to the user. 
This could apply to a future generation of autonomous systems where we don’t need to fear the unknown – when the system could fail.” Siddhartha (Sid) Bhattacharyya’s primary areas of research expertise and interest are model-based engineering, formal methods, machine learning engineering and explainable AI, applied to intelligent autonomous systems, cybersecurity, human factors, healthcare and avionics. His research lab, ASSIST (Assured Safety, Security, and Intent with Systematic Tactics), focuses on the design of innovative formal methods to assure the performance of intelligent systems, machine learning engineering to characterize intelligent systems for safety, and model-based engineering to analyze system behavior. Siddhartha Bhattacharyya is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.
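The few-shot learning idea described in the article – identifying an environment from just four or five labeled samples – can be illustrated with a minimal nearest-centroid classifier, the intuition behind prototypical classifiers. This is a generic sketch, not the lab's actual method; the class names and 2-D feature vectors are made-up stand-ins for learned image embeddings.

```python
# Few-shot classification via nearest class centroids: with only a
# handful of labeled feature vectors per class, a new sample is assigned
# to the class whose centroid (prototype) it lies closest to.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(dims) / n for dims in zip(*vectors)]

def classify(sample, prototypes):
    """Return the label of the nearest prototype (Euclidean distance)."""
    return min(prototypes, key=lambda label: math.dist(sample, prototypes[label]))

# Four samples per class, echoing the article's "four or five images".
# The 2-D features are hypothetical stand-ins for image embeddings.
support = {
    "centerline": [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15], [0.95, 0.05]],
    "side_stripe": [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85], [0.05, 0.95]],
}
prototypes = {label: centroid(vs) for label, vs in support.items()}

print(classify([0.8, 0.1], prototypes))  # prints "centerline"
```

Because only the support samples and a distance computation are needed, this style of classifier avoids the large labeled datasets and retraining that the article contrasts it with.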






