

Why generative AI 'hallucinates' and makes up stuff

Generative artificial intelligence tools, like OpenAI’s GPT-4, are sometimes full of bunk. Yes, they excel at tasks involving human language, like translating, writing essays, and acting as a personalized writing tutor. They even ace standardized tests. And they’re rapidly improving. But they also “hallucinate,” which is the term scientists use to describe when AI tools produce information that sounds plausible but is incorrect. Worse, they do so with such confidence that their errors are sometimes difficult to spot.

Christopher Kanan, an associate professor of computer science with an appointment at the Goergen Institute for Data Science and Artificial Intelligence at the University of Rochester, explains that the reasoning and planning capabilities of AI tools are still limited compared with those of humans, who excel at continual learning. “They don’t continually learn from experience,” Kanan says of AI tools. “Their knowledge is effectively frozen after training, meaning they lack awareness of recent developments or ongoing changes in the world.” Current generative AI systems also lack what’s known as metacognition. “That means they typically don’t know what they don’t know, and they rarely ask clarifying questions when faced with uncertainty or ambiguous prompts,” Kanan says. “This absence of self-awareness limits their effectiveness in real-world interactions.”

Kanan is an expert in artificial intelligence, continual learning, and brain-inspired algorithms who welcomes inquiries from journalists and knowledge seekers. He recently shared his thoughts on AI with WAMC Northeast Public Radio and with the University of Rochester News Center. Reach out to Kanan by clicking on his profile.


Decoding the Future of AI: From Disruption to Democratisation and Beyond

The global AI landscape has become a melting pot for innovation, with diverse thinking pushing the boundaries of what is possible. Its application extends beyond just technology, reshaping traditional business models and redefining how enterprises, governments and societies operate. Advancements in model architectures, training techniques and the proliferation of open-source tools are lowering barriers to entry, enabling organisations of all sizes to develop competitive AI solutions with significantly fewer resources. As a result, the long-standing notion that AI leadership is reserved for entities with vast computational and financial resources is being challenged.

This shift is also redrawing the global AI power balance, with a decentralised approach to AI in which competition and collaboration coexist across different regions. As AI development becomes more distributed, investment strategies, enterprise innovation and global technological leadership are being reshaped. However, established AI powerhouses still wield significant leverage, driving an intense competitive cycle of rapid innovation. Amid this acceleration, it is critical to distinguish true technological breakthroughs from over-hyped narratives, adopting a measured, data-driven approach that balances innovation with demonstrable business value and robust ethical AI guardrails.

Implications of the Evolving AI Landscape

The democratisation of AI advancements, intensifying competitive pressures, the critical need for efficiency and sustainability, evolving geopolitical dynamics and the global race for skilled talent are all fuelling the development of AI worldwide. These dynamics are paving the way for a global balance of technological leadership.

Democratisation of AI Potential

The ability to develop competitive AI models at lower costs is not only broadening participation but also reshaping how AI is created, deployed and controlled.
Open-source AI fosters innovation by enabling startups, researchers and enterprises to collaborate and iterate rapidly, leading to diverse applications across industries. For example, xAI has made a significant move in the tech world by open-sourcing its Grok AI chatbot model, potentially accelerating the democratisation of AI and fostering innovation. However, greater accessibility can also introduce challenges, including risks of misuse, uneven governance and concerns over intellectual property. Additionally, as companies strategically leverage open-source AI to influence market dynamics, questions arise about the evolving balance between open innovation and proprietary control.

Increased Competitive Pressure

The AI industry is fuelled by a relentless drive to stay ahead of the competition, a pressure felt equally by Big Tech and startups. This is accelerating the release of new AI services as companies strive to meet growing consumer demand for intelligent solutions. The risk of market disruption is significant; those who lag face being eclipsed by more agile players. To survive and thrive, differentiation is paramount. Companies are laser-focused on developing unique AI capabilities and applications, creating a marketplace where constant adaptation and strategic innovation are crucial for success.

Resource Optimisation and Sustainability

The trend toward accessible AI necessitates resource optimisation: developing models with significantly less computational power, energy consumption and training data. This is not just about cost; it is crucial for sustainability. Training large AI models is energy-intensive; for example, training GPT-3, a 175-billion-parameter model, is believed to have consumed 1,287 MWh of electricity, equivalent to an average American household’s use over 120 years [1]. This drives innovation in model compression, transfer learning and specialised hardware, such as NVIDIA’s TensorRT.
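The household-equivalence figure above is easy to sanity-check. Assuming an average US household uses roughly 10,700 kWh of electricity per year (a commonly cited figure, stated here as an assumption rather than a number from the article), 1,287 MWh does work out to about 120 years of household use:

```python
# Sanity check of the figures quoted above: 1,287 MWh to train GPT-3
# versus average US household electricity use.
TRAINING_MWH = 1_287
HOUSEHOLD_KWH_PER_YEAR = 10_700  # assumed average annual US household use

training_kwh = TRAINING_MWH * 1_000          # convert MWh to kWh
years_equivalent = training_kwh / HOUSEHOLD_KWH_PER_YEAR
print(f"{years_equivalent:.0f} years")       # prints "120 years"
```

The two figures are consistent, though the assumed household average varies by a few percent depending on the source and year.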
Small language models (SLMs) are a key development, offering performance comparable to larger models with drastically reduced resource needs. This makes them ideal for edge devices and resource-constrained environments, furthering both accessibility and sustainability across the AI lifecycle.

Multifaceted Global AI Landscape

The global AI landscape is increasingly defined by regional strengths and priorities. The US, with its strength in cloud infrastructure and its software ecosystem, leads in “short-chain innovation”, rapidly translating AI research into commercial products. Meanwhile, China excels in “long-chain innovation”, deeply integrating AI into its extended manufacturing and industrial processes. Europe prioritises ethical, open and collaborative AI, while countries across APAC showcase a diversity of approaches. Underlying these regional variations is a shared trajectory for the evolution of AI, increasingly guided by principles of responsible AI encompassing ethics, sustainability and open innovation, although the specific implementations and stages of advancement differ across regions.

The Critical Talent Factor

The evolving AI landscape necessitates a skilled workforce. Demand for professionals with expertise in AI and machine learning, data analysis and related fields is rapidly increasing. This creates a talent gap that businesses must address through upskilling and reskilling initiatives. For example, Microsoft has launched an AI Skills Initiative, including free coursework and a grant program, to help individuals and organisations globally develop generative AI skills.

What does this mean for today’s enterprise?

New Business Horizons

AI is no longer just an efficiency tool; it is a catalyst for entirely new business models. Enterprises that rethink their value propositions through AI-driven specialisation will unlock niche opportunities and reshape industries.
In financial services, for example, AI is fundamentally transforming operations, risk management, customer interactions and product development, leading to new levels of efficiency, personalisation and innovation.

Navigating AI Integration and Adoption

Integrating AI is not just about deployment; it is about ensuring enterprises are structurally prepared. Legacy IT architectures, fragmented data ecosystems and rigid workflows can hinder the full potential of AI. Organisations must invest in cloud scalability, intelligent automation and agile operating models to make AI a seamless extension of their business. Equally critical is ensuring workforce readiness, which involves strategically embedding AI literacy across all organisational functions and proactively reskilling talent to collaborate effectively with intelligent systems.

Embracing Responsible AI

Ethical considerations, data security and privacy are no longer afterthoughts but are becoming key differentiators. Organisations that embed responsible AI principles at the core of their strategy, rather than treating them as compliance checkboxes, will build stronger customer trust and long-term resilience. This requires proactive bias mitigation, explainable AI frameworks, robust data governance and continuous monitoring for potential risks.

Call to Action: Embracing a Balanced Approach

The AI revolution is underway, and it demands a balanced and proactive response. Enterprises must invest in talent and reskilling initiatives to bridge the AI skills gap, modernise their infrastructure to support AI integration and scalability, and embed responsible AI principles at the core of their strategy, ensuring fairness, transparency and accountability. Simultaneously, researchers must continue to push the boundaries of AI’s potential while prioritising energy efficiency and minimising environmental impact, and policymakers must create frameworks that foster responsible innovation and sustainable growth.
This necessitates combining innovative research with practical enterprise applications and a steadfast commitment to ethical and sustainable AI principles. The rapid evolution of AI presents both an imperative and an opportunity. The next chapter of AI will be defined by those who harness its potential responsibly while balancing technological progress with real-world impact.

Resources

Sudhir Pai: Executive Vice President and Chief Technology & Innovation Officer, Global Financial Services, Capgemini
Professor Aleks Subic: Vice-Chancellor and Chief Executive, Aston University, Birmingham, UK
Alexeis Garcia Perez: Professor of Digital Business & Society, Aston University, Birmingham, UK
Gareth Wilson: Executive Vice President | Global Banking Industry Lead, Capgemini

[1] https://www.datacenterdynamics.com/en/news/researchers-claim-they-can-cut-ai-training-energy-demands-by-75/?itm_source=Bibblio&itm_campaign=Bibblio-related&itm_medium=Bibblio-article-related


Virtual reality training tool helps nurses learn patient-centered care

University of Delaware computer science students have developed a digital interface as a two-way system that can help nurse trainees build their communication skills and learn to provide patient-centered care across a variety of situations. This virtual reality training tool would enable users to rehearse their bedside manner with expectant mothers before ever encountering a pregnant patient in person.

The digital platform was created by students in Assistant Professor Leila Barmaki’s Human-Computer Interaction Laboratory, including senior Rana Tuncer, a computer science major, and sophomore Gael Lucero-Palacios. Lucero-Palacios said the training helps aspiring nurses practice more difficult and sensitive conversations they might have with patients. “Our tool is targeted to midwifery patients,” Lucero-Palacios said. “Learners can practice these conversations in a safe environment. It’s multilingual, too. We currently offer English or Turkish, and we’re working on a Spanish demo.”

This type of judgement-free rehearsal environment has the potential to remove language barriers to care, with the ability to change the language capabilities of an avatar. For instance, the idea is that on one interface the “practitioner” could speak in one language, but it would be heard on the other interface in the patient’s native language. The patient avatar also can be customized to resemble different health stages and populations to provide learners a varied experience.

Last December, Tuncer took the project on the road, piloting the virtual reality training program for faculty members in the Department of Midwifery at Ankara University in Ankara, Turkey. With technical support provided by Lucero-Palacios back in the United States, she was able to run a demo with the Ankara team, showcasing the capabilities of the UD-developed system’s interactive rehearsal environment.
Last winter, University of Delaware senior Rana Tuncer (left), a computer science major, piloted the virtual reality training program for Neslihan Yilmaz Sezer (right), associate professor in the Department of Midwifery, Ankara University in Ankara, Turkey.

Meanwhile, for Tuncer, Lucero-Palacios and the other students involved in the Human-Computer Interaction Laboratory, developing the VR training tool offered the opportunity to enhance their computer science, data science and artificial intelligence skills outside the classroom. “There were lots of interesting hurdles to overcome, like figuring out a lip-sync tool to match the words to the avatar’s mouth movements and figuring out server connections and how to get the languages to switch and translate properly,” Tuncer said. Lucero-Palacios was fascinated with developing text-to-speech capabilities and the ability to use technology to impact patient care. “If a nurse is well-equipped to answer difficult questions, then that helps the patient,” said Lucero-Palacios.

The project is an ongoing research effort in the Barmaki lab that has involved many students. Significant developments occurred during the summer of 2024, when undergraduate researchers Tuncer and Lucero-Palacios contributed to the project through funding support from the National Science Foundation (NSF). However, work began before and continued well beyond that summer, involving many students over time. UD senior Gavin Caulfield provided foundational support in developing the program’s virtual environment and contributed to the development of the text-to-speech/speech-to-text capabilities. CIS doctoral students Fahim Abrar and Behdokht Kiafar, along with Pinar Kullu, a postdoctoral fellow in the lab, used multimodal data collection and analytics to quantify the participant experience. “Interestingly, we found that participants showed more positive emotions in response to patient vulnerabilities and concerns,” said Kiafar.
The work builds on previous research Barmaki, an assistant professor of computer and information sciences and resident faculty member in the Data Science Institute, completed with colleagues at New Jersey Institute of Technology and University of Central Florida in an NSF-funded project focused on empathy training for healthcare professionals using a virtual elderly patient. In that project, Barmaki employed machine learning tools to analyze a nursing trainee’s body language, gaze, and verbal and nonverbal interactions to capture micro-expressions (facial expressions) and the presence or absence of empathy.

“There is a huge gap in communication when it comes to caregivers working in geriatric care and maternal-fetal medicine,” said Barmaki. “Both disciplines have high turnover and challenges with lack of caregiver attention to delicate situations.”

UD senior Rana Tuncer (center) met with faculty members Neslihan Yilmaz Sezer (left) and Menekse Nazli Aker (right) of Ankara University in Ankara, Turkey, to educate them about the virtual reality training tool she and her student colleagues have developed to enhance patient-centered care skills for health care professionals.

When these human-human interactions go wrong, for whatever reason, it can extend beyond a single patient visit. For instance, a pregnant woman who has a negative health care experience might decide not to continue routine pregnancy care.

Beyond the project’s potential to improve health care professional field readiness, Barmaki was keen to note the benefits of real-world workforce development for her students. “Perceptions still exist that computer scientists work in isolation with their computers and rarely interact, but this is not true,” Barmaki said, pointing to the multi-faceted team members involved in this project. “Teamwork is very important.
We have a nice culture in our lab where people feel comfortable asking their peers or more established students for help.”

Barmaki also pointed to the potential application of these types of training environments, enabled by virtual reality, artificial intelligence and natural language processing, beyond health care. With the framework in place, she said, the idea could be adapted for other types of training involving human-human interaction, say in education, cybersecurity, even in emerging technology such as artificial intelligence (AI). Keeping people at the center of any design or application of this work is critical, particularly as uses for AI continue to expand. “As data scientists, we see things as spreadsheets and numbers in our work, but it’s important to remember that the data is coming from humans,” Barmaki said.

While this project leverages computer vision and AI as a teaching tool for nursing assistants, Barmaki explained this type of system can also be used to train AI and to enable more responsible technologies down the road. She gave the example of using AI to study empathic interactions between humans and to recognize empathy. “This is the most important area where I’m trying to close the loop, in terms of responsible AI or more empathy-enabled AI,” Barmaki said. “There is a whole area of research exploring ways to make AI more natural, but we can’t work in a vacuum; we must consider the human interactions to design a good AI system.”

Asked whether she has concerns about the future of artificial intelligence, Barmaki was positive. “I believe AI holds great promise for the future, and, right now, its benefits outweigh the risks,” she said.


Measuring how teachers' emotions can impact student learning

University of Delaware professor Leigh McLean has developed a new tool for measuring teachers’ emotional expressions and studying how these expressions affect their students’ attitudes toward learning. McLean uses this tool to gather new data showing emotional transmission between teachers and their students in fourth-grade classrooms.

McLean and co-author Nathan Jones of Boston University share the results of their use of the tool in a new article in Contemporary Educational Psychology. They found that teachers displayed far more positive emotions than negative ones. But they also found that some teachers showed high levels of negative emotions. In these cases, teachers’ expressions of negative emotions were associated with reduced student enjoyment of learning and engagement. These findings add to a compelling body of research highlighting the importance of teachers’ and students’ emotional experiences within the context of teaching and learning.

“Anyone who has been in a classroom knows that it is an inherently emotional environment, but we still don’t fully understand exactly how emotions, and especially the teachers’ emotions, work to either support or detract from students’ learning,” said McLean, who studies teachers’ emotions and well-being in the College of Education and Human Development’s School of Education (SOE) and Center for Research in Education and Social Policy. “This new tool, and these findings, help us understand these processes more precisely and point to how we might provide emotion-centered classroom supports.”

Measuring teacher and student emotions

McLean and Jones collected survey data and video-recorded classroom observations from 65 fourth-grade teachers and 805 students in a Southwestern U.S. state. The surveys asked participants to report their emotions and emotion-related experiences — like feelings of enjoyment, worry or boredom — as well as their teaching and learning behaviors in mathematics and English language arts (ELA).
Using the new observational tool they developed — the Teacher Affect Coding System — McLean and Jones also assessed teachers’ vocal tones, body posturing, body movements and facial expressions during classroom instruction and categorized outward displays of emotion as positive, negative or neutral. For example, higher-pitched or lilting vocal tones were categorized as positive, while noticeably harsh or sad vocal tones were categorized as negative.

Overall, McLean and Jones found that teachers spent most of their instructional time displaying outward positive emotions. Interestingly though, they did not find any associations between these positive emotions and students’ content-related emotions or learning attitudes in ELA or math.

“This lack of association might be because outward positivity is the relative ‘norm’ for elementary school teachers, and our data seem to support that,” McLean said. “That’s not to say that teachers’ positivity isn’t important, though. Decades of research has shown us that when teachers are warm, responsive and supportive, and when they foster positive relationships with their students, students do better in almost every way. It could be that positivity works best when done in tandem with other important teacher behaviors or routines, or it could be that it is more relevant for different student outcomes.”

However, they did find that a small subset of teachers — about 10% — displayed notable amounts of negative emotions, with some showing negativity during as much as 80% of their instructional time. The students of these teachers reported reduced enjoyment and engagement in their ELA classes and reduced engagement in their math classes.

“We think that these teachers are struggling with their real-time emotion regulation skills,” McLean said. “Any teacher, even a very positive one, will tell you that managing a classroom of students is challenging, and staying positive through the frustrating times takes a lot of emotional regulation.
Emotion regulation is a particularly important skill for teachers because children inherently look to the social cues of adults in their immediate environment to gauge their level of safety and comfort. When a teacher is dysregulated, their students pick up on this in ways that can detract from learning.”

Recommendations for supporting teacher well-being

Given the findings of their study, McLean and Jones make several recommendations for teacher preparation and professional learning programs. As a first step, they recommend that teacher preparation and professional learning programs share information about how negative emotions and experiences are a normal part of the teaching experience. As McLean said, “It’s okay to be frustrated!” However, it is also important to be aware that repeated outward displays of negative emotion can impact students. McLean and Jones also suggest that these programs provide specific training to teachers on skills such as mindfulness and emotion regulation to help teachers manage negative emotions while they’re teaching.

“Logically, these findings and recommendations make complete sense,” said Steve Amendum, professor and director of CEHD’s SOE, which offers a K-8 teacher education program. “After working with many, many teachers, I often see teachers’ enthusiasm or dislike for a particular activity or content area transfer to their students.”

McLean and Jones, however, emphasize that supporting teacher well-being can’t just be up to the teachers. Assistant principals, principals and other educational leaders should prioritize teacher wellness across the school and district. If teachers’ negative emotions in the classroom result in part from challenging working conditions or insufficient resources, educational leaders and policymakers should consider system-wide changes and supports to foster teacher well-being. To learn more about CEHD research in social and emotional development, visit its research page.
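The kind of summary statistics described earlier (the share of instructional time a teacher spends displaying each affect category, and flagging teachers whose negativity exceeds some threshold) can be sketched in a few lines. This is purely illustrative: the labels, durations and the 50% flagging threshold below are invented for the sketch, not actual Teacher Affect Coding System data or cut-offs.

```python
# Each teacher's coded observation is a list of (affect_label, seconds) segments.
def affect_shares(segments):
    """Return each affect label's share of total observed time."""
    total = sum(duration for _, duration in segments)
    shares = {}
    for label, duration in segments:
        shares[label] = shares.get(label, 0) + duration / total
    return shares

def flag_high_negativity(teachers, threshold=0.5):
    """Return teachers whose 'negative' time share exceeds the threshold."""
    return [name for name, segments in teachers.items()
            if affect_shares(segments).get("negative", 0) > threshold]
```

With toy data such as `{"A": [("positive", 300), ("neutral", 200), ("negative", 100)], "B": [("negative", 480), ("neutral", 120)]}`, only teacher B (80% negative time) would be flagged.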
To arrange an interview with McLean, connect with her directly by clicking on the contact button found on her ExpertFile profile page.


How authorship language helped catch a domestic terrorist – new podcast

In the latest episode of Writing Wrongs, hosts Professor Tim Grant and Dr Nicci MacLeod interview Dr Isobelle Clarke to unravel a case where forensic linguistics helped track down and convict a dangerous individual. Episode three, Imposters Tending to the Wild with Dr Isobelle Clarke, dives into the chilling case of Nikolaos Karvounakis, a self-proclaimed anarchist who planted a viable explosive device in Princes Street Gardens, Edinburgh, in 2018. Karvounakis, a Greek national, evaded capture for years, hiding behind online anonymity and extremist rhetoric. However, forensic linguists stepped in to analyse his anonymous blog posts, revealing patterns in his language that ultimately helped Police Scotland link him to the crime. The case not only demonstrates how linguistic evidence can be a powerful forensic tool but also raises crucial questions about the role of language analysis in modern terrorism investigations.

On 11 January 2018, a suspicious cardboard box was discovered in a public seating area in Edinburgh’s Princes Street Gardens. After a controlled explosion, investigators determined the device could have caused serious harm had it detonated. With no immediate leads, the investigation stalled - until an anonymous blog post surfaced, claiming responsibility for the attack. The post, written in both English and Spanish, was linked to an eco-anarchist group called Individualists Tending to the Wild, a Mexican-based extremist organisation advocating violent action against technological progress. Crucially, the post included an image of the bomb’s interior, a detail only the perpetrator or law enforcement could have known.

Police Scotland sought the expertise of Professor Tim Grant, who analysed the text, producing a linguistic profile that suggested the writer was neither a native English nor Spanish speaker - but rather someone influenced by another language entirely. Two years later, police identified Nikolaos Karvounakis as a suspect.
Using comparative authorship analysis, Professor Grant compared Karvounakis’ online writings - including song lyrics from his rock band - to the manifesto. By dissecting word patterns, grammatical structures and stylistic quirks, he established that Karvounakis was the likely author. This evidence - alongside forensic meteorology, which linked photos of clouds in Karvounakis’ blog posts to the weather conditions on the day of the crime - was used to secure a warrant and seize computers containing known writings by Karvounakis.

To eliminate the inevitable bias that would result from having worked the case for more than two years, Professor Grant invited Dr Isobelle Clarke onto the case as an independent forensic linguist. Using a version of the General Imposters Method, a technique similar to a police lineup but for language, Dr Clarke confirmed that the writing style in the blog post was the closest to Karvounakis’ known writings. Police Scotland put the evidence in the case, including the linguistic evidence, to Karvounakis and secured a guilty plea. In February 2022, Nikolaos Karvounakis was sentenced to more than eight years in prison under the UK’s Terrorism Act.

Tim Grant, professor of forensic linguistics at Aston University, said: “The case highlights the growing importance of forensic linguistics in solving crimes, particularly in an age where digital anonymity combines with extremist ideologies.
“It also highlights how different types of language analysis can assist as a case moves through different stages of investigation.”

Dr Nicci MacLeod, deputy director of the Aston Institute for Forensic Linguistics, said: “This episode offers listeners a behind-the-scenes look at the forensic methods that expose deception, identify threats and ultimately bring criminals to justice.”

Dr Isobelle Clarke, a lecturer in security and protection science at Lancaster University and one of the first graduates from the campus-based MA Forensic Linguistics programme at Aston University, said: “It was great to be back at Aston University talking about the Karvounakis case for the Writing Wrongs podcast.

“It’s an interesting case to highlight, as it shows how different types of language analysis can help with police investigations.”

Writing Wrongs is available on Spotify, Apple Podcasts and all major streaming platforms. Listeners are encouraged to subscribe, share and engage with the hosts by submitting their forensic linguistics questions. Whether it’s about this case or broader forensic linguistic techniques, Professor Grant and Dr MacLeod welcome inquiries from listeners.
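The "police lineup but for language" idea can be made concrete with a small sketch of the imposters approach: a questioned document is repeatedly compared, on random subsets of features, against a candidate author's known writing and a set of distractor ("imposter") documents, and the candidate is supported only if their text wins most rounds. Everything below is invented toy material, and character 4-gram cosine similarity is just one simple feature choice; this is not the actual method configuration or data used in the Karvounakis case.

```python
import random
from collections import Counter
from math import sqrt

def ngrams(text, n=4):
    # Character n-gram frequency profile of a text.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b, features):
    # Cosine similarity between two profiles, restricted to a feature subset.
    dot = sum(a[f] * b[f] for f in features)
    na = sqrt(sum(a[f] ** 2 for f in features))
    nb = sqrt(sum(b[f] ** 2 for f in features))
    return dot / (na * nb) if na and nb else 0.0

def imposters_score(questioned, candidate, imposters, rounds=200, keep=0.5):
    """Fraction of rounds in which the candidate's known writing is more
    similar to the questioned text than every imposter document is,
    each round using a random subset of the questioned text's features."""
    q, c = ngrams(questioned), ngrams(candidate)
    imps = [ngrams(t) for t in imposters]
    all_feats = list(q.keys())
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    wins = 0
    for _ in range(rounds):
        feats = rng.sample(all_feats, max(1, int(len(all_feats) * keep)))
        if cosine(q, c, feats) > max(cosine(q, i, feats) for i in imps):
            wins += 1
    return wins / rounds
```

A score near 1.0 across many rounds supports, but never proves, common authorship; as the case above shows, real forensic work combines such analysis with independent review and other evidence.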


Best-selling author Kate Summerscale joins Writing Wrongs to explore true crime and justice

The true crime podcast Writing Wrongs continues its exploration of language and justice with a special bonus episode featuring best-selling author and historian Kate Summerscale.

Kate is an award-winning historian, journalist and best-selling author known for her meticulous research into historical true crime cases. Her book The Suspicions of Mr. Whicher won the Samuel Johnson Prize for Non-Fiction and was adapted into a major ITV drama. Her latest book, The Peep Show: The Murders at 10 Rillington Place, revisits the infamous Christie case, shedding new light on the victims’ lives, the social conditions of post-war Britain and the power of the press in shaping public perceptions of crime.

In this episode, hosts Professor Tim Grant and Dr Nicci MacLeod explore a fresh perspective on the Rillington Place murders, the wrongful execution of Timothy Evans and how forensic linguistics has helped uncover the truth in criminal cases. Following on from the first episode of the series, which examined the Timothy Evans case and the origins of forensic linguistics, this conversation with Kate Summerscale provides fresh historical insights into one of Britain’s most infamous miscarriages of justice.

The episode revisits the horrifying crimes of John Christie, whose calculated murders led to the wrongful conviction and execution of Timothy Evans, a miscarriage of justice that cast a long shadow over the UK’s legal system and played a pivotal role in the eventual abolition of the death penalty. Through expert discussion, the episode examines how Evans’ case became a turning point for criminal justice reform.

The conversation also looks at the role of the media in shaping crime narratives. Sensationalist reporting during the Rillington Place murders fuelled public perceptions, sometimes distorting the truth in favour of dramatic storytelling.
The episode draws comparisons between 1950s tabloid journalism and today’s true crime media, examining how crime reporting has evolved - and the ethical challenges it still faces.

A deeply unsettling aspect of this case is its gendered nature. The majority of John Christie’s victims were vulnerable women, many facing financial and social instability. The episode delves into how structural inequalities, from the lack of legal abortion to economic dependence, made women more susceptible to predatory figures like Christie, a pattern that remains relevant in crime analysis today.

Finally, the episode scrutinises government complicity in covering up a miscarriage of justice. The Brabin Inquiry, launched in the 1960s, sought to reexamine Evans’ conviction but delivered a highly controversial conclusion, failing to fully exonerate him. The discussion highlights how political interests and legal reputation management influenced the case’s outcome, leading to Evans’ eventual posthumous pardon - but not a full legal exoneration.

Tim Grant, professor of forensic linguistics at Aston University, said: “It was wonderful to have Kate on Writing Wrongs.

“Her work challenges the traditional true crime narrative, shifting focus from the murderer to the victims and the broader social structures that allow such crimes to happen.

“Her insights in this episode provide a fresh and deeply researched perspective on a case that still haunts British legal history.”

Writing Wrongs is available on Spotify, Apple Podcasts and all major streaming platforms. Listeners are encouraged to subscribe, share and engage with the hosts by submitting their forensic linguistics questions. Whether it’s about this case or broader forensic linguistic techniques, Professor Grant and Dr MacLeod welcome inquiries from listeners.

Professor Tim Grant profile photo
3 min. read
New true crime podcast Writing Wrongs launches with a chilling case of miscarriage of justice featured image

New true crime podcast Writing Wrongs launches with a chilling case of miscarriage of justice

True crime enthusiasts and forensic linguistics fans have a gripping new podcast to add to their playlists. Writing Wrongs, an original podcast from the Aston Institute for Forensic Linguistics (AIFL) at Aston University, provides a deep dive into how forensic language analysis plays a crucial role in solving crimes and improving the delivery of justice.

Hosts Professor Tim Grant and Dr Nicci MacLeod, leading experts in forensic linguistics, open the series by examining how police interviews and linguistic evidence played a key role in one of Britain’s most infamous miscarriages of justice. Throughout the series, they’ll explore real-life cases where forensic linguistics has proved pivotal in solving crimes, joined by expert guests who reveal the fascinating - and sometimes chilling - ways language can expose the truth.

The first episode, Timothy Evans: A Case for Forensic Linguistics, launched on 7 March 2025, 75 years after Timothy Evans’ wrongful conviction and subsequent execution (9 March 1950). The Timothy Evans case was instrumental in the UK’s decision to abolish the death penalty, raising critical questions about police interviewing techniques, false confessions and linguistic analysis in legal proceedings.

In 1950, Evans was convicted and later hanged for the murder of his baby daughter, Geraldine, while his wife, Beryl Evans, was also presumed to be his victim. However, three years later, his neighbour at 10 Rillington Place, London, John ‘Reg’ Christie, a former police officer, was exposed as a serial killer responsible for at least eight murders - almost certainly including those of Geraldine and Beryl Evans. Despite evidence casting doubt on Evans’ guilt, he was executed before Christie’s crimes came to light. The case was instrumental in the early development of forensic linguistics, as experts later analysed Evans’ police confessions to expose inconsistencies.
Tim Grant, professor of forensic linguistics at Aston University, said: “We are delighted to launch Writing Wrongs with this episode focussing on the wrongful conviction and execution of Timothy Evans. This episode clearly shows how language analysis can provide evidence to help resolve one of the most controversial cases in British legal history.

“In other episodes we show how contemporary forensic linguists are making contributions to the delivery of justice in cases of murder, rape and terrorism. In each case we discuss with a linguist how they assisted, and demonstrate how providing linguistic evidence to the courts can exonerate or incriminate and change the outcome of cases.”

Dr Nicci MacLeod, deputy director of the Aston Institute for Forensic Linguistics, said: “This is the origin story for forensic linguistics, a phrase first coined by Jan Svartvik in his 1968 publication analysing the Evans statements.

“Svartvik was able to show that there were clear differences in the language style of the incriminating sections of Evans’ ‘confession’ and other parts of the statements he gave to police.

“One feature Svartvik focussed on was the use of the word ‘then’ positioned after the subject of a clause, as in ‘I then came upstairs’, as opposed to what we might consider the more usual ordering of ‘then I came upstairs’. This is a feature of ‘policespeak’, and was also identified in the infamous Derek Bentley confession by Malcolm Coulthard some years later.”

The first three episodes of the eight-part series of Writing Wrongs are available now on Spotify, Apple Podcasts and all major podcast platforms.
They include a bonus episode with author Kate Summerscale (‘The Suspicions of Mr Whicher’ and ‘The Queen of Whale Cay’) about her latest book, ‘The Peepshow: The Murders at 10 Rillington Place’, and an episode featuring Dr Isobelle Clarke, which shows how a series of forensic authorship analyses assisted in the investigation and conviction of a terrorist who planted a pipe bomb in Edinburgh in 2018.

Listeners are encouraged to follow, share and engage with the hosts by submitting their forensic linguistics questions. Whether it’s about the cases covered or broader issues in forensic linguistics, Professor Grant and Dr MacLeod welcome enquiries from listeners.

Future episodes will be released on the first Friday of the month, with episode four, Forensic Linguistics: Cracking the Killer’s Code, dropping on 4 April 2025.

Professor Tim Grant profile photo
3 min. read
Legality, Next Steps for Canadian Tariffs featured image

Legality, Next Steps for Canadian Tariffs

Professor Julian Ku of the Maurice A. Deane School of Law at Hofstra University was quoted in The Globe and Mail article “The best hope for Canada in fighting a trade war with Trump may lie in U.S. courts.”

“Using IEEPA to impose tariffs has not been done before, so there has never been a court ruling on this question,” said Ku, who studies the interaction of international law and U.S. constitutional law at Hofstra University.

Mr. Trump has, however, argued that he is responding to external threats, citing the movement of fentanyl and illegal migrants to the U.S. from Canada, Mexico and China. That is likely to prove a potent defense, Prof. Ku said.

“The court has also been deferential to the President on national-security matters, and the language of the statute is very broad, so it is far from clear which way the court would come down on this issue,” he said.

Julian Ku profile photo
1 min. read
Florida Tech's Rev. Randall Meissen Publishes Chapter Examining Influence of Dominican Spirituality on Natural History featured image

Florida Tech's Rev. Randall Meissen Publishes Chapter Examining Influence of Dominican Spirituality on Natural History

The Rev. Randall Meissen, LC, Florida Tech’s chaplain, director of the Catholic Campus Ministry and an adjunct faculty member of the College of Psychology and Liberal Arts, has published a new book chapter, “Contemplating Bats and Bees,” in the academic compendium “The Dominicans in the Americas and the Philippines (c. 1500-c. 1820),” edited by David Thomas Orique, Rady Roldán-Figueroa and Cynthia Folquer. The book was published online in August by Routledge.

Meissen’s chapter examines Friar Francisco Ximenez, the man credited with preserving the only surviving Mayan language texts, and offers examples of the influence of Dominican spirituality on natural history. Meissen conducted research in the rare book archives of Guatemala and Spain, and the chapter developed from a presentation he gave at the International Conference on the History of the Order of Preachers in the Americas several years ago.

Ximenez was an 18th-century Dominican priest and missionary linguist known for his preservation of the Maya–K’iche’ creation myth, the Popol Vuh. He also had a keen interest in the plants and animals of Guatemala during his ministry, Meissen highlights, and recorded observations in his manuscript, “La historia natural del reino de Guatemala.” Meissen’s chapter examines Ximenez’s observations of nature and explores the cultural factors inspiring his research of the region. Those include: “the Dominican tradition of collecting anecdotes about animals as exempla for use in preaching, the expansive highland Mayan vocabulary for naming native organisms, the Mayan religious myths about animals in the Popol Vuh, the practice of using mission churches as spatial reference points and the material need of the Order of Preachers in Guatemala for items such as beeswax,” the abstract reads.

Meissen’s research also connects back to the classroom. He is teaching a World Religions course this spring.
If you’re interested in learning more, or you’re a reporter looking to speak with Father Randall Meissen, contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology, at adam@fit.edu to arrange an interview.

2 min. read
AI Everywhere: Where Artificial Intelligence and Health Care Intersect featured image

AI Everywhere: Where Artificial Intelligence and Health Care Intersect

Imagine a world where AI doesn’t just support health care providers, but anticipates their next move — detecting diseases faster than human eyes, analyzing patterns and patient data that humans might overlook and revolutionizing health care decision making at every level. Driven by data, AI can identify which patients are most likely to have repeated emergency department visits or thrive from personalized medicine. With the power of robotics enhanced by AI, people with medical needs can gain more independence, managing daily tasks such as taking medication, monitoring their health and receiving personalized care, all from the comfort of their own homes. And this is just the beginning.

“AI is transforming – and is going to continue transforming – every industry, especially health care,” said Bharat Rao, a notable figure in the fields of health care, technology and AI. Rao has made significant contributions to artificial intelligence, machine learning and data analytics, particularly in health care innovation. His current start-up, CareNostics, uses AI technology to identify patients at increased risk for chronic disease. “We take this for granted,” he said, “but it’s like what I used to see on Star Trek as a kid. The opportunities are limitless.”

Rao was a keynote speaker at ChristianaCare’s inaugural Innovation Summit, a two-day conference at ChristianaCare’s Newark campus in Delaware, in fall 2024. During panel discussions and keynotes, more than 200 attendees heard about current and future health tech from national innovators and thought leaders, as well as technical advice for inventors who want to patent ideas and protect intellectual property in a world where “AI Is Everywhere,” the conference’s theme. Speakers emphasized that it’s not just technologists, but also researchers, clinicians and other health care professionals who play an essential role in implementing AI-based health care solutions.
“There’s no AI without HI, which is human intelligence,” said Catherine Burch, MS, CXA, CUA, vice president of innovation at ChristianaCare. “You want to help shape the future, not wait for it to shape you.”

How AI helps improve patient care

“AI is incredibly good at reducing noise in images,” said speaker David Lloyd, a technical leader at Amazon, who discussed the use of AI in radiology. “It can detect anomalies, and it can automate radiologist reports, which saves time for radiologists.”

Data informatics is another example of the power of AI to help health professionals determine which patients are at increased risk for falls, malnutrition or recurrent asthma attacks, enabling them to optimize patient health and prevent hospitalizations.

“Some patients with asthma go to the ER repeatedly because their treatment plan isn’t working,” said speaker Vikram Anand, head of data at CareNostics. When patients have uncontrolled asthma, data-rich platforms like CareNostics can provide treating physicians with guidelines and other support to improve patient care, which may lead to evidence-based medication changes or other therapies, he said.

Using robots as part of the health care team in patient homes may sound like science fiction, but speakers discussed the current evolution of consumer robotics, like Amazon’s Astro. Astro follows patients around their home, interacts with them and supports their care. When ChristianaCare tested Astro’s impact on HomeHealth patients, they found that it reduced feelings of isolation by 60%.

“Astro is like Alexa on wheels,” said speaker Pam Szczerba, PT, MPT, CPHQ, director of ChristianaCare’s HomeHealth quality, education and risk management, who studied patients’ experiences with Astro. “People like interacting with Alexa, but they can only interact in the room they’re in. Astro’s mobility lets it go to the patient.”

Based on early successes, health professionals are assessing robots as an extension of clinicians in the home.
Early results show that patients with robots demonstrate improved activation with their care plans. This may lead to more widespread distribution of household robots to newly diagnosed patients to help prevent disease complications, avoidable emergency department visits and re-hospitalizations.

How AI helps ease provider burden

Speakers also discussed the potential of AI to improve health care delivery and patient outcomes by handling more administrative work for health professionals.

“We can reduce some of the redundancy of work to free up time for people to be creative,” said speaker Terrance Bowman, managing director at Code Differently, a company that educates and prepares people to work in technology-driven workplaces.

“AI should be taking the ‘administrivia’ – administrative trivial tasks – out of your life,” said speaker Nate Gach, director of innovation at Independence Blue Cross. “When you want folks to do the creative part of the job that takes brain power, have ChatGPT respond to easy emails.”

Other examples shared included the power of AI to record meetings, create summaries and send participants automated meeting minutes. Benefits can be seen across industries. Specific to health care, eliminating the need for note-taking during visits enables more personalized and attentive provider-patient interaction.

With the evolution of ambient speech apps, clinicians are no longer just dictating notes into the electronic health record. Now AI is listening to the conversation and creating the notes and associated recommendations.

“The physician is no longer spending ‘pajama time’ doing catch-up work at home late into the evening,” said speaker Tyler Flatt, a director and leading expert in AI and digital transformation at Microsoft. “Especially as we’re dealing with burnout, it’s better for patient and physician satisfaction.”

AI may also help caregivers uncover details they hadn’t noticed, helping them diagnose patients with subtle symptoms.
“We feed a large quantity of data and have it suggest commonalities about patients,” said speaker Matthew Mauriello, assistant professor of computer and information sciences at the University of Delaware. “Some things are very insightful, but humans miss them.”

AI has also been used for patient engagement, including chatbots that can assist with tasks like scheduling clinical appointments or acknowledging patient questions.

“One of the things AI is great at is natural language understanding,” said David Lloyd. “You can alleviate a lot of the burden if you have something that can talk to your patients, especially if it’s an administrative task.”

Creating new health innovations

“The key is to think of something you’ve done that’s original and non-obvious,” said Rao, who holds more than 60 patents in AI. “The process of writing about it will help you flesh it out.”

Turning breakthrough ideas into game changers is just the start — protecting these innovations is what ensures they shape the future, rather than fade into the past.

“Keeping it secret and internal to your organization until you know what you want to do with it is important,” said Greg Bernabeo, partner at FisherBroyles, LLP. “Otherwise, the opportunity is lost, and you can’t get the genie back in the bottle.”

Benefits of non-obvious thinking

People who pursue “non-obvious” ideas are often on the cutting edge of technology in and out of health care, said keynote speaker Ben DuPont while discussing innovative ideas with Randy Gaboriault, MS, MBA, senior vice president and chief digital and information officer at ChristianaCare.

“Amazon was not founded by a book retailer; Airbnb was not founded by somebody who was in hospitality,” said DuPont, author, entrepreneur, and co-founder and partner at Chartline Capital Partners venture capital fund.
“Before Uber, the founders were running around Paris and they couldn’t get a taxi.”

Innovative ideas often arise when people consider non-obvious points of view while thinking about solutions, DuPont said. Non-experts have the ability to cut through the clutter and find the frustration, which can lead to innovative solutions - an idea DuPont explores in his book “Non-Obvious Thinking: How to See What Others Miss.” Health providers, for example, may discover ideas when they move out of their comfort zones.

“If you want to be a better doctor, go do something that has nothing to do with medicine,” he said. “Innovation happens at the collision of seemingly unrelated disciplines.”

Diversity in the workplace is necessary, “but it’s not just diversity in the way people look: It’s diversity in how people think,” DuPont said. “There are people that think in dramatic and different ways. We need those people around the table. They might say: ‘If we just move this little thing over here’ … and it starts an avalanche that changes the world.”

Involving the future generation

During the Innovation Summit, students with an interest in STEM (science, technology, engineering and mathematics) from St. Mark’s High School in Wilmington, Delaware, competed against one another at ChristianaCare’s inaugural HealthSpark Challenge™. Twenty-six high school juniors and seniors were divided into five teams, then challenged to brainstorm solutions to address the negative mental health effects of social media on teenagers. Each team created a concept poster and pitched its idea to Summit attendees, who then voted for their favorite solution.

The winning solution, Editing Identifiers, is designed to help minimize negative feelings about body image among teens. It would use AI technology to identify altered photos on social media, with the goal of showing teens that photos of “perfect” people aren’t real and alleviating feelings of body dysmorphia.
Looking forward

Summit speakers highlighted many ways that AI is already incorporated into health care, as well as ways that health tech, AI and robotics may improve care for patients in the coming years.

“We are just scratching the surface,” Rao said. “It’s like laparoscopic surgery – years ago, it was considered experimental or dangerous. Today, surgery is commonly done laparoscopically, with better outcomes and less infection. AI can help identify care gaps and get the right treatment to the right patient. It’s going to be good for the patient.”

In a rapidly evolving landscape, the integration of AI into health care not only enhances patient care but also creates opportunities for innovation and collaboration, said ChristianaCare’s Gaboriault. “As AI continues to advance, the health care industry stands on the brink of a revolution, one where the possibilities are as vast as the data that fuels them.”

Randy Gaboriault, MS, MBA profile photo
Robert Asante, Ed.D., MBA, CISSP, HCISPP profile photo
7 min. read