
UF professor to expand proven disease-prediction dashboard to monitor Gulf threats
After deploying life-saving cholera-prediction systems in Africa and Asia, a University of Florida researcher is turning his attention to the pathogen-plagued waters off Florida’s Gulf Coast.

In the fight to end cholera deaths by 2030, a goal set by the World Health Organization, UF researcher and professor Antar Jutla, Ph.D., has deployed his Cholera Risk Dashboard in about 20 countries, most recently in Kenya. Using NASA and NOAA satellite images and artificial intelligence algorithms, the dashboard is an interactive web interface that pinpoints areas ripe for thriving cholera bacteria. It can predict cholera risk four weeks out, allowing early and proactive humanitarian efforts, medical preparation and health warnings.

Cholera is a bacterial disease spread through contaminated food and water; it causes severe intestinal illness and can be fatal if untreated. The U.S. Centers for Disease Control and Prevention reports between 21,000 and 143,000 cholera deaths each year globally. Make no mistake, existing users contend: the Cholera Risk Dashboard saves lives.

Jutla’s team now wants to set up a similar pathogen-monitoring and disease-prediction system for pathogenic bacteria in the warm, fertile waters of the Gulf of America.

Closer to home

Jutla is seeking funding to develop a pathogen-prediction model that identifies dangerous bacteria in the Gulf and warns people, particularly rescue workers, to use protective gear or avoid contaminated areas. He envisions post-hurricane systems for the Gulf that will help the U.S. Navy, Coast Guard and other rescue workers make informed health decisions before entering the water. And he wants UF to be at the forefront of this technology.
“If we have enough resources, I think within a year we should have a prototype ready for the Gulf,” said Jutla, an associate professor with UF’s Engineering School of Sustainable Infrastructure and Environment. “We want to build that expertise here at UF for the entire Gulf of America.”

Jutla and his co-investigators have applied for a five-year, $4 million NOAA RESTORE grant to study pathogens known as vibrios off Florida’s West Coast and develop the Vibrio Warning System. These vibrios in the Gulf can cause diarrhea, stomach cramps, nausea, vomiting, fever and chills. One alarming example is Vibrio vulnificus, commonly known as flesh-eating bacteria, which often leads to amputations or death. The Centers for Disease Control and Prevention (CDC) has reported increases in vibrio infections in the Gulf region, particularly from 2000 to 2018. The warm and ecologically sensitive Gulf waters provide a thriving habitat for harmful pathogens.

“The grant builds directly on the success of our cholera-prediction system," Jutla noted. "By integrating AI technologies into public health decision-making, we would not only lead the nation but also become self-reliant in understanding the movement of environmentally sensitive pathogens, positioning ourselves as global leaders.”

Learning from preparing early

Jutla’s dashboards are critical tools for global health and humanitarian officials, said Linet Kwamboka Nyang’au, a senior program manager for the Global Partnership for Sustainable Development Data. “Its timeliness, its predictiveness and its ease of access to the right data is a game changer in responding to outbreaks and preventing potentially catastrophic occurrences,” Kwamboka Nyang’au said.

Over the last few years, Jutla and several health and government leaders have been working to deploy the cholera-predictive dashboard.
“Our partnership with UF, the government of Kenya and others on the cholera dashboard is a life-saving mission for high-risk, extremely vulnerable populations in Africa. By predicting potential cholera outbreaks and coordinating multi-stakeholder interventions, we are enabling swift action and empowering local governments and communities to prevent crises before they unfold,” said Davis Adieno, senior director of programs for the Global Partnership for Sustainable Development Data.

The early warnings for waterborne pathogens also allow the United Nations time to issue early assistance to residents in the outbreak’s path, said Juan Chaves-Gonzalez, a program advisor with the United Nations’ Office for the Coordination of Humanitarian Affairs.

“There are several things we do with the money ahead of time. We provide hygiene kits. We repair and protect water sources. We start chlorination, we set up hand-washing stations, and we train and deploy rapid-response teams. At the community level, we try to inject funding to procure rapid-diagnostic tests,” he said. “We identify those very, very specific barriers and put money in organizations’ hands in advance to remove those barriers.”

Eyes on the Gulf

In the United States, hurricanes stir up vibrios in the Gulf, posing a high risk of infection for humans in the water. There has been a nearly 200% increase in these cases over the last 20 years in the U.S., according to the CDC.

“After Hurricane Ian, we saw a very heavy presence of these vibrios in Sarasota Bay and the Charlotte Bay region. Not only that, but they were showing signs of antibiotic resistance. Last year, we had one of the largest numbers of cases of vibriosis in the history of Florida,” Jutla said.

Samples from 2024 hurricanes Helene and Milton are being analyzed with AI and complex bioinformatics algorithms.
“If there is a risky operation by rescue personnel not using personal protective equipment, then we would want them to know there is a significant concentration of these bacteria in the water,” Jutla said. “As an example, Navy divers operating in contaminated waters are at risk of infections from vibrios and other enteric pathogens, which can cause severe gastrointestinal and wound infections.”

Safety and economics

“Exposure to vibrios and other enteric pathogens,” Jutla added, “can disrupt economic activities, particularly in coastal regions that are dependent on tourism and fishing. And vibrios may be considered potential bioterrorism agents due to their ability to cause widespread illness and panic.”

In developing the Vibrio Warning System, Jutla noted, he and his team want to significantly enhance public health safety and preparedness along the Gulf Coast. By leveraging advanced AI technologies, satellite datasets and predictive modeling, they plan to mitigate the risks posed by environmentally sensitive pathogenic bacteria, ensuring timely interventions and safeguarding human health and economic activities.

“Hospital systems and healthcare providers in the Gulf region will have a tool for anticipatory decision-making on where and when to anticipate illness from these environmentally sensitive vibrios, and issue a potential warning to the general public,” he said. “With the potential to become a leader in environmental pathogen prediction, UF stands at the forefront of this critical research, poised to make a lasting impact on local, regional, national and global health and safety.”

AI as IP™: A Framework for Boards, Executives, and Investors
Under current corporate accounting practices, artificial intelligence (AI) companies’ most valuable resources – large language models, training datasets, and algorithms – remain “off the books,” or uncapitalized. As the importance of AI continues to grow in the global knowledge-based economy, financial statements are becoming less representative of a company’s true worth, creating a recognition gap.

In this article, James E. Malackowski, Eric Carnick, and David Ngo present several conceptual frameworks to bridge this gap. They explain how the triangulation of three valuation approaches can reveal both the tangible investment base and the intangible, strategic upside of AI assets. In turn, these approaches provide board-level visibility into where AI capital resides and how it contributes to enterprise value.

James E. Malackowski is the Chief Intellectual Property Officer (CIPO) of J.S. Held and Co-founder of Ocean Tomo, a part of J.S. Held. Mr. Malackowski has served as an expert on over one hundred occasions on intellectual property economics, including valuation, royalty, lost profits, price erosion, licensing terms, venture financing, copyright fair use, and injunction equities. He has substantial experience as a Board Director for leading technology corporations, research organizations, and companies with critical brand management issues.

This article is the second installment in our three-part series, Artificial Intelligence as Intellectual Property or “AI as IP™”, which explores how artificial intelligence assets should be treated as a form of intellectual property and enterprise capital. The first article, “A Strategic Framework for the Legal Profession”, explored the legal foundations for recognizing and protecting AI assets. The upcoming third article, “Guide for SMEs to Classify, Protect, and Monetize AI Assets”, will provide practical steps for small and mid-sized enterprises to turn AI into measurable economic value.

With the MOMitor™ app, Florida mothers have better maternal care right at their fingertips
A program spearheaded by University of Florida physicians recently expanded to improve care for new mothers throughout the state, using tools they have right at home.

Five years ago, a team of obstetricians and researchers at the UF College of Medicine launched MOMitor™, a smartphone app that allows new mothers to answer health screening questions and check vitals like blood pressure in the comfort of their own homes, using tools given to them by their health care providers. Depending on the data, the clinical team can then follow up with patients as needed for further medical intervention.

Now, the app is expanding beyond North Central Florida — where nearly 4,400 mothers have participated in the program — to other areas in the state. Clinicians are also teaming up with data scientists at the College of Medicine who are using artificial intelligence to study data and identify trends that can lead to more personalized care.

Program expansion

Thanks to funding from the Florida Department of Health to support the state’s Telehealth Maternity Care Program, MOMitor™ has recently expanded for use in Citrus, Hernando, Sumter, Flagler, Volusia, Martin, St. Lucie and Okeechobee counties, said Kay Roussos-Ross, M.D. ’02, MPAS ’98, a UF professor of obstetrics/gynecology and psychiatry who is leading the program.

“The Florida Legislature was really motivated and interested in improving maternal morbidity and mortality, and through this program we’re touching additional parts of the state and helping patients beyond North Central Florida,” she said.

Maternal mortality is a serious concern in the United States, with more than 18 deaths recorded per 100,000 births in 2023, according to the latest data available from the U.S. Centers for Disease Control and Prevention. This is a much higher rate than in most other developed countries, Roussos-Ross said.
Common factors that may lead to maternal mortality, which is measured from pregnancy through the first year after giving birth, include infection, mental health conditions, cardiovascular conditions and endocrine disorders. Many of these complications can go unnoticed or unmonitored, particularly if at-risk mothers are not reporting complications to clinicians. A 2025 study published in the Journal of the American Medical Association shows that up to 40% of women do not attend postpartum visits.

“Whereas we’re used to seeing patients pretty routinely during pregnancy, after delivery visits quickly drop off and some women don’t make it back for postpartum care, so we may not have an opportunity to continue supporting them,” Roussos-Ross said. “This can often be because of barriers such as housing, transportation or food insecurity. We offer referrals to help with some of these services.”

With MOMitor™, patients can let their clinician know how they are recovering without visiting the clinic, improving access to care in situations where that is not always an easy option for new mothers.

“It’s a way to be proactive,” Roussos-Ross said. “Instead of waiting for a patient to come to us when they haven’t been doing well for a while, we connect with them through the app and follow up when they initially begin not doing well, so we can address concerns more quickly.”

Studying data to personalize care

Roussos-Ross’ team is collaborating with data scientists from the College of Medicine’s Quality and Patient Safety initiative, or QPSi, to determine how AI can assist in finding ways to further improve processes.
“By leveraging AI, we have the opportunity to target moms and moms-to-be who might be at greater risk of complications, such as developing postpartum depression or hypertension, and encourage them to participate in the program to mitigate these complications,” said Tanja Magoc, Ph.D., the associate director of QPSi’s Artificial Intelligence/Quality Improvement Program.

David Hall, Ph.D., a QPSi data scientist, said his team is working alongside the clinical team to analyze data that can be used to create recommendations for patients. “Everything we do comes from information supported in the patients’ charts,” Hall said. “We also make sure the data upholds compliance standards and protects patients’ privacy.”

The teams aim to intervene before patients encounter postpartum complications, addressing potential issues before they become significant problems. After taking into account a patient’s personal and family medical history, the team looks at information such as geolocation, drilling down to areas much smaller than the ZIP code level in order to find points of potential concern.

“We’re interested in finding out what areas might be hot spots and determining what makes them this way, so we can study these patterns throughout the state and better identify areas where there may be high-risk patients and provide interventions to those who need it most,” Hall said.

Roussos-Ross said she is proud of the work her team has done to improve patient outcomes through the program so far and is excited to empower more patients. “Every year, the participants give us recommendations on how to improve the app, which we love. But they also say, ‘This is so great. It helped me think about myself and not just my baby.
It helped me learn about taking care of my own health. It made me remember I’m important too, and it’s not just about the baby,’” Roussos-Ross said. “And that is so gratifying, because women are willing to do anything to ensure the health of their baby, sometimes at the expense of their own care. This is a way for us to let them know they are still important, and we care about their health as well.”

New AI-powered tool helps students find creative solutions to complex math proofs
Math students may not blink at calculating probabilities, measuring the area beneath curves or evaluating matrices, yet they often find themselves at sea when first confronted with writing proofs. But a new AI-powered tool called HaLLMos — developed by a team led by Professor Vincent Vatter, Ph.D., in the University of Florida Department of Mathematics — now offers a lifeline.

“Some students love proofs, but almost everyone struggles with them. The ones who love them just put in more work,” Vatter said. “It just kind of blows their minds that there’s no single correct answer — that there are many different ways to do this. It’s very different than just doing computational work.”

Building the tool

HaLLMos was developed by Vatter, as principal investigator, along with Sarah Sword, a mathematics education expert at the Education Development Center; Jay Pantone, an associate professor of mathematical and statistical sciences at Marquette University; and Ryota Matsuura, a professor of mathematics, statistics and computer science at St. Olaf College; with grant support from the National Science Foundation. The tool is freely available at hallmos.com.

The team’s goal was to develop an AI tool powered by a large language model that would support student learning rather than short-circuiting it. HaLLMos provides immediate, personalized feedback that guides students through the creative struggle that writing proofs requires, without solving the proofs for them. The tool’s name honors the late Paul Halmos, a renowned mathematician who argued that mathematics is a creative art, akin to how painters work.

Students using HaLLMos can select from classic exercises — such as proving that, for all integers, if the square of an integer is even, then the integer is even — or use “sandbox mode” to enter exercises from any course. Faculty can create exercises and share them with students.
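To give a sense of the kind of exercise involved, the classic problem mentioned above is typically handled by contraposition. A minimal sketch of one standard argument (this is a textbook proof, not output from HaLLMos):

```latex
\textbf{Claim.} For every integer $n$, if $n^2$ is even, then $n$ is even.

\textbf{Proof (by contraposition).} Suppose instead that $n$ is odd, so
$n = 2k + 1$ for some integer $k$. Then
\[
  n^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 2\,(2k^2 + 2k) + 1,
\]
which is odd. So an odd integer always has an odd square; equivalently,
if $n^2$ is even, then $n$ must be even. $\blacksquare$
```

Part of what tools like this must convey is that this is only one of several valid routes; a proof by contradiction of the same claim, for instance, is equally correct.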
Vatter introduced HaLLMos to his students last spring in his “Reasoning and Proof in Mathematics” class, a core requirement for math majors that is often the first time students encounter proofs.

“They could use this tool to try out their proofs before they brought them to me. We try to identify the error in a student’s proof and let them go fix it,” Vatter said. “It is difficult for faculty to devote enough time to working individually with students. Our goal is that this tool will provide the feedback in real time to students in the way we would do it if we were there with them as they construct a proof.”

Helping professors and students excel

“I think every math professor would love to give more feedback to students than we are able to,” Vatter said. “That’s one of the things that inspired this.”

The next steps for Vatter and his colleagues include getting more pilot sites to use the tool and continuing to improve its responses. “We’d like it to be good at any kind of undergraduate mathematics proofs,” he said.

Vatter also intends to explore moving HaLLMos to UF’s HiPerGator, the country's fastest university-owned supercomputer. “It’s our goal to have it remain publicly accessible,” Vatter said.

This research was supported by a grant from the National Science Foundation Division of Undergraduate Education.

A year in the spotlight: University of Delaware’s most notable media mentions of 2025
In 2025, the University of Delaware had many exceptional media mentions. Here are some of the most notable.

Science coverage dominated

Where will the next big hurricane hit? Ask the sharks. (The Washington Post) – Aaron Carlisle, a marine ecologist, was featured for his revolutionary work using sharks to predict major weather events.

Scientists could soon lose a key tool for studying Antarctica's melting ice sheets as climate risks grow (NBC News) – Carlos Moffat, an associate professor and oceanographer, spoke about the national budget and how it's impacting climate research.

These Katrina Survivors Feel Overlooked. Now, They’re Using TikTok to Tell Their Stories (Rolling Stone) – Jennifer Trivedi, a disaster researcher, spoke about why Hurricane Katrina was such a major story.

Malala Yousafzai, Migration and Sustainability (Forbes) – Saleem Ali, a professor of energy and environment, contributed regularly to Forbes on environmental topics.

Scientists went hunting for freshwater deep beneath the Atlantic Ocean. What they found could have global implications (CNN) – Holly Michael, a professor of Earth sciences and civil and environmental engineering, spoke about the history of freshwater.

Engineering Professor Answers Electric Car Questions (WIRED) – Willett Kempton, a professor of engineering, joined WIRED to answer the internet's most interesting questions about electric cars.

Plastic shopping bag policies are actually working, a new study suggests (CNN) – Kimberly Oremus, associate professor of marine science and policy, was featured in several major outlets on the effectiveness of plastic bag bans.

Insects are dying: here are 25 easy and effective ways you can help protect them (The Guardian) – Douglas Tallamy, an entomologist, was featured in dozens of outlets for his expertise.

Political news coverage was front and center

U.S. Chamber of Commerce sues Trump administration over $100,000 H-1B visa fees (NPR) – Daniel Kinderman, a political science professor, was interviewed for his expertise on a lawsuit involving changes in work visas.

The government shutdown is over, but expect more fights and higher insurance prices to come (Delaware Public Media) – David Redlawsk, a political psychologist, discussed the recent government shutdown and what an end to it signals.

Wrestling Over Charlie Kirk’s Legacy and the Divide in America (The New York Times) – Dannagal Young, a communications professor, commented on how media reacted to the death of Charlie Kirk.

Consequences for colleges whose students carry mountains of debt? Republicans say yes (NPR) – Dominique Baker, associate professor of education, was quoted in multiple national outlets for her education expertise.

General expertise came in clutch

Why the U.S. struggles with passenger service despite having the most rail lines (NPR) – Allan Zarembski, a professor of railroad engineering, was featured in dozens of national publications for his expertise.

From folklore to your front porch: The history of the jack-o'-lantern (NPR) – Cindy Ott, an associate professor of history, detailed the history of this autumn staple in multiple outlets.

Nexstar Media Group buying Tegna in deal worth $6.2 billion (AP) – Danilo Yanich, professor of public policy, noted the ways the media giant duplicates work across networks.

Warren Buffett hired Todd Combs to take over Berkshire's portfolio one day. Here's what close watchers say about his surprise exit. (Business Insider) – Lawrence Cunningham, director of UD's Weinberg Center, was featured throughout the year for his business and economic expertise.

Enlighten Me: How to make your holidays truly happy (Delaware Public Media) – Amit Kumar, a professor of marketing, discussed strategies for finding happiness during the holidays throughout the winter season.
Students and their stories shined throughout the year

Networking: Is it what you know or who you know? (The Chronicle of Higher Education) – UD's career-development office, which assists students on their job journeys, was featured.

U of Delaware Creates Yearlong Co-Ops for Business Students (Inside Higher Ed) – A new partnership with the state of Delaware that connects business students to local employers, with the goal of reducing brain drain in the region, was featured.

Wilmington’s 'STEM Queen' earns national Obama–Chesky honor (The News Journal/Delaware Online) – Jacqueline Means, a management information systems major, was featured for earning a national recognition.

Vita Nova Restaurant Gives Culinary Students Hands-on Training (Delaware Today) – The student-staffed restaurant, Vita Nova, was featured.

Delaware professor transforms writing class by teaching students to use AI as the technology reshapes the workforce (WHYY) – Matt Kinservik, a professor of English, was featured for teaching students to use AI responsibly, exploring its capabilities and fact-checking tools.

Pop culture experts weighed in

'Stranger Things' expert at UD chats about Netflix show's appeal (The News Journal/Delaware Online) – Siobhan Carroll, an associate English professor, sat down with a reporter to discuss the latest season and how the horror genre is often a mirror of our real world.

“Horrendous And Insulting”: Backlash Erupts Over “Misrepresentation” In 2026 Wuthering Heights (Bored Panda) – Thomas Leitch, an English professor, said that “literal adaptations of classic novels are exceedingly rare, maybe impossible.”

Major changes at UD highlighted

University of Delaware appoints interim president to the permanent post (The Philadelphia Inquirer) – News of UD's new president, Laura A. Carlson, was covered throughout the region.
Retiree learning center gets boost with $1M gift for downstate OLLI classes (Spotlight Delaware) – A large donation to the southern Delaware chapter of the Osher Lifelong Learning Institute was featured.

To speak with any of these experts in 2026 on these stories or others, please reach out to MediaRelations@udel.edu. Happy holidays and cheers for a bright and healthy new year!

With OpenAI’s latest release, GPT-5.2, AI has crossed an important threshold in performance on professional knowledge-work benchmarks. Peter Evans, Co-Founder & CEO of ExpertFile, outlines how these technologies will fundamentally improve research communications and shares tips and prompts for PR pros.

OpenAI has just launched GPT-5.2, describing it as its most capable AI model yet for professional knowledge work — with significantly improved accuracy on tasks like creating spreadsheets, building presentations, interpreting images, and handling complex multistep workflows. And based on our internal testing, we're really impressed.

For communications professionals in higher education, non-profits, and R&D-focused industries, this isn’t just another tech upgrade — it’s a meaningful step forward in addressing the “research translation gap” that can slow storytelling and media outreach. According to OpenAI, GPT-5.2 represents measurable gains on benchmarks designed to mirror real work tasks. In many evaluations, it matches or exceeds the performance of human professionals.

Also, before you hit reply with “Actually, the best model is…” — yes, we know. GPT-5.2 isn’t the only game in town, and it’s definitely not the only tool we use. Our ExpertFile platform uses AI throughout, and I personally bounce between Claude 4.5, Gemini, Perplexity, NotebookLM, and more specialized models depending on the job to be done. LLM performance right now is a full-contact horserace — today’s winner can be tomorrow’s “remember when” — so we’re not trying to boil the ocean with endless comparisons.

We’re spotlighting GPT-5.2 because it marks a meaningful step forward in the exact areas research comms teams care about: reliability, long-document work, multi-step tasks, and interpreting visuals and data. Most importantly, we want this info in your hands because a surprising number of comms pros we meet still carry real fear about AI — and long term, that’s not a good thing.
Used responsibly, these tools can help you translate research faster, find stronger story angles, and ship more high-quality work without burning out.

When "Too Much" AI Power Might Be Exactly What You Need

AI expert Allie K. Miller's candid but positive review of an early testing version of GPT-5.2 highlights what she sees as drawbacks for casual users: "outputs that are too long, too structured, and too exhaustive." She goes on to say that in her tests, she observed that GPT-5.2 "stays with a line of thought longer and pushes into edge cases instead of skating on the surface."

Fair enough; those are all good points. However, for communications professionals, these so-called "downsides" for casual users are precisely the capabilities we need. When you're assessing complex research and developing strategic messaging for a variety of important audiences, you want an AI that fits Miller's observation that GPT-5.2 feels like "AI as a serious analyst" rather than "a friendly companion." That's not a critique of our world; it's a job description for comms pros working in sectors like higher education and healthcare. Deep research tools that refuse to take shortcuts are exactly what research communicators need.

So let's talk more specifically about how comms pros can think about these new capabilities:

1. AI Is Your New Speed-Reading Superpower for Research

You can upload an entire NIH grant, a full clinical trial protocol, or a complex environmental impact study and ask the model to highlight where key insights — like an unexpected finding — are discussed. It can do this in a fraction of the time it would take a human reader. This isn't about being lazy. It's about using AI to assemble the tedious information you need to craft compelling stories while other teams are still parsing dense text manually.

2. The Chart Whisperer You've Been Waiting For

We've all been there — squinting at a graph of scientific data that looks like abstract art, waiting for the lead researcher to clarify what those error bars actually mean. GPT-5.2 shows stronger performance on multimodal reasoning tasks involving scientific figures and charts, indicating a better ability to interpret and describe visual information like graphs and diagrams. With these capabilities, you can unlock the data behind visuals and turn them into narrative elements that resonate with audiences.

3. A Connection Machine That Finds Stories Where Others See Statistics

Great science communication isn't about dumbing things down — it's about building bridges between technical ideas and the broader public. GPT-5.2 shows notable improvements in abstract reasoning compared with earlier versions, based on internal evaluations on academic reasoning benchmarks. For example, teams working on novel materials science or emerging health technologies can use this reasoning capability to highlight connections between technical results and real-world impact — something that previously required hours of interpretive work. These gains help the AI spot patterns and relationships that can form the basis of compelling storytelling.

4. Accuracy That Gives You More Peace of Mind...When Coupled With Human Oversight

Let's address the elephant in the room: AI hallucinations. You've probably heard the horror stories — press releases that cited a study that didn't exist, or a "quote" that was never said by an expert. GPT-5.2 has meaningfully reduced error rates compared with its predecessor, according to OpenAI. Even with all these improvements, human review with your experts and careful editing remain essential, especially for anything that will be published or shared externally.

5. The Speed Factor: When "Urgent" Actually Means Urgent

With the speed of media today, being second often means being irrelevant. GPT-5.2's performance on workflow-oriented evaluations suggests it can synthesize information far more quickly than manual review, freeing up more time for strategic work. Deeper reasoning and longer contexts — the kinds of tasks that matter most in research translation — do require more processing time, though costs continue to improve. Savvy communications teams will adopt a tiered approach: using faster models for simple tasks such as social posts and routine responses, and using reasoning-optimized settings for deep research.

Your Action Plan: The GPT-5.2 Playbook for Comms Pros

Here's a tactical checklist to help your team capitalize on these advances.

#1 Select the Right AI Model for the Job: Lower Time and Costs

• Use fast, general configurations for routine content
• Use reasoning-optimized configurations for complex synthesis and deep document understanding
• Use higher-accuracy configurations for high-stakes projects

#2 Find Hidden Ideas Beyond the Abstract: Let Deeper Reasoning Models Do the Heavy Work

• Upload complete PDFs — not just the 2-page summary you were given
• Use deeper reasoning configurations to let the model work through the material

Try these prompts in GPT-5.2:

"What exactly did the researchers say about this unexpected discovery that would be of interest to my <target audience>? Provide quotes and page references where possible."

"Identify and explain the research methodology used in this study, with references to specific sections."

"Identify where the authors discuss limitations of the study."

"Explain how this research may lead to further studies or real-world benefits, in terms relatable to a general audience."

#3 Unlock Your Story: Leverage Improvements in Pattern Recognition and Reasoning
Try these prompts: “Using abstract reasoning, find three unexpected analogies that explain this complex concept to a general audience.” “What questions could the researchers answer in an interview that would help us develop richer story angles?” #4 Change the Way You Write Captions Take advantage of the way GPT-5.2 processes and reasons about images, charts, diagrams, and other visuals far more effectively. Try these prompts: Clinical Trial Graphs: “Analyze this uploaded clinical trial results graph. Identify key trends and comparisons to controls, then draft a 150-word summary with plain-language explanations and suggested captions suitable for donor communications.” Medical Diagrams: “Interpret these uploaded images. Extract diagnostic insights, highlight innovations, and generate a patient-friendly explainer: bullet points plus one visual caption.” A Word of Caution: Keep Experts in the Loop to Verify Information Even with improved reliability, outputs should be treated as drafts. If your team does not yet have formal AI use policies, it’s time to get started, because governance will be critical as AI use scales in 2026 and beyond. A trust-but-verify policy with experts treats AI as a co-pilot — helpful for heavy lifting — while humans remain accountable for approval and publication. The Importance of Humans (aka The Good News) Remember: the future of research communication isn’t about AI taking over — it’s about AI empowering us to do the strategic, human work that machines cannot. That includes: • Building relationships across your institution • Engaging researchers in storytelling • Discovering narrative opportunities • Turning discoveries into compelling narratives that influence audiences With improvements in speed, reasoning, and reliability, the question isn’t whether AI can help — it’s what research stories you’ll uncover next to shape public understanding and impact.
FAQ How is AI changing expectations for accuracy in research and institutional communications? AI is shifting expectations from “fast output” to defensible accuracy. Better reasoning means fewer errors in research summaries, policy briefs, and expert content—especially when you’re working from long PDFs, complex methods, or dense results. The new baseline is: clear claims, traceable sources, and human review before publishing. ⸻ Why does deeper AI reasoning matter for communications teams working with experts and research content? Comms teams translate multi-disciplinary research into messaging that must withstand scrutiny. Deeper reasoning helps AI connect findings to real-world relevance, flag uncertainty, and maintain nuance instead of flattening meaning. The result is work that’s easier to defend with media, leadership, donors, and the public—when paired with expert verification. ⸻ When should communications professionals use advanced AI instead of lightweight AI tools? Use lightweight tools for brainstorming, social drafts, headlines, and quick rewrites. Use advanced, reasoning-optimized AI for high-stakes deliverables: executive briefings, research positioning, policy-sensitive messaging, media statements, and anything where a mistake could create reputational, compliance, or scientific credibility risk. Treat advanced AI as your “analyst,” not your autopilot. ⸻ How can media relations teams use AI to find stronger story angles beyond the abstract? AI can scan full papers, grants, protocols, and appendices to surface where the real story lives: unexpected findings, practical implications, limitations, and unanswered questions that prompt great interviews. Ask it to map angles by audience (public, policy, donors, clinicians) and to point to the exact sections that support each angle. ⸻ How should higher-ed comms teams use AI without breaking embargoes or media timing? 
AI can speed prep work—backgrounders, Q&A, lay summaries, caption drafts—before embargo lifts. The rule is simple: treat embargoed material like any sensitive document. Use approved tools, restrict sharing, and avoid pasting embargoed text into unapproved systems. Use AI to build assets early, then finalize post-approval at release time. ⸻ What’s the best way to keep faculty “in the loop” while still moving fast with AI? Use AI to produce review-friendly drafts that reduce load on researchers: short summaries, suggested quotes clearly marked as drafts, and a checklist of claims needing verification (numbers, methods, limitations). Then route to the expert with specific questions, not a wall of text. This keeps approvals faster while protecting scientific accuracy and trust. ⸻ How should teams handle charts, figures, and visual data in research communications? AI can turn “chart confusion” into narrative—if you prompt for precision. Ask it to identify trends, group comparisons, and what the figure does not show (limitations, missing context). Then verify with the researcher, especially anything involving significance, controls, effect size, or causality. Use the output to write captions that are accurate and accessible. ⸻ Do we need an AI Use policy in comms and media relations—and what should it include? Yes—because adoption scales faster than risk awareness. A practical policy should define: approved tools, what data is restricted, required human review steps, standards for citing sources/page references, rules for drafting quotes, and escalation paths for sensitive topics (health, legal, crisis). Clear guardrails reduce fear and prevent preventable reputational mistakes. If you’re using AI to move faster on research translation, the next bottleneck is usually the same one for many PR and Comm Pros: making your experts more discoverable in Generative Search, your website, and other media. 
ExpertFile helps media relations and digital teams organize their expert content by topics, keep detailed profiles current, and respond faster to source requests—so you can boost your AI citations and land more coverage with less work. For more information visit us at www.expertfile.com

Artificial intelligence is a resource-intensive technology. A paper recently published in Nano Letters by collaborators at the Virginia Commonwealth University (VCU) College of Engineering and Georgetown University aims to improve AI’s ability to parse the vast amounts of information it creates by applying magneto-ionics to the established concept of physical reservoir computing (PRC). “Demonstrating we can make solid-state devices with magneto-ionic materials is an important step into further energy-efficient computing research, and this Nano Letters publication reinforces that,” said Muhammad (Md.) Mahadi Rajib, Ph.D., a postdoc with Jayasimha Atulasimha, Ph.D., Engineering Foundation Professor in the Department of Mechanical & Nuclear Engineering. What makes a decision? Our brains make countless complex decisions every day. Input comes in, we weigh options and decide what to do. Within that simple path are countless identical loops of input, consideration and output as neurons fire in a chain that takes you from cause to effect. For artificial intelligence, nodes within a neural network receive inputs and provide outputs, much like the neurons in our brains. These outputs can be sent to other nodes for continued processing, but those outputs need weight to have value. For AI, weight signifies that one input or connection is more important than another. Traditional neural networks have multiple layers consisting of countless nodes like this. Each node requires training in order to weigh things properly. Training consumes processing power, and processing power takes time and energy. Making tasks like analysis and prediction more efficient is key to continuously improving AI technology. Less training, more efficiency. Physical reservoir computing reduces the number of nodes an AI needs to train. In PRC, only the final output layer needs training, using a simple method for classification or prediction tasks.
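The node-and-weight idea described above can be made concrete in a few lines. This is a generic illustration, not the VCU team's code: a single node computes a weighted sum of its inputs plus a bias, then applies a nonlinear activation (a sigmoid here, chosen for familiarity). The weights are exactly what training adjusts.

```python
import math

def node(inputs, weights, bias):
    """A single neural-network node: a weighted sum of inputs
    passed through a nonlinear activation (sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs; the second weight is much larger, so the second
# input matters far more to this node's output.
out = node([0.5, 0.8], weights=[0.1, 2.0], bias=-0.5)
print(round(out, 3))
```

Training a full network means tuning every weight in every such node, which is where the processing cost comes from.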
In PRC, a physical “black box” replaces the neural network nodes and synapses used for AI inference, processing inputs by implementing a nonlinear mathematical function with temporal memory. To explain the inner workings of the black box, imagine two stones thrown into still water. One stone is thrown with high force and the other with low force, creating big and small ripples, respectively. If the stones are thrown so the second stone lands before the previous ripples have dissipated, the new ripple is affected by the earlier one. This illustrates the concept of temporal memory. In this analogy, if multiple stones are thrown one after another into still water according to some complex trend, observing the ripples over time allows you to understand the trend and train a simple set of weights to predict the force of the next stone throw from the ripple pattern. Repeatedly performing this cycle of input, interaction and observation is PRC. It reveals patterns over time that can predict chaotic systems, like market trends or the weather, using techniques like linear regression modeling to plot each output as a single point. The magneto-ionic approach. Returning to the analogy, the “water” in a magneto-ionic PRC is a positive and a negative electrode with a solid-state electrolyte between them, through which ions move when voltage is applied. Applying a voltage is equivalent to throwing a stone, and the ripple effect is comparable to the movement of oxygen ions in the system. “In addition to its energy efficiency, a useful feature of the magnetoionic system is that time scales for ion diffusion can be controlled from microseconds to minutes,” Atulasimha said. “This leads to simple experimental demonstration, as no megahertz and gigahertz measurements are needed.
One can work at the natural time scales of the target application in practical systems and remove the need for complex frequency conversion, which takes both energy and space due to complex electronics.” Atulasimha imagines these energy-efficient reservoir systems have applications in edge computing devices like drones, automated vehicles and surveillance cameras. Tasks such as household energy load forecasting, weather prediction or processing hourly readings from wearable devices, which operate on hour-scale data, can also be performed using magneto-ionic PRC without additional preprocessing. “We showed that the magneto-ionic physical reservoir has both memory and nonlinear behavior, two important properties necessary for using it as a reservoir block,” Rajib said. “Our system stands out because voltage-controlled ion migration is a highly energy efficient method of manipulating magnetization. We demonstrated the required reservoir properties in a physical system and did so using a very energy efficient approach.” Two labs came together in order to pursue this research. Virginia Commonwealth University collaborators included Atulasimha, Rajib, and VCU Ph.D. students Fahim Chowdhury and Shouvik Sarker. The Georgetown University team included Kai Liu, Ph.D., Professor and McDevitt Chair in Physics, Dhritiman Bhattacharya, Ph.D., Christopher Jensen, Ph.D. and Gong Chen, Ph.D. Atulasimha’s group illustrated physical reservoir computing using numerical models of spintronic devices and sought a material system to experimentally demonstrate PRC. Liu’s team worked with magneto-ionic materials and was intrigued by the possibility of using them for computing applications.
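The ripple analogy maps naturally onto a toy software reservoir. The sketch below is a generic echo-state-style illustration, not the team's magneto-ionic device: a fixed, untrained nonlinear state with decaying memory stands in for the ripples, and only a simple linear readout is trained (by ordinary least squares) to predict the force of the next "stone throw." The leak rate, input scaling, and washout length are illustrative assumptions.

```python
import math

def reservoir_states(inputs, leak=0.5):
    """Fixed, untrained 'ripple tank': each input perturbs a decaying
    nonlinear state, so the state carries temporal memory."""
    s, states = 0.0, []
    for u in inputs:
        s = (1 - leak) * s + leak * math.tanh(2.0 * u)
        states.append(s)
    return states

def fit_readout(xs, ys):
    """The only trained part: a 1-D least-squares linear readout y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

# Alternating "stone throws"; predict the next throw from the current ripple state.
u = [0.1, 0.9] * 20
states = reservoir_states(u)
washout = 10                        # discard the initial transient
xs, ys = states[washout:-1], u[washout + 1:]
a, b = fit_readout(xs, ys)
mse = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(mse)                          # near zero: the readout recovers the pattern
```

The point of the sketch is the division of labor: the "physics" (here, a two-line update rule) does the heavy nonlinear processing for free, and training touches only the final readout.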

UF team develops AI tool to make genetic research more comprehensive
University of Florida researchers are addressing a critical gap in medical genetic research — ensuring it better represents and benefits people of all backgrounds. Their work, led by Kiley Graim, Ph.D., an assistant professor in the Department of Computer & Information Science & Engineering, focuses on improving human health by addressing "ancestral bias" in genetic data, a problem that arises when most research is based on data from a single ancestral group. This bias limits advancements in precision medicine, Graim said, and leaves large portions of the global population underserved when it comes to disease treatment and prevention. To solve this, the team developed PhyloFrame, a machine-learning tool that uses artificial intelligence to account for ancestral diversity in genetic data. With funding support from the National Institutes of Health, the goal is to improve how diseases are predicted, diagnosed, and treated for everyone, regardless of their ancestry. A paper describing the PhyloFrame method and how it showed marked improvements in precision medicine outcomes was published Monday in Nature Communications. Graim, a member of the UF Health Cancer Center, said her inspiration to focus on ancestral bias in genomic data evolved from a conversation with a doctor who was frustrated by a study's limited relevance to his diverse patient population. This encounter led her to explore how AI could help bridge the gap in genetic research. “If our training data doesn’t match our real-world data, we have ways to deal with that using machine learning. They’re not perfect, but they can do a lot to address the issue.” —Kiley Graim, Ph.D., an assistant professor in the Department of Computer & Information Science & Engineering and a member of the UF Health Cancer Center “I thought to myself, ‘I can fix that problem,’” said Graim, whose research centers around machine learning and precision medicine and who is trained in population genomics. 
“If our training data doesn’t match our real-world data, we have ways to deal with that using machine learning. They’re not perfect, but they can do a lot to address the issue.” By leveraging data from population genomics database gnomAD, PhyloFrame integrates massive databases of healthy human genomes with the smaller datasets specific to diseases used to train precision medicine models. The models it creates are better equipped to handle diverse genetic backgrounds. For example, it can predict the differences between subtypes of diseases like breast cancer and suggest the best treatment for each patient, regardless of patient ancestry. Processing such massive amounts of data is no small feat. The team uses UF’s HiPerGator, one of the most powerful supercomputers in the country, to analyze genomic information from millions of people. For each person, that means processing 3 billion base pairs of DNA. “I didn’t think it would work as well as it did,” said Graim, noting that her doctoral student, Leslie Smith, contributed significantly to the study. “What started as a small project using a simple model to demonstrate the impact of incorporating population genomics data has evolved into securing funds to develop more sophisticated models and to refine how populations are defined.” What sets PhyloFrame apart is its ability to ensure predictions remain accurate across populations by considering genetic differences linked to ancestry. This is crucial because most current models are built using data that does not fully represent the world’s population. Much of the existing data comes from research hospitals and patients who trust the health care system. This means populations in small towns or those who distrust medical systems are often left out, making it harder to develop treatments that work well for everyone. 
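The training-data mismatch Graim describes can be seen in a deliberately tiny, hypothetical example. This is not PhyloFrame, and the biomarker numbers are invented: a one-dimensional classifier fit on one population fails on a second population whose healthy baseline is shifted, while a population-aware fit restores accuracy.

```python
def fit_threshold(values, labels):
    """Pick the cutoff that best separates label 0 (healthy) from
    label 1 (disease) on the training data."""
    best_t, best_acc = None, -1.0
    for t in sorted(values):
        acc = sum((v >= t) == bool(y) for v, y in zip(values, labels)) / len(values)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, values, labels):
    return sum((v >= t) == bool(y) for v, y in zip(values, labels)) / len(values)

# Hypothetical biomarker values. Population B carries the same disease
# signal, but at a shifted healthy baseline.
a_vals, a_lbls = [0.9, 1.0, 1.1, 1.9, 2.0, 2.1], [0, 0, 0, 1, 1, 1]
b_vals, b_lbls = [2.4, 2.5, 2.6, 3.4, 3.5, 3.6], [0, 0, 0, 1, 1, 1]

t_a = fit_threshold(a_vals, a_lbls)       # trained on population A only
print(accuracy(t_a, a_vals, a_lbls))      # perfect on the group it was trained on
print(accuracy(t_a, b_vals, b_lbls))      # coin-flip on population B

# A population-aware model fits each group against its own baseline.
t_b = fit_threshold(b_vals, b_lbls)
print(accuracy(t_b, b_vals, b_lbls))      # accuracy restored
```

The real problem is far higher-dimensional, but the failure mode is the same: a model tuned to one group's baseline can be systematically wrong for another.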
She also estimated that 97% of the sequenced samples are from people of European ancestry, largely due to national and state-level funding and priorities, but also due to socioeconomic factors that snowball at different levels – insurance affects whether people get treated, for example, which affects how likely they are to be sequenced. “Some other countries, notably China and Japan, have recently been trying to close this gap, and so there is more data from these countries than there had been previously, but still nothing like the European data,” she said. “Poorer populations are generally excluded entirely.” Thus, diversity in training data is essential, Graim said. “We want these models to work for any patient, not just the ones in our studies,” she said. “Having diverse training data makes models better for Europeans, too. Having the population genomics data helps prevent models from overfitting, which means that they’ll work better for everyone, including Europeans.” Graim believes tools like PhyloFrame will eventually be used in the clinical setting, replacing traditional models to develop treatment plans tailored to individuals based on their genetic makeup. The team’s next steps include refining PhyloFrame and expanding its applications to more diseases. “My dream is to help advance precision medicine through this kind of machine learning method, so people can get diagnosed early and are treated with what works specifically for them and with the fewest side effects,” she said. “Getting the right treatment to the right person at the right time is what we’re striving for.” Graim’s project received funding from the UF College of Medicine Office of Research’s AI2 Datathon grant award, which is designed to help researchers and clinicians harness AI tools to improve human health.
LSU Experts Break Down Artificial Intelligence Boom Behind Holiday Shopping Trends
Consumers are increasingly turning to artificial intelligence tools for holiday shopping—especially Gen Z shoppers, who are using platforms like ChatGPT and social media not only for gift inspiration but also to find the best prices. Andrew Schwarz, professor in the LSU Stephenson Department of Entrepreneurship & Information Systems, and Dan Rice, associate professor and Director of the E. J. Ourso College of Business Behavioral Research Lab, share their insights on this emerging trend. AI is the new front door for search: Schwarz: We’re seeing a fundamental change in how consumers find information. Instead of browsing multiple pages of results, users—especially Gen Z—are skipping to conversational AI for curated answers. That dramatically shortens the shopping journey. For years, companies optimized for SEO to appear on the first page of Google; now they’ll have to think about how their products surface in AI-generated recommendations. This may lead to a new form of “AIO”—AI Information Optimization—where retailers tailor product descriptions, metadata, and partnerships specifically for AI visibility. The companies that adapt early will have a distinct advantage in capturing consumer attention. Rice: This issue of people being satisfied with the AI results (like a summary at the top of the Google results) and then not clicking on any of the paid or organic links leads to a huge increase in what we call “zero click search” (for obvious reasons). For some providers, this is leading to significant drops in web traffic from search results, which can be disconcerting due to the potential loss of leads. However, to Andrew’s point of shortening the journey, it means that the consumers who do come through are much more likely to buy (quickly) because they are “better” leads. 
This translates to seemingly paradoxical situations for providers: they see drops in click-through rates and visitors/leads, yet revenue increases because the visitors are “better.” There is a rise in personalized shopping journeys: Schwarz: AI essentially acts as a personal shopper—one that can instantly analyze preferences, budget, personality traits, or past behavior to produce tailored gift lists. This shifts power toward “delegated decision-making,” in which consumers allow AI to narrow their choices. Younger consumers are already comfortable outsourcing this cognitive load. However, as ads enter the picture, these personalized journeys could be shaped by incentives that aren’t always transparent. That creates a new responsibility for platforms to disclose when suggestions are sponsored and for users to develop a more critical lens when interacting with AI-driven recommendations. Rice: This is also a great point. The “tools” marketers use to attract customers are constantly evolving, but this seems in many ways to be the next iteration of the Amazon.com suggestions that you find at the bottom of the product page for something you click on when searching Amazon (“buy all x for $” or “consumers also looked at…,” etc.), based on past histories of search and purchase, etc. One of the main differences is that you can now create virtually limitless ways to compare products, making comparisons less taxing (reducing cognitive load and stress), which may, in some cases, increase the likelihood of purchase. These idiosyncratic comparisons and prompts lead to the truly unique journeys Andrew is discussing. You no longer have to be beholden to a retailer-specified price range. You could choose your own, or instead ask an AI to list the products representing the best “value” based on consumer reviews, perhaps by asking to list the top ten products by cost per star rating, etc. 
Advertising is becoming more subtle and conversational: Schwarz: With ads woven directly into AI responses, the traditional boundary between content and advertising blurs. Instead of banner ads, pop-ups, or clearly labeled sponsored posts, recommendations in a conversational thread may feel more like advice than marketing. This has enormous implications for consumer trust. Retailers will likely see higher engagement through these context-aware ad placements, but regulatory scrutiny may also increase as policymakers evaluate how clearly sponsored content is identified. The risk is that advertising becomes invisible—something both platform designers and regulators will need to monitor carefully. Rice: This is definitely true. I was recently exploring an AI-based tool for choosing downhill skis, but the tool was subtly provided by a single ski brand. I’m not sure the distribution of ski brands covered was truly delivering the “best overall fit” for a potential buyer, rather than the best possible ski in that brand. At least in that case, it was somewhat disclosed. It does, however, become an issue if consumers feel misled, but they’d have to notice it first. Still, the advantages are big for retailers, and the numbers don't lie. According to some preliminary Black Friday data, shoppers using an AI assistant were 60% more likely to make a purchase. Schwarz: This shift is going to reshape multiple layers of the retail ecosystem: Retailers will need to rethink how they show up in AI-driven environments. Traditional SEO, ad bids, and social media strategies won’t be enough. Partnerships with AI platforms may become as important as being carried by major retailers today. Because AI tools can instantly compare prices across dozens of retailers, consumers will become more price-sensitive. Retailers may face increasing pressure to offer competitive pricing or unique value propositions, as AI reduces friction in comparison shopping. 
Retailers who integrate AI into their own websites—chat-based shopping assistants, personalized gift advisors, automated bundling—will gain an edge. Consumers increasingly expect conversational interfaces, and companies that delay will quickly feel outdated. As AI tools influence purchasing decisions, consumers and regulators alike will demand clarity around how recommendations are generated. Retailers will need to navigate this carefully to maintain trust. What I think we are going to see accelerate as we move forward: AI-powered concierge shopping will become mainstream. Within a couple of years, using AI to generate shopping lists, compare prices, and find deals will be as common as using Amazon today. Retailers will create AI-specific marketing strategies. Instead of optimizing for keywords, they’ll optimize for prompts: how consumers might ask for products and how an AI system interprets those requests. More platforms will introduce advertising into AI models. ChatGPT is simply the first mover. Once the revenue potential becomes clear, others will follow with their own ad integrations. Greater scrutiny from policymakers. As conversational advertising grows, transparency rules and labeling requirements will almost certainly follow. A new era of “conversational commerce.” Buying directly through AI—“ChatGPT, order this for me”—will become increasingly common, merging search, recommendation, and transaction into a single seamless experience. I can speak to this on a personal level. My college-aged son is interested in college football, and I wanted to get him a streaming subscription to watch the games. However, the football landscape is fragmented across multiple, expensive platforms. I asked ChatGPT to generate a series of options. Hulu is $100/month for Live TV, but ChatGPT recommended a combination of ESPN+, Peacock, and Paramount+ for $400/year and identified which conferences would not be covered. What would have taken me hours only took me a few minutes!
Rice: On the other hand, AI isn’t infallible, and it can lead to sub-optimal results, hallucinations, and questionable recommendations. From my recent ski shopping experience, I encountered several pitfalls. First, for very specific questions about a specific model, I sometimes received answers for a different ski model in the same brand, or for a different ski altogether, neither of which was particularly helpful, or specs I knew were just plain wrong. Second, regarding Andrew’s point about the conversational tone, I asked questions intended to push the limits of what could be considered reliable. For example, I asked the AI to describe the difference in “feel” of the ski for the skier among several models and brands. While the AI gave very detailed and plausible comparisons that were very much like an in-store discussion with a salesperson or area expert, I’m not sure I fully trust an AI that tells me you can really feel the power of a ski push you out of a turn, that this ski has great edge hold, and so on. It sounds great, but where is the AI sourcing this information? I’m not convinced it’s fully accurate. It also seems we’re starting to see Google shift toward a more AI-centric approach (e.g., AI summaries and full AI Mode). At the same time, we’re also starting to see AI migrate closer to Google as people use it for product-related chats, and companies like Amazon and Walmart have developed their own AI that is specifically focused on the consumer experience. I can’t imagine it will be long before companies like OpenAI and their competitors start “selling influence” in AI discussions to monetize the influence their engines will have.

AI Can’t Replace Therapists – But It Can Help Them
For a young adult who is lonely or just needs someone to talk to, an artificial intelligence chatbot can feel like a nonjudgmental best friend, offering encouragement before an interview or consolation after a breakup. AI’s advice seems sincere, thoughtful and even empathic – in short, very human. But when a vulnerable person alludes to thoughts of suicide, AI is not the answer. Not by itself, at least. Recent stories have documented the heartbreak of people dying by suicide after seeking help from chatbots rather than fellow humans. In this way, the ethos of the digital world – sometimes characterized as “move fast and break things” – clashes with the health practitioners’ oath to “first, do no harm.” When humans are being harmed, things must change. As a researcher and licensed therapist with a background in computer science, I am interested in the intersection between technology and mental health, and I understand the technological foundations of AI. When I directed a counseling clinic, I sat with people in their most vulnerable moments. These experiences prompt me to consider the rise of therapy chatbots through both a technical and clinical lens. AI, no matter how advanced, lacks the morality, responsibility and duty of care that humans carry. When someone has suicidal thoughts, they need human professionals to help. With years of training before we are licensed, we have specific ethical protocols to follow when a person reveals thoughts of suicide. Read the full article from US News & World Report here






