

UC Irvine’s Daniele Piomelli provides expert view on federal reclassification of cannabis

As the White House moves to reclassify cannabis under federal law from Schedule I to Schedule III, questions remain about how the change could affect medical use, public health, research and regulation. UC Irvine’s Daniele Piomelli, PhD, an internationally recognized cannabis researcher, is available to comment on the implications of the policy shift.

Piomelli is a distinguished professor of anatomy and neurobiology at the University of California, Irvine, the Louise Turner Arnold Chair in the Neurosciences, and director of the UCI Center for the Study of Cannabis. He has more than 30 years of experience studying cannabis, THC and the endocannabinoid system, with research spanning basic neuroscience, pharmacology and translational science. He is editor-in-chief of Cannabis and Cannabinoid Research and has testified before the U.S. Senate on cannabis-related research and policy.

He can provide perspective on:
• What federal reclassification may change for medical cannabis and scientific research
• Differences between THC, CBD and other cannabinoids
• Potential public health benefits and risks of cannabis legalization
• Cannabis exposure and the developing brain, including during adolescence
• Regulatory and research challenges tied to cannabis policy

Piomelli is available for interviews or background conversations. Email: piomelli@hs.uci.edu


Tales of Christmas Past: Preserving Your Family History During the Holidays

During past family Christmas gatherings, many of us remember older relatives regaling everyone with tales of their fascinating life stories, firsthand experiences as eyewitnesses to history, or simply how favorite family traditions started. So how do you preserve those precious family memories during the holidays?

Baylor University oral historians Stephen Sloan and Adrienne Cain Darough have recorded and preserved the oral history memoirs of thousands of individuals through their work with Baylor’s renowned Institute for Oral History, home of the national Oral History Association. Together, the historians share seven simple best practices to help family members begin oral history conversations that enrich recollections of the past and capture your family memories.

“The holiday season brings about the opportunity to spend time with family members, especially those you may not be able to see on a frequent basis,” Cain Darough said. “This presents the perfect opportunity to conduct oral histories to capture the stories and experiences of your family and loved ones, to learn more about them, the history of your family, traditions that have been passed down from generation to generation and more.”

Seven best practices for preserving your family’s oral history

1. Ask first! Make sure your family member wants their story to be documented or recorded. That is the first – and most important – question to ask, said Adrienne Cain Darough, M.L.S., assistant director and senior lecturer with the Institute for Oral History. “Many oral historians have run into the spot where someone says, ‘Oh, my grandpa would be great for that topic,’ and you get there and it's, ‘Grandpa does not want to talk to you.’ So first, make sure they want their story recorded,” she said.

2. Determine the type of recording equipment you want to use. Decide if you want to record your interview with an audio recorder or use a video recording device.
It all depends on your needs and comfort level with the technology. For family members who are unable to travel this holiday season, you can include them by capturing their stories over a remote recording platform like Zoom, which became a vital tool for oral historians when COVID struck in 2020.

Helpful resources from Baylor’s Institute for Oral History include:
• How to choose the right digital recorder
• Oral History at a Distance, a webinar on the dynamics of conducting remote oral history interviews
• Remote Interviewing Resources guide (Oral History Association)

3. Research your family member’s life and their timeline to help you formulate your questions. Recording a family member’s oral history is more than just putting a recorder down in front of them and saying, “Talk.” If you’re recording an oral history over Christmas with a family member, are there specific things you want to know that are related to the holiday? For example, what was Christmas morning like for them as a child? How did your favorite family traditions start? What is their favorite holiday dish? (Maybe they could even share the recipe. “You can finally learn why Nana’s banana pudding doesn’t even have bananas in it,” Cain Darough said.)

“Doing your research to try to form those questions will help you get around the reluctance to talk sometimes,” Cain Darough added. “The favorite thing that I love to hear is, ‘Oh, I don't have much to say,’ or ‘I'm not that important.’ And then you sit down with them, and you listen to their stories, and your mind is just blown by the things that they've seen and experienced.”

4. Start with the basics: “Where are you from?” When Baylor oral historians conduct an interview, they generally begin with some life history of the subject, providing important context for historians.
“Ask questions early on that are easy for them to answer: a little bit of the backstory, a little bit of where they're from, where they grew up,” said Stephen Sloan, Ph.D., director of the Institute for Oral History, executive director of the Oral History Association and professor of history at Baylor. “I want to understand the lens through which they experienced events, and the only way I can do that is, who was this? What was formative in their life growing up? Who spoke into who they were? What did they learn? Where did they go? What did they do? Those are the sorts of things that I would be exploring early in the interview.”

One of the questions Cain Darough enjoys asking is, “What did you want to be when you grew up?” “You want to give them something that's very easy and comfortable to talk about,” Cain Darough said. “What was your favorite subject in school, just to see if that was something that continued on in their life. If there's a certain hobby or something that you know that they're affiliated with, when did you learn about that? Tell me more. What's your interest with this? And then they'll get to talking.”

5. Ask open-ended questions – without making any assumptions. With oral history, it is important that you don’t go into the interview with a specific agenda or try to lead anyone to a certain conclusion. “We can do this very subtly by assuming information, but you can't assume anything about their experience with the topic,” Sloan said. “If we assume information, it could be very far from how they encountered whatever event that may have been. Allow them to relate the ways in which they lived these experiences.”

6. Listen closely. Listening is an important facet of gathering oral history. But historians say you are not only listening for what they're saying, you're also listening for what they're not saying. “Are there things that are being skipped around?” Cain Darough said.
“For example, sometimes when you're talking to veterans about their combat experience, it may be the first time that they're reliving or retelling these stories. They need time, and you just have to be prepared for that.”

7. Be patient. It might take your subject some time to warm up to the conversation. “If you're talking to someone who is 80, 90 or even 100, that's a lot of memories that they have to go through, so patience is important,” Cain Darough said.

Looking to know more or arrange an interview? Click on Stephen Sloan’s icon, or contact Shelby Cefaratti-Bertin to connect with Adrienne Cain Darough.


ChatGPT-5.2 Now Achieves “Expert-Level” Performance — Is this the Holiday Gift Research Communications Professionals Needed?

With OpenAI’s latest release, GPT-5.2, AI has crossed an important threshold in performance on professional knowledge-work benchmarks. Peter Evans, Co-Founder & CEO of ExpertFile, outlines how these technologies will fundamentally improve research communications and shares tips and prompts for PR pros.

OpenAI has just launched GPT-5.2, describing it as its most capable AI model yet for professional knowledge work — with significantly improved accuracy on tasks like creating spreadsheets, building presentations, interpreting images, and handling complex multistep workflows. And based on our internal testing, we're really impressed.

For communications professionals in higher education, non-profits, and R&D-focused industries, this isn’t just another tech upgrade — it’s a meaningful step forward in addressing the “research translation gap” that can slow storytelling and media outreach. According to OpenAI, GPT-5.2 represents measurable gains on benchmarks designed to mirror real work tasks. In many evaluations, it matches or exceeds the performance of human professionals.

Also, before you hit reply with “Actually, the best model is…” — yes, we know. ChatGPT-5.2 isn’t the only game in town, and it’s definitely not the only tool we use. Our ExpertFile platform uses AI throughout, and I personally bounce between Claude 4.5, Gemini, Perplexity, NotebookLM, and more specialized models depending on the job to be done. LLM performance right now is a full-contact horserace — today’s winner can be tomorrow’s “remember when” — so we’re not trying to boil the ocean with endless comparisons. We’re spotlighting GPT-5.2 because it marks a meaningful step forward in the exact areas research comms teams care about: reliability, long-document work, multi-step tasks, and interpreting visuals and data. Most importantly, we want this info in your hands because a surprising number of comms pros we meet still carry real fear about AI — and long term, that’s not a good thing.
Used responsibly, these tools can help you translate research faster, find stronger story angles, and ship more high-quality work without burning out.

When "Too Much" AI Power Might Be Exactly What You Need

AI expert Allie K. Miller's candid but positive review of an early testing version of ChatGPT-5.2 highlights what she sees as drawbacks for casual users: "outputs that are too long, too structured, and too exhaustive." She goes on to say that in her tests, she observed that ChatGPT-5.2 "stays with a line of thought longer and pushes into edge cases instead of skating on the surface."

Fair enough. All good points. For communications professionals, however, these so-called "downsides" for casual users are precisely the capabilities we need. When you're assessing complex research and developing strategic messaging for a variety of important audiences, you want an AI that fits Miller's observation that GPT-5.2 feels like "AI as a serious analyst" rather than "a friendly companion." That's not a critique of our world—it's a job description for comms pros working in sectors like higher education and healthcare. Deep research tools that refuse to take shortcuts are exactly what research communicators need.

So let's talk more specifically about how comms pros can think about these new capabilities:

1. AI is Your New Speed-Reading Superpower for Research

You can upload an entire NIH grant, a full clinical trial protocol, or a complex environmental impact study and ask the model to highlight where key insights — like an unexpected finding — are discussed. It can do this in a fraction of the time it would take a human reader. This isn’t about being lazy. It’s about using AI to assemble the tedious information you need to craft compelling stories while other teams still parse dense text manually.

2. The Chart Whisperer You’ve Been Waiting For

We’ve all been there — squinting at a graph of scientific data that looks like abstract art, waiting for the lead researcher to clarify what those error bars actually mean. GPT-5.2 shows stronger performance on multimodal reasoning tasks, indicating a better ability to interpret and describe visual information like scientific figures, graphs and diagrams. With these capabilities, you can unlock the data behind visuals and turn them into narrative elements that resonate with audiences.

3. A Connection Machine That Finds Stories Where Others See Statistics

Great science communication isn’t about dumbing things down — it’s about building bridges between technical ideas and the broader public. GPT-5.2 shows notable improvements in abstract reasoning compared with earlier versions, based on internal evaluations on academic reasoning benchmarks. For example, teams working on novel materials science or emerging health technologies can use this reasoning capability to highlight connections between technical results and real-world impact — something that previously required hours of interpretive work. These gains help the AI spot patterns and relationships that can form the basis of compelling storytelling.

4. Accuracy That Gives You More Peace of Mind... When Coupled With Human Oversight

Let’s address the elephant in the room: AI hallucinations. You’ve probably heard the horror stories — press releases that cited a study that didn’t exist, or a “quote” that was never said by an expert. According to OpenAI, GPT-5.2 has meaningfully reduced error rates compared with its predecessor. Even with these improvements, human review with your experts and careful editing remain essential, especially for anything that will be published or shared externally.

5. The Speed Factor: When “Urgent” Actually Means Urgent

With the speed of media today, being second often means being irrelevant. GPT-5.2’s performance on workflow-oriented evaluations suggests it can synthesize information far more quickly than manual review, freeing up more time for strategic work. Deeper reasoning and longer contexts — the kinds of tasks that matter most in research translation — still require more processing time, though costs continue to improve. Savvy communications teams will adopt a tiered approach: faster models for simple tasks such as social posts and routine responses, and reasoning-optimized settings for deep research.

Your Action Plan: The GPT-5.2 Playbook for Comms Pros

Here’s a tactical checklist to help your team capitalize on these advances.

#1 Select the Right AI Model for the Job: Lower Time and Costs
• Use fast, general configurations for routine content
• Use reasoning-optimized configurations for complex synthesis and deep document understanding
• Use higher-accuracy configurations for high-stakes projects

#2 Find Hidden Ideas Beyond the Abstract: Let Deeper Reasoning Models Do the Heavy Work
• Upload complete PDFs — not just the 2-page summary you were given
• Use deeper reasoning configurations to let the model work through the material

Try these prompts in ChatGPT-5.2:
“What exactly did the researchers say about this unexpected discovery that would be of interest to my <target audience>? Provide quotes and page references where possible.”
“Identify and explain the research methodology used in this study, with references to specific sections.”
“Identify where the authors discuss limitations of the study.”
“Explain how this research may lead to further studies or real-world benefits, in terms relatable to a general audience.”

#3 Unlock Your Story: Leverage Improvements in Pattern Recognition and Reasoning
Try these prompts:
“Using abstract reasoning, find three unexpected analogies that explain this complex concept to a general audience.”
“What questions could the researchers answer in an interview that would help us develop richer story angles?”

#4 Change the Way You Write Captions
Take advantage of the way ChatGPT-5.2 processes and reasons about images, charts, diagrams, and other visuals far more effectively. Try these prompts:
Clinical Trial Graphs: “Analyze this uploaded trial-results graph. Identify key trends and comparisons to controls, then draft a 150-word summary with plain-language explanations and suggested captions suitable for donor communications.”
Medical Diagrams: “Interpret these uploaded images. Extract diagnostic insights, highlight innovations, and generate a patient-friendly explainer: bullet points plus one visual caption.”

A Word of Caution: Keep Experts in the Loop to Verify Information

Even with improved reliability, outputs should be treated as drafts. If your team does not yet have formal AI use policies, it's time to get started, because governance will be critical as AI use scales in 2026 and beyond. A trust-but-verify policy with experts treats AI as a co-pilot — helpful for heavy lifting — while humans remain accountable for approval and publication.

The Importance of Humans (aka The Good News)

Remember: the future of research communication isn’t about AI taking over — it’s about AI empowering us to do the strategic, human work that machines cannot. That includes:
• Building relationships across your institution
• Engaging researchers in storytelling
• Discovering narrative opportunities
• Turning discoveries into compelling narratives that influence audiences

With improvements in speed, reasoning, and reliability, the question isn’t whether AI can help — it’s what research stories you’ll uncover next to shape public understanding and impact.
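The tiered approach in playbook item #1 can be pictured as a simple routing table. A minimal sketch only: the task categories and tier names below are invented placeholders for illustration, not OpenAI product names or an actual ExpertFile workflow.

```python
# Illustrative sketch of a tiered model-selection policy for a comms team.
# Tier names and task categories are hypothetical placeholders.
ROUTING = {
    "social_post": "fast-general",          # routine content
    "routine_response": "fast-general",
    "deep_synthesis": "reasoning-optimized",  # complex synthesis
    "document_review": "reasoning-optimized",
    "media_statement": "high-accuracy",       # high-stakes deliverables
}

def pick_model(task_type: str, high_stakes: bool = False) -> str:
    """Route a task to a model tier; high-stakes work always escalates."""
    if high_stakes:
        return "high-accuracy"
    # Unknown tasks default to the slower, more careful tier.
    return ROUTING.get(task_type, "reasoning-optimized")
```

The useful design point is the default: when a task is unclassified, fall back to the careful tier, and let a single `high_stakes` flag override everything else.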
FAQ

How is AI changing expectations for accuracy in research and institutional communications?
AI is shifting expectations from “fast output” to defensible accuracy. Better reasoning means fewer errors in research summaries, policy briefs, and expert content—especially when you’re working from long PDFs, complex methods, or dense results. The new baseline is: clear claims, traceable sources, and human review before publishing.

Why does deeper AI reasoning matter for communications teams working with experts and research content?
Comms teams translate multi-disciplinary research into messaging that must withstand scrutiny. Deeper reasoning helps AI connect findings to real-world relevance, flag uncertainty, and maintain nuance instead of flattening meaning. The result is work that’s easier to defend with media, leadership, donors, and the public—when paired with expert verification.

When should communications professionals use advanced AI instead of lightweight AI tools?
Use lightweight tools for brainstorming, social drafts, headlines, and quick rewrites. Use advanced, reasoning-optimized AI for high-stakes deliverables: executive briefings, research positioning, policy-sensitive messaging, media statements, and anything where a mistake could create reputational, compliance, or scientific credibility risk. Treat advanced AI as your “analyst,” not your autopilot.

How can media relations teams use AI to find stronger story angles beyond the abstract?
AI can scan full papers, grants, protocols, and appendices to surface where the real story lives: unexpected findings, practical implications, limitations, and unanswered questions that prompt great interviews. Ask it to map angles by audience (public, policy, donors, clinicians) and to point to the exact sections that support each angle.

How should higher-ed comms teams use AI without breaking embargoes or media timing?
AI can speed prep work—backgrounders, Q&A, lay summaries, caption drafts—before an embargo lifts. The rule is simple: treat embargoed material like any sensitive document. Use approved tools, restrict sharing, and avoid pasting embargoed text into unapproved systems. Use AI to build assets early, then finalize post-approval at release time.

What’s the best way to keep faculty “in the loop” while still moving fast with AI?
Use AI to produce review-friendly drafts that reduce the load on researchers: short summaries, suggested quotes clearly marked as drafts, and a checklist of claims needing verification (numbers, methods, limitations). Then route the draft to the expert with specific questions, not a wall of text. This keeps approvals faster while protecting scientific accuracy and trust.

How should teams handle charts, figures, and visual data in research communications?
AI can turn “chart confusion” into narrative—if you prompt for precision. Ask it to identify trends, group comparisons, and what the figure does not show (limitations, missing context). Then verify with the researcher, especially anything involving significance, controls, effect size, or causality. Use the output to write captions that are accurate and accessible.

Do we need an AI use policy in comms and media relations—and what should it include?
Yes—because adoption scales faster than risk awareness. A practical policy should define: approved tools, what data is restricted, required human review steps, standards for citing sources/page references, rules for drafting quotes, and escalation paths for sensitive topics (health, legal, crisis). Clear guardrails reduce fear and prevent avoidable reputational mistakes.

If you’re using AI to move faster on research translation, the next bottleneck is usually the same for many PR and comms pros: making your experts more discoverable in generative search, on your website, and in other media.
ExpertFile helps media relations and digital teams organize their expert content by topic, keep detailed profiles current, and respond faster to source requests—so you can boost your AI citations and land more coverage with less work. For more information, visit us at www.expertfile.com.


The Research Behind the Reputation: TCU’s Dr. Ledbetter Maps the Science of Taylor Swift’s Storytelling

At Texas Christian University, Dr. Andrew Ledbetter, chair of the Communication Studies Department, is turning his scholarly attention to one of pop culture’s biggest phenomena: Taylor Swift. His research uses data-driven analysis to reveal how Swift’s albums and songs build an interconnected narrative universe — what he calls her “Taylorverse.” Ledbetter ran lyrics from across ten albums through semantic-network software to show how certain songs act as linchpins connecting themes of fame, womanhood, love and storytelling.

“I was interested in interconnections among the song lyrics,” says Ledbetter. “The songs that are most central have a lot of overlap with other songs, might tend to be songs that are the most popular.”

The work stands out not just for its pop-culture relevance, but for its academic innovation: combining computational text analysis with narrative theory to unlock why certain tracks resonate more deeply than others. For journalists, cultural commentators or anyone covering the evolving intersection of music, identity and media, Dr. Ledbetter is a go-to expert. He can speak to how storytelling in music shapes audience engagement, how media fandom becomes scholarship, and why Swift’s songwriting continues to spark new research just as much as chart-topping hits.

Andrew Ledbetter is available for interviews. Simply click on his icon to arrange an interview today.
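Ledbetter’s semantic-network method is not published as code here, but the core idea he describes — songs scoring as “central” when their lyrics overlap with many other songs — can be sketched in a few lines. The song titles and “lyrics” below are invented placeholders, not Swift’s actual text, and the real analysis is far more sophisticated than raw word overlap.

```python
from itertools import combinations

# Toy corpus: each "song" is a bag of words. Purely hypothetical data.
songs = {
    "song_a": "lights fame city stage crowd",
    "song_b": "love letter home story",
    "song_c": "fame love story stage",
}

def centrality(songs: dict) -> dict:
    """Score each song by its total shared-word overlap with every other song.
    Songs with high scores are the 'linchpins' of the network."""
    words = {title: set(text.split()) for title, text in songs.items()}
    scores = {title: 0 for title in songs}
    for a, b in combinations(words, 2):
        overlap = len(words[a] & words[b])  # edge weight between two songs
        scores[a] += overlap
        scores[b] += overlap
    return scores
```

Here `song_c` shares vocabulary with both other songs, so it scores highest — the toy analogue of a track that "connects" the themes of the rest of the catalog.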


Australia’s Under-16 Social Media Ban Isn’t a Finish Line - It’s a Reality Check

Australia’s move to restrict social media accounts for kids under 16 has become a global lightning rod - and it’s forcing the right conversation: what do we do when a technology is too powerful for a developing brain? But here’s what I think journalists should focus on next:

“A ban is a speed bump, not a seatbelt. It might slow kids down - but it won’t teach them how to drive their attention.”

That’s the part that gets lost in the headlines. Because even if you can reduce access, you still have to deal with the why behind the behavior: boredom, social pressure, loneliness, stress, sleep debt.

“The headlines make it sound like the problem is solved. But the real question is: what happens in the living room on day three?”

Offline.now’s early data shows something important: most people genuinely want to change their screen habits, but many feel overwhelmed and don’t know where to start. That’s why we begin with a quick self-assessment and map people into four Types - Overwhelmed, Ready, Stuck, Unconcerned - so the advice matches the person.

“We keep treating social media like a self-control test. It’s not. It’s a confidence problem - people don’t know where to start, so they start with shame.”

What I’d tell policymakers considering similar bans

1. Pair friction with skills. “If the only plan is ‘block the app,’ you’re betting against the internet. Workarounds aren’t a bug - they’re the default.”
2. Don’t outsource responsibility entirely to families. “If policy turns parents into full-time bouncers and kids into part-time hackers, we’ve built a system that’s guaranteed to fail.”
3. Ask what gets protected, not just what gets restricted. “The real target isn’t ‘screen time.’ It’s the moments screens replace.”

What parents need to know that headlines aren't telling them

This is a process, not a switch. The best “first phone / first social” plans are adjustable. Modeling beats monitoring. The rules collapse if adults don’t follow them too. Have a handoff plan.
If a child’s mood, sleep, school performance, or social withdrawal is deteriorating, it may be bigger than habits.

Why this is a late December / January story

“The holidays are the perfect storm: more free time, more family friction, more devices, less sleep. January is when the bill comes due.”

Journalist angles
• Bans vs. behavior change: what policy can’t solve
• The workarounds economy: age gates, bypass culture, privacy tension
• The four Types: why one-size-fits-all screen-time advice fails families
• New Year resets for families: simple, shame-free agreements that stick

Available for interviews

Eli Singer - CEO of Offline.now; author of Offline.now: A Practical Guide to Healthy Digital Balance. I speak about practical behavior change, non-judgmental family agreements, and confidence-based starting points - and I can direct people to licensed professionals via the Offline.now Directory when needs go beyond coaching.


Changing Phone Habits Isn’t a Willpower Problem. It’s a Confidence Problem.

Every January, millions of people swear they’ll “spend less time on my phone.” By February, they’re right back where they started, only now they feel worse about themselves. Eli Singer, founder and CEO of Offline.now and author of Offline.now: A Practical Guide to Healthy Digital Balance, thinks we’re telling the wrong story.

“Most people don’t need another productivity hack or a harsher version of ‘just put your phone down,’” Singer says. “They need one tiny experience that proves, ‘I can actually change this.’ That’s confidence. Without it, willpower doesn’t stand a chance.”

Drawing on early data from Offline.now’s self-assessment tool, Singer sees a pattern: people are highly motivated to change, but don’t believe they can stick to anything. His framework sorts users into four Types — Overwhelmed, Ready, Stuck and Unconcerned — based on motivation and confidence. Each Type gets different starting moves, all designed to be done in under 20 minutes.

“Telling an overwhelmed parent or burned-out executive to do a 30-day social media fast is like asking someone who’s never run to start with a marathon,” he says. “We focus on micro-wins — one phone-free dinner, ten minutes of swapping doomscrolling for something you actually enjoy — because that’s what rebuilds trust in yourself.”

Singer is a coach, not a therapist, but Offline.now’s Digital Wellness Directory connects people with licensed therapists, social workers, coaches and dietitians when deeper clinical support is needed. He positions Offline.now as the “front door” for people who know their relationship with screens isn’t working, but don’t know where to start.

Why now

January is peak “resolution season” and peak disappointment season. Singer can speak to why traditional “digital detox” narratives don’t work, how confidence and micro-steps change the story, and what a realistic New Year phone reset looks like for real people with jobs, kids and ADHD.
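The motivation-and-confidence framework described above is essentially a quadrant lookup. A minimal sketch, assuming a 0-to-1 self-assessment scale and a 0.5 cutoff; the specific quadrant assignments below (e.g. "Overwhelmed" = motivated but low-confidence) are my reading of the article's description, not Offline.now's actual scoring.

```python
def offline_type(motivation: float, confidence: float, cutoff: float = 0.5) -> str:
    """Map two self-assessment scores onto the four Types.
    Scale, cutoff, and quadrant labels are illustrative assumptions."""
    if motivation >= cutoff:
        # High motivation: confidence decides whether you're ready or swamped.
        return "Ready" if confidence >= cutoff else "Overwhelmed"
    # Low motivation: confident people are unconcerned; the rest are stuck.
    return "Unconcerned" if confidence >= cutoff else "Stuck"
```

The point of the quadrant model is that each cell gets different starting moves, so advice is routed by where a person lands rather than one-size-fits-all.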
Featured Expert

Eli Singer – founder of Offline.now and author of Offline.now: A Practical Guide to Healthy Digital Balance. Singer can speak to the platform’s behavioral data on digital overwhelm, the confidence gap, the Offline.now Matrix, and how 20-minute micro-steps outperform all-or-nothing digital detoxes in the real world. Expert interviews can be arranged through the Offline.now media team.


Study: Lessons learned from 20 years of snakebites

The best way to avoid getting bitten by a venomous snake is to not go looking for one in the first place. Like eating well and exercising to feel better, the avoidance approach is fully backed by science. A new study from University of Florida Health researchers analyzed 20 years of snakebite cases seen at UF Health Shands Hospital in Gainesville.

“This is the first time we’ve evaluated two decades of venomous snakebites here,” said senior author and assistant professor of medicine Norman L. Beatty, M.D., FACP.

Researchers analyzed 546 de-identified patient records from 2002 to 2022 and highlighted notable conclusions — for instance, that a third of the snakebites analyzed were preventable, caused by people intentionally engaging with wild snakes.

“Typically, people’s experiences with getting bitten are due to an interaction that was inadvertent — they stumble upon a snake or reach for something without seeing one camouflaged,” Beatty said. “In this case, people were seeking them out. There were a few individuals who were bitten on more than one occasion.”

Most (77.8%) of the snakebites occurred in adult men while they were handling wild snakes, and most of the bites were inflicted by the diminutive pygmy rattlesnake and the cottonmouth. The latter is named for the white lining of its mouth, which it displays when threatened.

“I was less surprised to see those species emerge as some of the most common ones people were bitten by, but the robust presence of other, less common species in the data — like the eastern coral snake, southern copperhead, timber rattlesnake and the eastern diamondback rattlesnake — was interesting,” Beatty said. The eastern diamondback rattlesnake is one of the most venomous snakes in North America.

Most patients were bitten on their hands and fingers, and around 10% of them attempted outdated self-treatments no longer recommended by doctors — like trying to suck out the venom.
The study began as a medical student research project, thanks to a handful of medical students who worked with Beatty to review the cases. The intention was to dive deep into the circumstances of each encounter and learn more about the treatment given, as well as the outcomes.

Fourth-year medical student River Grace, the paper’s first author, said the work struck a personal note. “My dad is a reptile biologist, so I’ve grown up around snakes my whole life,” Grace said. “He was bitten by a venomous snake many years ago and ended up hospitalized for multiple weeks, so it was interesting to keep that experience in mind while going over the data.”

Grace noted that it took those bitten over an hour, on average, to travel from where the bite occurred to the hospital. “It seems like the reason for that was people not knowing exactly what to do once they’d been bitten, or underestimating the severity of the bite,” he said. “Some would just sit at home for hours.”

Floridians share their home with a variety of scaly neighbors who don’t always welcome visitors — accidental or not. Ultimately, thanks to the timely care of providers, only three snakebites were fatal. However, antivenom is no panacea. Those who are lucky enough to receive it in time can still incur complications from the original bite, like tissue damage, or even a fatal allergic reaction to the antivenom itself.

Consequently, the researchers are looking toward improving the processes used to triage snakebites in the emergency room, ensuring that providers are equipped with the knowledge and know-how to shorten time to treatment. “In the future, we think we’d love to get involved in enhancing provider education so everyone in the health care setting is confident in being able to identify and administer antivenom as quickly and safely as possible,” Grace said.

3 min. read

UF team develops AI tool to make genetic research more comprehensive

University of Florida researchers are addressing a critical gap in medical genetic research — ensuring it better represents and benefits people of all backgrounds. Their work, led by Kiley Graim, Ph.D., an assistant professor in the Department of Computer & Information Science & Engineering, focuses on improving human health by addressing "ancestral bias" in genetic data, a problem that arises when most research is based on data from a single ancestral group. This bias limits advancements in precision medicine, Graim said, and leaves large portions of the global population underserved when it comes to disease treatment and prevention. To solve this, the team developed PhyloFrame, a machine-learning tool that uses artificial intelligence to account for ancestral diversity in genetic data. With funding support from the National Institutes of Health, the goal is to improve how diseases are predicted, diagnosed, and treated for everyone, regardless of their ancestry. A paper describing the PhyloFrame method and how it showed marked improvements in precision medicine outcomes was published Monday in Nature Communications. Graim, a member of the UF Health Cancer Center, said her inspiration to focus on ancestral bias in genomic data evolved from a conversation with a doctor who was frustrated by a study's limited relevance to his diverse patient population. This encounter led her to explore how AI could help bridge the gap in genetic research. “I thought to myself, ‘I can fix that problem,’” said Graim, whose research centers on machine learning and precision medicine and who is trained in population genomics. 
“If our training data doesn’t match our real-world data, we have ways to deal with that using machine learning. They’re not perfect, but they can do a lot to address the issue.” By leveraging data from the population genomics database gnomAD, PhyloFrame integrates massive databases of healthy human genomes with the smaller disease-specific datasets used to train precision medicine models. The models it creates are better equipped to handle diverse genetic backgrounds. For example, it can predict the differences between subtypes of diseases like breast cancer and suggest the best treatment for each patient, regardless of patient ancestry. Processing such massive amounts of data is no small feat. The team uses UF’s HiPerGator, one of the most powerful supercomputers in the country, to analyze genomic information from millions of people. For each person, that means processing 3 billion base pairs of DNA. “I didn’t think it would work as well as it did,” said Graim, noting that her doctoral student, Leslie Smith, contributed significantly to the study. “What started as a small project using a simple model to demonstrate the impact of incorporating population genomics data has evolved into securing funds to develop more sophisticated models and to refine how populations are defined.” What sets PhyloFrame apart is its ability to ensure predictions remain accurate across populations by considering genetic differences linked to ancestry. This is crucial because most current models are built using data that does not fully represent the world’s population. Much of the existing data comes from research hospitals and patients who trust the health care system. This means populations in small towns or those who distrust medical systems are often left out, making it harder to develop treatments that work well for everyone. 
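The article does not spell out how PhyloFrame uses the gnomAD data internally, but the general idea it describes — consulting population-scale allele frequencies so a disease model does not lean on markers that merely track ancestry — can be illustrated with a toy sketch. Everything below (the frequencies, the weighting scheme, the synthetic patient cohort) is invented for illustration and is not the published algorithm:

```python
# Illustrative sketch only, NOT the published PhyloFrame method: down-weight
# features whose frequencies vary strongly across ancestral populations, so
# a classifier favors signals that generalize across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical allele frequencies for 5 variants across 3 reference
# populations (a stand-in for the summaries a resource like gnomAD provides).
pop_freqs = rng.uniform(0.05, 0.95, size=(3, 5))

# Variants that differ mainly by ancestry get small weights; stable ones ~1.
ancestry_variance = pop_freqs.var(axis=0)
weights = 1.0 / (1.0 + 10.0 * ancestry_variance)

# Small synthetic disease cohort: 100 patients x 5 variant-derived features.
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(int)

# Apply the ancestry-aware weights before fitting a simple classifier.
model = LogisticRegression().fit(X * weights, y)
print(model.score(X * weights, y))  # training accuracy on the toy data
```

The real system trains on vastly larger data (billions of base pairs per genome, processed on HiPerGator); the sketch only conveys why population-level frequency data can steer a model away from ancestry-confounded features.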
She also estimated that 97% of the sequenced samples are from people of European ancestry, due largely to national and state-level funding and priorities, but also to socioeconomic factors that snowball at different levels – insurance affects whether people get treated, for example, which affects how likely they are to be sequenced. “Some other countries, notably China and Japan, have recently been trying to close this gap, and so there is more data from these countries than there had been previously, but still nothing like the European data,” she said. “Poorer populations are generally excluded entirely.” Thus, diversity in training data is essential, Graim said. “We want these models to work for any patient, not just the ones in our studies,” she said. “Having diverse training data makes models better for Europeans, too. Having the population genomics data helps prevent models from overfitting, which means that they’ll work better for everyone, including Europeans.” Graim believes tools like PhyloFrame will eventually be used in the clinical setting, replacing traditional models to develop treatment plans tailored to individuals based on their genetic makeup. The team’s next steps include refining PhyloFrame and expanding its applications to more diseases. “My dream is to help advance precision medicine through this kind of machine learning method, so people can get diagnosed early and are treated with what works specifically for them and with the fewest side effects,” she said. “Getting the right treatment to the right person at the right time is what we’re striving for.” Graim’s project received funding from the UF College of Medicine Office of Research’s AI2 Datathon grant award, which is designed to help researchers and clinicians harness AI tools to improve human health.

4 min. read

LSU Experts Break Down Artificial Intelligence Boom Behind Holiday Shopping Trends

Consumers are increasingly turning to artificial intelligence tools for holiday shopping—especially Gen Z shoppers, who are using platforms like ChatGPT and social media not only for gift inspiration but also to find the best prices. Andrew Schwarz, professor in the LSU Stephenson Department of Entrepreneurship & Information Systems, and Dan Rice, associate professor and director of the E. J. Ourso College of Business Behavioral Research Lab, share their insights on this emerging trend.

AI is the new front door for search:

Schwarz: We’re seeing a fundamental change in how consumers find information. Instead of browsing multiple pages of results, users—especially Gen Z—are skipping straight to conversational AI for curated answers. That dramatically shortens the shopping journey. For years, companies optimized for SEO to appear on the first page of Google; now they’ll have to think about how their products surface in AI-generated recommendations. This may lead to a new form of “AIO”—AI Information Optimization—where retailers tailor product descriptions, metadata, and partnerships specifically for AI visibility. The companies that adapt early will have a distinct advantage in capturing consumer attention.

Rice: When people are satisfied with the AI results (like a summary at the top of the Google results) and don’t click on any of the paid or organic links, it leads to a huge increase in what we call “zero-click search” (for obvious reasons). For some providers, this is leading to significant drops in web traffic from search results, which can be disconcerting due to the potential loss of leads. However, to Andrew’s point about shortening the journey, it means that the consumers who do come through are much more likely to buy (quickly) because they are “better” leads. 
This translates to seemingly paradoxical situations for providers: they see drops in click-through rates and visitors/leads, yet revenue increases because the visitors are “better.”

There is a rise in personalized shopping journeys:

Schwarz: AI essentially acts as a personal shopper—one that can instantly analyze preferences, budget, personality traits, or past behavior to produce tailored gift lists. This shifts power toward “delegated decision-making,” in which consumers allow AI to narrow their choices. Younger consumers are already comfortable outsourcing this cognitive load. However, as ads enter the picture, these personalized journeys could be shaped by incentives that aren’t always transparent. That creates a new responsibility for platforms to disclose when suggestions are sponsored and for users to develop a more critical lens when interacting with AI-driven recommendations.

Rice: This is also a great point. The “tools” marketers use to attract customers are constantly evolving, but in many ways this seems to be the next iteration of the Amazon.com suggestions at the bottom of a product page (“buy all x for $” or “consumers also looked at…,” etc.), based on past search and purchase histories. One of the main differences is that you can now create virtually limitless ways to compare products, making comparisons less taxing (reducing cognitive load and stress), which may, in some cases, increase the likelihood of purchase. These idiosyncratic comparisons and prompts lead to the truly unique journeys Andrew is discussing. You no longer have to be beholden to a retailer-specified price range. You could choose your own, or instead ask an AI to list the products representing the best “value” based on consumer reviews, perhaps by asking for the top ten products by cost per star rating. 
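Rice’s “cost per star rating” query is just a ratio ranking; as a minimal sketch, with made-up product names, prices, and ratings:

```python
# Rank hypothetical products by "cost per star": price divided by the
# average review rating, cheapest-per-star first.
products = [
    ("Ski A", 499.00, 4.8),  # (name, price in dollars, average stars)
    ("Ski B", 379.00, 4.2),
    ("Ski C", 649.00, 4.9),
]
ranked = sorted(products, key=lambda p: p[1] / p[2])
for name, price, stars in ranked:
    print(f"{name}: ${price / stars:.2f} per star")
```

An AI assistant answering that prompt would effectively be doing this computation over a much larger, live catalog.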
Advertising is becoming more subtle and conversational:

Schwarz: With ads woven directly into AI responses, the traditional boundary between content and advertising blurs. Instead of banner ads, pop-ups, or clearly labeled sponsored posts, recommendations in a conversational thread may feel more like advice than marketing. This has enormous implications for consumer trust. Retailers will likely see higher engagement through these context-aware ad placements, but regulatory scrutiny may also increase as policymakers evaluate how clearly sponsored content is identified. The risk is that advertising becomes invisible—something both platform designers and regulators will need to monitor carefully.

Rice: This is definitely true. I was recently exploring an AI-based tool for choosing downhill skis, but the tool was subtly provided by a single ski brand. I’m not sure the range of ski brands covered was truly delivering the “best overall fit” for a potential buyer rather than the best possible ski within that brand. At least in that case, it was somewhat disclosed. It does, however, become an issue if consumers feel misled, but they’d have to notice it first. Still, the advantages are big for retailers, and the numbers don’t lie: according to some preliminary Black Friday data, shoppers using an AI assistant were 60% more likely to make a purchase.

Schwarz: This shift is going to reshape multiple layers of the retail ecosystem. Retailers will need to rethink how they show up in AI-driven environments. Traditional SEO, ad bids, and social media strategies won’t be enough. Partnerships with AI platforms may become as important as being carried by major retailers today. Because AI tools can instantly compare prices across dozens of retailers, consumers will become more price-sensitive. Retailers may face increasing pressure to offer competitive pricing or unique value propositions, as AI reduces friction in comparison shopping. 
Retailers who integrate AI into their own websites—chat-based shopping assistants, personalized gift advisors, automated bundling—will gain an edge. Consumers increasingly expect conversational interfaces, and companies that delay will quickly feel outdated. As AI tools influence purchasing decisions, consumers and regulators alike will demand clarity around how recommendations are generated. Retailers will need to navigate this carefully to maintain trust. What I think we are going to see accelerate as we move forward:
• AI-powered concierge shopping will become mainstream. Within a couple of years, using AI to generate shopping lists, compare prices, and find deals will be as common as using Amazon today.
• Retailers will create AI-specific marketing strategies. Instead of optimizing for keywords, they’ll optimize for prompts: how consumers might ask for products and how an AI system interprets those requests.
• More platforms will introduce advertising into AI models. ChatGPT is simply the first mover. Once the revenue potential becomes clear, others will follow with their own ad integrations.
• Greater scrutiny from policymakers. As conversational advertising grows, transparency rules and labeling requirements will almost certainly follow.
• A new era of “conversational commerce.” Buying directly through AI—“ChatGPT, order this for me”—will become increasingly common, merging search, recommendation, and transaction into a single seamless experience.
I can speak to this on a personal level. My college-aged son is interested in college football, and I wanted to get him a streaming subscription to watch the games. However, the football landscape is fragmented across multiple, expensive platforms. I asked ChatGPT to generate a series of options. Hulu is $100/month for Live TV, but ChatGPT recommended a combination of ESPN+, Peacock, and Paramount+ for $400/year and identified which conferences would not be covered. What would have taken me hours only took me a few minutes! 
Rice: On the other hand, AI isn’t infallible, and it can lead to suboptimal results, hallucinations, and questionable recommendations. From my recent ski shopping experience, I encountered several pitfalls. First, for very specific questions about a particular model, I sometimes received answers for a different ski model in the same brand, or for a different ski altogether, or specs I knew were just plain wrong—none of which was particularly helpful. Second, regarding Andrew’s point about the conversational tone, I asked questions intended to push the limits of what could be considered reliable. For example, I asked the AI to describe the difference in “feel” of the ski for the skier among several models and brands. While the AI gave very detailed and plausible comparisons that were very much like an in-store discussion with a salesperson or area expert, I’m not sure I fully trust it when an AI tells me that you can really feel the power of a ski push you out of a turn, that this ski has great edge hold, and so on. It sounds great, but where is the AI sourcing this information? I’m not convinced it’s fully accurate. It also seems we’re starting to see Google shift toward a more AI-centric approach (e.g., AI summaries and full AI Mode). At the same time, we’re also starting to see AI migrate closer to Google as people use it for product-related chats, and companies like Amazon and Walmart have developed their own AI that is specifically focused on the consumer experience. I can’t imagine it will be long before companies like OpenAI and their competitors start “selling influence” in AI discussions as a way to monetize the sway their engines will have. 

6 min. read

Opinion: Hey Florida! Want to go to Mars? Here’s what it will do to your body

The president is eager “to plant the stars and stripes on the planet Mars.” Would you sign up for that mission? What would happen to your body in the three years you would be gone? As the United States continues to prioritize space travel, you might wonder why anyone would want to travel to Mars and whether it’s even ethical to expose humans to such extreme physiological conditions. The world is watching as the astronauts on the Boeing Starliner remain stuck in space until at least March due to a capsule malfunction. So many questions have arisen about the impacts of people spending extended periods of time in space, and we don’t have all the answers yet. However, because I study how spaceflight affects human physiology and performance, I have some ideas. The first 10 minutes of your journey will be exciting, but it’s the next months and years we really need to worry about. We have solved some of the problems but not all. After you lift off, the high g-forces will press your body against the crew couch as you accelerate, but there’s really not too much to fear. A typical launch results in only about half the acceleration experienced by a fighter pilot in a tight turn. You might feel lightheaded, but astronauts have dealt with this for generations. Read the full article in the Tampa Bay Times.

1 min. read