

A Year Into Ahmed al-Sharaa's Presidency, Villanova's Samer Abboud, PhD, Shares Thoughts on Syrian Affairs

One year ago, after a campaign that toppled Bashar al-Assad's repressive dictatorship, Ahmed al-Sharaa assumed the Syrian presidency. Since then, the former rebel commander has worked to establish his credentials as a statesman, winning the support of regional powers like Türkiye, Saudi Arabia and Qatar—as well as recognition from the White House. Yet al-Sharaa and his transitional government have not been immune from criticism, particularly over their handling of domestic affairs.

Samer Abboud, PhD, director of the Center for Arab and Islamic Studies at Villanova University, is an expert on modern Syria and the wider Middle East. A year into al-Sharaa's presidency, he believes the provisional government has made incredible strides in some areas, like international diplomacy, while struggling to find its footing in others.

"There's no doubt that Syria's external image is becoming more positive. We see this kind of charm offensive, with President al-Sharaa taking to the world stage," says Dr. Abboud. "Also, most of the regional actors are very fond of al-Sharaa and were very happy for the Assad regime to have fallen. So, there's this external presentation of a transition government that is legitimate and has support, and I think that's largely true.

"The problem in Syria right now, of course, is what's happening internally. To begin, across the country, you have completely collapsed infrastructure—limited electricity, restricted access to running water and unreliable internet."

Much has been made of economic sanctions' role in contributing to these internal issues, with Western governments having historically limited the amount of aid and investment that could enter Syria. However, while Dr. Abboud sees these measures' elimination as crucial to the nation's progress, he also contends that ending restrictions alone is not enough to ensure the country's long-term stability and prosperity.
Of particular concern, according to the professor, is the al-Sharaa administration's persistent claim "that 'free markets' could and would be a cure-all." As he explains, "The problem is that there's literally no evidence to demonstrate that private enterprise is interested in social betterment in reconstruction cases. You can't rebuild a state and a society on the profit logic. When you look at Lebanon, after all the wars Lebanon endured, what did free markets—without a strong public sector—do for that country? Roughly 80 percent of Lebanese people live in poverty."

Beyond the troubles surrounding economic growth and infrastructural development, there also exist a series of fractures along ethnic and ideological lines. Wide swaths of Syria are currently controlled by militias with agendas at odds with that of the provisional government, and despite making inroads with one significant bloc of dissent (the Kurdish-led Syrian Democratic Forces), tensions are exceedingly high. Furthermore, a number of groups remain suspicious of the president and his intentions due to his past affiliation with Hayat Tahrir al-Sham, a Sunni Islamist group that traces its roots to al-Qaeda.

Navigating this delicate situation with poise and precision is something that al-Sharaa needs to master, contends Dr. Abboud. And, over the course of the past several months, it seems Syria's new leader has started to refine the skill.

"To illustrate, last year, at least 25 people were killed in a bombing at the Mar Elias Church in Damascus, and President al-Sharaa did not go to the site. In addressing the incident, he also didn't use the language of martyrdom, which is what you would typically do for any person—Christian or Muslim—who died in this context," says Dr. Abboud. "In June, however, they arrested the culprits, and he went and met the patriarch and went inside the church, and they publicized it.
"The first time, he was too worried about these internal influences—of being perceived by his base as having moderated his views. Right now, he very much finds himself caught in a balancing act, working to temper the forces that are compelling him to possibly do something that could worsen an unstable situation. But I do think that the two contrasts [represented in the Mar Elias Church episode] suggest that the president is learning and gradually figuring out how to do politics a bit differently."

In this vein, Dr. Abboud feels the next phase in al-Sharaa's evolution should center on reckoning with the history of the country's late civil war and encouraging a dialogue between those who supported the Assad regime and those who sought to overthrow it. In the professor's estimation, this step is essential to achieving a lasting peace in Syria.

"Currently, there are some memory projects and knowledge projects that are happening, but those are not led or facilitated by the state. And that's troublesome, given what we've seen in other conflict contexts," he says. "In Lebanon, for instance, the state has amnesia. The civil war is not in the textbooks, officials don't talk about it, and it's not commemorated nationally. But then, in many ways, the narrative of how it happened—who are the victims, who are the perpetrators—can totally shape people's lives."

Still, while much economic, social and humanitarian work remains to be done, Syria today finds itself in a position unlike any it has occupied in decades: one marked by possibility.

"In general, I envision an extended period of grace for the government and an extended period of hope," concludes Dr. Abboud. "Syria did not have a future under the Assad regime. Or it had a future, but one characterized by generations of isolation. Today, people, both inside and outside Syria, have an entirely different outlook."


How to Make Your Experts “AI-Ready"

AI is changing how people discover expertise. Today, journalists, event organizers, researchers, and the public increasingly turn to tools like ChatGPT, Claude, Perplexity, and Google Search's AI summaries powered by Gemini. Instead of clicking through pages of links, they expect clear, credible answers—often delivered instantly, with citations.

That shift has major implications for organizations. It's no longer enough for your experts to "rank well." They need to be understood, trusted, and accurately represented by AI systems. So the real question becomes: when AI talks about your experts, does it get it right? This is where LLMs.txt plays an important role—especially when paired with an ExpertFile-powered Expert Center.

What Is LLMs.txt (in Plain English)?

...and why it is essential for expert content.

LLMs.txt is a small, machine-readable file placed on your organization's website—in the case of your expert content, alongside your main Expert Center. Its purpose is simple: to explain your expertise to AI systems clearly and unambiguously.

"AI systems don't just scan for keywords; they look for clear meaning, consistent context, and clean formatting — precise, structured language makes it easier for AI to classify your content as relevant." — Microsoft: Optimizing Your Content for Inclusion in AI Search Answers

Rather than forcing AI to infer meaning from scattered pages, LLMs.txt explicitly tells systems:
• Who your experts are
• Which pages represent official, curated content
• How expert profiles differ from articles, Q&A, or research content
• How your organization's expertise should be interpreted as a whole

Think of it as a table of contents and usage guide for AI—helping large language models understand your site the way a communications professional would.

Why This Matters for Visibility and Trust

It establishes your organization as the source of truth. AI systems routinely synthesize information from multiple places. Without guidance, they may rely on outdated bios, scraped content, or secondary references. LLMs.txt provides a clear signal: this is our official expert content; this is what represents us.

For ExpertFile clients, this matters because the platform already centralizes and curates expert content—from profiles and directories to Spotlights and Expert Q&A—ensuring that what AI sees is current, governed, and institutionally endorsed. The result: greater accuracy, stronger attribution, and reduced risk of misrepresentation when your experts appear in the ever-growing volume of AI-generated overviews and answers. (ahrefs: AI Overviews Have Doubled)

How It Improves Discovery Across AI Platforms

It makes structured expertise easier for AI to use. ExpertFile is purpose-built to publish structured expert content at scale—content that goes well beyond static bios. LLMs.txt simply helps AI recognize and use that structure correctly. It clarifies the role of key ExpertFile content types, including:
• Expert Profiles → canonical identity, credentials, and areas of expertise
• Spotlight Posts → timely commentary, thought leadership, and research insights
• Expert Q&A → authoritative answers to real-world questions
• Directories, Research Bureaus, and Speakers Bureaus → curated collections of expertise by topic or audience

This makes it easier for AI systems to:
• Match your experts to breaking news and trending topics
• Pull accurate summaries for AI-generated responses
• Identify the right expert for journalists, event organizers, and researchers

Combined with ExpertFile's extended distribution through expertfile.com and the ExpertFile Mobile App, your expertise is not only published but actively discoverable across the channels used by key audiences.

How It Builds Organizational Authority

It connects individual experts to institutional credibility. Without context, AI may treat expert pages as isolated profiles. LLMs.txt helps connect the dots. It tells AI that:
• Your experts are curated and endorsed by the organization
• Their insights are part of a broader expertise ecosystem
• Your institution has depth across priority subject areas

This aligns closely with how ExpertFile structures content to support E-E-A-T (Experience, Expertise, Authority, Trust)—not just at the individual level, but across the organization. The outcome: your organization is recognized not just as a collection of experts, but as an authoritative source of knowledge.

How It Works with Google, Gemini, and AI Search

It supports AI summaries, citations, and knowledge panels. LLMs.txt helps ensure that when Google's AI:
• Summarizes your organization
• Cites expert commentary
• Builds "about this topic" panels
…it draws from your official, structured ExpertFile content, rather than fragmented third-party sources. This complements ExpertFile's existing SEO and AI-discoverability foundation, which includes clean code, proper metadata, schema markup, and frequent crawling by both search engines and AI bots.

How LLMs.txt Fits with SEO, Meta Tags, and Schema

LLMs.txt doesn't replace SEO—it builds on it. Traditional SEO elements such as page titles, meta descriptions, schema.org markup, and internal linking remain essential for helping search engines index and rank your content. ExpertFile already delivers these fundamentals out of the box, continually testing and evolving SEO and GEO (Generative Engine Optimization) standards as search changes.

"Semantic SEO helps search engines understand context... it now helps bridge a critical gap between traditional SEO and newer generative engine optimization (GEO) and AI optimization (AIO) efforts." — Search Engine Land: Semantic SEO: How to optimize for meaning over keywords

LLMs.txt adds a layer designed specifically for AI systems:
• Schema explains individual pages
• LLMs.txt explains your entire expertise ecosystem

In simple terms: SEO helps your content get found; LLMs.txt helps AI understand, summarize, and cite it correctly. Together, they ensure your experts are not only visible but accurately represented wherever AI is shaping discovery.

Why This Is Especially Powerful on ExpertFile

ExpertFile was designed to future-proof expert visibility—offering structured publishing, governance, distribution, inquiry management, analytics, and professional services as part of a continuously evolving SaaS platform. LLMs.txt acts as a multiplier on that foundation:
• Turning your Expert Center into a machine-readable expertise hub
• Strengthening AI discovery without adding operational burden
• Supporting emerging use cases like automated expert matching and AI-assisted research

It's not about chasing new technology. It's about ensuring your expertise is clearly defined, properly attributed, and trusted—now and in the future.

The Takeaway

An LLMs.txt file on your ExpertFile organization page helps ensure that:
• Your experts are found by AI tools, not overlooked
• Your content is interpreted correctly, not flattened or misrepresented
• Your organization earns authority and trust in AI summaries, citations, and search results

"AI search isn't eliminating organic traffic. But it is reducing visits to source websites… Measure presence (citations, mentions) alongside traffic to see real impact." — Semrush: AI Search Trends for 2026 & How You Can Adapt

As AI becomes the front door to information, LLMs.txt helps make sure that when people ask for expertise, your organization is the answer they get.
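For readers who want a concrete picture of the file itself, here is a minimal sketch of what an LLMs.txt file for an Expert Center might contain, following the commonly proposed convention of a markdown file served at /llms.txt (an H1 title, a blockquote summary, and H2 sections of annotated links). All organization names and URLs below are hypothetical, not actual ExpertFile output:

```markdown
# Example University Expert Center

> Official, curated profiles and commentary from Example University's
> faculty experts, maintained by the Office of Communications.

## Expert Profiles

- [Dr. Jane Doe](https://experts.example.edu/jane-doe): Canonical profile,
  credentials, and areas of expertise in climate policy.

## Spotlight Posts

- [Commentary archive](https://experts.example.edu/spotlights): Timely
  expert commentary and research insights.

## Expert Q&A

- [Q&A library](https://experts.example.edu/qa): Authoritative answers
  to common media questions.
```

The structure mirrors the content types described above: it tells an AI system which pages are official, what each section represents, and how the pieces fit together.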


How corporate competition can spur collaborative solutions to the world's problems

Why can’t large competitive companies come together to work on or solve environmental challenges, AI regulation, polarization or other huge problems the world is facing? They can, says the University of Delaware’s Wendy Smith. While it's difficult, the key is to have these companies collaborate under the guise of competition.

Smith, a professor of management and an expert on these types of paradoxes, co-authored a recent three-year study of one of the most profound collaborations. Her team looked at the unlikely alliance of 13 competing oil and gas companies that eventually formed Canada’s Oil Sands Innovation Alliance (COSIA), which works with experts worldwide to find innovative solutions for environmental and technical challenges in the region. Smith and her co-authors found that those companies were willing to collaborate, but only when collaboration was cast in the language, practices and goals of competition.

Given the scope of our global problems, companies must continually work together to offer solutions. Creating that collaboration becomes critical, Smith said, and this research offers important insight into how these collaborations are possible.

Among the study's key findings:

Competition can drive cooperation — if leaders harness it. It would make sense to assume that competition undermines collaboration. But the study finds that those who championed alliances used competitive dynamics to strengthen cooperation among rival firms. Rather than suppressing rivalry, leaders leveraged competition as a mechanism to enable joint action toward shared environmental goals. This reframes how organizations can manage tensions between competition and cooperation in partnerships. For example, COSIA leaders created competition between partners to see who would contribute the most valuable environmental innovations. Partners could only benefit from other companies’ innovations commensurate with what they shared.

A “paradox mindset” is key to complex collaborative success. The research identifies the importance of what the authors call a paradox mindset, which sees competition and cooperation not as opposites to be balanced but as interrelated forces that can be used in tandem. Leaders in the study who adopted this mindset were more thoughtful and creative about how to engage both competitive and collaborative practices in the same alliance.

Traditional balance isn’t the goal — process over stability. Instead of pursuing a simplistic “balance” between competing and cooperating, the study shows that effective alliances evolve through process, where competition remains visible and even useful throughout the lifecycle of the alliance.

To connect with Smith directly and arrange an interview, visit her profile and click on the "contact" button. Interested journalists can also send an email to MediaRelations@udel.edu.


LI Schools See Improvement in Math and ELA Exams

Dr. Amy Catalano, interim dean of Hofstra University’s School of Education, was interviewed by Newsday about improving English language arts (ELA) and math scores among Long Island students in grades 3-8. The article also noted that student participation in testing has increased: on Long Island, 31.1% of students opted out of the ELA test in 2025, compared with 36.5% last year and about 41% in 2023. Experts like Dr. Catalano noted that all eligible students need to take the tests, or scores could mask academic gaps. “If you don’t have 100% of your kids taking the test, those results are just not reliable,” she said.


AI as IP™: A Framework for Boards, Executives, and Investors

Under current corporate accounting practices, artificial intelligence (AI) companies’ most valuable resources – large language models, training datasets, and algorithms – remain “off the books,” or uncapitalized. As the importance of AI continues to grow in the global knowledge-based economy, financial statements are becoming less representative of a company’s true worth, creating a recognition gap.

In this article, James E. Malackowski, Eric Carnick, and David Ngo present several conceptual frameworks to bridge this gap. They explain how the triangulation of three valuation approaches can reveal both the tangible investment base and the intangible, strategic upside of AI assets. In turn, these approaches provide board-level visibility into where AI capital resides and how it contributes to enterprise value.

James E. Malackowski is the Chief Intellectual Property Officer (CIPO) of J.S. Held and Co-founder of Ocean Tomo, a part of J.S. Held. Mr. Malackowski has served as an expert on over one hundred occasions on intellectual property economics, including valuation, royalty, lost profits, price erosion, licensing terms, venture financing, copyright fair use, and injunction equities. He has substantial experience as a Board Director for leading technology corporations, research organizations, and companies with critical brand management issues.

This article is the second installment in our three-part series, Artificial Intelligence as Intellectual Property or “AI as IP™”, which explores how artificial intelligence assets should be treated as a form of intellectual property and enterprise capital. The first article, “A Strategic Framework for the Legal Profession”, explored the legal foundations for recognizing and protecting AI assets. The upcoming third article, “Guide for SMEs to Classify, Protect, and Monetize AI Assets”, will provide practical steps for small and mid-sized enterprises to turn AI into measurable economic value.
To explore the topic further, simply connect with James through his icon below.


New AI-powered tool helps students find creative solutions to complex math proofs

Math students may not blink at calculating probabilities, measuring the area beneath curves or evaluating matrices, yet they often find themselves at sea when first confronted with writing proofs. But a new AI-powered tool called HaLLMos — developed by a team led by Professor Vincent Vatter, Ph.D., in the University of Florida Department of Mathematics — now offers a lifeline.

“Some students love proofs, but almost everyone struggles with them. The ones who love them just put in more work,” Vatter said. “It just kind of blows their minds that there’s no single correct answer — that there are many different ways to do this. It’s very different than just doing computational work.”

Building the tool

HaLLMos was developed by Vatter, as principal investigator, along with Sarah Sword, a mathematics education expert at the Education Development Center; Jay Pantone, an associate professor of mathematical and statistical sciences at Marquette University; and Ryota Matsuura, a professor of mathematics, statistics and computer science at St. Olaf College; with grant support from the National Science Foundation. The tool is freely available at hallmos.com.

The team’s goal was to develop an AI tool powered by a large language model that would support student learning rather than short-circuiting it. HaLLMos provides immediate personalized feedback that guides students through the creative struggle that writing proofs requires, without solving the proofs for them. The tool’s name honors the late Paul Halmos, a renowned mathematician who argued that the mathematics field is a creative art, akin to how painters work.

Students using HaLLMos can select from classic exercises — such as proving that, for all integers, if the square of the integer is even, the integer is even — or use “sandbox mode” to enter exercises from any course. Faculty can create exercises and share them with students.
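The classic exercise mentioned above (proving that if an integer's square is even, the integer itself is even) is typically handled by contraposition. A sketch of the argument in LaTeX:

```latex
\begin{proof}
We argue by contraposition: we show that if $n$ is odd, then $n^2$ is odd.
Suppose $n = 2k + 1$ for some integer $k$. Then
\[
  n^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1,
\]
which is odd. Hence, whenever $n^2$ is even, $n$ must be even.
\end{proof}
```

This is exactly the kind of multi-step reasoning a feedback tool has to check line by line, since a student might, for instance, try to argue directly from $n^2 = 2m$ without justifying why $n$ itself has a factor of 2.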
Vatter introduced HaLLMos to his students last spring in his “Reasoning and Proof in Mathematics” class, a core requirement for math majors that is often the first time students encounter proofs.

“They could use this tool to try out their proofs before they brought them to me. We try to identify the error in a student’s proof and let them go fix it,” Vatter said. “It is difficult for faculty to devote enough time to working individually with students. Our goal is that this tool will provide the feedback in real time to students in the way we would do it if we were there with them as they construct a proof.”

Helping professors and students excel

“I think every math professor would love to give more feedback to students than we are able to,” Vatter said. “That’s one of the things that inspired this.”

The next steps for Vatter and his colleagues include getting more pilot sites to use the tool and continuing to improve its responses. “We’d like it to be good at any kind of undergraduate mathematics proofs,” he said. Vatter also intends to explore moving HaLLMos to UF’s HiPerGator, the country's fastest university-owned supercomputer. “It’s our goal to have it remain publicly accessible,” Vatter said.

This research was supported by a grant from the National Science Foundation Division of Undergraduate Education.


ChatGPT-5.2 Now Achieves “Expert-Level” Performance — Is this the Holiday Gift Research Communications Professionals Needed?

With OpenAI’s latest release, GPT-5.2, AI has crossed an important threshold in performance on professional knowledge-work benchmarks. Peter Evans, Co-Founder & CEO of ExpertFile, outlines how these technologies will fundamentally improve research communications and shares tips and prompts for PR pros.

OpenAI has just launched GPT-5.2, describing it as its most capable AI model yet for professional knowledge work — with significantly improved accuracy on tasks like creating spreadsheets, building presentations, interpreting images, and handling complex multistep workflows. And based on our internal testing, we're really impressed.

For communications professionals in higher education, non-profits, and R&D-focused industries, this isn’t just another tech upgrade — it’s a meaningful step forward in addressing the “research translation gap” that can slow storytelling and media outreach. According to OpenAI, GPT-5.2 represents measurable gains on benchmarks designed to mirror real work tasks. In many evaluations, it matches or exceeds the performance of human professionals.

Also, before you hit reply with “Actually, the best model is…” — yes, we know. ChatGPT-5.2 isn’t the only game in town, and it’s definitely not the only tool we use. Our ExpertFile platform uses AI throughout, and I personally bounce between Claude 4.5, Gemini, Perplexity, NotebookLM, and more specialized models depending on the job to be done. LLM performance right now is a full-contact horse race — today’s winner can be tomorrow’s “remember when,” so we’re not trying to boil the ocean with endless comparisons.

We’re spotlighting GPT-5.2 because it marks a meaningful step forward in the exact areas research comms teams care about: reliability, long-document work, multi-step tasks, and interpreting visuals and data. Most importantly, we want this info in your hands because a surprising number of comms pros we meet still carry real fear about AI — and long term, that’s not a good thing. Used responsibly, these tools can help you translate research faster, find stronger story angles, and ship more high-quality work without burning out.

When "Too Much" AI Power Might Be Exactly What You Need

AI expert Allie K. Miller's candid but positive review of an early testing version of ChatGPT-5.2 highlights what she sees as drawbacks for casual users: "outputs that are too long, too structured, and too exhaustive." She goes on to say that in her tests, she observed that ChatGPT-5.2 "stays with a line of thought longer and pushes into edge cases instead of skating on the surface."

Fair enough — all good points. However, for communications professionals, these so-called "downsides" for casual users are precisely the capabilities we need. When you're assessing complex research and developing strategic messaging for a variety of important audiences, you want an AI that fits Miller's observation that GPT-5.2 feels like "AI as a serious analyst" rather than "a friendly companion." That's not a critique of our world — it's a job description for comms pros working in sectors like higher education and healthcare. Deep research tools that refuse to take shortcuts are exactly what research communicators need.

So let's talk more specifically about how comms pros can think about these new capabilities:

1. AI Is Your New Speed-Reading Superpower for Research

You can upload an entire NIH grant, a full clinical trial protocol, or a complex environmental impact study and ask the model to highlight where key insights — like an unexpected finding — are discussed. It can do this in a fraction of the time it would take a human reader. This isn’t about being lazy. It’s about using AI to assemble the tedious information you need to craft compelling stories while other teams are still parsing dense text manually.

2. The Chart Whisperer You’ve Been Waiting For

We’ve all been there — squinting at a graph of scientific data that looks like abstract art, waiting for the lead researcher to clarify what those error bars actually mean. Recent improvements in how GPT-5.2 handles scientific figures and charts show stronger performance on multimodal reasoning tasks, indicating a better ability to interpret and describe visual information like graphs and diagrams. With these capabilities, you can unlock the data behind visuals and turn them into narrative elements that resonate with audiences.

3. A Connection Machine That Finds Stories Where Others See Statistics

Great science communication isn’t about dumbing things down — it’s about building bridges between technical ideas and the broader public. GPT-5.2 shows notable improvements in abstract reasoning compared with earlier versions, based on internal evaluations on academic reasoning benchmarks. For example, teams working on novel materials science or emerging health technologies can use this reasoning capability to highlight connections between technical results and real-world impact — something that previously required hours of interpretive work. These gains help the AI spot patterns and relationships that can form the basis of compelling storytelling.

4. Accuracy That Gives You More Peace of Mind...When Coupled With Human Oversight

Let’s address the elephant in the room: AI hallucinations. You’ve probably heard the horror stories — press releases that cited a study that didn’t exist, or a “quote” that an expert never said. According to OpenAI, GPT-5.2 has meaningfully reduced error rates compared with its predecessor. Even with all these improvements, human review with your experts and careful editing remain essential, especially for anything that will be published or shared externally.

5. The Speed Factor: When “Urgent” Actually Means Urgent

With the speed of media today, being second often means being irrelevant. GPT-5.2’s performance on workflow-oriented evaluations suggests it can synthesize information far more quickly than manual review, freeing up more time for strategic work. Deeper reasoning and longer contexts — the kinds of tasks that matter most in research translation — still require more processing time, though costs continue to improve. Savvy communications teams will adopt a tiered approach: using faster models for simple tasks such as social posts and routine responses, and reasoning-optimized settings for deep research.

Your Action Plan: The GPT-5.2 Playbook for Comms Pros

Here’s a tactical checklist to help your team capitalize on these advances.

#1 Select the Right AI Model for the Job: lowers time and costs
• Use fast, general configurations for routine content
• Use reasoning-optimized configurations for complex synthesis and deep document understanding
• Use higher-accuracy configurations for high-stakes projects

#2 Find Hidden Ideas Beyond the Abstract: let deeper reasoning models do the heavy work
• Upload complete PDFs — not just the 2-page summary you were given
• Use deeper reasoning configurations to let the model work through the material

Try these prompts in ChatGPT-5.2:
• “What exactly did the researchers say about this unexpected discovery that would be of interest to my <target audience>? Provide quotes and page references where possible.”
• “Identify and explain the research methodology used in this study, with references to specific sections.”
• “Identify where the authors discuss limitations of the study.”
• “Explain how this research may lead to further studies or real-world benefits, in terms relatable to a general audience.”

#3 Unlock Your Story: leverage improvements in pattern recognition and reasoning

Try these prompts:
• “Using abstract reasoning, find three unexpected analogies that explain this complex concept to a general audience.”
• “What questions could the researchers answer in an interview that would help us develop richer story angles?”

#4 Change the Way You Write Captions: take advantage of how ChatGPT-5.2 processes and reasons about images, charts, diagrams, and other visuals

Try these prompts:
• Clinical trial graphs: “Analyze this uploaded trial-results graph. Identify key trends and comparisons to controls, then draft a 150-word summary with plain-language explanations and suggested captions suitable for donor communications.”
• Medical diagrams: “Interpret these uploaded images. Extract diagnostic insights, highlight innovations, and generate a patient-friendly explainer: bullet points plus one visual caption.”

A Word of Caution: Keep Experts in the Loop to Verify Information

Even with improved reliability, outputs should be treated as drafts. If your team does not yet have formal AI use policies, it's time to get started, because governance will be critical as AI use scales in 2026 and beyond. A trust-but-verify policy with experts treats AI as a co-pilot — helpful for heavy lifting — while humans remain accountable for approval and publication.

The Importance of Humans (aka The Good News)

Remember: the future of research communication isn’t about AI taking over — it’s about AI empowering us to do the strategic, human work that machines cannot. That includes:
• Building relationships across your institution
• Engaging researchers in storytelling
• Discovering narrative opportunities
• Turning discoveries into compelling narratives that influence audiences

With improvements in speed, reasoning, and reliability, the question isn’t whether AI can help — it’s what research stories you’ll uncover next to shape public understanding and impact.
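The tiered approach in the playbook above can be sketched as a simple routing rule: map each category of communications task to a model tier, with a safe default for anything unclassified. This is an illustrative sketch only; the task categories and tier names are hypothetical placeholders, not part of any vendor's API.

```python
# Illustrative sketch of tiered model selection for comms work.
# Task categories and tier names are hypothetical placeholders.

ROUTING = {
    "social_post": "fast-general",        # routine content: cheapest tier
    "media_pitch": "fast-general",
    "research_brief": "reasoning",        # complex synthesis of long documents
    "grant_summary": "reasoning",
    "crisis_statement": "high-accuracy",  # high-stakes, reputation-sensitive work
}

def pick_tier(task_type: str, default: str = "fast-general") -> str:
    """Return the model tier to use for a given communications task.

    Unknown task types fall back to the cheapest tier so that new
    categories never silently consume expensive reasoning capacity.
    """
    return ROUTING.get(task_type, default)

print(pick_tier("social_post"))       # fast-general
print(pick_tier("crisis_statement"))  # high-accuracy
print(pick_tier("unlisted_task"))     # fast-general (fallback)
```

Keeping the mapping in one table makes the policy auditable: when your AI use guidelines change, you update the routing in one place rather than in every workflow.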
FAQ

How is AI changing expectations for accuracy in research and institutional communications?
AI is shifting expectations from “fast output” to defensible accuracy. Better reasoning means fewer errors in research summaries, policy briefs, and expert content—especially when you’re working from long PDFs, complex methods, or dense results. The new baseline is: clear claims, traceable sources, and human review before publishing.

Why does deeper AI reasoning matter for communications teams working with experts and research content?
Comms teams translate multi-disciplinary research into messaging that must withstand scrutiny. Deeper reasoning helps AI connect findings to real-world relevance, flag uncertainty, and maintain nuance instead of flattening meaning. The result is work that’s easier to defend with media, leadership, donors, and the public—when paired with expert verification.

When should communications professionals use advanced AI instead of lightweight AI tools?
Use lightweight tools for brainstorming, social drafts, headlines, and quick rewrites. Use advanced, reasoning-optimized AI for high-stakes deliverables: executive briefings, research positioning, policy-sensitive messaging, media statements, and anything where a mistake could create reputational, compliance, or scientific credibility risk. Treat advanced AI as your “analyst,” not your autopilot.

How can media relations teams use AI to find stronger story angles beyond the abstract?
AI can scan full papers, grants, protocols, and appendices to surface where the real story lives: unexpected findings, practical implications, limitations, and unanswered questions that prompt great interviews. Ask it to map angles by audience (public, policy, donors, clinicians) and to point to the exact sections that support each angle.

How should higher-ed comms teams use AI without breaking embargoes or media timing?
AI can speed prep work—backgrounders, Q&As, lay summaries, caption drafts—before an embargo lifts. The rule is simple: treat embargoed material like any sensitive document. Use approved tools, restrict sharing, and avoid pasting embargoed text into unapproved systems. Use AI to build assets early, then finalize post-approval at release time.

What’s the best way to keep faculty “in the loop” while still moving fast with AI?
Use AI to produce review-friendly drafts that reduce the load on researchers: short summaries, suggested quotes clearly marked as drafts, and a checklist of claims needing verification (numbers, methods, limitations). Then route the draft to the expert with specific questions, not a wall of text. This keeps approvals fast while protecting scientific accuracy and trust.

How should teams handle charts, figures, and visual data in research communications?
AI can turn “chart confusion” into narrative—if you prompt for precision. Ask it to identify trends, group comparisons, and what the figure does not show (limitations, missing context). Then verify with the researcher, especially anything involving significance, controls, effect size, or causality. Use the output to write captions that are accurate and accessible.

Do we need an AI use policy in comms and media relations—and what should it include?
Yes—because adoption scales faster than risk awareness. A practical policy should define: approved tools, what data is restricted, required human review steps, standards for citing sources and page references, rules for drafting quotes, and escalation paths for sensitive topics (health, legal, crisis). Clear guardrails reduce fear and prevent avoidable reputational mistakes.

If you’re using AI to move faster on research translation, the next bottleneck is usually the same for many PR and comms pros: making your experts more discoverable in generative search, on your website, and in other media.
ExpertFile helps media relations and digital teams organize their expert content by topic, keep detailed profiles current, and respond faster to source requests—so you can boost your AI citations and land more coverage with less work. For more information, visit us at www.expertfile.com.


Acing AI interviews: Career expert on strategies for job seekers

AI-conducted interviews are becoming a standard step in the hiring process, but many job seekers still aren’t sure how to handle them. University of Delaware career expert Jill Gugino Panté says candidates should treat these algorithm-driven interviews with the same seriousness as traditional ones, and she details how this can be done. Panté, director of UD’s Lerner College Career Services Center, can discuss what today’s AI interview platforms really measure, from confidence and tone to eye contact and facial expressions, and how job seekers can stand out. She can also explain what recruiters are looking for in the AI-generated summaries that often determine who moves to the next round. Panté’s expert tips include:
• Check your equipment to make sure everything is working and the software is updated; turn off all notifications to avoid distractions and set up your space with good lighting and a neutral background.
• Smile and maintain your energy, as some AI software will assess your tone and engagement.
• Prepare as you would for any other interview: review the job description, research the organization and use the STAR method (Situation, Task, Action, Result) when providing examples.
• Be sure to look at the camera and not the screen. It might feel awkward, but that’s technically where the “eye contact” will be.
• Some platforms will allow you to review your recording before submitting. Use this opportunity to take notes about your body language, pacing and clarity.
To contact Panté directly and arrange an interview, visit her profile and click on the connect button. Interested journalists can also send an email to MediaRelations@udel.edu.


Roderick Cooke, PhD, French and Francophone Studies Professor, Shares Thoughts on Louvre Heist, Artifacts Stolen

On Sunday, October 19, at 9:34 a.m., four masked individuals burst into the Louvre’s Galerie d’Apollon through a second-floor window they had cut open. Hurriedly, they smashed open two display cases, seized eight pieces of jewelry, then shimmied down a ladder and sped off on motorbikes toward Lyons. In seven minutes, in broad daylight, they absconded with an estimated $102 million in valuables from the world’s most famous museum. This past Saturday, October 25, French authorities announced the first arrests in connection with the daring heist. Despite the police’s progress, however, the country continues to debate the matter, embroiled in discussions of heritage, history and national identity. Recently, Roderick Cooke, PhD, director of French and Francophone Studies at Villanova University, shared his perspective on the situation as well as the artifacts lost.

Q: The Louvre heist has been described as “brazen,” “shocking” and a “terrible failure” on security’s part. Is there any sort of precedent for this event in the museum’s history?

Dr. Cooke: Nothing on this scale has happened to the Louvre since its founding as a museum during the Revolution. The closest equivalent is the 1911 theft of the Mona Lisa by a former employee who claimed it should be returned to Italy. However, that was a single painting, the heist was not committed by organized crime, and the Mona Lisa did not have the renown it enjoys today. The impact of the theft was thus lower, although it did cause major outrage and a sweeping law-enforcement response at the time. Ironically, that theft is often credited with making da Vinci’s painting the global icon it continues to be.

Q: What has the reaction to this event been among the French people?

DC: It’s harder to get a sense of reactions across French society, because so much of the aftermath has focused on the opinions of the intellectual milieux. And in those realms, it has immediately become a political football.
Individuals positioning themselves as anti-elite or anti-status quo, such as Jordan Bardella of the National Rally party, have called the theft a “humiliation,” immediately tying it to French national prestige. Former President François Hollande has conversely, and in vain, called for the event to be de-polemicized, citing national solidarity. This is happening because the Louvre is one of the most visible manifestations of French soft power—the most-visited museum anywhere on Earth. As such, anything attacking its integrity becomes an attack on the nation, and how individual French citizens feel about the theft is closely tied to their broader view of the nation.

Q: Several of the items stolen from the Louvre once belonged to Empress Eugénie. Could you share a bit of information on her story?

DC: Eugénie de Montijo was a Spanish aristocrat who married Napoleon III, Emperor of the French from 1852 to 1870. It was a time of authoritarian repression and sham democracy—Napoleon III installed the Empire through a coup. Its clearest legacy is that Paris looks the way it does today largely because of the thorough modernizations overseen by Napoleon III’s appointee, Baron Haussmann. So, Eugénie and her now-lost jewels represent a complex point in French history, when culture and the economy developed quickly, but did so in a climate of fear for any French person who opposed the regime too loudly (like Victor Hugo, who went into exile on the Channel Islands and wrote poems savaging Napoleon III and his deeds). Some accused the Empress of being responsible for the more hardline and conservative stances taken by her husband’s government. On a different note, she was a diligent patron of the arts and arguably the most significant figure in the fashion world of her day, famous for setting trends, such as the bustle, that radiated across Europe. This explains the mix of anger and admiration that followed her, depending on the sphere she was operating in.
A new English-language biography argues that, far from being a traditionalist, she was a pioneering feminist by the standards of the time. It looks like her historical importance will continue to be debated.

Q: Interior Minister Laurent Nuñez described the stolen items as “of immeasurable heritage value.” How significant a cultural loss do you consider this theft?

DC: These jewels are referred to in French as “les Joyaux de la Couronne” (the Crown Jewels), but of course that phrase lands very differently in republican France than it does across the water in the United Kingdom. The items actually represent several different dynasties of French rulers, some of whom came to power through direct conflict with others. The now-ransacked display at the Louvre smoothed over these historical divisions, for which many French people died over the centuries. President Macron referred to the stolen items as embodying “our history,” which is emblematic of the French state’s work to create a conceptual present-day unity out of the clashes of the past. At a time when France is arguably more divided than at any point since World War II, any unitary symbol of identity takes on greater significance.

Q: Do you have any closing thoughts on the artifacts taken and what they represent?

DC: I’d reemphasize the previous point about the smoothing effect of the museum display on the violent history that made it possible. Much of the reporting on the stolen jewels lists off the different queens and empresses who owned them, without giving readers a sense of the complicated succession of regime changes and ideologies that put those women in power in the first place. The relative stability of the last 60-odd years is an anomaly in modern French history.
This set of jewels and the names of their original owners may seem far removed from the concerns of an ordinary French citizen today, but just beneath their surface is a legacy of changing governments and tensions between social classes that survives in new forms in 2025.


The Thrill of Fear: The History and Cultural Significance of Horror Movies

From flickering silent films to today’s big-budget blockbusters, horror movies have always tapped into humanity’s oldest emotion: fear. Across decades, they’ve reflected social anxieties, moral questions, and shifting definitions of what scares us. Yet behind every scream lies a story about culture, creativity, and the psychology of thrill.

The Origins of On-Screen Fear

Horror cinema began in the late 1890s and early 1900s with short silent films inspired by literature and folklore. One of the earliest, Le Manoir du Diable (1896), often considered the first horror film, introduced audiences to bats, ghosts, and the Devil himself. By the 1920s, German Expressionist films like Nosferatu and The Cabinet of Dr. Caligari used shadow and distortion to create unease, shaping the language of horror still used today. Hollywood’s Golden Age of Horror in the 1930s brought monsters to life — Dracula, Frankenstein, and The Mummy — giving audiences both fright and fascination during a time of global economic depression. These films helped people confront real-world fears symbolically, offering escape through imagination.

Fear Evolves with the Times

Each generation has reinvented horror to reflect its cultural moment. The 1950s’ atomic-age fears spawned giant monsters and alien invasions. The 1960s and ’70s shifted toward psychological and supernatural horror with classics like Psycho, The Exorcist, and The Texas Chain Saw Massacre — films that exposed anxieties about social change, faith, and violence. The 1980s and ’90s introduced slasher icons such as Halloween’s Michael Myers and A Nightmare on Elm Street’s Freddy Krueger, mixing terror with pop-culture spectacle. By the 2000s, horror had splintered into subgenres — from found-footage realism (The Blair Witch Project, Paranormal Activity) to elevated art-house films like Get Out and Hereditary, which use fear to explore race, grief, and identity.
Why We Like to Be Scared

Psychologists suggest people enjoy horror because it offers safe danger — a way to experience fear, adrenaline, and relief without real threat. Watching horror triggers the body’s fight-or-flight response, followed by catharsis once the tension resolves. Culturally, it provides a mirror to our collective psyche: what we fear, we face, and what we face, we sometimes conquer. Horror also brings people together — in theaters, at home, or online — to share an intense emotional experience. Whether screaming, laughing, or peeking through fingers, audiences participate in a ritual as old as storytelling itself.

The Icons of the Genre

Among the most popular and influential horror films of all time:
• Psycho (1960)
• The Exorcist (1973)
• Halloween (1978)
• A Nightmare on Elm Street (1984)
• The Silence of the Lambs (1991)
• The Ring (2002)
• Get Out (2017)
• Hereditary (2018)

Each left a lasting mark on both cinema and culture — showing that horror, far from being niche, remains one of the most expressive and enduring genres in film history.

Connect with our experts about the history and popularity of scary movies and horror flicks. Check out our experts here: www.expertfile.com
