

How AI can improve poor leadership writing and boost productivity

Poor written communication from leaders can create the very confusion it was intended to avoid. University of Delaware career expert Jill Gugino Panté suggests using AI to sharpen emails, clarify expectations and reduce unnecessary calls. Getting through to employees with strong messaging can boost productivity by saving time and reducing unwanted meetings, she says. Panté, director of UD’s Lerner Career Services Center, says that good leadership writing should be direct and outcome-driven, with no fluff. She offered the following advice for improvement.

✅ Don’t bury the lead. Start with the decision that needs to be made, the action that is required, and the deadline. If your writing doesn’t reduce ambiguity, it adds to it. Vague communication creates interpretation gaps which, in turn, create more meetings. When ownership isn’t defined, decisions aren’t documented, or outcomes aren’t clear, teams default to “Let’s hop on a call.” Meetings then become the fallback for unclear thinking.

✅ Generative AI can be a powerful clarity tool if it’s used intentionally. When used well, it can sharpen your ask and structure communication for action. The key is prompting it to refine your message, not just polish it. Leaders can use prompts like:
• “Rewrite this message so the action, owner, deadline, and success metrics are explicitly stated”
• “What assumptions or ambiguities exist in this message?”

✅ Good writing can replace unnecessary meetings. If communication is not direct, outcome-driven, and structured for action, it will cost you time somewhere else. Here are some practical actions that leaders can take in their writing:
• Start with the Ask - Be explicit about what decision or action is needed. Don’t make people search for it.
• Define Outcomes - Clarify deliverables, timelines and budgets, and state what success looks like.
• Clarify Ownership - Identify who is responsible for the request.
• Document Decisions - Write down what has been decided and reiterate next steps, owners, and deadlines.

To connect with Panté directly and arrange an interview, visit her profile and click on the “contact” button. Interested media can also send an email to MediaRelations@udel.edu.
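The prompts Panté suggests can also be reused programmatically: a small helper can wrap any draft message in a refinement prompt before it is pasted into a chatbot or sent through an AI tool's API. The sketch below is illustrative, not part of Panté's advice; the function name and sample draft are invented:

```python
def clarity_prompt(draft: str) -> str:
    """Wrap a draft message in a prompt that asks an AI assistant
    to refine the message, not just polish it."""
    return (
        "Rewrite this message so the action, owner, deadline, "
        "and success metrics are explicitly stated. Then list any "
        "assumptions or ambiguities that remain in the message.\n\n"
        f"Message:\n{draft}"
    )

# A vague draft of the kind that tends to generate extra meetings.
draft = "Team, we should probably sync about the Q3 report sometime soon."
prompt = clarity_prompt(draft)
print(prompt)
```

Pasting the resulting prompt into any generative AI tool applies both of the suggested refinements in a single pass.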

Jill Panté

We Don’t Realize How Much Time We Spend With AI. Because It’s Hiding in Our Phones

If you ask most people how often they use AI, they’ll say something like: “I tried ChatGPT a couple of times” or “I don’t really use AI.” But look at their phone, and the story is completely different.

Digital wellness platform Offline.now has found that we already spend about 10 of our 16 waking hours on screens, roughly 63% of our day. Founder Eli Singer calls AI “the shadow roommate inside those 10 hours”: invisible most of the time, but involved in more of our everyday taps and swipes than we realize. And we now have data to prove it.

A recent Talker Research survey of 2,000 U.S. adults, commissioned by Samsung, found that 90% of Americans use AI features on their phones, but only 38% realize it. Common features like weather alerts, call screening, autocorrect, night-mode camera enhancements and auto-brightness are all powered by AI — yet more than half of respondents initially said they don’t use AI at all. Once shown a list of features, 86% admitted they use AI tools daily. (Lifewire) Singer sees this as a classic “confidence gap” problem applied to AI.

Beyond the “invisible AI” on our phones, generative AI tools like ChatGPT, Claude and image generators are spreading fast. A nationally representative U.S. survey from Harvard’s Kennedy School and the Real-Time Population Survey found that by August 2024, about 39% of adults aged 18–64 were using generative AI. More than 24% of workers had used it at least once in the previous week, and nearly 1 in 9 used it every single workday. (NBER)

Globally, usage is enormous. A World Bank-backed analysis of online activity estimated that, as of March 2024, the top 40 generative AI tools attracted nearly 3 billion visits per month from hundreds of millions of users. ChatGPT alone commanded about 82.5% of that traffic. (Open Knowledge Repository)

From a mental-health perspective, psychotherapist Harshi Sritharan, MSW, RSW says the issue isn’t just the number of visits; it’s the way AI subtly shapes the texture of our day.
“Every autocorrect, every AI-sorted inbox, every ‘magic’ photo fix is a tiny cognitive hand-off,” she explains. “Individually they feel helpful. But taken together, they keep your brain in a constant state of micro-decisions and micro-rewards, which is exhausting, especially if you already struggle with ADHD, anxiety or overwhelm.”

She points out that many of her clients only think of “AI time” as the hours they spend in a chatbot window. In reality, AI is involved when:
• Their phone decides which notifications to surface
• A map app reroutes them automatically
• Spam filters silently screen hundreds of emails

“By the time they open a dedicated AI app, their nervous system has already been engaging with AI-driven features all day,” Sritharan says. “That’s part of why people end the day feeling tapped out but can’t quite explain why.”

Singer worries that this “shadow AI” is quietly eating into the same finite resource Offline.now tracks with screens in general: attention. “We already know 10 hours a day on screens is unsustainable for our focus and our relationships,” he says. “Layer AI on top — systems designed to predict and nudge our behavior — and you’re not just losing time. You’re outsourcing micro-chunks of judgment, memory and choice without even noticing.”

So how much time are people spending with AI? Right now, no one has a perfect number, and that’s exactly the point. The best data we have suggests:
• Most smartphone users are already interacting with AI daily, whether they know it or not. (Lifewire)
• Roughly 4 in 10 U.S. adults now use generative AI, with a growing share using it at work every week or every day. (Harvard Kennedy School)
• Globally, billions of monthly visits are flowing into AI tools on top of our existing 10-hour screen days. (Open Knowledge Repository)

“The future isn’t AI or no AI,” Singer says.
“It’s: Can you be conscious about how you use it — instead of letting it hijack your attention and manage your life?”

Featured Experts
• Eli Singer – Founder of Offline.now and author of Offline.now: A Practical Guide to Healthy Digital Balance. He brings proprietary behavioral data on screen time and digital overwhelm, and a framework (the Offline.now Matrix) for rebuilding confidence through 20-minute, real-world steps instead of all-or-nothing “detox” advice.
• Harshi Sritharan, MSW, RSW – Psychotherapist specializing in ADHD, anxiety and digital dependency. She explains how AI-assisted micro-tasks interact with dopamine, attention and overwhelm, and offers brain-friendly ways to renegotiate your relationship with both screens and AI.

Expert interviews can be arranged through the Offline.now media team.

Eli Singer, Harshi Sritharan

The AI Journal: UF and other research universities will fuel AI. Here’s why

In the global AI race, pitting small competitors against major ones, established companies against new players, and ubiquitous uses against niche ones, the next giant leap isn’t about faster chips or improved algorithms. With AI agents having already vacuumed up so much of the information on the internet, the next great uncertainty is where they’ll find the next trove of big data. The answer is not in Silicon Valley. It’s all across the nation at our major research universities, which are key to maintaining global competitiveness against China.

To teach an AI system to “think” requires it to draw on massive amounts of data to build models. At a recent conference, Ilya Sutskever, the former chief scientist at OpenAI — the creator of ChatGPT — called data the “fossil fuel of AI.” Just as we will use up fossil fuels because they are not renewable, he said, we are running out of new data to mine to keep fueling the gains in AI.

However, so much of this thinking assumes AI was created by private Silicon Valley start-ups and the like. AI’s history is actually deeply rooted in U.S. universities dating back to the 1940s, when early research laid the groundwork for the algorithms and tools used today. While the computing power to use those tools was created only recently, the foundation was laid after World War II, not in the private sector but at our universities.

Contrary to a “fossil fuel problem,” I believe AI has its own renewable fuel source: the data and expertise generated from our comprehensive public academic institutions. In fact, at the major AI conferences driving the field, most papers come from academic institutions. Our AI systems learn about our world only from the data we offer them. Current AI models like ChatGPT are scraping information from some academic journal articles in open-access repositories, but there are enormous troves of untapped academic data that could be used to make all these models more meaningful.
A way past data scarcity is to develop new AI methods that leverage all of our knowledge in all of its forms. Our research institutions have the varied expertise in all aspects of our society to do this.

Here’s just one example: We are creating the next generation of “digital twin” technology. Digital twins are virtual recreations of places or systems in our world. Using AI, we can develop digital twins that gather all of our data and knowledge about a system — whether a city, a community or even a person — in one place and allow users to ask “what if” questions.

The University of Florida, for example, is building a digital twin for the city of Jacksonville, which contains the profile of each building, elevation data throughout the city and even septic tank locations. The twin also embeds detailed state-of-the-art waterflow models. In that virtual world, we can test all sorts of ideas for improving Jacksonville’s hurricane evacuation planning and water quality before implementing them in the actual city. As we continue to layer more data into the twin — real-time traffic information, scans of road conditions and more — our ability to deploy city resources will be more informed and driven by real-time actionable data and modeling. Using an AI system backed by this digital twin, city leaders could ask, “How would a new road in downtown Jacksonville impact evacuation times? How would the added road modify water runoff?” and so on.

The possibilities for this emerging area of AI are endless. We could create digital twins of humans to layer human biology knowledge with personalized medical histories and imaging scans to understand how individuals may respond to particular treatments. Universities are also acquiring increasingly powerful supercomputers that are supercharging their innovations, such as the University of Florida’s HiPerGator, recently acquired from NVIDIA, which is being used for problems across all disciplines.
Oregon State University and the University of Missouri, for example, are using their own access to supercomputers to advance marine science discoveries and improve elder care.

In short, to see the next big leap in AI, don’t immediately look to Silicon Valley. Start scanning the horizon for those research universities that have the computing horsepower and the unique ability to continually renew the data and knowledge that will supercharge the next big thing in AI.

Alina Zare

How Higher Ed Should Tackle AI

Higher learning in the age of artificial intelligence isn’t about policing AI, but rather reinventing education around the new technology, says Chris Kanan, an associate professor of computer science at the University of Rochester and an expert in artificial intelligence and deep learning. “The cost of misusing AI is not students cheating, it’s knowledge loss,” says Kanan. “My core worry is that students can deprive themselves of knowledge while still producing ‘acceptable work.’”

Kanan, who writes about and studies artificial intelligence, is helping to shape one of the most urgent debates in academia today: how universities should respond to the disruptive force of AI. In his latest essay on the topic, Kanan laments that many universities consider AI “a writing problem,” noting that student writing is where faculty first felt the force of artificial intelligence. But, he argues, treating student use of AI as something to be detected or banned misunderstands the technological shift at hand.

“Treating AI as ‘writing-tech’ is like treating electricity as ‘better candles,’” he writes. “The deeper issue is not prose quality or plagiarism detection,” he continues. “The deeper issue is that AI has become a general-purpose interface to knowledge work: coding, data analysis, tutoring, research synthesis, design, simulation, persuasion, workflow automation, and (increasingly) agent-like delegation.” That, he says, forces a change in pedagogy.

What Higher Ed Needs to Do

His essay points to universities that are “doing AI right,” including hiring distinguished artificial intelligence experts in key administrative leadership roles and making AI competency a graduation requirement. Kanan outlines structural changes he believes need to take place in institutions of higher learning:
• Rework assessment so it measures understanding in an AI-rich environment.
• Teach verification habits.
• Build explicit norms for attribution, privacy, and appropriate use.
• Create top-down leadership so AI strategy is coherent and not fractured among departments.
• Deliver AI literacy across the entire curriculum.
• Offer deep AI degrees for students who will build the systems everyone else will use.

For journalists covering AI’s impact on education, technology, workforce development, or institutional change, Kanan offers a research-based, forward-looking perspective grounded in both technical expertise and a deep commitment to the mission of learning. Connect with him by clicking on his profile.

Christopher Kanan

How to Make Your Experts “AI-Ready”

AI is changing how people discover expertise. Today, journalists, event organizers, researchers, and the public increasingly turn to tools like ChatGPT, Claude, Perplexity, and Google Search’s AI summaries powered by Gemini. Instead of clicking through pages of links, they expect clear, credible answers, often delivered instantly, with citations.

That shift has major implications for organizations. It’s no longer enough for your experts to “rank well.” They need to be understood, trusted, and accurately represented by AI systems. So the real question becomes: When AI talks about your experts, does it get it right? This is where LLMs.txt plays an important role, especially when paired with an ExpertFile-powered Expert Center.

What is LLMs.txt (In Plain English)?

...and why it is essential for expert content. LLMs.txt is a small, machine-readable file placed on your organization’s website; for expert content, it sits alongside your main Expert Center. Its purpose is simple: to explain your expertise to AI systems clearly and unambiguously.

“AI systems don’t just scan for keywords; they look for clear meaning, consistent context, and clean formatting — precise, structured language makes it easier for AI to classify your content as relevant.” (Microsoft: Optimizing Your Content for Inclusion in AI Search Answers)

Rather than forcing AI to infer meaning from scattered pages, LLMs.txt explicitly tells systems:
• Who your experts are
• Which pages represent official, curated content
• How expert profiles differ from articles, Q&A, or research content
• How your organization’s expertise should be interpreted as a whole

Think of it as a table of contents and usage guide for AI, helping large language models understand your site the way a communications professional would.

Why This Matters for Visibility and Trust

It Establishes Your Organization as the Source of Truth

AI systems routinely synthesize information from multiple places. Without guidance, they may rely on outdated bios, scraped content, or secondary references. LLMs.txt provides a clear signal: This is our official expert content. This is what represents us.

For ExpertFile clients, this matters because the platform already centralizes and curates expert content, from profiles and directories to Spotlights and Expert Q&A, ensuring that what AI sees is current, governed, and institutionally endorsed. The result: greater accuracy, stronger attribution, and reduced risk of misrepresentation when your experts appear in the ever-growing stream of AI-generated overviews and answers. (ahrefs: AI Overviews Have Doubled)

How It Improves Discovery Across AI Platforms

It Makes Structured Expertise Easier for AI to Use

ExpertFile is purpose-built to publish structured expert content at scale, content that goes well beyond static bios. LLMs.txt simply helps AI recognize and use that structure correctly. It clarifies the role of key ExpertFile content types, including:
• Expert Profiles → Canonical identity, credentials, and areas of expertise
• Spotlight Posts → Timely commentary, thought leadership, and research insights
• Expert Q&A → Authoritative answers to real-world questions
• Directories, Research Bureaus, and Speakers Bureaus → Curated collections of expertise by topic or audience

This makes it easier for AI systems to:
• Match your experts to breaking news and trending topics
• Pull accurate summaries for AI-generated responses
• Identify the right expert for journalists, event organizers, and researchers

Combined with ExpertFile’s extended distribution through expertfile.com and the ExpertFile Mobile App, your expertise is not only published but actively discoverable across channels used by key audiences.

How It Builds Organizational Authority

It Connects Individual Experts to Institutional Credibility

Without context, AI may treat expert pages as isolated profiles. LLMs.txt helps connect the dots.
It tells AI that:
• Your experts are curated and endorsed by the organization
• Their insights are part of a broader expertise ecosystem
• Your institution has depth across priority subject areas

This aligns closely with how ExpertFile structures content to support E-E-A-T (Experience, Expertise, Authority, Trust), not just at the individual level, but across the organization. The outcome: your organization is recognized not just as a collection of experts, but as an authoritative source of knowledge.

How It Works with Google, Gemini, and AI Search

It Supports AI Summaries, Citations, and Knowledge Panels

LLMs.txt helps ensure that when Google’s AI:
• Summarizes your organization
• Cites expert commentary
• Builds “about this topic” panels

…it draws from your official, structured ExpertFile content, rather than fragmented third-party sources. This complements ExpertFile’s existing SEO and AI-discoverability foundation, which includes clean code, proper metadata, schema markup, and frequent crawling by both search engines and AI bots.

How LLMs.txt Fits with SEO, Meta Tags, and Schema

LLMs.txt doesn’t replace SEO; it builds on it. Traditional SEO elements such as page titles, meta descriptions, schema.org markup, and internal linking remain essential for helping search engines index and rank your content. ExpertFile already delivers these fundamentals out of the box, continually testing and evolving SEO and GEO (Generative Engine Optimization) standards as search changes.

“Semantic SEO helps search engines understand context... it now helps bridge a critical gap between traditional SEO and newer generative engine optimization (GEO) and AI optimization (AIO) efforts.” (Search Engine Land: Semantic SEO: How to optimize for meaning over keywords)

LLMs.txt adds a layer designed specifically for AI systems:
• Schema explains individual pages
• LLMs.txt explains your entire expertise ecosystem

In simple terms:
• SEO helps your content get found
• LLMs.txt helps AI understand, summarize, and cite it correctly

Together, they ensure your experts are not only visible but accurately represented wherever AI is shaping discovery.

Why This Is Especially Powerful on ExpertFile

ExpertFile was designed to future-proof expert visibility, offering structured publishing, governance, distribution, inquiry management, analytics, and professional services as part of a continuously evolving SaaS platform. LLMs.txt acts as a multiplier on that foundation:
• Turning your Expert Center into a machine-readable expertise hub
• Strengthening AI discovery without adding operational burden
• Supporting emerging use cases like automated expert matching and AI-assisted research

It’s not about chasing new technology. It’s about ensuring your expertise is clearly defined, properly attributed, and trusted, now and in the future.

The Takeaway

An LLMs.txt file on your ExpertFile organization page helps ensure that:
• Your experts are found by AI tools, not overlooked
• Your content is interpreted correctly, not flattened or misrepresented
• Your organization earns authority and trust in AI summaries, citations, and search results

“AI search isn’t eliminating organic traffic. But it is reducing visits to source websites… Measure presence (citations, mentions) alongside traffic to see real impact.” (Semrush: AI Search Trends for 2026 & How You Can Adapt)

As AI becomes the front door to information, LLMs.txt helps make sure that when people ask for expertise, your organization is the answer they get.
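To make the format concrete, here is a minimal sketch of what an llms.txt file for an Expert Center might contain, following the emerging llms.txt convention (an H1 title, a blockquote summary, and H2 sections listing key links). The organization name, URLs and descriptions below are invented placeholders, not ExpertFile output:

```markdown
# Example University Expert Center

> Official, curated directory of Example University's subject-matter
> experts, maintained by the university's communications team. Profiles
> here are the canonical source for expert bios and credentials.

## Expert Profiles

- [Dr. Jane Doe](https://experts.example.edu/jane-doe): Canonical profile
  with credentials, areas of expertise and media contact details

## Spotlights and Q&A

- [Spotlight posts](https://experts.example.edu/spotlights): Timely expert
  commentary and thought leadership
- [Expert Q&A](https://experts.example.edu/qa): Authoritative expert answers
  to common questions
```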

Robert Carter

Researchers warn of rise in AI-created non-consensual explicit images

A team of researchers, including Kevin Butler, Ph.D., a professor in the Department of Computer and Information Science and Engineering at the University of Florida, is sounding the alarm on a disturbing trend in artificial intelligence: the rapid rise of AI-generated sexually explicit images created without the subject’s consent.

With funding from the National Science Foundation, Butler and colleagues from UF, Georgetown University and the University of Washington investigated a growing class of tools that allow users to generate realistic nude images from uploaded photos — tools that require little skill, cost virtually nothing and are largely unregulated. “Anybody can do this,” said Butler, director of the Florida Institute for Cybersecurity Research. “It’s done on the web, often anonymously, and there’s no meaningful enforcement of age or consent.”

The team has coined the term SNEACI, short for synthetic non-consensual explicit AI-created imagery, to define this new category of abuse. The acronym, pronounced “sneaky,” highlights the secretive and deceptive nature of the practice. “SNEACI really typifies the fact that a lot of these are made without the knowledge of the potential victim and often in very sneaky ways,” said Patrick Traynor, a professor and associate chair of research in UF's Department of Computer and Information Science and Engineering and co-author of the paper.

In their study, which will be presented at the upcoming USENIX Security Symposium this summer, the researchers conducted a systematic analysis of 20 AI “nudification” websites. These platforms allow users to upload an image, manipulate clothing, body shape and pose, and generate a sexually explicit photo — usually in seconds. Unlike traditional tools like Photoshop, these AI services remove nearly all barriers to entry, Butler said. “Photoshop requires skill, time and money,” he said.
“These AI application websites are fast, cheap — from free to as little as six cents per image — and don’t require any expertise.”

According to the team’s review, women are disproportionately targeted, but the technology can be used on anyone, including children. While the researchers did not test tools with images of minors due to legal and ethical constraints, they found “no technical safeguards preventing someone from doing so.” Only seven of the 20 sites they examined included terms of service that require image subjects to be over 18, and even fewer enforced any kind of user age verification. “Even when sites asked users to confirm they were over 18, there was no real validation,” Butler said. “It’s an unregulated environment.”

The platforms operate with little transparency, using cryptocurrency for payments and hosting on mainstream cloud providers. Seven of the sites studied used Amazon Web Services, and 12 were supported by Cloudflare — legitimate services that inadvertently support these operations. “There’s a misconception that this kind of content lives on the dark web,” Butler said. “In reality, many of these tools are hosted on reputable platforms.”

Butler’s team also found little to no information about how the sites store or use the generated images. “We couldn’t find out what the generators are doing with the images once they’re created,” he said. “It doesn’t appear that any of this information is deleted.”

High-profile cases have already brought attention to the issue. Celebrities such as Taylor Swift and Melania Trump have reportedly been victims of AI-generated non-consensual explicit images. Earlier this year, Melania Trump voiced support for the Take It Down Act, which targets these types of abuses and was signed into law this week by President Donald Trump. But the impact extends beyond the famous. Butler cited a case in South Florida where a city councilwoman stepped down after fake explicit images of her — created using AI — were circulated online.
“These images aren’t just created for amusement,” Butler said. “They’re used to embarrass, humiliate and even extort victims. The mental health toll can be devastating.”

The researchers emphasized that the technology enabling these abuses was originally developed for beneficial purposes — such as enhancing computer vision or supporting academic research — and is often shared openly in the AI community. “There’s an emerging conversation in the machine learning community about whether some of these tools should be restricted,” Butler said. “We need to rethink how open-source technologies are shared and used.”

Butler said the published paper — authored by student Cassidy Gibson, who was advised by Butler and Traynor and received her doctorate this month — is just the first step in their deeper investigation into the world of AI-powered nudification tools and an extension of the work they are doing at the Center for Privacy and Security for Marginalized Populations, or PRISM, an NSF-funded center housed at the UF Herbert Wertheim College of Engineering.

Butler and Gibson recently met with U.S. Congresswoman Kat Cammack for a roundtable discussion on the growing spread of non-consensual imagery online. In a newsletter to constituents, Cammack, who serves on the House Energy and Commerce Committee, called the issue a major priority. She emphasized the need to understand how these images are created and their impact on the mental health of children, teens and adults, calling it “paramount to putting an end to this dangerous trend.”

“As lawmakers take a closer look at these technologies, we want to give them technical insights that can help shape smarter regulation and push for more accountability from those involved,” said Butler. “Our goal is to use our skills as cybersecurity researchers to address real-world problems and help people.”

Kevin Butler, Patrick Traynor

The Ads Are Coming! OpenAI is testing ads inside ChatGPT starting this month.

But there's a catch: you can’t just buy your way in.

ChatGPT will soon include “clearly labeled sponsored listings” at the bottom of AI-generated responses. And while the mock-ups don't appear all that sophisticated, it's important to focus on the bigger picture. We're about to see a new wave of “high-intent advertising” that combines the targeting sophistication of social media with the purchase-intent clarity of search advertising. More on that in a moment.

How Do ChatGPT Ads Work?

Starting later this month, free users of the ChatGPT platform and those under 18 will begin seeing ads at the bottom of their screens. First, they will see ChatGPT's answer to their question, which provides a comprehensive, relevant response that builds trust. Then they will see an ad for a sponsored product or service below it. An ad that suddenly doesn't feel like a blunt interruption; it feels like a natural next step. This is premium placement. The user has already received value. They've been educated. And now there's a clear call to action (CTA) that's in context.

OpenAI has stated that the new ads “support a broader effort to make powerful AI accessible to more people.” Translation: as it approaches 1 billion weekly users across 171 countries using ChatGPT for free, OpenAI needs to offset its astronomical burn rate with ads. Makes sense.

This New Era of Conversational Ads Will Be Complicated

But there's a structural difference with these new ads. OpenAI has stated that ads will only appear when they're relevant to that exact conversation. This means you can't just buy your way into ChatGPT Ads. In fact, with ChatGPT you are selected because you're the right answer the user needs at that time. Put another way: when ChatGPT evaluates which sponsored products to show, it will favor brands with demonstrated authority on the topic.
So unlike traditional paid search, where a higher bid gets you ranked in sponsored results, ChatGPT Ads will reward the brands whose content has already been recognized as authoritative by the AI model. Brands with strong organic visibility, topical expertise, and content that aligns with user intent will have a distinct competitive advantage from day one. Brands without that foundation will be paying premium rates to compete with established authorities.

How ChatGPT's Ad Strategy Is Set to Change Digital Marketing

For years, CMOs have treated organic search and paid search as separate budget lines, often managed by different teams. I saw this firsthand, as I helped my client DoubleClick launch its first Ad Exchange network in the US market. Programmatic exchanges brought a new efficiency to digital ad buying. It was a very groovy time.

This feels very different. Why? Because the conventional wisdom has always been that paid search and ads drive immediate results while organic search plays the long game. In 2026, that strategy isn't completely obsolete. But that type of thinking is about to get a lot more expensive for clients if they don't start to appreciate quality “organic” content and its ability to improve their paid advertising ROI. Now organic and paid need to get along to get ahead.

ChatGPT Ads Are Looking for Topical Authority That Experts Can Demonstrate

When ChatGPT evaluates which sponsored products to show, it will favor brands with demonstrated authority on the topic. Brands won't simply be able to “buy” visibility. OpenAI, in its announcements, has been explicit: ads must be relevant to the conversation. Relevance is determined by topical alignment, not budget. A brand spending millions on generic bidding will lose to a smaller competitor whose product is more precisely aligned with what the user actually asked. The ads aren't live yet. But the infrastructure supporting them is.
OpenAI, Google and many of the other generative search platforms are building very sophisticated systems that track topical authority and content quality signals. They're already reshaping how organic search, AI recommendations, and paid advertising work together.

Topical Relevance + Expert Authority Is the Path to Visibility in Search

Investing in well-developed thought leadership programs generates compound returns. You get the organic search results plus improved paid search metrics on generative AI search platforms. When done right, you build authority for AI citations, which then positions you better for ChatGPT ads. Remember, your organic traffic gains are built on authoritative content. They're built on being the answer that search engines and AI systems select. And once you've built that authority, it works everywhere: traditional search, AI Overviews, ChatGPT, and soon… ChatGPT ads.

What To Do Before AI Ad Networks Start to Scale

The early advantage will go to brands that invest in quality content right now. Organizations that invest in expert-authored, intent-aligned content over the next six months will have more AI citation visibility in Google's AI Overviews and LLM platforms like ChatGPT. That means more trust signals, making paid ads more effective when they run. Content that is aligned with user intent:

• Answers a specific question. Not tangentially, not after 2,000 words of context. The answer appears in the opening paragraph, structured for AI extraction.
• Includes expert perspective. Generic information that could come from anywhere doesn't differentiate you. Expert insight, original research, or proprietary frameworks do.
• Demonstrates topical authority. A single authoritative article matters less than a cluster of related content that shows comprehensive expertise on a topic.
• Is structured for scanning. Clear headings (H2, H3), bullet points, tables, Q&A blocks. This structure helps both human readers and AI systems parse meaning.
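The first criterion, leading with the answer, can even be approximated with a crude automated check. The sketch below is illustrative only: the function name, the term-overlap heuristic, and the 0.6 threshold are invented for this example, not an established audit method.

```python
def answers_in_opening(page_text: str, query: str, threshold: float = 0.6) -> bool:
    """Crude heuristic: does the first paragraph cover most of the query's terms?"""
    first_para = page_text.strip().split("\n\n")[0].lower()
    # Ignore short, stopword-like tokens so "do the" doesn't inflate coverage.
    terms = [t for t in query.lower().split() if len(t) > 3]
    if not terms:
        return False
    covered = sum(1 for t in terms if t in first_para)
    return covered / len(terms) >= threshold

# A page that leads with the answer passes; one that buries it fails.
direct = "ChatGPT ads reward topical authority, not bid size.\n\nBackground follows."
buried = "A long introduction.\n\nEventually: topical authority matters for ads."
print(answers_in_opening(direct, "do chatgpt ads reward topical authority"))  # True
print(answers_in_opening(buried, "do chatgpt ads reward topical authority"))  # False
```

A real audit would use your actual decision-stage queries and rendered page copy, but even a toy check like this makes the "don't bury the answer" test concrete.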
Remember, the brands that get the most value out of ChatGPT Ads will be the ones that built intent-aligned content years before the ads launched. They'll have topical clusters, expert perspectives, and the authority signals that make them the natural choice for sponsorship.

Questions CMOs Should Be Asking Their Teams Now to Prepare for ChatGPT Ads

Q. Can I pre-purchase ChatGPT Ads?
As of today, there are no ads in ChatGPT. OpenAI has announced that it will begin internal testing of ads in ChatGPT later this month for free users in the US market.

Q. Do ads influence the answers ChatGPT gives you? What about privacy?
OpenAI states in its release that answers are optimized based on what's most helpful to you. Ads are always clearly labeled and kept separate from answers. The company also says it keeps your conversations private from advertisers and will never sell your data to them.

Q. How do we audit our site content to ensure we're aligned with user intent?
For your top 20-30 decision-stage queries (the ones that drive revenue), here's a quick test. Does the content directly answer the question in the opening paragraph? Are you including question-and-answer formats in your content? If you're burying the answer in a 3,000-word article full of tangents, you're losing visibility in organic search, and you're already failing in ChatGPT's environment. Restructure.

Q. How do we prepare for ChatGPT advertising opportunities?
Build topical authority through content clusters. Don't publish isolated blog posts. Organize your content around core topics your audience cares about. Create a long-form hub article that comprehensively covers the topic, then develop additional linked articles that dive into sub-topics and questions. Link them together. Over time, this structure helps AI systems recognize your brand as authoritative on that topic, which improves both organic rankings and AI citation rates.

Q. Can we still get traction with content that is not authored by experts?
Generic AI-written content won't differentiate you. Get expert voices into your content. Feature your subject-matter experts, and partner with practitioners and customers to contribute original insights, case studies, or frameworks. AI systems can detect authenticity, and original expert perspective is now a ranking signal. This is especially critical as you prepare for ChatGPT ads. OpenAI has prioritized conversations that cite authoritative sources.

Q. How does content need to be structured for citations?
Implement proper schema markup and structured data. AI systems extract information by parsing content structure. If your pages include proper schema markup (FAQPage, HowTo, Review, Product schema), you're making it easier for AI to pull your content into answers. This increases citation rates, which builds authority before ChatGPT ads scale.

Q. How do we allocate our organic and paid programs?
Own the organic + paid intersection. For your highest-intent topics, if you have the budget, invest in both organic visibility and paid campaigns. Run ads targeting the same keywords where you rank organically. This takes up more real estate on the results page and signals authority. It also gives you direct feedback on keyword performance, messaging, and landing page effectiveness. That data informs your organic content strategy and drives more citations, creating a virtuous cycle.

Q. What types of creative will work best in these new ad products?
Until they roll out, it's unwise to make too many predictions. The safe bet is to prepare your team for conversational advertising. ChatGPT ads won't reward traditional ad copy. They'll reward clarity, specificity, and direct value messaging. If you're used to brand-heavy, aspirational creative, this will feel foreign. Start testing conversationally appropriate messaging now: short, clear, problem-focused. Test on existing paid channels and refine before ChatGPT ads launch.

Our Prediction

When ChatGPT ads fully launch and scale, many brands that have invested in organic visibility and content quality will start to pull away from the pack. Remember: the brands that win won't be the ones with the biggest ad budgets. They'll be the ones whose content has already proven they're the right answer. They'll be the ones users already trust, already cite, and already know. The ads are coming. Are you ready?
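As a postscript to the schema-markup question above, here is a minimal sketch of what FAQPage structured data can look like. The question-and-answer text is a hypothetical placeholder, and the Python wrapper simply serializes the JSON-LD payload that a page template would embed in a script tag of type "application/ld+json".

```python
import json

# Hypothetical FAQPage payload; the Q&A text is illustrative only.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do ChatGPT ads differ from traditional paid search?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "Unlike bid-ranked search ads, ChatGPT ads are expected to "
                    "favor brands whose content the model already treats as "
                    "authoritative on the conversation's topic."
                ),
            },
        }
    ],
}

# Serialize for embedding in the page's JSON-LD script tag.
print(json.dumps(faq_schema, indent=2))
```

The same pattern extends to HowTo, Review and Product schema; the point is that structured, parseable markup makes your answers easier for AI systems to extract and cite.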


How corporate competition can spur collaborative solutions to the world's problems

Why can’t large competitive companies come together to work on or solve environmental challenges, AI regulation, polarization or other huge problems the world is facing? They can, says the University of Delaware’s Wendy Smith. While it's difficult, the key is to have these companies collaborate under the guise of competition. Smith, a professor of management and an expert on these types of paradoxes, co-authored a recent three-year study of one of the most profound collaborations of this kind. Her team looked at the unlikely alliance of 13 competing oil and gas companies that eventually formed Canada’s Oil Sands Innovation Alliance (COSIA), which works with experts worldwide to find innovative solutions for environmental and technical challenges in the region. Smith and her co-authors found that those companies were willing to collaborate, but only when collaboration was cast in the language, practices and goals of competition. Given the scope of our global problems, companies must continually work together to offer solutions, so creating that collaboration becomes critical, Smith said. This research offers important insight into how these collaborations are possible. Among the study's key findings:

• Competition can drive cooperation, if leaders harness it. It would make sense to assume that competition undermines collaboration. But the study finds that those who championed alliances used competitive dynamics to strengthen cooperation among rival firms. Rather than suppressing rivalry, leaders leveraged competition as a mechanism to enable joint action toward shared environmental goals. This reframes how organizations can manage tensions between competition and cooperation in partnerships. For example, COSIA leaders created competition between partners to see who would contribute the most valuable environmental innovations. Partners could only benefit from other companies' innovations commensurate with what they themselves shared.

• A "paradox mindset" is key to complex collaborative success. The research identifies the importance of what the authors call a paradox mindset, which sees competition and cooperation not as opposites to be balanced but as interrelated forces that can be used in tandem. Leaders in the study who adopted this mindset were more thoughtful and creative about how to engage both competitive and collaborative practices in the same alliance.

• Traditional balance isn't the goal: process over stability. Instead of pursuing a simplistic "balance" between competing and cooperating, the study shows that effective alliances evolve through process, where competition remains visible and even useful throughout the lifecycle of the alliance.

To connect with Smith directly and arrange an interview, visit her profile and click on the "contact" button. Interested journalists can also send an email to MediaRelations@udel.edu.


Tracking rain patterns will improve hurricane forecasting, UF researcher finds

Studying the precipitation patterns in hurricanes may be key to predicting future storm patterns and their potential strength, a University of Florida researcher has found. Supported by a four-year, $212,000 grant from the National Science Foundation, Professor of Geography Corene Matyas, Ph.D., has identified the patterns of rain rates within storms and studied the moisture surrounding these storms. “We are hoping that, if we have a better prediction of moisture availability, that might help us forecast rain events with greater accuracy,” Matyas said. “The more we know about how storms develop, the more we can predict their path and magnitude.”

The ideal stage for the perfect storm

The potential for devastating high winds, storm surge and flooding poses an annual threat to Florida and its residents. With 1,350 miles of coastline and relatively flat geography that juts out to separate the warm waters of the southeast Atlantic and the Gulf, Florida creates the ideal stage for the perfect storm. Last year broke records with 18 named storms, including 11 hurricanes in the Atlantic basin and three major hurricanes making landfall along Florida’s coast. Early predictions are crucial to hurricane preparedness, allowing for increased response time and resource allocation, and hurricane modeling is essential for understanding these somewhat unpredictable storms. Advances in technology, data collection and the use of artificial intelligence in hurricane modeling have significantly improved the ability to predict a storm’s path and strength.

Artificial intelligence helps researchers understand hurricanes

Matyas has completed two studies on this topic. The first study processed 12,000 images of rain rates from tropical storms and hurricanes in the Atlantic, using a machine learning algorithm called a convolutional autoencoder. Similar in use to image recognition software, the autoencoder broke the rain rate images down and simplified the patterns. Six main types, or clusters, of rainfall patterns for tropical cyclones were identified. At a presentation of the work to forecasters at the National Weather Service office in Jacksonville, the forecasters confirmed that one of the patterns matches what they typically see when late-season storms make landfall over Florida’s Gulf Coast. The second study used the autoencoder to process 4,600 images representing the amount of moisture in the atmosphere extending 1,000 kilometers away from each hurricane. “We looked for commonalities in the patterns and found four dominant patterns of moisture that accompany Atlantic basin hurricanes,” Matyas said. “We found the biggest storms with the most moisture make the most landfalls, typically in the Caribbean and even in southern Florida. They also have a large moisture pool, giving them a bigger chance of heavy rainfall.” According to Matyas, three of the moisture patterns found in the second study were strikingly similar to those found in the earlier study, which used fewer observations in a statistical analysis. With this use of AI, researchers can now recognize and understand these moisture patterns better, which can improve predictions about a storm’s intensity, its size and the amount of rainfall that will result from it.

Early, accurate storm predictions allow Floridians time to prepare

Rapid intensification – when, in a 24-hour period, a storm experiences a sudden drop in pressure and a dramatic increase in wind speed – creates much more of a challenge for forecasters. “We tend to boil down a hurricane to a set of coordinates which track the middle of a storm,” Matyas said. “And the fastest winds do focus there, but the moisture gets pulled from thousands of kilometers away and the system forces the moisture up. That moisture must go somewhere. So, the outer edges of the storm need to be understood more as well.” Matyas hopes these studies will help scientists classify rain patterns more accurately and consistently.
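The overall pipeline described above (compress each storm image into a compact representation, then group the representations into a handful of clusters) can be sketched in miniature. Everything below is illustrative: random vectors stand in for autoencoder latents, and the toy k-means is a generic stand-in for the study's actual clustering, not its real code.

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Toy k-means: assign each vector to the nearest of k centroids."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every centroid, then nearest assignment.
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave a centroid alone if its cluster empties
                centroids[j] = x[labels == j].mean(axis=0)
    return labels

# Stand-ins for autoencoder latents: 300 "storm images" compressed to 8 features.
latents = np.random.default_rng(42).normal(size=(300, 8))
labels = kmeans(latents, k=6)
print(labels.shape, len(set(labels.tolist())))
```

In the research itself, the compact representations come from a trained convolutional autoencoder rather than random data, and each resulting cluster corresponds to a characteristic rainfall or moisture pattern.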
Continued funding for research at public universities from federal agencies, such as the National Science Foundation and the National Oceanic and Atmospheric Administration, is essential for helping researchers develop tools to detect and predict severe weather events. Matyas is one of two UF faculty members among 18 national researchers named to the 2025 class of fellows by the American Association of Geographers. Matyas and UF Geography Department Chair Jane Southworth, Ph.D. were honored by the organization for their contributions in biogeography, geospatial analytics, soil science, community geography, climatology and other areas related to geography. “I look forward to this opportunity to contribute to the mission of the AAG in a more formal capacity, continuing to research how weather shapes our spaces and share knowledge of earth systems beyond the classroom and the written word to promote an inclusive society,” Matyas said.


AI-driven software is 96% accurate at diagnosing Parkinson's

Existing research indicates that the accuracy of a Parkinson’s disease diagnosis hovers between 55% and 78% in the first five years of assessment. That’s partly because Parkinson’s and its sibling movement disorders share similarities, sometimes making a definitive diagnosis initially difficult. Although Parkinson’s disease is a well-recognized illness, the term can refer to a variety of conditions, ranging from idiopathic Parkinson’s, the most common type, to other movement disorders like the Parkinsonian variant of multiple system atrophy and progressive supranuclear palsy. Each shares motor and nonmotor features, like changes in gait, but possesses a distinct pathology and prognosis. Roughly one in four patients, or even one in two, is misdiagnosed. Now, researchers at the University of Florida and the UF Health Norman Fixel Institute for Neurological Diseases have developed a new kind of software that will help clinicians differentially diagnose Parkinson’s disease and related conditions, reducing diagnostic time and increasing precision beyond 96%. The study was published recently in JAMA Neurology and was funded by the National Institutes of Health. “In many cases, MRI manufacturers don’t communicate with each other due to marketplace competition,” said David Vaillancourt, Ph.D., chair and a professor in the UF Department of Applied Physiology and Kinesiology. “They all have their own software and their own sequences. Here, we’ve developed novel software that works across all of them.” Although there is no substitute for the human element of diagnosis, even the most experienced physicians who specialize in movement disorder diagnoses can benefit from a tool that improves diagnostic accuracy across these disorders, Vaillancourt said. The software, Automated Imaging Differentiation for Parkinsonism, or AIDP, is automated MRI-processing and machine-learning software built around a noninvasive biomarker technique.
Using diffusion-weighted MRI, which measures how water molecules diffuse in the brain, the team can identify where neurodegeneration is occurring. Then, the machine learning algorithm, rigorously tested against in-person clinic diagnoses, analyzes the brain scan and provides the clinician with the results, indicating one of the different types of Parkinson’s. The study was conducted across 21 sites, 19 of them in the United States and two in Canada. “This is an instance where the innovation between technology and artificial intelligence has been proven to enhance diagnostic precision, allowing us the opportunity to further improve treatment for patients with Parkinson’s disease,” said Michael Okun, M.D., medical adviser to the Parkinson’s Foundation and director of the Norman Fixel Institute for Neurological Diseases at UF Health. “We look forward to seeing how this innovation can further impact the Parkinson’s community and advance our shared goal of better outcomes for all.” The team’s next step is obtaining approval from the U.S. Food and Drug Administration. “This effort truly highlights the importance of interdisciplinary collaboration,” said Angelos Barmpoutis, Ph.D., a professor at the Digital Worlds Institute at UF. “Thanks to the combined medical expertise, scientific expertise and technological expertise, we were able to accomplish a goal that will change the lives of countless individuals.” Vaillancourt and Barmpoutis are partial owners of a company called Neuropacs whose goal is to bring this software forward, improving both patient care and clinical trials where it might be used.
