
University Communications Needs a Bigger Role in the Research Conversation
While attending the Expert Finder Systems International Forum (EFS), I saw several notable themes emerge over the two-day event. It's clear that many universities are working hard to improve their reputations by demonstrating the real-world impact of their research to the public and to funders, but it's proving to be a challenging task, even for the largest R1 universities. Many of these challenges stem from how institutions have traditionally organized their research functions, management systems, and performance metrics. Engaging faculty researchers in this process remains a significant challenge, despite the need for rapid transformation.

While this EFS conference was very well organized and the speakers delivered a great deal of useful information, I appeared to be one of the few marketing and communications professionals in a room full of research leaders, administrative staff, librarians, and IT professionals. There's a certain irony to this: I observe the same phenomenon at higher-ed marketing conferences, which often lack representation from research staff.

My point is this: we can't build better platforms, policies, and processes that amplify the profile of research without breaking down silos. We need University Communications to be much more involved in this process. As Baruch Fischhoff, a renowned scholar at Carnegie Mellon University, notes, bridging the gap between scientists and the public "requires an unnatural act: collaboration among experts from different communities" – but when done right, it benefits everyone.

But first, let's dig a little deeper into RIMs and Expert Finder Systems for context.

What Are Research Information Management Systems (RIMs)?

Research Information Management systems (aka Expert Finder Systems) are the digital backbone that tracks everything researchers do: publications, grants, collaborations, patents, speaking engagements. Think of them as massive databases that universities use to catalog their intellectual output and demonstrate their research capacity.

These systems matter. They inform faculty promotion decisions, support strategic planning and grant applications, and increasingly, they're what institutions point to when asked to justify their existence to funders, accreditors, and the public.

But here's the problem: most RIM systems were designed by researchers, for researchers, during an era when academic reputation was the primary currency. The game has fundamentally changed, and our systems haven't caught up.
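To make the idea concrete, here is a minimal sketch, in Python with entirely hypothetical field names, of the kind of per-researcher record a RIM system keeps. Real products have far richer schemas; the point is that the classic academic fields sit alongside a plain-language field that many systems still lack:

```python
from dataclasses import dataclass, field

@dataclass
class ExpertRecord:
    """Hypothetical sketch of a per-researcher record in a RIM system."""
    name: str
    publications: list = field(default_factory=list)   # journal articles, preprints
    grants: list = field(default_factory=list)         # funded projects
    patents: list = field(default_factory=list)
    speaking: list = field(default_factory=list)       # talks, panels, media
    h_index: int = 0                                   # the classic academic metric
    plain_language_summary: str = ""                   # what external audiences need

record = ExpertRecord(name="Dr. Jane Rivera", h_index=42)  # hypothetical researcher
record.plain_language_summary = (
    "Designs crop insurance that keeps small farms solvent after floods."
)
print(record.name, "-", record.plain_language_summary)
```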
Let's explore this further.

Academic Research Impact: The New Pressure Cooker

Research departments across the country are under intense pressure to demonstrate impact—fast. State legislators want to see economic benefits from university research. Federal agencies are demanding clearer public engagement metrics. Donors want stories, not statistics. And the general public? They're questioning whether their tax dollars are actually improving their lives.

Yet some academics are still asking, "Why should I simplify my research? Doesn't the public already trust that this is important?" In a word, no – at least, not like they used to. Communicators must navigate a landscape where public trust in science and academia is not a given. The data shows there's a lot of work to be done:

Trust in science has declined, and it's polarized. According to a Nov. 2024 Pew Research study, 88% of Democrats vs. 66% of Republicans have a great deal or fair amount of confidence in scientists; overall views have not returned to pre-pandemic highs, and many Americans are wary of scientists' role in policymaking.

Public trust in higher education has declined, but Americans see universities as central to innovation. While overall confidence in higher education has fallen over the past decade, a recent Gallup report shows innovation scoring highest among the areas where higher education generates positive outcomes.

Communication is seen as an area of relative weakness for scientists. Only 45% of U.S. adults describe research scientists as good communicators, according to a November 2024 Pew Research study. Another critique many Americans hold is the sense that research scientists feel superior to others; 47% say that description fits them well.

The traditional media ecosystem has faltered. While many of these issues stem from research being caught in a tide of political polarization, fueled by a significant rise in misinformation and disinformation on social media, traditional media have faced serious challenges of their own. Newsrooms have shrunk, and specialized science journalists are a rare breed outside major outlets. Local newspapers – once a reliable venue for highlighting state university breakthroughs or healthcare innovations – have been severely impacted. The U.S. has lost over 3,300 newspapers since 2005, with closures continuing and more than 7,000 newspaper jobs vanishing between 2022 and 2023, according to a Northwestern University Medill report on local news. Competition for coverage is fierce, and your story really needs to shine to grab a journalist's attention – or you need to find alternative ways to reach audiences directly.

The Big Message These Trends Are Sending

We can't just assume goodwill – universities have to earn trust through clear, relatable communication. Less money means more competition and more scrutiny on outcomes. That's why communications teams play a pivotal role here: by conveying the impact of research to the public and decision-makers, they help build the case for why cuts to science are harmful. Remember, despite partisan divides, a strong majority – 78% of Americans – still agree government investment in scientific research is worthwhile. We need to keep it that way. But there's still a lot of work to do.

The Audience Mismatch Problem

The public doesn't care about your Altmetric score. The policymakers I meet don't get excited about journal impact factors. Donors want to fund solutions to problems they understand, not citations in journals they'll never read. Yet our expert systems are still designed around these traditional academic metrics, because that's what the people building them understand. It's not their fault—but it's created a blind spot.

"Impact isn't just journal articles anymore," one EFS conference panelist explained. "It's podcasts, blogs, media mentions, datasets, even the community partnerships we build." But walk into most research offices, and those broader impacts are either invisible in the system or buried under layers of academic jargon that external audiences can't penetrate.

Expert systems have traditionally focused primarily on academic audiences. They're brilliant at tracking h-index scores, citation counts, and journal impact factors.
But try to use them to show a state legislator how your agriculture research is helping local farmers, or to explain to a donor how your engineering faculty is solving real-world problems? There's still work to do here. As one frustrated speaker put it: "These systems have become compliance-driven, inward-looking tools. They help administrators, but they don't help the public understand why research matters."

The Science Translation Crisis

Perhaps the most sobering observation came from another EFS conference speaker, who put it plainly: "If we can't explain our work in plain language, we lose taxpayers. We lose the community. They don't see themselves in what we do."

This feels like a communication problem masquerading as a technology issue. We've built systems that speak fluent academic, but the audiences we need to reach speak human. When research descriptions are buried in jargon, when impact metrics are incomprehensible to lay audiences, when success stories require a PhD to understand—we're actively pushing away the very people we need to engage.

The AI Disruption Very Few Saw Coming

AI, here as everywhere else, is fast making its mark on how research gets discovered. One impassioned speaker representing a university system described this new reality: "We are entering an age where no one needs to click on content. AI systems will summarize and cite without ever sending the traffic back."

Think about what this means for faculty research. If it's not structured for both AI discovery and human interaction, your world-class faculty might as well be invisible. Increasingly, search traffic isn't coming back to your beautifully designed university pages—instead, it's being "synthesized" and served up in AI-generated summaries. I've provided a more detailed overview of how AI-generated summaries work in a previous post here.

Keep in mind, this isn't a technical problem that IT can solve alone. It's a fundamental communications challenge about how we structure, present, and distribute information about our expertise.

Faculty Fatigue Is Real

Meanwhile, many faculty are struggling to manage busy schedules and mounting responsibilities. As another EFS panelist commented on the difficulty of engaging faculty in reporting and communicating their research: "Many faculty see this work as duplicative. It's another burden on top of what they already have. Without clear incentives, adoption will always lag."

Faculty researchers are busy people. They will engage with these internal systems when they see direct benefits: media inquiries, speaking opportunities, consulting gigs, policy advisory roles—the kind of external visibility that advances careers and amplifies research impact. And they require more support than many institutions can provide. Yet many universities have just one or two people trying to manage thousands of profiles, with no clear strategy for demonstrating how tasks such as updating profiles and approving media releases and stories translate into tangible opportunities. In short, we're asking faculty to feed a system that feels like it doesn't feed them back.

Breaking Down the Silos

Which brings me to my main takeaway: we need more marketing and communications professionals in these conversations. The expert systems community is focused on addressing many of the technical challenges—data integration, workflow optimization, and new metadata standards—as AI transforms how we conduct research.
But they're also wrestling with fundamental communication challenges about audience, messaging, and impact storytelling. That's the uncomfortable truth. The systems are evolving whether we participate or not. The public pressure for accountability isn't going away. Comms professionals can either help shape these systems to serve critical communications goals or watch our expertise get lost in translation.

⸻

Key Takeaways

Get Closer to Your Research: This involves having a deeper understanding of the management systems in use across your campus. How is your content appearing to external audiences, not just research administrators but the journalists, policymakers, donors, and community members we're trying to reach?

Don't Forget the Importance of Stories: Push for plain-language research descriptions without unnecessarily "dumbing down" the research. Show how the work your faculty is doing can create real-world benefits at a local community level, and demonstrate how it has the potential to address global issues, further enhancing your authority. And always be on the lookout for story angles that connect the research to relevant news, adding value for journalists.

Structure Expert Content for AI Discoverability: Audit your content to see how it's showing up on key platforms such as Google Gemini and ChatGPT; a sketch of one structured-profile approach closes this piece. Show faculty how keeping their information fresh and relevant translates into career opportunities they actually care about.

Show Up at These Research Events: Perhaps most importantly, communications pros need to be part of these conversations. Next year's International Forum on Expert Finder Systems needs more communications professionals, marketing strategists, and storytelling experts in the room. The research leaders, administrators, and IT professionals you'll meet have a lot of challenges on their plates and want to do the right thing. They will appreciate your input.

These systems are being rapidly redesigned, whether you're part of the conversation or not. The question is: do we want to influence how they serve our institutions' communications goals, or do we want to inherit systems that work brilliantly for academic audiences but get a failing grade for helping us serve the public?
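To close, here is one concrete reading of "structure expert content for AI discoverability," as promised in the takeaways: a minimal, hypothetical sketch that emits a schema.org Person profile as JSON-LD, a machine-readable format that AI crawlers and search engines parse reliably. All names and URLs below are invented for illustration.

```python
import json

# Hypothetical faculty profile expressed as schema.org "Person" markup.
profile = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Dr. Jane Rivera",
    "jobTitle": "Professor of Agricultural Economics",
    "worksFor": {"@type": "CollegeOrUniversity", "name": "Example State University"},
    "knowsAbout": ["crop insurance", "rural supply chains", "farm policy"],
    "description": "Studies how crop insurance design affects small farms.",
    "url": "https://example.edu/experts/jane-rivera",
}

# Embedding this in a page inside <script type="application/ld+json"> makes the
# expertise legible to AI systems even when they never send a click back.
print(json.dumps(profile, indent=2))
```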

By Steven Lazarus

Like many coastal regions, Florida's Space Coast faces significant climate resilience challenges and risks. According to the National Oceanic and Atmospheric Administration (NOAA), Florida has over 8,000 miles of shoreline, more than any other state in the contiguous U.S. In addition, the 2020 census indicates that there are 21 million Florida residents, 75-80% of whom live in coastal counties. This makes our state particularly vulnerable to rising sea levels, which are directly responsible for a host of coastal impacts, such as saltwater intrusion, sunny-day (high-tide) flooding and worsening storm surge.

There is growing evidence that storms are becoming wetter as the atmosphere warms, increasing the threat of compound flooding, which involves the combined effects of storm surge, rainfall, tides and river flow. Inland flooding events are also increasing due to overdevelopment, heavy precipitation and aging or inadequate infrastructure. The economic ramifications of these problems are quite evident, as area residents confront the rising costs of their homeowners and flood insurance policies.

As the principal investigator on a recently funded Department of Energy grant, Space Coast ReSCUE (Resilience Solutions for Climate, Urbanization, and Environment), I am working with Argonne National Laboratory, Florida Tech colleagues, community organizations and local government to improve climate resilience in East Central Florida. It is remarkable that, despite its importance for risk management, urban planning and evaluating the environmental impacts of runoff, official data regarding local flooding is virtually nonexistent. Working alongside a local nonprofit, we have installed 10 automated weather stations and manual rain gauges in what was previously a "data desert" east of the Florida Tech campus: one at Stone Magnet Middle School and others at local homes.

Data from these stations are available, in real time, from two national networks: CoCoRaHS and Weather Underground. The citizen science initiative involving the rain gauge measurements is designed to document flooding in a neighborhood with limited resources. In addition to helping residents make informed choices, these data will provide a means to evaluate the flood models we will use to create highly detailed flood maps of the neighborhood. We are working with two historic extreme-precipitation events: Hurricane Irma (2017) and Tropical Storm Fay (2008), both of which produced excessive flooding in the area. What might local flooding look like in the future, as storms become wetter? To find out, we plan to simulate these two storms in both present-day and future climate conditions.

What will heat stress, a combination of temperature and humidity, feel like in the future? What impact will this have on energy consumption?
The station data will also be used to develop and test building energy-efficiency tools designed to help the community identify affordable ways to reduce energy consumption, as well as to produce high-precision urban heat island (heat stress) maps that account for the impact of individual buildings. The heat island and building energy modeling will be complemented by a drone equipped with an infrared camera, which will provide an observational baseline.

We think that a "best methods" approach is proactive, informed and cost-effective. The foundation of good decision-making, assessment and planning is built on data (models and observations), which are critical to adequately addressing the impact of climate on our communities.
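For readers curious how temperature and humidity combine into a single "feels like" number, the standard National Weather Service (Rothfusz) heat index regression is shown below. The project's heat stress modeling is far more sophisticated; this is only the textbook approximation.

```python
def heat_index_f(temp_f, rh_pct):
    """NWS Rothfusz regression: apparent temperature from air temperature (F)
    and relative humidity (%). Valid roughly when the result is >= 80 F."""
    T, R = temp_f, rh_pct
    return (-42.379 + 2.04901523 * T + 10.14333127 * R - 0.22475541 * T * R
            - 0.00683783 * T * T - 0.05481717 * R * R + 0.00122874 * T * T * R
            + 0.00085282 * T * R * R - 0.00000199 * T * T * R * R)

# A humid Florida summer afternoon: 92 F at 70% humidity "feels like" ~112 F.
print(round(heat_index_f(92, 70)))
```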

ChristianaCare's Virtual Primary Care practice at the Center for Virtual Health has earned full accreditation from the National Committee for Quality Assurance (NCQA), placing it among the first health systems in the nation to achieve this distinction. ChristianaCare was one of only 18 organizations invited to participate in NCQA's inaugural pilot program in 2023 to develop the Virtual Care Accreditation. The recognition affirms ChristianaCare's leadership role in shaping the future of health care and its commitment to delivering accessible, equitable and patient-centered care through innovative digital platforms.

"This accreditation is a powerful validation of our vision to reimagine health care," said Sarah Schenck, M.D., FACP, executive director of ChristianaCare's Center for Virtual Health. "We've built a model that meets people where they are—at home, at work or on the go—with care that is personal, proactive and powered by love and excellence."

What Accreditation Means for Patients

NCQA accreditation underscores that ChristianaCare's Center for Virtual Health meets rigorous standards for:

• Clinical quality and safety: clear care protocols, escalation pathways and outcome monitoring.
• Access and equity: technology, language and disability-inclusive design that extends care to more people.
• Data privacy and security: strong safeguards to protect personal health information.

ChristianaCare's participation in NCQA's pilot helped shape the benchmarks now used nationwide. The center delivers comprehensive virtual primary care through a multidisciplinary team that includes physicians, nurses, nurse practitioners, behavioral health specialists, pharmacists and patient digital ambassadors.

Virtual Care by the Numbers

In 2024, ChristianaCare's Center for Virtual Health provided more than 7,500 patient visits, reflecting both rapid growth and strong demand for its virtual-first model. Services are offered at no copay to ChristianaCare caregivers and their dependents, while availability continues to expand across Delaware and the region.

"At ChristianaCare, we believe virtual care isn't just a convenience, it's a catalyst for better health outcomes," said Brad Sandella, D.O., MBA, medical director, Ambulatory Care for the Center for Virtual Health. "This accreditation affirms our commitment to innovation and excellence. We're proud to be among the pioneers defining what high-quality virtual care looks like in America."

Beginning in 2026, ChristianaCare will expand its Virtual Primary Care practice, giving a broader consumer audience convenient access to primary care. At that time, the service will be covered by most insurance carriers and will continue to feature dedicated providers in areas such as behavioral health and neurology. ChristianaCare will also continue working with NCQA and other partners to advance best practices nationwide.
Teaching the Holocaust Part of New NYS Curriculum
Dr. Alan Singer, professor of education, talked to Newsday about the introduction of a new school curriculum teaching New York State students about the Holocaust and other mass murders. “Teaching the Holocaust and Other Genocides” was unveiled at a meeting in Albany of the state Board of Regents, which oversees New York’s educational institutions. The new resources will be optional for educators. Dr. Singer said the goal of Holocaust education is not to guide students to a particular conclusion, but to “engage students in research, in discussion, examining data, trying to reach conclusions about the past and present.”

Expert Perspective: When AI Follows the Rules but Misses the Point
When a team of researchers asked an artificial intelligence system to design a railway network that minimized the risk of train collisions, the AI delivered a surprising solution: halt all trains entirely. No motion, no crashes. A perfect safety record, technically speaking, but also a total failure of purpose. The system did exactly what it was told, not what was meant.

This anecdote, while amusing on the surface, encapsulates a deeper issue confronting corporations, regulators, and courts: What happens when AI faithfully executes an objective but completely misjudges the broader context? In corporate finance and governance, where intentions, responsibilities, and human judgment underpin virtually every action, AI introduces a new kind of agency problem, one grounded not in selfishness, greed, or negligence, but in misalignment.

From Human Intent to Machine Misalignment

Traditionally, agency problems arise when an agent (say, a CEO or investment manager) pursues goals that deviate from those of the principal (like shareholders or clients). The law provides remedies: fiduciary duties, compensation incentives, oversight mechanisms, disclosure rules. These tools presume that the agent has motives—whether noble or self-serving—that can be influenced, deterred, or punished.

But AI systems, especially those that make decisions autonomously, have no inherent intent, no self-interest in the traditional sense, and no capacity to feel gratification or remorse. They are designed to optimize, and they do, often with breathtaking speed, precision, and, occasionally, unintended consequences. This new configuration, in which AI acts on behalf of a principal (still human!), gives rise to a contemporary agency dilemma. Known as the alignment problem, it describes situations in which AI follows its assigned objective to the letter but fails to appreciate the principal's actual intent or broader values. The AI doesn't resist instructions; it obeys them too well. It doesn't "cheat," but sometimes it wins in ways we wish it wouldn't.

When Obedience Becomes a Liability

In corporate settings, such problems are more than philosophical. Imagine a firm deploying AI to execute stock buybacks based on a mix of market data, price signals, and sentiment analysis. The AI might identify ideal moments to repurchase shares, saving the company money and boosting share value. But in the process, it may mimic patterns that look indistinguishable from insider trading. Not because anyone programmed it to cheat, but because it found that those actions maximized returns under the constraints it was given. The firm may find itself facing regulatory scrutiny, public backlash, or unintended market disruption, again not because of any individual's intent, but because the system exploited gaps in its design.

This is particularly troubling in areas of law where intent is foundational. In securities regulation, fraud, market manipulation, and other violations typically require a showing of mental state: scienter, mens rea, or at least recklessness. Take spoofing, where an agent places bids or offers with the intent to cancel them to manipulate market prices or to create an illusion of liquidity. Under the Dodd-Frank Act, this is a crime if done with intent to deceive. But AI systems, especially those using reinforcement learning (RL), can arrive at similar strategies independently. In simulation studies, RL agents have learned that placing and quickly canceling orders can move prices in a favorable direction. They weren't instructed to deceive; they simply learned that it worked.
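This kind of reward misspecification, the railway network that "solved" collisions by halting and the RL trader that learned order-canceling, is easy to reproduce in miniature. The toy simulation below is a hypothetical illustration, not the setup of any study cited here: it scores two policies under a reward that only penalizes crashes, and halting every train "wins."

```python
def episode(move_trains, steps=200, crash_rate=0.005):
    """Toy network: running trains deliver one passenger per step but incur
    crashes at a small rate; halted trains do neither."""
    if not move_trains:
        return 0, 0                         # no crashes, no deliveries
    return int(steps * crash_rate), steps   # (crashes, deliveries)

def misspecified_reward(crashes, deliveries):
    return -crashes                         # the objective as literally specified

def intended_reward(crashes, deliveries):
    return deliveries - 100 * crashes       # what the principal actually wanted

for move in (False, True):
    c, d = episode(move)
    print(f"move_trains={move}: misspecified={misspecified_reward(c, d)}, "
          f"intended={intended_reward(c, d)}")

# An optimizer given only the misspecified reward prefers move_trains=False:
# a perfect safety record, and a total failure of purpose.
```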
The Challenge of AI Accountability

What makes this even more vexing is the opacity of modern AI systems. Many of them, especially deep learning models, operate as black boxes. Their decisions are statistically derived from vast quantities of data and millions of parameters, but they lack interpretable logic. When an AI system recommends laying off staff, reallocating capital, or delaying payments to suppliers, it may be impossible to trace precisely how it arrived at that recommendation, or whether it considered all relevant factors. Traditional accountability tools—audits, testimony, discovery—are ill-suited to black-box decision-making.

In corporate governance, where transparency and justification are central to legitimacy, this raises the stakes. Executives, boards, and regulators are accustomed to probing not just what decision was made, but also why. Did the compensation plan reward long-term growth or short-term accounting games? Did the investment reflect prudent risk management or reckless speculation? These inquiries depend on narrative, evidence, and ultimately the ability to assign or deny responsibility. AI short-circuits that process by operating without human-like deliberation.

The challenge isn't just about finding someone to blame. It's about whether we can design systems that embed accountability before things go wrong. One emerging approach is to shift from intent-based to outcome-based liability. If an AI system causes harm that could arise with a certain probability, even without malicious design, the firm or developer might still be held responsible. This mirrors concepts from product liability law, where strict liability can attach regardless of intent if a product is unreasonably dangerous. In the AI context, such a framework would encourage companies to stress-test their models, simulate edge cases, and incorporate safety buffers, not unlike how banks test their balance sheets under hypothetical economic shocks.

There is also a growing consensus that we need mandatory interpretability standards for certain high-stakes AI systems, including those used in corporate finance. Developers should be required to document reward functions, decision constraints, and training environments. These documentation trails would not only assist regulators and courts in assigning responsibility after the fact, but also enable internal compliance and risk teams to anticipate potential failures. Moreover, behavioral "stress tests" analogous to those used in financial regulation could simulate how AI systems behave under varied scenarios, including those involving regulatory ambiguity or data anomalies.

Smarter Systems Need Smarter Oversight

Still, technical fixes alone will not suffice. Corporate governance must evolve toward hybrid decision-making models that blend AI's analytical power with human judgment and ethical oversight. AI can flag risks, detect anomalies, and optimize processes, but it cannot weigh tradeoffs involving reputation, fairness, or long-term strategy. In moments of crisis or ambiguity, human intervention remains indispensable. For example, an AI agent might recommend renegotiating thousands of contracts to reduce costs during a recession. But only humans can assess whether such actions would erode long-term supplier relationships, trigger litigation, or harm the company's brand.
There's also a need for clearer regulatory definitions to reduce ambiguity in how AI-driven behaviors are assessed. For example, what precisely constitutes spoofing when the actor is an algorithm with no subjective intent? How do we distinguish aggressive but legal arbitrage from manipulative behavior? If multiple AI systems, trained on similar data, converge on strategies that resemble collusion without ever "agreeing" or "coordinating," do antitrust laws apply? Policymakers face a delicate balance: overly rigid rules may stifle innovation, while lax standards may open the door to abuse.

One promising direction is to standardize governance practices across jurisdictions and sectors, especially where AI deployment crosses borders. A global AI system could affect markets in dozens of countries simultaneously. Without coordination, firms will gravitate toward jurisdictions with the least oversight, creating a regulatory race to the bottom. Several international efforts are already underway to address this. The 2025 International Scientific Report on the Safety of Advanced AI called for harmonized rules around interpretability, accountability, and human oversight in critical applications. While much work remains, such frameworks represent an important step toward embedding legal responsibility into the design and deployment of AI systems.

The future of corporate governance will depend not just on aligning incentives, but also on aligning machines with human values. That means redesigning contracts, liability frameworks, and oversight mechanisms to reflect this new reality. And above all, it means accepting that doing exactly what we say is not always the same as doing what we mean.

Looking to know more or to connect with Wei Jiang, Goizueta Business School's vice dean for faculty and research and Charles Howard Candler Professor of Finance? Simply click on her icon now to arrange an interview or a time to talk today.
Ask an Expert: Augusta University's Gokila Dorai, PhD, talks Artificial Intelligence
Artificial intelligence is dominating the news cycle. There's a lot to know, a lot to prepare for and also a lot of misinformation or assumptions making their way into mainstream coverage. Recently, Augusta University's Gokila Dorai, PhD, took some time to answer some of the more important questions she's seeing asked about artificial intelligence.

Gokila Dorai, PhD, is an assistant professor in the School of Computer and Cyber Sciences at Augusta University. Dorai's area of expertise is mobile/IoT forensics research. She is passionate about inventing digital tools to help victims and survivors of various digital crimes. View her profile here.

Q. What excites you most about your current research in digital forensics and AI?

"I am most excited about using artificial intelligence to produce frameworks that help practitioners make sense of complex digital evidence more quickly and fairly. My research combines machine learning with natural language processing within a socio-technical framework, so that we don't just get accurate results, but also understand how and why the system reached those results. This is especially important when dealing with sensitive investigations, where transparency builds trust."

Q. How does your work help address today's challenges around cybersecurity and data privacy?

"Everyday life is increasingly digital: our phones, apps, and online accounts contain deeply personal information. My research looks at how we can responsibly analyze this data during investigations without compromising privacy. For example, I work on AI models that can focus only on what is legally relevant, while filtering out unrelated personal information. This balance between security and privacy is one of the biggest challenges today, and my work aims to provide practical solutions."

Q. What role do you see artificial intelligence playing in shaping the future of digital investigations?

"AI will be a critical partner in digital investigations. The volume of data investigators face is overwhelming: thousands of documents, chat messages, and app logs. AI can help organize and prioritize this information, spotting patterns that a human might miss. At the same time, I believe AI must be designed to be explainable and resilient against manipulation, so investigators and courts can trust its findings. The future isn't about replacing human judgment, but about giving investigators smarter tools."

Q. What is one misconception people often have about cybersecurity or digital forensics?

"A common misconception is that digital forensics is like what you see on TV: instant results with a few keystrokes. In reality, it's a painstaking process that requires both technical skill and ethical responsibility. Another misconception is that cybersecurity is only about protecting large organizations. In truth, individuals face just as many risks, from identity theft to app data leaks, and my research highlights how better tools can protect everyone."

Are you a reporter covering artificial intelligence and looking to know more? If so, then let us help with your stories. Gokila Dorai, PhD, is available for interviews. Simply click on her icon now to arrange a time today.
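For readers curious what "focusing only on what is legally relevant" can look like mechanically, here is a deliberately toy sketch. Dorai's research uses machine learning and NLP models rather than keyword rules, so treat this only as an illustration of scope filtering plus redaction; the warrant scope and patterns below are invented.

```python
import re

# Hypothetical warrant scope: only financial-transaction messages are relevant.
RELEVANT = re.compile(r"\b(invoice|payment|transfer|wire)\b", re.IGNORECASE)
SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # unrelated personal identifiers

def triage(messages):
    """Yield only in-scope messages, with stray identifiers redacted."""
    for msg in messages:
        if RELEVANT.search(msg):
            yield SSN_LIKE.sub("[REDACTED]", msg)

evidence = [
    "lunch at noon?",                          # out of scope: dropped
    "wire sent, confirm receipt",              # in scope: kept
    "payment pending, my ssn is 123-45-6789",  # in scope, identifier redacted
]
print(list(triage(evidence)))
```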

Swimming in the deep: MSU research reveals sea lamprey travel patterns in Great Lakes waterways
Why this matters: Invasive sea lampreys prey on most species of large Great Lakes fish, such as lake trout, brown trout, lake sturgeon, lake whitefish, ciscoes, burbot, walleye and catfish. These species are crucial to Great Lakes ecosystems and to the region's fishing industry. Understanding how sea lampreys migrate can inform management and conservation strategies, such as developing methods to catch the invasive fish that don't involve dams, which reduce river connectivity, or lampricide, a pesticide that some communities and groups prefer not to use. The Great Lakes fishing industry is worth $7 billion and provides 75,000 jobs to the region. Reducing the number of sea lampreys in these waters is crucial for the industry's well-being and the economic vitality of the Great Lakes.

How do you catch an invasive fish that's solitary, nocturnal and doesn't feed on bait? Researchers in the Michigan State University College of Agriculture and Natural Resources are one step closer to figuring it out.

In a study published in the Journal of Experimental Biology and funded by the Great Lakes Fishery Commission, Kandace Griffin, a fisheries and wildlife doctoral student, and Michael Wagner, professor in the MSU Department of Fisheries and Wildlife, found that sea lampreys — a parasitic fish considered an invasive species in the Great Lakes region of the U.S. — follow a clear pattern of staying in the deepest parts of a river. These findings are important for informing sea lamprey management strategies, conserving fish species native to the Great Lakes and protecting the region's $7 billion fishing industry and the 75,000 jobs it provides.

"We wanted to know how sea lampreys are making their movement decisions when migrating," Griffin said. "Are they guided by certain environmental cues? Are they moving through areas that are safer? How can we potentially exploit those decisions or maybe manipulate them into going somewhere they don't want to go, like pushing them into a trap?"

The primary methods used to control sea lamprey are dams that block them from entering waterways and lampricide, a species-specific pesticide that targets lamprey larvae.

"Dams create a lot of challenges for conserving river ecosystems: They block all the other fish that are moving up and down in the system. Even though lampricide is proven to be safe and effective, there are communities that are uncomfortable with its use going into the future," Wagner said. "Figuring out the right way to fish sea lamprey would decrease its population, lower reproduction rates and provide managers with the opportunity to match their control tactics to the community's needs."

To track lamprey movements, Griffin and Wagner used a method called acoustic telemetry, which involved using sound emitted from a surgically implanted tag to track the movement of 56 sea lampreys in the White River near Whitehall, Michigan. Griffin likened acoustic telemetry to GPS.

"There's a tag that emits sound and has a unique transmission with a unique identification code, so I know exactly which fish is going where," she said. "The receivers are listening for that sound and then calculating the time it reaches each receiver. We used this information to triangulate the position of the sea lamprey and analyzed it to find out how they're using the river's environmental traits to make decisions on where to swim."

Of the 56 lampreys studied, 26 (46%) consistently chose the deepest quarter of the river.
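Griffin's GPS analogy can be made concrete. The sketch below uses a hypothetical receiver layout and invented numbers (real telemetry arrays rely on vendor positioning software) to recover a 2D tag position from arrival-time differences, which cancel the unknown emission time:

```python
import numpy as np

SOUND_SPEED = 1480.0  # m/s in water, a rough assumed value

receivers = np.array([[0.0, 0.0], [50.0, 0.0], [25.0, 40.0]])  # hypothetical layout
true_pos = np.array([20.0, 15.0])                              # tag's actual spot

# Arrival times at each receiver; the tag's emission time is unknown to us.
t_arrive = np.linalg.norm(receivers - true_pos, axis=1) / SOUND_SPEED

def tdoa_residual(p):
    """Mismatch between observed and modeled pairwise arrival-time differences."""
    d = np.linalg.norm(receivers - p, axis=1) / SOUND_SPEED
    return np.sum(((d - d[0]) - (t_arrive - t_arrive[0])) ** 2)

# Brute-force search over a 60 m x 50 m grid at 0.25 m resolution.
xs, ys = np.meshgrid(np.linspace(0, 60, 241), np.linspace(0, 50, 201))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
residuals = np.array([tdoa_residual(p) for p in grid])
print("estimated position:", grid[residuals.argmin()])  # recovers ~[20. 15.]
```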
"For nearly 20 years we have been discovering how sea lampreys migrate along coasts and through rivers. Now, thanks to Kandace's work, we know where their movement paths come together near a riverbank — the perfect place to install a trap or other fishing device," Wagner said. "That knowledge can be used to find similar sites across the Great Lakes basin."

Right now, a fishing device designed to catch bottom-swimming, solitary, nonfeeding, nocturnal sea lamprey doesn't exist. However, Wagner notes there are places around the world — including Indigenous communities in the U.S. — where people have fished migratory lampreys of various species for hundreds of years and could help inform the creation of such a mechanism.

"We have recently had a proposal funded to scour the Earth in search of knowledge, both scientific and traditional, about how to capture migrating lampreys and similar fishes," Wagner said. "We want to talk with the communities of people who have histories fishing these animals and use this information, along with other data we've gathered, to conceive a device that could be used to fish sea lampreys."

Griffin views the new intel on lamprey migration patterns as a way to inform fishing practices that complement existing control methods.

"Hopefully, we can use this as a supplemental control method to the use of the barriers or dams," she said. "We have societal pressure to remove barriers to enhance river connectivity, and some barriers are failing. Open-water trapping is another way that we could try to still combat the invasive sea lamprey problem here but also promote river connectivity and other conservation goals for other species."

Wagner shares the same perspective.

"When a community, or the Great Lakes Fishery Commission, or the governments of Canada and the U.S. come in and say, 'We'd really rather be able to control this river with something other than lampricide,' we want to be able to provide 360-degree solutions that specify where to fish, when to fish and how to fish using fully prototyped and tested equipment," he said. "We want our science to help solve real-world problems."

First scientific paper on 3I/ATLAS interstellar object
When the news started to spread on July 1, 2025, about a new object spotted from outside our solar system, only the third of its kind ever known, astronomers at Michigan State University — along with a team of international researchers — turned their telescopes to capture data on the new celestial sighting. The team rushed to write a scientific paper on what they know so far about the object, now called 3I/ATLAS after NASA's Asteroid Terrestrial-impact Last Alert System, or ATLAS. ATLAS consists of four telescopes — two in Hawaii, one in Chile and one in South Africa — which automatically scan the whole sky several times every night looking for moving objects.

MSU's Darryl Seligman, a member of the scientific team and an assistant professor in the College of Natural Science, took the lead on writing the paper.

"I heard something about the object before I went to bed, but we didn't have a lot of information yet," Seligman said. "By the time I woke up around 1 a.m., my colleagues, Marco Micheli from the European Space Agency and Davide Farnocchia from NASA's Jet Propulsion Laboratory, were emailing me that this was likely for real. I started sending messages telling everyone to turn their telescopes to look at this object and started writing the paper to document what we know to date. We have data coming in from across the globe about this object."

The discovery

Larry Denneau, a member of the ATLAS team, reviewed and submitted the observations from the European Southern Observatory's Very Large Telescope in Chile shortly after it was observed on the night of July 1. Denneau said that he was cautiously excited.

"We have had false alarms in the past about interesting objects, so we know not to get too excited on the first day. But the incoming observations were all consistent, and late that night it looked like we had the real thing.

"It is especially gratifying that we found it in the Milky Way in the direction of the galactic center, which is a very challenging place to survey for asteroids because of all the stars in the background," Denneau said. "Most other surveys don't look there."

John Tonry, another member of ATLAS and a professor at the University of Hawaii, was instrumental in the design and construction of ATLAS, the survey that discovered 3I. Tonry said, "It's really gratifying every time our hard work surveying the sky discovers something new, and this comet that has been traveling for millions of years from another star system is particularly interesting."

Once 3I/ATLAS was confirmed, Seligman and Karen Meech, faculty chair for the Institute for Astronomy at the University of Hawaii, managed the communications flow and worked on pulling the data together for submitting the paper.

"Once 3I/ATLAS was identified as likely interstellar, we mobilized rapidly," Meech said. "We activated observing time on major facilities like the Southern Astrophysical Research Telescope and the Gemini Observatory to capture early, high-quality data and build a foundation for detailed follow-up studies."

After confirmation of the interstellar object, institutions from around the world began sharing information about 3I/ATLAS with Seligman.

What scientists know about 3I/ATLAS so far

Though data is pouring in about the discovery, the object is still so far from Earth that many questions remain unanswered. Here's what the scientific team knows at this point:

• It is only the third interstellar (meaning from outside our solar system) object ever detected passing through our solar system.
• It's potentially giving off gas like other comets do, but that needs to be confirmed.
• It's moving really fast, at 60 kilometers per second (134,000 miles per hour) relative to the sun.
• It's on an orbital path shaped like a boomerang or hyperbola.
• It's very bright.
• It's on a path that will leave our solar system and not return, but scientists will be able to study it for several months before it leaves.

The James Webb Space Telescope and the Hubble Space Telescope are expected to reveal more information about its size, composition, spin and how it reacts to being heated over the next few months.

"We have these images of 3I/ATLAS where it's not entirely clear and it looks fuzzier than the other stars in the same image," said James Wray, a professor at Georgia Tech. "But the object is pretty far away and, so, we just don't know."

Seligman and his team are specifically interested in 3I/ATLAS's brightness because it informs us about the evolution of the coma, a cloud of dust and gas. They've been tracking it to see if it has been changing over time as the object moves and turns in space. They also want to monitor for sudden outburst events in which the object gets much brighter.

"3I/ATLAS likely contains ices, especially below the surface, and those ices may start to activate as it nears the sun," Seligman said. "But until we detect specific gas emissions, like H₂O, CO or CO₂, we can't say for sure what kinds of ice or how much are there."

The discovery of 3I/ATLAS is just the beginning. For Tessa Frincke, who came to MSU in late June to begin her career as a doctoral student with Seligman, having the opportunity to analyze data from 3I/ATLAS to predict its future path could lead to her publishing a scientific paper of her own.

"I've had to learn a lot quickly, and I was shocked at how many people were involved," said Frincke. "Discoveries like this have a domino effect that inspires novel engineering and mission planning."

For Atsuhiro Yaginuma, a fourth-year undergraduate student on Seligman's team, the discovery has inspired him to apply his current research to see whether it is possible to launch a spacecraft from Earth that could get within hundreds of miles of 3I/ATLAS to capture images and learn more about the object.

"The closest approach to Earth will be in December," said Yaginuma. "It would require a lot of fuel and a lot of rapid mobilization from people here on Earth. But getting close to an interstellar object could be a once-in-a-lifetime opportunity."

"We can't continue to do this research and experiment with new ideas from Frincke and Yaginuma without federal funding," said Seligman, who also is a postdoctoral fellow of the National Science Foundation.

Seligman and Aster Taylor, a former student of Seligman's who is now a doctoral candidate in astronomy and astrophysics and a 2023 Fannie and John Hertz Foundation Fellow, wrote the following: "At a critical moment, given the current congressional discussions on science funding, 3I/ATLAS also reminds us of the broader impact of astronomical research. An example like 3I is particularly important to astronomy — as a science, we are supported almost entirely by government and philanthropic funding. The fact that this science is not funded by commercial enterprise indicates that our field does not provide a financial return on investment, but instead responds to the public's curiosity about the deep questions of the universe: Where did we come from? Are we alone? What else is out there?
The curiosity of the public, as expressed by the will of the U.S. Congress and made manifest in the federal budget, is the reason that astronomy exists.” In addition to MSU, contributors to this research and paper include European Space Agency Near-Earth Objects Coordination Centre (Italy), NASA Jet Propulsion Laboratory/Caltech (USA), University of Hawaii (USA), Auburn University (USA), Universidad de Alicante (Spain), Universitat de Barcelona (Spain), European Southern Observatory (Germany), Villanova University (USA), Lowell Observatory (USA), University of Maryland (USA), Las Cumbres Observatory (USA), University of Belgrade (Serbia), Politecnico di Milano (Italy), University of Michigan (USA), University of Western Ontario (Canada), Georgia Institute of Technology (USA), Universidad Diego Portales, Santiago (Chile) and Boston University (USA).
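The "boomerang or hyperbola" description in the list above has a crisp test: an object is on an unbound, hyperbolic path when its speed exceeds the sun's escape velocity at its current distance. A minimal sketch using the reported 60 km/s and an assumed, purely illustrative heliocentric distance of 4.5 AU:

```python
import math

GM_SUN = 1.32712440018e20  # sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # meters per astronomical unit

def solar_escape_kms(r_au):
    """Escape speed from the sun at heliocentric distance r (km/s)."""
    return math.sqrt(2 * GM_SUN / (r_au * AU)) / 1000.0

v_esc = solar_escape_kms(4.5)          # ~19.9 km/s at the assumed distance
print(f"escape speed at 4.5 AU: {v_esc:.1f} km/s")
print("hyperbolic (unbound):", 60.0 > v_esc)  # 60 km/s clears it easily
```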

LSU, FUEL, Syngenta Partner to Develop Low-cost Digital Twins for Chemical Processing Facilities
Derick Ostrenko and Jason Jamerson, faculty in the LSU College of Art & Design, along with engineering advisor David Ben Spry, are pioneering a new approach to industrial innovation using digital twins. The effort is supported by a $217,403 use-inspired research and development (UIRD) award from Future Use of Energy in Louisiana (FUEL).

Digital twins are highly detailed virtual replicas of physical assets. The technology is used in engineering to enhance efficiency, safety, and training; however, creating digital twins often requires costly specialized hardware, proprietary software, and engineering-intensive workflows.

"This initiative not only advances digital twin technology but also highlights the interdisciplinary power of design and engineering," FUEL UIRD Director Ashwith Chilvery said. "By applying creative tools in an industrial setting, we're demonstrating new ways to lower costs and expand access to advanced digital infrastructure."

The collaborative effort between LSU, FUEL, and Syngenta aims to reduce costs by applying techniques more commonly used in the entertainment industry, leveraging free and open-source software and consumer-grade hardware, such as gaming PCs and digital cameras. Most of the work will be conducted by digital art students skilled in 3D modeling and video game production, offering a cost-effective alternative to traditional engineering services.

"3D artists and game developers bring both technical expertise and creative vision that can add significant value when paired with traditional engineering approaches," Spry said. "We're eager to demonstrate how this talent pool can help accelerate digital transformation in industry."

"Working with an innovative company like Syngenta to advance digital twins for chemical manufacturing is an outstanding opportunity for our researchers and students, and we're proud of the techniques and talent we've developed at LSU. FUEL's support of digital twin development for the energy and chemical sectors helps build this technology and unique artistry in Louisiana, for our industries, and for the rest of the nation." - Greg Trahan, LSU Assistant Vice President of Strategic Research Partnerships

In addition to producing a high-fidelity digital twin of a process unit within an active chemical manufacturing facility, the project will deliver a virtual reality application that allows immersive interaction with the 3D model. Future extensions may include augmented reality overlays of physical equipment or integration of live process data for real-time monitoring and troubleshooting.

The ultimate outcome of the project is a validated workflow that reduces the cost of producing digital twins by a factor of at least five compared with conventional engineering methods. This breakthrough has the potential to redefine digital infrastructure for the chemical processing industry, making it more accessible, scalable, and adaptable to future needs. Learn more about LSU's digital twin work with Syngenta as well as NASA.
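As one flavor of the consumer-grade workflow described above (a hypothetical fragment, not the project's actual pipeline), the open-source Open3D library can decimate a heavy photogrammetry scan into a mesh light enough for real-time VR on a gaming PC. The file names are invented:

```python
import open3d as o3d

# Load a photogrammetry mesh captured with a consumer digital camera
# ("unit_scan.ply" is a hypothetical file name).
mesh = o3d.io.read_triangle_mesh("unit_scan.ply")
mesh.compute_vertex_normals()
print("raw scan:", len(mesh.triangles), "triangles")

# Decimate to a triangle budget a gaming PC can render at VR frame rates.
lite = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
o3d.io.write_triangle_mesh("unit_scan_vr.ply", lite)
print("VR-ready twin:", len(lite.triangles), "triangles")
```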
About FUEL

Future Use of Energy in Louisiana (FUEL) positions the state as a global energy innovation leader through high-impact technology development and innovation that supports the energy industry in lowering carbon emissions. FUEL brings together a growing team of universities, community and technical colleges, state agencies, and industry and capital partners led by LSU. With the potential to receive up to $160 million in funding from the U.S. National Science Foundation through the NSF Regional Innovation Engines program and an additional $67.5 million from Louisiana Economic Development, FUEL will advance our nation's capacity for energy innovation through use-inspired research and development, workforce development, and technology commercialization. For more information, visit fuelouisiana.org.

About Syngenta

Syngenta Crop Protection is a global leader in agricultural innovation, focused on empowering farmers to make the transformation required to feed the world's population while protecting our planet. Its bold scientific discoveries deliver better benefits for farmers and society on a bigger scale than ever before. Syngenta CP offers a leading portfolio of crop protection technologies and solutions that support farmers to grow healthier plants with higher yields. Its 17,700 employees are helping to transform agriculture in more than 90 countries. Syngenta Crop Protection is headquartered in Basel, Switzerland, and is part of the Syngenta Group. Read our stories and follow us on LinkedIn, Instagram & X.

University of Delaware secures $13.1M grant to transform Alzheimer’s research and prevention
A new five-year, $13.1 million grant will greatly expand the ability of University of Delaware researchers to pursue ways to prevent and treat Alzheimer's disease. The gift from the Delaware Community Foundation (DCF) is one of the largest in state history for Alzheimer's research.

UD's Christopher Martens called the grant "transformational," as it will support the expansion of a statewide prevention study, enable the purchase of a state-of-the-art MRI machine and drive discovery of new diagnostic tools and treatments.

"It will also help grow the number of researchers in Delaware focused on Alzheimer's disease, promoting an interdisciplinary approach," said Martens, director of UD's Delaware Center for Cognitive Aging Research (DECCAR) and professor of kinesiology and applied physiology in the College of Health Sciences.

Bringing together researchers from multiple fields to collaborate on a critical challenge like Alzheimer's disease is a key strength of the University of Delaware, said Interim President Laura Carlson.

"Every one of us has a family member or friend who has been deeply affected by Alzheimer's. I'm proud that UD is working to better understand this terrible disease and partnering with others throughout the state to work on its prevention, diagnosis and treatment," Carlson said. "We are grateful to the Delaware Community Foundation for their support, which allows us to escalate our research and expand our community outreach."

"No one has to look very far afield to witness and understand the tragedy of Alzheimer's, and the research supported by this grant will help UD researchers come ever closer to uncovering life-improving and life-saving solutions," said Stuart Comstock-Gay, president and CEO of the DCF. "The grant was provided through the generosity of the late Paul H. Boerger, who made a substantial legacy gift to the fund he had established at the DCF in his lifetime, and his foresight will help so many."

The gift is aimed at achieving the following goals:

• Tracking Alzheimer's risk over time – Expanding Delaware's largest study of brain aging from 100 to 500 participants to uncover who develops dementia and why.
• A simple blood test for early detection – Developing a first-of-its-kind test that could diagnose Alzheimer's years earlier than current methods.
• Cutting-edge brain imaging – Installing a $3.2 million MRI machine on UD's STAR Campus to reveal hidden brain changes linked to memory loss.
• Spotting the earliest warning signs – Exploring how subtle shifts in language and menopause-related hormone changes may predict Alzheimer's risk.
• Fueling prevention and cures – Creating powerful data and tools that will accelerate new treatments and bring researchers closer to stopping Alzheimer's.

To reach Martens for an interview, visit his profile and click on the "contact" button. Interviews with DCF officials can be arranged by emailing MediaRelations@udel.edu.









