

ChristianaCare’s Center for Virtual Health Earns NCQA Accreditation, Setting a National Standard in Virtual Care

ChristianaCare’s Virtual Primary Care practice at the Center for Virtual Health has earned full accreditation from the National Committee for Quality Assurance (NCQA), placing it among the first health systems in the nation to achieve this distinction. ChristianaCare was one of only 18 organizations invited to participate in NCQA’s inaugural pilot program in 2023 to develop the Virtual Care Accreditation. The recognition affirms ChristianaCare’s leadership role in shaping the future of health care and its commitment to delivering accessible, equitable and patient-centered care through innovative digital platforms.

“This accreditation is a powerful validation of our vision to reimagine health care,” said Sarah Schenck, M.D., FACP, executive director of ChristianaCare’s Center for Virtual Health. “We’ve built a model that meets people where they are—at home, at work or on the go—with care that is personal, proactive and powered by love and excellence.”

What Accreditation Means for Patients

NCQA accreditation underscores that ChristianaCare’s Center for Virtual Health meets rigorous standards for:

Clinical quality and safety: clear care protocols, escalation pathways and outcome monitoring.
Access and equity: technology, language and disability-inclusive design that extends care to more people.
Data privacy and security: strong safeguards to protect personal health information.

ChristianaCare’s participation in NCQA’s pilot helped shape the benchmarks now used nationwide. The center delivers comprehensive virtual primary care through a multidisciplinary team that includes physicians, nurses, nurse practitioners, behavioral health specialists, pharmacists and patient digital ambassadors.

Virtual Care by the Numbers

In 2024, ChristianaCare’s Center for Virtual Health provided more than 7,500 patient visits, reflecting both rapid growth and strong demand for its virtual-first model.
Services are offered at no copay to ChristianaCare caregivers and their dependents, while availability continues to expand across Delaware and the region.

“At ChristianaCare, we believe virtual care isn’t just a convenience, it’s a catalyst for better health outcomes,” said Brad Sandella, D.O., MBA, medical director, Ambulatory Care for the Center for Virtual Health. “This accreditation affirms our commitment to innovation and excellence. We’re proud to be among the pioneers defining what high-quality virtual care looks like in America.”

Beginning in 2026, ChristianaCare will expand its Virtual Primary Care practice, giving a broader consumer audience convenient access to primary care. At that time, the service will be covered by most insurance carriers and continue to feature dedicated providers in areas such as behavioral health and neurology. ChristianaCare will also continue working with NCQA and other partners to advance best practices nationwide.


MSU researchers develop wood-based material that improves safety and life of lithium-ion batteries

For consumers worried about the risks associated with using lithium-ion batteries — which power everything from phones to laptops to electric vehicles — researchers at Michigan State University have discovered that a natural material found in wood can improve battery safety while also extending battery life. Chengcheng Fang, an assistant professor in the College of Engineering, and Mojgan Nejad, an associate professor in the College of Agriculture and Natural Resources, collaborated to engineer lignin, a natural component of wood that provides support and rigidity, into a thin film separator that can be used inside lithium-ion batteries to prevent short circuits that can cause a fire.

“We wanted to build a better battery,” said Fang. “But we also wanted it to be safe, efficient and sustainable.”

Inside a battery, the positively charged cathode and negatively charged anode electrodes enable the flow of electricity. To keep these electrodes apart, a commercial separator is typically made from polyethylene and polypropylene, plastics that can shrink at temperatures near 100 degrees Celsius. Without the protection of the separator, the cathode and anode have the potential to touch, causing an accidental short circuit and a possible fire or explosion. In contrast, the lignin-based separators the team developed remained stable and did not shrink at temperatures up to 300 degrees Celsius. Fang and her team tested varying thicknesses of lignin and found that films measuring 25 micrometers, which is thinner than one quarter of a human hair, were the most effective at keeping the inside of the battery stable and keeping the anode and cathode from connecting. Using the lignin film inside the battery had another benefit: the increased stability also resulted in an improved cycle life, or how many times the battery can be charged and used.
“We were surprised to see that the lignin film also improved the battery’s cycle life,” said Fang. “We increased the battery’s cycle life by 60%.”

A third advantage of this research is environmental. The team manufactured the lignin separators using a low-cost dry processing method, which allowed it to produce large quantities of the lignin film, on demand, while avoiding the environmentally harmful solvents commonly used in traditional separator manufacturing. In this case, the researchers used lignin and other materials that provided 100% raw material conversion, creating a film without any waste or pollution.

“Lignin, particularly lignosulfonate, is naturally abundant and it doesn’t need any further treatment to function in batteries,” said Fang. “This work demonstrates a new design pathway to improve both the safety and manufacturability of battery materials.”

This research was published in Advanced Materials, and the technology is patent pending through the MSU Innovation Center.


#Expert Perspective: When AI Follows the Rules but Misses the Point

When a team of researchers asked an artificial intelligence system to design a railway network that minimized the risk of train collisions, the AI delivered a surprising solution: Halt all trains entirely. No motion, no crashes. A perfect safety record, technically speaking, but also a total failure of purpose. The system did exactly what it was told, not what was meant. This anecdote, while amusing on the surface, encapsulates a deeper issue confronting corporations, regulators, and courts: What happens when AI faithfully executes an objective but completely misjudges the broader context? In corporate finance and governance, where intentions, responsibilities, and human judgment underpin virtually every action, AI introduces a new kind of agency problem, one not grounded in selfishness, greed, or negligence, but in misalignment.

From Human Intent to Machine Misalignment

Traditionally, agency problems arise when an agent (say, a CEO or investment manager) pursues goals that deviate from those of the principal (like shareholders or clients). The law provides remedies: fiduciary duties, compensation incentives, oversight mechanisms, disclosure rules. These tools presume that the agent has motives—whether noble or self-serving—that can be influenced, deterred, or punished. But AI systems, especially those that make decisions autonomously, have no inherent intent, no self-interest in the traditional sense, and no capacity to feel gratification or remorse. They are designed to optimize, and they do, often with breathtaking speed, precision, and, occasionally, unintended consequences. This new configuration, in which AI acts on behalf of a (still human!) principal, gives rise to a contemporary agency dilemma. Known as the alignment problem, it describes situations in which AI follows its assigned objective to the letter but fails to appreciate the principal’s actual intent or broader values. The AI doesn’t resist instructions; it obeys them too well.
It doesn’t “cheat,” but sometimes it wins in ways we wish it wouldn’t.

When Obedience Becomes a Liability

In corporate settings, such problems are more than philosophical. Imagine a firm deploying AI to execute stock buybacks based on a mix of market data, price signals, and sentiment analysis. The AI might identify ideal moments to repurchase shares, saving the company money and boosting share value. But in the process, it may mimic patterns that look indistinguishable from insider trading. Not because anyone programmed it to cheat, but because it found that those actions maximized returns under the constraints it was given. The firm may find itself facing regulatory scrutiny, public backlash, or unintended market disruption, again not because of any individual’s intent, but because the system exploited gaps in its design. This is particularly troubling in areas of law where intent is foundational. In securities regulation, fraud, market manipulation, and other violations typically require a showing of mental state: scienter, mens rea, or at least recklessness. Take spoofing, where an agent places bids or offers with the intent to cancel them to manipulate market prices or to create an illusion of liquidity. Under the Dodd-Frank Act, this is a crime if done with intent to deceive. But AI systems, especially those using reinforcement learning (RL), can arrive at similar strategies independently. In simulation studies, RL agents have learned that placing and quickly canceling orders can move prices in a favorable direction. They weren’t instructed to deceive; they simply learned that it worked.

The Challenge of AI Accountability

What makes this even more vexing is the opacity of modern AI systems. Many of them, especially deep learning models, operate as black boxes. Their decisions are statistically derived from vast quantities of data and millions of parameters, but they lack interpretable logic.
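The reinforcement-learning dynamic described above can be illustrated with a toy sketch. This is a hypothetical example, not code from any study cited here: a simple epsilon-greedy bandit whose reward measures only profit, with an assumed (made-up) profit edge for a place-and-cancel action. Nothing in the objective encodes deception, yet the agent settles on the spoof-like strategy simply because it pays more.

```python
import random

# Toy bandit (hypothetical numbers): two actions, reward = profit only.
random.seed(0)

ACTIONS = ["trade_normally", "place_and_cancel_first"]
# Assumed mean profits: canceling orders first nudges prices favorably.
MEAN_PROFIT = {"trade_normally": 1.0, "place_and_cancel_first": 1.5}

q = {a: 0.0 for a in ACTIONS}  # running estimate of each action's value
n = {a: 0 for a in ACTIONS}    # how many times each action was tried

for step in range(5000):
    # Epsilon-greedy: explore 10% of the time, otherwise exploit.
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=q.get)
    reward = random.gauss(MEAN_PROFIT[a], 0.5)  # noisy profit signal
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]  # incremental mean update

# The agent converges on the manipulative-looking action without any
# notion of intent: it was never told to deceive, only to maximize reward.
print(max(ACTIONS, key=q.get))
```

The point of the sketch is that intent never appears anywhere in the code; the "spoofing" emerges purely from the reward structure, which is exactly the attribution problem the essay describes.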
When an AI system recommends laying off staff, reallocating capital, or delaying payments to suppliers, it may be impossible to trace precisely how it arrived at that recommendation, or whether it considered all factors. Traditional accountability tools—audits, testimony, discovery—are ill-suited to black box decision-making. In corporate governance, where transparency and justification are central to legitimacy, this raises the stakes. Executives, boards, and regulators are accustomed to probing not just what decision was made, but also why. Did the compensation plan reward long-term growth or short-term accounting games? Did the investment reflect prudent risk management or reckless speculation? These inquiries depend on narrative, evidence, and ultimately the ability to assign or deny responsibility. AI short-circuits that process by operating without human-like deliberation. The challenge isn’t just about finding someone to blame. It’s about whether we can design systems that embed accountability before things go wrong. One emerging approach is to shift from intent-based to outcome-based liability. If an AI system causes harm that could arise with a certain probability, even without malicious design, the firm or developer might still be held responsible. This mirrors concepts from product liability law, where strict liability can attach regardless of intent if a product is unreasonably dangerous. In the AI context, such a framework would encourage companies to stress-test their models, simulate edge cases, and incorporate safety buffers, not unlike how banks test their balance sheets under hypothetical economic shocks. There is also a growing consensus that we need mandatory interpretability standards for certain high-stakes AI systems, including those used in corporate finance. Developers should be required to document reward functions, decision constraints, and training environments.
These document trails would not only assist regulators and courts in assigning responsibility after the fact, but also enable internal compliance and risk teams to anticipate potential failures. Moreover, behavioral “stress tests,” analogous to those used in financial regulation, could simulate how AI systems behave under varied scenarios, including those involving regulatory ambiguity or data anomalies.

Smarter Systems Need Smarter Oversight

Still, technical fixes alone will not suffice. Corporate governance must evolve toward hybrid decision-making models that blend AI’s analytical power with human judgment and ethical oversight. AI can flag risks, detect anomalies, and optimize processes, but it cannot weigh tradeoffs involving reputation, fairness, or long-term strategy. In moments of crisis or ambiguity, human intervention remains indispensable. For example, an AI agent might recommend renegotiating thousands of contracts to reduce costs during a recession. But only humans can assess whether such actions would erode long-term supplier relationships, trigger litigation, or harm the company’s brand. There’s also a need for clearer regulatory definitions to reduce ambiguity in how AI-driven behaviors are assessed. For example, what precisely constitutes spoofing when the actor is an algorithm with no subjective intent? How do we distinguish aggressive but legal arbitrage from manipulative behavior? If multiple AI systems, trained on similar data, converge on strategies that resemble collusion without ever “agreeing” or “coordinating,” do antitrust laws apply? Policymakers face a delicate balance: Overly rigid rules may stifle innovation, while lax standards may open the door to abuse. One promising direction is to standardize governance practices across jurisdictions and sectors, especially where AI deployment crosses borders. A global AI system could affect markets in dozens of countries simultaneously.
Without coordination, firms will gravitate toward jurisdictions with the least oversight, creating a regulatory race to the bottom. Several international efforts are already underway to address this. The 2025 International Scientific Report on the Safety of Advanced AI called for harmonized rules around interpretability, accountability, and human oversight in critical applications. While much work remains, such frameworks represent an important step toward embedding legal responsibility into the design and deployment of AI systems. The future of corporate governance will depend not just on aligning incentives, but also on aligning machines with human values. That means redesigning contracts, liability frameworks, and oversight mechanisms to reflect this new reality. And above all, it means accepting that doing exactly what we say is not always the same as doing what we mean.

Looking to know more or connect with Wei Jiang, Goizueta Business School’s vice dean for faculty and research and Charles Howard Candler Professor of Finance? Simply click on her icon now to arrange an interview or time to talk today.


Professor Roslyn Bill selected for the inaugural cohort of the Big if True Science accelerator

Professor Roslyn Bill is the director of Aston Institute for Membrane Excellence (AIME)
The Big if True Science (BiTS) accelerator aims to bridge the gap between cutting-edge lab science and multi-million-dollar collaborative projects
Professor Bill’s research is focused on the brain’s plumbing system and developing drugs against traumatic brain injury and cognitive decline

Professor Roslyn Bill, director of Aston Institute for Membrane Excellence (AIME), has been selected as an inaugural fellow of the new Big if True Science (BiTS) accelerator. BiTS was set up by Renaissance Philanthropy, a non-profit organisation, to support its scientist and innovator fellows in developing groundbreaking research initiatives and to equip them with the tools, skills, and networks needed to design high-impact, collaborative research programmes and technical projects with multi-million-dollar budgets beyond their own laboratories. The first cohort of 12 fellows was selected after a highly competitive process. The cohort represents diverse fields including neuroscience, environmental engineering, biomedical research, and materials science. Over a 15-week period, the fellows will transform their breakthrough concepts into fundable eight-figure R&D programmes, before pitching their ideas to funders on 10 December 2025.

Professor Bill’s research focuses on the glymphatic system, the brain’s ‘plumbing’ system, which facilitates the movement of fluid and clears waste products. Water moves in and out of brain cells through tiny protein channels in the cell membrane called aquaporins. Uncontrolled water entry, for example after a head injury, can cause catastrophic swelling and severe brain injuries of the type suffered by racing driver Michael Schumacher after a skiing accident. When the flow is impeded, for example as we age, waste products can build up, leading to diseases like Alzheimer’s.
In 2020, Professor Bill was lead author on a paper published in the prestigious journal Cell on how the flow of water through aquaporin-4 is controlled. She is now researching drugs to affect this process, which could have a huge impact on the treatment of traumatic brain injury and cognitive decline. Professor Bill said: “Every year, tens of millions of people are affected by injuries to their brains. Every three seconds, someone in the world develops dementia. There are no medicines that can fix these terrible conditions. Being an inaugural member of BiTS is a great honour, and I am delighted to be in the company of truly inspiring people. This exciting programme offers hope to patients for whom no medicines are available!”


First scientific paper on 3I/ATLAS interstellar object

When the news started to spread on July 1, 2025, about a new object spotted from outside our solar system, only the third of its kind ever known, astronomers at Michigan State University — along with a team of international researchers — turned their telescopes to capture data on the new celestial sighting. The team rushed to write a scientific paper on what they know so far about the object, now called 3I/ATLAS, after NASA’s Asteroid Terrestrial-impact Last Alert System, or ATLAS. ATLAS consists of four telescopes — two in Hawaii, one in Chile and one in South Africa — which automatically scan the whole sky several times every night looking for moving objects. MSU’s Darryl Seligman, a member of the scientific team and an assistant professor in the College of Natural Science, took the lead on writing the paper.

“I heard something about the object before I went to bed, but we didn’t have a lot of information yet,” Seligman said. “By the time I woke up around 1 a.m., my colleagues, Marco Micheli from the European Space Agency and Davide Farnocchia from NASA’s Jet Propulsion Laboratory, were emailing me that this was likely for real. I started sending messages telling everyone to turn their telescopes to look at this object and started writing the paper to document what we know to date. We have data coming in from across the globe about this object.”

The discovery

Larry Denneau, a member of the ATLAS team, reviewed and submitted the observations from the European Southern Observatory’s Very Large Telescope in Chile shortly after the object was observed on the night of July 1. Denneau said that he was cautiously excited. “We have had false alarms in the past about interesting objects, so we know not to get too excited on the first day. But the incoming observations were all consistent, and late that night it looked like we had the real thing.
“It is especially gratifying that we found it in the Milky Way in the direction of the galactic center, which is a very challenging place to survey for asteroids because of all the stars in the background,” Denneau said. “Most other surveys don’t look there.” John Tonry, another member of ATLAS and professor at the University of Hawaii, was instrumental in the design and construction of ATLAS, the survey that discovered 3I. Tonry said, “It’s really gratifying every time our hard work surveying the sky discovers something new, and this comet that has been traveling for millions of years from another star system is particularly interesting.” Once 3I/ATLAS was confirmed, Seligman and Karen Meech, faculty chair for the Institute for Astronomy at the University of Hawaii, managed the communications flow and worked on pulling the data together for submitting the paper. “Once 3I/ATLAS was identified as likely interstellar, we mobilized rapidly,” Meech said. “We activated observing time on major facilities like the Southern Astrophysical Research Telescope and the Gemini Observatory to capture early, high-quality data and build a foundation for detailed follow-up studies.” After confirmation of the interstellar object, institutions from around the world began sharing information about 3I/ATLAS with Seligman.

What scientists know about 3I/ATLAS so far

Though data is pouring in about the discovery, the object is still so far from Earth that many questions remain unanswered. Here’s what the scientific team knows at this point:

It is only the third interstellar object (meaning from outside our solar system) ever detected passing through our solar system.
It’s potentially giving off gas like other comets do, but that needs to be confirmed.
It’s moving really fast at 60 kilometers per second, or 134,000 miles per hour, relative to the sun.
It’s on an orbital path that is shaped like a boomerang or hyperbola.
It’s very bright.
It’s on a path that will leave our solar system and not return, but scientists will be able to study it for several months before it leaves.

The James Webb Space Telescope and the Hubble Space Telescope are expected to reveal more information about its size, composition, spin and how it reacts to being heated over the next few months. “We have these images of 3I/ATLAS where it’s not entirely clear and it looks fuzzier than the other stars in the same image,” said James Wray, a professor at Georgia Tech. “But the object is pretty far away and, so, we just don’t know.” Seligman and his team are specifically interested in 3I/ATLAS’s brightness because it informs us about the evolution of the coma, a cloud of dust and gas. They’ve been tracking it to see if it has been changing over time as the object moves and turns in space. They also want to monitor for sudden outburst events in which the object gets much brighter. “3I/ATLAS likely contains ices, especially below the surface, and those ices may start to activate as it nears the sun,” Seligman said. “But until we detect specific gas emissions, like H₂O, CO or CO₂, we can’t say for sure what kinds of ice or how much are there.”

The discovery of 3I/ATLAS is just the beginning. For Tessa Frincke, who came to MSU in late June to begin her career as a doctoral student with Seligman, having the opportunity to analyze data from 3I/ATLAS to predict its future path could lead to her publishing a scientific paper of her own. “I’ve had to learn a lot quickly, and I was shocked at how many people were involved,” said Frincke.
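As a quick arithmetic aside (illustrative only, not from the research team), the two speed figures quoted above are consistent with each other:

```python
# Convert the reported 60 km/s to miles per hour.
KM_PER_MILE = 1.609344            # exact definition of the international mile

speed_km_s = 60                   # reported speed relative to the sun
speed_mph = speed_km_s * 3600 / KM_PER_MILE

print(f"{speed_mph:,.0f} mph")    # 134,216 mph, i.e. the quoted ~134,000 mph
```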
“Discoveries like this have a domino effect that inspires novel engineering and mission planning.” For Atsuhiro Yaginuma, a fourth-year undergraduate student on Seligman’s team, this discovery has inspired him to apply his current research to see if it is possible to launch a spacecraft from Earth that could get within hundreds of miles of 3I/ATLAS to capture some images and learn more about the object. “The closest approach to Earth will be in December,” said Yaginuma. “It would require a lot of fuel and a lot of rapid mobilization from people here on Earth. But getting close to an interstellar object could be a once-in-a-lifetime opportunity.” “We can’t continue to do this research and experiment with new ideas from Frincke and Yaginuma without federal funding,” said Seligman, who also is a postdoctoral fellow of the National Science Foundation. Seligman and Aster Taylor, a former student of Seligman’s who is now a doctoral candidate in astronomy and astrophysics and a 2023 Fannie and John Hertz Foundation Fellow, wrote the following: “At a critical moment, given the current congressional discussions on science funding, 3I/ATLAS also reminds us of the broader impact of astronomical research. An example like 3I is particularly important to astronomy — as a science, we are supported almost entirely by government and philanthropic funding. The fact that this science is not funded by commercial enterprise indicates that our field does not provide a financial return on investment, but instead responds to the public’s curiosity about the deep questions of the universe: Where did we come from? Are we alone? What else is out there? The curiosity of the public, as expressed by the will of the U.S.
Congress and made manifest in the federal budget, is the reason that astronomy exists.” In addition to MSU, contributors to this research and paper include European Space Agency Near-Earth Objects Coordination Centre (Italy), NASA Jet Propulsion Laboratory/Caltech (USA), University of Hawaii (USA), Auburn University (USA), Universidad de Alicante (Spain), Universitat de Barcelona (Spain), European Southern Observatory (Germany), Villanova University (USA), Lowell Observatory (USA), University of Maryland (USA), Las Cumbres Observatory (USA), University of Belgrade (Serbia), Politecnico di Milano (Italy), University of Michigan (USA), University of Western Ontario (Canada), Georgia Institute of Technology (USA), Universidad Diego Portales, Santiago (Chile) and Boston University (USA).


LSU, FUEL, Syngenta Partner to Develop Low-cost Digital Twins for Chemical Processing Facilities

Derick Ostrenko and Jason Jamerson, faculty in the LSU College of Art & Design, along with engineering advisor David Ben Spry, are pioneering a new approach to industrial innovation using digital twins. The effort is supported by a $217,403 use-inspired research and development (UIRD) award from Future Use of Energy in Louisiana (FUEL). Digital twins are highly detailed virtual replicas of physical assets. The technology is used in engineering to enhance efficiency, safety, and training; however, creating digital twins often requires costly specialized hardware, proprietary software, and engineering-intensive workflows. “This initiative not only advances digital twin technology but also highlights the interdisciplinary power of design and engineering,” FUEL UIRD Director Ashwith Chilvery said. “By applying creative tools in an industrial setting, we’re demonstrating new ways to lower costs and expand access to advanced digital infrastructure.” The collaborative effort between LSU, FUEL, and Syngenta aims to reduce costs by applying techniques more commonly used in the entertainment industry, leveraging free and open-source software and consumer-grade hardware, such as gaming PCs and digital cameras. Most of the work will be conducted by digital art students skilled in 3D modeling and video game production, offering a cost-effective alternative to traditional engineering services. “3D artists and game developers bring both technical expertise and creative vision that can add significant value when paired with traditional engineering approaches,” Spry said. “We’re eager to demonstrate how this talent pool can help accelerate digital transformation in industry.” “Working with an innovative company like Syngenta to advance digital twins for chemical manufacturing is an outstanding opportunity for our researchers and students, and we’re proud of the techniques and talent we’ve developed at LSU.
FUEL’s support of digital twin development for the energy and chemical sectors helps build this technology and unique artistry in Louisiana, for our industries, and for the rest of the nation.” - Greg Trahan, LSU Assistant Vice President of Strategic Research Partnerships

In addition to producing a high-fidelity digital twin of a process unit within an active chemical manufacturing facility, the project will deliver a virtual reality application that allows immersive interaction with the 3D model. Future extensions may include augmented reality overlays of physical equipment or integration of live process data for real-time monitoring and troubleshooting. The ultimate outcome of the project is a validated workflow that reduces the cost of producing digital twins by a factor of at least five compared to conventional engineering methods. This breakthrough has the potential to redefine digital infrastructure for the chemical processing industry, making it more accessible, scalable, and adaptable to future needs. Learn more about LSU’s digital twin work with Syngenta as well as NASA.

About FUEL

Future Use of Energy in Louisiana (FUEL) positions the state as a global energy innovation leader through high-impact technology development and innovation that supports the energy industry in lowering carbon emissions. FUEL brings together a growing team of universities, community and technical colleges, state agencies and industry and capital partners led by LSU. With the potential to receive up to $160 million in funding from the U.S. National Science Foundation through the NSF Regional Innovation Engines program and an additional $67.5 million from Louisiana Economic Development, FUEL will advance our nation’s capacity for energy innovation through use-inspired research and development, workforce development, and technology commercialization. For more information, visit fuelouisiana.org.

About Syngenta

Syngenta Crop Protection is a global leader in agricultural innovation.
It is focused on empowering farmers to make the transformation required to feed the world’s population while protecting our planet. Its bold scientific discoveries deliver better benefits for farmers and society on a bigger scale than ever before. Syngenta CP offers a leading portfolio of crop protection technologies and solutions that support farmers to grow healthier plants with higher yields. Its 17,700 employees are helping to transform agriculture in more than 90 countries. Syngenta Crop Protection is headquartered in Basel, Switzerland, and is part of the Syngenta Group. Read our stories and follow us on LinkedIn, Instagram & X.


The Sky’s the Limit: Researching surface impacts to improve the durability of aircraft

Associate professor Ibrahim Guven, Ph.D., from the Department of Mechanical and Nuclear Engineering is conducting a research project funded by the Department of Defense (DoD) that explores building aircraft for military purposes and civilian transportation that can travel more than five times the speed of sound. Guven’s role in this project is to consider the durability of aircraft surfaces against elements such as rain, ice, and debris. His research group is composed of Ph.D. students who assist with the study, and he has collaborated with other institutions, including the University of Minnesota, Stevens Institute of Technology and the University of Maryland.

Why did you get involved with this research project?

The intersection of need and our interests decides what we research. I’m interested in physics and have been working on methods to strengthen aircraft exteriors against the elements for 12 years. We started with looking at sand particle impact damage, and then we graduated from that to studying raindrop impact because that’s a more challenging problem. Sand impact is not as challenging in terms of physics. A liquid and a solid behave differently under impact conditions. The shape of the raindrop changes prior to the impact due to the shock layer ahead of the aircraft. Researching this impact requires simulating the raindrop-shock layer interaction that gives us the shape of the droplet at the time of contact with the aircraft surface. Unlike with sand, analyzing raindrop impact starts at that point, which requires accurate modeling of the pressure being applied. As the aerospace community achieves faster speeds, there’s a need to understand what will affect a flight’s safety and the aircraft’s structural integrity. That need is what I’m helping to fulfill.

Were there any challenges you and your research group faced while working on this study? How did you overcome them?

Finding data was hard.
I’m a computational scientist, meaning I implement the mathematical differential equations that govern physics in computer code that predicts how something will behave. My experiments are virtual, so to ensure that my models work well, I need experimental data for validation. However, conducting experiments on this problem is extremely challenging. That’s the roadblock. Currently, we refer to data from the seventies and eighties; beyond that, this kind of information is not available. We are working to generate the data my computational methods need for their validation. An example is the nylon bead impact experiment. Some researchers found that shooting a nylon bead at a target leads to damage similar to that from a raindrop of the same size, and it is much easier and cheaper to shoot nylon beads than to run experiments involving raindrops. However, this similarity vanishes as we go to higher velocities.

How do you typically gather data for a project of this nature?

We are working with a laboratory under the U.S. Navy. They can accelerate specimens to relevant speeds, meaning they can launch them at the desired velocity. A colleague at Stevens Institute of Technology also came up with a droplet levitator. He uses acoustic waves emitted by tiny speakers, playing a certain sound at a certain frequency, to create enough air pressure to suspend droplets in midair. To an untrained eye, it looks like magic. They levitate droplets and use a railgun to shoot our samples at them; the samples hitting the droplets are stand-ins for the aircraft surface material. The impact is recorded with high-speed cameras that can capture ten million frames per second, so we get a good, high-fidelity picture of the event. That is the type of data I’m seeking, and this is how I get it from my collaborators.

What was your overall experience working with the students in your research group?
I like to think it was positive. I try to be a nice advisor and give them space to explore, fail and bring their own ideas. Even if I feel like we’re at a dead end, I step back and let them figure it out. My role is to help them grow: teach them, train them and help them along the way. That’s the experience.

Did you notice any personal changes in your students during this project?

Yeah, I have. When they’re just out of their undergraduate programs, confidence is sometimes lacking. You see them become more sure of themselves as they learn more and more. Often, regardless of whether English is their native language, writing is a big issue for every student. How one presents ideas in written form is a persistent problem in engineering, and that is where I see the most growth. Again, an advisor has to be a guide and also have patience. Eventually, after working on multiple paper drafts, I can see tremendous improvement. You must allow students to see their shortcomings. It’s important to work with them to refine how they frame a problem, explain it to a wide audience in concise terms and use neutral language without leading readers to certain conclusions.

Why do you think that this research is important?

Somebody has to do it, right? I believe that I’m the right person because of my background. Personally, I think if this research makes for safer travel conditions, and if I have something to offer, then why not? If we can accurately simulate what happens in these conditions, we can use our methods to test designs for damage mitigation. For example, we can perform simulations with different surface materials for the aircraft to see whether a different material or a layered coating system leads to less damage. In the bigger picture, we’re working on a very narrow problem in our field, but we don’t know how useful it’s going to be 10, 15 or 30 years from now.
Whatever we study and put out there in publications may help another researcher in a different context many years later. That could be space research, modeling the atmosphere of a different planet, or something related to our bodies. Parts of the physics in this problem do not apply only to high-speed flight; it could be many different things. One has to understand that what is studied may seem obscure today, but because the universe is more or less governed by the same physics, everything should be put in a theoretical framework, done right and shared with the community. People may learn things that become relevant in the future. It’s not uncommon.

What is another subject that you plan to study?

The next natural step is coming up with strategies to mitigate damage in these scenarios. If avoiding a risk is not an option, can we actually come up with a solution? We have to determine how to modify an aircraft’s design to prevent a catastrophe. Another extension of my research would be to examine the landing of spacecraft on dusty planetary bodies. When landing on Earth, aircraft approach and reach the ground very smoothly. A spacecraft, on the other hand, comes down slowly and needs a lot of reverse propulsion for a soft landing. As it does, it kicks up a large amount of dust, which blows back and hits the spacecraft. Accounting for the damage that occurs due to particle impact is a direct connection to my work. This again is an open area, and because we have ambitions to establish a permanent presence on dusty bodies like the moon and Mars, we have to nail down the concept of landing safely. That is where my research could help.


Empowering independence: Blue Envelope program facilitates safer communication between drivers with disabilities and police

University of Delaware, in close collaboration with Delaware State Police, the Delaware Association of Chiefs of Police, the Office of Highway Safety, and the Delaware DMV, has co-developed the Blue Envelope Program – now launched statewide as of Aug. 26, 2025. The program offers free, no-questions-asked, no-ID-required envelopes that drivers with disabilities (including communication differences, sensory needs, mobility limitations or other differences) can keep in their vehicle. The envelope includes space for emergency contact or medical notes, instructions for law enforcement and tips to ensure safe, respectful, clear exchanges during traffic stops. The University of Delaware Center for Disabilities Studies helped review and approve the content and design to ensure inclusivity and accessibility. UD experts – including Sarah Mallory (Associate Director of the Center for Disabilities Studies) and Alisha Fletcher (Director, Delaware Network for Excellence in Autism) – are available to speak about how the program supports an underserved and underrepresented group and improves outcomes in law enforcement encounters.

Why This Matters: Traffic stops can be stressful for drivers with disabilities and can lead to misinterpretations or heightened risk. The Blue Envelope helps reduce misunderstandings while preserving dignity and safety. Delaware joins around 10 other states (including Maine, Massachusetts, New Jersey, New York, Rhode Island and Vermont) in adopting a traffic-stop communication aid for drivers with disabilities. This is a practical, no-barrier solution that promotes equity, accessibility and respectful law enforcement practices.

To speak with either Mallory or Fletcher to learn more about the program's development, impact and what’s next, email mediarelations@udel.edu.


Two Decades Later, Villanova Engineering Professor Who Assisted in Hurricane Katrina Investigation Reflects on Role in the Storm's Aftermath

Twenty years ago, Hurricane Katrina hit the southeastern coast of the United States, devastating cities and towns across Louisiana, Florida, Mississippi, Alabama and beyond. The storm caused nearly 1,400 fatalities, displaced more than 1 million people and generated over $125 billion in damage. Rob Traver, PhD, P.E., D. WRE, F.EWRI, F.ASCE, professor of Civil and Environmental Engineering at Villanova University, assisted in the U.S. Army Corps of Engineers' (USACE) investigation of the failure of the New Orleans Hurricane Protection System during Hurricane Katrina, and earned an Outstanding Civilian Service Medal from the Commanding General of USACE for his efforts. Dr. Traver reflected on his experience working in the aftermath of Katrina and on how the findings from the investigation have shaped U.S. hurricane responses over the past 20 years.

Q: What was your role in the investigation of the failure of the New Orleans Hurricane Protection System?

Dr. Traver: Immediately after Hurricane Katrina, USACE wanted to assess what went wrong with the flood protections that failed during the storm in New Orleans, but they needed qualified researchers who could oversee their investigation. The American Society of Civil Engineers (ASCE), an organization I have been a part of for many years, was hired for this purpose. Our job was to make sure that USACE was asking the right questions during the investigation, questions that would lead to concrete answers about the causes of the failure of the hurricane protection system. My team focused on analyzing the risk and reliability of the water resource system in New Orleans, and we worked alongside the USACE team, starting with revising the investigation questions in order to get answers about why these water systems failed during the storm.

Q: What was your experience like in New Orleans in the aftermath of the hurricane?
DT: My team went down to New Orleans a few weeks after the hurricane, visited all the sites we were reviewing and met with infrastructure experts along the way as progress was made on the investigation. As we flew overhead and looked at the devastated areas, seeing all the homes that had been washed away, it was hard to believe that this level of destruction could happen in a city in the United States. As we started to realize the errors that were made and the things that went wrong leading up to the storm, it was heartbreaking to think about how lives could have been saved if the infrastructure in place had been treated as one system and undergone a critical review.

Q: What were the findings of the ASCE and USACE investigation team?

DT: USACE focused on New Orleans because they wanted to figure out why the city’s levee system—a human-made barrier that protects land from flooding by holding back water—failed during the hurricane. The city manages pump stations that are designed to remove water after a rainfall event, but they were not well connected to the levee system and not built to handle major storms. So one of the main reasons for the levee system failure was that the pump stations and levees were not treated as one system, which contributed to the mass flooding we saw in New Orleans. Another issue we found was that the designers of the levee system never factored in a failsafe for what would happen if a bigger storm occurred and the levees overflowed. They had the right idea by building flood protection systems, but they didn’t think a storm the size of Katrina could occur and never updated the design to bring in new meteorological knowledge on the size of potential storms. Since then, the city has completely rebuilt the levees using these lessons learned.

Q: What did researchers, scientists and the general population learn from Katrina?
DT: In areas that have had major hurricanes over the past 20 years, it’s relatively easy to find what went wrong and fix it for the future, so I don’t worry as much about a hurricane striking a place that has already been hit. What I worry about is a hurricane hitting a town or city that has not experienced one, where we have no idea what the potential frailties of the prevention systems could be. Scientists and researchers also need to make the high-risk areas for hurricane activity in the United States known to those who live there. People need to know what their risk is if they are in areas with an increased risk of storms and flooding, and what they should do when a storm hits, especially now with the changes we are seeing in storm size.


As Trump rolls back regulations, this expert examines the costs of compliance

President Donald Trump has signaled a push to scale back federal regulation across a wide range of industries, reigniting a national debate over the costs and benefits of government rules. For Joseph Kalmenovitz, an assistant professor of finance at the University of Rochester’s Simon Business School who studies the economics of regulation, the moment underscores the importance of understanding not just what regulations do — but how much they cost. Kalmenovitz, who combines legal training with cutting-edge empirical methods, has developed innovative ways to measure regulatory intensity. His research shows how compliance requirements translate into millions of additional hours of paperwork for firms — costs that often fall outside public view. A recent Bloomberg Law article cited his work in explaining how Wall Street alone has devoted an estimated 51 million extra hours each year to compliance since the Great Financial Crisis. Beyond tallying hours, Kalmenovitz’s studies also explore how overlapping rules across agencies — what he calls “regulatory fragmentation” — can stifle productivity, profitability and growth, especially for smaller firms. His long-term aim is to provide evidence-based insights that can guide smarter rulemaking in Washington. “The dream is that people will take insights from my work and use them to improve the way regulation is conceived,” he told Simon Business Magazine. Kalmenovitz is a leading voice in translating data into meaningful insights about the hidden costs and design of regulation. His work has been published in the Journal of Finance, the Review of Financial Studies, Management Science and the Journal of Law and Economics. He is available for interviews and can be contacted through his profile.
