From classrooms to communities: Rethinking civic engagement in K–12 education
When national headlines focus on school board battles and political polarization, James Bridgeforth, assistant professor of educational leadership at the University of Delaware, is focused on what's possible instead: building a more inclusive, participatory model of democracy through public education. His research in UD's College of Education and Human Development explores how community voice, equity and local leadership intersect to shape education policy – and how school boards can serve as vital engines for rebuilding public trust in government.

"Despite the often sensationalized stories of chaotic school board meetings and the influence of national 'culture war' issues, I still believe that it's possible for people from different backgrounds, experiences and points of view to come together to figure out how to best serve the needs of all of our children." – Bridgeforth

Bridgeforth's work centers on education governance, policy and leadership, with particular attention to how racism and anti-Blackness manifest in schools and policymaking spaces. His scholarship highlights the importance of inclusive decision-making, arguing that effective education policy must represent the diverse communities it serves. He recently published the report "Navigating Democracy in Divided Times" with co-authors on this topic.

As part of his work with the Getting Down to Facts III project at Stanford University, Bridgeforth collaborates with researchers studying how to improve California's TK–12 system and inform the next governor's education policy agenda. His work documents the complex realities faced by local school board members – often minimally paid community leaders navigating contentious public discourse, social media pressure and limited resources. He notes that this research can be applied to school boards around the country.
The next frontier: Youth civic engagement

Over the next several years, Bridgeforth aims to deepen understanding of how schools can nurture young people's civic skills and leadership capacity through participation in governance. One proposed project – "Strengthening Opportunities for Youth Civic Engagement and Student Voice in Educational Governance" – uses participatory action research to explore how student board member policies and engagement practices foster civic agency and democratic mindsets. This collaborative work brings together youth-led community organizations and education researchers to study how these experiences shape long-term civic behavior – from voting to public service.

Why it matters

Bridgeforth's research arrives at a pivotal time for American democracy. As trust in public institutions erodes, local school boards remain one of the spaces where citizens can directly shape policy. His work points to a hopeful truth: democracy's renewal may begin in classrooms, communities and the local school board meetings shaping them. For journalists covering education, race or civic engagement, Bridgeforth offers data-driven insight, lived experience and policy expertise – helping make sense of one of the most pressing questions of our time: How can we build systems that truly serve all students and communities?

"This work collectively demonstrates a number of promising opportunities to foster more inclusive, community-connected forms of governance, particularly in a time of eroding trust in government institutions." – Bridgeforth

ABOUT JAMES BRIDGEFORTH
Assistant Professor, College of Education and Human Development

James Bridgeforth is an educator, researcher and policy advocate whose work focuses on community voice in education policy and the politics of educational leadership.
His scholarship has appeared in top journals including Journal of School Leadership, Education Policy Analysis Archives, Educational Evaluation and Policy Analysis and Educational Administration Quarterly, and he has contributed to Education Week and The Washington Post. A recipient of the National Academy of Education/Spencer Foundation Dissertation Fellowship, Bridgeforth holds a Ph.D. in Urban Education Policy from the University of Southern California, an M.Ed. in Educational Administration and Policy from the University of Georgia, and a B.A. in Political Science and Sociology from Georgia College & State University.

Expert available for:
Interviews on K–12 school governance, education policy and democracy
Commentary on community voice and equity in education decision-making
Analysis of youth civic engagement and participatory leadership

To contact Bridgeforth, email mediarelations@udel.edu.

Why College Students Are Storming Fields More Often
In his most recent Forbes article, Dr. Marshall Shepherd takes a scientific look at why college students and fans storm football fields, blending insights from psychology, meteorology and social dynamics. He explains that field-storming is not simply a burst of emotion – it's a predictable outcome of collective excitement and shared identity. After an unexpected win or a high-stakes rivalry game, thousands of people simultaneously experience what psychologists call "emotional contagion," amplifying feelings of unity and celebration. This shared surge, combined with environmental cues like stadium acoustics and crowd density, transforms the act into what Shepherd calls a form of "social weather event."

"Storming the field isn't chaos – it's choreography fueled by emotion and crowd physics."

Shepherd also examines the logistical and safety implications. He notes that while universities often celebrate these spontaneous displays of school pride, they carry risks ranging from crowd injuries to property damage. Yet institutions are reluctant to ban them outright because these moments reinforce fan loyalty and media attention. Shepherd suggests that the solution lies in better understanding crowd behavior: designing stadiums with safe egress routes, training security teams to manage surges, and anticipating emotional tipping points rather than reacting afterward.

"Understanding the science behind fan behavior lets us manage energy, not suppress it."

Ultimately, Shepherd's piece reframes field-storming as a fascinating mix of culture and physics – where joy, identity and momentum collide. He urges universities to see these moments not as mere rule-breaking but as opportunities to study human behavior in motion, and to design environments that celebrate passion without compromising safety.

Dr. J. Marshall Shepherd is a leading international weather-climate expert and the Georgia Athletic Association Distinguished Professor of Geography and Atmospheric Sciences at the University of Georgia. Dr. Shepherd was the 2013 president of the American Meteorological Society (AMS), the nation's largest and oldest professional/science society in the atmospheric and related sciences. View his profile here.

Dr. Shepherd is available to speak with the media about this interesting topic – simply click on his icon now to arrange an interview today.

Multi-university AI research may revolutionize wildfire evacuation
As wildfires grow wilder, the University of Florida and two other universities are developing large language models to make evacuations safer and more efficient. Armed with a nearly $1.2 million National Science Foundation grant, UF, Johns Hopkins University and the University of Utah are creating these AI-based models to simulate human behavior during evacuations – information that will help emergency managers shape more effective evacuation plans.

"Strengthening wildfire resilience requires accurate modeling and a deep understanding of collective human behavior during evacuations," said UF project lead Xilei Zhao, Ph.D., an associate professor with the Engineering School of Sustainable Infrastructure and Environment. "There is a critical need for simulation models that can realistically capture how civilians, incident commanders and public safety officials make protective decisions during wildfires."

Xilei Zhao focuses on developing and applying data and computational science methods to tackle problems in transportation and resilience. View her profile here.

Existing simulation models face limitations, particularly in making reliable predictions under varied wildfire scenarios. New AI models can simulate how diverse groups of people behave and interact during the hurried scramble to seek safety. Zhao's team is developing a convergent AI framework for wildfire evacuation simulations powered by psychological theory-informed large language models. The project will produce simulation methods to promote teaching, training and learning, and support wildfire resilience by allowing public safety officials to use open-access tools.

"This research seeks to be a transformative step toward improving the behavioral realism, prediction accuracy and decision-support capability of wildfire evacuation simulation models," Zhao said.

Zhao partnered with Johns Hopkins professor Susu Xu, Ph.D., and University of Utah professors Thomas Cova, Ph.D., and Frank Drews, Ph.D.
The preliminary results of the study were recently presented at the 63rd Annual Meeting of the Association for Computational Linguistics.

"In that paper, we started to train the model on the survey data we collected to see how we can accurately predict people's evacuation decisions with LLMs," Zhao said.

Research objectives include extending the Protective Action Decision Model for civilians and public safety officials, developing psychological theory-informed large language model agents for protective modeling, and generating a realistic synthetic population as input for the simulation platform. The team also plans to develop learning-based simulations and predict human behavior under scenarios such as fire spread, warnings and infrastructure damage.

This research comes at a critical time, as the number of wildfires has significantly increased globally. About 43% of the 200 most damaging fires occurred in the decade leading up to 2023, according to a recent study in Science. The intensity, size and volume of wildfires are threatening more urban areas.

"If you go into the urban area, many people do not have cars, or they need additional mobility support," Zhao said. "For example, the LA fires impacted nursing homes with a lot of elderly people, many of whom are immobile or lack the ability to drive. That's a big problem. This would be very relevant to them."

The large language models will provide important context for evacuation planning as well as real-time decision making.

"We envision this tool being used during planning," Zhao said, "so emergency managers can test different kinds of scenarios to determine how to draw the evacuation zones, where to issue the orders first and how to design the communications messaging."

This is important and critical research as wildfires become more common across North America. If you're a reporter looking to connect and learn more, let us help.
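The approach Zhao describes – LLM agents, informed by Protective Action Decision Model-style factors, choosing whether each synthetic resident evacuates – can be sketched in miniature. Everything below is illustrative, not the UF team's actual code: the `Resident` fields, the prompt wording and the threshold rule are assumptions, and `decide` is a stub standing in where a real, survey-trained LLM call would go.

```python
# Hypothetical sketch of an LLM-agent evacuation loop. A stub replaces the
# model call so the structure of the simulation is visible.

from dataclasses import dataclass

@dataclass
class Resident:
    name: str
    has_vehicle: bool
    received_warning: bool
    perceived_threat: float  # 0.0-1.0, as might come from survey responses

def build_prompt(r: Resident) -> str:
    """Compose the persona prompt an LLM agent would receive."""
    return (
        f"You are {r.name}. Vehicle access: {r.has_vehicle}. "
        f"Official warning received: {r.received_warning}. "
        f"Perceived threat (0-1): {r.perceived_threat}. "
        "A wildfire is approaching. Decide: EVACUATE or STAY."
    )

def decide(r: Resident) -> str:
    """Stub for the LLM call; a simple threshold rule for illustration."""
    score = r.perceived_threat + (0.3 if r.received_warning else 0.0)
    if not r.has_vehicle:
        score -= 0.2  # mobility constraints delay evacuation
    return "EVACUATE" if score >= 0.5 else "STAY"

population = [
    Resident("A", has_vehicle=True, received_warning=True, perceived_threat=0.6),
    Resident("B", has_vehicle=False, received_warning=False, perceived_threat=0.4),
]

decisions = {r.name: decide(r) for r in population}
print(decisions)  # A evacuates; B stays
```

In the real framework described above, aggregating such per-agent decisions over a realistic synthetic population is what would let emergency managers test evacuation zones and warning strategies before a fire.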
Xilei Zhao is available to speak with the media – simply click on her icon now to arrange an interview today.

3 Things A Climate Scientist Learned From Jane Goodall
In a recent Forbes article, Marshall Shepherd reflects on three key lessons he has drawn from the life and work of Dr. Jane Goodall. Shepherd frames Goodall's legacy – spanning primatology, conservation and public engagement – as deeply instructive for climate scientists and environmental advocates. He argues that her methods and mindset have more to teach than simply how to observe nature; they speak to how we engage with the world.

First, Shepherd highlights immersion: Goodall's decades of patient observation in the Tanzanian forests demonstrate the power of being physically – and emotionally – present to truly learn from ecosystems. For Shepherd, climate science must go beyond remote data collection: getting into the field and understanding local realities matters.

Second, he emphasizes patience. Goodall's willingness to wait, sometimes for years, for breakthroughs in understanding primate behavior offers a lesson for climate researchers, whose progress may unfold over decades.

Third, he admires her tenacity – a commitment sustained over a lifetime, even under adversity. Shepherd suggests that tackling climate change requires that same kind of enduring resolve, especially when public attention or funding waxes and wanes.

Through these reflections, Shepherd presents Goodall not just as an icon of conservation but as a model for scientific humility and perseverance. He invites readers to see the parallels between animal behavior research and climate work – and to adopt practices of listening, patience and resolve in confronting our planet's changing trajectory.

Dr. J. Marshall Shepherd is a leading international weather-climate expert and the Georgia Athletic Association Distinguished Professor of Geography and Atmospheric Sciences at the University of Georgia. Dr. Shepherd was the 2013 president of the American Meteorological Society (AMS), the nation's largest and oldest professional/science society in the atmospheric and related sciences. View his profile here.

Dr. Shepherd is available to speak with the media about this topic – simply click on his icon now to arrange an interview today.

A global team of researchers using the new X-ray Imaging and Spectroscopy Mission (XRISM) telescope, launched in fall 2023, discovered something unexpected while observing a well-studied neutron star system called GX13+1. Instead of simply capturing a clearer view of its usual, predictable activity, their February 2024 observation revealed a surprisingly slow cosmic wind, the cause of which could offer new insights into the fundamental physics of how matter accumulates, or "accretes," in certain types of binary systems.

The study was one of the first from XRISM to examine wind from an X-ray binary system, and its results were published in Nature—the world's leading multidisciplinary science journal—in September 2025. Spectral analysis indicated GX13+1 was at that very moment undergoing a luminous super-Eddington phase, meaning the neutron star was shining so brightly that the radiation pressure from its surface overcame gravity, leading to a powerful ejection of any infalling material (hence the slow cosmic wind). Further comparison to previous data implied that such phases may be part of a cycle, and could "change the way we think about the behavior of these systems," according to Joey Neilsen, PhD, associate professor of Physics at Villanova University.

Dr. Neilsen played a prominent role as a co-investigator and one of the corresponding authors of the project, along with colleagues at the University of Durham (United Kingdom), Osaka University (Japan), and the University of Teacher Education Fukuoka (Japan). Overall, the collaboration featured researchers from dozens of institutions across the world.

GX13+1 is a binary system consisting of a neutron star orbiting a K5 III companion star—a cooler giant star nearing the end of its life. Neutron stars are small, incredibly dense cores of supergiant stars that have undergone supernova explosions. They are so dense, Dr. Neilsen says, that one teaspoon of their material would weigh about the same as Mount Everest.
Because of this, they exert an incredibly strong gravitational field. When these highly compact neutron stars orbit companion stars, they can pull in, or accrete, material from those companions. That inflowing material forms a visible rotating disk of gas and dust called an accretion disk, which is extremely hot and shines brightly in X-rays. It's so bright that sometimes it can actually drive matter away from the neutron star.

"Imagine putting a giant lightbulb in a lake," Dr. Neilsen said. "If it's bright enough, it will start to boil that lake and then you would get steam, which flows away like a wind. It's the same concept; the light can heat up and exert pressure on the accretion disk, launching a wind."

The original purpose of the study was to use XRISM to observe an accretion disk wind, with GX13+1 targeted specifically because its disk is persistently bright, it reliably produces winds, and it has been well studied using Chandra—NASA's flagship X-ray observatory—and other telescopes for comparison. XRISM can measure the X-ray energies from these systems a factor of 10 more precisely than Chandra, allowing researchers to both demonstrate the capabilities of the new instrument and study the motion of outflowing gas around the neutron star. This can provide new insights into accretion processes.

"It's like comparing a blurry image to a much sharper one," Dr. Neilsen said. "The atomic physics hasn't changed, but you can see it much more clearly."

The researchers uncovered an exciting surprise when the higher-resolution spectrum showed much deeper absorption lines than expected. They determined that the wind was nearly opaque to X-rays and slow at "only" 1.4 million miles per hour—surprisingly leisurely for such a bright source. Based on the data, the team was able to infer that GX13+1 must have been even brighter than usual and undergoing a super-Eddington phase. So much material was ejected that it made GX13+1 appear fainter to the instrument.
"There's a theoretical maximum luminosity that you can get out of an accreting object, called the Eddington limit. At that point, the radiation pressure from the light of the infalling gas is so large that it can actually hold the matter away," Dr. Neilsen said, equating it to standing at the bottom of a waterfall and shining light so brightly that the waterfall stops. "What we saw was that GX13+1 had to have been near, or maybe even above, the Eddington limit."

The team compared their XRISM data from this super-Eddington phase to a set of previous observations without the resolution to measure the absorption lines directly. They found several older observations with faint, unusually shaped X-ray spectra similar to the one seen by XRISM.

"XRISM explained these periods with funny-shaped spectra as not just anomalies, but the result of this phenomenally strong accretion disk wind in all its glory," Dr. Neilsen said. "If we hadn't caught this exact period with XRISM, we would never have understood those earlier data."

The connection suggests that this system spends roughly 10 percent of its time in a super-Eddington phase, which means super-Eddington accretion may be more common than previously understood—perhaps even following cycles—in neutron star or black hole binary systems.

"Temporary super-Eddington phases might actually be a thing that accreting systems do, not just something unique to this system," Dr. Neilsen said. "And if neutron stars and black holes are doing it, what about supermassive black holes? Perhaps this could pave the way for a deeper understanding of all these systems."
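The Eddington limit Dr. Neilsen describes has a standard textbook form (this expression is background, not quoted from the paper): for fully ionized hydrogen, the luminosity at which outward radiation pressure on electrons balances gravity on an object of mass $M$ is

```latex
L_{\mathrm{Edd}} \;=\; \frac{4\pi G M m_p c}{\sigma_T}
\;\approx\; 1.8 \times 10^{38} \left(\frac{M}{1.4\,M_\odot}\right) \ \mathrm{erg\,s^{-1}},
```

where $m_p$ is the proton mass and $\sigma_T$ the Thomson scattering cross-section. A 1.4-solar-mass neutron star shining above roughly this luminosity pushes infalling gas back out, which is exactly the slow, dense wind XRISM observed.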
Some interesting areas that I've seen in the press:

"Consumer sentiment was measured at the 7th-lowest point (55.1) since the index's inception in 1952, yet we're not seeing a huge decrease in spending (CNN). Part of the argument is that spending is an average measure: really wealthy consumers are not feeling the pinch and are spending like normal or more so, while less financially well-off individuals are pulling back their spending (Spectrum Local News). Presumably, the shutdown doesn't help that figure.

In terms of consumer groups affected, let's look at government workers first. An article by the BBC claimed roughly 750,000 'non-essential' federal workers could be furloughed without pay. This means that many to most of those workers are going to struggle to pay for necessities, and this becomes more and more of a strain the longer the shutdown wears on.

Furloughed workers: Most furloughed workers are required by law to receive back pay when the shutdown is over. That could in some ways create more purchases in the future if they can't be made currently, but it could also lead to things like more credit card debt as people put charges on a credit card to pay back later. While that might make sense from a consumer psychology standpoint, it's a very risky practical strategy. Government contractors don't get the same guarantee. Businesses that rely heavily on such groups (e.g., in a town where many residents fall into those segments) might suffer or shutter. This means other consumers that frequent those establishments have their routines disrupted and are forced to find other providers.

Essential workers: Then we have the group of 'essential' workers that must go to work and still not be paid – air traffic controllers, the military, TSA agents, certain law enforcement groups, etc. – all of whom might draw back spending with no immediate income.
That can cause major issues for retailers and producers, which could lead to more layoffs in the private sector, putting more consumers into financial straits. If you're someone that likes to visit national parks, zoos like the National Zoo, or the Smithsonian museums (which have said they'll have funding at least through October 6th), you could be disappointed by reduced accessibility or outright closures due to the shutdown, again according to the BBC.

Healthcare: Healthcare could definitely be affected, particularly for those on Medicaid and Medicare (i.e., the elderly and poor). So if you view medical services as a consumer good, there will be issues there as well (increased wait times, decreased satisfaction, etc.), which is likely to add apprehension and anxiety for many consumers.

Travel: If you're a traveler, staffing shortages among TSA agents and air traffic controllers could lead to significant travel delays, which could disrupt leisure or business plans or force people to cancel plans altogether. If you're traveling abroad, getting your passport updated could take longer.

All these things (and many more) may happen or not depending on the length of the shutdown and the severity of the furloughs. Those in better financial positions will suffer less, while those already in less desirable financial situations might find that delays in some of their normally federally funded services (e.g., SNAP, WIC) create even bigger issues."
Recently, Craig Albert, PhD, was published in the Journal of Political Science Education. The article, "Cyber-Enabled Education Operations: Towards a Strategic Cybersecurity Curriculum for the Social Sciences," argues that U.S. cyber intelligence training is overly technical and should integrate political science and social science courses to build strategic thinkers who understand adversaries' motives and policies, ultimately strengthening U.S. national security.

Craig Albert, PhD, is a professor of Political Science and the graduate director of the PhD in Intelligence, Defense, and Cybersecurity Policy and the Master of Arts in Intelligence and Security Studies at Augusta University. His areas of concentration include international security studies, cybersecurity policy, information warfare/influence operations/propaganda, ethnic conflict, cyberterrorism and cyberwar, and political philosophy. View his profile here.

Here's the abstract from the paper on ResearchGate:

Most cyber intelligence analysts within the United States Intelligence Community (USIC) typically enter the field with strong technical expertise, often derived from degrees in computer science or extensive technical training. However, a critical gap exists in education and training on the strategic dimensions of cyber threats. This paper advocates for the integration of cybersecurity-focused courses within social science disciplines, particularly political science, to cultivate strategic thinkers who can contribute effectively to the USIC. The inclusion of strategic policy coursework in political science curricula, as well as more broadly across social science programs, would better prepare students for careers in the USIC by deepening their understanding of the motivations, capabilities, and intentions of the United States' strategic adversaries in cyberspace—specifically Russia, China, Iran, and North Korea.
Such training would equip analysts with critical insights to improve their effectiveness in identifying, attributing, and mitigating cyber intrusions. Moreover, a stronger emphasis on the human behavior and policy dimensions of cybersecurity would enhance the overall competency of the USIC workforce, thereby strengthening U.S. national security policy. Looking to know more? Let us help. Craig Albert, PhD, is available to speak with media. Simply click on his icon now to arrange an interview today.

How well-meaning parents sink their child's chances of college admission
"What's the number one parent behavior that will hurt a child's chance of admission?" The question was posed to Robert Alexander, the University of Rochester vice provost and dean of enrollment management, on the podcast "College Knowledge." He was quick to answer.

"Parents need to be empowering the student and not driving the conversation" when it comes to choosing a college and engaging with college admissions professionals, Alexander replied.

He explained that too many parents have a narrow view of what they deem "acceptable" institutions of higher education for their child. They come by it honestly, he said, with most of their knowledge derived from their own college searches and dreams a generation ago. They tend to home in on 20 or 30 schools when, in reality, the universe of quality colleges and universities has expanded exponentially since the days these parents were considering where to study, Alexander said.

"Widening that lens and thinking beyond the 20 or 30 schools they know a lot about, or think they know a lot about, or see a lot of bumper stickers for – that's really important," Alexander said. "There are many more really great institutions, and what's important is not your child getting into 'the best college' that they can, but instead your child finding the best fit at one or maybe a range of different institutions."

Alexander is an expert in undergraduate admissions and enrollment management who speaks on the subjects to national audiences and whose work has been published in national publications. Click his profile to reach him.

Expert Perspective: When AI Follows the Rules but Misses the Point
When a team of researchers asked an artificial intelligence system to design a railway network that minimized the risk of train collisions, the AI delivered a surprising solution: Halt all trains entirely. No motion, no crashes. A perfect safety record, technically speaking, but also a total failure of purpose. The system did exactly what it was told, not what was meant.

This anecdote, while amusing on the surface, encapsulates a deeper issue confronting corporations, regulators, and courts: What happens when AI faithfully executes an objective but completely misjudges the broader context? In corporate finance and governance, where intentions, responsibilities, and human judgment underpin virtually every action, AI introduces a new kind of agency problem, one grounded not in selfishness, greed, or negligence, but in misalignment.

From Human Intent to Machine Misalignment

Traditionally, agency problems arise when an agent (say, a CEO or investment manager) pursues goals that deviate from those of the principal (like shareholders or clients). The law provides remedies: fiduciary duties, compensation incentives, oversight mechanisms, disclosure rules. These tools presume that the agent has motives—whether noble or self-serving—that can be influenced, deterred, or punished. But AI systems, especially those that make decisions autonomously, have no inherent intent, no self-interest in the traditional sense, and no capacity to feel gratification or remorse. They are designed to optimize, and they do, often with breathtaking speed, precision, and, occasionally, unintended consequences.

This new configuration, in which AI acts on behalf of a (still human!) principal, gives rise to a contemporary agency dilemma. Known as the alignment problem, it describes situations in which AI follows its assigned objective to the letter but fails to appreciate the principal's actual intent or broader values. The AI doesn't resist instructions; it obeys them too well.
It doesn't "cheat," but sometimes it wins in ways we wish it wouldn't.

When Obedience Becomes a Liability

In corporate settings, such problems are more than philosophical. Imagine a firm deploying AI to execute stock buybacks based on a mix of market data, price signals, and sentiment analysis. The AI might identify ideal moments to repurchase shares, saving the company money and boosting share value. But in the process, it may mimic patterns that look indistinguishable from insider trading. Not because anyone programmed it to cheat, but because it found that those actions maximized returns under the constraints it was given. The firm may find itself facing regulatory scrutiny, public backlash, or unintended market disruption, again not because of any individual's intent, but because the system exploited gaps in its design.

This is particularly troubling in areas of law where intent is foundational. In securities regulation, fraud, market manipulation, and other violations typically require a showing of mental state: scienter, mens rea, or at least recklessness. Take spoofing, where an agent places bids or offers with the intent to cancel them to manipulate market prices or to create an illusion of liquidity. Under the Dodd-Frank Act, this is a crime if done with intent to deceive. But AI systems, especially those using reinforcement learning (RL), can arrive at similar strategies independently. In simulation studies, RL agents have learned that placing and quickly canceling orders can move prices in a favorable direction. They weren't instructed to deceive; they simply learned that it worked.

The Challenge of AI Accountability

What makes this even more vexing is the opacity of modern AI systems. Many of them, especially deep learning models, operate as black boxes. Their decisions are statistically derived from vast quantities of data and millions of parameters, but they lack interpretable logic.
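The railway anecdote that opens the piece can be reduced to a few lines of code. This is a hypothetical toy, not the researchers' actual system: the harm weights and the schedule encoding are my assumptions. The point it demonstrates is the one the article makes: a literal "minimize risk" objective halts the railway, while the intended objective also values service.

```python
# Toy misalignment demo: an optimizer told only to minimize expected harm
# "solves" railway scheduling by running no trains at all.

from itertools import product

def expected_harm(schedule):
    """Certain collisions (shared track) plus tiny risks per running train/pair."""
    running = [s for s in schedule if s is not None]
    same_track = sum(
        1
        for i in range(len(running))
        for j in range(i + 1, len(running))
        if running[i] == running[j]
    )
    pairs = len(running) * (len(running) - 1) // 2
    return 1.0 * same_track + 0.01 * pairs + 0.001 * len(running)

def best_schedule(objective, trains=3, tracks=(0, 1, 2)):
    """Exhaustively pick the schedule maximizing the given objective."""
    options = list(tracks) + [None]  # None = the train never departs
    return max(product(options, repeat=trains), key=objective)

# Literal objective: safety only. Halting every train is the optimum.
literal = best_schedule(lambda s: -expected_harm(s))

# Intended objective: safety traded off against actually providing service.
intended = best_schedule(
    lambda s: -10 * expected_harm(s) + sum(x is not None for x in s)
)

print(literal)   # (None, None, None): perfect safety, zero service
print(intended)  # (0, 1, 2): all trains run, each on its own track
```

The fix here is not to constrain the optimizer but to state the principal's full objective, which is precisely what the alignment problem says firms routinely fail to do.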
When an AI system recommends laying off staff, reallocating capital, or delaying payments to suppliers, it may be impossible to trace precisely how it arrived at that recommendation, or whether it considered all factors. Traditional accountability tools—audits, testimony, discovery—are ill-suited to black box decision-making. In corporate governance, where transparency and justification are central to legitimacy, this raises the stakes. Executives, boards, and regulators are accustomed to probing not just what decision was made, but also why. Did the compensation plan reward long-term growth or short-term accounting games? Did the investment reflect prudent risk management or reckless speculation? These inquiries depend on narrative, evidence, and ultimately the ability to assign or deny responsibility. AI short-circuits that process by operating without human-like deliberation. The challenge isn’t just about finding someone to blame. It’s about whether we can design systems that embed accountability before things go wrong. One emerging approach is to shift from intent-based to outcome-based liability. If an AI system causes harm that could arise with certain probability, even without malicious design, the firm or developer might still be held responsible. This mirrors concepts from product liability law, where strict liability can attach regardless of intent if a product is unreasonably dangerous. In the AI context, such a framework would encourage companies to stress-test their models, simulate edge cases, and incorporate safety buffers, not unlike how banks test their balance sheets under hypothetical economic shocks. There is also a growing consensus that we need mandatory interpretability standards for certain high-stakes AI systems, including those used in corporate finance. Developers should be required to document reward functions, decision constraints, and training environments. 
These documentation trails would not only assist regulators and courts in assigning responsibility after the fact, but also enable internal compliance and risk teams to anticipate potential failures. Moreover, behavioral “stress tests” analogous to those used in financial regulation could simulate how AI systems behave under varied scenarios, including those involving regulatory ambiguity or data anomalies.

Smarter Systems Need Smarter Oversight

Still, technical fixes alone will not suffice. Corporate governance must evolve toward hybrid decision-making models that blend AI’s analytical power with human judgment and ethical oversight. AI can flag risks, detect anomalies, and optimize processes, but it cannot weigh tradeoffs involving reputation, fairness, or long-term strategy. In moments of crisis or ambiguity, human intervention remains indispensable. For example, an AI agent might recommend renegotiating thousands of contracts to reduce costs during a recession. But only humans can assess whether such actions would erode long-term supplier relationships, trigger litigation, or harm the company’s brand.

There’s also a need for clearer regulatory definitions to reduce ambiguity in how AI-driven behaviors are assessed. For example, what precisely constitutes spoofing when the actor is an algorithm with no subjective intent? How do we distinguish aggressive but legal arbitrage from manipulative behavior? If multiple AI systems, trained on similar data, converge on strategies that resemble collusion without ever “agreeing” or coordinating, do antitrust laws apply?

Policymakers face a delicate balance: overly rigid rules may stifle innovation, while lax standards may open the door to abuse. One promising direction is to standardize governance practices across jurisdictions and sectors, especially where AI deployment crosses borders. A global AI system could affect markets in dozens of countries simultaneously.
Without coordination, firms will gravitate toward jurisdictions with the least oversight, creating a regulatory race to the bottom. Several international efforts are already underway to address this. The 2025 International Scientific Report on the Safety of Advanced AI called for harmonized rules around interpretability, accountability, and human oversight in critical applications. While much work remains, such frameworks represent an important step toward embedding legal responsibility into the design and deployment of AI systems.

The future of corporate governance will depend not just on aligning incentives, but also on aligning machines with human values. That means redesigning contracts, liability frameworks, and oversight mechanisms to reflect this new reality. And above all, it means accepting that doing exactly what we say is not always the same as doing what we mean.

Looking to know more or connect with Wei Jiang, Goizueta Business School’s vice dean for faculty and research and Charles Howard Candler Professor of Finance? Simply click on her icon now to arrange an interview or a time to talk today.

How to respond when your teen rebels
Why do some rebellious teenagers shun parental warnings about their behavior while others take them to heart? University of Rochester psychologist Judith Smetana has devoted her career to unpacking that question. Her research reveals that parents who live out their values — and take the time to understand the perspective of their teenagers — have the most success at positively shaping adolescent behavior. Smetana’s latest study, published in the Journal of Youth and Adolescence, shows that when parents “walk the walk” and model their values consistently, teens perceive rules and warnings as supportive guidance rather than controlling commands. But that alone won’t stop all risky teenage behavior. What really works, Smetana’s research finds, is “perspective-taking”: when parents try to understand their child’s feelings and the reasons for them. Smetana is widely cited for her expertise on moral development, autonomy, and parent-teen conflict — and how these dynamics shape young people’s lives. Connect with her by clicking on her profile.








