
Expert Insight: Dampening the Data Desert: A First Step Toward Improving Space Coast Climate Resilience

By Steven Lazarus

Like many coastal regions, Florida's Space Coast faces significant climate resilience challenges and risks. According to the National Oceanic and Atmospheric Administration (NOAA), Florida has over 8,000 miles of shoreline, more than any other state in the contiguous U.S. In addition, the 2020 census indicates that there are 21 million Florida residents, 75-80% of whom live in coastal counties. This makes our state particularly vulnerable to rising sea levels, which are directly responsible for a host of coastal impacts, such as saltwater intrusion, sunny-day (high-tide) flooding and worsening storm surge. There is growing evidence that storms are becoming wetter as the atmosphere warms, increasing the threat associated with compound flooding, which involves the combined effects of storm surge, rainfall, tides and river flow. Inland flooding events are also increasing due to overdevelopment, heavy precipitation, and aging or inadequate infrastructure. The economic ramifications of these problems are quite evident, as area residents are confronted with the rising costs of their homeowners and flood insurance policies.

As the principal investigator on a recently funded Department of Energy grant, Space Coast ReSCUE (Resilience Solutions for Climate, Urbanization, and Environment), I am working with Argonne National Laboratory, Florida Tech colleagues, community organizations and local government to improve climate resilience in East Central Florida. It is remarkable that, despite its importance for risk management, urban planning and evaluating the environmental impacts of runoff, official data regarding local flooding is virtually nonexistent. Working alongside a local nonprofit, we have installed 10 automated weather stations and manual rain gauges in what was previously a "data desert" east of the Florida Tech campus: one at Stone Magnet Middle School and others at local homes.

Data from these stations are available, in real time, from two national networks: CoCoRaHS and Weather Underground. The citizen science initiative involving the rain gauge measurements is designed to document flooding in a neighborhood with limited resources. In addition to helping residents make informed choices, these data will also provide a means by which we can evaluate the flood models we will use to create highly detailed flood maps of the neighborhood. We are working with two historic extreme-precipitation events, Hurricane Irma (2017) and Tropical Storm Fay (2008), both of which produced excessive flooding in the area. What might local flooding look like in the future as storms become wetter? To find out, we plan to simulate these two storms in both present-day and future climate conditions. What will heat stress, a combination of temperature and humidity, feel like in the future? What impact will this have on energy consumption?
The station data will also be used to develop and test building energy-efficiency tools designed to help the community identify affordable ways to reduce energy consumption, as well as to produce high-precision urban heat island (heat stress) maps that account for the impact of individual buildings. The heat island and building energy modeling will be complemented by a drone equipped with an infrared camera, which will provide an observational baseline. We think that a "best methods" approach is proactive, informed and cost-effective. The foundation of good decision-making, assessment and planning is built on data (model and observations), which are critical to adequately addressing the impact of climate on our communities.

Steven Lazarus, Ph.D.
3 min. read

Expert Perspective: When AI Follows the Rules but Misses the Point

When a team of researchers asked an artificial intelligence system to design a railway network that minimized the risk of train collisions, the AI delivered a surprising solution: Halt all trains entirely. No motion, no crashes. A perfect safety record, technically speaking, but also a total failure of purpose. The system did exactly what it was told, not what was meant. This anecdote, while amusing on the surface, encapsulates a deeper issue confronting corporations, regulators, and courts: What happens when AI faithfully executes an objective but completely misjudges the broader context? In corporate finance and governance, where intentions, responsibilities, and human judgment underpin virtually every action, AI introduces a new kind of agency problem, one grounded not in selfishness, greed, or negligence, but in misalignment.

From Human Intent to Machine Misalignment

Traditionally, agency problems arise when an agent (say, a CEO or investment manager) pursues goals that deviate from those of the principal (like shareholders or clients). The law provides remedies: fiduciary duties, compensation incentives, oversight mechanisms, disclosure rules. These tools presume that the agent has motives, whether noble or self-serving, that can be influenced, deterred, or punished. But AI systems, especially those that make decisions autonomously, have no inherent intent, no self-interest in the traditional sense, and no capacity to feel gratification or remorse. They are designed to optimize, and they do, often with breathtaking speed, precision, and, occasionally, unintended consequences.

This new configuration, in which AI acts on behalf of a (still human!) principal, gives rise to a contemporary agency dilemma. Known as the alignment problem, it describes situations in which AI follows its assigned objective to the letter but fails to appreciate the principal's actual intent or broader values. The AI doesn't resist instructions; it obeys them too well. It doesn't "cheat," but sometimes it wins in ways we wish it wouldn't.

When Obedience Becomes a Liability

In corporate settings, such problems are more than philosophical. Imagine a firm deploying AI to execute stock buybacks based on a mix of market data, price signals, and sentiment analysis. The AI might identify ideal moments to repurchase shares, saving the company money and boosting share value. But in the process, it may mimic patterns that look indistinguishable from insider trading. Not because anyone programmed it to cheat, but because it found that those actions maximized returns under the constraints it was given. The firm may find itself facing regulatory scrutiny, public backlash, or unintended market disruption, again not because of any individual's intent, but because the system exploited gaps in its design.

This is particularly troubling in areas of law where intent is foundational. In securities regulation, fraud, market manipulation, and other violations typically require a showing of mental state: scienter, mens rea, or at least recklessness. Take spoofing, where an agent places bids or offers with the intent to cancel them to manipulate market prices or to create an illusion of liquidity. Under the Dodd-Frank Act, this is a crime if done with intent to deceive. But AI systems, especially those using reinforcement learning (RL), can arrive at similar strategies independently. In simulation studies, RL agents have learned that placing and quickly canceling orders can move prices in a favorable direction. They weren't instructed to deceive; they simply learned that it worked.
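The railway anecdote is easy to reproduce in miniature. The sketch below is our own hypothetical illustration, not the researchers' actual setup: an exhaustive search over speed settings for two trains facing each other on a single track, scored only on collisions, "discovers" that the safest railway is one where nothing moves.

```python
# Toy objective misspecification: the optimizer is scored ONLY on collisions,
# so the purpose of a railway (moving passengers) never enters the search.
from itertools import product

def collides(speed_a, speed_b, track_len=4, horizon=10):
    """True if train A (from position 0, moving right) ever meets or passes
    train B (from position track_len, moving left)."""
    return any(speed_a * t >= track_len - speed_b * t
               for t in range(1, horizon + 1))

candidates = product(range(3), repeat=2)  # speeds 0..2 for each train

# "Minimize collisions" admits a degenerate perfect solution: halt everything.
safe = [s for s in candidates if not collides(*s)]
print(safe)  # [(0, 0)] -- the only collision-free policy stops both trains
```

The objective is satisfied to the letter; fixing it means encoding what is actually wanted (for instance, rewarding distance traveled alongside penalizing collisions), which is precisely the alignment work described here.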
The Challenge of AI Accountability

What makes this even more vexing is the opacity of modern AI systems. Many of them, especially deep learning models, operate as black boxes. Their decisions are statistically derived from vast quantities of data and millions of parameters, but they lack interpretable logic. When an AI system recommends laying off staff, reallocating capital, or delaying payments to suppliers, it may be impossible to trace precisely how it arrived at that recommendation, or whether it considered all relevant factors. Traditional accountability tools, such as audits, testimony, and discovery, are ill-suited to black-box decision-making.

In corporate governance, where transparency and justification are central to legitimacy, this raises the stakes. Executives, boards, and regulators are accustomed to probing not just what decision was made, but also why. Did the compensation plan reward long-term growth or short-term accounting games? Did the investment reflect prudent risk management or reckless speculation? These inquiries depend on narrative, evidence, and ultimately the ability to assign or deny responsibility. AI short-circuits that process by operating without human-like deliberation.

The challenge isn't just about finding someone to blame. It's about whether we can design systems that embed accountability before things go wrong. One emerging approach is to shift from intent-based to outcome-based liability. If an AI system causes harm that could arise with some probability, even without malicious design, the firm or developer might still be held responsible. This mirrors concepts from product liability law, where strict liability can attach regardless of intent if a product is unreasonably dangerous. In the AI context, such a framework would encourage companies to stress-test their models, simulate edge cases, and incorporate safety buffers, not unlike how banks test their balance sheets under hypothetical economic shocks.

There is also a growing consensus that we need mandatory interpretability standards for certain high-stakes AI systems, including those used in corporate finance. Developers should be required to document reward functions, decision constraints, and training environments. These documentation trails would not only assist regulators and courts in assigning responsibility after the fact, but also enable internal compliance and risk teams to anticipate potential failures. Moreover, behavioral "stress tests" analogous to those used in financial regulation could simulate how AI systems behave under varied scenarios, including those involving regulatory ambiguity or data anomalies.

Smarter Systems Need Smarter Oversight

Still, technical fixes alone will not suffice. Corporate governance must evolve toward hybrid decision-making models that blend AI's analytical power with human judgment and ethical oversight. AI can flag risks, detect anomalies, and optimize processes, but it cannot weigh tradeoffs involving reputation, fairness, or long-term strategy. In moments of crisis or ambiguity, human intervention remains indispensable. For example, an AI agent might recommend renegotiating thousands of contracts to reduce costs during a recession. But only humans can assess whether such actions would erode long-term supplier relationships, trigger litigation, or harm the company's brand.
There's also a need for clearer regulatory definitions to reduce ambiguity in how AI-driven behaviors are assessed. For example, what precisely constitutes spoofing when the actor is an algorithm with no subjective intent? How do we distinguish aggressive but legal arbitrage from manipulative behavior? If multiple AI systems, trained on similar data, converge on strategies that resemble collusion without ever "agreeing" or "coordinating," do antitrust laws apply? Policymakers face a delicate balance: Overly rigid rules may stifle innovation, while lax standards may open the door to abuse.

One promising direction is to standardize governance practices across jurisdictions and sectors, especially where AI deployment crosses borders. A global AI system could affect markets in dozens of countries simultaneously. Without coordination, firms will gravitate toward jurisdictions with the least oversight, creating a regulatory race to the bottom. Several international efforts are already underway to address this. The 2025 International Scientific Report on the Safety of Advanced AI called for harmonized rules around interpretability, accountability, and human oversight in critical applications. While much work remains, such frameworks represent an important step toward embedding legal responsibility into the design and deployment of AI systems.

The future of corporate governance will depend not just on aligning incentives, but also on aligning machines with human values. That means redesigning contracts, liability frameworks, and oversight mechanisms to reflect this new reality. And above all, it means accepting that doing exactly what we say is not always the same as doing what we mean.

Looking to know more or to connect with Wei Jiang, Goizueta Business School's vice dean for faculty and research and Charles Howard Candler Professor of Finance? Simply click on her icon now to arrange an interview or a time to talk today.

Wei Jiang
6 min. read

No More Edits for “Face the Nation”

Mark Lukasiewicz, dean of Hofstra's Lawrence Herbert School of Communication, is featured in an article in Variety: "CBS News Agrees Not to Edit 'Face The Nation' Interviews Following Homeland Security Backlash." The report covers a CBS News decision to discontinue editing taped interviews with newsmakers who appear on "Face the Nation." The agreement came after the Trump administration complained about an interview with Secretary of Homeland Security Kristi Noem. During the segment, Noem made unsubstantiated statements about Kilmar Abrego Garcia, a Salvadoran man who was deported from the U.S. despite having protected legal status. CBS decided to air an edited version of the interview and to make the full exchange available online.

"A national news organization is apparently surrendering a major part of its editorial decision-making power to appease the administration and to bend to its implied and explicit threats. Choosing to edit an interview, or not, is a matter for newsrooms and news organizations to decide. The government has no business in that decision," said Dean Lukasiewicz.

Mark Lukasiewicz
1 min. read

4 out of 5 US Troops Surveyed Understand the Duty to Disobey Illegal Orders

This article is republished from The Conversation under a Creative Commons license. Read the original article here.

With his Aug. 11, 2025, announcement that he was sending the National Guard, along with federal law enforcement, into Washington, D.C., to fight crime, President Donald Trump edged U.S. troops closer to the kind of military-civilian confrontations that can cross ethical and legal lines. Indeed, since Trump returned to office, many of his actions have alarmed international human rights observers. His administration has deported immigrants without due process, held detainees in inhumane conditions, threatened the forcible removal of Palestinians from the Gaza Strip and deployed both the National Guard and federal military troops to Los Angeles to quell largely peaceful protests.

When a sitting commander in chief authorizes acts like these, which many assert are clear violations of the law, men and women in uniform face an ethical dilemma: How should they respond to an order they believe is illegal?

The question may already be affecting troop morale. "The moral injuries of this operation, I think, will be enduring," a National Guard member who had been deployed to quell public unrest over immigration arrests in Los Angeles told The New York Times. "This is not what the military of our country was designed to do, at all."

Troops who are ordered to do something illegal are put in a bind, so much so that some argue that troops themselves are harmed when given such orders. They are not trained in legal nuances, and they are conditioned to obey. Yet if they obey "manifestly unlawful" orders, they can be prosecuted. Some analysts fear that U.S. troops are ill-equipped to recognize this threshold.

We are scholars of international relations and international law. We conducted survey research at the University of Massachusetts Amherst's Human Security Lab and discovered that many service members do understand the distinction between legal and illegal orders, the duty to disobey certain orders, and when they should do so.

Compelled to disobey

U.S. service members take an oath to uphold the Constitution. In addition, under Article 92 of the Uniform Code of Military Justice and the U.S. Manual for Courts-Martial, service members must obey lawful orders and disobey unlawful orders. Unlawful orders are those that clearly violate the U.S. Constitution, international human rights standards or the Geneva Conventions. Service members who follow an illegal order can be held liable and court-martialed or subject to prosecution by international tribunals. Following orders from a superior is no defense.

Our poll, fielded between June 13 and June 30, 2025, shows that service members understand these rules. Of the 818 active-duty troops we surveyed, just 9% stated that they would "obey any order." Only 9% "didn't know," and only 2% had "no comment." When asked to describe unlawful orders in their own words, about 25% of respondents wrote about their duty to disobey orders that were "obviously wrong," "obviously criminal" or "obviously unconstitutional." Another 8% spoke of immoral orders. One respondent wrote that "orders that clearly break international law, such as targeting non-combatants, are not just illegal — they're immoral. As military personnel, we have a duty to uphold the law and refuse commands that betray that duty." Just over 40% of respondents listed specific examples of orders they would feel compelled to disobey.
The most common unprompted response, cited by 26% of those surveyed, was "harming civilians," while another 15% of respondents gave a variety of other examples of violations of duty and law, such as "torturing prisoners" and "harming U.S. troops." One wrote that "an order would be obviously unlawful if it involved harming civilians, using torture, targeting people based on identity, or punishing others without legal process."

Soldiers, not lawyers

But the open-ended answers pointed to another struggle troops face: Some no longer trust U.S. law as useful guidance. Writing in their own words about how they would know an illegal order when they saw it, more troops emphasized international law as a standard of illegality than emphasized U.S. law. Others implied that acts that are illegal under international law might become legal in the U.S. "Trump will issue illegal orders," wrote one respondent. "The new laws will allow it," wrote another. A third wrote, "We are not required to obey such laws." Several emphasized the U.S. political situation directly in their remarks, stating they'd disobey "oppression or harming U.S. civilians that clearly goes against the Constitution" or an order for "use of the military to carry out deportations."

Still, the percentage of respondents who said they would disobey specific orders, such as torture, is lower than the percentage who recognized the responsibility to disobey in general. This is not surprising: Troops are trained to obey and face numerous social, psychological and institutional pressures to do so. By contrast, most troops receive relatively little training in the laws of war or human rights law. Political scientists have found, however, that having information on international law affects attitudes about the use of force among the general public. It can also affect decision-making by military personnel. This finding was borne out in our survey as well: When we explicitly reminded troops that shooting civilians is a violation of international law, their willingness to disobey increased 8 percentage points.

Drawing the line

As my research with another scholar showed in 2020, even thinking about law and morality can make a difference in opposition to certain war crimes. The preliminary results from our survey led to a similar conclusion. Troops who answered questions on "manifestly unlawful orders" before they were asked about specific scenarios were much more likely to say they would refuse those specific illegal orders. When asked if they would follow an order to drop a nuclear bomb on a civilian city, for example, 69% of troops who received that question first said they would obey the order. But when respondents were asked to think about and comment on the duty to disobey unlawful orders before being asked if they would follow the order to bomb, the percentage who would obey dropped 13 points, to 56%.

While many troops said they might obey questionable orders, the large number who would not is remarkable. Military culture makes disobedience difficult: Soldiers can be court-martialed for obeying an unlawful order, or for disobeying a lawful one. Yet between one-third and one-half of the U.S. troops we surveyed would be willing to disobey if ordered to shoot or starve civilians, torture prisoners or drop a nuclear bomb on a city. The service members described the methods they would use. Some would confront their superiors directly.
Others imagined indirect methods: asking questions, creating diversions, going AWOL, "becoming violently ill." Criminologist Eva Whitehead researched actual cases of troop disobedience of illegal orders and found that when some troops disobey, even indirectly, others can more easily find the courage to do the same. Whitehead's research showed that those who refuse to follow illegal or immoral orders are most effective when they stand up for their actions openly.

The initial results of our survey, coupled with a recent spike in calls to the GI Rights Hotline, suggest American men and women in uniform don't want to obey unlawful orders. Some are standing up loudly. Many are thinking ahead to what they might do if confronted with unlawful orders. And those we surveyed are looking for guidance from the Constitution and international law to determine where they may have to draw that line.

Zahra Marashi, an undergraduate research assistant at the University of Massachusetts Amherst, contributed to the research for this article.

Charli Carpenter
6 min. read

First AI-powered Smart Care Home system to improve quality of residential care

• Partnership between Lee Mount Healthcare and Aston University will develop and integrate a bespoke AI system into a care home setting to elevate the quality of care for residents
• By automating administrative tasks and monitoring health metrics in real time, the smart system will support decision-making and empower care workers to focus more on people
• The project will position Lee Mount Healthcare as a pioneer of AI in the care sector, opening the door for more care homes to embrace technology

Aston University is partnering with dementia care provider Lee Mount Healthcare to create the first "Smart Care Home" system incorporating artificial intelligence. The project will use machine learning to develop an intelligent system that can automate routine tasks and compliance reporting. It will also draw on multiple sources of resident data, including health metrics, care needs and personal preferences, to inform high-quality care decisions, create individualised care plans and provide easy access to updates for residents' next of kin.

There are nearly 17,000 care homes in the UK looking after just under half a million residents, and these numbers are expected to rise in the next two decades. Over half of social care providers still retain manual, paper-based approaches to care management, offering a significant opportunity to harness the benefits of AI to enhance efficiency and care quality. The Smart Care Home system will allow better care to be provided at lower cost, freeing staff from administrative tasks so they can spend more time with residents.

Manjinder Boo Dhiman, director of Lee Mount Healthcare, said: "As a company, we've always focused on innovation and breaking barriers, and this KTP builds on many years of progress towards digitisation. We hope by taking the next step into AI, we'll also help to improve the image of the care sector and overcome stereotypes, to show that we are forward thinking and can attract the best talent."

Dr Roberto Alamino, lecturer in Applied AI & Robotics with the School of Computer Science and Digital Technologies at Aston University, said: "The challenges of this KTP are both technical and human in nature. For practical applications of machine learning, it's important to establish a common language between us as researchers and the users of the technology we are developing. We need to fully understand the problems they face so we can find feasible, practical solutions."

For specialist AI expertise to develop the smart system, LMH is partnering with the Aston Centre for Artificial Intelligence Research and Application (ACAIRA) at Aston University, of which Dr Alamino is a member. ACAIRA is recognised internationally for high-quality research and teaching in computer science and artificial intelligence (AI) and is part of the College of Engineering and Physical Sciences. The Centre's aim is to develop AI-based solutions to address critical social, health and environmental challenges, delivering transformational change with industry partners at regional, national and international levels.

The project is a Knowledge Transfer Partnership (KTP). Funded by Innovate UK, KTPs are collaborations between a business, a university and a highly qualified research associate. The UK-wide programme helps businesses to improve their competitiveness and productivity through the better use of knowledge, technology and skills. Aston University is a sector-leading KTP provider, ranked first for project quality and joint first for the volume of active projects.
For more information on the KTP visit the webpage.

3 min. read

Expert Insights: Navigating Tariffs in a Time of Global Disruption

As global headlines swirl with shifting tariff regulations, U.S. businesses are navigating uncertain waters. With new trade actions impacting industries from automotive to renewable energy, the ripple effects are being felt across supply chains, labor markets, and even insurance models. In this conversation, J.S. Held experts Peter Davis, Timothy Gillihan, Andrea Korney, and Robert Strahle unpack how tariffs are shaping decision-making across industries and where organizations can spot opportunities amid the volatility.

Highlights:
• Industries most likely to experience tariff impacts
• Potential disruptions in manufacturing processes
• Supply chain and quality concerns
• Expected changes coming in the insurance, reinsurance, and construction markets
• The importance of strategic tariff engineering
• Guidance for dealing with uncertainty and a rapidly changing business environment

Looking to connect with Peter Davis and Andrea Korney? Click on their profile cards to arrange an interview or get deeper insights. For any other media inquiries, contact: Kristi L. Stathis, J.S. Held, +1 786 833 4864, Kristi.Stathis@JSHeld.com

Andrea Korney
Peter S. Davis, CPA, ABV, CFF, CIRA, CTP, CFE
1 min. read

LSU AgCenter Research Enables Better Flood Protection for Homes

The American Society of Civil Engineers (ASCE) recently released its new standard for flood-resistant design and construction, ASCE/SEI 24-24, which provides new minimum requirements that can be adopted for all structures subject to building codes and floodplain management regulations. The new elevation standard was directly supported by LSU research and should help reduce flood risk and make flood insurance more affordable.

"Without the research by the LSU AgCenter, the advancements made to the elevation requirements would not have been possible," said Manny Perotin, co-chair of the Association of State Floodplain Managers' Nonstructural Floodproofing Committee, who helped update the standard. "Dr. Carol Friedland's research shows there are better ways to protect communities from flooding than adding one foot of additional freeboard."

The research team, led by Friedland, an engineer, professor and director of the LSU AgCenter's LaHouse, showed how previous standards were failing to protect some homeowners. They mapped the impact of moving from a standard based on a fixed amount of freeboard to one based on real risk in every census tract in the U.S. In response to these findings, they developed a free online tool to help builders, planners, managers and engineers calculate the elevation required under the new standards.

"Many on the committee said it would be too hard to do these complex calculations," said Adam Reeder, principal at the engineering and construction firm CDM Smith, who helped lead the elevation working group for the new ASCE 24 elevation standards. "But the LSU AgCenter's years of research in this area and the development of the tool make calculations and implementation simple. This allowed the new elevation standard to get passed."

Flooding, the biggest risk to homes in Louisiana, continues to threaten investments and opportunities to build generational wealth. On top of flood losses, residents see insurance premiums increase without resources to help them make informed decisions and potentially lower costs. In response to this problem, Friedland is developing a whole suite of tools together with more than 130 partners as part of a statewide Disaster Resilience Initiative.

When presenting to policymakers and various organizations, Friedland often starts by asking what percentage of buildings they want to flood in their community in the next 50 years. "Of course, we all want this number to be zero," Friedland said. "But we have been building and designing so 40% will flood. People have a hard time believing this, but it's the reality of how past standards did not adequately address flood risk."

Designing to the 100-year elevation means a building has a 0.99 probability of not flooding in any given year. But when you compound that probability over 50 years (0.99 × 0.99 × 0.99... 50 times, or 0.99^50), you end up with a 60.5% chance of not flooding in 50 years. This means a 39.5% chance of flooding at least once.

"We've been building to the 100-year elevation while wanting the protection of building to the 500-year elevation, which is a 10% chance of flooding in 50 years," Friedland said. "Now, with the higher ASCE standard, we can finally get to 10% instead of 40%."

As the AgCenter's research led to guidelines, then to this new standard, Friedland has also been providing testimony to the International Code Council to turn the stronger standard into code.
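The 40% and 10% figures above follow from simple compounding of the annual flood probability. Here is a quick back-of-the-envelope check (ordinary probability arithmetic, not the AgCenter's actual tool):

```python
# Chance a building floods at least once over its lifetime, given the annual
# exceedance probability of its design flood elevation.
def flood_risk(annual_chance, years=50):
    """P(at least one flood) = 1 - P(no flood every single year)."""
    return 1 - (1 - annual_chance) ** years

print(f"{flood_risk(0.01):.1%}")   # "100-year" elevation (1% annual): 39.5%
print(f"{flood_risk(0.002):.1%}")  # "500-year" elevation (0.2% annual): 9.5%
```

Both results match the figures cited: roughly 40% of buildings designed to the 100-year elevation can be expected to flood within 50 years, versus roughly 10% at the 500-year elevation.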
In May, Friedland helped lead a workshop at the Association of State Floodplain Managers' national conference, held in New Orleans. There, she educated floodplain managers about the new standard while demonstrating LSU's web-based calculation tool, which was designed for professionals; her team is also developing personalized decision-making tools, such as Flood Safe Home, for residents. At the conference, Friedland received the 2025 John R. Sheaffer Award for Excellence in Floodproofing.

More than two-thirds of the cost of natural hazards in Louisiana comes from flooding, according to LSU AgCenter research conducted in partnership with the Governor's Office of Homeland Security and Emergency Preparedness for the State Hazard Mitigation Plan. That cost was recently estimated to rise to $3.6 billion by 2050.

"Historically, we have lived with almost a 40% chance of flooding over 50 years, which in most people's opinion is too high, and the number could be even higher," Reeder said. "Most building owners don't understand the risk they are living with, and it only becomes apparent after a flood. The work done by the LSU AgCenter is critical in improving resilience in communities that can't afford to be devastated by flooding."

"This may be the most significant upgrade in the nation's flood loss reduction standards since the creation of the National Flood Insurance Program minimums in 1973, and it could not come at a better time as annual flood losses in the country now average more than $45 billion per year," said Chad Berginnis, executive director of the Association of State Floodplain Managers.

In addition to LaHouse's work to prevent flooding, Friedland's team is also working to increase energy efficiency in homes to help residents save money on utility bills. Their HEROES program, an acronym for home energy resilience outreach, education, and support, is funded by the U.S. Department of Agriculture and has already reached 140,000 people in Louisiana.

Article originally posted here.

Carol Friedland
4 min. read

Video Insights: What Investors Need to Know About Shifting Tariffs

Unprecedented uncertainty brought on by quickly evolving tariff policies is creating challenges and additional considerations for investors and other capital providers. In this video, J.S. Held experts Brian Gleason, John Peiserich, James E. Malackowski, and Tom Burns, specialists in turnaround, supply chain, intellectual property, and political risk, pose twelve questions for private equity sponsors and their portfolio companies to explore amid the continued tariff uncertainty.

Restructuring and operations expert Brian Gleason has managed or participated in more than 300 turnaround engagements over the past 29 years and applies the principles utilized in J.S. Held's work advising companies in crisis. In the video, Brian addresses three essential questions that investors should consider with their portfolio companies during this period of unprecedented tariff-policy-induced uncertainty:
1) How have tariffs impacted business forecasting and investor confidence?
2) What are the key actions portfolio company management teams should take during tariff-induced uncertainty?
3) What leadership strategies are recommended for navigating the economic stress caused or complicated by tariffs?

Business intelligence expert Tom Burns has extensive experience leading intelligence collection assignments for financial institutions, law firms, and blue-chip multinationals around the world. In the video, Tom explores the additional pre-acquisition diligence essential amid tariff uncertainty, addressing three questions:
4) How have tariffs changed the due diligence process in acquisitions?
5) What is transshipment, and why is it a concern for investors and their portfolio companies?
6) What steps should investors take to manage tariff-related risks in acquisitions?

Capital projects, environmental risk, and compliance expert John Peiserich has over 30 years of experience advising heavy industry and law firms throughout the country, with a focus on Oil & Gas, Energy, and Public Utilities. In the video, John reflects upon:
7) Why is it important for investors to assess the owner-operator's understanding of supply chain risks?
8) How have tariffs introduced new challenges for large-scale projects?
9) What is the potential impact of supply chain and tariff-related delays on investment outcomes?

James E. Malackowski has a unique perspective on intellectual property litigation risk, strategic management, and monetization, which benefits from his prior work at a leading private equity firm. In the video, he advises investors and their portfolio companies to consider:
10) How do tariffs influence decisions around manufacturing relocation and intellectual property?
11) What IP-related risks should companies consider when relocating manufacturing operations?
12) What steps should investors take to ensure IP is properly managed in response to tariffs?

The J.S. Held Tariffs and Trade Series is a collection of intelligence, insights, and action plans that inform strategic business decision-making and foster resilience in an increasingly volatile global market. To view more of our Tariffs and Trade Series expert analysis and commentary, visit:

Looking to know more or to connect with John Peiserich and James E. Malackowski? Simply click on either expert's icon now to arrange an interview today. If you are looking to connect with Brian Gleason or Tom Burns, contact: Kristi L. Stathis, J.S. Held, +1 786 833 4864, Kristi.Stathis@JSHeld.com

John Peiserich, Esq.
James E. Malackowski, CPA, CLP
2 min. read

Why Simultaneous Voting Makes for Good Decisions

How can organizations make robust decisions when time is short and the stakes are high? It's a conundrum not unfamiliar to the U.S. Food and Drug Administration. Back in 2021, the FDA found itself under tremendous pressure to decide on the approval of the experimental drug aducanumab, designed to slow the progress of Alzheimer's disease, a debilitating and incurable condition that ranks among the top 10 causes of death in the United States.

Welcomed by the market as a game-changer on its release, aducanumab quickly ran into serious problems. A lack of data on clinical efficacy, along with a slew of dangerous side effects, meant physicians in their droves were unwilling to prescribe it. Within months of its approval, three FDA advisors resigned in protest, one calling aducanumab "the worst approval decision that the FDA has made that I can remember." By the start of 2024, the drug had been pulled by its manufacturers.

Of course, with the benefit of hindsight and data from the public's use of aducanumab, it is easy to tell that the FDA made the wrong decision. But is there a better process that would have given the FDA the foresight to make the right decision under limited information? The FDA routinely has to evaluate novel drugs and treatments: medical and pharmaceutical products that can impact the wellbeing of millions of Americans. With stakes this high, the FDA is known to tread carefully, assembling advisory, review, and funding committees that provide diverse knowledge and expertise to assess the evidence and decide whether or not to approve a new drug. As a federal agency, the FDA is also required to maintain scrupulous records that cover its decisions, and how those decisions are made.

The Impact of Voting Mechanisms on Decision Quality

Some of this data has been analyzed by Goizueta's Tian Heong Chan, associate professor of information systems and operation management. Together with Panos Markou of the University of Virginia's Darden School of Business, Chan scrutinized 17 years' worth of information, including detailed transcripts from more than 500 FDA advisory committee meetings, to understand the mechanisms and protocols used in FDA decision-making: whether committee members vote to approve products sequentially, with everyone in the room having a say one after another, or whether voting happens simultaneously, via the push of a button, say, or a show of hands. Chan and Markou also looked at the impact of sequential versus simultaneous voting to see if there were differences in the quality of the decisions each mechanism produced.

Their findings are singular. It turns out that when stakeholders vote simultaneously, they make better decisions. Drugs and products approved this way are far less likely to be issued post-market boxed warnings (warnings issued by the FDA that call attention to potentially serious health risks associated with the product, and that must be displayed on the prescription box itself), and they are less than half as likely to be recalled.

The FDA changed its voting protocols in 2007, switching from sequential voting around the room, one person after another, to simultaneous voting procedures. And the results are stunning. "Decisions made by simultaneous voting are more than twice as effective," says Chan. "After 2007, you see that just 3.4% of all drugs and products approved this way end up being discontinued or recalled.
This compares with an 8.6% failure rate for drugs approved by the FDA using the more sequential process, the round robin where individuals had been voting one by one around the room."

"Imagine you are told beforehand that you are going to vote on something important by simply raising your hand or pressing a button," says Chan. "In this scenario, you are probably going to want to expend more time and effort in debating all the issues and informing yourself before you decide. On the other hand, if you know the vote will go around the room, and you will have a chance to hear how others speak and explain their decisions, you're going to be less motivated to exchange and defend your point of view beforehand."

In other words, simultaneous decision-making is half as likely to generate a wrong decision as the sequential approach. Why is this? Chan and Markou believe that these voting mechanisms shape the quality of the discussion and debate that undergird decision-making; the quality of decisions is significantly impacted by how those decisions are made.

Quality Discussion Leads to Quality Decisions

Parsing the FDA transcripts for content, language, and tonality in both settings, Chan and Markou find evidence to support this. Simultaneous voting drives discussions characterized by language that is more positive, more authentic, and more even in terms of expressions of authority and hierarchy, and the deliberations and exchanges are deeper and more far-ranging in quality. "We find marked differences in the tone of speech and the topics discussed when stakeholders know they will be voting simultaneously," says Chan. "There is less hierarchy in these exchanges, and individuals exhibit greater confidence in sharing their points of view more freely. We also see more questions being asked, and a broader range of topics and ideas discussed."

In this context, decision-makers are also less likely to reach unanimous agreement. Instead, debate is more vigorous and differences of opinion remain more robust. Conversely, sequential voting around the room is typically preceded by shorter discussion in which stakeholders share fewer opinions and ask fewer questions. And this demonstrably impacts the quality of the decisions made, says Chan. "Sharing a different perspective with a group requires effort and courage. With sequential voting or decision-making, there seems to be less interest in surfacing diverse perspectives or hidden aspects of complex problems. So it's not that individuals are being influenced by what other people say when it comes to voting on the issue, which would be tempting to infer; rather, it's that sequential voting mechanisms seem to take some of the effort out of the process."

When decision-makers are told that they will have a chance to vote and to explain their vote, one after another, their incentives to make a prior effort to interrogate each other vigorously, and to work that little bit harder to surface any shortcomings in their own understanding, point of view, or data, are relatively weaker, say Chan and Markou.

The Takeaway for Organizations Making High-Stakes Decisions

Decision-making in different contexts has long been the subject of scholarly scrutiny. Chan and Markou's research sheds new light on the important role that different mechanisms play in shaping the outcomes of decision-making, and the quality of the decisions that are jointly taken.
And this should be on the radar of organizations and institutions charged with making choices that impact swathes of the community, they say. "The FDA has a solid tradition of inviting diversity into its decision-making. But the data shows that harnessing the benefits of diversity is contingent on using the right mechanisms to surface the different expertise you need to be able to see all the dimensions of the issue, and make better informed decisions about it," says Chan. A good place to start? By a concurrent show of hands.

Tian Heong Chan is an associate professor of information systems and operation management. He is available to speak about this topic; click on his icon now to arrange an interview today.

Fast-striking and unpredictable, tornadoes pose major challenges for emergency planners

At least 20 U.S. states have been hit with tornadoes, some of them deadly, over the past week. Experts from the University of Delaware's Disaster Research Center can speak to the difficulty of drawing up plans in advance of tornadoes, which can develop quickly and unexpectedly, as well as a variety of topics related to storm preparedness, evacuations and recovery. Those experts include:

Jennifer Horney: Environmental impacts of disasters and potential public health effects for chronic and infectious diseases. Horney, who co-authored a paper on the increase in tornado outbreaks, can talk about impacts on the morbidity and mortality that result from tornadoes.

Tricia Wachtendorf: Evacuation decision-making, disaster response and coordination, disaster relief (donations) and logistics, volunteer and emergent efforts, social vulnerability.

James Kendra: Disaster response, nursing homes and hospitals, volunteers, response coordination.

Jennifer Trivedi: Challenges for people with disabilities during disaster, cultural issues and long-term recovery.

Sarah DeYoung: Pets in emergencies, infant feeding in disasters and decision-making in evacuation.

A.R. Siders: Sea level rise and managed retreat, the concept of planned community movement away from flood-prone areas.

To reach these experts directly, visit their profiles and click on the contact button.

Tricia Wachtendorf
James Kendra
Jennifer Horney
Sarah DeYoung
Jennifer Trivedi
A.R. Siders
1 min. read