Small Changes Can Save Lives: How a Police Officer’s First Words Can Transform Communities

Jul 1, 2024

12 min



Britt Nestor knew something needed to change.


Nestor is a police officer in North Carolina. Unlike many in her field, who recite interview-ready responses about wanting to be a police officer since childhood, Nestor admits that her arrival in law enforcement was serendipitous.


Told by teachers in high school to start rehearsing the line “do you want fries with that?”, Nestor went to college to prove them wrong—graduating with a 3.9 GPA to do just that—but she had absolutely no idea what to do next. When a local police department offered to put her through the police academy, her first thought was, “absolutely not.”


“And here I am,” says Nestor, 12 years into her career, working in Special Victims Investigations as an Internet Crimes Against Children detective.


A Calling to Serve Community


Brittany Nestor, New Blue Co-Founder and President


Though she’d initially joined on a whim, Nestor stuck around and endured many growing pains, tasting some of the problematic elements of police culture firsthand. As a woman, there was particular pressure to prove herself; she resisted calling for back-up on dangerous calls for fear of being regarded as weak, and tried out for and joined the SWAT team to demonstrate her mettle.


"It took time to realize I didn’t need to make the most arrests or get the most drugs and guns to be a good cop. What was important was recognizing that I was uniquely positioned and given opportunities every single shift to make a difference in people’s lives—that is what I wanted to focus on."


Britt Nestor

Nestor found she took great pleasure in interacting with different kinds of people all day. She’s deeply fond of her community, where she is also a youth basketball coach. One of her greatest joys is being on call or working an event and hearing someone hail her from the crowd by yelling, “hey, coach!” When she landed in the Juvenile Investigations Unit, Nestor truly felt she’d found her calling.


Still, what she’d witnessed in her profession and in the news weighed on her. And she’s not alone; while there is continued debate on the urgency and extent of changes needed, 89 percent of people are in favor of police reform, according to a CBS/YouGov poll.



A few weeks after George Floyd’s murder in 2020, Nestor’s colleague Andy Saunders called her and told her they had to do something. It felt like the tipping point.


“I knew he was right. I needed to stop wishing and hoping police would do better and start making it happen.”


Andy Saunders, New Blue Co-Founder and CEO


That conversation was the spark that grew into New Blue. Founded in 2020, New Blue strives to reform the U.S. criminal justice system by uniting reform-minded police officers and community allies. The organization focuses on incubating crowd-sourced solutions from officers themselves, encouraging those in the field to speak up about what they think could improve relations between officers and the communities they serve.


“Over the years I’ve had so many ideas—often addressing problems brought to light by community members—that could have made us better. But my voice was lost. I didn’t have much support from the police force standing behind me. This is where New Blue makes the difference; it’s the network of fellows, alumni, partners, mentors, and instructors I’d needed in the past.”


Nestor and Saunders had valuable pieces of the puzzle as experienced law enforcement professionals, yet they knew they needed additional tools. What are the ethical guidelines around experimenting with new policing tactics? What does success look like, and how could they measure it?


The Research Lens


Over 400 miles away, another spark found kindling. Like Nestor, Assistant Professor of Organization & Management Andrea Dittmann has a palpable passion for making the world a better place. And like Nestor, it was a spirited conversation with a colleague—Kyle Dobson—that brought a profound interest in police reform into focus.


Dittmann, whose academic career began in psychology and statistics, came to this field by way of a burgeoning interest in the need for research-informed policy. Much of her research explores the ways in which socioeconomic disparities play out in the work environment, and—more broadly—how discrepancies of power shape dynamics in organizations of all kinds.


When people imagine research in the business sector, law enforcement is unlikely to come to mind. Indeed, Dittmann cites criminal justice and social work as the traditional homes of police research, both fields being more likely to examine the police force from the top down.


Andrea Dittmann


Dittmann, however, is a micro-oriented researcher, which means she assesses organizations from the bottom up; she examines the small, lesser-studied everyday habits that come to represent an organization’s values.


“We have a social psychology bent; we tend to focus on individual processes, or interpersonal interactions,” says Dittmann. She regards her work and that of her colleagues as a complementary perspective that builds on the literature already available. Where Dittmann has eyes on the infantry-level experience of the battleground, other researchers observe from a bird’s-eye view. Together, these angles help complete the picture.


And while the “office” of a police officer may look very different from what most of us see every day, the police force is—at the end of the day—an organization: “Like all organizations, they have a unique culture and specific goals or tasks that their employees need to engage in on a day-to-day basis to be effective at their jobs,” says Dittmann.


Theory Meets Practice


Kyle Dobson, Postdoctoral Researcher at The University of Texas at Austin


What Dittmann and Dobson needed next was a police department willing to work with them, a feat easier said than done.


Enter Britt Nestor and New Blue.


"Kyle and I could instantly tell we had met people with the same goals and approach to reforming policing from within."


Andrea Dittmann


Dittmann was not surprised by the time it took to get permission to work with active officers.



“Initially, many officers were distrustful of researchers. Often what they’re seeing in the news are researchers coming in, telling them all the problems that they have, and leaving. We had to reassure them that we weren’t going to leave them high and dry. If we find a problem, we’re going to tell you about it, and we’ll work on building a solution with you. And of course, we don’t assume that we have all the answers, which is why we emphasize developing research ideas through embedding ourselves in police organizations through ride-alongs and interviews.”


After observing the same officers over years, they’re able to build rapport in ways that permit open conversations. Dittmann and Dobson now have research running in many pockets across the country, including Atlanta, Baltimore, Chicago, Washington, D.C., and parts of Texas.


The Rise of Community-Oriented Policing



For many police departments across the nation, there is a strong push to build closer and better relationships with the communities they serve. This often translates to police officers being encouraged to engage with citizens informally and outside the context of enforcing the law. If police spent more time chatting with people at a public park or at a café, they’d have a better chance to build rapport and foster a collective sense of community caretaking—or so the thinking goes. Such work is often assigned to a particular unit within the police force. This is the fundamental principle behind community-oriented policing: a cop is part of the community, not outside or above it.



This approach is not without controversy: many would argue that the public is better served by police officers interacting with citizens less, not more. In light of the many high-profile instances of police brutality that have left names like Breonna Taylor and George Floyd echoing in the public’s ears, that reluctance to support increased police-to-citizen interaction is understandable.


“Sometimes when I discuss this research, people say, ‘I just don’t think that officers should approach community members at all, because that’s how things escalate.’ Kyle and I acknowledge that’s a very important debate and has its merits.” As micro-oriented researchers, however, Dittmann and Dobson forgo advocating for or dismissing broad policy. They begin with the environment handed to them and work backward.


“The present and immediate reality is that there are officers on the street, and they’re having these interactions every day. So what can we do now to make those interactions go more smoothly? What constitutes a positive interaction with a police officer, and what does it look like in the field?”


Good Intentions Gone Awry



To find out, they gathered data through a variety of experiments—live interactions, video studies, and online experiments—relying heavily on direct observation of police-to-citizen interactions.


"What we wanted to do is observe the heterogeneity of police interactions and see if there’s anything that officers are already doing that seems to be working out in the field, and if we can ‘bottle that up’ and turn that into a scalable finding."


Andrea Dittmann

Dittmann and her colleagues quickly discovered a significant discrepancy between some police officers’ perceived outcome of their interactions with citizens and what those citizens reported to researchers post-interaction.


“An officer would come back to us and they’d say it went great. Like, ‘I did what I was supposed to do, I made that really positive connection.’ And then we’d go to the community members, and we’d hear a very different story: ‘Why the heck did that officer just come up to me, I’m just trying to have a picnic in the park with my family, did I do something wrong?’” Community members reported feeling confused, harassed, or—at the worst end of the spectrum—threatened.


The vast majority—around 75% of citizens—reported being anxious from the very beginning of the interaction. It’s not hard to imagine how an officer approaching you apropos of nothing may stir anxious thoughts: have I done something wrong? Is there trouble in the area? The situation put the cognitive burden on the citizen to figure out why they were being approached.


The Transformational Potential of the “Transparency Statement”


And yet, they also observed officers (“super star” police officers, as Dittmann refers to them) who seemed to be especially gifted at cultivating better responses from community members.


What made the difference?



“They would explain themselves right from the start and say something like, ‘Hey, I’m officer so-and-so. The reason I’m out here today is because I’m part of this new community policing unit. We’re trying to get to know the community and to better understand the issues that you’re facing.’ And that was the lightbulb moment for me and Kyle: the difference here is that some of these officers are explaining themselves very clearly, making their benevolent intention for the interaction known right from the start of the conversation.”


Dittmann and her colleagues have coined this phenomenon the “transparency statement.” Using Linguistic Inquiry and Word Count (LIWC) software along with other natural language processing tools, the research team analyzed transcripts of the conversations to tease out subconscious cues about the civilians’ emotional state, in addition to collecting surveys from them after the encounter. Some results jumped out quickly, like the fact that people whose conversation with an officer began with a transparency statement had significantly longer conversations.
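To make the transcript-analysis step concrete, here is a minimal LIWC-style sketch: count how often words from emotion-category dictionaries appear in a transcript, normalized by length. The category word lists and sample transcripts below are hypothetical stand-ins; the actual LIWC dictionaries and the team’s data are proprietary.

```python
# LIWC-style category counting over conversation transcripts.
# The word lists and transcripts below are illustrative only.
ANXIETY_WORDS = {"worried", "nervous", "afraid", "anxious", "tense"}
ENGAGEMENT_WORDS = {"interesting", "tell", "really", "community", "thanks"}

def category_rates(transcript: str) -> dict:
    """Return per-category word rates (fraction of total words) plus length."""
    words = [w.strip(".,!?'").lower() for w in transcript.split()]
    total = len(words) or 1  # avoid division by zero on empty input
    return {
        "anxiety": sum(w in ANXIETY_WORDS for w in words) / total,
        "engagement": sum(w in ENGAGEMENT_WORDS for w in words) / total,
        "length": total,
    }

# Hypothetical snippets from a transparency-statement vs. control interaction
transparency = "Thanks officer, that's really interesting. Tell me more about the community unit."
control = "I was worried when you walked up. Honestly I am still a bit nervous."

print(category_rates(transparency))
print(category_rates(control))
```

In practice the real analysis would use validated dictionaries and statistical tests across hundreds of transcripts, but the core idea is this simple: emotional tone is estimated from normalized category word frequencies.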


The team also employed ambulatory physiological sensors—worn on the wrist—that measure skin conductivity and, by proxy, sympathetic nervous system arousal. From this data, a pattern quickly emerged: citizens’ skin conductance levels peaked early after a transparency statement (while this can be a sign of stress, in this context the researchers determined it reflected “active engagement” in the conversation) and then recovered to baseline faster than in the control group, a pattern indicative of positive social interaction.
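The peak-and-recovery pattern the sensors picked up can be sketched in code. The following is a toy illustration, not the team’s actual pipeline: it assumes a one-sample-per-second conductance series, a fixed baseline window, and a simple tolerance-based definition of “recovered,” all of which are hypothetical choices.

```python
# Toy skin-conductance analysis: find the post-baseline peak and how quickly
# the signal returns to near its pre-interaction baseline. Illustrative only;
# the sampling rate and tolerance here are assumptions, not the study's values.

def peak_and_recovery(series, baseline_end, tolerance=0.05):
    """series: conductance samples (e.g., microsiemens) at 1 Hz.
    baseline_end: index where the pre-interaction baseline window ends.
    Returns (peak_index, recovery_index or None if never recovered)."""
    baseline = sum(series[:baseline_end]) / baseline_end
    post = series[baseline_end:]
    peak_offset = max(range(len(post)), key=lambda i: post[i])
    peak_index = baseline_end + peak_offset
    for i in range(peak_index, len(series)):
        if abs(series[i] - baseline) <= tolerance:
            return peak_index, i  # recovered to near baseline at sample i
    return peak_index, None  # never recovered within the recording

# Hypothetical traces: early peak with fast recovery (transparency condition)
# versus a later, sustained elevation (control condition)
transparency = [2.0, 2.0, 2.0, 2.6, 2.8, 2.4, 2.1, 2.0, 2.0]
control      = [2.0, 2.0, 2.0, 2.3, 2.5, 2.6, 2.6, 2.5, 2.4]

print(peak_and_recovery(transparency, baseline_end=3))  # early peak, recovers
print(peak_and_recovery(control, baseline_end=3))       # no recovery observed
```

A faster return to baseline after an early arousal spike is the signature the researchers read as engagement followed by a comfortable interaction, rather than sustained stress.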


Timing, too, is of the essence: according to the study, “many patrol officers typically made transparency statements only after trust had been compromised.” Stated simply, the interest police officers showed in them was “perceived as harassment” if context wasn’t provided first.


Overall, the effect was profound: citizens who were greeted with the transparency statement were “less than half as likely to report threatened emotions.” In fact, according to the study, “twice as many community members reported feeling inspired by the end of the interaction.”


What’s more, they found that civilians of color and those from lower socioeconomic backgrounds—who might reasonably be expected to have a lower baseline level of trust in law enforcement—“may profit more from greater transparency.”


Talk, it turns out, is not so cheap after all.


Corporate Offices, Clinics, and Classrooms


The implications of this research may also extend beyond the particulars of the police force. The sticky dynamics that form around power discrepancies are replicated in many environments: the classroom, between teachers and students; the office, between managers and employees; even the clinic, between doctors and patients. In any of these cases, a person with authority—perceived or enforceable—may try to build relationships and ask well-meaning questions that make people anxious if misunderstood. Is my boss checking in on me because she’s disappointed in my performance? Is the doctor being nice because they’re preparing me for bad news?


“We believe that, with calibration to the specific dynamics of different work environments, transparency statements could have the potential to ease tense conversations across power disparities in contexts beyond policing,” says Dittmann.


More Research, Action, and Optimism



What could this mean for policing down the road?


Imagine a future where most of the community has a positive relationship with law enforcement and there is mutual trust.


"I often heard from family and friends that they’d trust the police more ‘if they were all like you.’ I can hear myself saying, ‘There are lots of police just like me!’ and I truly believe that. I believe that so many officers love people and want to serve their communities—and I believe a lot of them struggle with the same things I do. They want to see our profession do better!"


Britt Nestor


“When I get a new case and I meet the survivor, and they’re old enough to talk with me, I always explain to them, ‘I work for you. How cool is that?’ And I truly believe this: I work for these kids and their families.”


The implications run deep; a citizen may be more likely to reach out to police officers about issues in their community before they become larger problems. An officer who is not on edge may be less likely to react with force.


Dittmann is quick to acknowledge that while the results of the transparency statement are very promising, they are just one piece of a very large story with a long and loaded history. Too many communities are undersupported and overpoliced; it would deny the gravity and complexity of the issue to suggest that there is any silver-bullet solution, especially one so simple. More must be done to prevent the dynamics that lead to police violence from taking hold in the first place.


“There’s a common narrative in the media these days that it’s too late, there’s nothing that officers can do,” says Dittmann. Yet she places value on continued research, action, and optimism. When a simple intervention has such profound implications, and is neither expensive nor difficult to implement, one can’t help but see potential.


“Our next step now is to develop training on transparency statements, potentially for entire agencies,” says Dittmann. “If all the officers in the agency are interacting with transparency statements, then we see this bottom-up approach, with strong potential to scale. If every interaction you have with an officer in your community starts out with that transparency statement, and then goes smoothly, now we’re kind of getting to a place where we can hopefully talk about better relations, more trust in the community, at a higher, more holistic, level.”


While the road ahead is long and uncertain, Dittmann’s optimism is boosted by one aspect of her findings: those community members who reported feeling inspired after speaking with police officers who made their benevolent intentions clear.


"That was really powerful for me and Kyle. That’s what gets me out of bed in the morning. It’s worth trying to move the needle, even just a little bit."


Andrea Dittmann


You might also like...

Check out some other posts from Emory University, Goizueta Business School

6 min

#Expert Perspective: When AI Follows the Rules but Misses the Point

vxfv When a team of researchers asked an artificial intelligence system to design a railway network that minimized the risk of train collisions, the AI delivered a surprising solution: Halt all trains entirely. No motion, no crashes. A perfect safety record, technically speaking, but also a total failure of purpose. The system did exactly what it was told, not what was meant. This anecdote, while amusing on the surface, encapsulates a deeper issue confronting corporations, regulators, and courts: What happens when AI faithfully executes an objective but completely misjudges the broader context? In corporate finance and governance, where intentions, responsibilities, and human judgment underpin virtually every action, AI introduces a new kind of agency problem, one not grounded in selfishness, greed, or negligence, but in misalignment. From Human Intent to Machine Misalignment Traditionally, agency problems arise when an agent (say, a CEO or investment manager) pursues goals that deviate from those of the principal (like shareholders or clients). The law provides remedies: fiduciary duties, compensation incentives, oversight mechanisms, disclosure rules. These tools presume that the agent has motives—whether noble or self-serving—that can be influenced, deterred, or punished. But AI systems, especially those that make decisions autonomously, have no inherent intent, no self-interest in the traditional sense, and no capacity to feel gratification or remorse. They are designed to optimize, and they do, often with breathtaking speed, precision, and, occasionally, unintended consequences. This new configuration, where AI acting on behalf of a principal (still human!), gives rise to a contemporary agency dilemma. Known as the alignment problem, it describes situations in which AI follows its assigned objective to the letter but fails to appreciate the principal’s actual intent or broader values. The AI doesn’t resist instructions; it obeys them too well. 
It doesn’t “cheat,” but sometimes it wins in ways we wish it wouldn’t. When Obedience Becomes a Liability In corporate settings, such problems are more than philosophical. Imagine a firm deploying AI to execute stock buybacks based on a mix of market data, price signals, and sentiment analysis. The AI might identify ideal moments to repurchase shares, saving the company money and boosting share value. But in the process, it may mimic patterns that look indistinguishable from insider trading. Not because anyone programmed it to cheat, but because it found that those actions maximized returns under the constraints it was given. The firm may find itself facing regulatory scrutiny, public backlash, or unintended market disruption, again not because of any individual’s intent, but because the system exploited gaps in its design. This is particularly troubling in areas of law where intent is foundational. In securities regulation, fraud, market manipulation, and other violations typically require a showing of mental state: scienter, mens rea, or at least recklessness. Take spoofing, where an agent places bids or offers with the intent to cancel them to manipulate market prices or to create an illusion of liquidity. Under the Dodd-Frank Act, this is a crime if done with intent to deceive. But AI, especially those using reinforcement learning (RL), can arrive at similar strategies independently. In simulation studies, RL agents have learned that placing and quickly canceling orders can move prices in a favorable direction. They weren’t instructed to deceive; they simply learned that it worked. The Challenge of AI Accountability What makes this even more vexing is the opacity of modern AI systems. Many of them, especially deep learning models, operate as black boxes. Their decisions are statistically derived from vast quantities of data and millions of parameters, but they lack interpretable logic. 
When an AI system recommends laying off staff, reallocating capital, or delaying payments to suppliers, it may be impossible to trace precisely how it arrived at that recommendation, or whether it considered all factors. Traditional accountability tools—audits, testimony, discovery—are ill-suited to black box decision-making. In corporate governance, where transparency and justification are central to legitimacy, this raises the stakes. Executives, boards, and regulators are accustomed to probing not just what decision was made, but also why. Did the compensation plan reward long-term growth or short-term accounting games? Did the investment reflect prudent risk management or reckless speculation? These inquiries depend on narrative, evidence, and ultimately the ability to assign or deny responsibility. AI short-circuits that process by operating without human-like deliberation. The challenge isn’t just about finding someone to blame. It’s about whether we can design systems that embed accountability before things go wrong. One emerging approach is to shift from intent-based to outcome-based liability. If an AI system causes harm that could arise with certain probability, even without malicious design, the firm or developer might still be held responsible. This mirrors concepts from product liability law, where strict liability can attach regardless of intent if a product is unreasonably dangerous. In the AI context, such a framework would encourage companies to stress-test their models, simulate edge cases, and incorporate safety buffers, not unlike how banks test their balance sheets under hypothetical economic shocks. There is also a growing consensus that we need mandatory interpretability standards for certain high-stakes AI systems, including those used in corporate finance. Developers should be required to document reward functions, decision constraints, and training environments. 
These document trails would not only assist regulators and courts in assigning responsibility after the fact, but also enable internal compliance and risk teams to anticipate potential failures. Moreover, behavioral “stress tests” that are analogous to those used in financial regulation could be used to simulate how AI systems behave under varied scenarios, including those involving regulatory ambiguity or data anomalies. Smarter Systems Need Smarter Oversight Still, technical fixes alone will not suffice. Corporate governance must evolve toward hybrid decision-making models that blend AI’s analytical power with human judgment and ethical oversight. AI can flag risks, detect anomalies, and optimize processes, but it cannot weigh tradeoffs involving reputation, fairness, or long-term strategy. In moments of crisis or ambiguity, human intervention remains indispensable. For example, an AI agent might recommend renegotiating thousands of contracts to reduce costs during a recession. But only humans can assess whether such actions would erode long-term supplier relationships, trigger litigation, or harm the company’s brand. There’s also a need for clearer regulatory definitions to reduce ambiguity in how AI-driven behaviors are assessed. For example, what precisely constitutes spoofing when the actor is an algorithm with no subjective intent? How do we distinguish aggressive but legal arbitrage from manipulative behavior? If multiple AI systems, trained on similar data, converge on strategies that resemble collusion without ever “agreeing” or “coordination,” do antitrust laws apply? Policymakers face a delicate balance: Overly rigid rules may stifle innovation, while lax standards may open the door to abuse. One promising direction is to standardize governance practices across jurisdictions and sectors, especially where AI deployment crosses borders. A global AI system could affect markets in dozens of countries simultaneously. 
Without coordination, firms will gravitate toward jurisdictions with the least oversight, creating a regulatory race to the bottom. Several international efforts are already underway to address this. The 2025 International Scientific Report on the Safety of Advanced AI called for harmonized rules around interpretability, accountability, and human oversight in critical applications. While much work remains, such frameworks represent an important step toward embedding legal responsibility into the design and deployment of AI systems. The future of corporate governance will depend not just on aligning incentives, but also on aligning machines with human values. That means redesigning contracts, liability frameworks, and oversight mechanisms to reflect this new reality. And above all, it means accepting that doing exactly what we say is not always the same as doing what we mean Looking to know more or connect with Wei Jiang, Goizueta Business School’s vice dean for faculty and research and Charles Howard Candler Professor of Finance. Simply click on her icon now to arrange an interview or time to talk today.

8 min

#Expert Research: Incentives Speed Up Operating Room Turnover Procedures

The operating room (OR) is the economic hub of most healthcare systems in the United States today, generating up to 70% of hospital revenue. Ensuring these financial powerhouses run efficiently is a major priority for healthcare providers. But there’s a challenge. Turnovers—cleaning, preparing, and setting up the OR between surgeries—are necessary and unavoidable processes. OR turnovers can incur significant costs in staff time and resources, but at the same time, do not generate revenue. For surgeons, the lag between wheels out and wheels in is idle time. For incoming patients, who may have spent hours fasting in preparation for a procedure, it is also a potential source of frustration and anxiety. Reducing OR turnover time is a priority for many US healthcare providers, but it’s far from simple. For one thing, cutting corners in pursuit of efficiency risks patient safety. Then there’s the makeup of OR teams themselves. As a rule, well-established or stable teams work fastest and best, their efficiency fueled by familiarity and well-oiled interpersonal dynamics. But in hospital settings, staff work in shifts and according to different schedules, which creates a certain fluidity in the way turnover teams amalgamate. These team members may not know each other or have any prior experience working together. For hospital administrators this represents a quandary. How do you cut OR turnover time without compromising patient care or hiring in more staff to build more stable teams? To put that another way: how do you motivate OR workers to maintain standards and drive efficiency—irrespective of the team they work with at any given time? One novel approach instituted by Georgia’s Phoebe Putney Health System is the focus of new research by Asa Griggs Candler Professor of Accounting, Karen Sedatole PhD. 
Under the stewardship of perioperative medical director and anesthesiologist, Jason Williams MD 02MR 20MBA, and with support from Sedatole and co-authors, Ewelina Forker 23PhD of the University of Wisconsin and Harvard Business School’s Susanna Gallini PhD, staff at Phoebe ran a field experiment incentivizing individual OR workers to ramp up their own performance in turnover processes. What they have found is a simple and cost-effective intervention that reduces the lag between procedures by an average of 6.4 percent. Homing in on the Individual Williams and his team at Phoebe kicked off efforts to reduce OR turnover times by first establishing a benchmark to calculate how long it should take to prepare for different types of procedure or surgery. This can vary significantly, says Williams: while a gallbladder removal should take less than 30 minutes, open-heart surgery might take an hour or longer to prepare. “There’s a lot of variation in predicting how long it should take to get things set up for different procedures. We got there by analyzing three years of data to create a baseline, and from there, having really homed in on that data, we were able to create a set of predictions and then compare those with what we were seeing in our operating rooms—and track discrepancies, over-, and underachievement.” Williams, a Goizueta MBA graduate who also completed his anesthesiology residency at Emory University’s School of Medicine, then enlisted the support of Sedatole and her colleagues to put together a data analysis system that would capture the impact of two distinct mechanisms, both designed to incentivize individual staff members to work faster during turnovers. The first was a set of electronic dashboards programmed to record and display the average OR turnover performance for teams on a weekly basis, and segment these into averages unique to individuals working in each of the core roles within any given OR turnover team. 
The dashboard displayed weekly scores and ranked them from best to worst on large TV monitors with interactive capabilities—users could filter the data for types of surgery and other dimensions. Broadcasting metrics this way afforded Williams and his team a means of identifying and then publicly recognizing top-performing staff, but that’s not all. The dashboards also provided a mechanism with which to filter out team dynamics, and home in on individual efforts. “If you are put in a room with one team, and they are slower than others, then you are going to be penalized. Your efforts will not shine. Now, say you are put in with a bigger or faster team, your day’s numbers are going to be much higher. So, we had to find a way to accommodate and allow for the team effect, to observe individual effort. The dashboards meant we could do this. Over the period of a week or a month, the effect of other people in the team is washed out. You begin to see the key individuals pop up again and again over time, and you can see those who are far above their peers versus those who, for whatever reason, are not so efficient.” Sharing “relative performance” information has been shown to be highly motivating in many settings. The hope was that it would here, too. Three core roles: Who’s who in the Operating Room turnover team? OR turnover teams consist of three roles: circulating nurse, scrub tech, and anesthetist. While other surgery staff might be present during a turnover, depending on the needs of consecutive procedures, these are the three core roles in the team, and they are not interchangeable in any way: each individual assumes the same responsibilities in every team they join. Typically, turnover tasks will include removing instruments and equipment from the previous surgery and setting up for the next: restocking supplies and restoring the sterile environment. 
Turnover tasks and activities vary according to the type of procedure coming next, but they are always performed by the same three roles: nurse, scrub tech, and anesthetist, each working within their own area of expertise and specialty. OR turnover teams are assembled based on staff schedules and availability, making them highly fluid: different nurses will work with different scrub techs and different anesthetists depending on who is free at any given time.


With dashboards on display across the hospital's surgery department, Williams decided to trial a second motivational mechanism, this time something more tangible.


"We decided to offer a simple $40 Dollar Store gift card to each week's top-performing anesthetist, nurse, or scrub technician to see if it would incentivize people even more. And to keep things interesting, and sustain motivation, we made sure that anyone who'd won the contest two weeks in a row would be ineligible to win the gift card the following week," says Williams. "It was a bit of a shot in the dark, and we didn't know if it would work."


Altogether, the dashboards remained in situ for about 33 months, while the gift card promotion ran for 73 weeks. It was important to stress the foundational importance of safety and then allow individuals to come up with their own ways to tighten procedures.


"This was a bottom-up, grassroots experience where the people doing the work came up with their own ways to make their times better, without cutting corners, without cutting quality, and without cutting any safety measures."

Jason Williams MD 02MR 20MBA


Incentives: Make it Something Special and Unique


Crunching all of this data, Sedatole and her colleagues could isolate the effect of each mechanism on performance and turnover times at Phoebe. While the dashboards had a "negligible" effect on productivity, the addition of the gift cards had an immediate, significant, and sustained impact on individuals' efforts.
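The averaging logic Williams describes, washing out the team effect by aggregating each individual's turnovers across many different team compositions, can be sketched in a few lines. This is an illustrative sketch only: the staff names, baseline minutes, and turnover records below are invented, not Phoebe's actual data or system.

```python
# Sketch: rank individuals by average deviation from procedure-type baselines.
# A single turnover time reflects the whole team; averaged over many turnovers
# with rotating teammates, the team effect washes out and individual effort shows.
from collections import defaultdict

# Hypothetical baselines per procedure type (minutes), e.g. from historical data.
baselines = {"gallbladder": 30, "open_heart": 60}

# Hypothetical records: (staff member, procedure type, actual turnover minutes).
records = [
    ("nurse_a", "gallbladder", 28),
    ("nurse_a", "open_heart", 55),
    ("nurse_b", "gallbladder", 35),
    ("nurse_b", "open_heart", 70),
]

# Collect each person's deviation from the relevant baseline.
deviations = defaultdict(list)
for staff, procedure, actual in records:
    deviations[staff].append(actual - baselines[procedure])

# Average deviation per person; lower (more negative) means faster than baseline.
scores = {staff: sum(d) / len(d) for staff, d in deviations.items()}
ranking = sorted(scores, key=scores.get)
print(ranking)  # ['nurse_a', 'nurse_b']
```

Comparing against a per-procedure baseline, rather than raw minutes, is what keeps a scrub tech who draws mostly open-heart cases from looking slower than one who draws mostly gallbladder removals.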
Differences in the effectiveness of the two incentives—the relative performance dashboard and the gift cards—are attributable to team fluidity, says Sedatole.


"It's all down to familiarity. Dashboards are effective if you care about your reputation and your standing with peers. And in fluid team settings, where people don't really know each other, reputation seems to matter less because these individuals may never work together again. They simply care less about rankings because they are effectively strangers."


Tangible rewards, on the other hand, have what Sedatole calls a "hedonic" value: they can feel more special and unique to the recipient, even if they carry relatively little monetary value.


"Something like a $40 gift card to Target can be more motivating to individuals even than the same amount in cash. There's something hedonic about a prize that differentiates it from cash—after all, you will just end up spending that $40 on the electricity bill."

Asa Griggs Candler Professor of Accounting, Karen Sedatole


"A tangible reward is something special because of its hedonic nature and the way that human beings do mental accounting," says Sedatole. "It occupies a different place in the brain, so we treat it differently."


In fact, analyzing the results, Sedatole and her colleagues find that the introduction of gift cards at Phoebe equates to an average incremental improvement of 6.4% in OR turnover performance, a finding that does not vary over the 73-week timeframe, she adds. To get the same result by employing more staff to build more stable teams, Sedatole calculates that the hospital would have to increase peer familiarity to the 98th percentile: a very significant financial outlay, and a lot of excess capacity if those additional team members are not working 100% of the time.
These are key findings for healthcare systems and for administrators and decision-makers in any setting or sector where fluid teams are the norm, says Sedatole: from consultancy to software development to airline ground crews. Wherever diverse professionals come together briefly or sporadically to perform tasks and then disperse, individual motivation can be optimized by simple, cost-effective tangible rewards that give team members a fresh opportunity to earn the incentive in different settings on different occasions—a recurring chance to succeed that keeps the incentive system engaging and effective over time.


For healthcare in particular, this is a win-win-win, says Williams.


"In the United States we are faced with lower reimbursements and higher costs, so we have to look for areas where we can gain efficiencies and minimize costs. In the healthcare value model, time and costs are denominators, and quality and service are numerators. Any way we can save on costs and improve efficiencies allows us to take care of more patients, and to do that effectively.

"We made some incredible improvements here. We went from just average to best in class, right to the frontier of operative efficiency. And there is so much more opportunity out there to pull more levers and reach new levels, which is truly encouraging."


Looking to know more, or to connect with Asa Griggs Candler Professor of Accounting Karen Sedatole? Simply click on her icon to arrange an interview or a time to talk today.

Why Simultaneous Voting Makes for Good Decisions

5 min

How can organizations make robust decisions when time is short and the stakes are high? It's a conundrum not unfamiliar to the U.S. Food and Drug Administration. Back in 2021, the FDA found itself under tremendous pressure to decide on the approval of the experimental drug aducanumab, designed to slow the progress of Alzheimer's disease—a debilitating and incurable condition that ranks among the top 10 causes of death in the United States.


Welcomed by the market as a game-changer on its release, aducanumab quickly ran into serious problems. A lack of data on clinical efficacy, along with a slew of dangerous side effects, meant physicians in their droves were unwilling to prescribe it. Within months of its approval, three FDA advisors resigned in protest, one calling aducanumab "the worst approval decision that the FDA has made that I can remember." By the start of 2024, the drug had been pulled by its manufacturers.


Of course, with the benefit of hindsight and data from the public's use of aducanumab, it is easy to see that the FDA made the wrong decision. But is there a better process that would have given the FDA the foresight to make the right decision under limited information?


The FDA routinely has to evaluate novel drugs and treatments: medical and pharmaceutical products that can impact the wellbeing of millions of Americans. With stakes this high, the FDA is known to tread carefully, assembling advisory, review, and funding committees that provide diverse knowledge and expertise to assess the evidence and decide whether or not to approve a new drug. As a federal agency, the FDA is also required to maintain scrupulous records covering its decisions and how those decisions are made.


The Impact of Voting Mechanisms on Decision Quality


Some of this data has been analyzed by Goizueta's Tian Heong Chan, associate professor of information systems and operation management.
Together with Panos Markou of the University of Virginia's Darden School of Business, Chan scrutinized 17 years' worth of information, including detailed transcripts from more than 500 FDA advisory committee meetings, to understand the mechanisms and protocols used in FDA decision-making: whether committee members vote to approve products sequentially, with everyone in the room having a say one after another, or simultaneously, via the push of a button, say, or a show of hands. Chan and Markou also compared sequential and simultaneous voting to see if there were differences in the quality of the decisions each mechanism produced.


Their findings are striking. It turns out that when stakeholders vote simultaneously, they make better decisions. Drugs or products approved this way are far less likely to be issued post-market boxed warnings (warnings issued by the FDA that call attention to potentially serious health risks associated with the product, and that must be displayed on the prescription box itself), and less than half as likely to be recalled.


"The FDA changed its voting protocols in 2007, when they switched from sequentially voting around the room, one person after another, to simultaneous voting procedures. And the results are stunning."

Tian Heong Chan, Associate Professor of Information Systems & Operation Management


"Decisions made by simultaneous voting are more than twice as effective," says Chan. "After 2007, you see that just 3.4% of all drugs and products approved this way end up being discontinued or recalled. This compares with an 8.6% failure rate for drugs approved by the FDA using more sequential processes—the round robin where individuals vote one by one around the room."


Imagine you are told beforehand that you are going to vote on something important by simply raising your hand or pressing a button.
In this scenario, you are probably going to want to expend more time and effort in debating all the issues and informing yourself before you decide.

Tian Heong Chan


"On the other hand, if you know the vote will go around the room, and you will have a chance to hear how others speak and explain their decisions, you're going to be less motivated to exchange and defend your point of view beforehand," says Chan. In other words, simultaneous decision-making is half as likely as the sequential approach to generate a wrong decision.


Why is this? Chan and Markou believe that these voting mechanisms shape the quality of the discussion and debate that undergird decision-making; that the quality of decisions is significantly impacted by how those decisions are made.


Quality Discussion Leads to Quality Decisions


Parsing the FDA transcripts for content, language, and tonality in both settings, Chan and Markou find evidence to support this. Simultaneous voting drives discussions characterized by language that is more positive, more authentic, and more even in terms of expressions of authority and hierarchy, says Chan. What's more, these deliberations and exchanges are deeper and more far-ranging in quality.


"We find marked differences in the tone of speech and the topics discussed when stakeholders know they will be voting simultaneously. There is less hierarchy in these exchanges, and individuals exhibit greater confidence in sharing their points of view more freely."

Tian Heong Chan


"We also see more questions being asked, and a broader range of topics and ideas discussed," says Chan. In this context, decision-makers are also less likely to reach unanimous agreement. Instead, debate is more vigorous and differences of opinion remain more robust. Conversely, sequential voting around the room is typically preceded by shorter discussion in which stakeholders share fewer opinions and ask fewer questions.
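Chan's "more than twice as effective" claim follows directly from the two failure rates reported above, and the arithmetic is easy to check. The figures are from the article; the variable names are my own.

```python
# Sanity check: how much more often do sequentially approved drugs fail?
# Rates reported in the article: 3.4% (simultaneous) vs 8.6% (sequential).
simultaneous_failure = 0.034
sequential_failure = 0.086

relative_risk = sequential_failure / simultaneous_failure
print(f"Sequential approvals fail {relative_risk:.2f}x as often")  # 2.53x
```

A ratio of roughly 2.5 is what underpins the "more than twice" framing: the sequential process produced two and a half times the rate of discontinued or recalled products.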
And this demonstrably impacts the quality of the decisions made, says Chan.


"Sharing a different perspective to a group requires effort and courage. With sequential voting or decision-making, there seems to be less interest in surfacing diverse perspectives or hidden aspects to complex problems."

Tian Heong Chan


"So it's not that individuals are being influenced by what other people say when it comes to voting on the issue—which would be tempting to infer—rather, it's that sequential voting mechanisms seem to take a bit of the effort out of the process."


When decision-makers are told that they will have a chance to vote and to explain their vote, one after another, their incentives to interrogate each other vigorously beforehand, and to work that little bit harder to surface any shortcomings in their own understanding or point of view, or in the data, are relatively weaker, say Chan and Markou.


The Takeaway for Organizations Making High-Stakes Decisions


Decision-making in different contexts has long been the subject of scholarly scrutiny. Chan and Markou's research sheds new light on the important role that different mechanisms play in shaping the outcomes of decision-making—and the quality of the decisions that are jointly taken. And this should be on the radar of organizations and institutions charged with making choices that impact swathes of the community, they say.


"The FDA has a solid tradition of inviting diversity into its decision-making. But the data shows that harnessing the benefits of diversity is contingent on using the right mechanisms to surface the different expertise you need to be able to see all the dimensions of the issue, and make better informed decisions about it," says Chan.


A good place to start? A concurrent show of hands.


Tian Heong Chan is an associate professor of information systems and operation management. He is available to speak about this topic. Click on his icon now to arrange an interview today.
