AI Art: What Should Fair Compensation Look Like?

Jun 28, 2024

5 min

David Schweidel



New research from Goizueta’s David Schweidel examines how human artists should be compensated when images based on their work are generated via artificial intelligence.


Artificial intelligence is making art. That is to say, compelling artistic creations based on thousands of years of art production may now be just a few text prompts away. And it’s all thanks to generative AI trained on internet images. You don’t need Picasso’s skillset to create something in his style. You just need an AI-powered image generator like DALL-E 3 (created by OpenAI), Midjourney, or Stable Diffusion.


If you haven’t tried one of these programs yet, you really should (free or beta versions make this a low-risk proposal). For example, you might use your phone to snap a photo of your child’s latest masterpiece from school. Then, you might ask DALL-E to render it in the swirling style of Vincent Van Gogh. A color printout of the result might jazz up your refrigerator door.


Intellectual Property in the Age of AI


Now, what if you wanted to sell your AI-generated art on a t-shirt or poster? Or what if you wanted to create a surefire logo for your business? What are the intellectual property (IP) implications at work?


Take the case of a 35-year-old Polish artist named Greg Rutkowski. Rutkowski has reportedly been included in more AI-image prompts than Pablo Picasso, Leonardo da Vinci, or Van Gogh. As a professional digital artist, Rutkowski makes his living creating striking images of dragons and battles in his signature fantasy style. That is, unless those images are generated by AI, in which case he doesn’t get paid.


“They say imitation is the sincerest form of flattery. But what about the case of a working artist? What if someone is potentially not receiving payment because people can easily copy his style with generative AI?” That’s the question David Schweidel, Rebecca Cheney McGreevy Endowed Chair and professor of marketing at Goizueta Business School, is asking. Flattery won’t pay the bills. “We realized early on that IP is a huge issue when it comes to all forms of generative AI,” Schweidel says. “We have to resolve such issues to unlock AI’s potential.”


Schweidel’s latest working paper is titled “Generative AI and Artists: Consumer Preferences for Style and Fair Compensation.” It is coauthored with professors Jason Bell, Jeff Dotson, and Wen Wang (of the University of Oxford, Brigham Young University, and the University of Maryland, respectively). In the paper, the four researchers analyze consumers’ prompts and preferences in a series of experiments using Midjourney and Stable Diffusion. The results lead to practical advice and insights that could benefit artists and AI’s business users alike.


Real Compensation for AI Work?


To determine whether compensating artists for AI creations is a viable option, the coauthors examined whether three basic conditions were met:


– Are artists’ names frequently used in generative AI prompts?

– Do consumers prefer the results of prompts that cite artists’ names?

– Are consumers willing to pay more for an AI-generated product that was created citing some artists’ names?


Crunching the data, they found the same answer to all three questions: yes.


More specifically, the coauthors turned to a dataset containing millions of “text-to-image” prompts from Stable Diffusion. In this large dataset, the researchers found that living and deceased artists were frequently mentioned by name. (For the curious, the top three artists mentioned in the dataset were Rutkowski; artgerm, another contemporary artist, born in Hong Kong and based in Singapore; and Alphonse Mucha, a popular Czech Art Nouveau artist who died in 1939.)
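For readers who like to tinker, the counting step is easy to approximate. Here is a minimal sketch in Python, not the authors’ actual pipeline, assuming the prompts are available as plain strings and using a tiny, hypothetical artist lexicon:

```python
from collections import Counter

# Tiny, hypothetical artist lexicon and prompt sample; the study's
# dataset contains millions of prompts and far more artists.
ARTISTS = ["greg rutkowski", "artgerm", "alphonse mucha"]

prompts = [
    "a dragon battle in the style of greg rutkowski, highly detailed",
    "portrait of a woman, art nouveau, by alphonse mucha",
    "cyberpunk city at night, neon, cinematic lighting",
]

# Count how many prompts mention each artist by name.
mentions = Counter()
for prompt in prompts:
    text = prompt.lower()
    for artist in ARTISTS:
        if artist in text:
            mentions[artist] += 1

for artist, count in mentions.most_common():
    print(f"{artist}: {count}")
```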


Given that AI users are likely to use artists’ names in their text prompts, the team also conducted experiments to gauge how the results were perceived. Using deep learning models, they found that including an artist’s name in a prompt systematically improves the output’s aesthetic quality and likeability.
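The paper’s deep learning models aren’t reproduced here, but the underlying comparison is a standard two-group test. In the sketch below, the scores are synthetic stand-ins for model-predicted aesthetic ratings; only the comparison logic is illustrated:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Synthetic stand-ins for model-predicted aesthetic scores of images
# generated with and without an artist's name in the prompt.
# (The means here are hypothetical, not the study's results.)
scores_with_artist = rng.normal(loc=6.1, scale=1.0, size=200)
scores_without_artist = rng.normal(loc=5.6, scale=1.0, size=200)

# Two-sample t-test: does naming an artist shift the mean score?
stat, p_value = ttest_ind(scores_with_artist, scores_without_artist)
print(f"mean with artist:    {scores_with_artist.mean():.2f}")
print(f"mean without artist: {scores_without_artist.mean():.2f}")
print(f"t = {stat:.2f}, p = {p_value:.4f}")
```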


The Impact of Artist Compensation on Perceived Worth


Next, the researchers studied consumers’ willingness to pay under various circumstances, using Midjourney with the following dynamic prompt:


“Create a picture of ⟨subject⟩ in the style of ⟨artist⟩”.


The subjects chosen were the advertising creation known as the Most Interesting Man in the World, the fictional candy tycoon Willy Wonka, and the deceased TV painting instructor Bob Ross (why not?). The artists cited were Ansel Adams, Frida Kahlo, Alphonse Mucha, and Shinichiro Watanabe. The team repeated the experiment with and without artists in various configurations of subjects and styles to find statistically significant patterns. In some experiments, consumers were asked to consider buying t-shirts or wall art. In short, the series of experiments revealed that consumers saw more value in an image when they understood that the artist associated with it would be compensated.
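Building the full grid of prompts is a simple cross product of subjects and artists. Here is a minimal sketch using the template above, with a no-artist control added (the paper’s exact control wording may differ):

```python
from itertools import product

subjects = [
    "the Most Interesting Man in the World",
    "Willy Wonka",
    "Bob Ross",
]
# None stands in for the no-artist control condition.
artists = ["Ansel Adams", "Frida Kahlo", "Alphonse Mucha",
           "Shinichiro Watanabe", None]

# Every subject-artist combination, filled into the dynamic prompt.
for subject, artist in product(subjects, artists):
    if artist is None:
        prompt = f"Create a picture of {subject}"
    else:
        prompt = f"Create a picture of {subject} in the style of {artist}"
    print(prompt)
```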



Here’s a sample of imagery AI generated using the three subjects’ names “in the style of Alphonse Mucha.”
Source: Midjourney cited in http://dx.doi.org/10.2139/ssrn.4428509


“I was honestly a bit surprised that people were willing to pay more for a product if they knew the artist would get compensated,” Schweidel explains. “In short, the pay-per-use model really resonates with consumers.” In fact, consumers preferred pay-per-use over a model in which artists received a flat fee in return for being included in AI training data. That is to say, royalties seem like a fairer way to reward the most popular artists in AI. Of course, there’s still much more work to be done to figure out the right amount to pay in each possible case.
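To see why royalties favor frequently cited artists, contrast the two models with a toy calculation. Every number below is hypothetical; the paper does not prescribe fee levels:

```python
# Hypothetical fee levels for the two compensation models discussed above.
FLAT_FEE = 500.00        # one-time payment for inclusion in training data
ROYALTY_PER_USE = 0.01   # payment each time a prompt cites the artist

def pay_per_use(uses: int) -> float:
    """Total royalty earned after a given number of prompt citations."""
    return uses * ROYALTY_PER_USE

for uses in (1_000, 50_000, 1_000_000):
    print(f"{uses:>9,} uses: flat fee ${FLAT_FEE:,.2f} "
          f"vs. pay-per-use ${pay_per_use(uses):,.2f}")
```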


What Can We Draw From This?

We’re still in the early days of generative AI, and IP issues abound. Notably, the New York Times announced in December 2023 that it was suing OpenAI (the creator of ChatGPT) and Microsoft for copyright infringement: millions of New York Times articles had been used to train and improve their generative AI models.


“The lawsuit by the New York Times could feasibly result in a ruling that these models were built on tainted data. Where would that leave us?” asks Schweidel.


"One thing is clear: we must work to resolve compensation and IP issues. Our research shows that consumers respond positively to fair compensation models. That’s a path for companies to legally leverage these technologies while benefiting creators."


David Schweidel


To adopt generative AI responsibly in the future, businesses should consider three things. First, they should communicate to consumers when artists’ styles are used. Second, they should compensate contributing artists. And third, they should convey these compensation practices to consumers. “And our research indicates that consumers will feel better about that: it’s ethical,” Schweidel says.



AI is quickly becoming a focus for regulators, lawmakers, and journalists. If you’re looking to know more, let us help.


David A. Schweidel, Professor of Marketing, Goizueta Business School at Emory University



