Expert Insight: NFL Fandom: The Last Cultural Unifier?

Aug 7, 2024

8 min

Michael Lewis



In 2024, few cultural touchstones unify America. One of the remaining cultural unifiers is the NFL. It is almost guaranteed that the Super Bowl will be the most watched television program each year. Add Taylor Swift (another rare cultural unifier) attending to watch her boyfriend and an appealing halftime musical guest, and you can have over 120 million people watching the same program at the same time. Nothing else comes close.


There is little doubt that the NFL is the undisputed champion of American sports. But how do the various NFL fandoms compare? Which team has the top fandom, and which struggles (struggle is relative here, as the lowest-ranked NFL fandom is still impressive)? This is an interesting question in a couple of ways. First, it reveals something important about the level of connection in different cities. Cities with stronger fan bases tend to have more of a shared identity. Boston residents share more love across their teams (Celtics, Red Sox, Bruins, Patriots) than folks from Tampa Bay. “Sports” cities are fundamentally different. It's also an interesting marketing analysis. Fandoms are people who share passion and love for what are essentially brands. Examining fandom can reveal something critical about how brands that inspire fandom are built.


Comparing fan bases can also inflame passions. Sports fans are (often) the ultimate fans, as they closely identify with their teams and feel each victory as a personal triumph and each loss as a personal defeat. Because fans' identities are tied to their teams, ranking fan bases can feel like an attack. Saying Browns fans aren't as good as Ravens fans feels like an attack on Cleveland.


The deeper perspective motivating this analysis is that fandom is about cultural passion, so what people are fans of largely dictates the tone and content of our societies. A society that loves baseball, country music, and trucks feels very different from one that favors soccer, opera, and Vespas. The fandom rankings are a snapshot in time of how fandom works in the NFL. And remember, the NFL is not just the top sports league in America but also the closest thing we have in 2024 to a shared societal passion.


Analyzing Fandoms


I have been ranking NFL and other fan bases for more than a decade. These fandom analyses are an example of brand equity analytics, and they use two types of data. The goal is to understand the relationship between market characteristics and fandom outcomes at the league level. We can then evaluate each team based on how it performs relative to league norms.


The fandom or market outcome measures include things like data on prices, attendance, and social media following. These are measures of fan engagement. Prices provide a signal of how much market power a team has created. Attendance shows the enthusiasm of fans in the market to pay for tickets and take the time to travel and attend. Social media following reveals how many fans the team has in and out of their home market. Each metric has advantages and limitations. Social media following provides an indication of national fandom, but it also captures casual fans who would never pay for a ticket.


The second aspect of the analysis focuses on market potential. NFL markets vary from New York, with a population of 20 million, to Green Bay, with a few hundred thousand. Income levels in San Francisco are far higher than in Jacksonville or Cleveland. I use a range of demographics, but income and population are the major factors. Again, the metrics are useful but imperfect. Using MSA populations, for example, understates teams with broader footprints: the Packers are more of a Wisconsin team than a Green Bay team. The teams in New York and LA share a market; should they each get half of the metro-area population? One factor that I do not control for is competition. In the Southeast, NFL teams may compete with SEC teams. I have debated this issue (with myself) and have decided to neglect it.
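To make the mechanics concrete, here is a minimal sketch of how the two types of data might be turned into comparable indexes. The column names, the placeholder values, and the equal-weight averaging of standardized metrics are illustrative assumptions; the article does not publish its exact metrics or weights.

```python
import pandas as pd

# Placeholder rows for three hypothetical teams; the real analysis would use all 32.
teams = pd.DataFrame({
    "team":       ["Team A", "Team B", "Team C"],
    "avg_price":  [180.0, 135.0, 90.0],     # ticket-price signal of market power
    "attendance": [1.00, 0.99, 0.92],       # share of stadium capacity filled
    "social":     [9.5e6, 5.5e6, 1.2e6],    # followers across platforms
    "population": [7.6e6, 0.33e6, 1.6e6],   # metro-area population
    "income":     [72_000, 61_000, 62_000], # median household income
})

def zscore(s: pd.Series) -> pd.Series:
    """Standardize a metric so teams are compared on a common scale."""
    return (s - s.mean()) / s.std(ddof=0)

# Fandom-outcome index: pricing power, attendance, and social following.
teams["fandom_index"] = (
    zscore(teams["avg_price"]) + zscore(teams["attendance"]) + zscore(teams["social"])
) / 3

# Market-potential index: population and income.
teams["market_index"] = (zscore(teams["population"]) + zscore(teams["income"])) / 2

print(teams[["team", "fandom_index", "market_index"]].round(2))
```

Standardizing each metric before averaging keeps one signal (say, a huge social following) from swamping the attendance and pricing signals.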


This year's analysis includes a significant change from last year: I am no longer controlling for team performance. Controlling for team performance is helpful because it isolates core, unchanging fandom. This approach has appeal, as we can argue that teams with more passionate fandoms will be more resilient against losing seasons. The downside of controlling for performance is that we get less of a measurement of the fandom's overall value. If a team like Kansas City is on an extended winning run, then the Chiefs brand is very valuable at the moment. Controlling for winning makes the analysis more about the core, near-permanent passion of a fandom, while not controlling makes the results more relevant to current brand power.


The analysis involves three steps. The first step creates measures of each team's relative fandom outcomes and market potential. The second step develops a statistical model of the relationship between market potential and fandom outcomes. The third step compares each team's actual fandom outcomes with the statistical model's predictions; the key point is that the prediction is based on leaguewide data. As these analyses are always imperfect, the best way to consider the fandom rankings is as tiers. I like the idea of quadrants.
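The three steps can be expressed as a short pipeline built on the indexes sketched above. The linear specification, the residual-based ranking, and the optional winning-percentage control (the modeling choice discussed earlier) are assumptions for illustration, not the author's published model.

```python
import numpy as np
import pandas as pd

def rank_fandoms(teams: pd.DataFrame, control_for_performance: bool = False) -> pd.DataFrame:
    """Steps 2 and 3: fit a league-wide norm, then rank teams by how far
    their actual fandom outcomes sit above or below that norm."""
    # Step 2: league-wide regression of fandom outcomes on market potential,
    # optionally adding winning percentage as a control (last year's approach).
    predictors = ["market_index"] + (["win_pct"] if control_for_performance else [])
    X = np.column_stack([np.ones(len(teams))] + [teams[c].to_numpy() for c in predictors])
    y = teams["fandom_index"].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Step 3: actual minus predicted. A positive residual means a fandom
    # outperforms what a market of that size and wealth would normally support.
    out = teams.copy()
    out["predicted"] = X @ beta
    out["residual"] = out["fandom_index"] - out["predicted"]
    out = out.sort_values("residual", ascending=False).reset_index(drop=True)

    # Report tiers rather than precise ranks: four quadrants, best to worst.
    labels = ["Elite", "Solid Performers", "Role Players", "Hopium"]
    quad_size = int(np.ceil(len(out) / 4))
    out["quadrant"] = [labels[min(i // quad_size, 3)] for i in range(len(out))]
    return out
```

Calling rank_fandoms(teams) on a full 32-team table would produce the league table and quadrant labels; passing control_for_performance=True (with a hypothetical win_pct column) corresponds to the earlier approach of isolating core fandom.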


Below are some brief comments on the members of each quadrant (Elite, Solid Performers, Role Players, Hopium). I will be discussing each fandom on social media.


TikTok: @fanalyticspodcast


Instagram: @fanalyticsmikelewis


YouTube: @fanalyticsmike


A bonus figure follows the Quad overviews.


The Results



Quadrant 1: The Elite

The Dallas Cowboys lead the top group of teams, followed by the Packers, Eagles, Chiefs, 49ers, Raiders, Patriots, and Steelers. Sounds a lot like what the man on the street would list as the top NFL brands. The Cowboys and Packers leading the way is no surprise. The Cowboys are second in social following and the leaders in attendance. The Packers are an astonishing fandom story as the team is located in the definitive small market. The Eagles leading the Steelers is going to be troubling in Western Pennsylvania, but the Eagles have more pricing power and more social following. The 49ers are a solid NFL fandom with few weaknesses. The Patriots are in a new era, and it will be fascinating to see if they maintain their top-tier position as Brady and Belichick become memories.


The Chiefs' presence in the top group is a change from past years and is due to the shift away from controlling for performance. The Chiefs have a great fandom, but the team's success currently pumps them up. The Chiefs are in a brand-building phase as the dynasty continues. The question for the Chiefs is where they end up long-term.


I don't fully understand the Raiders' ranking. The Raiders are midrange in attendance and social following but do well because they are reported to have the highest prices in the league. I suspect this is more an idiosyncrasy of the Las Vegas market than a reflection of significant, passionate fandom.


Quadrant 2: Solid Performers

The Quadrant 2 teams are the Broncos, Giants, Panthers, Seahawks, Saints, Ravens, Texans, and Browns. These are the solid performers among NFL fandoms (brands): teams with above-expected fandom outcomes given their market potential.


The Quadrant 2 clubs are all passionate fanbases (maybe one exception) despite very different histories. For example, the AFC North rival Ravens and Browns differ in both relative history and frequency of winning. Cleveland fandom involves significant character, while the Ravens are a “blue-collar” brand that has been a consistent winner. There are a lot of great stories in Quad 2. The Saints were once the Aints but are now a core part of New Orleans. The Broncos and Giants are great fandoms who are probably angry to be left out of Quad 1.


The Panthers' position is unexpected and may be due to some inflated social media numbers. This is the challenge when an analysis is based only on data. When data gets a little weird, like an inflated social media follower count dating back to Cam Newton's days, the results can also get a little weird. This is a teachable moment—do not analyze and interpret data without knowing the context (the data-generating processes).


Quadrant 3: Role Players

Quadrant 3 fandoms are teams whose fandom outcomes fall slightly below the league norm for similar markets. The Quadrant 3 teams include (in order) the Bills, Falcons, Buccaneers, Jets, Vikings, Bears, Dolphins, and Bengals. There are some interesting teams in Quad 3. The Bills have a great and notorious fandom; jumping through flaming tables in subzero weather should get you into the top half of the rankings, shouldn't it? The big-market Jets and the small-market Bengals have two of the most fascinating QBs in the league. Both clubs could be poised to reach Quad 2 with a Super Bowl or two. Da'Bears may be one of the most disappointing results. A team with an SNL skit devoted to its fandom, in a market like Chicago, shouldn't be in Quad 3. Other quick comments: the Falcons need to win a title. Florida is tough for professional teams. The Vikings should play outside.


Quadrant 4: Hopium

These are the NFL's weakest fandoms, with the key phrase being "the NFL's." The Quad 4 teams, in order, are the Lions, Rams, Jaguars, Colts, Titans, Commanders, Chargers, and Cardinals. It is a group dominated by teams that have not won regularly and that have histories of relocations and name changes. The Lions are poised for a move upward and may be a sleeping giant of a fandom. They have the most watchable coach in the league and the most surprising celebrity fan. An interesting side story in Quad 4 is the battle for Los Angeles between the Rams (formerly of St. Louis) and the Chargers (formerly of San Diego). They play in the same market, and the Rams have won more, but will Herbert lead the Chargers past the Rams?


Quad 4 illustrates an important lesson: consistency matters. The Rams moved from LA to St. Louis and then back again. The Chargers went from San Diego to LA. The Colts left Baltimore in the middle of the night. The Titans were the Oilers and moved from Houston to Nashville. The Cardinals were the other NFL team St. Louis lost. The Commanders should have stopped with their previous name.



The Fandom Outcomes / Market Potential Matrix


The following figure is a bit of bonus material that may provide some insight into the inner workings of the analysis.


The figure below shows each team's performance on the Fandom Outcome and Market Potential indexes. The upper-left region features teams with less lucrative markets but above-average fandoms, like the Packers, Steelers, and Chiefs. The lower-right region shows teams with below-average fandom outcomes despite high-potential markets, like the Commanders, Chargers, and Rams. This pictorial representation is also interesting because it highlights teams with similar positions, and the similarities can be somewhat surprising. For example, the Lions and Dolphins have very similar profiles despite the differences between Detroit and Miami.




Mike Lewis is an expert in analytics and marketing. This combination makes Professor Lewis a unique authority on fandom, as his work addresses the complete process from success on the field to success at the box office and on the campaign trail.








