Apple continues to shine atop the Forbes Most Valuable Brands List

May 23, 2019

1 min

Ryan Hamilton

It’s a list that reads like the uniform of a NASCAR pit crew: the top names and logos of some of the most popular and well-known companies in the world.


And this year, it’s once again Apple sitting atop Forbes’ list of the World’s Most Valuable Brands.


Rounding out the top 10 were some of the heaviest hitters and most well-known names in the market. Here’s the list.

 

1. Apple

2. Google

3. Microsoft

4. Amazon

5. Facebook

6. Coca-Cola

7. Samsung

8. Disney

9. Toyota

10. McDonald’s


Now, building a brand is not easy. And one bad news story or misstep can taint a brand forever – just ask British Petroleum or the Weinstein Company.


So, what does it take to build, maintain, and preserve a brand for decades? It’s not as easy as you might think. If you’re covering this topic or simply curious, let one of our experts help.

Professor Ryan Hamilton is a consumer psychologist whose research investigates shopper decision making. In particular, he is interested in how brands, prices, and choice architecture influence decision making at the point of purchase.


Ryan is available to speak with media regarding brands and brand reputation – simply click on his icon to arrange an interview.




Connect with:
Ryan Hamilton

Associate Professor of Marketing
Customer Psychology | Customer Decision Making | Branding | Price and Price Image

You might also like...

Check out some other posts from Emory University, Goizueta Business School

6 min

Hiring More Nurses Generates Revenue for Hospitals

Underfunding is driving an acute shortage of trained nurses in hospitals and care facilities in the United States. It is the worst such shortage in more than four decades. One estimate from the American Hospital Association puts the deficit north of one million. Meanwhile, a recent survey by recruitment specialist AMN Healthcare suggests that 900,000 more nurses will drop out of the workforce by 2027. American nurses are quitting in droves, thanks to low pay and burnout as understaffing increases individual workloads.

This is bad news for patient outcomes. Nurses are estimated to have eight times more routine contact with patients than physicians, and they shoulder the bulk of responsibility for diagnostic data collection, treatment plans, and clinical reporting. As a result, understaffing is linked to a slew of serious problems, among them increased wait times for patients in care, post-operative infections, readmission rates, and patient mortality, all of which are on the rise across the U.S.

Tackling this crisis is challenging because of how nursing services are reimbursed. Most hospitals operate a payment system where services are paid for separately. Physician services are billed as separate line items, making them a revenue generator for the hospitals that employ them. But under Medicare, nursing services are charged as part of a fixed room-and-board fee, meaning that hospitals charge the same fee regardless of how many nurses are employed in the patient’s care. In this model, nurses end up on the other side of hospitals’ balance sheets: a labor expense rather than a source of income. For beleaguered administrators looking to sustain quality of care while minimizing costs (and maximizing profits), hiring and retaining nursing staff has arguably become something of a zero-sum game in the U.S.

The Hidden Costs of Nurse Understaffing

But might the balance sheet in fact be skewed in some way?
Could there be potential financial losses attached to nurse understaffing that administrators should factor into their hiring and remuneration decisions? Research by Goizueta Professors Diwas KC and Donald Lee, along with recent Goizueta PhD graduates Hao Ding 24PhD (Auburn University) and Sokol Tushe 23PhD (Muma College of Business), suggests there are. Their new peer-reviewed publication finds that increasing a single nurse’s workload by just one patient creates a 17% service slowdown for all other patients under that nurse’s care.

Looking at the data another way, having one additional nurse on duty during the busiest shift (typically between 7am and 7pm) speeds up emergency department work and frees up capacity to treat more patients, such that hospitals could be looking at a major increase in revenue. The researchers calculate that this productivity gain could equate to a net increase of $470,000 per 10,000 patient visits, plus savings to the tune of $160,000 in lost earnings for the same number of patients as wait times are reduced.

“A lot of the debate around nursing in the U.S. has focused on the loss of quality in care, which is hugely important,” says Diwas KC. “But looking at the crisis through a productivity lens means we’re also able to understand the very real economic value that nurses bring too: the revenue increases that come with capacity gains.”

Diwas KC, Goizueta Foundation Term Professor of Information Systems & Operations Management

“Our findings challenge the predominant thinking around nursing as a cost,” adds Lee. “What we see is that investing in nursing staff more than pays for itself in downstream financial benefits for hospitals. It is effectively a win-win-win for patients, nurses, and healthcare providers.”

Nurse Load: the Biggest Impact on Productivity

To get to these findings, the researchers analyzed a high-resolution dataset on patient flow through a large U.S. teaching hospital.
They looked at the real-time workloads of physicians and nurses working in the emergency department between April 2018 and March 2019, factoring in variables such as patient demographics and severity of complaint or illness. Tracking patients from admission to triage and on to treatment, the researchers were able to tease out the impact that the number of nurses and physicians on duty had on patient throughput. Using a novel machine learning technique developed at Goizueta by Lee, they were able to identify the effect of increasing or reducing the workforce.

The contrast between physicians and nursing staff is stark, says Tushe. “When you have fewer nurses on duty, capacity and patient throughput drop by an order of magnitude, far, far more than when reducing the number of doctors. Our results show that for every additional patient the nurse is responsible for, service speed falls by 17%. That compares to just 1.4% if you add one patient to the workload of an attending physician. In other words, nurses’ impact on productivity in the emergency department is more than eight times greater.”

Boosting Revenue Through Reduced Wait Times

Adding an additional nurse to the workforce, on the other hand, increases capacity appreciably. And as more patients are treated faster, hospitals can expect a concomitant uptick in revenue, says KC. “It’s well documented that cutting down wait time equates to more patients treated and more income. Previous research shows that reducing service time by 15 minutes per 30,000 patient visits translates to $1.4 million in extra revenue for a hospital.”

“In our study, we calculate that staffing one additional nurse in the 7am to 7pm emergency department shift reduces wait time by 23 minutes, so hospitals could be looking at an increase of $2.33 million per year.”

Diwas KC

This far eclipses the costs associated with hiring one additional nurse, says Lee. “According to the 2022 U.S. Bureau of Labor Statistics, the average nursing salary in the U.S. is $83,000. Fringe benefits account for an additional 50% of the base salary. The total cost of adding one nurse during the 7am to 7pm shift is $310,000 (for 2.5 full-time employees). When you do the math, it is clear. The net gain is $2 million for the hospital in our study. Or $470,000 per 10,000 patient visits.”

Incontrovertible Benefits to Hiring More Nurses

These findings should provide compelling food for thought both to healthcare administrators and U.S. policymakers. For too long, the latter have fixated on the upstream costs without exploring the downstream benefits of nursing services, say the researchers. Their study, the first to quantify the economic value of nurses in the U.S., asks “better questions,” argues Tushe, exploiting newly available data and analytics to reveal incontrovertible financial benefits attached to hiring, and compensating, more nurses in American hospitals.

“We know that a lot of nurses are leaving the profession not just because of cuts and burnout, but also because of lower pay. We would say to administrators struggling to hire talented nurses to review current wage offers, because our analysis suggests that the economic surplus from hiring more nurses could be readily applied to retention pay rises also.”

Sokol Tushe 23PhD, Muma College of Business

The Case for Mandated Ratios

For state-level decision makers, Lee has additional words of advice. “In 2004, California mandated minimum nurse-to-patient ratios in hospitals. Since then, six more states have added some form of minimum ratio requirement. The evidence is that this has been beneficial to patient outcomes and nurse job satisfaction. Our research now adds an economic dimension to the list of benefits as well. Ipso facto, policymakers ought to consider wider adoption of minimum nurse-to-patient ratios.”

However decision makers go about tackling the shortage of nurses in the U.S., they should go about it fast and soon, says KC.
“This is a healthcare crisis that is only set to become more acute in the near future. As our demographics shift and our population ages, demand for quality care will increase. So too must the supply of care capacity. But what we are seeing is the nurse staffing situation in the U.S. moving in the opposite direction. All of this is manifesting in the emergency department. That’s where wait times are getting longer, mistakes are being made, and overworked nurses are quitting. It is creating a vicious cycle that needs to be broken.”

Diwas KC is a professor of information systems & operations management and Donald Lee is an associate professor of information systems & operations management. Both experts are available to speak about this important topic - simply click on either icon now to arrange an interview today.
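The cost-benefit arithmetic quoted above can be checked back-of-the-envelope. Every input below is a figure reported in this article (the 2022 BLS average salary, the 50% fringe rate, 2.5 full-time employees to cover one 7am-7pm slot, and the $2.33 million revenue estimate); the script simply recombines them:

```python
# Back-of-the-envelope check of the study's cost-benefit figures as quoted
# in the article above. All inputs are the article's reported numbers.
base_salary = 83_000     # average U.S. nursing salary, 2022 BLS (per article)
fringe_rate = 0.50       # fringe benefits as a share of base salary
ftes_per_slot = 2.5      # full-time employees to cover one 7am-7pm slot

cost_per_nurse = base_salary * (1 + fringe_rate)   # $124,500 fully loaded
staffing_cost = cost_per_nurse * ftes_per_slot     # ~$311,250, i.e. ~$310k

revenue_gain = 2_330_000                 # annual gain from 23-minute wait cut
net_gain = revenue_gain - staffing_cost  # ~$2.02M, the study's "~$2 million"

print(f"Fully loaded cost per nurse: ${cost_per_nurse:,.0f}")
print(f"Cost of staffing the slot:   ${staffing_cost:,.0f}")
print(f"Net annual hospital gain:    ${net_gain:,.0f}")
```

The numbers reconcile: $311,250 rounds to the quoted $310,000 cost, and the roughly $2.02 million difference matches the quoted $2 million net gain.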

5 min

Expert Research: The Use of AI in Financial Reporting

Artificial intelligence (AI) is developing into an amazing tool to help humans across multiple fields, including medicine and research, and much of that work is happening at Emory University’s Goizueta Business School. Financial reporting and auditing are both areas where AI can have a significant impact as companies and audit firms rapidly adopt the technology. But are financial managers willing to rely on the results of AI-generated information? In the context of audit adjustments, it depends on whether their company uses AI as well.

Willing to Rely on AI?

Cassandra Estep, assistant professor of accounting at Goizueta Business School, and her co-authors have a forthcoming study looking at financial managers’ perceptions of the use of AI, both within their companies and by their auditors. Research had already been done on how financial auditors react to using AI for evaluating complex financial reporting. That got Estep and her co-authors thinking there was more to the story.

“A big, important part of the financial reporting and auditing process is the managers within the companies being audited. We were interested in thinking about how they react to the use of AI by their auditors,” Estep says. “But then we also started thinking about what companies are investing in AI as well. That joint influence of the use of AI, both within the companies and by the auditors that are auditing the financials of those companies, is where it all started.”

The Methodology

Estep and her co-authors conducted a survey and experiment with senior-level financial managers with titles like CEO, CFO, or Controller – the people responsible for making financial reporting decisions within companies. The survey included questions to understand how companies are using AI. It also included open-ended questions designed to identify key themes about financial managers’ perceptions of AI use by their companies and their auditors.
In the experiment, participants completed a hypothetical case in which they were asked about their willingness to record a downward adjustment to the fair value of a patent proposed by their auditors. The scenarios varied across randomly assigned conditions as to whether the auditor did or did not use AI in coming up with the proposed valuation and adjustment, and whether their company did or did not use AI in generating its estimated value of the patent. When both the auditor and the company used AI, participants were willing to record a larger adjustment amount, i.e., decrease the value of the patent more. The authors find that these results are driven by increased perceptions of accuracy.

“It’s not necessarily a comfort thing, but a signal from the company that this is an acceptable way to do things, and it actually caused them to perceive the auditors’ information as more accurate and of higher quality.”

Cassandra Estep, assistant professor of accounting

“Essentially, they viewed the auditors’ recommendation for adjusting the numbers to be more accurate and of higher quality, and so they were more willing to accept the audit adjustment,” Estep says.

Making Financial Reporting More Efficient

Financial reporting is a critical process in any business. Companies and investors need timely and accurate information to make important decisions. With the added element of AI, financial reporting processes can include more external data.

“We touched on the idea that these tools can hopefully process a lot more information and data. For example, we’ve seen auditors and managers talk about using outside information,” Estep says. “Auditors might be able to use customer reviews and feedback as one of the inputs to deciding how much warranty expense the company should be estimating. And is that amount reasonable?
The idea is that if customers are complaining, there could be some problem with the products.”

Adding data to analytical processes, when done by humans alone, adds a significant amount of time to the calculations. Research from the European Spreadsheet Risks Interest Group suggests that more than 90% of all financial spreadsheets contain at least one error. Some forms of AI can process hundreds of thousands of calculations overnight, typically with fewer errors. In short, it can be more efficient.

“Efficiency was brought up a lot in our survey, the idea that things could be done faster with AI,” Estep says. “We also asked the managers about their perspective on the audit side, and they did hope that audit fees would go down, because auditors would be able to do things more quickly and efficiently as well. But the flip side is that using AI could also raise more questions and more issues that have to be investigated. There’s also the potential for more work.”

The Fear of Being Replaced

The fear of being replaced is a more or less universal worry for anyone whose industry is beginning to adopt AI in some form. While the respondents in Estep’s survey looked forward to more efficient and effective handling of complex financial reporting by AI, they also emphasized the need to keep the human element involved in any decisions made using AI.

“What we were slightly surprised about was the positive reactions that the managers had in our survey. While some thought the use of AI was inevitable, there’s this idea that it can make things better,” Estep says. “But there’s still a little bit of trepidation. One of the key themes that came up was: yes, we need to use these tools. We should take advantage of them to improve the quality and the efficiency with which we do things. But we also need to keep that human element. At the end of the day, humans need to be responsible.
Humans need to be making the decisions.”

A Positive Outlook

The benefits of AI were clear to the survey participants. They recognized it as a positive trend, whether or not it was currently used in their financial reporting. If they weren’t regularly using AI, they expected to be using it soon.

“I think one of the most interesting things to us about this paper is this idea that AI can be embraced. Companies and auditors are still somewhat in their infancy of figuring out how to use it, but big investments are being made,” Estep says. “And then, again, there’s the fact that our experiment also shows a situation where managers were willing to accept the auditors’ proposed adjustments. This arguably goes against their incentives as management to keep the numbers more positive or optimistic. The auditors are serving that role of helping managers provide more reliable financial information, and that can be viewed as a positive outcome.”

“There’s still some hesitation. We’re still figuring out these tools. We see examples all the time of where AI has messed up, or put together false information. But I think the positive sentiment across our survey participants, and also the results of our experiment, reinforce the idea that AI can be a good thing and that it can be embraced. Even in a setting like financial reporting and auditing, where there can be fear of job replacement, the focus on the human-technology interaction can hopefully lead to improved situations.”

Cassandra Estep is an assistant professor of accounting at Goizueta Business School and a co-author of the forthcoming study looking at financial managers’ perceptions of the use of AI. She’s available to speak about this important topic - simply click on her icon now to arrange an interview today.

5 min

Expert Perspective: Mitigating Bias in AI: Sharing the Burden of Bias When it Counts Most

Whether getting directions from Google Maps, personalized job recommendations from LinkedIn, or nudges from a bank for new products based on our data-rich profiles, we have grown accustomed to having artificial intelligence (AI) systems in our lives. But are AI systems fair? The short answer: not completely. Further complicating the matter is the fact that today’s AI systems are far from transparent.

Think about it: generative AI tools like ChatGPT, built on sophisticated architectures such as deep learning and large language models, are fed vast amounts of training data which then interact in unpredictable ways. And while the principles of how these methods operate are well understood (at least by those who created them), ChatGPT’s decisions are likened to an airplane’s black box: they are not easy to penetrate. So, how can we determine if “black box AI” is fair?

Some dedicated data scientists are working around the clock to tackle this big issue. One of them is Gareth James, who also serves as Dean of Goizueta Business School as his day job. In a recent paper titled “A Burden Shared is a Burden Halved: A Fairness-Adjusted Approach to Classification,” Dean James, along with coauthors Bradley Rava, Wenguang Sun, and Xin Tong, has proposed a new framework to help ensure AI decision-making is as fair as possible in high-stakes decisions where certain individuals, for example racial minority groups and other protected groups, may be more prone to AI bias, even without our realizing it. In other words, their approach to fairness makes adjustments that work out better when some are getting the short shrift of AI.

Gareth James became the John H. Harland Dean of Goizueta Business School in July 2022. Renowned for his visionary leadership, statistical mastery, and commitment to the future of business education, James brings vast and versatile experience to the role.
His collaborative nature and data-driven scholarship offer fresh energy and focus aimed at furthering Goizueta’s mission: to prepare principled leaders to have a positive influence on business and society.

Unpacking Bias in High-Stakes Scenarios

Dean James and his coauthors set their sights on high-stakes decisions in their work. What counts as high stakes? Examples include hospitals’ medical diagnoses, banks’ credit-worthiness assessments, and state justice systems’ bail and sentencing decisions. On the one hand, these areas are ripe for AI interventions, with ample data available. On the other hand, biased decision-making here has the potential to negatively impact a person’s life in a significant way.

In the case of justice systems, the United States has a data-driven decision-support tool known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) in active use. The idea behind COMPAS is to crunch available data (including age, sex, and criminal history) to help determine a criminal-court defendant’s likelihood of committing a crime while awaiting trial. Supporters of COMPAS note that statistical predictions are helping courts make better decisions about bail than humans did on their own. At the same time, detractors have argued that COMPAS is better at predicting recidivism for some racial groups than for others. And since we can’t control which group we belong to, that bias needs to be corrected. It’s high time for guardrails.

A Step Toward Fairer AI Decisions

Enter Dean James and colleagues’ algorithm. They call it “fairness-adjusted selective inference” (FASI), and it is designed to make the outputs of AI decisions fairer even without access to the AI model’s inner workings. It works by flagging specific decisions that would be better handled by a human being in order to avoid systemic bias. That is to say, if the AI cannot yield an acceptably clear (1/0, or binary) answer, a human review is recommended.
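The published FASI procedure is more sophisticated than this, but the core idea it describes, selective inference with a per-group cap on acceptable mistakes, can be sketched in toy form. Everything below (the function name, the use of a confidence score as an error proxy) is illustrative, not taken from the paper:

```python
def flag_for_review(scores, groups, alpha=0.25):
    """Toy sketch of selective classification with a per-group error cap
    (illustrative only -- not the published FASI algorithm).

    Within each group, accept only the most confident predictions such that
    the estimated error rate among accepted cases stays at or below alpha;
    everything else is flagged for human review.
    """
    review = [True] * len(scores)               # start with everyone flagged
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        idx.sort(key=lambda i: -scores[i])      # most confident first
        total_err, accepted = 0.0, 0
        for rank, i in enumerate(idx, start=1):
            total_err += 1.0 - scores[i]        # estimated error if accepted
            if total_err / rank <= alpha:
                accepted = rank                 # largest prefix under the cap
        for i in idx[:accepted]:
            review[i] = False                   # confident enough to automate
    return review

# One borderline case in group "a" gets routed to a human reviewer:
mask = flag_for_review([0.95, 0.6, 0.3, 0.99, 0.6], ["a", "a", "a", "b", "b"])
print(mask)  # [False, False, True, False, False]
```

Because the acceptance threshold is computed separately for each group, no single group absorbs a disproportionate share of the automated errors; the uncertain cases are instead shared out to human reviewers, which is the intuition behind the paper’s title.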
To test their fairness-adjusted selective inference, the researchers turned to both simulated and real data. For the real data, the COMPAS dataset enabled a look at predicted and actual recidivism rates for two minority groups. In the accompanying figures, the researchers set an “acceptable level of mistakes” (shown as a dotted line) at 0.25, or 25%. They then compared “minority group 1” and “minority group 2” results before and after applying their FASI framework. Especially if you were born into “minority group 2,” which graph seems fairer to you? Professional ethicists will note there is a slight dip in overall accuracy, as seen in the green “all groups” category. And yet the treatment of the two groups is fairer. That is why the researchers titled their paper “a burden shared is a burden halved.”

Practical Applications for the Greater Social Good

“To be honest, I was surprised by how well our framework worked without sacrificing much overall accuracy,” Dean James notes. By selecting cases where human beings should review a criminal history, or a credit history or medical chart, AI discrimination that would have significant quality-of-life consequences can be reduced.

Reducing protected groups’ burden of bias is also a matter of following the law. For example, in the financial industry, the United States’ Equal Credit Opportunity Act (ECOA) makes it “illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance,” as the Federal Trade Commission explains on its website. If AI-powered programs fail to correct for AI bias, the company using them can run into trouble with the law. In these cases, human reviews are well worth the extra effort for all stakeholders.

The paper grew from Dean James’ ongoing work as a data scientist when time allows.
“Many of us data scientists are worried about bias in AI, and we’re trying to improve the output,” he notes. And as new versions of ChatGPT continue to roll out, “new guardrails are being added – some better than others.”

“I’m optimistic about AI,” Dean James says. “And one thing that makes me optimistic is the fact that AI will learn and learn – there’s no going back. In education, we think a lot about formal training and lifelong learning. But at some point that learning journey has to end. With AI, it never ends.”

Gareth James is the John H. Harland Dean of Goizueta Business School. If you’re looking to connect with him, simply click on his icon now to arrange an interview today.
