Experts Matter. Find Yours.
Connect for media, speaking, professional opportunities & more.

ChristianaCare also ranks as the No. 1 overall employer for diversity and inclusion in Delaware and No. 14 among U.S. health systems

WILMINGTON, Del. (April 23, 2021) – Forbes magazine ranked ChristianaCare as one of the best employers for diversity and inclusion in the United States in its list of Best Employers for Diversity 2021. ChristianaCare also ranked as the No. 1 employer for diversity in Delaware and the No. 14 health system for diversity in the nation. Overall, ChristianaCare ranked 121st out of the 500 employers that were recognized.

“At ChristianaCare, our mission is simple, but profound – we take care of people,” said Janice Nevin, M.D., MPH, president and CEO of ChristianaCare, which is Delaware’s largest private employer. “And caring for people means that we work together, guided by our values of love and excellence, to bring equity and inclusion to everyone we serve, including our own caregivers. We are committed to building a workforce that reflects the diverse communities we serve, as we aspire to deliver high-quality, accessible care and achieve health equity.”

ChristianaCare has committed to being an anti-racism organization and works to ensure that commitment is reflected in the organization’s policies, programs and practices. (Read more about ChristianaCare’s anti-racism commitment here.)

ChristianaCare’s inclusion efforts also include the launch of 10 employee resource groups, which connect caregivers who share a common interest or bond. Formed by employees across all demographics – such as disability, gender, race, military status, national origin and sexual orientation – these voluntary grassroots groups work to improve inclusion and diversity at ChristianaCare. More than 750 caregivers at ChristianaCare participate in employee resource groups.
ChristianaCare also recently launched LeadershipDNA, a new leadership development program that is specifically targeted to underrepresented, diverse populations and is designed to foster professional and career development.

“We are grateful for this recognition, which affirms that our organization is committed to taking on the meaningful work to help our caregivers be exceptional today and even better tomorrow,” said Pamela Ridgeway, chief diversity officer and vice president of Inclusion and Diversity at ChristianaCare. “The fact that our caregivers can see the value and feel the impact of our inclusion and diversity efforts illustrates the importance for us to continue to push onward.”

Forbes’ Best Employers for Diversity were identified from an independent survey of more than 50,000 U.S. employees working for companies with at least 1,000 people in their U.S. operations. Employees were asked to give their opinion on a series of statements on the topics of age, gender equality, ethnicity, disability, LGBTQ+ and general diversity concerning their own employer. The survey also gave participants the chance to evaluate other employers in their respective industries that stand out with regard to diversity; only the recommendations of members of minority groups were considered. Diversity engagement among managers and diversity among leadership were also factored in.

About ChristianaCare

Headquartered in Wilmington, Delaware, ChristianaCare is one of the country’s most dynamic health care organizations, centered on improving health outcomes, making high-quality care more accessible and lowering health care costs.
ChristianaCare includes an extensive network of primary care and outpatient services, home health care, urgent care centers, three hospitals (1,299 beds), a freestanding emergency department, a Level I trauma center and a Level III neonatal intensive care unit, a comprehensive stroke center and regional centers of excellence in heart and vascular care, cancer care and women’s health. It also includes the pioneering Gene Editing Institute. ChristianaCare is nationally recognized as a great place to work, rated by Forbes as the 5th best health system to work for in the United States and by IDG Computerworld as one of the nation’s Best Places to Work in IT. ChristianaCare is rated by HealthGrades as one of America’s 50 Best Hospitals and continually ranked among the nation’s best by U.S. News & World Report, Newsweek and other national quality ratings. ChristianaCare is a nonprofit teaching health system with more than 260 residents and fellows. With the unique CareVio™ data-powered care coordination service and a focus on population health and value-based care, ChristianaCare is shaping the future of health care. ####
What does it all mean as ‘Big Tech’ pivots to privacy? Let our experts help explain if you are covering this story.
The business of the internet as we know it is about to change. Companies have thrived, boomed, and found serious cash and success harvesting your data – but that model may soon be coming to an end. With companies like Google and Apple leading the way, odds are a serious transformation is on its way, and now that notice has been served, it is getting a lot of attention.

The decision, coming from the world’s biggest digital advertising company, could help push the industry away from the use of such individualized tracking, which has come under increasing criticism from privacy advocates and faces scrutiny from regulators.

Google’s heft means the change could reshape the digital ad business, where many companies rely on tracking individuals to target their ads, measure the ads’ effectiveness, and stop fraud. Google accounted for 52% of last year’s global digital ad spending of $292 billion, according to Jounce Media, a digital ad consultancy. About 40% of the money that flows from advertisers to publishers on the open internet – meaning digital advertising outside of closed systems such as Google Search, YouTube, or Facebook – goes through Google’s ad-buying tools, according to Jounce. March 03 – The Wall Street Journal.

But what will this mean for powerhouses like Facebook, or for the multitude of apps and carriers that rely on data – and the money that comes with it – to succeed? What lies ahead will be interesting, and if you are a journalist looking to cover this topic, let our experts help.

Vilma Todri is an Assistant Professor of Information Systems & Operations Management at Emory University’s Goizueta Business School. Previously, she worked for Google, where she developed integrated cross-platform advertising strategies for large business clients that partnered with Google, and she recently wrote a comprehensive paper on this very topic.
Vilma is available to speak with media about this subject – simply click on her icon now to arrange an interview today.

Why online recommendations make it easier to hit “buy”
When it is time to buy something online – perhaps a coffee maker – you might head to Amazon and browse items for sale. One particular model might spark interest. The product page may contain recommendations for other goods: complementary products such as coffee filters, or different, competing coffee maker brands offering unique features and prices. E-commerce websites commonly use product recommendations – called co-purchase and co-view recommendations – to keep users locked into the sales funnel and increase customer retention.

But what impact do these types of recommendations actually have on consumers? How do they influence one’s willingness to pay for the product originally searched? In fact, the level of influence depends on how close a consumer is to making that purchase, says Jesse Bockstedt, associate professor of information systems & operations management at Emory’s Goizueta Business School. The type of recommendation the consumer sees plays a role in purchasing as well. To shed empirical light on this, Bockstedt teamed up with Mingyue Zhang of Shanghai International Studies University.

“We were curious. We knew that recommendation systems are integral to how consumers discover products online – a good 35 percent of Amazon sales can be attributed to recommendations, for instance,” Bockstedt says. “But we knew a lot less about how recommendations change consumer behavior in relation to a focal product.”

Specifically, the researchers were interested in the effect of complementary versus substitutable products, and in what impact the prices of these types of products had on consumer behavior. They also wanted to know whether these effects were amplified depending on whether consumers were at the exploratory phase of the buying process or ready to go ahead and make the purchase. To unpack the dynamics at play, Bockstedt and Zhang ran two experiments that simulated the online purchasing experience.
The researchers had volunteers go through the process of evaluating different products and then report back on how much they were willing to pay for each.

“We asked volunteers to look at a product page for a computer mouse, and we randomly assigned different recommendations to that page – some for other mice, and others for goods and products that would complement the original mouse. Going through the experiment, we also manipulated the price that volunteers saw on different pages, both for the recommended substitute and complementary products,” he says. “Finally, we looked at the effect of timing and the sales funnel. In one case we had volunteers look for a highly specific mouse and recommended a particular product page to them. To simulate the more exploratory phase, we gave them many pages and asked them to click on the one they found most interesting.”

In total, Bockstedt and Zhang put more than 200 volunteers through the replica virtual purchasing experience and recorded their willingness to pay the advertised price for the focal product on a scale of 0 to 100, depending on what they had seen and the point in the sales funnel at which they had seen the recommendations.

If you are looking to learn more about this research and the results, Emory has published a full article for reading and review. If you are a journalist looking to cover this topic, or if you are simply interested in learning more, then let us help. Jesse Bockstedt, associate professor of information systems and operations management at Emory’s Goizueta Business School, is available to speak with media – simply click on his icon now to book an interview today.

Personality matters: the tie between language and how well your video content performs
Why does one piece of online video content perform better than another? Does it come down to its relevance, production values, and posting and sharing strategies? Or are other dynamics at play?

There are plenty of theories about what, when and how to post if you want to drive the performance of your video. But new research by Goizueta’s Rajiv Garg, associate professor of information systems and operations management, sheds empirical and highly nuanced new light on the type of language to inject into content if you really want to accelerate consumption. And it turns out that a lot of it depends on personality.

Together with Haris Krijestorac of HEC Paris and McCombs’ Maytal Saar-Tsechansky, Garg has run a large-scale study, analyzing the words spoken in speech-heavy videos posted to YouTube and then organizing those words by personality – how they “score” in terms of the so-called Big Five personality traits.

“The Big Five is a system or taxonomy that has been used by psychologists and others since the 1980s to organize different types of personality traits. These traits are extroversion, agreeableness, openness, conscientiousness, and neuroticism,” says Garg. “In previous research into video content performance, we’ve looked into mechanisms such as posting and re-posting on different channels and how they impact the virality of one video over another. But we were intrigued by the role of language and how different words map to these personality traits, which in turn might have an impact on user emotion or response.”

Emory has published a comprehensive article that includes more details on the Big Five, and it is available for reading here. If you are a journalist looking to cover this topic, let our experts help with your story. Rajiv Garg from Emory’s Goizueta Business School is available to speak with media – simply click on his icon now to arrange an interview today.

Online ratings systems shouldn’t just be a numbers game
When you’re browsing the internet for something to buy, watch, listen to, or rent, chances are that you will scan online recommendations before you make your purchase. It makes sense. With an overabundance of options in front of you, it can be difficult to know exactly which movie or garment or holiday gift is the best fit. Personalized recommendation systems help users navigate the often-confusing labyrinth of online content. They take a lot of the legwork out of decision-making. And they are an increasingly commonplace function of our online behavior.

All of which is in your best interest as a consumer, right? Yes and no, says Jesse Bockstedt, associate professor of information systems and operations management at Emory’s Goizueta Business School. Bockstedt has produced a body of research in recent years that reveals a number of issues with recommendation systems that should be on the radar of organizations and users alike.

While user ratings, often shown as stars on a five- or ten-point scale, can help you decide whether or not to go ahead and make a selection, online recommendations can also create a bias towards a product or experience that might have little or nothing to do with your actual preferences, Bockstedt says. Simply put, you’re more likely to watch, listen to, or buy something because it’s been recommended. And, when it comes to rating the thing you’ve just watched, listened to, or bought yourself, your own rating might also be heavily influenced by the way it was recommended to you in the first place.

“Our research has shown that when a consumer is presented with a product recommendation that has a predicted preference rating – for example, we think you’ll like this movie or it has four and a half out of five stars – this information creates a bias in their preferences,” Bockstedt says.
“The user will report liking the item more after they consume it if the system’s initial recommendation was high, and they say they like it less post-consumption if the system’s recommendation was low. This holds even if the system recommendations are completely made up and random. So the information presented to the user in the recommendation creates a bias in how they perceive the item even after they’ve actually consumed or used it.”

This in turn creates a feedback loop that can reflect authentic preference, but this preference is very likely to be contaminated by bias. And that’s a problem, Bockstedt says.

“Once you have error baked into your recommendation system via this biased feedback loop, it’s going to reproduce and reproduce, so that as an organization you’re pushing your customers towards certain types of products or content and not others – albeit unintentionally,” Bockstedt explains. “And for users or consumers, it’s also problematic in the sense that you’re taking the recommendations at face value, trusting them to be accurate while in fact they may not be. So there’s a trust issue right there.”

Online recommendation systems can also potentially open the door to less-than-scrupulous behaviors, Bockstedt adds. Because ratings can anchor user preferences and choices to one product over another, who’s to say organizations might not actually leverage the effect to promote more expensive options to their users? In other words, systems have the potential to be manipulated such that customers pay more – and pay more for something that they may not in fact have chosen in the first place.

Addressing recommendation system-induced bias is imperative, Bockstedt says, because these systems are essentially here to stay. So how do you go about attenuating the effect? His latest paper sheds new and critical light on this. Together with Gediminas Adomavicius and Shawn P. Curley of the University of Minnesota and Indiana University’s Jingjing Zhang, Bockstedt ran a series of lab experiments to determine whether user bias could be eliminated or mitigated by showing users different types of recommendations or rating systems. Specifically, they wanted to see if different formats or interface displays could diminish the bias effect on users. And what they found is highly significant.

Emory has published a full article on this topic, and it’s available for reading here. If you are a journalist looking to cover this topic, or if you are simply interested in learning more, then let us help. Jesse Bockstedt, associate professor of information systems and operations management at Emory’s Goizueta Business School, is available to speak with media – simply click on his icon now to book an interview today.

Survival analysis: Forecasting lifespans of patients and products
How long will you live? Should you spring for that AppleCare+ warranty for your iPhone? When will your buddy pay you back for that lunch?

For centuries, soothsayers have striven to understand the lifespan of things – be it patient longevity, product lifecycles, or even time to loan default. Nowadays, scientists have turned away from reading tea leaves and toward survival analysis – a complex data science method for predicting not only whether an event will happen (the death of a patient, the failure of a product or machine, default on a payment, and so on) but when this event is likely to occur.

But there’s a catch. Until now, the tools of survival analysis have only been applicable in certain settings. This is due to the inherent heterogeneity of what is being analyzed: differences in patient lifestyles, demographics, product usage patterns, and so on. New research by Goizueta Business School’s Donald Lee, associate professor of information systems and operations management and of biostatistics and bioinformatics, has yielded a new tool that greatly extends survival analysis to broader use cases.

“Historically, scientists have used classic survival analysis tools to predict the lifespan of different things in different fields, from products to patients,” Lee said. “Since the 1950s, the Kaplan-Meier estimator has been the benchmark for analyzing lifetime data, particularly in clinical trials. The next breakthrough came in the 1970s, when the Cox proportional hazards model was introduced, which allows researchers to incorporate variables that can affect the predictability of things like patient mortality.”

The problem with existing survival analysis tools, Lee said, is that they make certain assumptions that can skew the predictions if those assumptions are not met. “There are very few existing tools that can incorporate variables without imposing assumptions on how they affect survival, let alone when there are a lot of variables that can also change over time. For example, two iPhones will have different lifespans depending on the temperature at which they are stored, among many other factors. But it’s unlikely that storing your phone at 30 degrees will halve its lifespan compared to storing it at 60 degrees. This sort of linear relationship is commonly assumed by existing tools.”

Lee’s team developed a new survival methodology based on gradient boosting: a machine learning technique that combines decision trees to yield predictions. The method, Lee said, is totally assumption-free (or nonparametric, in technical parlance) and can deal with a large number of variables that change continuously over time, making it significantly more general than existing methods. Nothing like it has been seen until now, he noted.

“Calculating the survival rate of anything is super complex because of the variables. Say you want to create an app for a smart watch that monitors the wearer’s vitals and uses this information to create a real-time warning indicator for stroke. Doing this accurately is difficult for two reasons,” Lee explained. “First, a large number of variables may be relevant to stroke risk, and the variables can interact in ways that break the assumptions central to existing survival analysis methods. And second, variables like blood pressure vary over time, and it is the recent measurements that are most informative. This introduces an additional time dimension that further complicates things.”

The software implementation of Lee’s method, BoXHED, overcomes both issues and allows scientists to develop real-time predictive models for conditions like stroke. The trained model can then be ported to a watch app to tell its wearer if and when they’re likely to have a stroke – a process known as inferencing in machine learning lingo. The implications, Lee said, are huge.

“BoXHED now opens the door for modern applications of survival analysis.
In previous research, I have looked at the design of early-warning mortality indicators for patients with advanced cancer and also for patients in the ICU. These use other methods to make predictions at fixed points in time, but now they can be transformed into real-time warning indicators using BoXHED.”

He cited the case of end-stage cancer patients, who are often better served by hospice care than by aggressive therapy. “Accurate predictions of survival are absolutely critical for care planning. In previous analyses, we have seen that using existing predictive models to inform end-of-life care planning can potentially avert $1.9 million in medical costs and 1,600 days of unnecessary inpatient care per 1,000 patient visits in the United States. BoXHED is likely to lead to even better results.”

Lee’s research paper is forthcoming in the Annals of Statistics. He has also created an open-source software implementation of BoXHED, which can radically improve the accuracy of survival analysis across a breadth of applications. The paper describing BoXHED was published at the International Conference on Machine Learning, and the latest version of the BoXHED software can be found online.

If you are a journalist looking to speak with Donald Lee, simply click on his icon now to arrange an interview or appointment today.
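For readers curious about the mechanics, the Kaplan-Meier estimator that Lee cites as the longtime benchmark can be sketched in a few lines of Python. This is an illustrative, from-scratch implementation of the textbook product-limit formula with made-up data – not code from Lee’s BoXHED package:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times:  observed durations (e.g., days until failure or last follow-up)
    events: 1 if the event (failure/death) was observed, 0 if censored
    Returns a list of (event_time, survival_probability) pairs.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = 0  # events observed at time t
        c = 0  # subjects censored at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d > 0:
            # Multiply in the conditional survival factor (1 - d/n) at this event time
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= d + c  # everyone observed at time t leaves the risk set
    return curve

# Hypothetical lifetimes in days; the 200-day unit marked 0 was censored
print(kaplan_meier([120, 200, 200, 340], [1, 0, 1, 1]))
```

The censored observation still counts toward the risk set until it drops out, which is exactly the subtlety that makes survival analysis different from a plain average of lifetimes. BoXHED generalizes this idea to many time-varying covariates via gradient boosting.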

Study of auto recalls shows carmakers delay announcements until they can 'hide in the herd'
BLOOMINGTON, Ind. – Automotive recalls are occurring at record levels, but they seem to be announced after inexplicable delays. A research study of 48 years of auto recalls announced in the United States finds that carmakers frequently wait to make their announcements until after a competitor issues a recall – even when the competitor's recall involves an unrelated defect.

This suggests that recall announcements may not be triggered solely by individual firms' awareness of product quality defects or concern for the public interest, but may also be influenced by competitor recalls – a phenomenon that no prior research had investigated.

Researchers analyzed 3,117 auto recalls over a 48-year period – from 1966 to 2013 – using a model to investigate recall clustering, and categorized recalls as leading or following within a cluster. They found that 73 percent of recalls occurred in clusters that lasted 34 days and had 7.6 following recalls on average. On average, a cluster formed after a 16-day gap in which no recalls were announced. They found 266 such clusters over the period studied.

"The implication is that auto firms are either consciously or unconsciously delaying recall announcements until they are able to hide in the herd," said George Ball, assistant professor of operations and decision technologies and Weimer Faculty Fellow at the Indiana University Kelley School of Business. "By doing this, they experience a significantly reduced stock penalty from their recall."

Ball is co-author of the study, "Hiding in the Herd: The Product Recall Clustering Phenomenon," recently published online in Manufacturing and Service Operations Management, along with faculty at the University of Illinois, the University of Notre Dame, the University of Minnesota and Michigan State University.
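The clustering rule described above – a new cluster begins after a stretch of days with no recall announcements – can be sketched as a simple gap-based grouping. This is an illustrative reconstruction, not the authors' actual model; the 16-day default threshold mirrors the average quiet period reported in the study, and the dates are made up:

```python
from datetime import date

def cluster_recalls(announce_dates, gap_days=16):
    """Group recall announcement dates into clusters.

    A new cluster starts whenever more than `gap_days` pass with no recall.
    Within each cluster, the first recall is the "leading" recall and the
    rest are "following" recalls.
    """
    ordered = sorted(announce_dates)
    clusters = [[ordered[0]]]
    for d in ordered[1:]:
        if (d - clusters[-1][-1]).days > gap_days:
            clusters.append([d])    # quiet period exceeded: start a new cluster
        else:
            clusters[-1].append(d)  # still within the herd
    return clusters

# Hypothetical announcement dates: three recalls cluster together, one stands alone
recalls = [date(2013, 1, 1), date(2013, 1, 5), date(2013, 1, 10), date(2013, 2, 15)]
for cluster in cluster_recalls(recalls):
    leader, followers = cluster[0], cluster[1:]
    print(leader, "led", len(followers), "following recall(s)")
```

Splitting recalls into leaders and followers this way is what lets the researchers compare the stock market penalty each group faces.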
Researchers found as much as a 67 percent difference in stock market penalty between leading recalls, which initiate the cluster, and following recalls, which hide in the herd behind them and experience a lower penalty. This indicates a "meaningful financial incentive for auto firms to cluster following recalls behind a leading recall announcement," researchers said. "This stock market penalty difference dissipates over time within a cluster. Additionally, across clusters, the stock market penalty faced by the leading recall amplifies as the time since the last cluster increases."

The authors also found that firms with the highest quality reputation, in particular Toyota, triggered the most recall followers.

"Even though Toyota announces some of the fewest recalls, when they do announce a recall, 31 percent of their recalls trigger a cluster and lead to many other following recalls," Ball said. "This number is between 5 and 9 percent for all other firms. This means that firms are likely to hide in the herd when the leading recall is announced by a firm with a stellar quality reputation such as Toyota.

"A key recommendation of the study is for the National Highway Traffic Safety Administration (NHTSA) to require auto firms to report the specific defect awareness date for each recall, and to make this defect awareness date a searchable and publicly available data field in the auto recall dataset NHTSA provides online," Ball added. "This defect awareness date is required and made available by other federal regulators that oversee recalls in the U.S., such as the Food and Drug Administration. Making this defect awareness date a transparent, searchable and publicly available data field may discourage firms from hiding in the herd and prompt them to make more timely and transparent recall decisions."
Co-authors of the study were Ujjal Mukherjee, assistant professor of business administration at the Gies College of Business at the University of Illinois who was the lead author; Kaitlin Wowak, assistant professor of IT, analytics, and operations at the Mendoza College of Business at the University of Notre Dame; Karthik Natarajan, assistant professor of supply chain and operations at the Carlson School of Management at the University of Minnesota; and Jason Miller, associate professor of supply chain management at the Broad College of Business at Michigan State University.

Lockdown teleworking impacts productivity of women more than men
When the COVID-19 pandemic led countries all over the world to lock down their economies in early 2020, there was an unprecedented global shift to teleworking in white-collar sectors. A trend that had been gathering traction was suddenly and exponentially accelerated, and many of the world’s largest corporations, Google and Facebook among them, have announced plans allowing employees to work from home well into 2021 or indefinitely.

Remote working not only appears to work, but it appears to have a number of advantages – savings in office maintenance costs and time spent commuting, not to mention enabling organizations to safeguard productivity when there’s a major shock or crisis. But is it all good news? Or good news for all?

A new paper by Ruomeng Cui, assistant professor of information systems and operations management at Emory’s Goizueta Business School, reveals an important drop in the productivity of female academics around the world in the wake of the COVID-19 lockdowns. In the ten weeks following the initial lockdown in the United States, their productivity fell by a stunning 13.9 percent relative to that of male colleagues. And it’s likely due to the disproportionate burden of responsibility for household needs and childcare that persistently falls on women, Cui said.

“We know that gender inequality persists both in the workplace and at home, and we were curious to see how the lockdown scenario would attenuate or exacerbate the situation for women,” Cui said. Anecdotal evidence from her own field – academia – showed that in the weeks following the stay-at-home mandate in March, there was an upswing of around 20 to 30 percent in papers submitted to journals. The overwhelming majority of these, however, were authored by men. Intrigued, Cui teamed up with Goizueta doctoral student Hao Ding and Feng Zhu from Harvard Business School to conduct a systematic study of female academics’ productivity and output during this period.
“We knew that the lockdown had disrupted life for everyone, including academics. With schools and kindergartens closed and people taking care of work and household obligations at home, we intuited that women would be affected more than men, as they are disproportionately burdened with domestic and childcare duties,” Cui said. For female academics this would theoretically be particularly acute, as the critical thinking that goes into research calls for quiet, interruption-free environments.

To put this to the test, Cui and her co-authors created a large data set covering all the new social science research papers produced by men and women across 18 disciplines and submitted to SSRN, a research repository, from December 2018 to May 2019 and from December 2019 to May 2020. From this set, they extracted titles, authors’ names, affiliations and addresses to identify the authors’ countries and institutions, and used faculty pages to distinguish between men and women. In total they collected just under 43,000 papers written by more than 76,000 authors in 25 countries.

Looking at the data, Cui and her colleagues were able to compute the total number of papers produced by male and female academics each week and then compare the productivity of both before and after the start of the lockdown. Prior to the pandemic, the 2019 period showed no significant changes in productivity for either gender. But in the 10 weeks following the shock of lockdown, a clear gap emerges between men and women, with female academics’ productivity falling by just under 14 percent in comparison to their male colleagues.

Interestingly, the effect was more pronounced in top-ranked research universities. This is likely because top schools require faculty to publish research as the primary requisite for promotion, so men would be motivated to continue authoring papers both before and after the lockdown.
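The before-and-after comparison the team describes can be illustrated with a back-of-the-envelope calculation. The sketch below uses made-up weekly paper counts and a simple ratio-of-changes comparison; the paper’s actual econometric specification is more involved:

```python
def relative_productivity_change(pre_female, post_female, pre_male, post_male):
    """Compare how female output changed relative to male output.

    Each argument is a list of weekly paper counts. Returns the change in
    female output relative to the change in male output (e.g., -0.14 means
    female productivity fell about 14 percent relative to male colleagues).
    """
    avg = lambda weeks: sum(weeks) / len(weeks)
    female_change = avg(post_female) / avg(pre_female)
    male_change = avg(post_male) / avg(pre_male)
    return female_change / male_change - 1.0

# Made-up weekly counts: female output drops after lockdown, male output holds steady
pre_f, post_f = [100] * 10, [86] * 10
pre_m, post_m = [100] * 10, [100] * 10
print(relative_productivity_change(pre_f, post_f, pre_m, post_m))  # roughly -0.14
```

Normalizing against the male baseline is what separates a lockdown-wide slowdown, which hits everyone, from the gender-specific gap the study documents.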
These findings lend solid, empirical clout to the notion that women do take a hit to productivity when care and work time are reorganized, Cui noted.

“We see clearly that women are producing less work as a consequence of working from home. In the field of academia, that has huge implications, as achieving a permanent position, or tenure, is generally linked to your research output,” she said. “So, there is a serious fairness issue there. If women are producing less because the burden of household responsibility is greater for them than for men, then you’re likely to see fewer female academics get tenure through no fault of their own.”

Indeed, one of the other findings of the study shows that while productivity fell, the quality of female-authored research – measured by downloads and citations – did not.

Then there’s the issue of teleworking and gender. With a significant proportion of the world’s white-collar organizations still working from home and unlikely to head back to the office any time soon – and as many schools and childcare facilities remain closed due to the pandemic – Cui is concerned that productivity as a measure of value and a marker of success might mean the odds are further stacked against women. And not just in academia.

“We looked at universities in particular, but our findings can really be externalized to any other industry because the underlying issues here are universal. So, with remote working becoming normalized, I think there’s a real onus on organizations of every type to think about how to mitigate these unintended consequences,” she said. “There needs to be more thought about how we measure value or potential of employees.”

Cui calls for organizations and institutions to consider these factors when they evaluate male and female workers, both in the present context and looking to the future.
Among the proactive moves they might consider are training programs for male and female employees that explore fairness and encourage a more even distribution of household and childcare responsibilities. “There’s nothing to be gained in prioritizing productivity as a tool for evaluation and just giving women more time, say, to produce as much,” Cui warned. “You’re just left with the same scenario of women doing more than their fair share. Solving this issue is really much more about being aware of it, getting educated about it, and changing your mindset.” If you are a journalist looking to cover this research or speak with Professor Cui about telework and productivity, simply click on her icon to arrange an interview today.

The Alexa Effect: How the internet of things (IoT) is increasing retail sales
Imagine this scenario. You’re out of coffee, but with the click of a button or a simple voice command, you reorder a two-month supply that will arrive the same day. And that almond milk you like? Imagine your fridge already knew you were running low and independently sent the order to restock before you ran out. The stuff of science fiction until only recently, internet of things (IoT) technology is beginning to change the way we live and work. Simply put, IoT is a system of interrelated devices, including gadgets, machines, wearables, and other connected objects, that can send and receive data over a network without human intervention. As a technology, IoT is novel, and it’s poised to reconfigure a range of sectors and industries, among them the world of retail. Amazon is a leader in the consumer-facing space with an ecosystem of products like Alexa, Fire TV, and the now-defunct Dash Button. Meanwhile, tech-savvy retailers are using IoT to facilitate operations. Smart shelves in stores can detect the status of perishable goods or inventory requirements; radio frequency identification (RFID) sensors can actively track the progress of produce through the supply chain. Retailers can even use IoT to send customers personalized digital coupons when they walk into the store. As IoT continues to gain traction around the globe, the potential for efficiency-boosting innovation in retail is clear. Less clear, however, is its actual impact on consumer choices and behaviors. Sure, IoT can save time and mental effort, but how does that translate into real-world business outcomes? This is the question that underscores new research by Vilma Todri and Panagiotis Adamopoulos, both assistant professors of information systems and operations management at Emory’s Goizueta Business School.
They were keen to understand whether consumer behavior changes significantly under this new technology as it continues its rollout across the world. Specifically, they wanted to know if IoT technology actually increases demand for products. And it turns out that it does. “IoT technology in retail is really in its infancy, so understanding its impact on users and business is key,” Adamopoulos said. “We wanted to shed light on these dynamics at this early point to spark interest and generate more debate around how retailers can leverage this technology.” Together with Stern’s Anindya Ghose, he and Todri put together a large data set with information about sales of certain products in countries with existing IoT retail markets and in others where the technology had not yet been introduced. “We needed to take into account these sorts of variables to really understand the effect,” Todri said. “So, we had our control group of non-IoT retail markets, and we were able to compare sales data for the same products in countries where the technology has been adopted.” The researchers also controlled for time trends, looking at the impact on sales before and after IoT adoption. “Looking at the data over time and pinpointing the exact moment when a product was made available for sale via IoT sales channels across different countries and at different moments, we were able to infer the effect of the technology on product sales,” Todri said. In total, they looked at sales for the same or similar products in six countries between 2015 and 2017. They also compared sales across different retailers. “By analyzing the same sales information for different products in different markets using different channels across the world, we can see differences in the data that can only be attributable to this new technological feature,” Adamopoulos said. And the differences are significant.
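The control-group logic Todri describes can be read as measuring excess sales growth in markets where an IoT channel launched, over and above the growth seen in non-IoT markets during the same period. A toy sketch under that reading; the market names and all numbers below are hypothetical, not the authors' data set:

```python
# Average monthly sales for one product, (before, after) the IoT channel's
# launch date. Control markets never got the channel; their growth captures
# the underlying time trend.
sales = {
    "iot_market_A": (1000, 1180),
    "iot_market_B": (800, 930),
    "control_X":    (1000, 1030),
    "control_Y":    (900, 920),
}

def growth(before, after):
    """Fractional sales growth from the pre-period to the post-period."""
    return (after - before) / before

iot_growth = [growth(*sales[m]) for m in ("iot_market_A", "iot_market_B")]
ctrl_growth = [growth(*sales[m]) for m in ("control_X", "control_Y")]

avg = lambda xs: sum(xs) / len(xs)

# The effect attributed to IoT is the average growth in adopting markets
# minus the average growth in the control markets.
iot_effect = avg(iot_growth) - avg(ctrl_growth)
```

Subtracting the control markets' growth is what lets a positive `iot_effect` be attributed to the new channel rather than to seasonality or a general upward trend in demand.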
The concept is fascinating, and a complete article about this topic is attached. If you are a journalist or simply looking to learn more about IoT, our experts can help. Vilma Todri and Panagiotis Adamopoulos are both assistant professors of information systems and operations management at Emory’s Goizueta Business School. Both experts are available to speak with media; simply click on either expert’s icon to arrange an interview today.

Is hospital advertising actually good for our health?
Hospitals and healthcare organizations in the U.S. spend $1.5 billion on advertising every year. It’s a topic that provokes lively debate and a certain amount of controversy. Medical bodies, policy makers, and scholars alike question the ethics and efficacy of using (constrained) budgets to promote hospitals to patients. Diwas KC, professor of information systems & operations management at Emory University’s Goizueta Business School, and Tongil Kim, an assistant professor of management at the Naveen Jindal School of Management in Texas, conducted a large-scale study of hospitals and patients in the state of Massachusetts to better understand the impact of hospital advertising. What they found is striking: not only does television advertising work, it significantly drives demand, attracting patients living far from the hospital and beyond its usual service area. And that’s not all. KC and Kim discovered that limiting hospital advertising, or imposing an outright ban as some groups have called for, might actually have serious negative effects on patient healthcare. “There has been a lot of discussion about banning advertising over recent years because of uncertainties around wasting money and resources,” KC said. In the paper “Impact of hospital advertising on patient demand and outcomes,” KC and Kim show that there is a correlation between the amount spent on TV advertising and the quality of the hospital in question. Healthcare facilities that invest more in advertising tend to be “better” hospitals, he adds; they offer higher-caliber care and services and, as such, they see much lower patient readmission rates, a key quality metric in healthcare. To get to these insights, KC and Kim looked at more than 220,000 individual patient visits to hospitals in the state of Massachusetts over a 24-month period. Among the data they collected were hospital type, location, and dollars spent on advertising.
Patients were documented in terms of medical conditions, insurance, zip codes (to determine residence), and median household income. The researchers were then able to contrast the hospitals that invested in television advertising with those that did not. With the former, they uncovered a significant uptick in patient visits, with people coming from much further afield. This was particularly true of wealthier patients. Then there’s the question of patient outcomes. Here the data showed unequivocally that it’s the high-quality, low-readmission hospitals that advertise more, something that KC attributes to the natural tendency to get “more bang for the advertising buck when the quality of your product or service is better.” As for banning advertising, this would negatively impact these hospitals, he argues, limiting their ability to attract patients. It could also lead to an increase in population-level readmission rates. “Patient readmission rates are one of the key metrics, along with mortality rates, that tell us how well a healthcare facility is working,” said KC. “If a patient gets discharged but has to come back to the hospital in, say, 30 days, unless it’s a chronic condition or ongoing treatment, it’s a good indication that the patient didn’t get the level of care they should have the first time.” Indeed, “when we looked at all of the data, we found that the hospitals with the lowest readmission rates were those that advertised more,” he said. KC finds that a blanket ban on hospital advertising could lead to an extra 1.2 readmissions for every 100 patients discharged. It’s a significant and “surprising” finding, and one that should inform the debate around healthcare advertising spend in the U.S. “There’s also the idea that this is a zero-sum game, because if a patient doesn’t go to hospital A, they’re just going to go to hospital B, the one that advertises more, splitting the pie in different ways but not increasing that pie,” KC said.
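The readmission metric KC describes is simple arithmetic: readmissions within a 30-day window divided by discharges. A small illustration of how the study's estimated effect of a ban, roughly 1.2 extra readmissions per 100 discharges, scales across a patient population; the baseline rate and patient counts here are invented for the example:

```python
def readmission_rate(readmitted, discharged):
    """Share of discharged patients who return within the 30-day window."""
    return readmitted / discharged

# Hypothetical baseline: 15,000 readmissions out of 100,000 discharges.
baseline = readmission_rate(readmitted=15_000, discharged=100_000)

# The study's estimate: a blanket advertising ban could add about
# 1.2 readmissions per 100 discharges on top of the baseline rate.
ban_scenario = baseline + 1.2 / 100

# Scaled over the same 100,000 discharges, that is roughly 1,200 more
# patients returning to the hospital.
extra_readmissions = (ban_scenario - baseline) * 100_000
```

The point of the sketch is scale: a per-100 effect that sounds small translates into over a thousand additional readmissions for every hundred thousand discharges at the population level.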
“What our study finds is that yes, advertising does draw patients away from one facility and towards another, but that the latter generally delivers better patient outcomes,” he said. “So, there is a social welfare benefit right there that suggests that you should not ban hospital advertising. There are real health benefits in allowing [advertising] to happen.” If you are a journalist looking to cover this topic, let our experts help. Diwas KC is a professor of information systems & operations management at Emory University’s Goizueta Business School and an expert in data analytics, operations, and healthcare. If you are interested in arranging an interview, simply click on his icon to set up a time today.



