

Why online recommendations make it easier to hit “buy”

When it is time to buy something online, perhaps a coffee maker, you might head to Amazon and browse items for sale. One particular model might spark interest. The product page may contain recommendations for other goods: complementary products such as coffee filters, or competing coffee maker brands offering different features and prices. E-commerce websites commonly use product recommendations — called co-purchase and co-view recommendations — to keep users locked into the sales funnel and increase customer retention.

But what impact do these types of recommendations actually have on consumers? How do they influence one’s willingness to pay for the original product searched? The level of influence depends on how close a consumer is to making that purchase, says Jesse Bockstedt, associate professor of information systems & operations management at Emory’s Goizueta Business School. The type of recommendation the consumer sees plays a role as well.

To shed empirical light on this, Bockstedt teamed with Mingyue Zhang of Shanghai International Studies University. “We were curious. We knew that recommendation systems are integral to how consumers discover products online – a good 35 percent of Amazon sales can be attributed to recommendations, for instance,” Bockstedt says. “But we knew a lot less about how recommendations change consumer behavior in relation to a focal product.”

Specifically, the researchers were interested in the effect of complementary versus substitutable products, and in what impact the prices of these recommended products had on consumer behavior. They also wanted to know whether these effects were amplified or dampened depending on whether consumers were in the exploratory phase of the buying process or ready to make the purchase.

To unpack the dynamics at play, Bockstedt and Zhang ran two experiments that simulated the online purchasing experience. The researchers had volunteers go through the process of evaluating different products and then report back on how much they were willing to pay for each. “We asked volunteers to look at a product page for a computer mouse, and we randomly assigned different recommendations to that page – some that were for other mice, and others that were for goods and products that would complement the original mouse. Going through the experiment, we also manipulated the price that volunteers saw on different pages, both for the recommended substitute and complementary products,” he says. “Finally, we looked at the effect of timing and the sales funnel. In one case we had volunteers look for a highly specific mouse and recommended a particular product page to them. To simulate the more exploratory phase, we gave them many pages and asked them to click on the one they found most interesting.”

In total, Bockstedt and Zhang put more than 200 volunteers through this simulated purchasing experience and recorded their willingness to pay the advertised price for the focal product on a scale of 0 to 100, depending on what they had seen and at what point in the sales funnel they had seen the recommendations.

If you are looking to learn more about this research and the results, Emory has published a full article for reading and review. If you are a journalist looking to cover this topic or if you are simply interested in learning more, then let us help.
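To make the design concrete, here is a minimal sketch of how results from a study like the one described above might be summarized, assuming a simplified 2×2×2 design (recommendation type, recommended price, funnel stage) and entirely hypothetical willingness-to-pay numbers; the condition names, effects, and scale handling are illustrative assumptions, not the authors' actual study or findings.

```python
# Illustrative only: hypothetical data for a simplified version of the
# recommendation experiment described above (not the authors' actual study).
import random
from statistics import mean

random.seed(0)

conditions = [
    (rec_type, rec_price, funnel_stage)
    for rec_type in ("substitute", "complement")
    for rec_price in ("low", "high")
    for funnel_stage in ("exploratory", "ready_to_buy")
]

def simulate_wtp(rec_type, rec_price, funnel_stage):
    """Return a hypothetical willingness-to-pay score on a 0-100 scale."""
    base = 55.0
    # Toy assumptions: a cheap substitute pulls WTP for the focal product down,
    # and shoppers further along the funnel are less swayed by recommendations.
    if rec_type == "substitute" and rec_price == "low":
        base -= 10.0
    if funnel_stage == "ready_to_buy":
        base += 5.0
    return max(0.0, min(100.0, base + random.gauss(0, 8)))

# Spread ~200 simulated participants across the eight conditions.
results = {c: [] for c in conditions}
for i in range(200):
    c = conditions[i % len(conditions)]
    results[c].append(simulate_wtp(*c))

for c, scores in results.items():
    print(f"{c}: mean WTP = {mean(scores):.1f} (n={len(scores)})")
```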
Jesse Bockstedt is an associate professor of information systems and operations management at Emory’s Goizueta Business School. He is available to speak with media; simply click on his icon now to book an interview today.


Fewer cars, but more fatalities - What's happening on America's pandemic roadways

Fewer vehicles are traveling on America's roadways during the ongoing coronavirus pandemic, but the number of fatal car crashes in 2020 rose sharply compared to the same period in 2019. UConn expert Eric Jackson, a research professor and director of the Connecticut Transportation Safety Research Center, and behavioral research assistant Marisa Auguste examined the increase in a recent essay published by The Conversation:

Curious about traffic crashes during the pandemic, we decided to use our skills as a social scientist and a research engineer who study vehicle crash data to see what we could learn about Connecticut’s traffic deaths when the stay-at-home orders first went into place last March. A partnership between the Department of Transportation, local hospitals and the University of Connecticut discovered what many people intuitively knew: Traffic volume and multivehicle crashes fell significantly during the stay-at-home order. Statewide, daily vehicle traffic fell by 43% during the stay-at-home order compared to earlier in the year, while mean daily counts of multivehicle crashes decreased from 209 before the stay-at-home order to 80 during lockdown.

What was unexpected, however, was the significant increase in single-vehicle crashes, especially fatal ones. During the stay-at-home period, the incidence rate of fatal single-vehicle crashes increased 4.1 times, while the rate of total single-vehicle crashes was also up significantly. Data about all crash types in the state, whether single- or multivehicle, tell a similar story. Although preliminary, police reports have placed the 2020 year-end total for traffic deaths at 308, a 24% increase from 2019.

The researchers said it is unclear why this counterintuitive increase in road fatalities has occurred, but their advice to drivers is simple: "Check your speed" and "don't drive angry."

If you are a journalist looking to know more about this topic, let us help. Simply click on Eric Jackson’s icon to arrange an interview today.


What does Meghan Markle's explosive interview say about how the Royal Family and British press treat women of color?

It was hyped, promoted and delivered a ratings bonanza for CBS. Oprah Winfrey’s exclusive, no-holds-barred interview with Meghan Markle and Prince Harry left many aghast at Meghan's revelations of mistreatment, constant abuse in the media and even racism when it came to the status, security and skin color of her then-unborn son. Even the day after, Winfrey, praised for her masterful interviewing skills, is still revealing excerpts that shine a brighter light on the situation.

The Duchess of Sussex claimed the press team that would defend the royal family "when they know something's not true" failed to come to their defense. Winfrey asked Prince Harry if he hoped his family would ever acknowledge that the differences in treatment were over race. "It would make a huge difference," he said. "Like I said, there's a lot of people that have seen it for what it was… like it's talked about across the world." The people who do not want to see it, Harry claimed, "choose not to see it." (March 8 – CBS News)

The interview has the public discussing racism and misogyny and how these are playing out in Royal Family dynamics and the British press. And if you are a journalist looking to explore this issue, then let our experts help.

Dr. Adria Goldman’s research explores the intersectionality of race, gender and culture and its connection to communication and media. She enjoys examining media’s impact on perceptions, construction of identity, social relationships and belief systems. Dr. Goldman is available to speak with media regarding Oprah Winfrey's interview with Meghan Markle and Prince Harry and what it means for race, royalty and the couple and the Royal Family moving forward. If you are looking to arrange an interview, simply click on her icon now to book a time today.


Personality matters: the tie between language and how well your video content performs

Why does one piece of online video content perform better than another? Does it come down to its relevance, production values, and posting and sharing strategies? Or are other dynamics at play? There are plenty of theories about what, when and how to post if you want to drive the performance of your video. But new research by Goizueta’s Rajiv Garg, associate professor of information systems and operations management, sheds empirical and highly nuanced new light on the type of language to inject into your content if you really want to accelerate consumption. And it turns out that a lot of it depends on personality.

Together with Haris Krijestorac of HEC Paris and McCombs’ Maytal Saar-Tsechansky, Garg has run a large-scale study, analyzing the words spoken in speech-heavy videos posted to YouTube and then organizing those words by personality – how they “score” in terms of the so-called Big Five personality traits.

“The Big Five is a system or taxonomy that has been used by psychologists and others since the 1980s to organize different types of personality traits. These traits are extroversion, agreeableness, openness, conscientiousness, and neuroticism,” says Garg. “In previous research into video content performance, we’ve looked into mechanisms such as posting and re-posting on different channels and how they impact the virality of one video over another. But we were intrigued by the role of language and how different words map to these personality traits, which in turn might have an impact on user emotion or response.”

Emory has published a comprehensive article with more details on the Big Five, and it is available for reading here. If you are a journalist looking to cover this topic, then let our experts help with your story. Rajiv Garg from Emory’s Goizueta Business School is available to speak with media – simply click on his icon now to arrange an interview today.
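To illustrate the general idea of scoring spoken language against the Big Five, here is a minimal sketch using tiny hypothetical trait lexicons and a simple word-share scorer; the word lists, scoring rule, and sample transcript are illustrative assumptions, not the researchers' actual lexicons or model.

```python
# Illustrative only: toy trait lexicons and a simple word-count scorer.
# The study's actual lexicons and scoring method are not reproduced here.
from collections import Counter

TRAIT_LEXICONS = {
    "extroversion":      {"party", "fun", "exciting", "everyone", "wow"},
    "agreeableness":     {"thanks", "together", "help", "kind", "please"},
    "openness":          {"imagine", "idea", "curious", "explore", "new"},
    "conscientiousness": {"plan", "careful", "detail", "schedule", "finish"},
    "neuroticism":       {"worried", "afraid", "stress", "problem", "risk"},
}

def score_transcript(transcript: str) -> dict:
    """Return the share of words in a transcript that match each trait lexicon."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(words)
    total = max(1, len(words))
    return {
        trait: sum(counts[w] for w in lexicon) / total
        for trait, lexicon in TRAIT_LEXICONS.items()
    }

sample = "Imagine something new and exciting, then explore this idea together!"
print(score_transcript(sample))
```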


Online ratings systems shouldn’t just be a numbers game

When you’re browsing the internet for something to buy, watch, listen to, or rent, chances are that you will scan online recommendations before you make your purchase. It makes sense. With an overabundance of options in front of you, it can be difficult to know exactly which movie or garment or holiday gift is the best fit. Personalized recommendation systems help users navigate the often-confusing labyrinth of online content. They take a lot of the legwork out of decision-making. And they are an increasingly commonplace part of our online behavior.

All of which is in your best interest as a consumer, right? Yes and no, says Jesse Bockstedt, associate professor of information systems and operations management at Emory’s Goizueta Business School. Bockstedt has produced a body of research in recent years that reveals a number of issues with recommendation systems that should be on the radar of organizations and users alike.

While user ratings, often shown as stars on a five- or ten-point scale, can help you decide whether or not to go ahead and make a selection, online recommendations can also create a bias toward a product or experience that might have little or nothing to do with your actual preferences, Bockstedt says. Simply put, you’re more likely to watch, listen to, or buy something because it’s been recommended. And when it comes to rating the thing you’ve just watched, listened to, or bought yourself, your own rating might also be heavily influenced by the way it was recommended to you in the first place.

“Our research has shown that when a consumer is presented with a product recommendation that has a predicted preference rating—for example, we think you’ll like this movie or it has four and a half out of five stars—this information creates a bias in their preferences,” Bockstedt says. “The user will report liking the item more after they consume it if the system’s initial recommendation was high, and they say they like it less post-consumption if the system’s recommendation was low. This holds even if the system recommendations are completely made up and random. So the information presented to the user in the recommendation creates a bias in how they perceive the item even after they’ve actually consumed or used it.”

This in turn creates a feedback loop: the ratings users feed back into the system can reflect authentic preference, but that preference is very likely to be contaminated by bias. And that’s a problem, Bockstedt says.

“Once you have error baked into your recommendation system via this biased feedback loop, it’s going to reproduce and reproduce, so that as an organization you’re pushing your customers towards certain types of products or content and not others—albeit unintentionally,” Bockstedt explains. “And for users or consumers, it’s also problematic in the sense that you’re taking the recommendations at face value, trusting them to be accurate while in fact they may not be. So there’s a trust issue right there.”

Online recommendation systems can also potentially open the door to less than scrupulous behaviors, Bockstedt adds. Because ratings can anchor user preferences and choices to one product over another, who’s to say organizations might not leverage the effect to promote more expensive options to their users? In other words, systems have the potential to be manipulated such that customers pay more—and pay more for something that they may not in fact have chosen in the first place.
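A minimal sketch of the anchoring effect behind this feedback loop, under the simplifying assumption that a user's reported rating is pulled some fraction of the way toward whatever rating the system displayed; the anchoring weight, rating scale, and noise level are hypothetical and are not taken from Bockstedt's experiments.

```python
# Illustrative only: a toy simulation of recommendation-induced rating bias.
# Assumption: the reported rating is anchored partway toward the displayed rating;
# these biased ratings would then feed back into the recommender's training data.
import random

random.seed(1)

def reported_rating(true_preference, displayed_rating, anchoring=0.3):
    """Post-consumption rating (1-5 stars) pulled toward the displayed rating."""
    rating = (1 - anchoring) * true_preference + anchoring * displayed_rating
    return min(5.0, max(1.0, rating + random.gauss(0, 0.2)))

true_preference = 3.0  # how much the user "really" likes the item

for displayed in (2.0, 4.5):  # a low vs. an inflated system recommendation
    ratings = [reported_rating(true_preference, displayed) for _ in range(1000)]
    print(f"displayed {displayed:.1f} -> mean reported {sum(ratings)/len(ratings):.2f}")
```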
Addressing recommendation system-induced bias is imperative, Bockstedt says, because these systems are essentially here to stay. So how do you go about attenuating the effect? His latest paper sheds new and critical light on this. Together with Gediminas Adomavicius and Shawn P. Curley of the University of Minnesota and Indiana University’s Jingjing Zhang, Bockstedt ran a series of lab experiments to determine whether user bias could be eliminated or mitigated by showing users different types of recommendations or rating systems. Specifically, they wanted to see if different formats or interface displays could diminish the bias effect on users. And what they found is highly significant.

Emory has published a full article on this topic, and it is available for reading here. If you are a journalist looking to cover this topic or if you are simply interested in learning more, then let us help. Jesse Bockstedt is an associate professor of information systems and operations management at Emory’s Goizueta Business School. He is available to speak with media; simply click on his icon now to book an interview today.


Survival analysis: Forecasting lifespans of patients and products

How long will you live? Should you spring for that AppleCare+ warranty for your iPhone? When will your buddy pay you back for that lunch? For centuries, soothsayers have striven to understand the lifespan of things – be it patient longevity, product lifecycles, or even time to loan default. Nowadays, scientists have turned away from reading tea leaves and toward survival analysis – a complex data science method for predicting not only whether an event will happen (the death of a patient, the failure of a product or machine, default on a payment, and so on) but when this event is likely to occur.

But there is a problem. Until now, the tools of survival analysis have only been applicable in certain settings. This is due to the inherent heterogeneity of what is being analyzed: differences in patient lifestyles, demographics, product usage patterns, and so on. New research by Goizueta Business School’s Donald Lee, associate professor of information systems and operations management and of biostatistics and bioinformatics, has yielded a new tool that greatly extends survival analysis to broader use cases.

“Historically, scientists have used classic survival analysis tools to predict the lifespan of different things in different fields, from products to patients,” Lee said. “Since the 1950s, the Kaplan-Meier estimator has been the benchmark for analyzing lifetime data, particularly in clinical trials. The next breakthrough came in the 1970s when the Cox proportional hazards model was introduced, which allows researchers to incorporate variables that can affect the predictability of things like patient mortality.”

The problem with the existing survival analysis tools, Lee said, is that they make certain assumptions that can skew the predictions if those assumptions are not met. “There are very few existing tools that can incorporate variables without imposing assumptions on how they affect survival, let alone when there are a lot of variables that can also change over time. For example, two iPhones will have different lifespans depending on the temperature at which they are stored, amongst many other factors. But it’s unlikely that storing your phone at 30 degrees will halve its lifespan compared to storing it at 60 degrees. This sort of linear relationship is commonly assumed by existing tools.”

Lee’s team developed a new survival methodology based on gradient boosting: a machine learning technique that combines decision trees to yield predictions. The method, Lee said, is totally assumption-free (or nonparametric, in technical parlance) and can deal with a large number of variables that change continuously over time, making it significantly more general than existing methods. Nothing like it has been seen until now, he noted.

“Calculating the survival rate of anything is super complex because of the variables. Say you want to create an app for a smart watch that monitors the wearer’s vitals and uses this information to create a real-time warning indicator for stroke. Doing this accurately is difficult for two reasons,” Lee explained. “First, a large number of variables may be relevant to stroke risk, and the variables can interact in ways that break the assumptions central to existing survival analysis methods. And second, variables like blood pressure vary over time, and it is the recent measurements that are most informative. This introduces an additional time dimension that further complicates things.”

The software implementation of Lee’s method, BoXHED, overcomes both issues and allows scientists to develop real-time predictive models for conditions like stroke. The trained model can then be ported to a watch app to tell its wearer if and when they’re likely to have a stroke, a process known as inferencing in machine learning lingo. The implications, Lee said, are huge.

“BoXHED now opens the door for modern applications of survival analysis. In previous research, I have looked at the design of early warning mortality indicators for patients with advanced cancer and also for patients in the ICU. These use other methods to make predictions at fixed points in time, but now they can be transformed into real-time warning indicators using BoXHED.”

He cited the case of end-stage cancer patients, who are often better served by hospice care than by aggressive therapy. “Accurate predictions of survival are absolutely critical for care planning. In previous analyses, we have seen that using existing predictive models to inform end-of-life care planning can potentially avert $1.9 million in medical costs and 1,600 days of unnecessary inpatient care per 1,000 patient visits in the United States. BoXHED is likely to lead to even better results.”

Lee’s research paper is forthcoming in the Annals of Statistics. He has also created an open-source software implementation of BoXHED, which can radically improve the accuracy of survival analysis across a breadth of applications. The paper describing BoXHED was published at the International Conference on Machine Learning, and the latest version of the BoXHED software can be found online. If you are a journalist or are looking to speak with Donald Lee, simply click on his icon now to arrange an interview or appointment today.
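To make the classic tooling mentioned above concrete, here is a minimal sketch of the Kaplan-Meier estimator implemented directly in Python on made-up lifetime data; the durations and censoring flags are hypothetical, and BoXHED itself (which is far more sophisticated) is not shown here.

```python
# Illustrative only: the classic Kaplan-Meier survival estimator on toy data.
# durations = observed lifetimes; observed = 1 if the event (failure/death)
# occurred, 0 if the observation was censored (still alive/working at last check).
durations = [5, 6, 6, 2, 4, 4, 9, 7, 3, 8]
observed  = [1, 0, 1, 1, 1, 0, 1, 1, 0, 1]

def kaplan_meier(durations, observed):
    """Return (time, S(t)) pairs: the estimated probability of surviving past t."""
    survival = 1.0
    curve = []
    for t in sorted(set(durations)):
        # number at risk just before time t, and number of events at time t
        n_at_risk = sum(1 for d in durations if d >= t)
        deaths = sum(1 for d, e in zip(durations, observed) if d == t and e == 1)
        if deaths > 0:
            survival *= 1 - deaths / n_at_risk
        curve.append((t, survival))
    return curve

for t, s in kaplan_meier(durations, observed):
    print(f"t={t}: S(t)={s:.3f}")
```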


Study of auto recalls shows carmakers delay announcements until they can 'hide in the herd'

BLOOMINGTON, Ind. - Automotive recalls are occurring at record levels, but they often seem to be announced after inexplicable delays. A research study of 48 years of auto recalls announced in the United States finds carmakers frequently wait to make their announcements until after a competitor issues a recall - even when the recalls do not involve similar defects. This suggests that recall announcements may not be triggered solely by individual firms' awareness of product quality defects or concern for the public interest, but may also be influenced by competitor recalls, a phenomenon that no prior research had investigated.

Researchers analyzed 3,117 auto recalls over a 48-year period -- from 1966 to 2013 -- using a model to investigate recall clustering, and categorized recalls as leading or following within a cluster. They found that 73 percent of recalls occurred in clusters that lasted 34 days and had 7.6 following recalls on average. On average, a cluster formed after a 16-day gap in which no recalls were announced. They found 266 such clusters over the period studied.

"The implication is that auto firms are either consciously or unconsciously delaying recall announcements until they are able to hide in the herd," said George Ball, assistant professor of operations and decision technologies and Weimer Faculty Fellow at the Indiana University Kelley School of Business. "By doing this, they experience a significantly reduced stock penalty from their recall."

Ball is co-author of the study, "Hiding in the Herd: The Product Recall Clustering Phenomenon," recently published online in Manufacturing and Service Operations Management, along with faculty at the University of Illinois, the University of Notre Dame, the University of Minnesota and Michigan State University.

Researchers found as much as a 67 percent difference in the stock market penalty between leading recalls, which initiate a cluster, and following recalls, which follow a leading recall and hide in the herd to experience a lower penalty. This indicates a "meaningful financial incentive for auto firms to cluster following recalls behind a leading recall announcement," the researchers said. "This stock market penalty difference dissipates over time within a cluster. Additionally, across clusters, the stock market penalty faced by the leading recall amplifies as the time since the last cluster increases."

The authors also found that firms with the highest quality reputation, in particular Toyota, triggered the most recall followers. "Even though Toyota announces some of the fewest recalls, when they do announce a recall, 31 percent of their recalls trigger a cluster and lead to many other following recalls," Ball said. "This number is between 5 and 9 percent for all other firms. This means that firms are likely to hide in the herd when the leading recall is announced by a firm with a stellar quality reputation such as Toyota."

"A key recommendation of the study is for the National Highway Traffic Safety Administration (NHTSA) to require auto firms to report the specific defect awareness date for each recall, and to make this defect awareness date a searchable and publicly available data field in the auto recall dataset NHTSA provides online," Ball added. "This defect awareness date is required and made available by other federal regulators that oversee recalls in the U.S., such as the Food and Drug Administration. Making this defect awareness date a transparent, searchable and publicly available data field may discourage firms from hiding in the herd and prompt them to make more timely and transparent recall decisions."

Co-authors of the study were Ujjal Mukherjee, assistant professor of business administration at the Gies College of Business at the University of Illinois, who was the lead author; Kaitlin Wowak, assistant professor of IT, analytics, and operations at the Mendoza College of Business at the University of Notre Dame; Karthik Natarajan, assistant professor of supply chain and operations at the Carlson School of Management at the University of Minnesota; and Jason Miller, associate professor of supply chain management at the Broad College of Business at Michigan State University.
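A minimal sketch of the clustering idea described above: recall announcement dates are grouped into clusters whenever they are separated by less than a 16-day quiet gap, with the first recall in each cluster labeled "leading" and the rest "following." The dates here are hypothetical, and the study's actual econometric model is considerably more involved than this gap-threshold grouping.

```python
# Illustrative only: group recall announcements into clusters separated by
# quiet gaps of at least 16 days, then label leading vs. following recalls.
from datetime import date, timedelta

GAP = timedelta(days=16)

# Hypothetical recall announcement dates (not real NHTSA data).
recalls = sorted([
    date(2013, 1, 2), date(2013, 1, 5), date(2013, 1, 9),
    date(2013, 2, 20), date(2013, 2, 28),
    date(2013, 4, 10),
])

clusters = [[recalls[0]]]
for d in recalls[1:]:
    if d - clusters[-1][-1] >= GAP:
        clusters.append([d])       # a 16+ day quiet period starts a new cluster
    else:
        clusters[-1].append(d)     # otherwise this recall follows the herd

for i, cluster in enumerate(clusters, 1):
    leading, following = cluster[0], cluster[1:]
    print(f"cluster {i}: leading={leading}, following={len(following)}, "
          f"span={(cluster[-1] - cluster[0]).days} days")
```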


Power Grid Expert Weighs in on Texas Outages And How to Build a Better System

Having run countless simulations and experiments aimed at building a more resilient power grid, Luigi Vanfretti is well acquainted with the weaknesses in the nation’s current system. This expertise was recently featured in a report about the factors that caused the massive, ongoing power outages in Texas. Frozen wellheads, gas pipes, and other factors contributed to a “perfect storm” of conditions, Vanfretti said.

Some politicians and pundits have floated the notion that the catastrophe was primarily due to frozen wind turbines, but according to Vanfretti, an associate professor of electrical, computer, and systems engineering at Rensselaer Polytechnic Institute, the problem is far more complex. Additionally, the electrical grid in Texas is unique in that it has limited connections to neighboring states, which means there are limits to how much assistance it can receive during a crisis. “It’s about the ability to route the power,” Vanfretti recently told the Times Union.

Vanfretti is an expert in power grid modeling, simulation, stability, and control. His research focuses on creating a smarter, cleaner, more reliable power grid that is capable of integrating renewable energy. Within his Analysis Laboratory for Synchrophasor and Electrical Energy Technology (ALSET) Lab, Vanfretti and his team model the power grid and run simulations in order to develop, test, and improve the smart inverters, software, and hardware that will be needed to create the smart grid of the future. You can watch him discuss his research here.

Vanfretti is available to speak about what contributed to the devastating outages in Texas, as well as the changes and research necessary to create a more resilient power system.


Eliminating The Barriers To Telehealth & Patient Retention

During the ongoing national pandemic, healthcare is in a period of rapid evolution, bringing telehealth to the forefront of patient care. Telehealth is a proven strategy for improving health outcomes, but it is gated behind socioeconomic privilege and leaves behind many of our community’s most vulnerable patients. One such disparity is the inability of many Americans to access digital health care. This silent epidemic affects lives daily.

Many patients, especially those in rural communities, face obstacles when trying to get the care they need. From access to reliable transportation and affordable child care to financial instability and a lack of culturally competent providers, there is no shortage of hurdles standing between disadvantaged populations and quality care. Well-implemented telehealth services can offer a clear path through these common barriers to care while improving health outcomes and boosting patient retention.

“We know that mobile health intervention is an effective tool for retaining patients in care, but it’s only as effective as it is accessible,” said Richard Walsh, our CEO. “It would be negligent to assume that every individual has access to the devices, internet, or knowledge necessary to engage in telemedicine.” Like other leaders in the industry, we know telehealth is a privilege, but at Continuud, we believe it should be a right. As Nathan Walsh, our CXO, said, “During a public health crisis such as this, we have to be proactive in ensuring that underserved communities have access to the care that they need in every way possible.”

Through our research and conversations with community health leaders, we have identified four common barriers to telehealth success: access to video-ready phones or tablets, access to a reliable and affordable internet connection, an understanding of how to use the device to access services, and trust in the technology being used for health services. Our solution is a platform that not only addresses these barriers but also enhances the patient experience and drives the best possible outcome of telehealth intervention.

Our platform, Access, provides 8-inch tablets with an unlimited data connection to patients. Each device ships with a secured environment and limited functionality customized by the health care provider to include the tools that patients need to access care. We have created a simple deployment and warehousing solution to make it easy for organizations to get started quickly. Our end-to-end deployment and recall services handle every aspect of the platform so organizations can remain focused on serving their patients. The platform supports patient-by-patient interface customizations, so each patient’s experience is tailored to their unique treatment plan. Device insurance and same-day replacement are built into the program to account for loss, theft, and damage, so organizations will always have access to the inventory they need to serve their clients.

At Continuud, we offer an integrated ecosystem designed from the ground up to enable health care providers to work more efficiently toward a common goal of driving positive health outcomes in their communities. Continuud is known throughout Indiana for our innovative approach to connecting high-risk populations to care and implementing strategic technology to help retain and learn from patients so providers can evolve with the needs of their patients. To learn more about our platform, click here to visit our homepage.
If you would like to schedule a demo with our team to talk about the platform in greater detail, click here.


Why customers hold the key to a company’s true valuation

When determining a fair valuation for a company—especially in anticipation of an initial public offering (IPO)—investors often rely heavily on “top down” approaches that focus primarily on traditional financial measures. But what if this approach doesn’t paint the full picture? Daniel McCarthy, assistant professor of marketing at Emory’s Goizueta Business School, is building the case that augmenting traditional data sources with customer behavior data gives investors a more accurate company valuation.

For the past several years, McCarthy and Peter Fader, professor of marketing at the Wharton School of the University of Pennsylvania, have worked to refine a customer-driven investment methodology they created. “Customer-based corporate valuation (CBCV) simply brings more focus to how individual customer behavior drives the top line,” they explained in “How to Value a Company by Analyzing Its Customers,” an article published in the Harvard Business Review (HBR) earlier this year. “This approach is driving a meaningful shift away from the common but dangerous mindset of ‘growth at all costs,’ towards revenue durability and unit economics—and bringing a much higher degree of precision, accountability, and diagnostic value to the new loyalty economy.”

Fader, McCarthy’s PhD advisor while he was at Wharton, had done some of the seminal work on forecasting customer shopping and purchasing behaviors. This helped build baseline expertise for how one could go about customer-level modeling. McCarthy recognized that this behavioral modeling could be put to good use in a financial setting, if done the right way. “There was this untapped source of intellectual property that’s been accumulating within marketing over the last 30 years,” McCarthy said. While other academics have done some conceptual work in the area, none, McCarthy noted, had done so in a way that was consistent with how financial professionals perform corporate valuation.

McCarthy and Fader merged these well-validated customer-level models with standard corporate valuation methods, then put their resulting valuation tool head-to-head with alternative approaches. They found that their CBCV model outperformed.

A full article on this subject is attached; within it, you will find key CBCV highlights such as:

- Using unit economics to more accurately predict revenue forecasts
- Gaining access to the right data
- How the CBCV model also benefits managers and customers
- Working to have publicly traded companies adopt CBCV

McCarthy’s work on the CBCV methodology has earned him a number of awards, including the MSI Alden G. Clayton, American Statistical Association, INFORMS, and Shankar-Spiegel dissertation awards. If you are a journalist covering this topic or if you want to learn more about this work or customer-based corporate valuation, then let our experts help. Daniel McCarthy is an assistant professor of marketing at Emory University's Goizueta Business School, where his research specialty is the application of leading-edge statistical methodology to contemporary empirical marketing problems. If you are looking to contact Daniel, simply click on his icon now to arrange an interview today.
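To illustrate the "bottom up" intuition behind CBCV, here is a minimal sketch that projects the discounted contribution of an existing customer cohort from retention, per-customer spend, and margin; the retention rate, spend, margin, discount rate, and horizon are all hypothetical placeholders, and the published CBCV methodology is considerably richer than this toy calculation.

```python
# Illustrative only: a toy customer-based revenue and value projection.
# Assumed inputs (hypothetical): cohort size, annual retention, spend per
# customer, contribution margin, and a discount rate.
def customer_based_value(customers=100_000, retention=0.80, annual_spend=120.0,
                         margin=0.30, discount_rate=0.10, years=10):
    """Discounted contribution expected from an existing customer cohort."""
    value = 0.0
    active = customers
    for year in range(1, years + 1):
        revenue = active * annual_spend
        contribution = revenue * margin
        value += contribution / (1 + discount_rate) ** year
        active *= retention  # customers churn each year
    return value

print(f"Cohort value: ${customer_based_value():,.0f}")
```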
