
Expert Perspective: Mitigating Bias in AI: Sharing the Burden of Bias When it Counts Most

Whether we're getting directions from Google Maps, personalized job recommendations from LinkedIn, or nudges from a bank about new products based on our data-rich profiles, we have grown accustomed to having artificial intelligence (AI) systems in our lives. But are AI systems fair? The short answer: not completely. Further complicating the matter is the fact that today's AI systems are far from transparent. Think about it: the uncomfortable truth is that generative AI tools like ChatGPT—based on sophisticated architectures such as deep learning and large language models—are fed vast amounts of training data that interact in unpredictable ways. And while the principles of how these methods operate are well understood (at least by those who created them), ChatGPT's decisions are likened to an airplane's black box: they are not easy to penetrate. So, how can we determine whether "black box AI" is fair?

Some dedicated data scientists are working around the clock to tackle this big issue. One of them is Gareth James, whose day job is serving as Dean of Goizueta Business School. In a recent paper titled "A Burden Shared is a Burden Halved: A Fairness-Adjusted Approach to Classification," Dean James—along with coauthors Bradley Rava, Wenguang Sun, and Xin Tong—has proposed a new framework to help ensure AI decision-making is as fair as possible in high-stakes decisions where certain individuals—for example, members of racial minority groups and other protected groups—may be more prone to AI bias, even without our realizing it. In other words, their new approach makes fairness adjustments precisely where some people would otherwise be getting the short shrift from AI.

Gareth James became the John H. Harland Dean of Goizueta Business School in July 2022. Renowned for his visionary leadership, statistical mastery, and commitment to the future of business education, James brings vast and versatile experience to the role. His collaborative nature and data-driven scholarship offer fresh energy and focus aimed at furthering Goizueta's mission: to prepare principled leaders to have a positive influence on business and society.

Unpacking Bias in High-Stakes Scenarios

Dean James and his coauthors set their sights on high-stakes decisions in their work. What counts as high stakes? Examples include hospitals' medical diagnoses, banks' credit-worthiness assessments, and state justice systems' bail and sentencing decisions. On the one hand, these areas are ripe for AI intervention, with ample data available. On the other hand, biased decision-making here has the potential to negatively impact a person's life in a significant way.

In the case of justice systems, the United States has a data-driven decision-support tool in active use known as COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions). The idea behind COMPAS is to crunch available data (including age, sex, and criminal history) to help determine a criminal-court defendant's likelihood of committing a crime while awaiting trial. Supporters of COMPAS note that statistical predictions are helping courts make better decisions about bail than humans did on their own. At the same time, detractors have argued that COMPAS is better at predicting recidivism for some racial groups than for others. And since we can't control which group we belong to, that bias needs to be corrected. It's high time for guardrails.

A Step Toward Fairer AI Decisions

Enter Dean James and colleagues' algorithm.
Designed to make the outputs of AI decisions fairer, even without having to know the AI model's inner workings, their method is called "fairness-adjusted selective inference" (FASI). It works by flagging specific decisions that would be better handled by a human being in order to avoid systemic bias. That is to say, if the AI cannot yield an acceptably clear (1/0 or binary) answer, a human review is recommended.

To test their fairness-adjusted selective inference, the researchers turned to both simulated and real data. For the real data, the COMPAS dataset enabled a look at predicted and actual recidivism rates for two minority groups, as seen in the accompanying figures. In those figures, the researchers set an "acceptable level of mistakes" – shown as a dotted line – at 0.25 (25%). They then compared "minority group 1" and "minority group 2" results before and after applying their FASI framework. Especially if you were born into "minority group 2," which graph seems fairer to you? Professional ethicists will note a slight dip in overall accuracy, as seen in the green "all groups" category. And yet the treatment of the two groups is fairer. That is why the researchers titled their paper "A Burden Shared is a Burden Halved."

Practical Applications for the Greater Social Good

"To be honest, I was surprised by how well our framework worked without sacrificing much overall accuracy," Dean James notes. By selecting cases where human beings should review a criminal history – or credit history or medical charts – AI discrimination that would have significant quality-of-life consequences can be reduced.

Reducing protected groups' burden of bias is also a matter of following the law. For example, in the financial industry, the United States' Equal Credit Opportunity Act (ECOA) makes it "illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance," as the Federal Trade Commission explains on its website. If AI-powered programs fail to correct for AI bias, the companies using them can run into trouble with the law. In these cases, human reviews are well worth the extra effort for all stakeholders.

The paper grew out of Dean James's ongoing work as a data scientist, which he pursues when time allows. "Many of us data scientists are worried about bias in AI and we're trying to improve the output," he notes. And as new versions of ChatGPT continue to roll out, "new guardrails are being added – some better than others."

"I'm optimistic about AI," Dean James says. "And one thing that makes me optimistic is the fact that AI will learn and learn – there's no going back. In education, we think a lot about formal training and lifelong learning. But then that learning journey has to end," Dean James notes. "With AI, it never ends."

Gareth James is the John H. Harland Dean of Goizueta Business School. If you're looking to connect with him, simply click on his icon now to arrange an interview today.
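For the technically curious, here is a minimal sketch of what a group-aware selection rule of this kind might look like in code. It illustrates the general selective-classification idea (automate only the decisions the model can make with an acceptably low average error within each group, and route the rest to a human reviewer); it is not the authors' actual FASI implementation, and the function name, labels, and threshold logic are our own.

```python
import numpy as np

def fairness_adjusted_selection(scores, groups, alpha=0.25):
    """Within each group, automate the largest set of decisions whose
    average estimated error stays at or below alpha; refer everything
    else to a human reviewer. A sketch of the selective-classification
    idea, not the authors' FASI implementation."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    decisions = np.full(scores.shape, "human review", dtype=object)
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        # Estimated chance of error if this case is decided automatically.
        err = np.minimum(scores[idx], 1.0 - scores[idx])
        order = np.argsort(err)
        # Average error among the k most confident cases, for k = 1..n.
        avg_err = np.cumsum(err[order]) / np.arange(1, len(idx) + 1)
        k = int(np.sum(avg_err <= alpha))  # group-specific cutoff
        auto = idx[order[:k]]
        decisions[auto] = np.where(scores[auto] >= 0.5, "high risk", "low risk")
    return decisions

# Toy example: the same alpha yields different cutoffs per group, so the
# "burden" of human review falls where the model is least sure.
scores = [0.95, 0.62, 0.08, 0.85, 0.51, 0.47]
groups = ["group 1", "group 1", "group 1", "group 2", "group 2", "group 2"]
print(fairness_adjusted_selection(scores, groups, alpha=0.25))
```

In this toy run, all of group 1's cases clear the 25% bar and are automated, while two of group 2's three cases are sent to a human reviewer: the rule shares the burden rather than letting one group absorb more of the model's mistakes.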

Key topics at RNC 2024: Artificial Intelligence, Machine Learning and Cybersecurity

As the 2024 Republican National Convention begins, journalists from across the nation and the world will converge on Milwaukee, not only to cover the political spectacle but also to examine how the next potential administration will tackle issues that were barely on the radar, or at least not front and center, last election: artificial intelligence, machine learning and cybersecurity.

With technology, and the threats that come with it, moving at near-exponential speed, the next four years will bring challenges that no president or administration has seen before. Plans and policies will be required that have impact not just in America but on a global scale. To help visiting journalists navigate and understand these issues, and how and where Republican policies are taking on these topics, our MSOE experts are available to offer insights. Dr. Jeremy Kedziora, Dr. Derek Riley and Dr. Walter Schilling are leading voices nationally on these important subjects and are ready to assist with any stories during the convention.

Dr. Jeremy Kedziora
Associate Professor, PieperPower Endowed Chair in Artificial Intelligence
Expertise: AI, machine learning, ChatGPT, ethics of AI, global technology revolution, using these tools to solve business problems or advance business objectives, political science.

"Artificial intelligence and machine learning are part of everyday life at home and work. Businesses and industries—from manufacturing to health care and everything in between—are using them to solve problems, improve efficiencies and invent new products," said Dr. John Walz, MSOE president. "We are excited to welcome Dr. Jeremy Kedziora as MSOE's first PieperPower Endowed Chair in Artificial Intelligence. With MSOE as an educational leader in this space, it is imperative that our students are prepared to develop and advance AI and machine learning technologies while at the same time implementing them in a responsible and ethical manner."

"MSOE names Dr. Jeremy Kedziora as Endowed Chair in Artificial Intelligence" – MSOE online, March 22, 2023

Dr. Derek Riley
Professor, B.S. in Computer Science Program Director
Expertise: AI, machine learning, facial recognition, deep learning, high performance computing, mobile computing, artificial intelligence.

"At this point, it's fairly hard to avoid being impacted by AI," said Derek Riley, the computer science program director at Milwaukee School of Engineering. "Generative AI can really make major changes to what we perceive in the media, what we hear, what we read."

"Fake explicit pictures of Taylor Swift cause concern over lack of AI regulation" – CBS News, January 26, 2024

Dr. Walter Schilling
Professor
Expertise: Cybersecurity and the latest technological advancements in automobiles and home automation systems; how individuals can protect their business operations and personal networks.

Milwaukee School of Engineering cybersecurity professor Walter Schilling said it's a great opportunity for his students. "Just to see what the real world is like that they're going to be entering into," said Schilling. Schilling said cybersecurity is something all local organizations, from small business to government, need to pay attention to. "It's something that Milwaukee has to be concerned about as well because of the large companies that we have headquartered here, as well as the companies we're trying to attract in the future," said Schilling.
"Could the future of cybersecurity be in Milwaukee?: SysLogic holds 3rd annual summit at MSOE" – CBS News, April 26, 2022

Media Relations Contact
To schedule an interview or for more information, please contact:
JoEllen Burdue
Senior Director of Communications and Media Relations
Phone: (414) 839-0906
Email: burdue@msoe.edu

About Milwaukee School of Engineering (MSOE)
Milwaukee School of Engineering is the university of choice for those seeking an inclusive community of experiential learners driven to solve the complex challenges of today and tomorrow. The independent, non-profit university has about 2,800 students and was founded in 1903. MSOE offers bachelor's and master's degrees in engineering, business and nursing. Faculty are student-focused experts who bring real-world experience into the classroom. This approach to learning makes students ready now as well as prepared for the future. Longstanding partnerships with business and industry leaders enable students to learn alongside professional mentors, and challenge them to go beyond what's possible. MSOE graduates are leaders of character, responsible professionals, passionate learners and value creators.

Walter Schilling, Jr., Ph.D.
Jeremy Kedziora, Ph.D.
Derek Riley, Ph.D.
3 min. read

Milwaukee-Based Experts Available During 2024 Republican National Convention

Journalists attending the Republican National Convention (RNC) are invited to engage with leading Milwaukee School of Engineering (MSOE) experts in a range of fields, including artificial intelligence (AI), machine learning, cybersecurity, urban studies, biotechnology, population health, water resources, and higher education. MSOE media relations staff are available to identify key experts and assist in setting up interviews (see contact details below). As the RNC brings national attention to Milwaukee, discussions are expected to cover pivotal topics such as national security, technological innovation, urban development, and higher education. MSOE's experts are well positioned to provide research and insights, as well as local context, for your coverage.

Artificial Intelligence, Machine Learning, Cybersecurity

Dr. Jeremy Kedziora
Associate Professor, PieperPower Endowed Chair in Artificial Intelligence
Expertise: AI, machine learning, ChatGPT, ethics of AI, global technology revolution, using these tools to solve business problems or advance business objectives, political science.

Dr. Derek Riley
Professor, B.S. in Computer Science Program Director
Expertise: AI, machine learning, facial recognition, deep learning, high performance computing, mobile computing, artificial intelligence.

Dr. Walter Schilling
Professor
Expertise: Cybersecurity and the latest technological advancements in automobiles and home automation systems; how individuals can protect their business operations and personal networks.

Milwaukee and Wisconsin: Culture, Architecture & Urban Planning, Design

Dr. Michael Carriere
Professor, Honors Program Director
Expertise: An urban historian with expertise in American history, urban studies and sustainability; the growth of Milwaukee's neighborhoods, the challenges many of them are facing, and some of the solutions being implemented. Dr. Carriere is an expert in Milwaukee and Wisconsin history and politics, urban agriculture, creative placemaking, and the Milwaukee music scene.

Kurt Zimmerman
Assistant Professor
Expertise: Architectural history of Milwaukee, architecture, urban planning and sustainable design.

Biotechnology

Dr. Wujie Zhang
Professor, Chemical and Biomolecular Engineering
Expertise: Biomaterials; regenerative medicine and tissue engineering; micro/nano-technology; drug delivery; stem cell research; cancer treatment; cryobiology; food science and engineering. (Fluent in Chinese and English.)

Dr. Jung Lee
Professor, Chemical and Biomolecular Engineering
Expertise: Bioinformatics, drug design and molecular modeling.

Population Health

Robin Gates
Assistant Professor, Nursing
Expertise: Population health; understanding and addressing the diverse factors that influence health outcomes across different populations.

Water Resources

Dr. William Gonwa
Professor, Civil Engineering
Expertise: Water resources, sewers, storm water, civil engineering education.

Higher Education

Dr. Eric Baumgartner
Executive Vice President of Academics
Expertise: Thought leadership on higher education, relevancy and value of higher ed, role of AI in future degrees and workforce development.

Dr. Candela Marini
Assistant Professor
Expertise: Latin American Studies and Visual Culture.
Dr. John Walz
President
Expertise: Thought leadership on higher education, relevancy and value of higher ed.

Media Relations Contact
To schedule an interview or for more information, please contact:
JoEllen Burdue
Senior Director of Communications and Media Relations
Phone: (414) 839-0906
Email: burdue@msoe.edu

3 min. read

AI Art: What Should Fair Compensation Look Like?

New research from Goizueta's David Schweidel looks at questions of compensation for human artists when images based on their work are generated via artificial intelligence.

Artificial intelligence is making art. That is to say, compelling artistic creations based on thousands of years of art production may now be just a few text prompts away. And it's all thanks to generative AI trained on internet images. You don't need Picasso's skill set to create something in his style. You just need an AI-powered image generator like DALL-E 3 (created by OpenAI), Midjourney, or Stable Diffusion. If you haven't tried one of these programs yet, you really should (free or beta versions make this a low-risk proposition). For example, you might use your phone to snap a photo of your child's latest masterpiece from school. Then, you might ask DALL-E to render it in the swirling style of Vincent van Gogh. A color printout of that might jazz up your refrigerator door for the better.

Intellectual Property in the Age of AI

Now, what if you wanted to sell your AI-generated art on a t-shirt or poster? Or what if you wanted to create a surefire logo for your business? What are the intellectual property (IP) implications at work?

Take the case of a 35-year-old Polish artist named Greg Rutkowski. Rutkowski has reportedly been included in more AI-image prompts than Pablo Picasso, Leonardo da Vinci, or Van Gogh. As a professional digital artist, Rutkowski makes his living creating striking images of dragons and battles in his signature fantasy style. That is, unless they are generated by AI, in which case he doesn't.

"They say imitation is the sincerest form of flattery. But what about the case of a working artist? What if someone is potentially not receiving payment because people can easily copy his style with generative AI?" That's the question David Schweidel, Rebecca Cheney McGreevy Endowed Chair and professor of marketing at Goizueta Business School, is asking. Flattery won't pay the bills. "We realized early on that IP is a huge issue when it comes to all forms of generative AI," Schweidel says. "We have to resolve such issues to unlock AI's potential."

Schweidel's latest working paper is titled "Generative AI and Artists: Consumer Preferences for Style and Fair Compensation." It is coauthored with professors Jason Bell, Jeff Dotson, and Wen Wang (of the University of Oxford, Brigham Young University, and the University of Maryland, respectively). In the paper, the four researchers analyze a series of experiments with consumers' prompts and preferences using Midjourney and Stable Diffusion. The results lead to practical advice and insights that could benefit artists and AI's business users alike.

Real Compensation for AI Work?

In their research, to see if compensating artists for AI creations was a viable option, the coauthors wanted to see if three basic conditions were met:

– Are artists' names frequently used in generative AI prompts?
– Do consumers prefer the results of prompts that cite artists' names?
– Are consumers willing to pay more for an AI-generated product that was created citing some artists' names?

Crunching the data, they found the same answer to all three questions: yes. More specifically, the coauthors turned to a dataset that contains millions of "text-to-image" prompts from Stable Diffusion. In this large dataset, the researchers found that living and deceased artists were frequently mentioned by name.
(For the curious, the top three mentioned in this database were: Rutkowski, artgerm [another contemporary artist, born in Hong Kong, residing in Singapore] and Alphonse Mucha [a popular Czech Art Nouveau artist who died in 1939].)

Given that AI users are likely to use artists' names in their text prompts, the team also conducted experiments to gauge how the results were perceived. Using deep learning models, they found that including an artist's name in a prompt systematically improves the output's aesthetic quality and likeability.

The Impact of Artist Compensation on Perceived Worth

Next, the researchers studied consumers' willingness to pay in various circumstances. The researchers used Midjourney with the following dynamic prompt: "Create a picture of ⟨subject⟩ in the style of ⟨artist⟩". The subjects chosen were the advertising creation known as the Most Interesting Man in the World, the fictional candy tycoon Willy Wonka, and the late TV painting instructor Bob Ross (why not?). The artists cited were Ansel Adams, Frida Kahlo, Alphonse Mucha and Shinichiro Watanabe. The team repeated the experiment with and without artists in various configurations of subjects and styles to find statistically significant patterns. In some experiments, consumers were asked to consider buying t-shirts or wall art. In short, the series of experiments revealed that consumers saw more value in an image when they understood that the artist associated with it would be compensated.

[Figure: a sample of AI-generated imagery using the three subjects' names "in the style of Alphonse Mucha." Source: Midjourney, cited in http://dx.doi.org/10.2139/ssrn.4428509]

"I was honestly a bit surprised that people were willing to pay more for a product if they knew the artist would get compensated," Schweidel explains. "In short, the pay-per-use model really resonates with consumers." In fact, consumers preferred pay-per-use over a model in which artists received a flat fee in return for being included in AI training data. That is to say, royalties seem like a fairer way to reward the most popular artists in AI. Of course, there's still much more work to be done to figure out the right amount to pay in each possible case.

What Can We Draw From This?

We're still in the early days of generative AI, and IP issues abound. Notably, the New York Times announced in December that it is suing OpenAI (the creator of ChatGPT) and Microsoft for copyright infringement. Millions of New York Times articles have been used to train generative AI to inform and improve it. "The lawsuit by the New York Times could feasibly result in a ruling that these models were built on tainted data. Where would that leave us?" asks Schweidel.

"One thing is clear: we must work to resolve compensation and IP issues. Our research shows that consumers respond positively to fair compensation models. That's a path for companies to legally leverage these technologies while benefiting creators." – David Schweidel

To adopt generative AI responsibly in the future, businesses should consider three things. First, they should communicate to consumers when artists' styles are used. Second, they should compensate contributing artists. And third, they should convey these practices to consumers. "And our research indicates that consumers will feel better about that: it's ethical."

AI is quickly becoming a topic for regulators, lawmakers and journalists, and if you're looking to know more, let us help. David A. Schweidel is Professor of Marketing at Goizueta Business School at Emory University. To connect with David to arrange an interview, simply click his icon now.
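As an aside for readers who want to see the mechanics, the factorial design described above is easy to reproduce. The sketch below builds the full grid of prompts from the subjects and artists named in the article; the helper function is our own invention, and actually submitting the prompts to Midjourney would require separate tooling.

```python
from itertools import product

subjects = [
    "the Most Interesting Man in the World",
    "Willy Wonka",
    "Bob Ross",
]
artists = ["Ansel Adams", "Frida Kahlo", "Alphonse Mucha", "Shinichiro Watanabe"]

def build_prompts(subjects, artists):
    """Return every subject-by-artist pairing of the dynamic prompt,
    plus artist-free controls for the 'without artists' conditions."""
    styled = [f"Create a picture of {s} in the style of {a}"
              for s, a in product(subjects, artists)]
    controls = [f"Create a picture of {s}" for s in subjects]
    return styled + controls

prompts = build_prompts(subjects, artists)
print(len(prompts))   # 3 subjects x 4 artists + 3 controls = 15
print(prompts[0])
```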

Exploring the Depths: How AI is Revolutionizing Seafloor Research

In recent years, there has been a significant shift in the way seafloor research is conducted, thanks to groundbreaking advancements in artificial intelligence (AI) technology. The depths of our oceans have always been a mystery, but with the use of AI, scientists and researchers are now able to explore and uncover the hidden secrets that lie beneath the surface.

With funding from the Department of Defense, University of Delaware oceanographer Art Trembanis and others are using artificial intelligence and machine learning to analyze seafloor data from the Mid-Atlantic Ocean. The goal is to develop robust machine-learning methods that can accurately and reliably detect objects in seafloor data.

"You can fire up your phone and type dog, boat or bow tie into a search engine, and it's going to search for and find all those things. Why? Because there are huge datasets of annotated images for that," he said. "You don't have that same repository for things like subway car, mine, unexploded ordnance, pipeline, shipwreck, seafloor ripples, and we are working to develop just such a repository for seabed intelligence."

Trembanis is able to talk about this research and the impact it could have on our day-to-day lives. He can be contacted by clicking his profile.

"You have commercial companies that are trying to track pipelines, thinking about where power cables will go or offshore wind farms, or figuring out where to find sand to put on our beaches," said Trembanis. "All of this requires knowledge about the seafloor. Leveraging deep learning and AI and making it ubiquitous in its applications can serve many industries, audiences and agencies with the same methodology to help us go from complex data to actionable intelligence."

He has appeared in The Economic Times, Technical.ly and Gizmodo.
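To give a flavor of what such an annotated repository enables, here is a hedged sketch of the standard recipe for fine-tuning an off-the-shelf object detector on domain imagery, using torchvision's Faster R-CNN API. The class list is hypothetical (it echoes the targets Trembanis mentions), and this is not the project's actual model or data pipeline.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical seabed classes (plus background at index 0).
classes = ["background", "mine", "unexploded ordnance", "pipeline",
           "shipwreck", "seafloor ripples"]

# Start from a detector pretrained on everyday photos, then swap in a
# new prediction head sized for the seabed classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(classes))

# Training would iterate over an annotated sonar dataset (images plus
# bounding boxes) - exactly the repository the team is building.
# Inference on one fake 512x512 sonar tile, just to show the interface:
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])[0]
print(detections["boxes"].shape, detections["labels"].shape)
```

The key design point is transfer learning: the pretrained backbone already knows generic shapes and textures, so the scarce, expensive part (annotated sonar imagery) is needed only to teach the final layers what seabed targets look like.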

Arthur Trembanis
2 min. read

Aston University forensic linguistics experts partner in $11.3 million funding for authorship attribution research

– Aston Institute for Forensic Linguistics (AIFL) is part of a project to infer the authorship of uncredited documents based on writing style
– AIFL's Professor Tim Grant and Dr Krzysztof Kredens are experts in authorship analysis
– Applications may include identifying counterintelligence risks, combating misinformation online, fighting human trafficking and even deciphering the authorship of ancient religious texts

Aston University's Institute for Forensic Linguistics (AIFL) is part of the AUTHOR research consortium, which has won an $11.3 million contract to infer the authorship of uncredited documents based on writing style. The acronym stands for 'Attribution, and Undermining the Attribution, of Text while providing Human-Oriented Rationales'. Worth $1.3 million, the Aston University part of the project is being led by Professor Tim Grant and Dr Krzysztof Kredens, both of whom are recognised internationally as experts in authorship analysis and engage in forensic linguistic casework as expert witnesses. In addition to their recognised general expertise and experience in this area, Professor Grant has specific expertise in using linguistic analysis to enhance online undercover policing, and Dr Kredens has led projects to develop authorship identification techniques involving very large numbers of potential authors.

The AUTHOR team is led by Charles River Analytics and is one of six teams of researchers that won the Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) programme sponsored by the Intelligence Advanced Research Projects Activity (IARPA). The programme uses natural language processing techniques and machine learning to create stylistic fingerprints that capture the writing style of specific authors. On the flip side is authorship privacy: mechanisms that can anonymize the identities of authors, especially when their lives are in danger. Pitting the attribution and privacy teams against each other will hopefully motivate each, says Dr Terry Patten, principal scientist at Charles River Analytics and principal investigator of the AUTHOR consortium.

"One of the big challenges for the programme and for authorship attribution in general is that the document you're looking at may not be in the same genre or on the same topic as the sample documents you have for a particular author," Patten says. The same applies to languages: we might have example articles by an author in English but need to match the style even if the document at hand is in French. Authorship privacy has its own challenges: users must obfuscate their style without changing the meaning, which can be difficult to execute.

In the area of authorship attribution, the research and casework experience from Aston University will assist the team in identifying and using a broad spectrum of authorship markers. Authorship attribution research has more typically looked to words and their frequencies as identifying characteristics. However, Professor Grant's previous work on online undercover policing has shown that higher-level discourse features, such as how authors structure their interactions, can be important 'tells' in authorship analysis. The growth of natural language processing (NLP) and one of its underlying techniques, machine learning, is motivating researchers to harness these new technologies in solving the classic problem of authorship attribution.
The challenge, Patten says, is that while machine learning is very effective at authorship attribution, "deep learning systems that use neural networks can't explain why they arrived at the answers they did." Evidence in criminal trials can't afford to hinge on such black-box systems. That is why the core condition of AUTHOR is that it be "human-interpretable." Dr Kredens has developed research and insights into how explanations can be drawn out of black-box authorship attribution systems, so that their findings can be integrated into linguistic theory about who we are as linguistic individuals. Initially, the project is expected to focus on feature discovery: beyond words, what features can we discover to increase the accuracy of authorship attribution?

The project has a range of promising applications: identifying counterintelligence risks, combating misinformation online, fighting human trafficking, and even figuring out the authorship of ancient religious texts.

Professor Grant said: "We were really excited to be part of this project both as an opportunity to develop new findings and techniques in one of our core research areas, and also because it provides further recognition of AIFL's international reputation in the field." Dr Kredens added: "This is a great opportunity to take our cutting-edge research in this area to a new level."

Professor Simon Green, Pro-Vice-Chancellor for Research, commented: "I am delighted that the international consortium bid involving AIFL has been successful. As one of Aston University's four research institutes, AIFL is a genuine world leader in its field, and this award demonstrates its reputation globally. This project is a prime example of our capacities and expertise in the area of technology, and we are proud to be a partner."

Patten is excited about the promise of AUTHOR, as it is poised to make fundamental contributions to the field of NLP. "It's really forcing us to address an issue that's been central to natural language processing," Patten says. "In NLP and artificial intelligence in general, we need to find a way to build hybrid systems that can incorporate both deep learning and human-interpretable representations. The field needs to find ways to make neural networks and linguistic representations work together." "We need to get the best of both worlds," Patten says.

The team includes some of the world's foremost researchers in authorship analysis, computational linguistics, and machine learning from Illinois Institute of Technology, Aston Institute for Forensic Linguistics, Rensselaer Polytechnic Institute, and the Howard Brain Sciences Foundation.
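For readers who want a concrete, if drastically simplified, picture of authorship attribution, the sketch below trains an interpretable classifier on character n-gram frequencies, one classic family of stylistic 'fingerprints'. It is a toy under obvious assumptions (a handful of invented labeled snippets), nothing like the genre- and language-robust HIATUS systems described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training snippets: two candidate authors, two documents each.
docs = [
    "Honestly, I reckon the committee will simply defer the decision again.",
    "I reckon we ought to press on, though honestly the odds look poor.",
    "The results, as shown in Table 2, indicate a statistically robust effect.",
    "As indicated previously, the effect remains robust across all models.",
]
authors = ["author_A", "author_A", "author_B", "author_B"]

# Character n-grams within word boundaries capture habits of spelling,
# morphology, and function-word use; the linear model's weights remain
# inspectable, unlike a deep black-box system.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(docs, authors)

print(model.predict(["Honestly, I reckon this draft needs another pass."]))
```

The interpretability trade-off Patten describes is visible even here: this linear model can show exactly which n-grams drove its decision, while a deep network would typically be more accurate but far harder to explain to a court.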

4 min. read

Unattainably Perfect: Idealized Images of Influencers Negatively Affect Users’ Mental Health

Filters, Adobe Photoshop, and other digital tools are commonly used by social media "influencers": celebrities or individuals with a large follower base who "influence," or hold sway over, online audiences. This digital enhancement of images is well documented anecdotally. Instagram, in particular, has come under growing scrutiny from the media in recent years for promoting and popularizing unattainably perfect or unrealistic representations of its influencers. What's less understood is the appeal and the actual effect that these digitally enhanced images have on followers–particularly in terms of people's feelings of self-worth and their mental wellbeing. A groundbreaking study by Goizueta Business School's David Schweidel and Morgan Ward sheds new light on the real-world impact of digital enhancement, and what they find should be cause for significant concern.

Downstream Consequences: Impressions Have Lasting Impact

Across a series of five studies with a broad sample of participants, and using AI-powered deep learning analysis to parse individuals' responses, Schweidel and Ward have unearthed a series of insights about the lure of these kinds of idealized images and the negative "downstream consequences" they have on other users' self-esteem.

"Going into the research, we hypothesized that micro-influencers who digitally manipulate their images, offering unrealistic versions of themselves, would be more successful at engaging with other users–getting more follows, likes, and comments from them. And we do find this to be the case, but that's not all," says Schweidel.

He and Ward also discover that when users are exposed to these kinds of images, they make comparisons between themselves and the enhanced influencers; comparisons that leave them feeling lacking, envious, and often inadequate in some way. In terms of mental health and wellbeing, this is alarming, says Ward: "Our research shows unequivocally that when followers consume idealized versions of popular figures on social media, there is a social comparison process that results in these users experiencing negative feelings and a substantial decline in their state of self-esteem."

On the basis of these insights, is Meta–the owner of Facebook and Instagram–likely to take action to limit the use of digital enhancement on its platforms and apps any time soon? Unlikely, say Schweidel and Ward. "Meta seems to be fully aware of the deleterious effects that Instagram has on its users. However, the success of Instagram–and that of the brands and influencers that appear on the app–is fueled by increased consumer engagement: the very engagement that this kind of digital enhancement of images drives. So the incentive is there to maintain the practices that keep users engaged, even if there's a trade-off in their emotional and mental health."

This is a fascinating and important topic, and if you're a reporter looking to know more, let us help. David A. Schweidel is professor of marketing at Emory University's Goizueta Business School. He is an expert in the areas of customer relationship management and social media analytics. Morgan Ward is an assistant professor of marketing at Emory University's Goizueta Business School and an expert in consumer behavior. Both experts are available to speak with media; simply click on an icon to arrange a discussion today.

Emory Experts - Why Companies Invest in Local Social Media Influencers

Companies seek local influencers to pitch products. Even though most influencers amass geographically dispersed followings on social media, companies are willing to funnel billions of sponsorship dollars to multiple influencers located in different geographic areas, effectively creating sponsorships that span cities, countries, and in some cases even the globe. The desire to work with local influencers has spawned advertising agencies that specialize in connecting companies with influencers, and may soon redefine the influencer economy.

This trend has merit, our research team finds. In a new Journal of Marketing study, we show a positive link between online influence and how geographically close an influencer's followers are. The nearer a follower is geographically to someone who posts an online recommendation, the more likely she is to follow that recommendation.

To investigate whether geographical distance still matters when word of mouth is disseminated online, our research team examined thousands of actual purchases made via Twitter. We found that the likelihood that people would also buy a product after seeing a tweet about it from someone they follow increases the closer they reside to the purchaser. Not only were followers more likely to heed an influencer's recommendation the closer they physically resided to the influencer, they also acted on it more quickly.

We find that this role of geographic proximity in the effectiveness of online influence occurs across several well-known retailers and for different types of products, including video game consoles, electronics and sports equipment, gift cards, jewelry, and handbags. We show the results hold even when using different ways to statistically measure the effects, including state-of-the-art machine learning and deep learning techniques applied to millions of Twitter messages.

We posit that this role of geographic proximity may be due to an invisible connection between people that is rooted in the commonality of place. This invisible link can lead people to identify more closely with someone who is located nearby, even if they do not personally know that person. The result is that people are more likely to follow someone's online recommendation when they live closer to them. These online recommendations can take any form, from a movie review to a restaurant rating to a product pitch.

What makes these findings surprising is that experts predicted the opposite effect when the internet first became widely adopted. Experts declared the death of distance. In theory, this makes sense: people don't need to meet in person to share their opinions, reviews, and purchases when they can do so electronically. What the experts who envisioned the end of geography may have overlooked, however, is how people decide whose online opinion to trust. This is where cues that indicate a person's identity, such as where that person lives in the real world, come into play. We may be more likely to trust the online opinion of someone who lives in the same city as us than of someone who lives farther away, simply because we have location in common. Known as social identity theory, this process explains how individuals form perceptions of belonging to and relating to a community. Who we identify with can affect the degree to which we are influenced, even when this influence occurs online.
Our findings imply that technology and electronic communications do not completely overcome the forces that govern influence in the real world. Geographical proximity still matters, even in the digital space. The findings also suggest that information and cues about an individual's identity online, such as where he/she lives, may affect his/her influence on others through the extent to which others feel they can relate to him/her. These findings on how spatial proximity may still be a tie that binds, even in an online world, affirm what some companies have long suspected: local influencers may have a leg up in the influence game and are worth their weight in location.

For these reasons, companies may want to work with influencers who have more proximal connections to increase the persuasiveness of their online advertising, product recommendations, and referral programs. Government officials and not-for-profit organizations may similarly want to partner with local ambassadors to more effectively raise awareness of—and change attitudes and behaviors towards—important social issues.

Goizueta faculty members Vilma Todri, assistant professor of Information Systems & Operations Management, Panagiotis (Panos) Adamopoulos, assistant professor of Information Systems & Operations Management, and Michelle Andrews, assistant professor of marketing, shared this article with the American Marketing Association to highlight their new study published in the Journal of Marketing. To contact any of the experts for an interview regarding this topic, simply click on their icon to arrange a time to talk today.
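To illustrate the kind of relationship the study measures (though not its actual models or data), here is a small simulation: purchase likelihood decays with follower-purchaser distance, and a logistic regression on log-distance recovers a negative proximity coefficient. All numbers below are made up for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical follower-purchaser distances (km) and simulated purchases
# whose probability shrinks as distance grows.
distance_km = rng.uniform(1, 3000, n)
p_buy = 1.0 / (1.0 + np.exp(2.0 + 0.0012 * distance_km))
bought = (rng.random(n) < p_buy).astype(int)

# Regress purchase on log-distance, a common specification for decay effects.
X = np.log1p(distance_km).reshape(-1, 1)
clf = LogisticRegression().fit(X, bought)
print("log-distance coefficient:", round(float(clf.coef_[0][0]), 3))  # negative
```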

Vilma Todri
Panagiotis (Panos) Adamopoulos
4 min. read

Tracking down those who tried to capture the Capitol buildings – our expert can explain how they’re doing it

On January 6, America watched with shock as a mob of protesters stormed the gates in Washington, D.C. and invaded the Capitol buildings. For hours, the rioters looted and occupied America's halls of power, and though some were apprehended, many found a way to get out and get back home, avoiding arrest. However, media coverage was substantial, and some of the protesters were even bold enough to be caught posing for social media. Slowly, authorities are tracking them down, and Dr. Derek Riley, an expert at Milwaukee School of Engineering (MSOE) in the areas of computer science and deep learning, has been explaining how artificial intelligence (AI) technology of the kind taught at MSOE can aid law enforcement's efforts to identify individuals from pictures.

"With these AI systems, we'll show it example photos and we'll say, 'OK, this is a nose, this is an ear, this is Billy, this is Susie,'" Riley said. "And over lots and lots of examples and a kind of understanding if they guess right or wrong, the algorithm actually tunes itself to get better and better at recognizing certain things."

Dr. Riley says this takes huge amounts of data and often needs a supercomputer—like MSOE's "Rosie"—to process it. To get a computer or software to recognize a specific person takes more fine-tuning, Riley says. He says your smartphone may already do this. "If you have a fingerprint scan or facial recognition to open up your phone, that's exactly what's happening," Riley said. "So, they've already trained a really large model to do all the basic recognition, and then you provide a device with a fingerprint scanning or pictures of your face at the end to be able to fine-tune that model to recognize exactly who you are."

Riley says this technology isn't foolproof—human intelligence is needed at every step. He added that we might be contributing to the data sources some of the technology needs by posting our pictures to social media. "Folks are uploading their own images constantly and that often is the source of the data that is used to train these really, really large systems," Riley said.

January 14 – WTMJ, Ch. 4, NBC News

The concept of facial recognition and the use of this technology in law enforcement (and several other applications) is an emerging topic, and if you are a reporter looking to cover it or speak with an expert, let us help. Dr. Derek Riley is an expert in big data, artificial intelligence, computer modeling and simulation, and mobile computing/programming. He's available to speak with media about facial recognition technology and its many uses. Simply click on his icon now to arrange an interview today.
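The "train a large model first, then specialize it for one person" pattern Dr. Riley describes can be sketched in a few lines. Below, a general-purpose pretrained vision network serves as a feature extractor, and a new photo is identified by comparing its embedding with enrolled references. This is illustrative only: random tensors stand in for photos, and real face-recognition systems use face-specific models and careful training.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Reuse a large pretrained network as an embedding model: drop its
# classifier head and keep the 512-dimensional feature vector.
backbone = models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(img):
    """Return a unit-length embedding for one 3x224x224 image tensor."""
    return F.normalize(backbone(img.unsqueeze(0)), dim=1)

# "Enroll" known people from reference photos (random stand-ins here).
enrolled = {
    "Billy": embed(torch.rand(3, 224, 224)),
    "Susie": embed(torch.rand(3, 224, 224)),
}

# Identify a new photo by cosine similarity against the enrolled set.
probe = embed(torch.rand(3, 224, 224))
scores = {name: float(probe @ ref.T) for name, ref in enrolled.items()}
print(max(scores, key=scores.get), scores)
```

This mirrors the two stages in Riley's explanation: the expensive, data-hungry pretraining happens once on a supercomputer, while the per-person step only needs a handful of reference images.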

Derek Riley, Ph.D.
2 min. read

Let our experts explain the value of AI and Process Automation. Join us at Directions 2019 on May 02 to find out!

Just how big of a deal is AI? At this year's Directions 2019, IDC Canada experts will speak to a variety of topics that are reshaping the digital visions and tactics modern companies are using to compete. Explore how AI encompasses a huge spectrum of technologies for the enterprise, and how at the center of it all is data.

On May 2, join Warren Shiau, Research Vice-President with IDC Canada, as he presents a highly anticipated talk, "AI: Process Automation," at 11:20 AM. Warren will look at what's being adopted by Canadian enterprises under the banner of AI, and why AI can generate significant business value even in the absence of large data science teams and enterprise-wide high-quality data. Deep learning may rule the future, but "small AI" targeting things like process automation rules the day. Organizations are rethinking digital transformation – join us May 2 to learn more.

Location: St. James Cathedral Centre, Snell Hall, 65 Church Street, Toronto
Date: May 2, 2019
Time: 8:00 AM - 8:30 AM Registration & Networking Breakfast | 8:30 AM - 3:30 PM Conference Program

Register today before it's too late! If you're a member of the media and would like to attend this event, please contact Cristina Santander at csantander@idc.com.

1 min. read