Experts Matter. Find Yours.
Connect for media, speaking, professional opportunities & more.

Researchers fight cybercrime with new digital forensic tools and techniques
Irfan Ahmed, Ph.D., associate professor of computer science, provides digital forensic tools — and the knowledge to use them — to the good guys fighting the never-ending cybersecurity war. Ahmed is director of the Security and Forensics Engineering (SAFE) Lab within the Department of Computer Science and VCU Engineering. He leads a pair of interrelated projects funded by the U.S. Department of Homeland Security (DHS) aimed at keeping important industrial systems safe from the bad guys — and he is showing that the same tools crafted for investigating cyber attacks can be used to probe other crimes.

The goal of cyber attacks on physical infrastructure may be to cause chaos by disrupting systems, to hold systems for ransom, or both. The SAFE lab focuses on protecting industrial control systems used in the operation of nuclear plants, dams, electricity delivery systems and a wide range of other elements of critical infrastructure in the U.S. The problem isn’t new: In 2010, the Stuxnet computer worm targeted centrifuges at Iranian nuclear facilities before getting loose and infecting “innocent” computers around the world.

Cyber attacks often target a portion of the software architecture known as the control logic. Control logic is vulnerable in that one of its functions is to receive instructions from the user and hand them off to be executed by a programmable logic controller. For instance, the control logic monitoring a natural gas pipeline might be programmed to open a valve if the system detects pressure getting too high. Programmers can modify the control logic — but so can attackers.

One of Ahmed’s DHS-supported projects, called “Digital Forensic Tools and Techniques for Investigating Control Logic Attacks in Industrial Control Systems,” allows him to craft devices and techniques that cyber detectives can use in their investigations of attacks on sensitive critical infrastructure.
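The pipeline example above can be sketched as toy code. This is a minimal illustration, not real PLC programming: the setpoint value, function name and hysteresis band are invented for the example.

```python
PRESSURE_SETPOINT = 100.0  # hypothetical threshold, in psi

def control_logic(pressure: float, valve_open: bool) -> bool:
    """One scan cycle of a toy relief-valve control logic: return the new valve state."""
    if pressure > PRESSURE_SETPOINT:
        return True          # pressure too high: open the relief valve
    if pressure < PRESSURE_SETPOINT * 0.9:
        return False         # pressure well below setpoint: close the valve again
    return valve_open        # in between: hold the current state (hysteresis)

# An attacker who can rewrite logic like this could, for example, invert the
# first comparison so the valve stays shut as pressure climbs.
print(control_logic(120.0, False))  # True
```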
Investigation capability, he explains, is an under-researched area, as most of the emphasis to date has been on the prevention and detection of cyber attacks. “The best scenario is to prevent the attacks on industrial systems,” Ahmed said. “But if an attack does happen, then what? This is where we try to fill the gap at VCU. And the knowledge that we gain in a cyber attack investigation can further help us to detect or even prevent similar attacks.”

In the cat-and-mouse world of cybersecurity, the way cybercriminals work is in constant evolution, and Ahmed’s SAFE lab pays close attention to the latest developments by malefactors. For instance, an attacker may go for a more subtle approach than modifying the original control logic. An attack method called return-oriented programming sees the malefactor using the existing control logic code, but artfully switching the execution sequence of the code. Other attackers might insert their malware into another area of the controller, programmed to run undetected until it can replace the function of the original control logic. Attackers are always coming up with new methods, but each attack leaves evidence behind.

The SAFE lab examines possible attack scenarios through simulations. Scale models of physical systems, including an elevator and a belt conveyor system, are housed at the SAFE lab to help facilitate this. The elevator is a four-floor model with inside and outside buttons feeding into a programmable logic controller. The conveyor belt is more advanced, equipped with inductive, capacitive and photoelectric sensors and able to sort objects.

The tools and methods applied in cybercrime investigations can also be useful in tracking down other malefactors. That’s where Ahmed’s second DHS-funded project comes in.
It’s called “Data Science-integrated Experiential Digital Forensics Training based-on Real-world Case Studies of Cybercrime Artifacts.” Ahmed is the principal investigator, working with co-PI Kostadin Damevski, Ph.D., associate professor of computer science. The goal is to keep law enforcement personnel abreast of the latest trends in the field of cybercrime investigation and to equip them with the latest tools and techniques, including those developed in the SAFE lab.

“For example, investigators often have to go through thousands of images, or emails or chats, looking for something very specific,” Ahmed said. “We believe the right data science tools can help them to narrow down that search.”

The FBI and other law enforcement agencies already have dedicated cybersleuthing units; the Virginia State Police have a computer evidence recovery section in Richmond. Ahmed and Damevski are arranging sessions showing investigators how techniques from data science and machine learning can make investigations more efficient by sorting through the mounds of digital evidence that is increasingly a feature of modern crime.
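The triage idea Ahmed describes can be sketched in a few lines. This is a deliberately simple, standard-library-only illustration: rank a pile of seized messages by keyword overlap with an investigator's query so the most relevant items surface first. The function name and sample messages are invented; real tooling would use far richer models than word counting.

```python
def rank_messages(messages, query):
    """Rank messages by how many query terms they contain; drop zero-score items."""
    terms = set(query.lower().split())
    scored = [(sum(w in terms for w in m.lower().split()), m) for m in messages]
    # Sort highest score first; sorted() is stable, so ties keep input order.
    return [m for score, m in sorted(scored, key=lambda t: -t[0]) if score > 0]

msgs = ["meet at the dock at nine", "happy birthday!", "bring the cash to the dock"]
print(rank_messages(msgs, "dock cash"))
# ['bring the cash to the dock', 'meet at the dock at nine']
```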

Ask an Expert: Is the "AI Moratorium" too far-reaching?
Recent responses to ChatGPT have featured eminent technologists calling for a six-month moratorium on the development of “AI systems more powerful than GPT-4.” Dr. Jeremy Kedziora, PieperPower Endowed Chair in Artificial Intelligence at Milwaukee School of Engineering, supports a middle-ground approach between unregulated development and a pause. He says, "I do not agree with a moratorium, but I would call for government action to develop regulatory guidelines for AI use, particularly for endowing AIs with actions." Dr. Kedziora is available as a subject matter expert on the recent "AI moratorium" that was issued by tech leaders.

According to Dr. Kedziora, there are good reasons to call for additional oversight of AI creation:

Large deep learning or reinforcement learning systems encode complicated relationships that are difficult for users to predict and understand. Integrating them into daily use by billions of people implies some sort of complex adaptive system whose behavior is even more difficult for planners to anticipate, predict, and plan for. This is likely fertile ground for unintended – and bad – outcomes.

Rather than outright replacement, a very real possibility is that AI-enabled workers will have sufficiently high productivity that we’ll need fewer workers to accomplish tasks. The implication is that there won’t be enough jobs for those who want them. This means that governments will need to seriously consider proposals for universal basic income and work to limit economic displacement, work which will require time and political bargaining.

I do not think it is controversial that we would not want a research group at MIT or Caltech, or anywhere else, developing an unregulated nuclear weapon. Given the difficulty in predicting its impact, AI may well be in the same category of powerful technologies, suggesting that its creation should be subject to the democratic process.
At the same time, there are some important things to keep in mind regarding ChatGPT-like AI systems that suggest there are inherent limits to their impact:

Though ChatGPT may appear – at times – to pass the famous Turing test, this does not imply these systems ‘think,’ or are ‘self-aware,’ or are ‘alive.’ The Turing test aims to avoid answering these questions altogether by simply asking if a machine can be distinguished from a human by another human. At the end of the day, ChatGPT is nothing more than a bunch of weights!

Contemporary AIs – ChatGPT included – have very limited levers to pull. They simply can’t take many actions. Indeed, ChatGPT’s only action is to create text in response to a prompt. It cannot do anything independently. Its effects, for now, are limited to passing through the hands of humans and to the social changes it could thereby create.

The call for a moratorium emphasizes ‘control’ over AI. It is worth asking just what this control means. Take ChatGPT as an example – can its makers control responses to prompts? Probably only in a limited fashion at best, with less and less ability as more people use it. There simply aren’t resources to police its responses. Can ChatGPT’s makers ‘flip the off switch?’ Absolutely – restricting access to the API would effectively turn ChatGPT off. In that sense, it is certainly under the same kind of control that humans subject to government are.

Keep in mind that there are coordination problems – just because there is an AI moratorium in the US does not mean that other countries – particularly US adversaries – will stop development. And as others have said: “as long as AI systems have objectives set by humans, most ethics concerns related to artificial intelligence come from the ethics of the countries wielding them.”

There are definitional problems with this sort of moratorium – who would be subject to it? Industry actors? Academics?
The criterion those who call for the moratorium use is “AI systems more powerful than GPT-4.” What does “powerful” mean? Enforcement requires drawing boundaries around which AI development is subject to a moratorium – without those boundaries, how would such a policy be enforced? It might already be too late – some already claim that they’ve recreated ChatGPT.

There are two major groups to think about when looking to develop regulatory solutions for AI: academia and industry. There may already be good vehicles for regulating academic research, for example oversight of grant funding. Oversight of AI development in industry is an area that requires attention and the application of expertise.

If you're a journalist covering artificial intelligence, then let us help. Dr. Kedziora is a respected expert in data science, machine learning, statistical modeling, Bayesian inference, game theory and AI. He's available to speak with the media - simply click on his icon now to arrange an interview today.

Aston University and asbestos consultancy to use AI to improve social housing maintenance
• Aston University and Thames Laboratories enter 30-month Knowledge Transfer Partnership
• Will use machine learning and AI to create a maintenance prioritisation system
• Collaboration will reduce costs and emissions, enhance productivity and improve residents' satisfaction

Aston University is teaming up with asbestos consultancy Thames Laboratories (TL) to improve the efficiency of social housing repairs. There are over 1,600 registered social housing providers in England, managing in excess of 4.4 million homes. Each of these properties requires statutory inspections to check gas, asbestos and water hygiene, in addition to general upkeep. However, there is currently no scheduling system available that offers integration between key maintenance and safety contractors, resulting in additional site visits, increased travel costs and re-work.

Aston University computer scientists will use machine learning and AI to create a maintenance prioritisation system that will centralise job requests and automatically allocate them to the relevant contractors. The collaboration is through a Knowledge Transfer Partnership (KTP) - a collaboration between a business, an academic partner and a highly qualified researcher, known as a KTP associate.

This partnership builds on the outcomes of TL’s first collaboration with Aston University by expanding the system developed for the company’s in-house use, which directs its field staff to jobs. The project team will improve the system developed during the current KTP to enable it to interact with client and contractor systems, by combining an input data processing unit, enhanced optimisation algorithms, customer enhancements and third-party add-ons into a single dynamic system. The Aston University team will be led by Aniko Ekart, professor of artificial intelligence.
She said: “It is a privilege to be involved in the creation of this system, which will select the best contractor for each job based on their skill set, availability and location and be reactive to changing priorities of jobs."

TL, based in Fenstanton, just outside Cambridge, provides asbestos consultancy, project management and training to businesses, local authorities, social housing and education facilities, using a fleet of mobile engineers across the UK. John Richards, managing director at Thames Laboratories, said: “This partnership will allow us to adopt the latest research and expertise from a world-leading academic institute to develop an original solution to improving the efficiency of social housing repairs, maintenance and improvements to better meet the needs of social housing residents.”

Professor Ekart will be joined by Dr Alina Patelli as academic supervisor. Dr Patelli brings experience of software development in the commercial sector as well as expertise in applying optimisation techniques with a focus on urban systems. She said: “This is a great opportunity to enhance state-of-the-art optimisation and machine learning in order to fit the needs of the commercial sector and deliver meaningful impact to Thames Laboratories.”

How Colorism Impacts Professional Achievement
Melissa J. Williams is associate professor of organization and management at Emory University’s Goizueta Business School. She investigates what happens when social identities collide with workplace hierarchies, and the consequences of putting people in positions of power and leadership. Here she looks at something less documented: the extent to which our appearance is stereotypically Black or white, and what that means for our prospects.

Rosa Parks made history on December 1, 1955, when she refused to relinquish her bus seat to a white passenger. Her simple gesture of defiance ignited a city-wide bus boycott in Montgomery, Alabama, and has gone down in the annals as a pivotal moment for the social justice movement in the United States. However, Parks was not the only African American to make a stand against racial segregation. Nor was she the first. In March of the same year in the same city, 15-year-old Claudette Colvin also refused to give up her seat to a white woman on a Montgomery bus. So why isn’t she a household name?

In part, Colvin’s age was a factor. The National Association for the Advancement of Colored People and other Black civil rights groups got behind Parks, reasoning that an older woman would be better equipped to withstand the controversy. But as Colvin herself stated, there were other factors at play. There was something about Parks’ appearance that gave her more leverage, as Colvin explained in Philip Hoose’s award-winning book on the civil rights movement. She had the “right hair and the right look.” Not only that, but her appearance “was the kind that people associate with the middle class. She fit that profile.”

Success isn’t black or white. It’s shades of…white.

Colorism has long been documented in the U.S. and elsewhere.
Discrimination against human beings on the basis of their facial features, hair, and skin color transcends race—it is prevalent even within groups that share the same ethnic identity, where lighter skin tones are perceived to be more valuable than dark. Research over the years has shed light on the nefarious effects of colorism or shadeism in terms of equity and access to opportunity. But a new landmark study by Associate Professor of Organization & Management Melissa Williams and Goizueta colleagues, PhD student Tosen Nwadei and Roberto C. Goizueta Chair of Organization & Management Anand Swaminathan, looks at just how Black or white someone appears—and how this shapes the way others see their potential, as well as the kinds of professional outcomes they can expect.

What Williams and her co-authors, who also include James B. Wade from George Washington University and C. Keith Harrison and Scott Bukstein of the University of Central Florida, find in their studies is that Black professionals are less likely to be promoted to leadership roles. What’s more, for Black professionals whose physical appearance is more Black-stereotypical, their chances of holding a leadership role drop from 12 percent to a mere seven percent. For white professionals, on the other hand, having a more white-stereotypical appearance is an advantage for leadership: looking more stereotypical as a white person increased their chances of holding a leadership role from 32 percent to 43 percent.

Williams and colleagues ran both an archival study and a lab experiment with volunteers to discover the extent to which degrees of ethnicity in appearance influence perceptions of a person’s potential for leadership and actually predict their likelihood of success in an industry. While the science unequivocally shows that white people enjoy advantages over Black people in opportunity and outcome across the board, Williams et al.
were also interested in exploring what she calls the “continuum of race”: the more nuanced racial characteristics and differences that shape how the world sees us.

There’s an assumption that everyone within the same ethnic group—Black or white—will experience the same degree of bias and prejudice, or acceptance and success. And we wanted to push back on that idea to really explore how degrees of whiteness or Blackness play out in people’s minds and shape how they read you physically.
-Associate Professor of Organization & Management Melissa Williams

Previous research shows the link between persisting in STEM-based majors in college and how much students are perceived to look “like their race,” she says. Those who are perceived to look less typically Black tend to make more friends outside their ethnic group—a boundary-crossing behavior that can help drive careers.

To test these ideas, Williams and co-authors ran two studies. First, they accessed publicly available data including photographs, professional background, and positions from one large industry within the U.S.: American college football. College football is really rich in data. You can access job titles, photos, leadership, and non-leadership roles; and you can separate individuals out into head coaches and position coaches, who have overseeing roles but who are not leaders per se.

Separately, Williams et al. recruited a group of volunteers to look at the images of the football coaches: a mix of Black and white head and position coaches. These volunteers were asked to rate how typical they perceived each individual’s appearance to be of European or white Americans, or of Black Americans, ascribing each person a score out of five based on features such as their skin color, hair, eyes, nose, cheeks, and lips. These scores were then regressed—or cross-referenced—with the position held by the individuals in the photos to determine the relationship between their racial stereotypicality and their leadership role.
Crunching the numbers, Williams found a direct correlation between the degree of perceived whiteness or Blackness of the coaches and how likely they actually were to be successful leaders. “We do find a kind of consensus in people’s view of what it means to be Black or white straight off,” says Williams. “So we do all seem to agree on the physical attributes of race. But it gets really interesting when you regress the scores that these photos get and compare them with the actual jobs these guys hold.”

What we see is that, controlling for their age, attractiveness, and professional experience, the white guys who look less stereotypically white have a 32 percent likelihood of occupying leadership roles. This rises to 43 percent for the men who look more like a stereotypical white guy.

For Black professionals, the inverse is true, she notes. The more typically Black an individual looks, the less probability there is that he occupies a leadership job. Specifically, that figure drops from 12 to seven percent. So benchmark leadership probability is not only already lower for Black individuals, but drops even further when people are deemed to look “more typically Black,” says Williams.

A follow-up experiment invited volunteer football fans to compare how they saw the potential future success of two same-race college football players—one more stereotypical in appearance than the other. The results confirmed what Williams et al. suspected: 70 percent of the time, participants chose the more-typical white individual over the less-typical white individual as having greater leadership potential. In other words, the more white a white person looks, the more they are seen as leadership material. These findings should translate into an imperative, says Williams; and that is to think more broadly about race and how it impacts life outcomes. Because race is not a uniform experience, she says.
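The group comparisons above can be made concrete with a toy calculation. The counts below are hypothetical, constructed only to mirror the reported percentages; they are not the study's data, and the actual analysis was a regression controlling for age, attractiveness, and experience.

```python
def leadership_rate(records, race, typicality):
    """Share of a race/typicality group holding a leadership role."""
    group = [r for r in records if r["race"] == race and r["typicality"] == typicality]
    return sum(r["leader"] for r in group) / len(group)

# Hypothetical tallies per 100 coaches, chosen to match the article's figures.
records = (
    [{"race": "white", "typicality": "high", "leader": 1}] * 43
    + [{"race": "white", "typicality": "high", "leader": 0}] * 57
    + [{"race": "white", "typicality": "low", "leader": 1}] * 32
    + [{"race": "white", "typicality": "low", "leader": 0}] * 68
)

print(leadership_rate(records, "white", "high"))  # 0.43
print(leadership_rate(records, "white", "low"))   # 0.32
```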
“Organizations might want to look beyond just ticking the box when it comes to diversity and inclusion, and give deeper thought to who they want to recruit, support and push forward in representation. For white people, paying attention to whiteness—the types of white people who enjoy advantages in leadership—can be useful in reframing certain questions. A good place to start might be for leaders to ask: do I want to support people who look like me? Because the face you choose can ultimately help disrupt, or reinforce, the stereotype.” Interested in learning more or connecting with Melissa J. Williams, associate professor of organization and management at Emory University’s Goizueta Business School? She's available to speak about this subject - Simply click on her icon now to arrange an interview today.

#Expert Insight: Price Image Formation: When is HILO low?
When consumers choose where to shop, they often consider a store’s price image—does the store have a reputation for having lower or higher prices than its competitors? A store’s reputation for lower prices doesn’t happen by chance. Choosing a pricing strategy is one of the biggest pricing decisions a retailer makes.

In “When is HILO Low? Price Image Formation Based on Frequency versus Depth Pricing Strategies,” a recently published paper in the Journal of Consumer Research, co-authors Ryan Hamilton, associate professor of marketing, Ramnath Chellappa, associate dean and Goizueta term professor of information systems and operations management, and Daniel Sheehan, associate professor of marketing and supply chain at the University of Kentucky’s Gatton College of Business and Economics, explore a gap in existing pricing strategy research. “Our research doesn’t threaten the validity of the previous research,” said Hamilton, “but what it does do is point to the limited generalizability of the previous research.” This is because previous pricing strategy research used the same research paradigm: It emphasized consumers’ perspectives as they compared prices simultaneously across multiple stores.

Hamilton, Chellappa, and Sheehan wondered what would happen if they studied consumers as they compared prices of products within a store, instead of across stores. When they did so, the authors found that “many of the prominent findings of previous research are reversed,” they wrote. “We propose that when stores’ prices are evaluated one at a time, or in isolation, consumers will rely on the most salient contextual clues available—within-category price information—when forming a price image.” For example, rather than research the price of peanut butter across multiple grocery stores, shoppers often evaluate the price of peanut butter by comparing the prices of the brands on the shelf in front of them.
To illustrate their point, the authors explore two basic pricing strategies: a frequency pricing strategy and a depth pricing strategy. Every Day Low Pricing (EDLP) is a frequency strategy where stores offer small price advantages over their competitors on many items. Walmart employs an EDLP strategy. A common depth strategy is a high-low (HILO) pricing strategy. HILO offers infrequent, but deep, price advantages over competitors. Macy’s utilizes this strategy. “The conventional wisdom is that EDLP equals low price,” explained Hamilton. But he and his co-authors argue that in a non-theoretical environment, the effectiveness of EDLP strategies is less clear. The trio hypothesized that the context in which consumers encounter prices has important implications. Specifically, that the frequency advantage of EDLP identified in earlier research was limited to those scenarios where customers were able to simultaneously compare prices across multiple stores. In contrast, they argue that a depth advantage, one resulting from HILO pricing, will be more likely when consumers evaluate store prices separately. “Without simultaneous comparisons across stores, consumers shift from using across-store prices as reference points to using within-category reference prices. As a result of this shift, deep price advantages are easier to evaluate than frequent price advantages and therefore more influential on customers’ formation of price image,” they write. “Because our theoretical account is based on within-category external reference prices, we predict that a depth store is likely to be evaluated as having a lower price image than a frequency store even when consumers are exposed to the prices of just one store,” they write. The authors tested their hypothesis using six separate experiments. All but one of the experiments studied national brands commonly found in grocery stores. (The other experiment used televisions.) 
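The contrast between the two strategies can be sketched numerically. The prices below (in cents) are hypothetical and invented only for illustration; they are not from the paper's experiments.

```python
# Ten items in one category; both stores give up the same total margin.
regular = [500] * 10                      # within-category reference prices

edlp = [p - 20 for p in regular]          # frequency strategy: small cut on every item
hilo = [regular[0] - 200] + regular[1:]   # depth strategy: one deep cut, rest full price

print(sum(regular) - sum(edlp))  # 200 cents of total savings
print(sum(regular) - sum(hilo))  # 200 cents of total savings
# Against a 500-cent shelf price, the single 200-cent markdown is far more
# salient than ten 20-cent cuts, which is the mechanism the authors propose
# for why a depth store can earn the lower price image when evaluated in isolation.
```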
In the experiments where participants saw store prices simultaneously, the experiment replicated the frequency advantage noted in previous research. But when participants did not have simultaneous price information across stores, the previous findings didn’t hold. “What we found is that if you distance those price comparisons even a little bit - showing a price on one webpage and then seeing a price on another webpage - that’s enough to completely reverse the findings,” explained Hamilton. In an isolated setting, “a couple of really low prices” will better communicate a store’s low-price image, said Hamilton. “That’s the big story.”

While excited about the findings of their research, Hamilton is quick to point out the limits of their hypothesis, such as when pricing information isn’t readily available or when the consumer isn’t familiar with the brands of the product they wish to buy. “People want a simple answer that works everywhere, but it’s more nuanced than that,” said Hamilton. “This [hypothesis] is going to work better under a certain set of circumstances than others because people process price information differently.”

The insights aren’t only useful for retailers. While using a store’s price image to shop can be efficient from a consumer standpoint, the prices aren’t always low just because the store has a reputation for low prices. A retailer’s price image has vulnerabilities. Not everything at Costco is cheaper than it is at Whole Foods. Southwest Airlines may not always be cheaper than Delta Air Lines. “If you’re shopping for things you really care about,” advised Hamilton, “it might be worth doing more across-store price comparisons.” Chellappa is excited about how the paper addresses gaps in traditional economic models of pricing.
“While much research in economics and information systems focuses on the availability of information for price comparison, the cognitive aspect of ‘how’ consumers compare and process such information is only explicated by studies such as ours. Looking at pricing through a behavioral lens, capturing consumers’ real shopping behavior reveals great insights that will be useful for firms,” he said. Interested in learning more about consumer behavior and Price Image Formation Based on Frequency versus Depth Pricing Strategies? Then let us help with your coverage and questions. Ryan Hamilton and Ramnath Chellappa are both available to speak regarding this important topic - simply click on either expert's icon now to arrange an interview today.

Goizueta Faculty Member Uncovers Impact of Remote Learning on Educational Inequality
In 2020, the world went into lockdown. Learning in school became learning from the couch. Rather than listening to teachers in person behind a desk, high school students had to find a computer to stream their lectures and lessons. What happens to educational inequality in a digital-first, remote-learning environment? Whereas students are traditionally bound by their brick-and-mortar schools and the limitations of funding in those areas, what happens when the walls are removed and students have access to the teachers, knowledge, and peers from other areas?

Ruomeng Cui and co-researchers, Zhanzhi Zheng from the University of North Carolina at Chapel Hill and Shenyang Jiang from Tongji University, decided to find out. In their 2022 paper, Cui and her colleagues looked at the performance of high school students in developing and developed regions of China. We thought that remote learning might reduce the inequality gap in education because when students are learning offline, they’re restricted by their local resources. “It’s quite obvious that developing regions don’t have good resources, experienced teachers, or competitive peers—they often have inferior educational resources in comparison to developed regions,” explains Cui, associate professor of information systems and operations management. “We thought the accessibility of remote learning could help reduce this knowledge gap and help students in developing regions improve their learning outcomes.”

Analyzing Education in Developed and Developing Areas

The idea for the paper, “Remote Learning and Educational Inequality,” published earlier this year, stemmed from another of Cui’s papers, which looked at the academic productivity of women as a result of the COVID-19 lockdowns. “We wanted to study whether the switch to remote learning impacts educational inequality. Does it make it better or worse?” says Cui.
“We are the first ones to offer empirical evidence on such a granular level about a large-scale data set.”

The group analyzed the Chinese college entrance exam from 2018 through 2020, which students take during the last few weeks of high school; the test score is a requirement for undergraduate admission in China. It’s common for high schools to announce the number of students who scored 600 or higher (out of 750 total points). Using 1,458 high school exam results from 20 provinces, the group found that in 2020, when remote learning became the norm, “the number of students scoring above 600 points in developing regions increased by 22.22 percent,” in comparison to developed regions. Remote learning significantly improved the learning outcomes of students in developing regions. We should think about encouraging the adoption of remote learning in education.

However, Cui and her co-researchers wanted to go a step further. Because the entrance exams are summaries of student data, they surveyed 1,198 students to drill down and ensure that these results came from remote learning rather than other factors. Respondents were asked to rate aspects of their remote-learning experience, such as access to digital devices, their proficiency in using software, how reliable their internet was, how they interacted with peers and teachers, and their access to online educational resources. The researchers found that students in developing regions were able to better connect with peers and teachers, and the students believed that “their learning efficiency was greater” because of the remote learning.

Education inequality is not only a problem in China. It’s everywhere. It’s across the world. Having access to better educational resources online can be applied anywhere. However, there is one caveat to their findings: Remote learning is beneficial, but students need devices and the infrastructure to support online learning, which is often lacking in developing regions or underserved areas.
“We need to support, build, and develop the digital technology capability that enables the effectiveness of remote learning,” says Cui. Are you a reporter looking to know more about the impact COVID had on education and how inequality plays a role in how we educate students during a pandemic? Then let us help with your coverage and questions. Ruomeng Cui is an Associate Professor of Information Systems & Operations Management at Emory University's Goizueta School of Business. Ruomeng is available to speak with media regarding this topic - simply click on her icon now to arrange an interview today.

AI-Generated Content is a Game Changer for Marketers, but at What Cost?
Goizueta’s David Schweidel pitted man against the machine to create SEO web content, only to find that providing an editor with bot-generated content trounces the human copywriter every time. Good news for companies looking to boost productivity and save cash, he says. But could there be other hidden costs?

In December 2022, The New York Times ran a piece looking back on the year’s biggest consumer tech updates. The review was mixed. Ownership shifts in the world of social media garnered special mentions, but hardware innovations had been largely “meh,” mused the Times. There was one breakthrough area that warranted attention, however: AI-powered language-processing tech capable of generating natural-looking text, the same technology that powers familiar chatbots. And one such technology could well be poised to “invade our lives in 2023.” Earlier in December, AI research lab OpenAI released the latest update to its Generative Pre-trained Transformer (GPT) technology. The latest iteration, ChatGPT, immediately went viral. Here was an AI assistant that sounded intelligent. Not only could it answer any question thrown its way without supervised training, but when prompted it could also write blog posts, as well as find and fix bugs in programming code. ChatGPT could draft business proposals and even tell jokes. All of this at a speed that beggared belief.

Since its first release in 2020, OpenAI’s GPT technology has powered through a slew of updates that have seen its capabilities leap forward “by light years” in less than 24 months, says Goizueta Professor of Marketing David Schweidel. For businesses looking to harness this rapidly evolving technology, the potential is clearly enormous. But aren’t there also risks that industry and consumers alike will need to navigate?
Schweidel is clear that the academic community and initiatives such as the Emory AI Humanity Initiative have a critical role in asking hard questions—and in determining the limitations and dangers, as well as the opportunities, inherent in tech innovation—because, as he puts it, “these things are going to happen whether we like it or not.”

Man Versus Machine

To that end, Schweidel and colleagues from the Vienna University of Economics and Business and Modul University Vienna have put together a study looking at how well natural language generation technologies perform in one specific area of marketing: drafting bespoke content for website search engine optimization, better known as SEO. What they find is that content crafted by the machine, after light human editing, systematically outperforms its human counterparts, and by a staggering margin. Digging through the results, Schweidel and his colleagues can pinpoint an almost 80 percent success rate for appearing on the first page of search engine results with AI-generated content. This compares with just 22 percent for content created by human SEO experts. In other words, AI content passed to a human editor is roughly four times more effective than a skilled copywriter working alone.

Reaching these findings meant running two real-time, real-world experiments, says Schweidel. First, he and his colleagues had to program the machine, in this case GPT-2, an earlier incarnation of GPT. GPT relies on natural language generation (NLG), a software process that converts manually uploaded input into authentic-sounding text or content, comparable in some ways to the human process of translating ideas into speech or writing. To prepare GPT-2 for SEO-specific content creation, Schweidel et al.
started with the pre-trained GPT-2 and then let the machine do the heavy lifting: searching the internet for appropriate results based on the desired keyword, scraping the text of the websites, and updating GPT-2 to “learn” what SEO looks like, says Schweidel.

“We partnered with an IT firm and a university to run our field experiments. This meant creating SEO content for their websites using GPT-2 and actual human SEO experts, and then doing A/B testing to see which content was more successful in terms of landing in the top 10 search engine results on Google. So this was an opportunity to put the AI bot to the test in a real-world setting to see how it would perform against people.”

The results point to one clear winner. Not only did content from GPT-2 outperform its human rivals in SEO capabilities, it did so at scale. The AI-generated content scored a daily median of seven or more hits on the first page of Google search results. The human-written copy didn’t make it onto the first results page at all. On its best day, GPT showed up inside the top 10 search results for 15 of its 19 pages of search terms, compared with just two of the nine pages created by the human copywriters: a success rate of just under 80 percent compared to 22 percent.

Savings at Scale

The machine-generated content, after being edited by a human, trounces the human-only effort in SEO. But that’s not all, says Schweidel. The GPT bot was also able to produce content in a fraction of the time taken by the writers, reducing production time and associated labor costs by more than 90 percent, he says. “In our experiments, the copywriters took around four hours to write a page, while the GPT bot and human editor took 30 minutes.
Now assuming the average copywriter makes an annual $45K on the basis of 1,567 hours of work, we calculate that the company we partnered with would stand to save more than $100,000 over a five-year period just by using the AI bot in conjunction with a human editor, rather than relying on SEO experts to craft content. That’s a 91 percent drop in the average cost of creating SEO content. It’s an orders-of-magnitude difference in productivity and costs.”

But there are caveats. First off, there’s the quality of the machine-generated content to consider. For all its mind-boggling capabilities, even the newly released ChatGPT tends to read as somewhat sterile, says Schweidel. That’s a problem both in terms of Google’s guidelines and brand coherence. Human editors are still needed to polish copy that can sound a little “mechanical.”

“Google is pretty clear in its guidelines: content generated by machines alone is a definite no-no. You also need to factor in the uncanny valley effect, whereby something not quite human can come off as weird. Having an editor come in to smooth out AI content is critical to brand voice as well as the human touch.”

Asking the Big Questions

Then there are the moral and metaphysical dimensions of machine learning and creativity that beg an important question: just because we can, does that mean we should? Here, Schweidel has grave reservations about the future of ChatGPT and its ilk.

“The potential of this kind of technology is extraordinarily exciting when you think about the challenges we face, from productivity to pandemics, from sustainable growth to climate change. But let’s be very clear about the risks, too. AI is already capable of creating content—audio, visual and written—that looks and feels authentic. In a world that is hugely polarized, you have to ask yourself: How can that be weaponized?”
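The headline figures above follow from straightforward arithmetic on the numbers the study reports. A minimal sketch of that arithmetic, assuming labor time is the only cost driver; the pages-per-year volume is a hypothetical figure chosen here purely to illustrate the scale of the quoted five-year savings, and the published 91 percent figure presumably folds in costs not modeled below:

```python
# Arithmetic behind the study's reported figures (counts and rates from the article).

# SEO success rates from the A/B test page counts (appearing in Google's top 10).
gpt_rate = 15 / 19      # GPT-2 + human editor: ~78.9%, "just under 80 percent"
human_rate = 2 / 9      # human copywriters alone: ~22.2%, "22 percent"

# Labor-cost model: $45K per year over 1,567 working hours.
hourly_rate = 45_000 / 1_567          # ~$28.72 per hour
copywriter_page = 4.0 * hourly_rate   # ~$114.87 per page (4 hours each)
bot_editor_page = 0.5 * hourly_rate   # ~$14.36 per page (30 minutes each)

# Hypothetical, illustrative volume: ~200 pages/year lands near the quoted savings.
pages_per_year = 200
five_year_saving = (copywriter_page - bot_editor_page) * pages_per_year * 5

print(f"Success rate: {gpt_rate:.0%} vs {human_rate:.0%} "
      f"({gpt_rate / human_rate:.1f}x)")            # 79% vs 22% (3.6x)
print(f"Five-year saving: ${five_year_saving:,.0f}")  # comfortably over $100,000
```

Note that labor time alone yields an 87.5 percent per-page cost reduction (30 minutes versus four hours), so the study's 91 percent figure evidently includes savings beyond the hourly wage modeled here.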
At the end of the day, says Schweidel, the large language models powering these generative AIs are essentially “stochastic parrots”: trained mimics whose output can be hard to predict. In the wrong hands, he warns, the potential for misinformation—and worse—could well be “terrifying.”

“Shiny new tech is neither inherently good nor bad. It’s human nature to push the boundaries. But we need to ensure that the guardrails are in place to regulate innovation at this kind of pace, and that’s not easy. Governments typically lag far behind OpenAI and companies like it, and even academics have a hard time keeping up. The real challenge ahead of us will be innovating the guardrails in tandem with the tech—innovating our responsible practices and processes. Without effective safeguards in place, we’re on a path to potential destruction.”

Covering AI or interested in knowing more about this fascinating topic? Then let our experts help with your coverage and stories. David Schweidel is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University's Goizueta Business School. Simply click on David's icon now to arrange an interview today.

Georgia Southern University’s College of Science and Mathematics has been awarded a six-year grant of $493,065 from the Howard Hughes Medical Institute (HHMI) Inclusive Excellence 3 (IE3) initiative. Georgia Southern is among a select group of 104 schools that have received an HHMI IE3 grant to support diversity, equity and inclusion.

“Science is about building, witnessing and collaborating with each other, which is why Georgia Southern is so proud to receive this grant,” said Georgia Southern Provost and Vice President of Academic Affairs Carl Reiber, Ph.D. “Our inclusive initiatives are breaking down the walls that have separated science from interested students.”

The HHMI IE3 initiative challenges U.S. colleges and universities to increase student participation in the sciences, focusing especially on populations that have been historically underrepresented in the field. The Georgia Southern IE3 leadership team includes Karelle Aiken, Ph.D. (program director), Tricia Muldoon Brown, Ph.D. (co-director), Sara Gremillion, Ph.D. (co-director), Checo Colón-Gaud, Ph.D., Issac Taylor and Delana Schartner, Ph.D.

“The IE3 initiative is tapping into the transformative power of collaboration; the ability of a critical mass to drive cultural change. As such, an ingenious mandate of this grant is that its 104 institutional awardees work on their goals in teams of Learning Community Clusters (LCC),” said Aiken.

Georgia Southern’s IE3 LCC hub, LCC4, includes 13 other institutions from across the U.S., all of which are seeking to answer a specific challenge: How can an institution evaluate effective inclusive teaching, and then use that evaluation in its rewards system, including faculty promotion and tenure? The IE3 initiative has been rolled out in two phases. The most recent award marks the beginning of the second phase, and so the work of the Georgia Southern team and their LCC4 colleagues is ongoing.
At Georgia Southern, over the next six years the IE3 initiative will support training for faculty and student leaders centered on inclusive teaching strategies and the effective evaluation of those strategies; an annual COSM IE3 Spring Speaker Series (established in 2022); the continued development of a new faculty mentorship program piloted in 2022 by Inclusive Excellence Faculty Fellows; student- and faculty-led initiatives geared toward cultivating inclusive learning environments; and more.

Looking to know more? Then let us help. For more information or to arrange an interview with Carl Reiber or anyone from the Georgia Southern IE3 leadership team, simply reach out to Georgia Southern Director of Communications Jennifer Wise at jwise@georgiasouthern.edu today.

A.I. and Higher Education: The Rise of ChatGPT
ChatGPT. Maybe you’ve heard of it. Colleges and universities certainly have. It’s the chatbot that uses artificial intelligence (A.I.) technology to generate sentences from only a brief prompt, writing anything from college-level papers to fanfiction. And as one might expect, the popular chatbot is taking the academic world by storm, raising questions about trust, academic integrity and even the future of college admissions. We turned to Seth Matthew Fishman, PhD, Assistant Dean of Curriculum and Assessment and associate teaching professor in the Department of Education and Counseling at Villanova University, to get his thoughts.

Q: What makes ChatGPT different and why is it causing such a stir?

Dr. Fishman: The use of chatbots is not a new debate in higher education. But ChatGPT and other similar free software certainly add a complex layer that we are only now starting to have conversations about. There will be an ongoing debate about trust: Who wrote the material we are reading? To what extent, if any, will it impact faculty members? There are also A.I. digital images, graphics and design: To what extent do these programs impact our creative arts and design programs? I think these fields will mostly embrace A.I., though I can see issues of copyright infringement and artist control and attribution.

Q: How are other chatbots being used in academic settings?

DF: A.I. use already impacts higher education. Ask any faculty member teaching a foreign language course that requires translation, and they will have tales of work submitted by students who used online translation software. But benefits do exist for students and faculty regardless: we’re able to interact a bit more with others, reducing some language barriers. I expect we will see hundreds of articles about ChatGPT’s impact on education; there are likely several dissertations underway, and I expect to see ChatGPT and similar software cited in papers and likely even in authorship groups.
Q: What will the impact of ChatGPT be on the college application and admissions process?

DF: I think we’ll see conversations from college admissions professionals on the impact of ChatGPT on higher education admissions. For example, key components of college applications, such as essays and writing samples, may be affected. And ChatGPT may also be used to write some rather good letters of recommendation.

Q: What does the future hold? Will ChatGPT and similar A.I. programs maintain popularity?

DF: I’m curious whether A.I. will be used to generate employment cover letters. Additionally, many corporations already use A.I. to sift through candidate applications to narrow down their applicant pools, so it may continue to transcend academia. I also expect to hear more from our philosophy and ethics experts to help us better understand the societal and educational implications of using A.I. in these ways. These kinds of conversations will be had with our students to engage them as partners in the learning experience, and we will probably generate new ideas and different perspectives from doing just that.

Aston University appoints new pro-vice-chancellor and executive dean of business and social sciences
Professor Zoe Radnor has been appointed as Pro-Vice-Chancellor and Executive Dean of the College of Business and Social Sciences. She has had a successful career in higher education spanning more than 25 years, and will be joining Aston University in spring 2023.

Aston University has appointed Professor Zoe Radnor as the new Pro-Vice-Chancellor and Executive Dean of the College of Business and Social Sciences. Professor Radnor will succeed Professor George Feiger, who will be standing down after 10 years of leadership of Aston Business School and the College of Business and Social Sciences.

Professor Radnor will be joining Aston University from The University of Law (ULaw), where she is currently Provost and Deputy Vice-Chancellor, specifically focused on leading the diversification of the academic portfolio, including building an academic model for the provision of high-quality, innovative teaching and thought leadership. In addition, she is leading the TEF submission at the institution.

Prior to her executive role at ULaw, she was Vice-President for Strategy and Planning; Equality, Diversity and Inclusion and Professor of Service Operations Management at City, University of London, where she led the development of the University’s EDI strategy. In this role she also led the creation of the new enabling Civic Strategy and established the new institution-wide Change Support Unit. Before City, Professor Radnor was the founding Dean of the School of Business at the University of Leicester, and prior to that, as Associate Dean Teaching and Learning, she led the development of new curriculum offerings for the Loughborough University campus in London.

Professor Radnor is a Fellow of the Academy of Social Sciences (FAcSS) and of the British Academy of Management (FBAM). She is also a member of the Athena Swan Governance Committee for Advance HE.
Her main research interests are in performance, process improvement and service value within public sector organisations. She has led research projects for a number of Government and healthcare organisations, evaluating the use of ‘lean’ and associated techniques and continues to maintain a strong ongoing research profile. Professor Aleks Subic, Vice-Chancellor and Chief Executive of Aston University, said: “I am looking forward to welcoming Professor Radnor to the Executive Team at what is a hugely exciting period of development for the University and to working with her as we shape our Aston University 2030 Strategy. Zoe brings significant leadership experience to the team and ambition in line with our bold vision. “I would also like to take this opportunity to acknowledge the significant contribution made by Professor George Feiger during his leadership of Aston Business School and the College of Business and Social Sciences over the last 10 years.” Professor Radnor said: “I am delighted to be joining such a prestigious and forward-thinking University and College. “The reputations of the College of Business and Social Sciences and of Aston University generally and the strategic vision of the new Vice-Chancellor and University leadership are what attracted me to this exciting role. I can’t wait to get started working with so many talented and innovative new colleagues.” Professor Radnor will be taking up her post in Spring 2023.






