Experts Matter. Find Yours.

Connect for media, speaking, professional opportunities & more.

Expert Insight: Fake News, Fake Reviews: Building Trust with Online Shoppers

Online customer reviews have become a critically important cog in the sales conversion process in recent years. Studies show that 97 percent of consumers read product reviews and ratings, and that positive reviews can almost triple the likelihood of a purchase. As customers do more and more of their shopping online, they are turning in droves to the likes of Yelp, TripAdvisor, and Google Reviews to seek out opinions, recommendations, and feedback from other users before pushing through the final part of the sales funnel. As a result, these third-party review sites have experienced exponential growth.

But there's a caveat: trust. The success of Yelp and its competitors is wholly contingent on how trustworthy their users perceive them to be; on the transparency and authenticity of the content published and the sources of that content. In an era of disinformation, with fake reviews and mass-generated AI content precipitously on the rise, securing (and keeping) user trust is paramount.

The Five Keys to Fighting Fakery

Goizueta Business School's Sandy Jap has some suggestions. Together with colleagues Ben Beck of Brigham Young University's Marriott School of Business and Stefan Wuyts of Penn State's Smeal College of Business, Jap, who is the Sarah Beth Brown Professor of Marketing, put together a series of studies to test the kinds of measures and mechanisms that platforms can deploy to win user confidence. And it turns out there's one tactic that works more effectively than any other: actively monitoring the authenticity of user reviews, and being open and transparent about doing so.

Jap and her colleagues scoured the latest research and data on marketing, governance, and identity disclosure to pinpoint the mechanisms that best mitigate online fakery while simultaneously building trust among platform users. They identified five.
“We worked through the literature and were able to whittle these down to five core practices that are robustly effective at building trust,” says Jap. “They are monitoring, exposure, community building, status endowment, and identity disclosure. Doing these five things can signal to your users that you are committed to being a guardian of their trust, so to speak.”

Monitoring, or evaluating reviews for their authenticity, and exposure, or naming firms that pay for and propagate fake content, are mechanisms directed at the rogue firms that spread fakery and misinformation, explains Jap. Meanwhile, community building and status endowment focus on reviewers. Community building is about enabling authentic, transparent interactions between consumers and reviewers; an example might be allowing consumers to ask questions and reviewers to respond directly.

“Status endowment is where a platform verifies and acknowledges the credibility or helpfulness of a reviewer in some way. Yelp and others use things like badges or reviewer ratings, which are earned over time and which make it hard for fake reviewers to game their systems,” says Jap.

Identity disclosure is the practice of having reviewers provide personal information (their name, picture, or location, for instance) before they can post content. And while this approach can keep fabrication and false profiles in check, it also raises certain tradeoffs, says Jap.

“Anonymity online has long been understood as something of a disinhibitor: a factor that enables users to speak more freely and openly. It can be democratizing in the sense that it removes or lessens prejudice and bias around things like race, social class, or physical appearance,” she says.
“Of course, having people share personal data on your platform can also open up a can of worms around privacy and identity theft, which are major considerations; so there’s a balancing act needed with this.”

To test the efficacy of all five trust-building policies, including identity disclosure, Jap and her colleagues ran a series of experiments and studies. They invited volunteers to rate how the presence or absence of these mechanisms impacted the trustworthiness of a platform. One study saw them parse things like domain authority and traffic across 25 online review sites against how many (or few) of the five mechanisms each deployed. Elsewhere, the team used surveys to assess how users ranked the different mechanisms in terms of platform trust, above and beyond other factors such as the quantity of reviews published, say, or the expertise of different reviewers.

The Bottom Line: Bust Bogus Reviews

After crunching the data, Jap and her co-authors found that while all five trust-building mechanisms were valued and important to platform users, the practice of monitoring for fake reviews and reviewers, and broadcasting that fact clearly, was by far the most effective.

“Doing all five of these things (monitoring, exposing, community building, status endowment, and ID disclosure) is important if you want to earn and keep the trust of your users,” says Jap. “We found that the more of these mechanisms platforms incorporate, the better their domain authority, Alexa site ranking, backlinks, and organic site traffic. Based on our findings, monitoring your content and communicating that you’re doing this is by far the most powerful cue that you are trustworthy. So that’s where we’d say platforms might want to focus their spend.”

Many of the biggest review platforms have already taken note of these insights.
Yelp recently shared a post to its official blog welcoming the finding that, of the 25 sites analyzed in Jap’s study, theirs is one of two platforms that actively implement all five mechanisms: “After examining 25 review platforms, the study found that Yelp is one of two platforms that applies all five mechanisms and, as the research states, has become a guardian of trust for review information.”

Meanwhile, Jap stresses that these findings should be relevant to any business that is focused on “combating online review fakery.”

“All businesses today face the challenge of managing their word-of-mouth reputation. Any firm interested in sharing and leveraging points of view around its products or services, be it a small online retail store or an Amazon, is going to want to go the distance, and be seen to do so, in going to war on fakery and disinformation.”

Are you a journalist interested in learning more about the importance and trustworthiness of online reviews? Sandy Jap is available to speak with media; simply click on her icon now to arrange an interview today.

Sandy Jap
5 min. read

Finding Truth among the Tweets: Our expert weighs in on the role social media plays during war.

With the Israel-Hamas war raging on, social media provides a source of information for many individuals to stay up to date. Across platforms there are reliable sources, but there are also those with an agenda to spread falsehoods and blatant lies, and to sow doubt with doses of mis- and disinformation. It's a topic Goizueta Business School professor David Schweidel is watching closely.

"We are seeing once again the need for the regulation of social media platforms," says Schweidel. "Platforms have a financial incentive to serve up the most provocative and arousing content, and content moderation is often at odds with financial goals."

Social media is being flooded with content, much of it misinformation, and social platforms are unwilling or unable to effectively moderate what’s being posted.

"Beyond the likely reduction in revenue, implementing content moderation at scale is expensive and difficult. If viewed from a short-term financial perspective, allowing for a free-for-all is less costly and will result in more user engagement, which drives revenue," Schweidel adds.

And it is not as if legislators and lawmakers are not aware. As of today, social media platforms aren’t liable for the content posted on them, under Section 230 of the Communications Decency Act. Two recent lawsuits sought to challenge Section 230, but the Supreme Court declined to take such action. These challenges were based on platforms actively promoting content through their algorithms, thereby going beyond simply being intermediaries providing access to content posted online by others. Some, such as the ACLU, view this as allowing for free speech online.

There's a lot more to know, such as:

The challenges in identifying real vs. fake content
Which platforms are being effective in moderating content
How US and EU laws vary in terms of regulating misinformation on social media platforms

And that's where we can help. David A. Schweidel is Professor of Marketing at Emory University’s Goizueta Business School.
He's a renowned marketing analytics expert focused on the opportunities at the intersection of marketing and technology. David is available to speak with media regarding this important topic; simply click on his icon now to arrange an interview today.

David Schweidel
2 min. read

Sorting through the socials: Augusta University expert explains why students need more literacy and awareness when it comes to social media

In this day and age, people of all ages are often on social media. While most of the platforms can be engaging for good, there are always bad actors out there passing along misinformation. That’s the type of content younger students need to be aware of, according to an Augusta University faculty expert.

Stacie Pettit, PhD, program director of the Master of Education in Instruction in the College of Education and Human Development, suggests more media literacy and awareness of social media should be taught to students. With so many videos and posts claiming to be informative, how is one supposed to discern what is factual and what is not? Pettit feels people need to be more aware of how to tell when something is legitimate as opposed to inaccurate.

“Knowing what legitimate research is and what’s not, especially in this political climate, it can be tough to tell,” said Pettit. “More can be done in them understanding how deep it goes and what you search for, you’re going to get things that are skewing your mind to what you already want to believe. I feel like that component can be deeper.”

Pettit realizes younger students know how to use social media, but using it in a responsible way can be just as important. People may post videos claiming one thing, but without fact-checking, it may be inaccurate and can be a dangerous tool to mold a younger person’s mind.

“If you already have your mind made up about something, you’re going to find things. It’s like the old phrase, ‘If you’re looking for a yellow cab, you’re going to find a yellow cab.’ This may be your context, your culture that you’re coming from, but put yourself in this place: how might they feel? Knowing there isn’t just one way to think about something, that it’s not just a black and white answer to all these critical issues, is important,” Pettit added.
She knows it’s of the utmost importance for students to realize that every talking head they see in a video on social media isn’t always speaking the truth. Fact-checking, finding another source to support a view, and paying attention to the source in the first place can be key pieces of the puzzle students can use to assess the legitimacy of a post from the start.

Amid all the misinformation, there are still plenty of legitimate uses for social media platforms.

“There’s definitely educational and helpful things on YouTube. I encourage my kids a lot to go there because I’m trying to teach them to be more independent. She’s often like, ‘I don’t know how to do that,’ but I tell her to find a video; this is what you’re going to have to do in college,” she said.

If you're a journalist covering education and the impacts social media has on students, then let us help. Stacie Pettit, PhD, is a respected leader in middle level teacher education and meeting the needs of marginalized young adolescents. She's available to speak with media; simply click on her icon now to arrange an interview today.

3 min. read

AI-Generated Content is a Game Changer for Marketers, but at What Cost?

Goizueta’s David Schweidel pitted man against machine to create SEO web content, only to find that giving an editor bot-generated content trounces the human copywriter every time. Good news for companies looking to boost productivity and save cash, he says. But could there be other hidden costs?

In December 2022, The New York Times ran a piece looking back on the year’s biggest consumer tech updates. The review was mixed. Ownership shifts in the world of social media garnered special mentions, but hardware innovations had been largely “meh,” mused the Times. There was one breakthrough area that warranted attention, however: AI-powered language-processing tech capable of generating natural-looking text, the same technology that powers familiar chatbots. And one such technology could well be poised to “invade our lives in 2023.”

Earlier in December, AI research lab OpenAI released the latest update to its Generative Pre-trained Transformer (GPT) technology. Its latest iteration, ChatGPT, immediately went viral. Here was an AI assistant that sounded intelligent. Not only could it answer any question thrown its way without supervised training, but when prompted, it could also write blog posts, as well as find and fix bugs in programming code. ChatGPT could draft business proposals and even tell jokes. All of this at a speed that beggared belief.

Since its first release in 2020, OpenAI’s GPT technology has powered through a slew of updates that have seen its capabilities leap forward “by light years” in less than 24 months, says Goizueta Professor of Marketing David Schweidel. For businesses looking to harness this rapidly evolving technology, the potential is clearly enormous. But aren’t there also risks that industry and consumers alike will need to navigate?
Schweidel is clear that the academic community and initiatives such as the Emory AI Humanity Initiative have a critical role in asking hard questions, and in determining the limitations and dangers, as well as the opportunities, inherent in tech innovation, because, as he puts it, “these things are going to happen whether we like it or not.”

Man Versus Machine

To that end, Schweidel and colleagues from the Vienna University of Economics and Business and Modul University Vienna have put together a study looking at how well natural language generation technologies perform in one specific area of marketing: drafting bespoke content for website search engine optimization, better known as SEO. What they find is that content crafted by the machine, after light human editing, systematically outperforms its human counterparts, and by a staggering margin. Digging through the results, Schweidel and his colleagues pinpoint an almost 80 percent success rate for appearing on the first page of search engine results with AI-generated content. This compares with just 22 percent for content created by human SEO experts. In other words, AI content passed to a human editor is roughly four times more effective than a skilled copywriter working alone.

Reaching these findings meant running two real-time, real-world experiments, says Schweidel. First, he and his colleagues had to program the machine, in this case GPT-2, an earlier incarnation of GPT. GPT relies on natural language generation (NLG), a software process that converts manually uploaded input into authentic-sounding text or content, comparable in some ways to the human process of translating ideas into speech or writing. To prepare GPT-2 for SEO-specific content creation, Schweidel et al.
started with the pre-trained GPT-2 and then let the machine do the heavy lifting: searching the internet for appropriate results based on the desired keyword, scraping the text of the websites, and fine-tuning GPT-2 to “learn” what SEO looks like, says Schweidel.

“We partnered with an IT firm and a university to run our field experiments. This meant creating SEO content for their websites using GPT-2 and actual human SEO experts, and then doing A/B testing to see which content was more successful in terms of landing in the top 10 search engine results on Google. So this was an opportunity to put the AI bot to the test in a real-world setting to see how it would perform against people.”

The results point to one clear winner. Not only did content from GPT-2 outperform its human rivals in SEO capabilities, it did so at scale. The AI-generated content scored a daily median of seven or more hits on the first page of Google search results. The human-written copy didn’t make it onto the first results page at all. On its best day, GPT showed up for 15 of its 19 pages of search terms inside the top 10 search engine results, compared with just two of the nine pages created by the human copywriters: a success rate of just under 80 percent, compared to 22 percent.

Savings at Scale

The machine-generated content, after being edited by a human, trounces the human in SEO. But that’s not all, says Schweidel. The GPT bot was also able to produce content in a fraction of the time taken by the writers, reducing production time and associated labor costs by more than 90 percent, he says.

“In our experiments, the copywriters took around four hours to write a page, while the GPT bot and human editor took 30 minutes.
Now assuming the average copywriter makes an annual $45K on the basis of 1,567 hours of work, we calculate that the company we partnered with would stand to save more than $100,000 over a five-year period just by using the AI bot in conjunction with a human editor, rather than relying on SEO experts to craft content. That’s a 91 percent drop in the average cost of creating SEO content. It’s an orders-of-magnitude difference in productivity and costs.”

But there are caveats. First off, there’s the quality of the machine-generated content to consider. For all its mind-boggling capabilities, even the newly released ChatGPT tends to read as somewhat sterile, says Schweidel. That’s a problem both in terms of Google guidelines and brand coherence. Human editors are still needed to polish copy that can sound a little “mechanical.”

“Google is pretty clear in its guidelines: content generated by machines alone is a definite no-no. You also need to factor in the uncanny valley effect, whereby something not quite human can come off as weird. Having an editor come in to smooth out AI content is critical to brand voice as well as the human touch.”

Asking the Big Questions

Then there are the moral and metaphysical dimensions of machine learning and creativity that beg an important question: just because we can, does that mean we should? Here, Schweidel has grave reservations about the future of ChatGPT and its ilk.

“The potential of this kind of technology is extraordinarily exciting when you think about the challenges we face, from productivity to pandemics, from sustainable growth to climate change. But let’s be very clear about the risks, too. AI is already capable of creating content, audio, visual, and written, that looks and feels authentic. In a world that is hugely polarized, you have to ask yourself: how can that be weaponized?”
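As a back-of-envelope check on the savings figures quoted a few paragraphs earlier, the implied arithmetic can be sketched out as follows. The salary, hours, and per-page times come from the article; the five-year page volume is an inference from the quoted totals, not a figure from the study, and the study's exact cost model may include components beyond the hourly rate.

```python
# Figures quoted in the article; derived quantities are illustrative only.
annual_salary = 45_000      # average copywriter salary (USD)
annual_hours = 1_567        # working hours per year, per the article
hourly_rate = annual_salary / annual_hours        # roughly $28.72/hour

human_hours_per_page = 4.0  # copywriter working alone
ai_hours_per_page = 0.5     # GPT draft plus a human editor

human_cost_per_page = human_hours_per_page * hourly_rate   # about $115
ai_cost_per_page = ai_hours_per_page * hourly_rate         # about $14

# Labor-time reduction from switching to the bot-plus-editor workflow
time_saving = 1 - ai_hours_per_page / human_hours_per_page  # 0.875

# Pages needed over five years to reach the quoted $100,000 in savings
pages_for_100k = 100_000 / (human_cost_per_page - ai_cost_per_page)
```

On these inputs alone the labor-time saving works out to 87.5 percent, with roughly a thousand pages over five years needed to clear $100,000; the 91 percent cost reduction cited in the article will also depend on cost components not spelled out here.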
At the end of the day, says Schweidel, the large language models powering these generative AIs are essentially “stochastic parrots”: trained mimics whose output can be hard to predict. In the wrong hands, he warns, the potential for misinformation, and worse, could well be “terrifying.”

“Shiny new tech is neither inherently good nor bad. It’s human nature to push the boundaries. But we need to ensure that the guardrails are in place to regulate innovation at this kind of pace, and that’s not easy. Governments typically lag far behind OpenAI and companies like it; even academics have a hard time keeping up. The real challenge ahead of us will be about innovating the guardrails in tandem with the tech, innovating our responsible practices and processes. Without effective safeguards in place, we’re on a path to potential destruction.”

Covering AI, or interested in knowing more about this fascinating topic? Then let our experts help with your coverage and stories. David Schweidel is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University's Goizueta Business School. Simply click on David's icon now to arrange an interview today.

David Schweidel
6 min. read

UConn Expert, 10 Years after Sandy Hook, on the Lies that 'Plague the U.S.'

UConn professor and journalist Amanda J. Crawford considers the misinformation that spread like wildfire after the tragic school shooting at Sandy Hook Elementary School to be "the first major conspiracy theory of the modern social media age." Ten years after 20 young students and six school staff were killed in the massacre, the impact of that day in 2012 continues to reverberate in America today. On this solemn anniversary, Crawford writes about the aftermath of Sandy Hook misinformation in a new essay for The Conversation:

Conspiracy theories are powerful forces in the U.S. They have damaged public health amid a global pandemic, shaken faith in the democratic process and helped spark a violent assault on the U.S. Capitol in January 2021.

These conspiracy theories are part of a dangerous misinformation crisis that has been building for years in the U.S. While American politics has long had a paranoid streak, and belief in conspiracy theories is nothing new, outlandish conspiracy theories born on social media now regularly achieve mainstream acceptance and are echoed by people in power.

Recently, one of the most popular American conspiracy theorists faced consequences in court for his part in spreading viral lies. Right-wing radio host Alex Jones and his company, Infowars, were ordered by juries in Connecticut and Texas to pay nearly $1.5 billion in damages to relatives of victims killed in a mass shooting at Sandy Hook Elementary School a decade ago. Jones had falsely claimed that the shooting was a hoax.

As a journalism professor at the University of Connecticut, I have studied the misinformation that surrounded the mass shooting in Newtown, Connecticut, on Dec. 14, 2012 – including Jones’ role in spreading it to his audience of millions. I consider it the first major conspiracy theory of the modern social media age, and I believe we can trace our current predicament to the tragedy’s aftermath.
Ten years ago, the Sandy Hook shooting demonstrated how fringe ideas could quickly become mainstream on social media and win support from various establishment figures – even when the conspiracy theory targeted grieving families of the young students and school staff killed during the massacre. Those who claimed the tragedy was a hoax showed up in Newtown and harassed people connected to the shooting. This provided an early example of how misinformation spread on social media could cause real-world harm.

Amanda J. Crawford is a veteran political reporter, literary journalist, and expert in journalism ethics, misinformation, conspiracy theories, and the First Amendment. Click on her icon now to arrange an interview with her today.

Amanda J. Crawford
2 min. read

Aston University forensic linguistics experts partner in $11.3 million funding for authorship attribution research

Aston Institute for Forensic Linguistics (AIFL) is part of the project to infer authorship of uncredited documents based on writing style
AIFL’s Professor Tim Grant and Dr Krzysztof Kredens are experts in authorship analysis
Applications may include identifying counterintelligence risks, combating misinformation online, fighting human trafficking and even deciphering authorship of ancient religious texts

Aston University’s Institute for Forensic Linguistics (AIFL) is part of the AUTHOR research consortium, which has won an $11.3 million contract to infer authorship of uncredited documents based on writing style. The acronym stands for ‘Attribution, and Undermining the Attribution, of Text while providing Human-Oriented Rationales’.

Worth $1.3 million, the Aston University part of the project is being led by Professor Tim Grant and Dr Krzysztof Kredens, who are both recognised internationally as experts in authorship analysis and who both engage in forensic linguistic casework as expert witnesses. In addition to their recognised general expertise and experience in this area, Professor Grant has specific expertise in using linguistic analysis to enhance online undercover policing, and Dr Kredens has led projects to develop authorship identification techniques involving very large numbers of potential authors.

The AUTHOR team is led by Charles River Analytics and is one of six teams of researchers that won The Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) programme sponsored by the Intelligence Advanced Research Projects Activity (IARPA). The programme uses natural language processing techniques and machine learning to create stylistic fingerprints that capture the writing style of specific authors. On the flip side is authorship privacy: mechanisms that can anonymise the identities of authors, especially when their lives are in danger.
Pitting the attribution and privacy teams against each other will hopefully motivate each, says Dr Terry Patten, principal scientist at Charles River Analytics and principal investigator of the AUTHOR consortium.

“One of the big challenges for the programme, and for authorship attribution in general, is that the document you’re looking at may not be in the same genre or on the same topic as the sample documents you have for a particular author,” Patten says. “The same applies to languages: we might have example articles for an author in English but need to match the style even if the document at hand is in French. Authorship privacy, too, has its challenges: users must obfuscate the style without changing the meaning, which can be difficult to execute.”

In the area of authorship attribution, the research and casework experience from Aston University will assist the team in identifying and using a broad spectrum of authorship markers. Authorship attribution research has more typically looked to words and their frequencies as identifying characteristics. However, Professor Grant’s previous work on online undercover policing has shown that higher-level discourse features, such as how authors structure their interactions, can be important ‘tells’ in authorship analysis.

The growth of natural language processing (NLP) and one of its underlying techniques, machine learning, is motivating researchers to harness these new technologies in solving the classic problem of authorship attribution. The challenge, Patten says, is that while machine learning is very effective at authorship attribution, “deep learning systems that use neural networks can’t explain why they arrived at the answers they did.” Evidence in criminal trials can’t afford to hinge on such black-box systems.
It’s why the core condition of AUTHOR is that it be “human-interpretable.” Dr Kredens has developed research and insights whereby explanations can be drawn out of black-box authorship attribution systems, so that the findings of such systems can be integrated into linguistic theory about who we are as linguistic individuals.

Initially, the project is expected to focus on feature discovery: beyond words, what features can we discover to increase the accuracy of authorship attribution? The project has a range of promising applications: identifying counterintelligence risks, combating misinformation online, fighting human trafficking, and even figuring out the authorship of ancient religious texts.

Professor Grant said: “We were really excited to be part of this project, both as an opportunity to develop new findings and techniques in one of our core research areas, and also because it provides further recognition of AIFL’s international reputation in the field.”

Dr Kredens added: “This is a great opportunity to take our cutting-edge research in this area to a new level.”

Professor Simon Green, Pro-Vice-Chancellor for Research, commented: “I am delighted that the international consortium bid involving AIFL has been successful. As one of Aston University’s four research institutes, AIFL is a genuine world-leader in its field, and this award demonstrates its reputation globally. This project is a prime example of our capacities and expertise in the area of technology, and we are proud to be a partner.”

Patten is excited about the promise of AUTHOR, as it is poised to make fundamental contributions to the field of NLP. “It’s really forcing us to address an issue that’s been central to natural language processing,” Patten says. “In NLP and artificial intelligence in general, we need to find a way to build hybrid systems that can incorporate both deep learning and human-interpretable representations.
The field needs to find ways to make neural networks and linguistic representations work together.” “We need to get the best of both worlds,” Patten says. The team includes some of the world’s foremost researchers in authorship analysis, computational linguistics, and machine learning from Illinois Institute of Technology, Aston Institute for Forensic Linguistics, Rensselaer Polytechnic Institute, and Howard Brain Sciences Foundation.
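To make the "words and their frequencies" idea mentioned above concrete, here is a minimal, human-interpretable stylometric sketch: it profiles each author by the relative frequency of common function words (classic authorship markers, since their rates are habitual and largely topic-independent) and attributes a questioned document to the closest candidate profile. The sample texts and author labels below are invented for illustration; real forensic systems use proper tokenisation and far richer feature sets.

```python
from collections import Counter
import math

# A small set of function words; their usage rates are habitual
# and largely independent of what the text is about.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "i", "for", "on", "with", "but", "not"]

def profile(text):
    """Relative frequency of each function word in a text
    (naive whitespace tokenisation, for illustration only)."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(questioned, candidates):
    """Return the candidate author whose known writing sample is most
    similar to the questioned document, plus all similarity scores."""
    q = profile(questioned)
    scores = {name: cosine(q, profile(sample))
              for name, sample in candidates.items()}
    return max(scores, key=scores.get), scores
```

Calling `attribute(questioned_text, {"Author A": sample_a, "Author B": sample_b})` returns the best-matching author together with a per-author score, and because the features are plain word frequencies, each attribution can be explained by pointing at the specific rates that drove it. This transparency is exactly what deep neural attribution systems lack.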

4 min. read

UConn's Amanda J. Crawford on one Sandy Hook family's 'epic fight'

Since December 14, 2012, the families of the 20 children and six adults murdered at Sandy Hook Elementary School have been forced to live amid a tidal wave of conspiracy theorists and their constant lies, threats, and harassment. As lawsuits challenging some of the most vocal purveyors of that misinformation work through the court system, the stories of the hardships faced by some of these families, endured while they have tried to grieve their unimaginable loss, have brought new attention to the profound harm that the wildfire spread of misinformation does to the lives of the people most affected.

Amanda J. Crawford, an assistant professor in the UConn Department of Journalism who studies misinformation and conspiracy theories, tells the story of one family in an in-depth and heartbreaking, but critically important, piece for the Boston Globe Magazine:

Lenny knew online chatter about the shadow government or some such conspiracy was all but inevitable. When a neuroscience graduate student killed 12 and injured dozens of moviegoers with a semiautomatic assault rifle in Aurora in July, five months prior, there had been allegations about government mind control. When Lenny searched his son’s name in early January 2013, he was disgusted at the speculation about the shooting. People called it a false flag. Mistakes in news coverage had become “anomalies” that conspiracy theorists claimed as proof of a coverup. Why did the shooter’s name change? Why did the guns keep changing? Press conferences were analyzed for clues. Vance’s threat to prosecute purveyors of misinformation was taken as an indication they were onto something.

But what concerned Lenny most was their callous scrutiny of the victims and their families. Some people claimed a photo of a victim’s little sister with Obama really showed the dead girl still alive. Others speculated the murdered children never existed at all.
They called parents and other relatives “crisis actors” paid to perform a tragedy. And yet, they also criticized them for not performing their grief well enough. There were even claims specifically about Veronique. Lenny needed to warn her.

“There are some really dark, twisted people out there calling this a hoax,” he told her.

Veronique didn’t understand. There was so much news coverage, so many witnesses. “How could that possibly be?”

“If you put yourself out there, people will question your story,” Lenny cautioned.

Veronique thought he must be exaggerating a few comments from a dark corner of the Web. This can’t possibly gain traction, she thought. No, no, no! Truth matters. If I tell my story, people will be able to see that I am a mother who is grieving.

Amanda J. Crawford is a veteran political reporter, literary journalist, and expert in journalism ethics, misinformation, conspiracy theories, and the First Amendment. Click on her icon now to arrange an interview with her today.

Amanda J. Crawford
2 min. read

Are You an Expert? Here’s How to Tell

Have you ever wondered whether you are an expert? Ask people what defines expertise and you will hear a variety of answers. Many will cite key requirements, such as extensive knowledge in one’s field. Others will point to education, published work, or years of experience as key qualifiers. Yet there are many other dimensions of expertise that contribute to how visible, influential, and authoritative experts are within their community of practice or with the general public.

Who Qualifies as an Expert?

I started looking more closely at this topic for two reasons. The first is my personal work with experts. Having worked with thousands of them across a variety of sectors, I’ve observed that many are driven to develop themselves professionally to meet a variety of objectives. Often these focus on raising one’s profile and reputation among peers or with the broader market to inform the public. Some see media coverage as an essential part of their strategy, while others are more interested in developing a larger audience for their research or client work by speaking at conferences or on podcasts. Others focus on improving their ranking on search engines. All these activities can enable important objectives such as attracting new clients, research funding, or talent. The second reason for this deeper dive into expertise is the need to better organize how we look at experts within organizations. My work with communications departments in knowledge-based sectors reveals that they are keen to learn how they can better engage their experts to build reputation, relationships, and revenue. But better engagement starts with a better understanding of what qualifies someone as an expert - what attributes can we objectively look at that define expertise? With that knowledge, we can better appreciate the amount of work experts have put into establishing themselves in their field.
Then organizations can nurture this expertise in a more collaborative way to accomplish shared goals. My observation is that with a little more insight, empathy, and alignment, both experts and their organizations can accomplish incredible things together. And there has never been a more important time for experts to “show their smarts.” By definition, an expert is someone with comprehensive or authoritative knowledge in a particular area of study. While formal education and certifications are a starting point, many disciplines don’t have a set list of criteria against which to measure expertise. It’s also important to recognize dimensions of expertise that relate not just to working proficiency in a field but also to the degree of influence and authority an expert has earned within their profession or community of practice. Because of this, expertise is often viewed as a person’s cumulative training, skills, research, and experience.

What Are the Key Attributes of Expertise?

In evaluating your accomplishments and the various ways you can contribute as an expert to both your community of practice and the public, here are some key questions that can help you assess how you are developing your expertise:

Have you completed formal education or gained relevant experience to achieve proficiency in your chosen field?

Are you actively building knowledge in a specific discipline or practice area by providing your services as an expert?

Are you generating unique insights through your research or fieldwork?

Are you publishing your work - in articles or books - to establish your reputation and reach a broader audience?

Are you teaching in the classroom, or educating and inspiring audiences by speaking at conferences?

Do you demonstrate a commitment to your community of practice, helping to advance your field and generate an impact on society by informing the public?
Have you established a reputation as a go-to source for well-informed, unique perspectives?

Some Additional Tips to Help You Develop Your Expertise

To further the discussion, I’ve also shared some additional thoughts on the meaning of “expertise.” As you think about developing your own skills - or if you are a communicator responsible for engaging your organization’s experts - here are a few principles to keep in mind.

Experts Aren’t Focused on Some “Magic Number” of Hours of Experience

Malcolm Gladwell’s book “Outliers” (2008) popularized the now famous “10,000-hour rule” as the magic number for the time it takes to master a given field. As the rule goes, you could become a genuine expert in a field with approximately 10,000 hours of practice - roughly three hours a day, every day, for a decade. But is that what it really takes to become an expert? Or did Gladwell oversimplify the concept of expertise? Some of his assumptions in “Outliers” (which became a major bestseller) relied on research from Dr. Anders Ericsson of Florida State University, who made expertise the focus of his research career. Contrary to how Gladwell presented it, Ericsson argued that the way a person practiced mattered just as much as, if not more than, the amount of time they committed to their discipline. It also depends on the field of research or practice. Some disciplines take decades to master, and many experts will admit they are just scratching the surface of their subject well after they have passed the 10,000-hour mark. For some disciplines, that might be just the first stage of proficiency.

Experts Are Continuously Learning

It’s difficult to claim proficiency as an expert if you are not staying current in your field. The best experts are constantly scouring new research and best practices. Dr.
Anders Ericsson observed in his work that “deliberate practice” is an essential element of expertise: one simply won’t progress as an expert without pushing one’s limits. In Ericsson’s formulation, deliberate practice occurs “at the edge of one’s comfort zone” and involves setting specific goals, focusing on technique, and obtaining immediate feedback from a teacher or mentor. Many experts aren’t satisfied unless they are going beyond their comfort zone, opening up new pathways of research, working on their weaknesses, and broadening their knowledge and skills through avenues such as peer review, speaking, and teaching.

Experts Apply Their Knowledge to Share Unique Perspectives

While many experts conduct research, simply reciting facts isn’t enough. Those who can provide evidence-based perspectives that objectively accommodate and adapt to new information will have more impact. Expertise is also about developing unique, informed perspectives that challenge the status quo - which can at times be controversial. Experts know that things change, but they don’t get caught up in every small detail in ways that prevent them from seeing the whole picture. They don’t immediately rush toward new ideas. They consider historical perspectives and patterns learned from their research that provide context for what’s happening today. And they have the patience and wisdom to validate their perspectives with real evidence. That’s why expert sources are so valuable to journalists researching stories. The perspectives they offer are critical to countering the misinformation and uninformed opinions found on social media.

Experts Connect with a Broader Audience

Many experts are pushing past traditional communication formats, using more creative and visual ways to translate their research for a wider audience.
We conducted research with academics in North America and Europe who are trying to balance their research (published in traditional peer-reviewed journals) with other work such as blogs, social media, podcasts, and conferences such as TEDx - all with the goal of bringing their work to a wider audience. While that’s an essential part of public service, it also pays dividends for the expert and the organization they represent.

Experts Are Transparent

More than ever, credible experts are in demand. The reason is simple: they inspire trust. The overnight success some have seemingly achieved has come from decades of work in the trenches. They have a proven record on display, and they make it easy to understand how they got there. They don’t mask their credentials or their affiliations, because they didn’t take shortcuts. They understand that transparency is a critical part of being seen as credible.

Experts Don’t Take “Fake It Till You Make It” Shortcuts

The phrase “fake it till you make it” is a personal development mantra suggesting that by imitating confidence, competence, and an optimistic mindset, one can realize those qualities in real life. While this pop psychology construct can be helpful for inspiring personal development, it becomes problematic when used as a strategy for garnering trust with a broader audience in order to establish a degree of authority - especially when that inexperience causes harm to others who may be influenced by what they see. When self-appointed experts take shortcuts, promoting themselves as authorities on social media without the requisite research or experience, they blur the lines of expertise and erode public trust.

Experts Are Generous

The best experts are excited about the future of their field, and that translates into helping others become experts too. That’s why many openly share their valuable time through speaking, teaching, and mentorship.
In the end, they understand that these activities are essential to developing the scale and momentum necessary to tackle the important issues of the day.

How Do You Show Your Smarts?

How do you personally score on this framework? Or, if you are in a corporate communications or academic affairs role at an institution, how does this help you better understand your experts so you can develop your internal talent and build your organization’s reputation? As always, we welcome your comments as we further refine this and other models related to expertise. Let us know what you think.

Helpful Resources

Download our Academic Experts and the Media report (PDF). Based on detailed interviews with some of the most media-experienced academics across the UK and the United States, this report draws on their experiences to identify lessons that can encourage other academics to follow in their path. Download the UK Report Here. Download the US Report Here.

The Complete Guide to Expertise Marketing for Higher Education (PDF). Expertise marketing is the next evolution of content marketing. Build value by mobilizing the hidden people, knowledge, and content you already have at your fingertips. This win-win solution not only gives audiences better quality content, but also lets higher ed organizations show off their smarts. Download Your Copy.

Peter Evans
7 min. read

Under new ownership - what's next for Twitter and the social media landscape?

He said he'd do it - and he did. Billionaire, innovator, and ever-controversial CEO Elon Musk scratched together over 40 billion dollars and has taken the reins of social media giant Twitter. But now that the deed is done, a lot of people have concerns about Musk's intentions for the platform and its hundreds of millions of users. Will there be an edit button? Will there finally be an end to 'bots'? And what about moderation and free speech? These are all valid and important questions, which is why Michigan State University's Anjana Susarla penned a recent piece in The Conversation to tackle them. What lies ahead for Twitter will have people talking and reporters covering the story for days and weeks to come. And if you're a journalist looking for insight and expert opinion, let us help with your stories. Anjana Susarla is the Omura-Saxena Professor of Responsible AI at Michigan State University. She's available to speak with media; simply click on her icon now to arrange an interview today.

Anjana Susarla
1 min. read

Journalism, Libel, and Political Messaging in America - UConn's Expert Weighs in

Former Alaska governor and vice presidential candidate Sarah Palin didn't cause the deadly 2011 shooting in Tucson, Arizona, that injured Congresswoman Gabrielle Giffords, says former journalist and UConn expert Amanda Crawford in a new essay for Nieman Reports. Palin is asking for a new trial after a jury in February rejected her libel lawsuit against the New York Times. Palin sued the newspaper after it published a 2017 editorial that erroneously claimed she was responsible for the shooting; the Times quickly issued a correction. But Crawford says that, in her opinion, Palin has contributed to the increased vitriol in American politics today, and that the libel laws protecting freedom of the press need to be guarded:

Palin, the 2008 Republican vice presidential nominee known for her gun-toting right-wing invective, is now asking for a new trial in the case that hinges on an error in a 2017 Times editorial, “America’s Lethal Politics.” The piece, which bemoaned the viciousness of political discourse and pondered links to acts of violence, was published after a man who had supported Sen. Bernie Sanders opened fire at congressional Republicans’ baseball practice, injuring House Majority Whip Steve Scalise. The Times editorial noted that Palin’s political action committee published a campaign map in 2010 that used a graphic resembling the crosshairs of a rifle’s scope to mark targeted districts. It incorrectly drew a link between the map and the shooting of Rep. Gabrielle Giffords, the Democratic incumbent in one of those districts, while she was at a constituent event in a grocery store parking lot in Tucson less than a year later. I was a political reporter in Arizona at the time, and I remember how Giffords herself had warned that the map could incite violence.
“We’re on Sarah Palin’s targeted list,” she said in a 2010 interview, according to The Washington Post, “but the thing is that the way that she has it depicted has the crosshairs of a gun sight over our district, and when people do that, they’ve got to realize there are consequences to that action.” In the wake of the mass shooting in Tucson, some officials and members of the media suggested that political rhetoric, including Palin’s, may be to blame. In fact, no link between the campaign map and the shooting was ever established. As the judge said, the shooter’s own mental illness was to blame. That is where the Times blundered. An editor inserted language that said, “the link to political incitement was clear.” (The Times promptly issued a correction.) This was an egregious mistake and the product of sloppy journalism, but both the judge and the jury agreed that it was not done with actual malice or reckless disregard for the truth. That’s the standard that a public figure like Palin must meet because of the precedent set in the Sullivan case and subsequent decisions. Even if Palin is granted a new trial and loses again, she is likely to appeal. Her lawsuit is part of a concerted effort by critics of the “lamestream media,” including former President Donald Trump, to change the libel standard to make it easier for political figures to sue journalists and win judgments for unintentional mistakes. They want to inhibit free debate and make it harder for journalists to hold them accountable. -- Nieman Reports, March 14, 2022 If you are a reporter who is interested in covering this topic, or who would like to discuss the intersection between politics and media, let us help. Amanda Crawford is a veteran political reporter, literary journalist, and expert in journalism ethics, misinformation, conspiracy theories, and the First Amendment. Click on her icon now to arrange an interview today.

Amanda J. Crawford
3 min. read