
Reddit Shares Expected to Commence Trading on NYSE | Media Advisory
In a move emblematic of the digital age's intersection with traditional finance, Reddit, the vast online community platform, is poised to debut its shares on the New York Stock Exchange (NYSE). This event marks a significant milestone for the company, celebrated for its user-generated content and vibrant forums that span every conceivable interest. For investors, tech enthusiasts, and users alike, Reddit's transition from a private to a public entity opens up discussions on the valuation of online communities, the future of digital platforms, and the implications for the broader stock market.

Key sub-topics include:

Initial Public Offering (IPO) Details: Insights into Reddit's valuation, share pricing, and the IPO process.
Impact on the Tech Industry: What Reddit's public listing means for the tech sector and other social media platforms.
User Community Reaction: How Reddit's dedicated user base perceives the move to go public and potential changes to the platform.
Market Performance and Investor Sentiment: Analysis of investor interest, market trends, and the potential for Reddit's stock.
Corporate Governance and Strategy: The shift in Reddit's management approach post-IPO and strategic plans for growth.
The Role of Digital Platforms in Modern Investing: How Reddit and similar platforms influence investor decisions and market dynamics.

For journalists seeking research or insights for their coverage on this topic, here is a select list of experts:

Scott Stratten, President & CEO · UnMarketing
Samantha Bradshaw, Doctor of Philosophy Candidate · Oxford Internet Institute
David Meerman Scott, Marketing Strategist, Keynote Speaker, Bestselling Author
Sean Thoennes, Ph.D., Associate Faculty - Media Psychology · Fielding Graduate University

To search our full list of experts visit www.expertfile.com

Municipalities around the world have invested significant resources to develop connected smart cities that use the Internet of Things (IoT) to improve sustainability, safety and efficiency. With this increased demand for IoT experience, the VCU College of Engineering formed the OpenCyberCity testbed in 2022. The 1:12 scale model city provides a realistic, small-scale cityscape where students and researchers can experiment with new and existing smart city technology. Sherif Abdelwahed, Ph.D., electrical and computer engineering professor, is director of OpenCyberCity. He recently answered some questions about new developments within the testbed.

The OpenCyberCity is a smart city testbed, but are there any real-life cities that one could call a smart city?

Several real-life locales are considered smart cities due to their extensive use of technology and data-driven initiatives to optimize infrastructure and services. Dubai is one of the most notable: it has implemented smart transportation systems, smart buildings and artificial intelligence to transform the city’s operations and make them more efficient. Other notable smart cities include Singapore and Seoul, which use smart energy management, smart transportation and comprehensive data analytics for improved urban planning and services. Seoul, in particular, has an initiative with smart grids and connected street lights, which VCU Engineering’s own OpenCyberCity testbed is working to implement.

How does the OpenCyberCity address privacy? With so much technology related to monitoring, how are individual citizens protected from these technologies?

Privacy is a major concern for smart cities and it is one of the main research directions for VCU Engineering’s OpenCyberCity. We are developing several techniques to prevent unwanted surveillance of personal information. Sensitive data is protected by robust protocols and access restrictions that only allow authorized users to view the data.
Our aim is to find a reasonable middle ground between technological progress and privacy rights, staying within legal and ethical bounds. Some techniques to address privacy concerns include:

Data Anonymization: This makes it difficult to trace information back to individual identities. Within the testbed, we will evaluate how to protect individual privacy while maintaining data utility and assess the impact on data quality.

Secure Data Storage and Transmission: Encrypting data protects it from unauthorized access. In the smart city testbed, encryption and access control mechanisms will be implemented within the testbed’s infrastructure. We will also test different data handling processes and access control models to determine their ability to safeguard sensitive data.

Privacy Impact Assessments: Regularly evaluate potential privacy risks of new smart city projects in order to mitigate them and ensure the ethical handling of data by those with access.

Policy and Regulation Development: Data and insights generated from OpenCyberCity experiments can inform the development of cybersecurity policies and regulations for smart cities.

How is the College of Engineering’s OpenCyberCity testbed different from similar programs at other institutions?

While other universities have similar smart-city-style programs, each has its own specialty. The VCU College of Engineering’s OpenCyberCity testbed focuses on real-world contexts, creating a physical space where new technologies, infrastructure, energy-efficient transportation and other smart city services can be tested in a controlled environment. Our lab monitors real-time data and develops smart buildings, smart hospitals and smart manufacturing buildings to enhance the city’s technologies. Recent additions to the OpenCyberCity allow for expanded research opportunities like:

Advanced Manufacturing: Students can apply advanced manufacturing techniques in a controlled environment.
They can also test new materials, processes and automation technologies to improve efficiency and product quality.

Energy Efficiency Testing: Environmental engineers and sustainability experts can evaluate energy consumption patterns within the smart manufacturing unit to implement energy-saving measures and assess their impact on sustainability.

Production Optimization: Manufacturers can use real-time data from the smart manufacturing unit to optimize production schedules, minimize downtime and reduce waste. Predictive maintenance algorithms also help prevent equipment breakdowns.

Education and Training: Hands-on experience with state-of-the-art manufacturing technologies helps train the workforce of the future.

Integration with Smart City Services: Data generated by the manufacturing unit can be integrated with smart city services. For example, production data can inform supply chain management and energy consumption data can contribute to overall city energy efficiency initiatives.

How has the OpenCyberCity changed in the last year? Is the main focus still data security?

What started with research examining, analyzing and evaluating the security of next-generation (NextG) applications, smart city operations and medical devices has expanded. Data security is now only one aspect of OpenCyberCity. Its scope has grown to encompass broader facets of cybersecurity, such as automation and data analytics in the domain of smart manufacturing systems. The implementation of a smart manufacturing system in 2023 is something students really enjoy. Thanks to the vendor we used, undergraduate students had the option to develop functionality for various features of the manufacturing plant. Graduate students were also able to research communications protocols and cybersecurity within the smart manufacturing system.

What does the smart manufacturing system entail and what kind of work is occurring within that system?

An automated system is there for students to work with.
Robot arms, microcontrollers, conveyor belts, ramps, cameras and blocks to represent cargo form an environment that emulates a real manufacturing setting. We’re currently brainstorming an expansion of the smart manufacturing system in collaboration with the Commonwealth Cyber Initiative (CCI). We plan to set up two building models, one for manufacturing and one for distribution, linked by a sky bridge conveyor system that moves items between the locations. Students work to leverage convolutional neural networks that use images to facilitate machine learning. Paired with the advanced cameras, this forms a computer vision system that can accurately place blocks in a variety of lighting conditions, which can be a challenge for other systems. By having to optimize the communication protocols that command the smart manufacturing system’s robotic arms, students also get a sense for real-world constraints. The Raspberry Pi that functions as the controller for the system is limited in power, so finding efficiencies that also enable stability and precision with the arms is key.

Is there an aspect of cybersecurity for these automated systems?

Yes. Devices, sensors and communication networks integral to the IoT found in smart manufacturing systems and smart cities generate and share vast amounts of data. This makes them vulnerable to cybersecurity threats. Some of the issues we look to address include:

Data Privacy: Smart systems collect and process vast amounts of data, including personal and sensitive information. Protecting this data from unauthorized access and breaches is a top priority.

Device Vulnerabilities: Many IoT devices used in smart systems have limited computational resources and may not receive regular security updates, making them vulnerable to exploitation.

Interconnectedness: The interconnected nature of smart city components increases the attack surface. A breach in one system can potentially compromise the entire network.
Malware and Ransomware: Smart systems are susceptible to malware and ransomware attacks, which can disrupt services and extort organizations for financial gain.

Insider Threats: Employees with malicious intent or negligence can pose significant risks to cybersecurity.

Potential solutions to these problems include data encryption, frequent software updates, network segmentation with strict access controls, real-time intrusion detection (with automated responses to detected threats), strong user authentication methods, security training for users and the development of well-designed incident response plans.
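One of the privacy techniques mentioned in the interview, data anonymization, is often implemented by replacing direct identifiers with keyed hashes, so records can still be linked for analysis but cannot be traced back to a person without the key. The sketch below is illustrative only; the field names, the sample record and the `pseudonymize` helper are hypothetical, not part of the testbed's actual software:

```python
import hashlib
import hmac
import secrets

# Per-deployment secret key; without it, pseudonyms cannot be reversed or re-linked.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(record: dict, id_field: str = "resident_id") -> dict:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256 pseudonym)."""
    out = dict(record)
    token = hmac.new(PSEUDONYM_KEY, str(record[id_field]).encode(), hashlib.sha256)
    out[id_field] = token.hexdigest()[:16]  # shortened pseudonym
    return out

reading = {"resident_id": "A-1042", "sensor": "street_light_07", "lux": 312}
safe = pseudonymize(reading)
# The same input ID always maps to the same pseudonym, so aggregation still
# works, but the original ID is not recoverable without the key.
```

Because the mapping is deterministic per key, analysts can still count events per (pseudonymous) resident, which is the "maintaining data utility" trade-off the testbed evaluates.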

Since 2022, the U.S. Food and Drug Administration has been actively urging consumers to avoid purchasing or consuming tianeptine -- a synthetic drug commonly called "gas station heroin" that can mimic the actions of opioids like fentanyl. Now, the FDA is upping the urgency of its warnings as vendors continue to market the drug as a so-called "dietary supplement." UConn's C. Michael White, a Distinguished Professor of Pharmacy Practice, spoke with The Conversation about the problem with tianeptine in a must-read Q-and-A:

What is tianeptine and why is it risky?

Tianeptine stimulates the same receptors as well-known opioids such as fentanyl, heroin and morphine. When these drugs make their way from the blood to the brain, they bind to the “mu” type opioid receptor that triggers the sought-after pain relief and euphoria of those drugs as well as dangerous effects like slowed or stopped breathing. High doses of tianeptine can bring euphoric effects similar to heroin and can also bring about a dissociative effect – the perception of your mind being disconnected from your surroundings and body – reminiscent of ketamine, an anesthetic that has a role in treating post-traumatic stress disorder and depression but has also commonly been abused as a street drug. Products containing tianeptine are often called “legal high drugs” – sometimes dubbed “gas station drugs” – a term used for all non-FDA-approved synthetic drugs that are sold casually in gas stations, online and elsewhere.

What are the major adverse effects that people can experience?

Data from clinical trials, case reports and poison control centers shows that tianeptine commonly induces agitation. This is typically accompanied by a fast heart rate and high blood pressure, confusion, nightmares, drowsiness, dry mouth and nausea, among other conditions. The most serious adverse events are slowed or stopped breathing, coma, heart arrhythmia and death.
When long-term users try to stop taking tianeptine, they often experience withdrawal symptoms reminiscent of opioid withdrawal. Consumers need to be aware that products containing tianeptine may not adhere to good manufacturing practices. This means they could contain lead or other heavy metal contamination, or be contaminated by microorganisms such as salmonella or mold. They could also contain other drug ingredients that are not disclosed. Knowingly or unknowingly combining active ingredients can increase the risk of adverse events. Additionally, the amount of the active ingredient contained in the product can vary widely, even with the same manufacturer. So past use does not guarantee that using the same amount will provide the same effect.

How are these drugs sold in the US if they are not FDA-approved?

If a drug product is not FDA-approved for prescription or over-the-counter use, it is the Drug Enforcement Administration (DEA) that is responsible for controlling market access. Before the DEA can ban an active ingredient in a drug product, it must be designated Schedule I, meaning the drug has no legitimate medical purpose and has high abuse potential. Manufacturers do not have to alert the DEA before selling their products to U.S. citizens. This means the DEA must detect an issue, identify the products causing the issue, identify the active ingredients in the products in question and do a full scientific review before designating an ingredient as Schedule I. Tianeptine came to market masquerading as a dietary supplement in gas stations and smoke shops, even though it is a synthetic compound. Tianeptine is also sold online, allegedly for research purposes and not for human consumption. Tianeptine is undergoing clinical trials for the treatment of pain and depression, but sellers do nothing to make this type of labeling clear to consumers or to restrict purchases to researchers.

What can people do to protect themselves and their families?
Non-FDA-approved products containing synthetic drugs are very risky to use and should be avoided. FDA-approved drugs are available by prescription from a health professional or over the counter with active ingredients on an approved list. If someone in a gas station or smoke shop, or a seller on the internet, touts the benefits of a non-FDA-approved drug product – for pain or anxiety relief, to increase energy or for a buzz – be aware. It could be dangerous the first time you use it, but using it successfully once also doesn’t mean the experience will be the same the next time, and continued use can cause addiction. If a product is being sold “not for human consumption” or “for research purposes only,” you are at high risk if you take it. Before you take any dietary supplement, check the active ingredient to be sure that it is, in fact, a natural product and not a synthetic chemical. If someone you know has bags with unmarked powder, a product labeled for research use or not for human consumption, or tablets or capsules not in standard drug bottles, that is a sign of a potentially dangerous situation. Standard drug tests sold over the counter are not designed to pick up tianeptine. One of the main reasons that people use these alternative substances of abuse over regular opioids, cannabis or amphetamines is that they are much harder to detect through workplace or at-home drug screens by parents, schools, employers, probation officers and so on. If the DEA is not responding to emerging threats quickly enough, individual states can also act to ban sales of dangerous active ingredients in products. As of January 2024, at least 12 states have banned the sale of tianeptine, according to the FDA, although people in those states can still illegally procure it from the internet. So contacting your state legislators could be a place to start exercising your power to help prevent the harms from these products.
This is an important piece, and if you are looking to know about tianeptine and the threat it poses to consumers in America, then let us help. Dr. C. Michael White is an expert in the areas of comparative effectiveness and preventing adverse events from drugs, devices, dietary supplements, and illicit substances. Dr. White is available to speak with media -- click on his icon now to arrange an interview today.

The internet has completely changed how we shop, and now artificial intelligence (AI) is changing how we decide what to purchase. AI platforms have created tools to help people find the perfect gift for someone special. The technology helps brands learn more about their customers and suggest products that fit them. Goizueta Professor David Schweidel can walk you through the different platforms and how they can help shoppers find the perfect gift. Entering a few key details about the gift recipient can uncover a whole new world of ideas for that hard-to-shop-for person on the gift list. David A. Schweidel is Professor of Marketing at Emory University’s Goizueta Business School and an acclaimed author. He is available to speak with media regarding the latest advances in AI and how it is changing what we purchase. Simply click on his icon now to arrange an interview today.

Reinventing the laser diode: free public lecture by Professor Richard Hogg
Professor Richard Hogg joined Aston University in spring 2023.
His inaugural lecture is about laser diodes, the tiny components that are a vital part of everyday life.
The free event will take place on Tuesday 28 November.

The latest inaugural lecture at Aston University will explore the laser diode and what’s in store for it in the future. Professor Richard Hogg will explain how his future research might make laser diodes do some of the things that they currently can’t do. The laser diode turned 61 years old this month, and the tiny components are a critical part of everyday life.

Professor Hogg said: “They are now at the heart of the continuous transformation of society. They transmit data to allow instantaneous, ubiquitous communication and data access. They allow light to be used for cutting and welding, for sensing and imaging, for displays and illumination, and for data storage. And in the guise of a laser pointer they can even be used to entertain your cat!”

He will discuss different classes of laser diode and their operation and applications. Professor Hogg joined Aston University in spring 2023 and is based at the Aston Institute of Photonic Technologies (AIPT). It is one of the world’s leading photonics research centres, and its scientific achievements range from medical lasers and bio-sensing for healthcare to the high-speed optical communications technology that underpins the internet and the digital economy. The professor is also chief technology officer at III-V Epi, which provides compound semiconductor wafer foundry services.

The free event will take place on the University campus at Conference Aston on Tuesday 28 November from 6pm to 8pm and will be followed by a drinks reception. It can also be viewed online.

To sign up for a place in person visit https://www.eventbrite.co.uk/e/717822585677?aff=oddtdtcreator
To sign up for a place online visit https://www.eventbrite.co.uk/e/717824260687?aff=oddtdtcreator

Cyber threats have become one of the leading issues for corporations, governments, and public institutions across America. With ransomware attacks, hackers, and other nefarious threats, the issue is becoming a daily occurrence and a leading news story. Rensselaer Polytechnic Institute’s James Hendler, director of the Future of Computing Institute, Tetherless World Professor of Computer, Web, and Cognitive Sciences, and director of the RPI-IBM Artificial Intelligence Research Collaboration, weighs in on what we should all know about cybersecurity.

Overview

Think about cybersecurity the way you think about home security – the more valuables you have, the more security you need. A normal user needs the equivalent of a lock on the door, which most of our computers provide out of the box. However, a user with a fair amount of personal information, who keeps financial records or runs a small business, probably wants a firewall or other additional protection. We used to tell people to protect their computers with firewalls, malware detectors, and the like, but now it is much more important to protect your web access, be wary of external sites, and keep your passwords secure and not easily guessed. Use of a password manager program can be really helpful for people who use a lot of different accounts.

Threats

The biggest threat facing individuals is identity theft caused by someone getting into an account that you don’t control. Most malware or password stealing comes via a phishing attack (a fake email that convinces you to click a bad link), so if you see an offer that looks too good to be true, don’t believe it. Never give out a password or personal information without confirming that the request is legitimate. We also recommend not using major accounts (like Google, Facebook, etc.) to log in to new apps where you aren’t completely sure of the reliability – you’re safer if you use a separate password.
It’s also worth noting that these kinds of attacks are now happening on cell phones – if you get a text saying your Amazon, Netflix, or other services have been shut off, be very careful. These companies almost never send out such messages, and if they do, they come via email, not text. For businesses, ransomware is becoming an increasing challenge. Frequent backups and dual authentication are absolute musts for small businesses. Large businesses, and especially those with cyber-physical connections such as manufacturing devices, must have someone on the team who understands internet technology. Outside audits done annually, at least, are also highly recommended. The biggest danger in cybersecurity is that people, especially in businesses, think that the software industry will fix things and that they don’t have to worry. That’s like expecting auto manufacturers to stop car theft, or the government to prevent all crime – these organizations certainly need to help, but they cannot be perfect. So while there definitely needs to be a role for manufacturers and government, people need to understand that the threats are now coming from social interactions such as phishing, or serious criminal enterprises such as ransomware attackers, and not just maladjusted teenagers. They must be ready to pay for some security if they have things on their network that need protection.

The Cloud

Cloud-based services are a major boon to cybersecurity for individuals and small businesses if, and only if, people protect their access. If a breach is reported to you by a company, don’t ignore it: change your password and, whenever possible, use dual authentication. The cloud companies can afford to spend more on security than you can, and thus your information stored in these services tends to be quite secure. However, people need to be careful in using the cloud. Just as you may trust a bank with your money, you want to be sure not to be robbed on your way there.
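The "dual authentication" recommended above is usually implemented with one-time codes from an authenticator app. Those codes come from the standard TOTP algorithm (RFC 6238), which is small enough to sketch with only Python's standard library; the key below is the RFC's published test key, not a real credential:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = unix_time // step                       # 30-second time window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", time = 59s, 8 digits
print(totp(b"12345678901234567890", 59, digits=8))    # prints "94287082"
```

Because the code depends on both a shared secret and the current 30-second window, a phished password alone is not enough to log in, which is exactly why it is a "must" for small businesses.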
Future Computing Systems and Cybersecurity

New technologies, such as artificial intelligence (AI), are arising all the time in today’s fast-moving cyber world. As these technologies arise, they can create new opportunities for cybersecurity, but can also create new challenges. Cybercrime will never disappear, and each new capability comes with a price. Increased education and awareness of emerging computing technologies (blockchain, quantum, etc.) are important not just for the expert, but also for the general public. It is important to stay informed and pay attention to what is being reported. Just as buying a new appliance can be a great advantage at home (I love my new air-fryer), you also have to be sure to be using it appropriately (used wrong, it can cause fires).

Looking to learn more or connect with an expert for your questions and coverage? James Hendler is the director of the Rensselaer Future of Computing Institute, Tetherless World Professor of Computer, Web, and Cognitive Sciences, and director of the RPI-IBM Artificial Intelligence Research Collaboration. Hendler has authored over 400 books, technical papers, and articles in the areas of Semantic Web, artificial intelligence, cybersecurity, and high-performance processing. Hendler is available to speak with media - simply click on his icon now to arrange an interview today.

Goizueta Faculty Member Uncovers Impact of Remote Learning on Educational Inequality
In 2020, the world went into lockdown. Learning in school became learning from the couch. Rather than listening to teachers in person from behind a desk, high school students had to find a computer to stream their lectures and lessons. What happens to educational inequality in a digital-first, remote-learning environment? Whereas students are traditionally bound by their brick-and-mortar schools and the limitations of funding in those areas, what happens when the walls are removed and students have access to the teachers, knowledge, and peers from other areas? Ruomeng Cui and co-researchers Zhanzhi Zheng from the University of North Carolina at Chapel Hill and Shenyang Jiang from Tongji University decided to find out. In their 2022 paper, Cui and her colleagues looked at the performance of high school students in developing and developed regions of China.

We thought that remote learning might reduce the inequality gap in education because when students are learning offline, they’re restricted by their local resources.

“It’s quite obvious that developing regions don’t have good resources, experienced teachers, or competitive peers—they often have inferior educational resources in comparison to developed regions,” explains Cui, associate professor of information systems and operations management. “We thought the accessibility of remote learning could help reduce this knowledge gap and help students in developing regions improve their learning outcomes.”

Analyzing Education in Developed and Developing Areas

The idea for the paper, “Remote Learning and Educational Inequality,” stemmed from another of Cui’s papers, which looked at the academic productivity of women as a result of the COVID-19 lockdowns. “We wanted to study whether the switch to remote learning impacts educational inequality. Does it make it better or worse?” says Cui.
“We are the first ones to offer empirical evidence on such a granular level about a large-scale data set.”

The group analyzed results from the Chinese college entrance exam from 2018 through 2020, which students take during the last few weeks of high school; the test score is a requirement for undergraduate admission in China. It’s common for high schools to announce the number of students who scored 600 or higher (out of 750 total points). Using 1,458 high school exam results from 20 provinces, the group found that in 2020, when remote learning became the norm, “the number of students scoring above 600 points in developing regions increased by 22.22 percent” in comparison to developed regions.

Remote learning significantly improved learning outcomes of students in developing regions. We should think about encouraging the adoption of remote learning in education.

However, Cui and her co-researchers wanted to go a step further. Because the entrance exams are summaries of student data, they surveyed 1,198 students to drill down and ensure that these results came from remote learning rather than other factors. Respondents were asked to rate aspects of their remote-learning experience, such as access to digital devices, their proficiency in using software, how reliable their internet was, how they interacted with peers and teachers, and their access to online educational resources. The researchers found that students in developing regions were able to better connect with peers and teachers, and the students believed that “their learning efficiency was greater” because of remote learning.

Education inequality is not only a problem in China. It’s everywhere. It’s across the world. Having access to better educational resources online can be applied anywhere.

However, there is one caveat to their findings: Remote learning is beneficial, but students need devices and the infrastructure to support online learning, which is often lacking in developing regions or underserved areas.
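Comparisons like the one the researchers make are typically computed as a difference-in-differences: the change in the outcome for the treated group (developing regions) minus the change for the comparison group (developed regions). A toy sketch, with all counts hypothetical and chosen only to mirror the 22.22 percent relative increase quoted above:

```python
# Hypothetical counts of students scoring 600+ at a representative school
scores_600_plus = {
    "developing": {"2019": 90, "2020": 110},
    "developed":  {"2019": 200, "2020": 200},
}

developing_change = scores_600_plus["developing"]["2020"] / scores_600_plus["developing"]["2019"] - 1
developed_change = scores_600_plus["developed"]["2020"] / scores_600_plus["developed"]["2019"] - 1

print(f"developing: {developing_change:+.2%}")   # +22.22%
print(f"developed:  {developed_change:+.2%}")    # +0.00%
# The gap between the two changes is the difference-in-differences estimate.
print(f"difference-in-differences: {developing_change - developed_change:+.2%}")
```

The real study of course controls for many more factors across 1,458 schools; this only shows the shape of the calculation.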
“We need to support, build, and develop the digital technology capability that enables the effectiveness of remote learning,” says Cui. Are you a reporter looking to know more about the impact COVID had on education and how inequality plays a role in how we educate students during a pandemic? Then let us help with your coverage and questions. Ruomeng Cui is an Associate Professor of Information Systems & Operations Management at Emory University's Goizueta School of Business. Ruomeng is available to speak with media regarding this topic - simply click on her icon now to arrange an interview today.

AI-Generated Content is a Game Changer for Marketers, but at What Cost?
Goizueta’s David Schweidel pitted man against the machine to create SEO web content only to find that providing an editor with bot-generated content trounces the human copywriter every time. Good news for companies looking to boost productivity and save cash, he says. But could there be other hidden costs?

In December 2022, The New York Times ran a piece looking back on the year’s biggest consumer tech updates. The review was mixed. Ownership shifts in the world of social media garnered special mentions, but hardware innovations had been largely “meh,” mused the Times. There was one breakthrough area that warranted attention, however: AI-powered language-processing tech capable of generating natural-looking text, the same technology that powers familiar chatbots. And one such technology could well be poised to “invade our lives in 2023.” Earlier in December, AI research lab OpenAI released the latest update to its Generative Pre-Trained Transformer technology. Its latest iteration, ChatGPT, immediately went viral. Here was an AI assistant that sounded intelligent. Not only could it answer any question thrown its way without supervised training, but when prompted, it could also write blog posts, as well as find and fix bugs in programming code. ChatGPT could draft business proposals and even tell jokes. All of this at a speed that beggared belief. Since its first release in 2020, OpenAI’s GPT technology has powered through a slew of updates that have seen its capabilities leap forward “by light years” in less than 24 months, says Goizueta Professor of Marketing David Schweidel. For businesses looking to harness this rapidly evolving technology, the potential is clearly enormous. But aren’t there also risks that industry and consumers alike will need to navigate?
Schweidel is clear that the academic community and initiatives such as the Emory AI Humanity Initiative have a critical role in asking hard questions—and in determining the limitations and dangers, as well as the opportunities, inherent in tech innovation—because, as he puts it, “these things are going to happen whether we like it or not.”

Man Versus Machine

To that end, Schweidel and colleagues from the Vienna University of Economics and Business and the Modul University of Vienna have put together a study looking at how well natural language generation technologies perform in one specific area of marketing: drafting bespoke content for website search engine optimization, better known as SEO. What they find is that content crafted by the machine, after light human editing, systematically outperforms its human counterparts—and by a staggering margin. Digging through the results, Schweidel and his colleagues can pinpoint an almost 80 percent success rate for appearing on the first page of search engine results with AI-generated content. This compares with just 22 percent for content created by human SEO experts. In other words, AI content passed to a human editor is roughly four times more effective than a skilled copywriter working alone. Reaching these findings meant running two real-time, real-world experiments, says Schweidel. First, he and his colleagues had to program the machine, in this case GPT-2, an earlier incarnation of GPT. GPT relies on natural language generation (NLG), a software process that converts manually uploaded input into authentic-sounding text or content—comparable in some ways to the human process of translating ideas into speech or writing. To prepare GPT-2 for SEO-specific content creation, Schweidel et al.
started with the pre-trained model and then let the machine do the heavy lifting: searching the internet for appropriate results based on the desired keyword, scraping the text of those websites, and fine-tuning GPT-2 to “learn” what SEO content looks like, says Schweidel.

“We partnered with an IT firm and a university to run our field experiments. This meant creating SEO content for their websites using GPT-2 and actual human SEO experts, and then doing A/B testing to see which content was more successful in terms of landing in the top 10 search engine results on Google. So this was an opportunity to put the AI bot to the test in a real-world setting to see how it would perform against people.”

The results point to one clear winner. Not only did content from GPT-2 outperform its human rivals in SEO capability, it did so at scale. The AI-generated content scored a daily median of seven or more hits on the first page of Google search results; the human-written copy didn’t make it onto the first results page at all. On its best day, GPT-2 content showed up in the top 10 search results for 15 of its 19 pages, compared with just two of the nine pages created by the human copywriters: a success rate of just under 80 percent versus 22 percent.

Savings at Scale

The machine-generated content, after being edited by a human, trounces the human in SEO. But that’s not all, says Schweidel. The GPT bot was also able to produce content in a fraction of the time taken by the writers, reducing production time and associated labor costs by more than 90 percent, he says.

“In our experiments, the copywriters took around four hours to write a page, while the GPT bot and human editor took 30 minutes.
Now assuming the average copywriter makes $45K a year for 1,567 hours of work, we calculate that the company we partnered with would stand to save more than $100,000 over a five-year period just by using the AI bot in conjunction with a human editor, rather than relying on SEO experts to craft content. That’s a 91 percent drop in the average cost of creating SEO content. It’s an orders-of-magnitude difference in productivity and costs.”

But there are caveats. First off, there’s the quality of the machine-generated content to consider. For all its mind-boggling capabilities, even the newly released ChatGPT tends to read as somewhat sterile, says Schweidel. That’s a problem both in terms of Google’s guidelines and brand coherence. Human editors are still needed to smooth out copy that can sound a little “mechanical.”

“Google is pretty clear in its guidelines: Content generated by machines alone is a definite no-no. You also need to factor in the uncanny valley effect, whereby something not quite human can come off as weird. Having an editor come in to smooth out AI content is critical to brand voice as well as the human touch.”

Asking the Big Questions

Then there are the moral and metaphysical dimensions of machine learning and creativity, which beg an important question: Just because we can, does that mean we should? Here, Schweidel has grave reservations about the future of ChatGPT and its ilk.

“The potential of this kind of technology is extraordinarily exciting when you think about the challenges we face, from productivity to pandemics, from sustainable growth to climate change. But let’s be very clear about the risks, too. AI is already capable of creating content (audio, visual and written) that looks and feels authentic. In a world that is hugely polarized, you have to ask yourself: How can that be weaponized?”
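The figures reported above are easy to sanity-check. Here is a minimal back-of-envelope script: the page counts, salary, hours and per-page times come from the study as described, while the 200-pages-per-year volume is an assumption for illustration (the study's actual content volume isn't given here).

```python
# Best-day success rates reported in the study.
ai_hits, ai_pages = 15, 19             # GPT-2 pages landing in Google's top 10
human_hits, human_pages = 2, 9         # human-written pages landing in the top 10
ai_rate = ai_hits / ai_pages           # ~0.79, "just under 80 percent"
human_rate = human_hits / human_pages  # ~0.22, "22 percent"

# Labor-cost comparison. Salary, hours and per-page times are from the
# article; pages_per_year is a hypothetical volume for illustration.
hourly_rate = 45_000 / 1_567           # ~$28.72 per hour
copywriter_hours = 4.0                 # hours per page, human copywriter
ai_editor_hours = 0.5                  # hours per page, GPT bot plus human editor
pages_per_year, years = 200, 5

savings = (copywriter_hours - ai_editor_hours) * hourly_rate * pages_per_year * years

print(f"AI: {ai_rate:.0%}, human: {human_rate:.0%}")
print(f"five-year savings at {pages_per_year} pages/year: ${savings:,.0f}")
```

At that assumed volume the five-year saving clears the $100,000 mark cited above. Note that the raw per-page time reduction (0.5 hours versus 4 hours) works out to 87.5 percent; the 91 percent cost figure presumably reflects additional costs not itemized here.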
At the end of the day, says Schweidel, the large language models powering these generative AIs are essentially “stochastic parrots”: trained mimics whose output can be hard to predict. In the wrong hands, he warns, the potential for misinformation, and worse, could well be “terrifying.”

“Shiny new tech is neither inherently good nor bad. It’s human nature to push the boundaries. But we need to ensure that the guardrails are in place to regulate innovation at this kind of pace, and that’s not easy. Governments typically lag far behind OpenAI and companies like it; even academics have a hard time keeping up. The real challenge ahead of us will be about innovating the guardrails in tandem with the tech: innovating our responsible practices and processes. Without effective safeguards in place, we’re on a path to potential destruction.”

Covering AI or interested in knowing more about this fascinating topic? Then let our experts help with your coverage and stories. David Schweidel is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University's Goizueta Business School. Simply click on David's icon now to arrange an interview today.

Can you be hacked while using your mobile device? In a word, yes — here’s how to protect your data
October is Cybersecurity Awareness Month, and being aware of all your devices is as important as ever. Most people are online every day, which opens them up to the threat of being hacked. Whether on a mobile device, laptop or personal computer, everyone needs cyber awareness.

Steven Weldon, director of the Cyber Institute at Augusta University’s School of Computer and Cyber Sciences, said there are many straightforward things that can be done to protect devices, such as using lock screens, keeping operating systems up to date and simply recognizing how, when and where devices are being used.

“Smartphones today are probably the most capable computing device that we have, and we have it on us all the time,” said Weldon. “The data that can be extracted from these devices can be put together to build a pattern of life on us: where we go, what we do and when we do it. All of this data is potentially at risk if we’re not being careful about who gets access to our smartphones. That’s a great reason to lock the screen and require at least a password or PIN to unlock the phone.”

Gokila Dorai, PhD, assistant professor in the School of Computer and Cyber Sciences, suggests using biometrics to enhance security. “I would strongly recommend for women, young adults, even teenagers: if it’s possible for you to have biometrics as a way to unlock your device, then go for that. These unique ways of unlocking a device would add a layer of protection,” said Dorai. She also suggested adding two-factor or multi-factor authentication for an extra layer of security.

Dorai is one of a growing number of experts in the field of mobile forensics, and her research projects are federally funded. In addition, several SCCS faculty are mentoring undergraduate and graduate students working on cutting-edge research related to mobile device security and digital forensics.

When out in public, it’s easy to connect a mobile device to an unprotected Wi-Fi network.
Doing so could expose the sites you visit to a hacker. Weldon suggests people be careful about which apps they use on public Wi-Fi, since apps may expose a lot of personally identifiable information. His suggestion is to use a virtual private network (VPN) to help protect data that’s being transmitted and received.

“We should recognize the data on our smartphones and protect it accordingly,” added Weldon. “Recognizing the value and sensitivity of the data on our smartphones can guide us in how we protect these devices. We may not think as much about the security and privacy of our smartphones as we do about our laptops and desktops. When we think about everything we use our smartphones for, and how ubiquitous they are in our lives, we come to realize just how central they are to today’s digital lifestyle.”

It’s tough to tell when a mobile device has been hijacked, so both Weldon and Dorai suggest paying close attention to any unusual behavior, even small things such as a battery draining faster than usual. Such anomalies are indicators that you may need to take corrective action.

Dorai added that government can do more to protect a person’s privacy. “With the introduction of more and more Internet of Things devices in the market, with several different manufacturers, there’s a lot of user data that’s actually getting exchanged. These days, the most valuable thing in the world is data. So stricter measures are required,” she said. She indicated that industry, academia, government and practitioners need to come together and work on ideas to strengthen security.

“Yes, we want security. We are willing to put up with a little bit of friction for additional security. But we want it easy, and we generally want it free,” said Weldon. “We don’t read licensing agreements, but we would generally be willing to take certain actions, make certain tradeoffs, to be more secure.”

One other major concern is apps in general.
While the Google Play Store and Apple’s App Store routinely remove apps that are out of date or have security vulnerabilities, those apps may still be running on a user’s device. “Mobile applications may also hide from you in plain sight, in the sense that the app icons may not be showing up on the screen, but they are still running in the background,” added Dorai.

In essence, the device user is the first line of defense. Taking all the necessary steps to prevent a third party from getting your information is of the utmost importance in the digital age. “I believe a big part of this discussion is about user awareness. We want that free app, but that app is asking for a lot of permissions. There’s an old saying in cybersecurity: if you are not paying for the product, you are the product. There’s also another saying: if it’s smart, it’s vulnerable,” said Weldon.

Are you a reporter covering Cybersecurity Awareness Month? If so, let us help with your stories. Steven Weldon is the director of the Cyber Institute at the School of Computer and Cyber Sciences at Augusta University and is an expert in the areas of cellular and mobile technology, ethics in computer science, and scripting and automation. Gokila Dorai is an assistant professor in the School of Computer and Cyber Sciences at Augusta University and an expert in mobile/IoT forensics research. Both experts are available for interviews; simply click on either icon to arrange a time today.

Sharing photos of your kids online? Here's what you should consider.
By Emma Richards

Today’s parents are the first to raise children alongside social media, and in this era of likes, comments and shares, they must also decide when to post images of their children online and when to hold off to protect their privacy. The practice of “sharenting,” parents posting images of their children on social media platforms, has drawn attention to the intersection between the rights of parents and the rights of their children in the online world.

Stacey Steinberg, a professor in UF’s Levin College of Law, author and mother of three, says parents need to weigh the right to post their child’s milestones and accomplishments online against the right of a child to dictate their own digital footprint and maintain their privacy.

Steinberg, like many parents, avidly posted photographs of her children online to document their childhoods. When she left her job as a child welfare attorney to become a professor, Steinberg also began writing about her motherhood experiences. She began rethinking posting about her children online, realizing that it could be doing more harm than good. And yet, there was little guidance for parents on what to consider when posting images and how to do so with their children’s safety in mind.

Among the problematic issues: Machine learning and artificial intelligence allow for the collection of information about people from online posts, but there is little control over, or understanding of, how that stored information is being used or how it will impact the next generation. According to Steinberg, a Barclays study found that by the year 2030, nearly two-thirds of all identity theft cases will be related to sharenting. There are also concerns that pedophiles may collect and save photographs of children shared online. For example, one article she reviewed reported that 50% of the images on pedophile image-sharing sites had originated on family blogs and on social media.
Steinberg says parents should model appropriate social media behavior for their children, such as asking permission before taking and posting an image and staying present in the moment rather than living life through a lens or fixating on what’s online.

“I think it’s a danger that we’re not staying in the moment, that we’re escaping to our newsfeed or that we’re constantly posting and seeing who’s liked our images and liked what we’ve said instead of focusing on real connections with the people in front of us,” Steinberg said in an episode of the From Florida Podcast.

While parents serve as the primary gatekeepers for children’s access to the online world, tech companies and policymakers also have roles to play in setting parameters and adopting laws that protect children’s safety. Numerous European countries have already moved in this direction with such concepts as the “right to be forgotten,” which allows people to have information that is no longer relevant or is inaccurate removed from platforms such as Google to protect their name or reputation.

“The United States really would have a hard time creating a right to be forgotten because we have really strong free speech protections and we really value parental autonomy,” Steinberg said. Google has, however, created a form that allows older kids to request that old photographs and content about them be removed from the internet, which Steinberg says is a promising step.

Steinberg would love to see other mechanisms adopted to minimize the amount of data that is collected about children and to ensure artificial intelligence is used responsibly and ethically when collecting online data. In the meantime, parents can make online privacy a topic of discussion with their children and take proactive steps to limit their digital footprints, such as deleting old childhood photos.
“One thing that I really want to encourage families to do is not to fear the technology, but to try to learn about it,” Steinberg said.








