

Small buildings, big impact: OpenCyberCity Director Sherif Abdelwahed, Ph.D., talks about smart city research and the new capabilities of VCU Engineering’s miniature city

Municipalities around the world have invested significant resources to develop connected smart cities that use the Internet of Things (IoT) to improve sustainability, safety and efficiency. With this increased demand for IoT experience, the VCU College of Engineering formed the OpenCyberCity testbed in 2022. The 1:12-scale model city provides a realistic, small-scale cityscape where students and researchers can experiment with new and existing smart city technology. Sherif Abdelwahed, Ph.D., professor of electrical and computer engineering, is director of OpenCyberCity. He recently answered some questions about new developments within the testbed.

The OpenCyberCity is a smart city testbed, but are there any real-life cities that one could call a smart city?

Several real-life locales are considered smart cities because of their extensive use of technology and data-driven initiatives to optimize infrastructure and services. Dubai is one of the most notable: it has implemented smart transportation systems, smart buildings and artificial intelligence to make the city’s operations more efficient. Other well-known smart cities include Singapore and Seoul, which use smart energy management, smart transportation and comprehensive data analytics to improve urban planning and services. Seoul, in particular, has an initiative with smart grids and connected streetlights that VCU Engineering’s own OpenCyberCity testbed is working to implement.

How does the OpenCyberCity address privacy? With so much technology related to monitoring, how are individual citizens protected from these technologies?

Privacy is a major concern for smart cities, and it is one of the main research directions for VCU Engineering’s OpenCyberCity. We are developing several techniques to prevent unwanted surveillance of personal information. Sensitive data is protected by robust protocols and access restrictions that allow only authorized users to view the data.
Our aim is to find a reasonable middle ground between technological progress and privacy rights, staying within legal and ethical bounds. Techniques to address privacy concerns include:

Data Anonymization: Makes it difficult to trace information back to individual identities. Within the testbed, we will evaluate how to protect individual privacy while maintaining data utility, and assess the impact on data quality.

Secure Data Storage and Transmission: Encrypt data to protect it from unauthorized access. Access control mechanisms will be implemented within the testbed’s infrastructure, and we will test different data handling processes and access control models to determine how well they safeguard sensitive data.

Privacy Impact Assessments: Regularly evaluate the potential privacy risks of new smart city projects in order to mitigate them and ensure the ethical handling of data by those with access.

Policy and Regulation Development: Data and insights generated from OpenCyberCity experiments can inform the development of cybersecurity policies and regulations for smart cities.

How is the College of Engineering’s OpenCyberCity testbed different from similar programs at other institutions?

While other universities have similar smart-city programs, each has its own specialty. The VCU College of Engineering’s OpenCyberCity testbed focuses on real-world contexts, creating a physical space where new technologies, infrastructure, energy-efficient transportation and other smart city services can be tested in a controlled environment. Our lab monitors real-time data and develops smart buildings, smart hospitals and smart manufacturing buildings to enhance the city’s technologies. Recent additions to the OpenCyberCity allow for expanded research opportunities such as:

Advanced Manufacturing: Students can apply advanced manufacturing techniques in a controlled environment.
They can also test new materials, processes and automation technologies to improve efficiency and product quality.

Energy Efficiency Testing: Environmental engineers and sustainability experts can evaluate energy consumption patterns within the smart manufacturing unit to implement energy-saving measures and assess their impact on sustainability.

Production Optimization: Manufacturers can use real-time data from the smart manufacturing unit to optimize production schedules, minimize downtime and reduce waste. Predictive maintenance algorithms also help prevent equipment breakdowns.

Education and Training: Hands-on experience with state-of-the-art manufacturing technologies helps train the workforce of the future.

Integration with Smart City Services: Data generated by the manufacturing unit can be integrated with smart city services. For example, production data can inform supply chain management, and energy consumption data can contribute to citywide energy efficiency initiatives.

How has the OpenCyberCity changed in the last year? Is the main focus still data security?

What started as research examining, analyzing and evaluating the security of next-generation (NextG) applications, smart city operations and medical devices has expanded. Data security is now only one aspect of OpenCyberCity. Its scope has grown to encompass broader facets of cybersecurity, such as automation and data analytics in the domain of smart manufacturing systems. The smart manufacturing system implemented in 2023 is something students really enjoy. Thanks to the vendor we used, undergraduate students had the option to develop functionality for various features of the manufacturing plant. Graduate students were also able to research communications protocols and cybersecurity within the smart manufacturing system.

What does the smart manufacturing system entail, and what kind of work is occurring within that system?

An automated system is there for students to work with.
Robot arms, microcontrollers, conveyor belts, ramps, cameras and blocks representing cargo form an environment that emulates a real manufacturing setting. We’re currently brainstorming an expansion of the smart manufacturing system in collaboration with the Commonwealth Cyber Initiative (CCI). We plan to set up two building models, one for manufacturing and one for distribution, linked by a sky-bridge conveyor system that moves items between the locations.

Students work to leverage convolutional neural networks that learn from images. Paired with the advanced cameras, this forms a computer vision system that can accurately place blocks under a variety of lighting conditions, which can be a challenge for other systems. By optimizing the communication protocols that command the smart manufacturing system’s robotic arms, students also get a sense of real-world constraints. The Raspberry Pi that serves as the system’s controller has limited computing power, so finding efficiencies that still allow stability and precision with the arms is key.

Is there an aspect of cybersecurity for these automated systems?

Yes. Devices, sensors and communication networks integral to the IoT found in smart manufacturing systems and smart cities generate and share vast amounts of data, which makes them vulnerable to cybersecurity threats. Some of the issues we look to address include:

Data Privacy: Smart systems collect and process vast amounts of data, including personal and sensitive information. Protecting this data from unauthorized access and breaches is a top priority.

Device Vulnerabilities: Many IoT devices used in smart systems have limited computational resources and may not receive regular security updates, making them vulnerable to exploitation.

Interconnectedness: The interconnected nature of smart city components increases the attack surface. A breach in one system can potentially compromise the entire network.
Malware and Ransomware: Smart systems are susceptible to malware and ransomware attacks, which can disrupt services and extort organizations for financial gain.

Insider Threats: Employees with malicious intent, or simply negligent ones, can pose significant cybersecurity risks.

Potential solutions to these problems include data encryption, frequent software updates, network segmentation with strict access controls, real-time intrusion detection (with automated responses to detected threats), strong user authentication, security training for users and well-designed incident response plans.
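Data anonymization, the first technique Abdelwahed lists, is often implemented as salted pseudonymization. The sketch below is purely illustrative, not OpenCyberCity code; the function names and record fields are invented. The idea is that identifiers are replaced with stable pseudonyms, so records stay linkable for aggregate analytics while the original identity is removed from the data.

```python
import hashlib
import secrets

def make_pseudonymizer(salt=None):
    """Return a function mapping a personal identifier to a stable pseudonym.
    The random salt prevents dictionary attacks: without it, an attacker
    could re-identify common identifiers by hashing guesses."""
    salt = salt or secrets.token_bytes(16)
    def pseudonymize(identifier: str) -> str:
        digest = hashlib.sha256(salt + identifier.encode("utf-8"))
        return digest.hexdigest()[:16]  # shortened for readability
    return pseudonymize

# Example: strip identity from smart-meter records while keeping them linkable.
pseudo = make_pseudonymizer()
records = [
    {"resident_id": "alice@example.com", "zone": "A3", "kwh": 1.2},
    {"resident_id": "alice@example.com", "zone": "A3", "kwh": 0.9},
]
anonymized = [{**r, "resident_id": pseudo(r["resident_id"])} for r in records]
# Both records still share one pseudonym, so per-resident aggregation works,
# but the original identity no longer appears anywhere in the data.
```

Note the trade-off the interview mentions: the coarser the pseudonym (or the more fields are generalized away), the stronger the privacy but the lower the data utility.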


Exploring the Depths: How AI is Revolutionizing Seafloor Research

In recent years, there has been a significant shift in the way seafloor research is conducted, thanks to groundbreaking advancements in artificial intelligence (AI) technology. The depths of our oceans have always been a mystery, but with the help of AI, scientists and researchers can now explore and uncover the hidden secrets that lie beneath the surface.

With funding from the Department of Defense, University of Delaware oceanographer Art Trembanis and others are using artificial intelligence and machine learning to analyze seafloor data from the Mid-Atlantic Ocean. The goal is to develop robust machine-learning methods that can accurately and reliably detect objects in seafloor data.

“You can fire up your phone and type dog, boat or bow tie into a search engine, and it's going to search for and find all those things. Why? Because there are huge datasets of annotated images for that,” he said. “You don't have that same repository for things like subway car, mine, unexploded ordnance, pipeline, shipwreck, seafloor ripples, and we are working to develop just such a repository for seabed intelligence.”

Trembanis is able to talk about this research and the impact it could have on our day-to-day lives. He can be contacted by clicking his profile.

“You have commercial companies that are trying to track pipelines, thinking about where power cables will go or offshore wind farms, or figuring out where to find sand to put on our beaches,” said Trembanis. “All of this requires knowledge about the seafloor. Leveraging deep learning and AI and making it ubiquitous in its applications can serve many industries, audiences and agencies with the same methodology to help us go from complex data to actionable intelligence.”

He has appeared in The Economic Times, Technical.ly and Gizmodo.
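Seafloor ripples, one of the target classes Trembanis mentions, have a periodic signature that even a simple spectral method can pick out of bathymetry data. The sketch below is a toy illustration, not the team’s actual pipeline: it estimates the dominant ripple wavelength of a synthetic depth profile with a Fourier transform.

```python
import numpy as np

def dominant_wavelength(depths, spacing_m):
    """Estimate the dominant ripple wavelength (meters) of an evenly
    sampled 1-D bathymetry profile via the discrete Fourier transform."""
    detrended = depths - depths.mean()          # remove the mean depth
    spectrum = np.abs(np.fft.rfft(detrended))   # amplitude per frequency bin
    freqs = np.fft.rfftfreq(len(depths), d=spacing_m)
    peak = spectrum[1:].argmax() + 1            # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# Synthetic profile: 0.5 m ripples sampled every 0.05 m, plus sensor noise.
x = np.arange(0, 20, 0.05)
profile = (0.1 * np.sin(2 * np.pi * x / 0.5)
           + 0.01 * np.random.default_rng(0).normal(size=x.size))
print(round(dominant_wavelength(profile, 0.05), 2))  # 0.5
```

Real detection pipelines use learned features rather than a single spectral peak, but the example shows why ripples are a comparatively learnable class: their signal is strongly structured.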


UC Irvine expert on metacognition: Megan Peters

How do our brains take in complex information from the world around us to help us make decisions? And what happens when there’s a mismatch between how well your brain thinks it’s performing this function and how well it’s actually doing? UC Irvine cognitive scientist Megan Peters takes a deep dive into metacognition, our ability to monitor our own cognitive processing. To reach Prof. Peters, contact Heather Ashbach at hashbach@uci.edu or 949-284-1577.

“Our brains are fantastically powerful information processing systems. They take in information from the world around us through our eyes, ears, and other senses, and they process or transform that sensory information into rich internal representations — representations that we can then use to make useful decisions, to navigate effectively without running into things, and ultimately, to stay alive. And interestingly, our brains also can tell us when they’re doing a good job with all this processing, through a process called metacognition, or our ability to monitor our own cognitive processing.

My name is Megan Peters, and I’m an associate professor in the Department of Cognitive Sciences at UC Irvine. I’m also a Fellow in the Canadian Institute for Advanced Research Brain, Mind, & Consciousness program, and I am president and chair of the board at Neuromatch. My research seeks to understand metacognition — how it works in the brain, and how it works at a computational or algorithmic level — and it also seeks to understand what this metacognitive processing might have to do with the conscious experiences we have of our environments, of each other, and of ourselves. So in our research group, we use a combination of behavioral experiments with humans, brain imaging (like MRI scans), and computational approaches like mathematical modeling and machine learning or artificial intelligence, to try to unravel these mysteries.
I think my favorite overall line of research right now has to do with cases where our brains’ self-monitoring sometimes seems to go wrong. What I mean is, sometimes your brain “metacognitively” computes how well it thinks you’re doing at this “sensory information processing” task, but this ends up being completely different from how well you’re actually doing.

Imagine it this way: you’re driving down a foggy road, at night in the dark. You probably can’t see very well, and you’d hope that your brain would also be able to tell you, “I can’t see super well right now, I should probably slow down.” And most of the time, your brain does this self-monitoring correctly, and you do slow down. But sometimes, under some kinds of conditions or visual information, your brain miscalculates, and it erroneously tells you, “Actually, you can see just fine right now!” So this is a sort of “metacognitive illusion”: your brain is telling you, “You’re doing great, you can see very clearly!” when in reality, the quality of the information it’s receiving, and the processing it’s doing, is really poor — in essence, that means you can feel totally confident in your ability to accurately process the world around you, when in fact you’re interpreting the world totally incorrectly.

Now normally, in everyday life, this doesn’t happen, of course. But we can create conditions in the lab where this happens very robustly, which helps us understand when and how it might happen in the real world, too, and what the consequences might be. So this is fascinating both because it is a powerful tool for studying how your brain constructs that metacognitive feeling of confidence, and also because — in theory — it means that your subjective, conscious feeling of confidence might be doing something really different than just automatically or directly reading out how reliably your brain is processing information.
And that could eventually provide a better way to investigate how our so-called phenomenological or conscious experiences can arise from activity patterns in your brain at all.”
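Computational models of confidence of the kind Peters describes are often built on signal detection theory. The toy simulation below is an illustration of the concept, not her lab’s code: a simulated observer whose accuracy depends on its real sensory noise, but whose confidence is computed from the noise level it believes it has. When the two diverge, confidence inflates while accuracy stays put — the “foggy road” miscalibration in miniature.

```python
import numpy as np

def simulate(n_trials, sigma_true, sigma_assumed, mu=1.0, seed=0):
    """Two-choice observer. Accuracy depends on the real sensory noise
    (sigma_true); reported confidence is computed from the noise level
    the observer *believes* it has (sigma_assumed)."""
    rng = np.random.default_rng(seed)
    stim = rng.choice([-mu, mu], size=n_trials)            # true stimulus
    evidence = stim + rng.normal(0, sigma_true, n_trials)  # noisy percept
    choice = np.where(evidence > 0, mu, -mu)
    accuracy = (choice == stim).mean()
    # Bayesian posterior probability of "+" under the assumed noise model:
    p_pos = 1.0 / (1.0 + np.exp(-2 * mu * evidence / sigma_assumed**2))
    confidence = np.maximum(p_pos, 1 - p_pos).mean()       # confidence in choice
    return accuracy, confidence

# A well-calibrated observer vs. one that underestimates its own noise.
acc_ok, conf_ok = simulate(100_000, sigma_true=2.0, sigma_assumed=2.0)
acc_bad, conf_bad = simulate(100_000, sigma_true=2.0, sigma_assumed=0.5)
# Identical accuracy, but the miscalibrated observer is far more confident:
# a "metacognitive illusion" in miniature.
```

The dissociation between the two confidence readouts, at identical accuracy, is exactly the kind of gap type-2 (metacognitive) analyses are designed to measure.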


Optical research illuminates a possible future for computing technology

Nathaniel Kinsey, Ph.D., Engineering Foundation Professor in the Department of Electrical and Computer Engineering (ECE), is leading a group to bring new relevance to a decades-old computing concept called a perceptron. Perceptrons, which emulate the biological neurons of the body’s central nervous system, are an algorithmic model that produces a binary classification of its inputs. When combined within a neural network, perceptrons become a powerful component for machine learning. However, instead of using traditional digital processing, Kinsey seeks to build this system using light, with funding from the Air Force Office of Scientific Research. This “nonlinear optical perceptron” is an ambitious undertaking that blends advanced optics, machine learning and nanotechnology.

“If you put a black sheet outside on a sunny day, it heats up, causing properties such as its refractive index to change,” Kinsey said. “That’s because the object is absorbing various wavelengths of light. Now, if you design a material that is orders of magnitude more complex than a sheet of black plastic, we can use this change in refractive index to modify the reflection or transmission of individual colors – controlling the flow of light with light.”

Refractive index is a measure of a material’s ability to bend light. Researchers can harness those refractive qualities to create a switch similar to the binary 1-0 base of digital silicon chip computing. Kinsey and collaborators from the U.S. National Institute of Standards and Technology, including his former VCU Ph.D. student Dhruv Fomra, are currently working to design a new kind of optically sensitive material. Their goal is to engineer and produce a device combining a unique nonlinear material, called epsilon-near-zero, and a nanostructured surface to offer improved control over the transmission and reflection of light.
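The perceptron concept Kinsey’s device implements optically can be stated in a few lines of code. For contrast with the optical version, here is the standard digital form as a textbook sketch (not VCU’s implementation): a weighted sum followed by a hard threshold, trained by nudging the weights toward misclassified examples. The threshold is the nonlinearity the epsilon-near-zero switch is meant to provide in the optical domain.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron learning rule: for each misclassified example,
    move the weights toward it until the two classes are separated."""
    w = np.zeros(X.shape[1] + 1)                    # weights plus bias term
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input of 1
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0           # hard-threshold nonlinearity
            w += lr * (target - pred) * xi          # update only when wrong
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w > 0).astype(int)

# A linearly separable toy problem: logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w = train_perceptron(X, y)
print(predict(w, X))  # [0 1 1 1]
```

In the optical analogue, the weighted sum corresponds to linear interference of light waves, and the threshold to the intensity-dependent switching of the nonlinear material.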
Kinsey’s prior research has demonstrated that epsilon-near-zero materials combine unique features that allow their refractive index to be modified quite radically – from 0.3 to 1.3 under optical illumination – which is roughly equivalent to the difference between a reflective metal and transparent water. While an effective binary switch, the large change in index requires a lot of energy (~1 millijoule per square centimeter). By combining epsilon-near-zero with a specifically designed nanostructure exhibiting surface lattice resonance, Kinsey hopes to reduce the energy required to activate the response.

The unique response of a nanostructure exhibiting surface lattice resonance allows light to effectively be bent 90 degrees, arriving perpendicular to the surface while being split into two waves that travel along the surface. When a large area of the nanostructure is illuminated, the waves traveling along the surface mix, interfering constructively or destructively with each other. This interference can produce strong modification to reflection and transmission that is very sensitive to the geometry of the nanostructure, the wavelength of the incident light and the refractive index of the surrounding materials. The mixing of optical signals along the surface can also selectively switch regions of the epsilon-near-zero material, thereby performing processing operations.

A key aspect of Kinsey’s work is to build nonlinear components, like diodes and transistors, that use optical signals instead of electrical ones. Transistors and other traditional electronic components are nonlinear by default because electrical charges strongly interact with each other (for example, two electrons will tend to repel each other). Creating optical nonlinear components is challenging because photons do not strongly interact; they simply pass through each other.
To correct for this, Kinsey employs materials whose properties change in response to incident light, but the interaction is weak and thus requires large energies to utilize. Kinsey’s device aims to reduce that energy requirement while simultaneously shaping light to perform useful operations through the use of the nanostructured surface and lightwave interference.

The United States Department of Defense sees optical computing as the next step in military imaging. Kinsey’s work, while challenging, has potential to yield an enormous payoff. “Let’s say you want to find a tank within an image,” Kinsey said. “Using a camera to capture the scene, translate that image into an electrical signal and run it through a traditional, silicon-circuit-based computer processor takes a lot of processing power, especially when you try to detect, transfer and process higher pixel resolutions. With the nonlinear optical perceptron, we’re trying to discover if we can perform the same kinds of operations purely in the optical domain without having to translate anything into electrical signals.”

Linear optical systems, like metasurfaces and photonic integrated circuits, can already process information using only a fraction of the power of traditional tools. Building nonlinear optical systems would expand the functionality of these existing linear systems, making them ideal for remote sensing platforms on drones and satellites. Initially, the resolution would not be as sharp as traditional cameras, but optical processing built into the device would translate an image into a notification of, for example, tanks or troops on the move. Kinsey suggests optical-computing surveillance would make an ideal early warning system to supplement traditional technology.
“Elimination or minimization of electronics has been a kind of engineering holy grail for a number of years,” Kinsey said. “For situations where information naturally exists in the form of light, why not have an optical-in and optical-out system without electronics in the middle?”

Linear optical computing uses minimal power but is not capable of complex image processing. Kinsey’s research seeks to answer whether the additional power requirement of nonlinear optical computing is worthwhile given its ability to handle more complex processing tasks. Nonlinear optical computing could also be applied to a number of non-military applications. In driverless cars, optical computing could make better light detection and ranging equipment (better known as LIDAR). Dark field microscopy already uses related optical processing techniques for ‘edge detection’ that allows researchers to directly view details without the electronic processing of an image. Telecommunications could also benefit from optical processing, using optical neural networks to read address labels and send data packets without having to do an optical-to-electrical conversion.

The concept of optical computing is not new, but interest (and funding) in theory and development waned in the 1980s and 1990s when silicon chip processing proved to be more cost effective. Recent years have seen many advancements in computing, but the more recent slowdown in scaling of silicon-based technologies has opened the door to new data processing technologies. “Optical computing could be the next big thing in computing technology,” Kinsey said. “But there are plenty of other contenders — such as quantum computing — for the next new presence in the computational ecosystem.
Whatever comes up, I think that photonics and optics are going to be more and more prevalent in these new ways of computation, even if it doesn’t look like a processor that does optical computing.” Kinsey and other researchers working in the field are in the early stages of scientific exploration into these optical computing devices. Consumer applications are still decades away, but with silicon-based systems reaching the limit of their potential, the future for this light-based technology is bright.


#Expert Insight: US Firms 20 Years Out of Date on Customer Diversity

Diversity, equity, and inclusion have steadily risen to the top of corporate agendas in the U.S. and elsewhere over the last few years. As of 2022, all 100 of the Fortune 100 companies had clearly defined diversity, equity, and inclusion (DEI) initiatives outlined on their websites—good news for their workforce, suppliers, and distributors. But what about their customers?

A landmark new study by Goizueta Business School’s Omar Rodriguez-Vila finds that while intra-organizational DEI efforts are robust, many U.S. firms lag behind societal reality when it comes to fully representing diversity in their marketplace actions. Rodriguez-Vila finds that in terms of skin type, body type, and physical (dis)ability, actions by the top 50 American brands are a good 20 years behind the current demographic makeup of the country.

Rodriguez-Vila, a professor in the practice of marketing at Goizueta, has teamed with Dionne Nickerson of Indiana University’s Kelley School of Business and Sundar Bharadwaj of the University of Georgia’s Terry College of Business to measure brand inclusivity, a term he and his colleagues have coined to describe how well brands serve underrepresented consumer communities. Inclusive brands, he says, are those that “enhance consumers’ perceptions of acceptance, belonging, equity, and respect through their actions and market offerings.”

To assess how well some of the biggest firms are doing in terms of this kind of marketplace inclusivity, Rodriguez-Vila worked with a team of full-time MBA and undergraduate students to assess the 50 most valuable brands across 10 consumer-facing industries. Using machine learning and human coders, they analyzed these brands’ social media posts on Facebook, Instagram, and TikTok, looking for patterns of representational diversity across four measures: skin type, body type, hair type, and physical ability.
Altogether, they processed just short of 11,000 social media posts made between June 2021 and July 2022. What they found is striking.

“We used our data to apply the Simpson’s Diversity Index (SDI) to the population of social media posts by the largest brands in the United States. The SDI is a commonly used equation to measure the diversity of a population,” says Rodriguez-Vila. According to the 2020 U.S. Census, the racial diversity index of the country is 61 percent and has been consistently increasing over the past 20 years. Applying the SDI calculation to measure the diversity in social media messages is a novel idea and one that provides clarity on the state of inclusion in brand communications, he adds.

“We found that the racial diversity index of social media messages by the top U.S. brands was just 41 percent. The last time the racial diversity index was in that range was in the year 2000,” says Rodriguez-Vila.

In other words, the racial diversity these brands are collectively representing in their messages is 20 years behind the reality of the country. Interestingly, this lag between representation and demographic reality is common to brands in virtually all of the industries studied—from airlines to fashion, consumer packaged goods to financial services, hospitality to retail. The only sector that bucks the trend in any substantive way, he says, is beauty; even then, this is likely only because beauty firms have come under fire for underrepresenting Black and non-white customers in the recent past.

“Brands’ social media is typically more nuanced and comprehensive than advertising, so it’s more telling as a measure of what they prioritize. And by this measure, we’re seeing systemic bias across a majority of industries,” says Rodriguez-Vila.
“Some, like beauty, fare better than others, but then beauty arguably has the strongest business case for diversity.”

That being said, there is a robust business case for organizations across all industries to do better in marketplace inclusion. Not only does representational diversity have the potential to open up new markets, new customer bases, and areas for expansion, but “feeling represented and included matters to everyone,” says Rodriguez-Vila.

“To understand the importance of inclusion to customers we used a discrete choice model where people made trade-offs between price and a collection of product features in order to understand the factors that motivated them to make a purchase,” he explains. “We tested a sample of consumers looking to buy sportswear, and we added representation of diversity and inclusion as a characteristic, to see if it had any impact on their choices.”

Again, the results are stunning. On average, 51 percent of customers took inclusion into account as a primary driver of athletic apparel choices. Inclusion was a priority driver of choice among 38 percent of consumers in historically well-represented communities—slim, white, able-bodied people. When Rodriguez-Vila and his colleagues expanded the analysis to other historically under-represented groups, they found a significantly greater impact. Here, inclusion was a primary driver among 61 percent of plus-size, Black consumers and for 87 percent of consumers who identified as non-binary. In other words, inclusion can be a critically important factor for a majority of customers who are deciding whether or not to purchase products and services.

The marketplace is changing, says Rodriguez-Vila, and brands need new ways of understanding their customer base if they are to avoid missing out on opportunities. To this end, he, Nickerson and Bharadwaj are working with three of the firms in their study, piloting a range of interventions designed to accelerate marketplace inclusion.
They have partnered with Sephora, Conde Nast, and Campbells to roll out specific practices in both the workplace and the marketplace—from advocacy to communication and commercial practices, to things like greater diversity in marketing operations and in talent recruitment. Early indicators are promising, says Rodriguez-Vila.

“Our work is set to deliver tools that will help firms normalize and institutionalize marketplace inclusion as a function of their day-to-day operations. And it’s exciting to see a shift in thinking about DEI—from an exclusive focus on the workplace and how you eliminate bias within the organization, to practices that are geared also to eliminating bias in the way you serve markets.”

Looking to know more? Connect with Omar Rodriguez-Vila today. Simply click on his icon to arrange a time to talk.
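The Simpson’s Diversity Index the team applied is straightforward to compute. The sketch below uses the with-replacement (Gini-Simpson) form, 1 minus the sum of squared group proportions, which is the form the U.S. Census Bureau’s diversity index is based on; the category counts here are invented purely for illustration.

```python
def simpsons_diversity_index(counts):
    """Probability that two individuals drawn at random (with replacement)
    from the population belong to different categories: 1 - sum(p_i^2).
    Returns 0.0 when everyone is in one category, approaching 1.0 as the
    population spreads evenly over many categories."""
    total = sum(counts)
    return 1.0 - sum((n / total) ** 2 for n in counts)

# Hypothetical counts of people represented in one brand's posts, by group.
posts_by_group = [700, 150, 80, 50, 20]  # illustrative numbers only
print(round(simpsons_diversity_index(posts_by_group), 2))  # 0.48
```

Computed over a brand’s posts, a value like this can then be compared directly against the Census figure for the population, which is how a gap such as 41 percent versus 61 percent is established.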


Georgia Southern adding two engineering doctorates this fall

Georgia Southern University is launching two new engineering doctorates – a Ph.D. in applied computing degree and a Ph.D. in engineering – after approval of the programs this week from the University System of Georgia’s Board of Regents.

With almost 4,000 students in its programs, Georgia Southern’s Allen E. Paulson College of Engineering and Computing identified the need for the new graduate degrees to sustain growth in the discipline, continue to aid workforce development in the region, add substantially to the university’s research capabilities, and provide additional teacher-scholars for Georgia.

“In line with Georgia Southern’s strategic pillars, the new Ph.D. programs will greatly enhance the University’s research capabilities and further advance key partnerships in the region,” said Carl Reiber, Ph.D., Georgia Southern’s provost and vice president for academic affairs. “A strong Ph.D. program improves faculty recruiting and is a prerequisite for applying for research grants from sources such as the National Science Foundation, the National Institutes of Health, the Department of Energy and the Department of Defense.”

The engineering Ph.D. program will have concentrations in civil, electrical, advanced manufacturing and mechanical engineering, and will fuel future multidisciplinary research synergies with other departments and centers within Georgia Southern in fields such as natural sciences, environmental sustainability, public health and education. Greater scholarly collaborations with sister institutions within the university system and beyond are also envisioned. The Ph.D. in engineering program will have a positive impact on the economic and technological development of Southeast Georgia, contributing significantly to the growth of the I-16 technology corridor.

The Ph.D. in applied computing degree program will be offered jointly by the Department of Computer Science and the Department of Information Technology within the Allen E.
Paulson College of Engineering and Computing at Georgia Southern University. The program will provide students with the requisite foundation to conduct basic and applied research to solve advanced technical problems in computing foundations, cybersecurity and machine learning. The program aims to promote the education of individuals who will become exceptional researchers, high-quality post-secondary educators, and innovative leaders and entrepreneurs in the field of applied computing. It will advance research and the generation of new knowledge in applied computing and support the growing knowledge-based economy in Southeast Georgia.

The mission of the Ph.D. in applied computing degree program is to ensure student, graduate and faculty success by preparing graduates with the skills and depth of knowledge to advance the computing disciplines through application and scholarship. It will mentor students who will support faculty in their scholarly pursuits as they prepare to assume professional computing and computing-related positions that utilize their applied technical skills, problem-solving aptitude and scholarly abilities upon graduation.

“The addition of these two new degree programs is part of Georgia Southern University’s commitment to be a world-class institution that provides a population of advanced graduates who can contribute to regional economic development and public-impact research,” Reiber said. “The programs will enhance the vitality and growth of the bachelor’s and master’s computer science and information technology degree programs by expanding the academic and research missions of the Allen E. Paulson College of Engineering and Computing."
For more information about these new engineering doctorates coming to Georgia Southern this fall, or to speak with Carl Reiber, Ph.D., Georgia Southern’s provost and vice president for academic affairs, simply reach out to Georgia Southern's Director of Communications Jennifer Wise at jwise@georgiasouthern.edu to arrange an interview today.


Researchers fight cybercrime with new digital forensic tools and techniques

Irfan Ahmed, Ph.D., associate professor of computer science, provides digital forensic tools — and the knowledge to use them — to the good guys fighting the never-ending cybersecurity war. Ahmed is director of the Security and Forensics Engineering (SAFE) Lab within the Department of Computer Science at VCU Engineering. He leads a pair of interrelated projects funded by the U.S. Department of Homeland Security (DHS) aimed at keeping important industrial systems safe from the bad guys — and shows that the same tools crafted for investigating cyber attacks can be used to probe other crimes. The goal of cyber attacks on physical infrastructure may be to cause chaos by disrupting systems and/or to hold systems for ransom. The SAFE lab focuses on protecting industrial control systems used in the operation of nuclear plants, dams, electricity delivery systems and a wide range of other elements of critical infrastructure in the U.S. The problem isn’t new: In 2010, the Stuxnet computer worm targeted centrifuges at Iranian nuclear facilities before getting loose and infecting “innocent” computers around the world. Cyber attacks often target a portion of the software architecture known as the control logic. Control logic is vulnerable in that one of its functions is to receive instructions from the user and hand them off to be executed by a programmable logic controller. For instance, the control logic monitoring a natural gas pipeline might be programmed to open a valve if the system detects pressure getting too high. Programmers can modify the control logic — but so can attackers. One of Ahmed’s DHS-supported projects, called “Digital Forensic Tools and Techniques for Investigating Control Logic Attacks in Industrial Control Systems,” allows him to craft devices and techniques that cyber detectives can use in their investigations of attacks on sensitive critical infrastructure.
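The pipeline scenario above can be sketched as a minimal scan-cycle rule. This is an illustrative Python sketch only; the pressure threshold and hysteresis band are assumed values, real control logic runs on a programmable logic controller rather than in Python, and nothing here represents the SAFE lab's actual code.

```python
# Illustrative PLC-style control logic (hypothetical values, not SAFE lab code).
PRESSURE_LIMIT = 8.0  # bar; assumed relief threshold

def control_step(pressure_reading: float, valve_open: bool) -> bool:
    """One scan cycle: open the relief valve when pressure exceeds the limit,
    close it again once pressure falls back into a safe band."""
    if pressure_reading > PRESSURE_LIMIT:
        return True                       # command: open the relief valve
    if pressure_reading < 0.9 * PRESSURE_LIMIT:
        return False                      # command: close the valve
    return valve_open                     # hysteresis: keep the current state

# An attacker who can rewrite this logic could, for example, invert the first
# comparison so the valve stays shut as pressure keeps rising.
```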
Investigating such attacks, he explains, is an under-researched area, as most of the emphasis to date has been on the prevention and detection of cyber attacks. “The best scenario is to prevent the attacks on industrial systems,” Ahmed said. “But if an attack does happen, then what? This is where we try to fill the gap at VCU. And the knowledge that we gain in a cyber attack investigation can further help us to detect or even prevent similar attacks.” In the cat-and-mouse world of cyber security, the way cybercriminals work is in constant evolution, and Ahmed’s SAFE lab pays close attention to the latest developments by malefactors. For instance, an attacker may go for a more subtle approach than modifying the original control logic. An attack method called return-oriented programming sees the malefactor using the existing control logic code, but artfully switching the execution sequence of the code. Other attackers might insert their malware into another area of the controller, programmed to run undetected until it can replace the function of the original control logic. Attackers are always coming up with new methods, but each attack leaves evidence behind. The SAFE lab examines possible attack scenarios through simulations. Scale models of physical systems, including an elevator and a belt conveyor system, are housed at the SAFE lab to help facilitate this. The elevator is a four-floor model with inside and outside buttons feeding into a programmable logic controller. The conveyor belt is more advanced, equipped with inductive, capacitive and photoelectric sensors and able to sort objects. The tools and methods developed for investigating cyber attacks can also be useful in tracking down other malefactors. That’s where Ahmed’s second DHS-funded project comes in.
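The sorting behavior of a conveyor like the lab's model can be illustrated with a toy decision function. The sensor roles below (inductive fires on metal, capacitive on most solid materials, photoelectric on any object breaking the beam) are common textbook semantics, assumed here for illustration rather than taken from the lab's actual configuration.

```python
# Toy sorting decision for a sensor-equipped conveyor (assumed sensor roles).
def classify(inductive: bool, capacitive: bool, photoelectric: bool) -> str:
    """Combine three sensor readings into a sorting decision.
    photoelectric: an object is breaking the light beam
    inductive:     the object is metallic
    capacitive:    the object registers as a solid material
    """
    if not photoelectric:
        return "no-object"
    if inductive:
        return "metal"
    if capacitive:
        return "non-metal"
    return "unknown"
```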
It’s called “Data Science-integrated Experiential Digital Forensics Training based-on Real-world Case Studies of Cybercrime Artifacts.” Ahmed is the principal investigator, working with co-PI Kostadin Damevski, Ph.D., associate professor of computer science. The goal is to keep law enforcement personnel abreast of the latest trends in the field of cybercrime investigation and to equip them with the latest tools and techniques, including those developed in the SAFE lab. “For example, investigators often have to go through thousands of images, or emails or chats, looking for something very specific,” Ahmed said. “We believe the right data science tools can help them to narrow down that search.” The FBI and other law enforcement agencies already have dedicated cybersleuthing units; the Virginia State Police have a computer evidence recovery section in Richmond. Ahmed and Damevski are arranging sessions showing investigators how techniques from data science and machine learning can make investigations more efficient by sorting through the mounds of digital evidence that increasingly is a feature of modern crime.
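The idea of narrowing a search through thousands of images, emails or chats can be sketched as a simple relevance ranking. This is a minimal term-frequency illustration of the narrowing concept, not the tools Ahmed and Damevski actually teach; production triage would use trained classifiers and proper text processing.

```python
# Minimal sketch of data-science-assisted evidence triage: rank documents by
# how often they mention investigator-supplied terms (illustrative only).
from collections import Counter

def rank_by_relevance(documents: list[str], terms: list[str]) -> list[int]:
    """Return document indices ordered from most to least relevant."""
    scored = []
    for i, doc in enumerate(documents):
        words = Counter(doc.lower().split())
        scored.append((sum(words[t.lower()] for t in terms), i))
    return [i for _, i in sorted(scored, reverse=True)]
```

An investigator could then review only the top-ranked items instead of reading the whole corpus.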


Ask an Expert: Is the "AI Moratorium" too far reaching?

Recent responses to ChatGPT have featured eminent technologists calling for a six-month moratorium on the development of “AI systems more powerful than GPT-4.” Dr. Jeremy Kedziora, PieperPower Endowed Chair in Artificial Intelligence at Milwaukee School of Engineering, supports a middle-ground approach between unregulated development and a pause. He says, "I do not agree with a moratorium, but I would call for government action to develop regulatory guidelines for AI use, particularly for endowing AIs with actions." Dr. Kedziora is available as a subject matter expert on the recent "AI moratorium" called for by tech leaders. According to Dr. Kedziora: There are good reasons to call for additional oversight of AI creation: Large deep learning or reinforcement learning systems encode complicated relationships that are difficult for users to predict and understand. Integrating them into daily use by billions of people creates a complex adaptive system whose behavior is even more difficult for planners to anticipate, predict, and plan for. This is likely fertile ground for unintended – and bad – outcomes. Rather than outright replacement, a very real possibility is that AI-enabled workers will have sufficiently high productivity that we’ll need fewer workers to accomplish tasks. The implication is that there won’t be enough jobs for those who want them. This means that governments will need to seriously consider proposals for universal basic income (UBI) and work to limit economic displacement, work which will require time and political bargaining. I do not think it is controversial that we would not want a research group at MIT or Caltech, or anywhere else, developing an unregulated nuclear weapon. Given the difficulty in predicting its impact, AI may well be in the same category of powerful technology, suggesting that its creation should be subject to the democratic process.
At the same time, there are some important things to keep in mind regarding ChatGPT-like AI systems that suggest there are inherent limits to their impact: Though ChatGPT may appear – at times – to pass the famous Turing test, this does not imply these systems ‘think,’ or are ‘self-aware,’ or are ‘alive.’ The Turing test aims to avoid answering these questions altogether by simply asking if a machine can be distinguished from a human by another human. At the end of the day, ChatGPT is nothing more than a bunch of weights! Contemporary AIs – ChatGPT included – have very limited levers to pull. They simply can’t take many actions. Indeed, ChatGPT’s only action is to create text in response to a prompt. It cannot do anything independently. Its effects, for now, are limited to passing through the hands of humans and to the social changes it could thereby create. The call for a moratorium emphasizes ‘control’ over AI. It is worth asking just what this control means. Take ChatGPT as an example – can its makers control responses to prompts? Probably only in a limited fashion at best, with less and less ability as more people use it. There simply aren’t resources to police its responses. Can ChatGPT’s makers ‘flip the off switch?’ Absolutely – restricting access to the API would effectively turn ChatGPT off. In that sense, it is certainly under the same kind of control as humans subject to government are. Keep in mind that there are coordination problems – just because there is an AI moratorium in the US does not mean that other countries – particularly US adversaries – will stop development. And as others have said: “as long as AI systems have objectives set by humans, most ethics concerns related to artificial intelligence come from the ethics of the countries wielding them.” There are definitional problems with this sort of moratorium – who would be subject to it? Industry actors? Academics?
The criterion those who call for the moratorium use is “AI systems more powerful than GPT-4.” What does “powerful” mean? Enforcement requires drawing boundaries around which AI development is subject to a moratorium – without those boundaries, how would such a policy be enforced? It might already be too late – some already claim that they’ve recreated ChatGPT. There are two major groups to think about when looking to develop regulatory solutions for AI: academia and industry. There may already be good vehicles for regulating academic research, for example oversight of grant funding. Oversight of AI development in industry is an area that requires attention and application of expertise. If you're a journalist covering artificial intelligence, then let us help. Dr. Kedziora is a respected expert in Data Science, Machine Learning, Statistical Modeling, Bayesian Inference, Game Theory and all things AI. He's available to speak with the media - simply click on the icon now to arrange an interview today.


Aston University and asbestos consultancy to use AI to improve social housing maintenance

• Aston University and Thames Laboratories enter 30-month Knowledge Transfer Partnership
• Will use machine learning and AI to create a maintenance prioritisation system
• Collaboration will reduce costs and emissions, enhance productivity and improve residents' satisfaction

Aston University is teaming up with asbestos consultancy Thames Laboratories (TL) to improve the efficiency of social housing repairs. There are over 1,600 registered social housing providers in England, managing in excess of 4.4 million homes. Each of these properties requires statutory inspections to check gas, asbestos and water hygiene, in addition to general upkeep. However, there is currently no scheduling system available that offers integration between key maintenance and safety contractors, resulting in additional site visits, increased travel costs and re-work. Aston University computer scientists will use machine learning and AI to create a maintenance prioritisation system that will centralise job requests and automatically allocate them to the relevant contractors. The collaboration is through a Knowledge Transfer Partnership (KTP) - a collaboration between a business, an academic partner and a highly qualified researcher, known as a KTP associate. This partnership builds on the outcomes of TL’s first collaboration with Aston University by expanding the system developed for the company’s in-house use, which directs its field staff to jobs. The project team will improve the system developed during the current KTP to enable it to interact with client and contractor systems, combining an input data processing unit, enhanced optimisation algorithms, customer enhancements and third-party add-ons into a single dynamic system. The Aston University team will be led by Aniko Ekart, professor of artificial intelligence.
She said: “It is a privilege to be involved in the creation of this system, which will select the best contractor for each job based on their skill set, availability and location, and be reactive to changing priorities of jobs.” TL, based in Fenstanton, just outside Cambridge, provides asbestos consultancy, project management and training to businesses, local authorities, social housing and education facilities, using a fleet of mobile engineers across the UK. John Richards, managing director at Thames Laboratories, said: “This partnership will allow us to adopt the latest research and expertise from a world-leading academic institute to develop an original solution to improving the efficiency of social housing repairs, maintenance and improvements to better meet the needs of social housing residents.” Professor Ekart will be joined by Dr Alina Patelli as academic supervisor. Dr Patelli brings experience of software development in the commercial sector as well as expertise in applying optimisation techniques with a focus on urban systems. She said: “This is a great opportunity to enhance state-of-the-art optimisation and machine learning in order to fit the needs of the commercial sector and deliver meaningful impact to Thames Laboratories.”
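The selection criteria Professor Ekart describes (skill set, availability and location) can be sketched as a simple scoring rule. The field names and the nearest-available-first policy below are assumptions for illustration, not the KTP system's actual algorithm, which combines optimisation and machine learning across many more factors.

```python
# Hedged sketch of contractor selection: nearest available contractor
# holding the required skill wins (illustrative policy, not the KTP design).
from dataclasses import dataclass

@dataclass
class Contractor:
    name: str
    skills: set
    available: bool
    distance_km: float

def best_contractor(job_skill, contractors):
    """Pick the nearest available contractor who holds the required skill."""
    eligible = [c for c in contractors
                if c.available and job_skill in c.skills]
    return min(eligible, key=lambda c: c.distance_km, default=None)
```

A real scheduler would also weigh job priority, travel routes between jobs and statutory inspection deadlines, which is where the optimisation algorithms mentioned above come in.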


AI-Generated Content is a Game Changer for Marketers, but at What Cost?

Goizueta’s David Schweidel pitted man against the machine to create SEO web content, only to find that providing an editor with bot-generated content trounces the human copywriter every time. Good news for companies looking to boost productivity and save cash, he says. But could there be other hidden costs? In December 2022, The New York Times ran a piece looking back on the year’s biggest consumer tech updates. The review was mixed. Ownership shifts in the world of social media garnered special mentions, but hardware innovations had been largely “meh,” mused the Times. There was one breakthrough area that warranted attention, however: AI-powered language-processing tech capable of generating natural-looking text, the same technology that powers familiar chatbots. And one such technology could well be poised to “invade our lives in 2023.” Earlier in December, AI research lab OpenAI released the latest update to its Generative Pre-trained Transformer (GPT) technology. Its latest iteration, ChatGPT, immediately went viral. Here was an AI assistant that sounded intelligent. Not only could it answer any question thrown its way without supervised training, but when prompted, it could also write blog posts, as well as find and fix bugs in programming code. ChatGPT could draft business proposals and even tell jokes. All of this at a speed that beggared belief. Since its first release in 2020, OpenAI’s GPT technology has powered through a slew of updates that have seen its capabilities leap forward “by light years” in less than 24 months, says Goizueta Professor of Marketing David Schweidel. For businesses looking to harness this rapidly evolving technology, the potential is clearly enormous. But aren’t there also risks that industry and consumers alike will need to navigate?
Schweidel is clear that the academic community and initiatives such as the Emory AI Humanity Initiative have a critical role in asking hard questions—and in determining the limitations and dangers, as well as the opportunities, inherent in tech innovation—because, as he puts it, “these things are going to happen whether we like it or not.” Man Versus Machine To that end, Schweidel and colleagues from Vienna University of Economics and Business and the Modul University of Vienna have put together a study looking at how well natural language generation technologies perform in one specific area of marketing: drafting bespoke content for website search engine optimization, better known as SEO. What they find is that content crafted by the machine, after light human editing, systematically outperforms its human counterparts—and by a staggering margin. Digging through the results, Schweidel and his colleagues can actually pinpoint an almost 80 percent success rate for appearing on the first page of search engine results with AI-generated content. This compares with just 22 percent for content created by human SEO experts. In other words, the AI content passed to a human is roughly four times more effective than a skilled copywriter working alone. Reaching these findings meant running two real-time, real-world experiments, says Schweidel. First, he and his colleagues had to program the machine, in this case GPT-2, an earlier incarnation of GPT. GPT relies on natural language generation (NLG), a software process that converts manually uploaded input into authentic-sounding text or content—comparable in some ways to the human process of translating ideas into speech or writing. To prepare GPT-2 for SEO-specific content creation, Schweidel et al.
started with the pre-trained GPT-2, and then let the machine do the heavy lifting: searching the internet for appropriate results based on the desired keyword, scraping the text of the websites, and updating GPT-2 to “learn” what SEO looks like, says Schweidel. “We partnered with an IT firm and a university to run our field experiments. This meant creating SEO content for their websites using GPT-2 and actual human SEO experts, and then doing A/B testing to see which content was more successful in terms of landing in the top 10 search engine results on Google. So this was an opportunity to put the AI bot to the test in a real-world setting to see how it would perform against people.” The results point to one clear winner. Not only did content from GPT-2 outperform its human rivals in SEO capabilities, it did so at scale. The AI-generated content scored a daily median result of seven or more hits on the first page of Google search results. The human-written copy didn’t make it onto the first results page at all. On its best day, GPT showed up for 15 of its 19 pages of search terms inside the top 10 search engine results, compared with just two of the nine pages created by the human copywriters—a success rate of just under 80 percent compared to 22 percent. Savings at Scale The machine-generated content, after being edited by a human, trounces the human copywriter in SEO. But that’s not all, says Schweidel. The GPT bot was also able to produce content in a fraction of the time taken by the writers, reducing production time and associated labor costs by more than 90 percent, he says. “In our experiments, the copywriters took around four hours to write a page, while the GPT bot and human editor took 30 minutes.
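The headline figures in this passage can be checked directly from the page counts reported; nothing beyond the numbers in the text is assumed.

```python
# Back-of-the-envelope check of the reported best-day success rates.
gpt_rate = 15 / 19        # GPT-2 pages reaching the top 10: just under 80%
human_rate = 2 / 9        # human-written pages reaching the top 10: about 22%
advantage = gpt_rate / human_rate   # roughly 3.6x, i.e. "roughly four times"
```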
Now assuming the average copywriter makes an annual $45K on the basis of 1,567 hours of work, we calculate that the company we partnered with would stand to save more than $100,000 over a five-year period just by using the AI bot in conjunction with a human editor, rather than relying on SEO experts to craft content. That’s a 91 percent drop in the average cost of creating SEO content. It’s an orders-of-magnitude difference in productivity and costs.” But there are caveats. First off, there’s the quality of the machine-generated content to consider. For all its mind-boggling capabilities, even the newly released ChatGPT tends to read as somewhat sterile, says Schweidel. That’s a problem both in terms of Google guidelines and brand coherence. Human editors are still needed to polish copy that can sound a little “mechanical.” “Google is pretty clear in its guidelines: Content generated by machines alone is a definite no-no. You also need to factor in the uncanny valley effect, whereby something not quite human can come off as weird. Having an editor come in to smooth out AI content is critical to brand voice as well as the human touch.” Asking the Big Questions Then there are the moral and metaphysical dimensions of machine learning and creativity that beg an important question: Just because we can, does that mean we should? Here, Schweidel has grave reservations about the future of ChatGPT and its ilk. The potential of this kind of technology is extraordinarily exciting when you think about the challenges we face, from productivity to pandemics, from sustainable growth to climate change. But let’s be very clear about the risks, too. AI is already capable of creating content—audio, visual and written—that looks and feels authentic. In a world that is hugely polarized, you have to ask yourself: How can that be weaponized?
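Under the stated assumptions ($45K per year over 1,567 working hours, four hours per page versus 30 minutes), the per-page arithmetic works out as below. Labor time alone gives roughly an 87.5 percent per-page reduction; the quoted 91 percent figure presumably folds in cost components the passage does not itemize, so treat this as a sketch of the calculation rather than a reconstruction of it.

```python
# Per-page cost sketch using only the figures stated in the text.
annual_salary = 45_000          # average copywriter salary, per the text
annual_hours = 1_567            # working hours per year, per the text
hourly_rate = annual_salary / annual_hours    # roughly $28.72 per hour

human_cost_per_page = 4.0 * hourly_rate       # copywriter: ~4 hours per page
ai_cost_per_page = 0.5 * hourly_rate          # GPT bot + editor: ~30 minutes

time_reduction = 1 - 0.5 / 4.0                # 87.5% less billed time per page
```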
At the end of the day, says Schweidel, the large language models powering these generative AIs are essentially “stochastic parrots”: trained mimics whose output can be hard to predict. In the wrong hands, he warns, the potential for misinformation—and worse—could well be “terrifying.” “Shiny new tech is neither inherently good nor bad. It’s human nature to push the boundaries. But we need to ensure that the guardrails are in place to regulate innovation at this kind of pace, and that’s not easy. Governments typically lag far behind OpenAI and companies like them; even academics have a hard time keeping up. The real challenge ahead of us will be about innovating the guardrails in tandem with the tech—innovating our responsible practices and processes. Without effective safeguards in place, we’re on a path to potential destruction.” Covering AI, or interested in knowing more about this fascinating topic? Then let our experts help with your coverage and stories. David Schweidel is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University's Goizueta Business School. Simply click on David's icon now to arrange an interview today.
