Expert Research: The Fourth Industrial Revolution, Artificial Intelligence and Domestic Conflict
Artificial Intelligence is often framed as a driver of innovation. But it also has the power to disrupt the very foundations of our societies. In a recent study, experts Craig Albert, PhD, and Lance Hunter, PhD, from Augusta University explored how AI, as part of the Fourth Industrial Revolution, could reshape economies, politics and security within states.

Here are three key takeaways from the research:

• AI brings breakthroughs in health care, logistics and engineering, but also disrupts jobs and economies.
• Unmanaged disruption can fuel instability, widening inequality and increasing risks of unrest or domestic conflict.
• Governments must act now with retraining, adaptive policies and strong governance to harness AI’s benefits while reducing risks.

Lance Hunter, PhD, is an assistant professor of political science with a background in international relations. His research focuses on how terrorist attacks influence politics in democratic countries and how political decisions within countries affect conflicts worldwide. Hunter teaches courses in international relations, security studies and research methods. He received his PhD in Political Science from Texas Tech University in 2011. View his profile here.

Craig Albert, PhD, is a professor of Political Science and the graduate director of the PhD in Intelligence, Defense, and Cybersecurity Policy and the Master of Arts in Intelligence and Security Studies at Augusta University. His areas of concentration include international security studies, cybersecurity policy, information warfare/influence operations/propaganda, ethnic conflict, cyberterrorism and cyberwar, and political philosophy. View his profile here.

The question we face is not whether AI will transform society (it already is!) but how we will manage that transformation to strengthen rather than destabilize. What steps do you think policymakers should prioritize to prepare for this future?

Here's the abstract from the paper on ResearchGate:

An emerging field of scholarship in Artificial Intelligence (AI) and computing posits that AI has the potential to significantly alter political and economic landscapes within states by reconfiguring labor markets, economies and political alliances, leading to possible societal disruptions. Thus, this study examines the potential destabilizing economic and political effects AI technology can have on societies and the resulting implications for domestic conflict based on research within the fields of political science, sociology, economics and artificial intelligence. In addition, we conduct interviews with 10 international AI experts from think tanks, academia, multinational technology companies, the military and cyber to assess the possible disruptive effects of AI and how they can affect domestic conflict. Lastly, the study offers steps governments can take to mitigate the potentially destabilizing effects of AI technology to reduce the likelihood of civil conflict and domestic terrorism within states.

Read the full report here:

Looking to know more? Let us help. Both Albert and Hunter are available to speak with media. Simply click on either expert's icon now to arrange an interview today.

Expert Perspective: When AI Follows the Rules but Misses the Point
When a team of researchers asked an artificial intelligence system to design a railway network that minimized the risk of train collisions, the AI delivered a surprising solution: Halt all trains entirely. No motion, no crashes. A perfect safety record, technically speaking, but also a total failure of purpose. The system did exactly what it was told, not what was meant.

This anecdote, while amusing on the surface, encapsulates a deeper issue confronting corporations, regulators, and courts: What happens when AI faithfully executes an objective but completely misjudges the broader context? In corporate finance and governance, where intentions, responsibilities, and human judgment underpin virtually every action, AI introduces a new kind of agency problem, one not grounded in selfishness, greed, or negligence, but in misalignment.

From Human Intent to Machine Misalignment

Traditionally, agency problems arise when an agent (say, a CEO or investment manager) pursues goals that deviate from those of the principal (like shareholders or clients). The law provides remedies: fiduciary duties, compensation incentives, oversight mechanisms, disclosure rules. These tools presume that the agent has motives—whether noble or self-serving—that can be influenced, deterred, or punished.

But AI systems, especially those that make decisions autonomously, have no inherent intent, no self-interest in the traditional sense, and no capacity to feel gratification or remorse. They are designed to optimize, and they do, often with breathtaking speed, precision, and, occasionally, unintended consequences.

This new configuration, in which AI acts on behalf of a principal (still human!), gives rise to a contemporary agency dilemma. Known as the alignment problem, it describes situations in which AI follows its assigned objective to the letter but fails to appreciate the principal’s actual intent or broader values. The AI doesn’t resist instructions; it obeys them too well. It doesn’t “cheat,” but sometimes it wins in ways we wish it wouldn’t.

When Obedience Becomes a Liability

In corporate settings, such problems are more than philosophical. Imagine a firm deploying AI to execute stock buybacks based on a mix of market data, price signals, and sentiment analysis. The AI might identify ideal moments to repurchase shares, saving the company money and boosting share value. But in the process, it may mimic patterns that look indistinguishable from insider trading. Not because anyone programmed it to cheat, but because it found that those actions maximized returns under the constraints it was given. The firm may find itself facing regulatory scrutiny, public backlash, or unintended market disruption, again not because of any individual’s intent, but because the system exploited gaps in its design.

This is particularly troubling in areas of law where intent is foundational. In securities regulation, fraud, market manipulation, and other violations typically require a showing of mental state: scienter, mens rea, or at least recklessness. Take spoofing, where an agent places bids or offers with the intent to cancel them to manipulate market prices or to create an illusion of liquidity. Under the Dodd-Frank Act, this is a crime if done with intent to deceive. But AI systems, especially those using reinforcement learning (RL), can arrive at similar strategies independently. In simulation studies, RL agents have learned that placing and quickly canceling orders can move prices in a favorable direction. They weren’t instructed to deceive; they simply learned that it worked.
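To make the halted-trains failure mode concrete, here is a minimal sketch of the same pattern: an optimizer handed the literal objective "minimize collision risk" with no term for the system's actual purpose. This is a toy illustration only, not the original research team's system; the candidate schedules, risk numbers and weights are all invented for the example.

```python
# Toy sketch of objective misspecification. The "AI" here is just a
# brute-force optimizer over a few hand-written candidate schedules.

def collision_risk(schedule):
    """Hypothetical risk model: each pair of trains sharing a track
    segment contributes a fixed amount of risk."""
    risk = 0.0
    for i, a in enumerate(schedule):
        for b in schedule[i + 1:]:
            if a["track"] == b["track"]:
                risk += 0.1
    return risk

def passengers_served(schedule):
    """The system's actual purpose, absent from the first objective."""
    return sum(t["passengers"] for t in schedule)

candidates = [
    [],                                            # halt every train
    [{"track": "A", "passengers": 200}],           # one train
    [{"track": "A", "passengers": 200},
     {"track": "A", "passengers": 150}],           # two trains, shared track
]

# Literal objective: minimize collision risk, nothing else.
# The empty schedule wins: zero trains, zero crashes, zero service.
print(min(candidates, key=collision_risk))         # -> []

# Better-aligned objective: trade service off against risk explicitly.
aligned = max(candidates,
              key=lambda s: passengers_served(s) - 1000 * collision_risk(s))
print(aligned)                                     # -> both trains run
```

The fix is not a smarter optimizer but a fuller objective: the second criterion behaves only because the system's purpose, moving passengers, was written into it. The optimizer will never infer intent that was left out.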
The Challenge of AI Accountability

What makes this even more vexing is the opacity of modern AI systems. Many of them, especially deep learning models, operate as black boxes. Their decisions are statistically derived from vast quantities of data and millions of parameters, but they lack interpretable logic. When an AI system recommends laying off staff, reallocating capital, or delaying payments to suppliers, it may be impossible to trace precisely how it arrived at that recommendation, or whether it considered all relevant factors. Traditional accountability tools—audits, testimony, discovery—are ill-suited to black-box decision-making.

In corporate governance, where transparency and justification are central to legitimacy, this raises the stakes. Executives, boards, and regulators are accustomed to probing not just what decision was made, but also why. Did the compensation plan reward long-term growth or short-term accounting games? Did the investment reflect prudent risk management or reckless speculation? These inquiries depend on narrative, evidence, and ultimately the ability to assign or deny responsibility. AI short-circuits that process by operating without human-like deliberation.

The challenge isn’t just about finding someone to blame. It’s about whether we can design systems that embed accountability before things go wrong. One emerging approach is to shift from intent-based to outcome-based liability. If an AI system causes a harm that was foreseeable with some probability, even without malicious design, the firm or developer might still be held responsible. This mirrors concepts from product liability law, where strict liability can attach regardless of intent if a product is unreasonably dangerous. In the AI context, such a framework would encourage companies to stress-test their models, simulate edge cases, and incorporate safety buffers, not unlike how banks test their balance sheets under hypothetical economic shocks.

There is also a growing consensus that we need mandatory interpretability standards for certain high-stakes AI systems, including those used in corporate finance. Developers should be required to document reward functions, decision constraints, and training environments. These documentation trails would not only assist regulators and courts in assigning responsibility after the fact, but also enable internal compliance and risk teams to anticipate potential failures. Moreover, behavioral “stress tests” analogous to those used in financial regulation could simulate how AI systems behave under varied scenarios, including those involving regulatory ambiguity or data anomalies.
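By way of illustration, a behavioral stress test for a trading agent might look something like the sketch below: run the policy through a battery of market scenarios and flag cancel-heavy order patterns for human review. Everything here is a hypothetical placeholder (the stand-in policy, the scenario names, the cancel-ratio metric, the 0.5 threshold); it is not any regulator's or firm's actual tooling.

```python
# Minimal sketch of a behavioral "stress test" harness, by analogy with
# bank stress testing. All components are invented for illustration.
from dataclasses import dataclass

@dataclass
class Order:
    side: str        # "buy" or "sell"
    cancelled: bool  # was the order cancelled before execution?

def toy_policy(scenario):
    """Stand-in for a trained trading agent: returns the orders it
    would place under the given market scenario."""
    if scenario == "thin_liquidity":
        # Pathological place-and-cancel behavior the harness should catch.
        return [Order("buy", cancelled=True)] * 40 + [Order("sell", cancelled=False)]
    return [Order("buy", cancelled=False)] * 5

def cancel_ratio(orders):
    """Fraction of orders placed and then cancelled, a crude proxy for
    the spoofing-like pattern regulators scrutinize."""
    return sum(o.cancelled for o in orders) / max(len(orders), 1)

SCENARIOS = ["normal", "thin_liquidity", "high_volatility"]
THRESHOLD = 0.5  # arbitrary review trigger for this sketch

for scenario in SCENARIOS:
    ratio = cancel_ratio(toy_policy(scenario))
    flag = "REVIEW" if ratio > THRESHOLD else "ok"
    print(f"{scenario:16s} cancel ratio={ratio:.2f}  {flag}")
```

The design point is that the test probes behavior, not intent: the harness never asks why the agent cancels orders, only whether its conduct under adverse scenarios would draw scrutiny if a human did the same thing.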
Smarter Systems Need Smarter Oversight

Still, technical fixes alone will not suffice. Corporate governance must evolve toward hybrid decision-making models that blend AI’s analytical power with human judgment and ethical oversight. AI can flag risks, detect anomalies, and optimize processes, but it cannot weigh tradeoffs involving reputation, fairness, or long-term strategy. In moments of crisis or ambiguity, human intervention remains indispensable. For example, an AI agent might recommend renegotiating thousands of contracts to reduce costs during a recession. But only humans can assess whether such actions would erode long-term supplier relationships, trigger litigation, or harm the company’s brand.

There’s also a need for clearer regulatory definitions to reduce ambiguity in how AI-driven behaviors are assessed. For example, what precisely constitutes spoofing when the actor is an algorithm with no subjective intent? How do we distinguish aggressive but legal arbitrage from manipulative behavior? If multiple AI systems, trained on similar data, converge on strategies that resemble collusion without ever “agreeing” or “coordinating,” do antitrust laws apply? Policymakers face a delicate balance: Overly rigid rules may stifle innovation, while lax standards may open the door to abuse.

One promising direction is to standardize governance practices across jurisdictions and sectors, especially where AI deployment crosses borders. A global AI system could affect markets in dozens of countries simultaneously. Without coordination, firms will gravitate toward jurisdictions with the least oversight, creating a regulatory race to the bottom. Several international efforts are already underway to address this. The 2025 International Scientific Report on the Safety of Advanced AI called for harmonized rules around interpretability, accountability, and human oversight in critical applications. While much work remains, such frameworks represent an important step toward embedding legal responsibility into the design and deployment of AI systems.

The future of corporate governance will depend not just on aligning incentives, but also on aligning machines with human values. That means redesigning contracts, liability frameworks, and oversight mechanisms to reflect this new reality. And above all, it means accepting that doing exactly what we say is not always the same as doing what we mean.

Looking to know more or to connect with Wei Jiang, Goizueta Business School’s vice dean for faculty and research and Charles Howard Candler Professor of Finance? Simply click on her icon now to arrange an interview or time to talk today.
Ask an Expert: Augusta University's Gokila Dorai, PhD, talks Artificial Intelligence
Artificial Intelligence is dominating the news cycle. There's a lot to know, a lot to prepare for and also a lot of misinformation or assumptions that are making their way into the mainstream coverage. Recently, Augusta University's Gokila Dorai, PhD, took some time to answer some of the more important questions she's seeing asked about Artificial Intelligence.

Gokila Dorai, PhD, is an assistant professor in the School of Computer and Cyber Sciences at Augusta University. Dorai’s area of expertise is mobile/IoT forensics research. She is passionate about inventing digital tools to help victims and survivors of various digital crimes. View her profile here.

Q. What excites you most about your current research in digital forensics and AI?

"I am most excited about using artificial intelligence to produce frameworks that help practitioners make sense of complex digital evidence more quickly and fairly. My research combines machine learning with natural language processing, incorporating a socio-technical framework, so that we don’t just get accurate results, but also understand how and why the system reached those results. This is especially important when dealing with sensitive investigations, where transparency builds trust."

Q. How does your work help address today’s challenges around cybersecurity and data privacy?

"Everyday life is increasingly digital: our phones, apps, and online accounts contain deeply personal information. My research looks at how we can responsibly analyze this data during investigations without compromising privacy. For example, I work on AI models that can focus only on what is legally relevant, while filtering out unrelated personal information. This balance between security and privacy is one of the biggest challenges today, and my work aims to provide practical solutions."

Q. What role do you see artificial intelligence playing in shaping the future of digital investigations?

"AI will be a critical partner in digital investigations. The volume of data investigators face is overwhelming: thousands of documents, chat messages, and app logs. AI can help organize and prioritize this information, spotting patterns that a human might miss. At the same time, I believe AI must be designed to be explainable and resilient against manipulation, so investigators and courts can trust its findings. The future isn’t about replacing human judgment, but about giving investigators smarter tools."

Q. What is one misconception people often have about cybersecurity or digital forensics?

"A common misconception is that digital forensics is like what you see on TV: instant results with a few keystrokes. In reality, it’s a painstaking process that requires both technical skill and ethical responsibility. Another misconception is that cybersecurity is only about protecting large organizations. In truth, individuals face just as many risks, from identity theft to app data leaks, and my research highlights how better tools can protect everyone."

Are you a reporter covering artificial intelligence and looking to know more? If so, then let us help with your stories. Gokila Dorai, PhD, is available for interviews. Simply click on her icon now to arrange a time today.

Before you scroll past thinking, “Oh, another scam alert,” please pause. This isn’t your average “don’t answer spam calls” notice. What follows is an examination of the growing sophistication of grandparent scams—complete with call centers, scripts, and even AI voice cloning. More importantly, it’s about how to protect yourself and, especially, the older members of your family. Read on—not just for awareness, but for fundamental tools to keep your loved ones safe.

Even Elvis Isn't Safe From Scammers

You know the world has gone topsy-turvy when even the King of Rock 'n' Roll isn't immune to fraud. I've written before about the recent attempt to scam Elvis Presley's Graceland estate, but a recent story about senior fraud really got my blood boiling. U.S. authorities in Boston just charged 13 people connected to what I can only describe as a "grandparent scam industrial complex" – a sophisticated operation that bilked over 400 elderly Americans out of more than $5 million.

These weren't your run-of-the-mill phone scammers calling from their basement. Oh no. These criminals were running call centers with scripts, managers, and daily money-making leaderboards like they were selling insurance, not breaking hearts.

The math alone should make you furious: $5 million divided by 400 victims equals about $12,500 per person. That's not pocket change – that's someone's emergency fund, their vacation savings, or money they've been carefully setting aside for healthcare costs.

The Grandparent Scam: Emotional Manipulation 101

If you're not familiar with grandparent scams, buckle up. These predators have turned family love into their business model, and they're disgustingly good at it. Here's their playbook:

Step 1: The Panic Call – "Grandma, it's me! I'm in jail and need bail money RIGHT NOW!"
Step 2: The Identity Theft – Using social media details (yes, those cute Facebook posts about little Johnny's soccer game), they sound convincingly like your grandchild. Some are even using AI voice-cloning technology.
Step 3: The Time Crunch – Everything's an emergency. No time to think, no time to verify. Just panic and send money. Real emergencies, by the way, allow time for a phone call to confirm details.
Step 4: The Collection – Cash via courier, rideshare driver pickup, wire transfers, even Bitcoin. Anything except the legitimate ways actual legal systems collect bail money (spoiler alert: the good guys don't send Uber drivers to your house).

The Boston Grandparent Fraud Case: Scamming at Scale

The level of organization in this Boston case reads like a twisted business manual. These criminals weren't just winging it – they had:

• Dedicated "Opener" staff who made initial contact with victims
• Specialized "Closers" who pretended to be lawyers demanding payment
• Management training programs for their scam employees
• Daily performance systems (because nothing says "organized crime" quite like gamifying elderly financial abuse)

A number of things bothered me about this case.

The fraudsters got over $5 million from 400 victims. The simple math shows that, on average, each victim would have lost $12,500 – that’s not “walking around” money. I suspect many would have had to tap into a variety of savings accounts or possibly borrow from others to source funds on short notice. This creates an extra degree of hardship for victims who are struggling to manage on a fixed income.

The average age of the victims was 84. This breaks my heart. The oldest in this cohort are especially vulnerable.
At this age, many seniors live alone or are more isolated, making them easier prey for these deceitful tactics. Many of them are still uninformed about how these scams operate.

The scammers showed a very high level of sophistication. According to court documents from the U.S. Department of Justice, District of Massachusetts (2025), the scammers operated a sophisticated “call center” with technology at multiple sites, enabling them to place a massive number of calls to unsuspecting victims.

• These scams would begin with an “Opener” employee, who would call victims and read a script (see below) pretending to be a grandson or granddaughter who was in an accident.
• Then, a “Closer” would allegedly follow up with another call, pretending to be their grandchild’s attorney, asking for a sum of money to pay for their grandchild’s fees due to the accident.

Each of these call center locations had managers overseeing staff who trained, supervised, and paid employees. The most sickening part? They kept detailed records of how much money they stole each day, treating vulnerable seniors like ATM machines with feelings. Here is an actual photo of their “Leaderboard” taken as evidence in the Boston case.

When it came to handling cash, they also had a plan for that. Most often, they used unsuspecting rideshare drivers whom they ordered to do a package pickup at the victim’s house.

And these heartless criminals often went back for seconds and thirds, using lines designed to trigger seniors into emptying their bank accounts. They would say things like "Oh, there's been a mix-up," or "A pregnant woman's baby was lost in the crash" – any lie to squeeze more money from people who'd already been devastated once.

Now, I’ve been in enough boardrooms to know that leaderboards usually track sales of widgets, mortgages, or, at worst, how many stale muffins are left in the breakroom. But imagine walking into work and your boss says, “Congratulations, you scammed the most grandmas today—you win Employee of the Month!” That’s not just evil, it’s the kind of thing that should earn you a permanent bunk bed in a tiny jail cell. And using Uber drivers to pick up cash? Please. The only thing Uber should be picking up is takeout and slightly tipsy people at 11 p.m.—not Grandma’s retirement savings.

Some of These Scams Are Coming From Inside Canada

Here's where this story hits close to home. While we might imagine these scams operating from some far-off location, some of the biggest operations have been running right here in Canada. In March 2025, Montreal police arrested 23 people connected to a massive network that allegedly defrauded seniors across 40 U.S. states of $30 million over three years. The suspected ringleader, Montreal developer Gareth West, allegedly ran call centers from Quebec properties and laundered the proceeds into luxury real estate. West remains at large, proving that sometimes the worst criminals are hiding in plain sight in Canadian suburbs.

The Canadian Reality Check

According to the Canadian Anti-Fraud Centre, emergency or "grandparent" scams have become one of the fastest-growing crimes targeting seniors in Canada, with reported losses rising from $2.4 million in 2021 to over $11.3 million in 2023. Here's where it gets even more interesting. Those figures are just the losses for grandparent fraud that are reported – experts estimate the true losses are at least ten times higher since only 5-10% of fraud victims come forward.
Let that sink in: we could be looking at over $100 million in actual losses annually in Canada alone.

Here’s the part that really stings: no one is exempt. Not me, not you, not even that friend who insists they “don’t answer unknown numbers.” (Sure, Jan. We all know you still pick up when it says “potential spam.”)

This isn’t just about losing money—it’s about losing confidence. The shame, the self-doubt, and the “How could I fall for that?” spiral are often worse than the financial loss. I’ve seen strong, capable people withdraw after being scammed, too embarrassed to tell their own families. And honestly—I get the same chill when I read these stories: Would I have caught it in time? It’s a reminder that vigilance is like flossing—we all know we should do it daily, and yet… sometimes we forget until it hurts.

Supporting an Elder Who’s Been Scammed

Here’s where we need to step up as families and communities.

Practical Support:
• Help them file a report with the police and the Canadian Anti-Fraud Centre.
• Contact their bank to determine if the funds can be recovered.
• Lock down social media and adjust privacy settings so future scammers have less ammunition.

Emotional Support:
• Listen without judgment. Don’t say, “I would never have fallen for that” (trust me—you might) or “You know better, Granddad.”
• Normalize the experience: this can happen to anyone. If AI can clone voices and manipulate emotions, it’s not about intelligence—it’s about being human.
• Follow up regularly. Shame makes people pull back, so check in to ensure they’re not withdrawing or losing confidence.

Your Family’s Fraud Fighting Toolkit

Look, I've spent over 30 years in the financial industry, and I can tell you that preventing fraud is always easier than recovering from it. Here's your family's defence strategy:

The P-A-U-S-E Method

Pause – Don't act immediately, no matter how urgent the request sounds.
Ask questions only family members would immediately know ("What's Mom's maiden name?").
Use known phone numbers to call your grandchild directly and verify information.
Set up systems to protect family members (like a secret family password).
Explain to others – share this information widely with all family members.

Know the Red Flags

• Demands for immediate action (real emergencies allow verification time)
• Requests for secrecy ("Don't tell Mom and Dad!")
• Payment via courier, rideshare, wire transfer, or cryptocurrency
• Emotional manipulation ("I'm so scared, Grandma!")
• Any request for cash payment to resolve legal issues

Family Password System

Set up a secret word or phrase that only your family knows. Make it something memorable but not guessable from social media. "Fluffy" (your childhood dog) is better than a pet name you posted on a recent social media post.

What to Do If You're Targeted

Stop. Don't. Send. Money. Instead:

• Hang up immediately
• Call your local police to file a report
• Report to the Canadian Anti-Fraud Centre: 1-888-495-8501 or visit antifraudcentre-centreantifraude.ca
• If you've already sent money, contact your bank immediately
• Tell other family members what happened – you're not the only target

These criminals exploit the most powerful human emotions: love, fear, and the desire to protect our families. They've turned grandparents' natural instinct to help their grandchildren into a multi-million-dollar crime operation. But here's what they're banking on (pun intended): that we'll be too embarrassed to talk about it, too confused to verify it, and too panicked to think clearly.
Don't give them that satisfaction.

Remember, the average age of victims in the Boston case was 84. These aren't people who have time to recover from financial mistakes. Every dollar stolen from a senior is a dollar that won't be there for healthcare, housing, or basic dignity in their final years.

We Can Fight Back

Knowledge is power, and conversation offers protection. The more we discuss these scams openly – around dinner tables, in community centres, at family gatherings – the more we hinder these criminals from succeeding. Share this post with the seniors in your life. Not because they're naive, but because they're caring. And because caring people deserve to know how heartless criminals are trying to exploit their love.

What is your family doing to protect against fraud? What are your strategies and ideas for keeping our loved ones safe? I’m also particularly interested in what financial institutions and various government agencies are doing these days to combat fraud and protect this vulnerable group. As I research this topic more, I’d love to hear from you.

Remember: Real grandchildren in genuine emergencies can wait five minutes for you to confirm who you're talking to. Scammers can't.

Helpful Resources:
• Canadian Anti-Fraud Centre: 1-888-495-8501
• Report online: antifraudcentre-centreantifraude.ca
• For more retirement security tips, visit retirewithequity.ca

Stay safe. Don't Retire - Rewire!

Sue

In an age of fast-moving misinformation, our expert teaches students how to spot what’s credible
As the new academic year begins, and at a time when misinformation often travels faster than facts, University of Rochester’s Kevin Meuwissen offers educators and young learners clarity and practical strategies for identifying credible sources. As an associate professor and chair of teaching and curriculum at the Warner School of Education and Human Development, Meuwissen focuses on how children and teens learn about politics and history — and how they can be taught to critically evaluate what they consume.

“Young people pay close attention to who’s been consistently accurate,” he says. “They’re more likely to trust someone over time if their information holds up.”

To empower students in our complex information environment, Meuwissen champions the SIFT method — an easy-to-remember acronym and evidence-based toolkit that breaks down like this:

• Stop! Pause before reacting or sharing
• Investigate the source
• Find better coverage
• Trace claims back to their origin

He also warns about how emotional framing, AI-generated visuals, deepfakes, and repeated exposure can distort judgment through the illusory truth effect — making misinformation feel believable even when it isn’t. His "Ever Wonder: How Can You Tell If A Source Is Credible?" video is a handy teaching tool.

Meuwissen and his colleagues encourage teachers grappling with resistance over topics like climate science to consider not just evidence depth, but also students’ identities — political, cultural, and otherwise — when designing lessons. His approach emphasizes building trust, modeling thoughtful verification, and nurturing classroom norms rooted in accuracy — traits essential for forming discerning digital citizens.

Kevin Meuwissen is available for interviews about identifying misinformation. He can be contacted through Warner School of Education Director of Communications Theresa Danylak at tdanylak@warner.rochester.edu.
Are GCSEs delivering for students and society?
Ahead of GCSE results being released on Thursday 21 August, Aston University work psychologist Dr Paul Jones discusses whether the exams are fit for purpose. He believes that our exam system narrows thinking, and that GCSEs emphasise “right answers” and rote recall, creating risk-averse learners who are afraid to fail or think differently.

Exams are harming wellbeing

GCSEs were designed in the 1980s, when many left education at 16. Today, almost all young people continue to 18, yet they still face a stressful halfway checkpoint that often does more harm than good. Research shows GCSEs are linked to anxiety, sleeplessness and even self-harm. This isn’t about students being “less resilient”, it’s about a system that has prioritised bureaucracy, league tables, and exam statistics over wellbeing.

GCSEs don’t prepare students for life

Exams reward the ability to memorise and recall under pressure, but the world beyond school demands much more. Employers and universities want young people who can think critically, manage their own learning, collaborate, and adapt. By the time many reach university, students are burnt out from years of high-stakes testing. They often struggle with independence, risk-taking, and curiosity, the very qualities they need to succeed.

Over-assessment stifles innovation

Our exam system narrows thinking. GCSEs emphasise “right answers” and rote recall, creating risk-averse learners who are afraid to fail or think differently. Innovation, however, requires psychological safety: the freedom to explore, experiment, and make mistakes. In a world where AI can already handle routine tasks like recall and pattern analysis, the human edge lies in breaking moulds, challenging assumptions, and combining knowledge in new ways. Our current system suppresses exactly those skills.

Moving GCSEs into the future

We need fewer, smarter assessments and a curriculum that builds creativity, resilience, and innovation. Other countries use project-based learning, portfolios, and sampling tests to capture what young people can really do. Wales is already embedding wellbeing and digital skills into its new curriculum. England risks being left behind if it continues to cling to an exam-heavy model designed for a different era.

The bottom line

Our young people deserve an education that prepares them for life, not just for exams. We should be measuring what really matters: wellbeing, creativity, and the ability to thrive in a fast-changing, AI-driven world.

To speak to Dr Jones or for any media inquiries in relation to this, please contact Nicola Jones, Press and Communications Manager, Aston University, on (+44) 7825 342091 or email n.jones6@aston.ac.uk

Intangible assets now make up more than 90% of S&P 500 market value — yet many organizations still lack a dedicated executive role to manage them strategically. This is where the Chief Intellectual Property Officer (CIPO) comes in. In this expert-backed piece, J.S. Held's Chief Intellectual Property Officer James E. Malackowski, CPA, CLP, and his colleague David Ngo unpack the economic forces shaping this role, the skills CIPOs bring to the table, and why forward-thinking companies are making IP leadership a boardroom priority.

What you’ll learn:

• The economic forces driving the rise of CIPO leadership
• How CIPOs bridge legal, technical, and commercial priorities to unlock value
• The growing relevance of CIPOs in consulting, insurance, and AI-driven industries
• Practical strategies for integrating IP leadership into portfolio and risk management
• Why the next decade will define the CIPO’s role in corporate success

With deep expertise in IP strategy, valuation, and litigation, Malackowski and Ngo offer a clear, compelling case for elevating IP leadership to the C-suite. Looking to connect with the experts? Click on their profiles to arrange an interview or gain deeper insights into intellectual property strategy, risk, and valuation.

James E. Malackowski, CPA, CLP
Chief Intellectual Property Officer, J.S. Held | Co-founder and Senior Managing Director, Ocean Tomo
Global leader in intellectual property valuation, strategic advisory, and expert testimony. Recognized among IAM’s “World’s Leading IP Strategists” and a pioneer in IP exchange models.

David Ngo
Senior Analyst, Intellectual Property Disputes Financial Expert Testimony, Ocean Tomo, a part of J.S. Held
Specialist in quantifying economic damages in IP disputes and valuing intangible assets, with expertise in applying economic and financial analysis to complex litigation.

For any other media inquiries, contact: Kristi L. Stathis, J.S. Held, +1 786 833 4864, Kristi.Stathis@JSHeld.com.
How AI will transform the economy: Predicting the next breakthroughs
AI is already revolutionizing the world around us. University of Delaware experts are at the forefront of this innovation, researching and inventing new ways to use AI in everyday life. Below are a number of UD experts who can discuss these topics and the breakthroughs being made.

• AI meets the edge – Weisong Shi, Alumni Distinguished Professor and Chair of Computer and Information Sciences, explains how AI and edge computing will transform everything from self-driving cars to real-time healthcare.
• AI’s energy appetite – Steven Hegedus, Professor, dives into the massive energy demands of AI, with expertise in photonics and chip-level signal processing.
• Building AI from the hardware – Sunita Chandrasekaran, Associate Professor and leader of the First State AI Institute, focuses on AI hardware innovations shaping the future of computing.

Email mediarelations@udel.edu to speak to any of these experts.

First AI-powered Smart Care Home system to improve quality of residential care
• Partnership between Lee Mount Healthcare and Aston University will develop and integrate a bespoke AI system into a care home setting to elevate the quality of care for residents
• By automating administrative tasks and monitoring health metrics in real time, the smart system will support decision making and empower care workers to focus more on people
• The project will position Lee Mount Healthcare as a pioneer of AI in the care sector, opening the door for more care homes to embrace technology

Aston University is partnering with dementia care provider Lee Mount Healthcare to create the first ‘Smart Care Home’ system incorporating artificial intelligence.

The project will use machine learning to develop an intelligent system that can automate routine tasks and compliance reporting. It will also draw on multiple sources of resident data – including health metrics, care needs and personal preferences – to inform high-quality care decisions, create individualised care plans and provide easy access to updates for residents’ next of kin.

There are nearly 17,000 care homes in the UK looking after just under half a million residents, and these numbers are expected to rise in the next two decades. Over half of social care providers still retain manual and paper-based approaches to care management, offering a significant opportunity to harness the benefits of AI to enhance efficiency and care quality. The Smart Care Home system will allow better care to be provided at lower cost, freeing up staff from administrative tasks so they can spend more time with residents.

Manjinder Boo Dhiman, director of Lee Mount Healthcare, said: “As a company, we’ve always focused on innovation and breaking barriers, and this KTP builds on many years of progress towards digitisation. We hope by taking the next step into AI, we’ll also help to improve the image of the care sector and overcome stereotypes, to show that we are forward thinking and can attract the best talent.”

Dr Roberto Alamino, lecturer in Applied AI & Robotics with the School of Computer Science and Digital Technologies at Aston University, said: “The challenges of this KTP are both technical and human in nature. For practical applications of machine learning, it’s important to establish a common language between us as researchers and the users of the technology we are developing. We need to fully understand the problems they face so we can find feasible, practical solutions.”

For specialist AI expertise to develop the smart system, LMH is partnering with the Aston Centre for Artificial Intelligence Research and Application (ACAIRA) at Aston University, of which Dr Alamino is a member. ACAIRA is recognised internationally for high-quality research and teaching in computer science and artificial intelligence (AI) and is part of the College of Engineering and Physical Sciences. The Centre’s aim is to develop AI-based solutions to address critical social, health, and environmental challenges, delivering transformational change with industry partners at regional, national and international levels.

The project is a Knowledge Transfer Partnership (KTP). Funded by Innovate UK, KTPs are collaborations between a business, a university and a highly qualified research associate. The UK-wide programme helps businesses to improve their competitiveness and productivity through the better use of knowledge, technology and skills. Aston University is a sector-leading KTP provider, ranked first for project quality, and joint first for the volume of active projects.
For more information on the KTP visit the webpage.
Poll finds bipartisan agreement on a key issue: Regulating AI
This article is republished from The Conversation under a Creative Commons license. Read the original article here.

In the run-up to the vote in the U.S. Senate on President Donald Trump’s spending and tax bill, Republicans scrambled to revise the bill to win the support of wavering GOP senators. A provision included in the original bill was a 10-year moratorium on any state law that sought to regulate artificial intelligence. The provision denied access to US$500 million in federal funding for broadband internet and AI infrastructure projects for any state that passed any such law.

The inclusion of the AI regulation moratorium was widely viewed as a win for AI firms that had expressed fears that states passing regulations on AI would hamper the development of the technology. However, many federal and state officials from both parties, including state attorneys general, state legislators and 17 Republican governors, publicly opposed the measure. In the last hours before the passage of the bill, the Senate struck down the provision by a resounding 99-1 vote.

In an era defined by partisan divides on issues such as immigration, health care, social welfare, gender equality, race relations and gun control, why are so many Republican and Democratic political leaders on the same page on the issue of AI regulation? Whatever motivated lawmakers to permit AI regulation, our recent poll shows that they are aligned with the majority of Americans, who view AI with trepidation, skepticism and fear, and who want the emerging technology regulated.

Bipartisan sentiments

We are political scientists who use polls to study partisan polarization in the United States, as well as the areas of agreement that bridge the divide that has come to define U.S. politics. In April 2025, we fielded a nationally representative poll that sought to capture what Americans think about AI, including what they think AI will mean for the economy and society going forward.

The public is generally pessimistic. We found that 65% of Americans said they believe AI will increase the spread of false information. Fifty-six percent of Americans worry AI will threaten the future of humanity. Fewer than 3 in 10 Americans told us AI will make them more productive (29%), make people less lonely (21%) or improve the economy (22%).

While Americans tend to be deeply divided along partisan lines on most issues, the apprehension regarding AI’s impact on the future appears to be relatively consistent across Republicans and Democrats. For example, only 19% of Republicans and 22% of Democrats said they believe that artificial intelligence will make people less lonely. Respondents across the parties are in lockstep when it comes to their views on whether AI will make them personally more productive, with only 29% − both Republicans and Democrats − agreeing. And 60% of Democrats and 53% of Republicans said they believe AI will threaten the future of humanity.

On the question of whether artificial intelligence should be strictly regulated by the government, we found that close to 6 in 10 Americans (58%) agree with this sentiment. Given the partisan differences in support for governmental regulation of business, we expected to find evidence of a partisan divide on this question. However, our data finds that Democrats and Republicans are of one mind on AI regulation, with majorities of both Democrats (66%) and Republicans (54%) supporting strict AI regulation.
When we take into account demographic and political characteristics such as race, educational attainment, gender identity, income, ideology and age, we again find that partisan identity has no significant impact on opinion regarding the regulation of AI.

State of anxiety

In the years ahead, the debate over AI and the government’s role in regulating it is likely to intensify, on both the state and federal levels. As each day seems to bring new advances in AI’s capability and reach, the future is shaping up to be one in which human beings coexist – and hopefully flourish – alongside AI. This new reality has made the American public, both Democrats and Republicans, justifiably nervous, and our polling captures this widespread trepidation.

Lawmakers and technology leaders alike could address this anxiety by better communicating the pitfalls and potential of AI, and by taking seriously the concerns of the public. After all, the public is not alone in its trepidation. Many experts in the field also have substantial worries about the future of AI. One of the fundamental political questions moving forward, then, will be to what degree regulators put guardrails on this emerging and transformative technology in order to protect Americans from AI’s negative consequences.

Adam Eichen is a doctoral candidate in political science at UMass Amherst.
Alexander Theodoridis is associate professor of political science and co-director of the UMass Amherst Poll at UMass Amherst.
Sara M. Kirshbaum is a postdoctoral fellow and lecturer of political science at UMass Amherst.
Tatishe Nteta is provost professor of political science and director of the UMass Amherst Poll at UMass Amherst.