
Decoding the Future of AI: From Disruption to Democratisation and Beyond
The global AI landscape has become a melting pot for innovation, with diverse thinking pushing the boundaries of what is possible. Its application extends beyond just technology, reshaping traditional business models and redefining how enterprises, governments, and societies operate. Advancements in model architectures, training techniques and the proliferation of open-source tools are lowering barriers to entry, enabling organisations of all sizes to develop competitive AI solutions with significantly fewer resources. As a result, the long-standing notion that AI leadership is reserved for entities with vast computational and financial resources is being challenged. This shift is also redrawing the global AI power balance, fostering a decentralised approach to AI in which competition and collaboration coexist across different regions. As AI development becomes more distributed, investment strategies, enterprise innovation and global technological leadership are being reshaped. However, established AI powerhouses still wield significant leverage, driving an intense competitive cycle of rapid innovation. Amid this acceleration, it is critical to distinguish true technological breakthroughs from over-hyped narratives, adopting a measured, data-driven approach that balances innovation with demonstrable business value and robust ethical AI guardrails.

Implications of the Evolving AI Landscape

The democratisation of AI advancements, intensifying competitive pressures, the critical need for efficiency and sustainability, evolving geopolitical dynamics and the global race for skilled talent are all fuelling the development of AI worldwide. These dynamics are paving the way for a global rebalancing of technological leadership.

Democratisation of AI Potential

The ability to develop competitive AI models at lower costs is not only broadening participation but also reshaping how AI is created, deployed and controlled.
Open-source AI fosters innovation by enabling startups, researchers, and enterprises to collaborate and iterate rapidly, leading to diverse applications across industries. For example, xAI made a significant move in the tech world by open-sourcing its Grok AI chatbot model, potentially accelerating the democratisation of AI and fostering innovation. However, greater accessibility can also introduce challenges, including risks of misuse, uneven governance, and concerns over intellectual property. Additionally, as companies strategically leverage open-source AI to influence market dynamics, questions arise about the evolving balance between open innovation and proprietary control.

Increased Competitive Pressure

The AI industry is fuelled by a relentless drive to stay ahead of the competition, a pressure felt equally by Big Tech and startups. This is accelerating the release of new AI services, as companies strive to meet growing consumer demand for intelligent solutions. The risk of market disruption is significant; those who lag face being eclipsed by more agile players. To survive and thrive, differentiation is paramount. Companies are laser-focused on developing unique AI capabilities and applications, creating a marketplace where constant adaptation and strategic innovation are crucial for success.

Resource Optimisation and Sustainability

The trend toward accessible AI necessitates resource optimisation, which means developing models with significantly less computational power, energy consumption and training data. This is not just about cost; it is crucial for sustainability. Training large AI models is energy-intensive; for example, training GPT-3, a 175-billion-parameter model, is believed to have consumed 1,287 MWh of electricity, equivalent to an average American household’s use over 120 years [1]. This drives innovation in model compression, transfer learning, and specialised hardware, like NVIDIA’s TensorRT.
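The household-years equivalence is easy to sanity-check. The sketch below assumes an average US household consumption of roughly 10.7 MWh per year; that baseline figure is an assumption for illustration, not a number from this article.

```python
# Back-of-envelope check of the energy comparison above.
GPT3_TRAINING_MWH = 1_287        # training energy figure cited in the text
HOUSEHOLD_MWH_PER_YEAR = 10.7    # assumed average US household consumption

# How many years of typical household use match the training run?
years = GPT3_TRAINING_MWH / HOUSEHOLD_MWH_PER_YEAR
print(f"Equivalent household-years: {years:.0f}")  # roughly 120
```

Under that assumed baseline, the arithmetic lands on the article’s figure of about 120 household-years.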
Small language models (SLMs) are a key development, offering comparable performance to larger models with drastically reduced resource needs. This makes them ideal for edge devices and resource-constrained environments, furthering both accessibility and sustainability across the AI lifecycle.

Multifaceted Global AI Landscape

The global AI landscape is increasingly defined by regional strengths and priorities. The US, with its strength in cloud infrastructure and its software ecosystem, leads in “short-chain innovation”, rapidly translating AI research into commercial products. Meanwhile, China excels in “long-chain innovation”, deeply integrating AI into its extended manufacturing and industrial processes. Europe prioritises ethical, open and collaborative AI, while APAC countries showcase a diversity of approaches. Underlying these regional variations is a shared trajectory for the evolution of AI, increasingly guided by principles of responsible AI, encompassing ethics, sustainability and open innovation, although the specific implementations and stages of advancement differ across regions.

The Critical Talent Factor

The evolving AI landscape necessitates a skilled workforce. Demand for professionals with expertise in AI and machine learning, data analysis, and related fields is rapidly increasing. This creates a talent gap that businesses must address through upskilling and reskilling initiatives. For example, Microsoft has launched an AI Skills Initiative, including free coursework and a grant program, to help individuals and organisations globally develop generative AI skills.

What does this mean for today’s enterprise?

New Business Horizons

AI is no longer just an efficiency tool; it is a catalyst for entirely new business models. Enterprises that rethink their value propositions through AI-driven specialisation will unlock niche opportunities and reshape industries.
In financial services, for example, AI is fundamentally transforming operations, risk management, customer interactions, and product development, leading to new levels of efficiency, personalisation and innovation.

Navigating AI Integration and Adoption

Integrating AI is not just about deployment; it is about ensuring enterprises are structurally prepared. Legacy IT architectures, fragmented data ecosystems and rigid workflows can hinder the full potential of AI. Organisations must invest in cloud scalability, intelligent automation and agile operating models to make AI a seamless extension of their business. Equally critical is ensuring workforce readiness, which involves strategically embedding AI literacy across all organisational functions and proactively reskilling talent to collaborate effectively with intelligent systems.

Embracing Responsible AI

Ethical considerations, data security and privacy are no longer afterthoughts but are becoming key differentiators. Organisations that embed responsible AI principles at the core of their strategy, rather than treating them as compliance checkboxes, will build stronger customer trust and long-term resilience. This requires proactive bias mitigation, explainable AI frameworks, robust data governance and continuous monitoring for potential risks.

Call to Action: Embracing a Balanced Approach

The AI revolution is underway, and it demands a balanced and proactive response. Enterprises must invest in talent and reskilling initiatives to bridge the AI skills gap, modernise their infrastructure to support AI integration and scalability, and embed responsible AI principles at the core of their strategy, ensuring fairness, transparency and accountability. Simultaneously, researchers must continue to push the boundaries of AI’s potential while prioritising energy efficiency and minimising environmental impact, and policymakers must create frameworks that foster responsible innovation and sustainable growth.
This necessitates combining innovative research with practical enterprise applications and a steadfast commitment to ethical and sustainable AI principles. The rapid evolution of AI presents both an imperative and an opportunity. The next chapter of AI will be defined by those who harness its potential responsibly while balancing technological progress with real-world impact.

Resources

Sudhir Pai: Executive Vice President and Chief Technology & Innovation Officer, Global Financial Services, Capgemini
Professor Aleks Subic: Vice-Chancellor and Chief Executive, Aston University, Birmingham, UK
Alexeis Garcia Perez: Professor of Digital Business & Society, Aston University, Birmingham, UK
Gareth Wilson: Executive Vice President | Global Banking Industry Lead, Capgemini

[1] https://www.datacenterdynamics.com/en/news/researchers-claim-they-can-cut-ai-training-energy-demands-by-75/?itm_source=Bibblio&itm_campaign=Bibblio-related&itm_medium=Bibblio-article-related

Virtual reality training tool helps nurses learn patient-centered care
University of Delaware computer science students have developed a digital interface as a two-way system that can help nurse trainees build their communication skills and learn to provide patient-centered care across a variety of situations. This virtual reality training tool enables users to rehearse their bedside manner with expectant mothers before ever encountering a pregnant patient in person. The digital platform was created by students in Assistant Professor Leila Barmaki’s Human-Computer Interaction Laboratory, including senior Rana Tuncer, a computer science major, and sophomore Gael Lucero-Palacios.

Lucero-Palacios said the training helps aspiring nurses practice more difficult and sensitive conversations they might have with patients. “Our tool is targeted to midwifery patients,” Lucero-Palacios said. “Learners can practice these conversations in a safe environment. It’s multilingual, too. We currently offer English or Turkish, and we’re working on a Spanish demo.”

This type of judgement-free rehearsal environment has the potential to remove language barriers to care, since an avatar’s language capabilities can be changed. For instance, the idea is that on one interface the “practitioner” could speak in one language, but it would be heard on the other interface in the patient’s native language. The patient avatar also can be customized to resemble different health stages and populations to provide learners a varied experience.

Last December, Tuncer took the project on the road, piloting the virtual reality training program for faculty members in the Department of Midwifery at Ankara University in Ankara, Turkey. With technical support provided by Lucero-Palacios back in the United States, she was able to run a demo with the Ankara team, showcasing the capabilities of the UD-developed system’s interactive rehearsal environment.
Last winter, University of Delaware senior Rana Tuncer (left), a computer science major, piloted the virtual reality training program for Neslihan Yilmaz Sezer (right), associate professor in the Department of Midwifery at Ankara University in Ankara, Turkey.

Meanwhile, for Tuncer, Lucero-Palacios and the other students involved in the Human-Computer Interaction Laboratory, developing the VR training tool offered the opportunity to enhance their computer science, data science and artificial intelligence skills outside the classroom. “There were lots of interesting hurdles to overcome, like figuring out a lip-sync tool to match the words to the avatar’s mouth movements and figuring out server connections and how to get the languages to switch and translate properly,” Tuncer said. Lucero-Palacios was fascinated with developing text-to-speech capabilities and the ability to use technology to impact patient care. “If a nurse is well-equipped to answer difficult questions, then that helps the patient,” said Lucero-Palacios.

The project is an ongoing research effort in the Barmaki lab that has involved many students. Significant developments occurred during the summer of 2024, when undergraduate researchers Tuncer and Lucero-Palacios contributed to the project through funding support from the National Science Foundation (NSF). However, work began before and continued well beyond that summer, involving many students over time. UD senior Gavin Caulfield provided foundational support for developing the program’s virtual environment and contributed to the development of the text-to-speech/speech-to-text capabilities. CIS doctoral students Fahim Abrar and Behdokht Kiafar, along with Pinar Kullu, a postdoctoral fellow in the lab, used multimodal data collection and analytics to quantify the participant experience. “Interestingly, we found that participants showed more positive emotions in response to patient vulnerabilities and concerns,” said Kiafar.
The work builds on previous research Barmaki, an assistant professor of computer and information sciences and resident faculty member in the Data Science Institute, completed with colleagues at New Jersey Institute of Technology and University of Central Florida in an NSF-funded project focused on empathy training for healthcare professionals using a virtual elderly patient. In that project, Barmaki employed machine learning tools to analyze a nursing trainee’s body language, gaze, and verbal and nonverbal interactions to capture micro-expressions (facial expressions) and the presence or absence of empathy. “There is a huge gap in communication when it comes to caregivers working in geriatric care and maternal fetal medicine,” said Barmaki. “Both disciplines have high turnover and challenges with lack of caregiver attention to delicate situations.”

UD senior Rana Tuncer (center) met with faculty members Neslihan Yilmaz Sezer (left) and Menekse Nazli Aker (right) of Ankara University in Ankara, Turkey, to educate them about the virtual reality training tool she and her student colleagues have developed to enhance patient-centered care skills for health care professionals.

When these human-human interactions go wrong, for whatever reason, the effects can extend beyond a single patient visit. For instance, a pregnant woman who has a negative health care experience might decide not to continue routine pregnancy care. Beyond the project’s potential to improve health care professional field readiness, Barmaki was keen to note the benefits of real-world workforce development for her students. “Perceptions still exist that computer scientists work in isolation with their computers and rarely interact, but this is not true,” Barmaki said, pointing to the multi-faceted team involved in this project. “Teamwork is very important.
We have a nice culture in our lab where people feel comfortable asking their peers or more established students for help.” Barmaki also pointed to the potential application of these types of training environments, enabled by virtual reality, artificial intelligence and natural language processing, beyond health care. With the framework in place, she said, the idea could be adapted for other types of training involving human-human interaction, say in education, cybersecurity, or even emerging technology such as artificial intelligence (AI). Keeping people at the center of any design or application of this work is critical, particularly as uses for AI continue to expand. “As data scientists, we see things as spreadsheets and numbers in our work, but it’s important to remember that the data is coming from humans,” Barmaki said. While this project leverages computer vision and AI as a teaching tool for nursing assistants, Barmaki explained that this type of system can also be used to train AI and to enable more responsible technologies down the road. She gave the example of using AI to study empathic interactions between humans and to recognize empathy. “This is the most important area where I’m trying to close the loop, in terms of responsible AI or more empathy-enabled AI,” Barmaki said. “There is a whole area of research exploring ways to make AI more natural, but we can’t work in a vacuum; we must consider the human interactions to design a good AI system.” Asked whether she has concerns about the future of artificial intelligence, Barmaki was positive. “I believe AI holds great promise for the future, and, right now, its benefits outweigh the risks,” she said.

Off-channel communications (OCC) occur when employees use unapproved and inadequately protected devices – such as personal cellphones – or applications to communicate with co-workers, counterparties and/or clients. Many financial services firms are required to maintain copies of all communications regarding their business, supervise the same, and produce them in response to regulatory requests. Firms cannot meet those compliance obligations when employees resort to unauthorized OCC for business-related matters.

In charging 15 broker-dealers and one affiliated investment advisor in September 2022 with record-keeping violations, the SEC noted that its investigation uncovered employees at all levels of these firms who routinely used text messaging apps on their personal devices to discuss business matters between January 2018 and September 2021 [1]. The firms settled the charges and agreed to pay penalties totaling more than $1.1 billion. Just as important, as part of the settlements the firms also agreed to engage independent compliance consultants to ensure the use of OCC meets regulatory standards. In a related move [2], the Commodity Futures Trading Commission (CFTC) ordered 11 financial institutions to pay more than $710 million for recordkeeping and supervision failures stemming from widespread use of unapproved communication methods such as personal texts, WhatsApp, and Signal. Additionally, the Financial Industry Regulatory Authority (FINRA) has taken action when it comes to OCC.

Antonio Rega, digital forensics, data governance, privacy, security, emerging technology, and discovery expert with J.S. Held, observes, “While the current administration has loosened certain regulatory enforcement near-term, we continue to observe requests from clients in supporting management of ‘off-channel’ communications, with a particular focus on third-party chat messaging platforms on mobile devices, such as WhatsApp.
These inquiries include supporting corporate stakeholders with internal auditing of their organizational platforms, policies and procedures.” By implementing effective processes and utilizing software and outside experts to monitor and detect OCC, broker-dealers, investment advisers, and other financial institutions can reduce the risk of regulatory enforcement and penalties and ensure that they remain in compliance with regulations. Steve Strombelline, regulatory and enterprise risk management expert with J.S. Held, adds, “Although concerns typically impact broker-dealers, firms outside of financial sectors are looking closely at their messaging processes as well, which is advisable.”

In addition to ensuring that these communications are properly documented and retained, the regulations are designed to prevent the use of OCC to manipulate securities transactions or commit fraud, and to ensure that OCC is not used to violate any other securities laws. Firms’ supervisory procedures must be reasonably designed to detect OCC when they monitor for such activity.

The following article discusses the risks that OCC pose for financial services firms, especially as the SEC, FINRA, and the CFTC have made it clear that they are now scrutinizing firms throughout the industry to see whether they are recording and preserving business information according to regulations. The piece also explains how firms, including broker-dealers of all sizes, should manage their OCC to ensure that they and their employees comply with federal securities laws and regulations. Finally, the authors address the complexity related to the collection of OCC in response to regulatory enforcement investigative requests.
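As a simplified illustration of the kind of automated monitoring described above (a sketch, not any vendor’s product or a firm’s actual process), a surveillance step might compare archived message metadata against an approved-channel policy; the channel names and message schema here are assumptions for the example.

```python
# Hypothetical policy: only these archived, supervised channels are approved.
APPROVED_CHANNELS = {"corporate_email", "recorded_chat"}

def flag_off_channel(messages):
    """Return messages whose metadata shows an unapproved channel."""
    return [m for m in messages if m["channel"] not in APPROVED_CHANNELS]

# Assumed message metadata records for illustration.
messages = [
    {"id": 1, "channel": "corporate_email"},
    {"id": 2, "channel": "whatsapp"},
    {"id": 3, "channel": "personal_sms"},
]
print([m["id"] for m in flag_off_channel(messages)])  # [2, 3]
```

In practice, surveillance of this kind is only one layer; firms still need retention, supervision, and escalation procedures around whatever tooling flags the activity.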
As the fines and settlements between those firms and the SEC exemplify, financial services firms of all sizes need to take this regulatory focus seriously and take the proactive step of engaging an independent third party with expertise and experience in both digital forensics and compliance issues. To read the full article and learn more about the risk of off-channel communications and how companies should manage their OCC to remain compliant, click on the button below.

To connect with Antonio Rega, simply click on his icon now. To arrange a conversation with Steve Strombelline, or for any other media inquiries, contact: Kristi L. Stathis, J.S. Held, +1 786 833 4864, Kristi.Stathis@JSHeld.com

References
[1] https://www.sec.gov/news/press-release/2022-174
[2] https://www.cftc.gov/PressRoom/PressReleases/8599-22

How authorship language helped catch a domestic terrorist – new podcast
In the latest episode of Writing Wrongs, hosts Professor Tim Grant and Dr Nicci MacLeod interview Dr Isobelle Clarke to unravel a case where forensic linguistics helped track down and convict a dangerous individual. Episode three, Imposters Tending to the Wild with Dr Isobelle Clarke, dives into the chilling case of Nikolaos Karvounakis, a self-proclaimed anarchist who planted a viable explosive device in Princes Street Gardens, Edinburgh, in 2018. Karvounakis, a Greek national, evaded capture for years, hiding behind online anonymity and extremist rhetoric. However, forensic linguists stepped in to analyse his anonymous blog posts, revealing patterns in his language that ultimately helped Police Scotland link him to the crime. The case not only demonstrates how linguistic evidence can be a powerful forensic tool but also raises crucial questions about the role of language analysis in modern terrorism investigations. On 11 January 2018, a suspicious cardboard box was discovered in a public seating area in Edinburgh’s Princes Street Gardens. After a controlled explosion, investigators determined the device could have caused serious harm had it detonated. With no immediate leads, the investigation stalled - until an anonymous blog post surfaced, claiming responsibility for the attack. The post, written in both English and Spanish, was linked to an eco-anarchist group called Individualists Tending to the Wild, a Mexican-based extremist organisation advocating violent action against technological progress. Crucially, the post included an image of the bomb’s interior, a detail only the perpetrator or law enforcement could have known. Police Scotland sought the expertise of Professor Tim Grant, who analysed the text, producing a linguistic profile that suggested the writer was neither a native English nor Spanish speaker - but rather someone influenced by another language entirely. Two years later, police identified Nikolaos Karvounakis as a suspect. 
Using comparative authorship analysis, Professor Tim Grant compared his online writings - including song lyrics from his rock band - to the manifesto. By dissecting word patterns, grammatical structures and stylistic quirks, he established that Karvounakis was the likely author. This evidence - alongside forensic meteorology, which linked photos of clouds in Karvounakis’ blog posts to the same weather conditions on the day of the crime - was used to secure a warrant and seize computers containing known writings by Karvounakis. To guard against the inevitable bias that would result from having worked the case for more than two years, Professor Grant invited Dr Isobelle Clarke onto the case as an independent forensic linguist. Using a version of the General Imposters Method, a technique similar to a police lineup but for language, Dr Clarke confirmed that the writing style in the blog post was the closest to Karvounakis’ known writings. Police Scotland put the evidence in the case, including the linguistic evidence, to Karvounakis, and secured a guilty plea. In February 2022, Nikolaos Karvounakis was sentenced to over eight years in prison under the UK’s Terrorism Act. Tim Grant, professor of forensic linguistics at Aston University, said: “The case highlights the growing importance of forensic linguistics in solving crimes, particularly in an age where digital anonymity combines with extremist ideologies.
“It also highlights how different types of language analysis can assist as a case moves through different stages of investigation.” Dr Nicci MacLeod, deputy director of the Aston Institute for Forensic Linguistics, said: “This episode offers listeners a behind-the-scenes look at the forensic methods that expose deception, identify threats and ultimately bring criminals to justice.” Dr Isobelle Clarke, a lecturer in security and protection science at Lancaster University and one of the first graduates from the campus-based MA Forensic Linguistics programme at Aston University, said: “It was great to be back at Aston University talking about the Karvounakis case for the Writing Wrongs podcast. “It’s an interesting case to highlight, as it shows how different types of language analysis can help with police investigations.” Writing Wrongs is available on Spotify, Apple Podcasts and all major streaming platforms. Listeners are encouraged to subscribe, share and engage with the hosts by submitting their forensic linguistics questions. Whether it’s about this case or broader forensic linguistic techniques, Professor Grant and Dr MacLeod welcome inquiries from listeners.
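For readers curious how a “police lineup for language” works in practice, here is a minimal, hypothetical sketch of the imposters idea: repeatedly subsample stylistic features and count how often the candidate’s known writing sits closer to the questioned text than every imposter document. The word-frequency features and distance measure below are toy assumptions for illustration, not the actual features or procedure Dr Clarke applied in the case.

```python
import random

def profile(text, features):
    # Relative frequency of each tracked feature word in the text.
    words = text.lower().split()
    return [words.count(f) / max(len(words), 1) for f in features]

def dist(p, q):
    # Euclidean distance between two feature profiles.
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def imposters_score(questioned, candidate, imposters, features, trials=200, seed=0):
    """Toy version of the imposters idea: over many trials, randomly
    subsample the feature set and count how often the candidate's known
    writing is closer to the questioned text than every imposter document."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        subset = rng.sample(features, max(2, len(features) // 2))
        q = profile(questioned, subset)
        d_candidate = dist(q, profile(candidate, subset))
        if all(d_candidate < dist(q, profile(doc, subset)) for doc in imposters):
            wins += 1
    return wins / trials

# Invented miniature texts, purely to exercise the scoring loop.
features = ["the", "cat", "dog", "a", "runs", "sat"]
score = imposters_score(
    "the cat sat on the mat",                          # questioned text
    "the cat sat near the mat",                        # candidate's known writing
    ["a dog runs fast", "and a dog runs and barks"],   # imposter documents
    features,
)
print(score)  # 1.0 for this toy data
```

A high score across many randomized trials is what lends the method its lineup-like robustness: the candidate must beat the imposters consistently, not just under one convenient choice of features.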

In an age where social media promises to connect us, a new Baylor University study reveals a sobering paradox – the more time we spend interacting online, the lonelier we may feel. Researchers James A. Roberts, Ph.D., The Ben H. Williams Professor of Marketing in Baylor's Hankamer School of Business, and co-authors Philip Young, Ph.D., and Meredith David, Ph.D., analyzed a study that followed nearly 7,000 Dutch adults for nine years to understand how our digital habits shape well-being. Published in the journal Personality and Social Psychology Bulletin, the Baylor study – The Epidemic of Loneliness: A Nine-Year Longitudinal Study of the Impact of Passive and Active Social Media Use on Loneliness – investigated how social media use impacts loneliness over time. This eye-opening research suggests that the very platforms designed to bring people together contribute to an "epidemic of loneliness." The findings showed that both passive and active social media use were associated with increased feelings of loneliness over time. While passive social media use – like browsing without interaction – predictably led to heightened loneliness, active use – which involved posting and engaging with others – also was linked to increased feelings of loneliness. These results suggest that the quality of digital interactions may not fulfill the social needs that are met in face-to-face communication. “This research underscores the complexity of social media’s impact on mental health,” Roberts said. “While social media offers unprecedented access to online communities, it appears that extensive use – whether active or passive – does not alleviate feelings of loneliness and may, in fact, intensify them.” The study also found a two-way relationship between loneliness and social media use. "It appears that a continuous feedback loop exists between the two,” Roberts said. 
“Lonely people turn to social media to address their feelings, but it is possible that such social media use merely fans the flames of loneliness.” The findings emphasize an urgent need for further research into the effects of digital interaction, underlining the essential role of in-person connections in supporting well-being. This study also adds a valuable perspective to the conversation on how digital habits influence mental health, offering insights to shape future mental health initiatives, policies and guidelines for healthier social media use.

Are you covering social media and its impact on people? Then let us help. These experts are available to speak with media; simply click their profiles, or contact Shelby Cefaratti-Bertin, M.A., Assistant Director of Media and Public Relations, to arrange an interview today.

Digital platforms have emerged as powerful tools for people impacted by the Russo-Ukrainian War. One professor at the University of Delaware has, for over two years, provided reading resources specifically for the children whose lives have been forever changed by this conflict. Roberta Michnick Golinkoff, the Unidel H. Rodney Sharp Chair and Professor at UD’s College of Education and Human Development, has developed a website with free interactive e-books, games and other resources for Ukrainian children. A nationally known expert in childhood literacy, Golinkoff worked with developers to stock the site, Stories with Clever Hedgehog, with materials in both Ukrainian and English. The multilingual platform allows displaced families all over the world to engage in shared reading with their children, facilitating early literacy development and promoting well-being during a time of stress. In addition to enhancing learning experiences, digital platforms provide an essential sense of community and connectivity for students isolated by conflict. Golinkoff, who has appeared in numerous national outlets including NPR, ABC News and The Conversation, is available for interviews on the site as well as literacy in general. Just click her profile to get in touch.

Insights: Cyber Risks & Opportunities in 2025
Managing cyber risk is no longer simply a technical necessity but also a strategic imperative in global business. With companies becoming more interconnected and reliant on artificial intelligence, the Internet of Things, and the rest of the digital ecosystem, they are exposed to greater opportunity and risk. In the video below, Senior Managing Director and cybersecurity expert Denis Calderone shares topics covered in the 2025 J.S. Held Global Risk Report focused on managing cyber risk in the year ahead. To view the report and learn more about cyber risks and opportunities, click on the button below. Looking to know more or to connect with Denis Calderone? Simply click on his icon to arrange an interview today.

Managing cyber risk is no longer simply a technical necessity but also a strategic imperative in global business. As companies become more interconnected and reliant on artificial intelligence (AI), the Internet of Things, and the rest of the digital ecosystem, they are exposed to greater opportunities and risks. In this video, Senior Managing Director and cybersecurity expert Denis Calderone shares topics covered in the 2025 J.S. Held Global Risk Report focused on managing cyber risk in the year ahead.

The global regulatory landscape is evolving rapidly in response to the increasing severity of cyber threats. Governments and regulatory bodies, including the U.S. Securities and Exchange Commission (SEC), the European Union (EU), and the U.S. Transportation Security Administration (TSA), have introduced cybersecurity mandates that require businesses to strengthen their defenses, improve incident reporting, and ensure compliance with new industry standards. The 2025 Global Risk Report by J.S. Held provides perspectives on these regulatory shifts, helping businesses navigate the complexities of cyber risk and compliance.

The growing frequency and severity of cyberattacks are reshaping how businesses approach risk management. The J.S. Held 2025 Global Risk Report explores key issues facing businesses today, including:

Business Interruption from Cyber Incidents: High-profile cases like Change Healthcare’s 2024 breach demonstrate how cyberattacks can halt operations, lead to regulatory scrutiny, and result in massive financial losses.

Reputational and Legal Fallout: Cyber incidents can trigger lawsuits and damage a company’s reputation, often leading to prolonged trust recovery periods with customers and investors.

Loss of Sensitive Data: Data breaches can expose critical information, including personal, financial, and proprietary data, amplifying risks of identity theft and fraud.

Tightening Regulatory Landscape: New cybersecurity laws, such as the EU’s NIS2 Directive and Cyber Resilience Act, alongside the US SEC’s disclosure rules, demand stricter compliance from businesses in key sectors.

Complexities in Cyber Insurance: Many companies lack clarity on whether their policies cover ransomware or meet legal and operational needs, leaving them exposed to potential financial risks.

Ransomware Dilemmas and Legal Risks: Paying a ransom may violate international sanctions, creating additional legal complications for organizations already dealing with cyberattacks.

Proactive Cybersecurity Enhancements: Companies implementing advanced cybersecurity measures like multi-factor authentication (MFA), endpoint detection and response (EDR), and immutable backup systems improve their defenses and reduce risks of disruption.

AI-Powered Threat Detection: Artificial intelligence enables companies to identify fraud and cyberattacks faster by analyzing patterns and anomalies in real time, minimizing damage and reducing costs.

Increased Demand for Cyber Insurance: As companies across industries seek better coverage, insurers have opportunities to innovate new products, though exclusionary clauses are becoming more common.

Business Continuity and Resilience: Organizations with strong cyber hygiene, incident response plans, and dependency mapping are better prepared for attacks and may benefit from reduced insurance premiums.

Cybersecurity risk is just one of the five key areas analyzed in the J.S. Held 2025 Global Risk Report. Other topics include sustainability, supply chain, cryptocurrency and digital assets, and AI and data regulations. If you have any questions or would like to further discuss the risks and opportunities outlined in the report, email GlobalRiskReport@jsheld.com. To connect with Denis Calderone, simply click on his icon now. For any other media inquiries, contact: Kristi L. Stathis, J.S. Held, +1 786 833 4864, Kristi.Stathis@JSHeld.com

Crypto & Digital Assets: Global Risks & Opportunities in 2025
The adoption of cryptocurrency and digital assets is expected to increase in all forms this year, given the pro-crypto stance of the new Trump administration. Traditional financial institutions and fintech companies are bringing cryptocurrencies to their customers and searching for regulatory and legal clarity. In the video below, J.P. Brennan, Global Head of Fintech, Payments, Crypto Compliance and Investigations, discusses crypto and digital asset risks and opportunities covered in the 2025 J.S. Held Global Risk Report.

J.S. Held Experts Examine Crypto’s Pitfalls and Potential
The global cryptocurrency market has surged to a staggering USD 3.4 trillion. Alongside this rapid expansion, however, significant challenges and risks continue to emerge. The J.S. Held 2025 Global Risk Report examines the evolving landscape of crypto and digital assets, highlighting both the potential and the pitfalls of this dynamic sector.

The explosion of cryptocurrency adoption across industries, from gaming to decentralized finance (DeFi), has led to increased regulatory scrutiny and security concerns. With the number of users expected to exceed 107.3 million by 2025, every sector is exploring how crypto and the underlying blockchain technology can transform its business. Even the gaming industry has entered the crypto space, with bridging services offering "Play-to-Earn" (P2E) games. While anonymity remains central to both the risk and the success of cryptocurrency, "Know Your Customer" requirements still apply on centralized platforms and continue to evolve, since not all anonymity is malicious.

Despite regulatory, environmental, geopolitical, and other business risks, the J.S. Held 2025 Global Risk Report reveals how the crypto industry continues to evolve, offering new opportunities for businesses and investors around:

- Enhanced Transparency & Security
- Regulatory Clarity
- Education & Compliance
- Digital Identity Solutions

"With regulatory frameworks tightening globally, from the European Union's Markets in Crypto-Assets (MiCA) law to China's outright ban, the future of crypto remains at a critical inflection point," observes J.P. Brennan, Global Head of Fintech, Payments, Crypto Compliance and Investigations at J.S. Held. "As the industry matures, the balance between risk mitigation and innovation will shape the next phase of digital asset adoption," he adds. J.P. Brennan examines the crypto risks and opportunities outlined in the 2025 J.S. Held Global Risk Report in this video.

Cryptocurrency and digital asset risk is just one of the five key areas analyzed in the J.S. Held 2025 Global Risk Report; the others are sustainability, supply chain, artificial intelligence (AI) and data regulations, and managing cyber risk. If you have any questions or would like to further discuss the risks and opportunities outlined in the report, please email GlobalRiskReport@jsheld.com.




