
Decoding the Future of AI: From Disruption to Democratisation and Beyond

The global AI landscape has become a melting pot for innovation, with diverse thinking pushing the boundaries of what is possible. Its application extends beyond technology alone, reshaping traditional business models and redefining how enterprises, governments and societies operate. Advancements in model architectures, training techniques and the proliferation of open-source tools are lowering barriers to entry, enabling organisations of all sizes to develop competitive AI solutions with significantly fewer resources. As a result, the long-standing notion that AI leadership is reserved for entities with vast computational and financial resources is being challenged. This shift is also redrawing the global AI power balance, fostering a decentralised approach to AI in which competition and collaboration coexist across regions. As AI development becomes more distributed, investment strategies, enterprise innovation and global technological leadership are being reshaped. However, established AI powerhouses still wield significant leverage, driving an intense competitive cycle of rapid innovation. Amid this acceleration, it is critical to distinguish true technological breakthroughs from over-hyped narratives, adopting a measured, data-driven approach that balances innovation with demonstrable business value and robust ethical AI guardrails.

Implications of the Evolving AI Landscape

The democratisation of AI advancements, intensifying competitive pressures, the critical need for efficiency and sustainability, evolving geopolitical dynamics and the global race for skilled talent are all fuelling the development of AI worldwide. These dynamics are paving the way for a global rebalancing of technological leadership.

Democratisation of AI Potential

The ability to develop competitive AI models at lower cost is not only broadening participation but also reshaping how AI is created, deployed and controlled. Open-source AI fosters innovation by enabling startups, researchers and enterprises to collaborate and iterate rapidly, leading to diverse applications across industries. For example, xAI has made a significant move in the tech world by open-sourcing its Grok AI chatbot model, potentially accelerating the democratisation of AI and fostering innovation. However, greater accessibility can also introduce challenges, including risks of misuse, uneven governance and concerns over intellectual property. Additionally, as companies strategically leverage open-source AI to influence market dynamics, questions arise about the evolving balance between open innovation and proprietary control.

Increased Competitive Pressure

The AI industry is fuelled by a relentless drive to stay ahead of the competition, a pressure felt equally by Big Tech and startups. This is accelerating the release of new AI services as companies strive to meet growing consumer demand for intelligent solutions. The risk of market disruption is significant; those who lag face being eclipsed by more agile players. To survive and thrive, differentiation is paramount. Companies are laser-focused on developing unique AI capabilities and applications, creating a marketplace where constant adaptation and strategic innovation are crucial for success.

Resource Optimisation and Sustainability

The trend toward accessible AI necessitates resource optimisation: developing models that require significantly less computational power, energy and training data. This is not just about cost; it is crucial for sustainability.
Training large AI models is energy-intensive; for example, training GPT-3, a 175-billion-parameter model, is believed to have consumed 1,287 MWh of electricity, equivalent to an average American household's use over 120 years [1]. This drives innovation in model compression, transfer learning and specialised hardware and inference software, such as NVIDIA's TensorRT. Small language models (SLMs) are a key development, offering performance comparable to larger models with drastically reduced resource needs. This makes them ideal for edge devices and resource-constrained environments, furthering both accessibility and sustainability across the AI lifecycle.

Multifaceted Global AI Landscape

The global AI landscape is increasingly defined by regional strengths and priorities. The US, with its strength in cloud infrastructure and its software ecosystem, leads in "short-chain innovation", rapidly translating AI research into commercial products. Meanwhile, China excels in "long-chain innovation", deeply integrating AI into its extended manufacturing and industrial processes. Europe prioritises ethical, open and collaborative AI, while counterparts across APAC showcase a diversity of approaches. Underlying these regional variations is a shared trajectory for the evolution of AI, increasingly guided by principles of responsible AI encompassing ethics, sustainability and open innovation, although the specific implementations and stages of advancement differ across regions.

The Critical Talent Factor

The evolving AI landscape necessitates a skilled workforce. Demand for professionals with expertise in AI and machine learning, data analysis and related fields is rapidly increasing. This creates a talent gap that businesses must address through upskilling and reskilling initiatives. For example, Microsoft has launched an AI Skills Initiative, including free coursework and a grant program, to help individuals and organisations globally develop generative AI skills.

What does this mean for today's enterprise?

New Business Horizons

AI is no longer just an efficiency tool; it is a catalyst for entirely new business models. Enterprises that rethink their value propositions through AI-driven specialisation will unlock niche opportunities and reshape industries. In financial services, for example, AI is fundamentally transforming operations, risk management, customer interactions and product development, leading to new levels of efficiency, personalisation and innovation.

Navigating AI Integration and Adoption

Integrating AI is not just about deployment; it is about ensuring enterprises are structurally prepared. Legacy IT architectures, fragmented data ecosystems and rigid workflows can hinder the full potential of AI. Organisations must invest in cloud scalability, intelligent automation and agile operating models to make AI a seamless extension of their business. Equally critical is ensuring workforce readiness, which involves strategically embedding AI literacy across all organisational functions and proactively reskilling talent to collaborate effectively with intelligent systems.

Embracing Responsible AI

Ethical considerations, data security and privacy are no longer afterthoughts; they are becoming key differentiators. Organisations that embed responsible AI principles at the core of their strategy, rather than treating them as compliance checkboxes, will build stronger customer trust and long-term resilience.
This requires proactive bias mitigation, explainable AI frameworks, robust data governance and continuous monitoring for potential risks.

Call to Action: Embracing a Balanced Approach

The AI revolution is underway, and it demands a balanced and proactive response. Enterprises must invest in talent and reskilling initiatives to bridge the AI skills gap, modernise their infrastructure to support AI integration and scalability, and embed responsible AI principles at the core of their strategy, ensuring fairness, transparency and accountability. Simultaneously, researchers must continue to push the boundaries of AI's potential while prioritising energy efficiency and minimising environmental impact, and policymakers must create frameworks that foster responsible innovation and sustainable growth. This necessitates combining innovative research with practical enterprise applications and a steadfast commitment to ethical and sustainable AI principles. The rapid evolution of AI presents both an imperative and an opportunity. The next chapter of AI will be defined by those who harness its potential responsibly while balancing technological progress with real-world impact.

Resources

Sudhir Pai: Executive Vice President and Chief Technology & Innovation Officer, Global Financial Services, Capgemini
Professor Aleks Subic: Vice-Chancellor and Chief Executive, Aston University, Birmingham, UK
Alexeis Garcia Perez: Professor of Digital Business & Society, Aston University, Birmingham, UK
Gareth Wilson: Executive Vice President | Global Banking Industry Lead, Capgemini

[1] https://www.datacenterdynamics.com/en/news/researchers-claim-they-can-cut-ai-training-energy-demands-by-75/?itm_source=Bibblio&itm_campaign=Bibblio-related&itm_medium=Bibblio-article-related

Alexeis Garcia Perez

Ethical Implications of AI in Business: Balancing Innovation with Responsibility

Artificial Intelligence (AI) has revolutionized the business landscape, driving innovation and reshaping industries. From automating routine tasks to enhancing customer experiences, AI's applications are vast and rapidly expanding. As businesses stand on the brink of unprecedented technological advancement, they must also navigate the complex web of ethical implications associated with AI deployment. This delicate balance between innovation and responsibility sets the stage for an ongoing dialogue that is crucial for sustainable growth and societal well-being.

Dr. Jeremy Kedziora, associate professor and the PieperPower Endowed Chair in Artificial Intelligence at Milwaukee School of Engineering (and former CIA chief methodologist), is available to discuss how these new technologies are enhancing business operations, along with their ethical implications:

- Automating tasks using AI
- Using large language models like ChatGPT
- Algorithmic bias in AI systems
- Transparency in AI decision-making processes
- Steps needed to create fair and equitable AI solutions

Jeremy Kedziora, Ph.D.

Changes to Philadelphia's Tax Structure Could Represent "Pivotal" Economic Shift

On March 14, Philadelphia Mayor Cherelle Parker delivered her first budget proposal in a 75-minute address to City Council. Throughout her speech, the new mayor touched on subjects ranging from corridor cleaning and housing programs to police spending and anti-violence grants. However, one set of items was absent from her $6.29 billion plan and presentation.

In a break from recent administrations, Mayor Parker abstained from calling for cuts to the city's wage or business taxes. She also refrained from speaking on adjustments to Philadelphia's tax structure, which depends more heavily than other municipalities on wage taxes and carries a relatively light property tax burden.

Theodore Arapis, PhD, chair of Villanova University's Department of Public Administration and an expert on fiscal policy in local governance, believes that changes to how Philadelphia levies and handles taxes, particularly on the real estate front, should be discussed further.

"[Having property taxes play a larger role] represents a pivotal shift towards creating a more resilient and efficient revenue system," said Dr. Arapis, after reviewing the mayor's plan. "The current reliance on wage taxes is subject to considerable volatility, undermining fiscal stability. In contrast, property taxes offer a more inelastic and predictable revenue stream, suggesting a strategic move towards them would be beneficial for the city."

Dr. Arapis also maintains that, with Harrisburg's go-ahead, Philadelphia's real estate taxes could be structured in a way that effectively facilitates business growth while ensuring that homeowners are not unduly burdened.

"Differentiating tax rates between commercial and residential properties could strike a delicate balance—spurring economic development while maintaining equitable tax distribution," he stated. "This segmentation could stimulate business activity by creating favorable conditions for commercial enterprises, which is essential for Philadelphia's economic vitality."

Additionally, Dr. Arapis contends that tweaks to the city's tax abatement policy, which is currently in the process of a gradual phaseout, could further provide for inclusive and sustainable growth.

"Tax abatements have been utilized as a policy tool to stimulate property revitalization and neighborhood renewal. However, these measures often carry unintended consequences that disproportionately impact existing residents," he shared. "Specifically, such incentives can precipitate a rise in property values and, consequently, a hike in the tax burdens of non-abated properties. This dynamic can exacerbate gentrification, leading to the displacement of longstanding community members.

"To address the complexities of tax abatement policies in fostering affordable [and accessible] housing, a nuanced strategy is vital. A more equitable distribution of housing affordability could be achieved by, say, mandating that at least 50% of units in new developments meet affordability criteria... [and diversifying] the approach to income targeting, perhaps through a tiered system that caters to various income levels [and indexes] these categories to local inflation and wage growth."

Despite the content of her first budget proposal and address, Mayor Parker likely shares some of these perspectives on tax reform and structural adjustments.
Prior to entering office, during her years as a City Council member and her days on the campaign trail, the future executive worked to lower Philadelphia's wage tax, acknowledged the untapped potential of property taxes and expressed her desire for a differentiation of property tax rates.

Before pursuing these measures further, as The Philadelphia Inquirer reports, Mayor Parker is probably (1) holding off until the newly announced Tax Reform Commission shares its findings, (2) ensuring that there are no immediate, major disruptions to the city's flow of revenue as she launches her "safer, cleaner, greener" agenda, and (3) waiting for state lawmakers to make greater progress on raising the minimum wage and restructuring the Commonwealth's tax legislation, namely the uniformity clause.

The mayor did, however, make one notable tax-related recommendation in her budget plan: she proposed an increase in the school district's share of real estate tax revenue from 55% to 56%, which could boost funding for the district by $119 million over five years.

"The redistribution of real estate taxes between the school district and the city is commendable as an initial measure," observed Dr. Arapis. "However, without a comprehensive reform of the real estate tax system, encompassing regular property reassessments and adjustments to mill rates, this change is likely to yield only ephemeral benefits."

Theodore Arapis, PhD

AI-Generated Content is a Game Changer for Marketers, but at What Cost?

Goizueta's David Schweidel pitted man against machine to create SEO web content, only to find that providing an editor with bot-generated content trounces the human copywriter every time. Good news for companies looking to boost productivity and save cash, he says. But could there be other hidden costs?

In December 2022, The New York Times ran a piece looking back on the year's biggest consumer tech updates. The review was mixed. Ownership shifts in the world of social media garnered special mentions, but hardware innovations had been largely "meh," mused the Times. There was one breakthrough area that warranted attention, however: AI-powered language-processing tech capable of generating natural-looking text, the same technology that powers familiar chatbots. And one such technology could well be poised to "invade our lives in 2023."

Earlier in December, AI research lab OpenAI released the latest update to its Generative Pre-Trained Transformer (GPT) technology. Its latest iteration, ChatGPT, immediately went viral. Here was an AI assistant that sounded intelligent. Not only could it answer any question thrown its way without supervised training, but when prompted, it could also write blog posts, as well as find and fix bugs in programming code. ChatGPT could draft business proposals and even tell jokes. All of this at a speed that beggared belief.

Since its first release in 2020, OpenAI's GPT technology has powered through a slew of updates that have seen its capabilities leap forward "by light years" in less than 24 months, says Goizueta Professor of Marketing David Schweidel. For businesses looking to harness this rapidly evolving technology, the potential is clearly enormous. But aren't there also risks that industry and consumers alike will need to navigate?

Schweidel is clear that the academic community and initiatives such as the Emory AI Humanity Initiative have a critical role in asking hard questions—and in determining the limitations and dangers, as well as the opportunities, inherent in tech innovation—because, as he puts it, "these things are going to happen whether we like it or not."

Man Versus Machine

To that end, Schweidel and colleagues from the Vienna University of Economics and Business and Modul University Vienna have put together a study looking at how well natural language generation technologies perform in one specific area of marketing: drafting bespoke content for website search engine optimization, better known as SEO. What they find is that content crafted by the machine, after light human editing, systematically outperforms its human counterparts—and by a staggering margin.

Digging through the results, Schweidel and his colleagues pinpoint an almost 80 percent success rate for appearing on the first page of search engine results with AI-generated content. This compares with just 22 percent for content created by human SEO experts. In other words, AI content passed to a human editor is roughly four times more effective than a skilled copywriter working alone.

Reaching these findings meant running two real-time, real-world experiments, says Schweidel. First, he and his colleagues had to program the machine, in this case GPT-2, an earlier incarnation of GPT. GPT relies on natural language generation (NLG), a software process that converts manually uploaded input into authentic-sounding text or content—comparable in some ways to the human process of translating ideas into speech or writing.
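The paragraphs that follow describe how the researchers adapted GPT-2 to this task. Their exact pipeline is not published here, but the general pattern of continuing to train a pre-trained GPT-2 on scraped, keyword-relevant page text and then sampling draft copy from it can be sketched with the Hugging Face transformers and datasets libraries. Everything below is illustrative only: the file seo_pages.txt, the prompt and the hyperparameters are assumptions, not details from the study.

    # Illustrative sketch only: fine-tune a pre-trained GPT-2 on scraped page
    # text ("seo_pages.txt" is a hypothetical file, one document per line),
    # then sample draft copy for a target keyword.
    from datasets import load_dataset
    from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    raw = load_dataset("text", data_files={"train": "seo_pages.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    train_set = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gpt2-seo", num_train_epochs=3,
                               per_device_train_batch_size=2),
        train_dataset=train_set,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()                                    # adapt GPT-2 to the scraped SEO text

    # Generate a draft page for a hypothetical target keyword.
    prompt = tokenizer("cloud accounting software for small businesses",
                       return_tensors="pt")
    draft = model.generate(**prompt, max_new_tokens=200, do_sample=True,
                           top_p=0.9, pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(draft[0], skip_special_tokens=True))

In the actual experiments, drafts like these were lightly edited by a human before being published and A/B tested against copywriter-produced pages.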
To prepare GPT-2 for SEO-specific content creation, Schweidel et al. started with the pre-trained GPT-2 and then let the machine do the heavy lifting: searching the internet for appropriate results based on the desired keyword, scraping the text of the websites, and updating GPT-2 to "learn" what SEO looks like, says Schweidel.

"We partnered with an IT firm and a university to run our field experiments. This meant creating SEO content for their websites using GPT-2 and actual human SEO experts, and then doing A/B testing to see which content was more successful in terms of landing in the top 10 search engine results on Google. So this was an opportunity to put the AI bot to the test in a real-world setting to see how it would perform against people."

The results point to one clear winner. Not only did content from GPT-2 outperform its human rivals in SEO capabilities, it did so at scale. The AI-generated content scored a daily median of seven or more hits on the first page of Google search results. The human-written copy didn't make it onto the first results page at all. On its best day, GPT showed up for 15 of its 19 pages of search terms inside the top 10 search engine results, compared with just two of the nine pages created by the human copywriters—a success rate of just under 80 percent compared with 22 percent.

Savings at Scale

The machine-generated content, after being edited by a human, trounces the human in SEO. But that's not all, says Schweidel. The GPT bot was also able to produce content in a fraction of the time taken by the writers, reducing production time and associated labor costs by more than 90 percent, he says.

"In our experiments, the copywriters took around four hours to write a page, while the GPT bot and human editor took 30 minutes. Now assuming the average copywriter makes an annual $45K on the basis of 1,567 hours of work, we calculate that the company we partnered with would stand to save more than $100,000 over a five-year period just by using the AI bot in conjunction with a human editor, rather than relying on SEO experts to craft content. That's a 91 percent drop in the average cost of creating SEO content. It's an orders of magnitude difference in productivity and costs."

But there are caveats. First off, there's the quality of the machine-generated content to consider. For all its mind-boggling capabilities, even the newly released ChatGPT tends to read as somewhat sterile, says Schweidel. That's a problem both in terms of Google guidelines and brand coherence. Human editors are still needed to attenuate copy that can sound a little "mechanical."

"Google is pretty clear in its guidelines: Content generated by machines alone is a definite no-no. You also need to factor in the uncanny valley effect, whereby something not quite human can come off as weird. Having an editor come in to smooth out AI content is critical to brand voice as well as the human touch."

Asking the Big Questions

Then there are the moral and metaphysical dimensions of machine learning and creativity that raise an important question: just because we can, does that mean we should? Here, Schweidel has grave reservations about the future of ChatGPT and its ilk.

"The potential of this kind of technology is extraordinarily exciting when you think about the challenges we face, from productivity to pandemics, from sustainable growth to climate change. But let's be very clear about the risks, too.
"AI is already capable of creating content—audio, visual and written—that looks and feels authentic. In a world that is hugely polarized, you have to ask yourself: How can that be weaponized?"

At the end of the day, says Schweidel, the large language models powering these generative AIs are essentially "stochastic parrots": trained mimics whose output can be hard to predict. In the wrong hands, he warns, the potential for misinformation—and worse—could well be "terrifying."

"Shiny new tech is neither inherently good nor bad. It's human nature to push the boundaries. But we need to ensure that the guardrails are in place to regulate innovation at this kind of pace, and that's not easy. Governments typically lag far behind OpenAI and companies like them; even academics have a hard time keeping up. The real challenge ahead of us will be about innovating the guardrails in tandem with the tech—innovating our responsible practices and processes. Without effective safeguards in place, we're on a path to potential destruction."

David Schweidel is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University's Goizueta Business School.
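For readers who want to reproduce the headline numbers quoted above (the near-80 percent versus 22 percent success rates and the roughly 90 percent cost reduction), here is a small back-of-envelope sketch. The hourly rate and per-page times are taken from the figures Schweidel quotes; the annual page volume is a hypothetical assumption, since the article reports only totals, and the exact 91 percent figure depends on costing assumptions the article does not spell out.

    # Back-of-envelope check of the figures quoted in the article (a sketch,
    # not the study's own calculation).
    ai_hits, ai_pages = 15, 19            # best-day pages reaching the top 10 (AI + editor)
    human_hits, human_pages = 2, 9        # pages reaching the top 10 (human copywriters)

    ai_rate = ai_hits / ai_pages          # ~0.79, "just under 80 percent"
    human_rate = human_hits / human_pages # ~0.22, "22 percent"
    print(f"AI {ai_rate:.0%} vs human {human_rate:.0%}, "
          f"ratio ~{ai_rate / human_rate:.1f}x")        # roughly four times as effective

    hourly_wage = 45_000 / 1_567          # ~$28.72/hour for the average copywriter
    cost_human_page = 4.0 * hourly_wage   # four hours of copywriting per page
    cost_ai_page = 0.5 * hourly_wage      # 30 minutes of human editing per page
    print(f"per page: ${cost_human_page:.2f} vs ${cost_ai_page:.2f} "
          f"({1 - cost_ai_page / cost_human_page:.0%} lower)")   # ~88% on these inputs alone

    pages_per_year = 200                  # assumption for illustration only
    saving = 5 * pages_per_year * (cost_human_page - cost_ai_page)
    print(f"5-year labor saving at {pages_per_year} pages/yr: ~${saving:,.0f}")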

David Schweidel