Ask an Expert: Is the "AI Moratorium" too far-reaching?

March 31, 2023 | 4 min read
Featuring: Jeremy Kedziora, Ph.D.

Recent responses to ChatGPT have featured eminent technologists calling for a six-month moratorium on the development of “AI systems more powerful than GPT-4.”


Dr. Jeremy Kedziora, PieperPower Endowed Chair in Artificial Intelligence at Milwaukee School of Engineering, supports a middle-ground approach between unregulated development and a pause. He says, "I do not agree with a moratorium, but I would call for government action to develop regulatory guidelines for AI use, particularly for endowing AIs with actions."


Dr. Kedziora is available as a subject matter expert on the recent call for an "AI moratorium" issued by tech leaders.


According to Dr. Kedziora:


There are good reasons to call for additional oversight of AI creation:


  • Large deep learning or reinforcement learning systems encode complicated relationships that are difficult for users to predict and understand. Integrating them into the daily lives of billions of people creates a complex adaptive system whose behavior is even harder to anticipate, predict, and plan for. This is likely fertile ground for unintended – and bad – outcomes.


  • Rather than outright replacement, a very real possibility is that AI-enabled workers will be productive enough that we’ll need fewer workers to accomplish the same tasks. The implication is that there won’t be enough jobs for everyone who wants one. This means that governments will need to seriously consider proposals such as universal basic income (UBI) and work to limit economic displacement – work that will require time and political bargaining.


  • I do not think it is controversial to say that we would not want a research group at MIT, Caltech, or anywhere else developing an unregulated nuclear weapon. Given the difficulty of predicting its impact, AI may well belong in the same category of powerful technology, suggesting that its creation should be subject to the democratic process.


At the same time, there are some important things to keep in mind about ChatGPT-like AI systems that suggest inherent limits to their impact:


  • Though ChatGPT may appear – at times – to pass the famous Turing test, this does not imply that these systems ‘think,’ are ‘self-aware,’ or are ‘alive.’ The Turing test aims to avoid answering those questions altogether by simply asking whether a machine can be distinguished from a human by another human. At the end of the day, ChatGPT is nothing more than a bunch of weights! (See the sketch after this list.)


  • Contemporary AIs – ChatGPT included – have very limited levers to pull. They simply cannot take many actions. Indeed, ChatGPT’s only action is to generate text in response to a prompt. It cannot do anything independently. Its effects, for now, are limited to passing through the hands of humans and to the social changes it could thereby create.


  • The call for a moratorium emphasizes ‘control’ over AI. It is worth asking just what that control means. Take ChatGPT as an example – can its makers control its responses to prompts? Probably only in a limited fashion at best, and less and less so as more people use it; there simply are not the resources to police every response. Can ChatGPT’s makers ‘flip the off switch’? Absolutely – restricting access to the API would effectively turn ChatGPT off. In that sense, it is under much the same kind of control that governments exercise over the people subject to them.



  • There are definitional problems with this sort of moratorium – who would be subject to it? Industry actors? Academics? The criterion used by those calling for the moratorium is “AI systems more powerful than GPT-4,” but what does “powerful” mean? Enforcement requires drawing boundaries around which AI development is covered; without those boundaries, such a policy cannot be enforced.


  • It might already be too late – some groups already claim to have recreated ChatGPT.
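
A minimal sketch of the first two points above – that a language model is just a collection of numeric weights, and that its only lever is mapping a prompt to text. It assumes the Hugging Face transformers library and uses the openly available GPT-2 checkpoint as a small stand-in for ChatGPT-class models (whose weights are not public); the prompt string is purely illustrative.

    # A language model reduces to a set of named weight tensors plus a tokenizer.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # "Nothing more than a bunch of weights": count the numeric parameters.
    n_params = sum(p.numel() for p in model.parameters())
    print(f"GPT-2 is {n_params:,} floating-point parameters")  # roughly 124 million

    # The model's only action: turn an input token sequence into an output one.
    inputs = tokenizer("Is an AI moratorium enforceable?", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    print(tokenizer.decode(output_ids[0]))

Everything the system does runs through that final call – text in, text out – and any further effect on the world happens only when a human reads the output and acts on it.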


There are two major groups to think about when developing regulatory solutions for AI: academia and industry. There may already be good vehicles for regulating academic research – for example, oversight of grant funding. Oversight of AI development in industry, by contrast, is an area that still requires attention and the application of expertise.


If you're a journalist covering Artificial Intelligence, then let us help. Dr. Kedziora is a respected expert in Data Science, Machine Learning, Statistical Modeling, Bayesian Inference, Game Theory, and all things AI. He's available to speak with the media – simply click on the icon to arrange an interview today.



Connect with:
  • Jeremy Kedziora, Ph.D., Associate Professor

    Dr. Jeremy Kedziora is the PieperPower Endowed Chair of Artificial Intelligence at MSOE.
