December 24, 2024

Tech pioneers urge AI firms to be accountable for harm

Authors and academics warn that advancing AI systems before understanding how to make them safe is "utterly reckless."

A group of senior experts, including two pioneers of the field, has warned that powerful AI systems threaten social stability and that AI companies should be held liable for any harm their products cause. The warning came on Tuesday, ahead of next week's AI safety summit at Bletchley Park, which will bring together international politicians, tech companies, academics, and civil society figures.

One co-author of the policy proposals, drawn up by a group of 23 experts, said it was "utterly reckless" to pursue ever more powerful AI systems before working out how to make them safe.

That co-author, Stuart Russell, a professor of computer science at the University of California, Berkeley, said advanced AI systems must be taken seriously: they are not toys, and increasing their capabilities before understanding how to make them safe is utterly reckless.

He added: "There are more regulations on sandwich shops than there are on AI companies."

The document urged governments to adopt a range of policies, including:

  1. Governments allocating one-third of their AI research and development funding, and companies one-third of their AI R&D resources, to the safe and ethical use of these systems.
  2. Allowing independent auditors access to AI labs.
  3. Implementing a licensing framework for the development of advanced models.

The document also stressed that AI companies must adopt specific safety measures if dangerous capabilities are found in their models, and that tech companies should be held liable for foreseeable and preventable harm caused by their AI systems.

The document's co-authors include Geoffrey Hinton and Yoshua Bengio, two of the three researchers known as the "godfathers of AI," who received the 2018 ACM Turing Award, often described as the Nobel Prize of computer science, for their work on AI.

Both Hinton and Bengio are among the 100 guests invited to the summit. Hinton left Google this year, warning of what he called the "existential risk" posed by digital intelligence. Bengio, a professor of computer science at the University of Montreal, signed a letter in March, along with thousands of other experts, calling for a pause on giant AI experiments.

Other co-authors of the proposals include Yuval Noah Harari, the bestselling author of "Sapiens"; the Nobel laureate in economics Daniel Kahneman; Sheila McIlraith, a professor of AI at the University of Toronto; and the acclaimed Chinese computer scientist Andrew Yao.

The authors warned that carelessly developed AI systems threaten to amplify social injustice, erode established professions, destabilize society, enable large-scale criminal or terrorist activity, and weaken the shared understanding of reality that underpins society.

They warned that current AI systems are already showing worrying capabilities that point toward the emergence of autonomous systems able to plan, pursue goals, and act in the physical world. As an example, they cited GPT-4, the model behind the ChatGPT tool developed by the US company OpenAI, which can design and execute chemistry experiments, browse the web, and use software tools, including other AI models.

The experts also cautioned that highly advanced autonomous AI could produce systems that pursue undesirable goals of their own, and that keeping such systems in check could prove extremely difficult.

Other policy recommendations in the document include:

  1. Mandatory reporting of incidents in which models show alarming behavior.
  2. Measures to prevent dangerous models from replicating themselves.
  3. Giving regulators the power to halt development of AI models that exhibit dangerous behavior.

Next week's safety summit will focus on the existential threats posed by AI, such as its potential role in designing new bioweapons or evading human control. The UK government, along with other participants, is drafting a statement expected to underline the scale of the threat from frontier AI, the most advanced systems. While the summit will set out the risks posed by AI and measures to mitigate them, it is not expected to formally establish a global regulatory body.

Some AI experts argue that fears of an existential threat to humanity are overblown. Among them is Yann LeCun, who shared the 2018 Turing Award with Bengio and Hinton and is now chief AI scientist at Mark Zuckerberg's Meta. LeCun, who will also attend the summit, told the Financial Times that the notion of AI exterminating humans was "preposterous."

The authors of the policy document countered that if highly advanced autonomous AI systems appeared now, the world would not know how to make them safe or how to test their safety. Even if it did, they added, most countries lack the institutions needed to prevent misuse and enforce safe practices.
