Stop talking about tomorrow’s AI doomsday when AI poses risks today


OpenAI CEO Sam Altman (seen here testifying before the US Senate) is among the signatories of an open letter warning of the risk of human extinction from AI. Credit: Win McNamee/Getty

It’s unusual to see industry leaders talk about the potential lethality of their product. It’s not something tobacco or oil executives tend to do, for example. Yet it seems barely a week goes by without a tech insider trumpeting the existential risks of artificial intelligence (AI).

In March, an open letter signed by Elon Musk and other technologists warned that giant AI systems pose grave risks to humanity. A few weeks later, Geoffrey Hinton, a pioneer in the development of artificial intelligence tools, left his research role at Google, warning of the serious risks posed by the technology. More than 500 business and scientific leaders, including representatives of OpenAI and Google DeepMind, have signed a 23-word statement saying that addressing the risk of human extinction from AI should be a global priority alongside other societal risks such as pandemics and nuclear war. And on June 7, the UK government invoked AI’s potential existential danger when it announced that it would host the first major global AI safety summit this fall.

The idea that artificial intelligence could lead to human extinction has been discussed on the fringes of the tech community for years. Enthusiasm about ChatGPT and other generative-AI tools has now propelled it into the mainstream. But, like a magician’s sleight of hand, this narrative distracts attention from the real issue: the damage to society that AI systems and tools are causing now, or are likely to cause in the future. Governments and regulators in particular should not be distracted by it, and must act decisively to limit the potential harms. And although their work should be informed by the tech industry, it should not be tied to the tech agenda.

Many AI researchers and ethicists to whom Nature has spoken are frustrated by the doomsday talk that dominates debates about AI. It is problematic in at least two ways. First, the specter of AI as an all-powerful machine fuels competition between nations to develop AI so that they can benefit from and control it. This works to the advantage of tech companies: it encourages investment and weakens the case for regulating the industry. An arms race is already under way to produce next-generation AI-powered military technology, perhaps increasing the risk of catastrophic conflict, though not the kind much discussed in the mainstream narrative that AI threatens human extinction.

Second, it allows a homogeneous group of company executives and technologists to dominate the conversation about AI risk and regulation, while other communities are left out. Letters written by tech-industry leaders are essentially drawing boundaries around who counts as an expert in this conversation, says Amba Kak, director of the AI Now Institute in New York City, which focuses on the social consequences of AI.

AI systems and tools have many potential benefits, from synthesizing data to assisting with medical diagnoses. But they can also cause well-documented harms, from biased decision-making to the elimination of jobs. AI-powered facial recognition is already being abused by autocratic states to track and oppress people. Biased AI systems could use opaque algorithms to deny people social benefits, medical care or asylum, applications of the technology that are likely to hit people in marginalized communities the hardest. Debates about these issues are being starved of oxygen.

One of the major concerns about the latest generation of generative AI is its potential to boost disinformation. The technology makes it easier to produce ever more convincing fake text, photos and videos that could influence elections, say, or undermine people’s ability to trust any information, potentially destabilizing societies. If technology companies are serious about avoiding or reducing these risks, they must put ethics, safety and accountability at the heart of their work. At present, they seem reluctant to do so. OpenAI stress-tested GPT-4, its latest generative AI model, by prompting it to produce harmful content and then putting safeguards in place. But although the company has described what it did, the full details of the testing and the data on which the model was trained have not been made public.

Technology companies must formulate industry standards for the responsible development of AI systems and tools, and undertake rigorous safety testing before products are released. They should also submit their data in full to independent regulatory bodies that are able to verify them, much as pharmaceutical companies must submit clinical-trial data to medical authorities before drugs can go on sale.

For this to happen, governments must establish appropriate legal and regulatory frameworks, as well as enforce existing laws. Earlier this month, the European Parliament approved the Artificial Intelligence Act, which would regulate AI applications in the European Union according to their potential risk, banning police use of real-time facial-recognition technology in public spaces, for example. There are further hurdles to overcome before the bill becomes law in EU member states, and questions remain about the lack of detail on how it will be enforced, but it could help to set global standards for AI systems. Further consultations on AI risks and regulation, such as the forthcoming UK summit, should invite a diverse list of participants, including researchers who study the harms of AI and representatives of communities that have been, or are at particular risk of being, harmed by the technology.

Researchers must do their part by building a bottom-up culture of responsible AI. In April, the Neural Information Processing Systems (NeurIPS) machine-learning conference announced the adoption of a code of ethics for meeting submissions. This includes an expectation that research involving human participants has been approved by an institutional or ethics review board (IRB). All researchers and institutions should follow this approach, and should also ensure that IRBs, or peer-review panels in cases where no IRB exists, have the expertise to examine potentially risky AI research. And scientists using large datasets containing data from people must find ways to obtain consent.

Alarmist narratives about existential risks are not constructive. Serious discussions about actual risks and actions to contain them are. The sooner humanity establishes its rules for interacting with artificial intelligence, the sooner we can learn to live in harmony with technology.
