AI poses 'extinction-level' threat, warns State Department report
A new US State Department-funded study calls for a temporary ban on the creation of advanced AI past a certain threshold of computational power.
The tech, its authors claim, poses an 'extinction-level threat to the human species.'
The study, commissioned as part of a $250,000 federal contract, also calls for 'defining emergency powers' for the American government's executive branch 'to respond to dangerous and fast-moving AI-related incidents' — like 'swarm robotics.'
Treating high-end computer chips as international contraband, and even monitoring how hardware is used, are just some of the drastic measures the new study calls for.
The report joins a chorus of industry, governmental and academic voices calling for aggressive regulatory attention to the hotly pursued and game-changing, but socially disruptive, potential of artificial intelligence.
Last July, the United Nations' agency for science and culture (UNESCO), for example, paired its AI concerns with equally futuristic worries over brain chip tech, a la Elon Musk's Neuralink, warning of 'neurosurveillance' violating 'mental privacy.'
A new US State Department-funded study by Gladstone AI (above), commissioned as part of a $250,000 federal contract, calls for 'defining emergency powers' for the US government's executive branch 'to respond to dangerous and fast-moving AI-related incidents'
Gladstone AI's report floats a dystopian scenario that the machines may decide for themselves that humanity is an enemy to be eradicated, a la the Terminator films: 'if they are developed using current techniques, [AI] could behave adversarially to human beings by default'
While the new report notes upfront, on its first page, that its recommendations 'do not reflect the views of the United States Department of State or the United States Government,' its authors have been briefing the government on AI since 2021.
The study authors, a four-person AI consultancy firm called Gladstone AI run by brothers Jérémie and Edouard Harris, told TIME that their earlier presentations on AI risks were frequently heard by government officials with no authority to act.
That's changed with the US State Department, they told the magazine, because its Bureau of International Security and Nonproliferation is specifically tasked with curbing the spread of cataclysmic new weapons.
And the Gladstone AI report devotes considerable attention to 'weaponization risk.'
In recent years, Gladstone AI's CEO Jérémie Harris (inset) has also presented before the Standing Committee on Industry and Technology of Canada's House of Commons (pictured)
There is a great AI divide in Silicon Valley. Brilliant minds are split over the progress of these systems: some say AI will improve humanity, while others fear the technology will destroy it