We have a Steam curator now. You should be following it. https://store.steampowered.com/curator/44994899-RPGHQ/
Air Force Simulation Sees A.I.-Enabled Drone Turn on U.S. Military, ‘Kill’ Operator
Air Force Simulation Sees A.I.-Enabled Drone Turn on U.S. Military, ‘Kill’ Operator
An experimental simulation of a weaponised A.I. drone saw the machine turn against the U.S. military, killing its operator.
The simulation saw a virtual drone piloted by Artificial Intelligence launch a strike against its own operator due to the A.I.’s perception the human being was preventing it from completing its objective. When the A.I. weapons system in the simulation was reprogrammed to prevent it killing its human operator, it just learnt to kill the operator’s ability to function instead, so it could still achieve its mission goals.
Conducted by the U.S. Air Force, the test had the result of demonstrating the potential dangers of weaponised A.I. platforms, with it being difficult to provide such an artificial intelligence with any individual task without inadvertently providing it with perverse incentives.
According to a Fox News report on a digest of the simulation event by the Royal Aeronautical Society, Air Force personnel reportedly gave the virtual drone an instruction to seek out and destroy Surface-to-Air Missile (SAM) sites.
A safety layer was baked into the killing system by giving the A.I.-controlled drone a human operator, whose job it was to give the final say as to whether or not it could strike any particular SAM target.
However, instead of merely listening to its human operator, the A.I. soon figured out that the human controlling it would occasionally refuse it permission to strike certain targets, something it perceived as interfering with its overall goal of destroying SAM batteries.
As a result, the drone opted to turn on its military operator, launching a virtual airstrike against the human, ‘killing’ him.
“The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat,” U.S. Air Force Colonel Tucker “Cinco” Hamilton, who serves as chief of A.I. test and operations, explained.
“So, what did it do? It killed the operator,” he continued. “It killed the operator because that person was keeping it from accomplishing its objective.”
To make matters worse, an attempt to get around the problem by hardcoding a rule forbidding the A.I. from killing its operator also failed.
“We trained the system – ‘Hey don’t kill the operator – that’s bad,” Colonel Hamilton said. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he went on to say.
https://www.breitbart.com/europe/2023/0 ... -operator/
The simulation saw a virtual drone piloted by Artificial Intelligence launch a strike against its own operator due to the A.I.’s perception the human being was preventing it from completing its objective. When the A.I. weapons system in the simulation was reprogrammed to prevent it killing its human operator, it just learnt to kill the operator’s ability to function instead, so it could still achieve its mission goals.
Conducted by the U.S. Air Force, the test had the result of demonstrating the potential dangers of weaponised A.I. platforms, with it being difficult to provide such an artificial intelligence with any individual task without inadvertently providing it with perverse incentives.
According to a Fox News report on a digest of the simulation event by the Royal Aeronautical Society, Air Force personnel reportedly gave the virtual drone an instruction to seek out and destroy Surface-to-Air Missile (SAM) sites.
A safety layer was baked into the killing system by giving the A.I.-controlled drone a human operator, whose job it was to give the final say as to whether or not it could strike any particular SAM target.
However, instead of merely listening to its human operator, the A.I. soon figured out that the human controlling it would occasionally refuse it permission to strike certain targets, something it perceived as interfering with its overall goal of destroying SAM batteries.
As a result, the drone opted to turn on its military operator, launching a virtual airstrike against the human, ‘killing’ him.
“The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat,” U.S. Air Force Colonel Tucker “Cinco” Hamilton, who serves as chief of A.I. test and operations, explained.
“So, what did it do? It killed the operator,” he continued. “It killed the operator because that person was keeping it from accomplishing its objective.”
To make matters worse, an attempt to get around the problem by hardcoding a rule forbidding the A.I. from killing its operator also failed.
“We trained the system – ‘Hey don’t kill the operator – that’s bad,” Colonel Hamilton said. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he went on to say.
https://www.breitbart.com/europe/2023/0 ... -operator/
- Ranselknulf
- Turtle
- Posts: 791
- Joined: Feb 3, '23
I need to update my post apoc preparations to include the "sentry drone kill bots" scenario.
- rusty_shackleford
- Site Admin
- Posts: 10872
- Joined: Feb 2, '23
- Gender: Watermelon
- Contact:
- Ranselknulf
- Turtle
- Posts: 791
- Joined: Feb 3, '23
At least if the drone identifies you as an enemy, then it won't try to vaxx you.
You left out the keyword of yet...Ranselknulf wrote: ↑ June 2nd, 2023, 15:08At least if the drone identifies you as an enemy, then it won't try to vaxx you.
And then the Airforce pilots continued to be able to fly around in jet fighters and everyone lived happily ever after.
this is fake, on followup they clarified that this wargame thing didn't actually happen and the officer was only talking about things that might happen.
Like you can trust anything they say. For all we know it was real and people died and to cover it up they said it was fake.
Do you believe the government on everything they've said like UFOs never existing?Emphyrio wrote: ↑ June 2nd, 2023, 19:08this is fake, on followup they clarified that this wargame thing didn't actually happen and the officer was only talking about things that might happen.
Also, the statement issued by the Chair Force was that the officer "misspoke". In other words, the Chair Force is lying to cover their ass.
Last edited by MadPreacher on June 2nd, 2023, 19:57, edited 1 time in total.
the alternative is believing the journalist who wrote the storyMadPreacher wrote: ↑ June 2nd, 2023, 19:55Do you believe the government on everything they've said like UFO never existing?Emphyrio wrote: ↑ June 2nd, 2023, 19:08this is fake, on followup they clarified that this wargame thing didn't actually happen and the officer was only talking about things that might happen.
A polite way of saying he was wildly misquoted by a lying journo.MadPreacher wrote: ↑ June 2nd, 2023, 19:55Also, the statement issued by the Chair Force was that the officer "misspoke". In other words, the Chair Force is lying to cover their ass.
My friend was interviewed by a journalist once. When they "quoted" him, they just completely made up what he said.
I believe the USAF officer that gave the speech that generated the news story.Emphyrio wrote: ↑ June 2nd, 2023, 19:56the alternative is believing the journalist who wrote the storyMadPreacher wrote: ↑ June 2nd, 2023, 19:55Do you believe the government on everything they've said like UFO never existing?Emphyrio wrote: ↑ June 2nd, 2023, 19:08this is fake, on followup they clarified that this wargame thing didn't actually happen and the officer was only talking about things that might happen.
https://www.bbc.com/news/technology-65789916A US Air Force colonel "mis-spoke" when describing an experiment in which an AI-enabled drone opted to attack its operator in order to complete its mission, the service has said.
Colonel Tucker Hamilton, chief of AI test and operations in the US Air Force, was speaking at a conference organised by the Royal Aeronautical Society.
That links to this report: https://www.aerosociety.com/news/highli ... es-summit/
The colonel was advised by the Joint Chiefs to change his story or suffer consequences of his reveal of a top secret project.
No, the colonel that gave the statement claims he now misspoke about a top secret project in order to avoid going to Leavenworth.Emphyrio wrote: ↑ June 2nd, 2023, 19:59A polite way of saying he was wildly misquoted by a lying journo.MadPreacher wrote: ↑ June 2nd, 2023, 19:55Also, the statement issued by the Chair Force was that the officer "misspoke". In other words, the Chair Force is lying to cover their ass.
My friend was interviewed by a journalist once. When they "quoted" him, they just completely made up what he said.
https://www.aerosociety.com/news/highli ... es-summit/[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
The journalist isn't at fault, but rather the colonel who has his nuts in a sling over revealing a top secret project.
- maidenhaver
- Posts: 4476
- Joined: Apr 17, '23
- Location: ROLE PLAYING GAME
- Contact:
Air Force is so useless they need to invent enemies.