Thread: Randomness 3
  #1122  
06-04-2020, 10:01 PM
Mazryonh
Senior Member
 
Join Date: Feb 2010
Posts: 290

Quote:
Originally Posted by MT2008
A lot has changed since then - particularly when it comes to machine learning and sensor capabilities.
It doesn't change the fundamental issues, though: the blurring of accountability when things go wrong, the possibility that autonomous hardware ends up being more expensive to run and maintain than human pilots, and its susceptibility to spoofing and hacking. Furthermore, algorithms have no sense of context (and AI routines are algorithms); context comes from a lifetime of experience. I would hate to be in a "the Taliban don't wave" situation, where an AI vehicle with autonomous decision-making capability mistakes me or the people around me for a legitimate target and opens fire despite anything we might do to the contrary.

The real-life "the Taliban don't wave" incident happened in Afghanistan, when an American Apache helicopter pilot mistook a Canadian-led ANA squad for a legitimate target. The Canadian squad leader ordered everyone in the unit to stand up and wave at the helicopter, which prevented a friendly-fire incident because "the Taliban don't wave."
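
To make the "no sense of context" point concrete, here's a toy Python sketch. The scorer, its feature names, and its weights are all hypothetical, invented purely for illustration; the point is only that a cue the designers never anticipated (like waving) has no pathway into the decision at all.

Code:
def threat_score(features: dict) -> float:
    # Hypothetical, hard-coded feature weights. "waving" was never a
    # feature this scorer was built with, so it contributes nothing.
    weights = {"armed": 0.6, "near_convoy_route": 0.3, "in_uniform": -0.4}
    return sum(weights.get(name, 0.0)
               for name, present in features.items() if present)

# Same score whether the squad waves or not:
print(threat_score({"armed": True, "near_convoy_route": True, "waving": True}))
print(threat_score({"armed": True, "near_convoy_route": True, "waving": False}))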

(And before anyone tells me that "electronic IFF would solve the problem of AIs distinguishing friend from foe," that's not foolproof either: IFF can fail due to jamming or technical problems.)
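
On that note, a second toy sketch; the 5% reply-loss rate and the "no squawk means engage" rule are my assumptions for the sake of argument, not any fielded system's logic. If an autonomous platform treats a missing IFF reply as evidence of hostility, every jammed or faulty transponder becomes a potential friendly-fire incident.

Code:
import random

random.seed(0)

JAM_OR_FAULT_RATE = 0.05  # assumed chance a friendly's IFF reply is lost

def friendly_iff_replies() -> bool:
    # A friendly transponder can fail to answer: jamming, damage, bad codes.
    return random.random() > JAM_OR_FAULT_RATE

def decide(got_reply: bool) -> str:
    # Naive autonomous rule: no valid squawk is treated as hostile.
    return "hold fire" if got_reply else "engage"

trials = 10_000
fratricides = sum(decide(friendly_iff_replies()) == "engage"
                  for _ in range(trials))
print(f"friendlies engaged: {fratricides}/{trials} "
      f"({100 * fratricides / trials:.2f}%)")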

Skynet or not, I'm personally not a fan of AI autonomously making the decision to engage enemies in war.

Last edited by Mazryonh; 06-04-2020 at 10:36 PM.