#72121
Strata Ken
Flatchatter

    The risks of AI are very much overrated, at least for the next few decades. AI is essentially about building systems that make decisions, and they do this by looking at a large number of example decisions and trying to reproduce the correct result. As such, they are fairly limited in what they can do.
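    The idea of "learning from a large number of past decisions" can be sketched in a few lines. This is a toy illustration only (a one-nearest-neighbour rule over made-up data), not any particular real system; all names and numbers are invented.

```python
def train(examples):
    """'Training' here just stores the labelled example decisions."""
    return examples

def decide(model, case):
    """Copy the decision of the most similar past example (1-nearest neighbour)."""
    nearest = min(model, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], case)))
    return nearest[1]

# Past decisions: (observed features, correct decision) -- invented data
past = [((0.9, 0.1), "hostile"), ((0.1, 0.8), "friendly"), ((0.8, 0.3), "hostile")]
model = train(past)
print(decide(model, (0.85, 0.2)))  # a new case close to past "hostile" examples
```

    The point of the sketch is the limitation in the paragraph above: the system can only echo the kinds of decisions it has already seen, so a case unlike any past example gets whatever label happens to be nearest.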

    There are risks, and a big one is with the military. We can design AI systems that determine whether someone or something is hostile and then choose a strategy to kill it. These will likely become more complex. Currently the major nations have agreed that any lethal decisions should be approved by humans, but I expect that will be watered down.

    As an example, a tank might detect an incoming shell. It takes action to shoot it down automatically, because there isn't time for human approval. It then calculates the location the shell was fired from and requests approval to return fire. At some future time there will be multiple attackers and everything will be automatic.
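    The split described above (automatic defence, human-approved offence) can be sketched as a simple decision rule. This is a hypothetical illustration of the policy, not real fire-control code; the function names, the `human_approves` callback, and the grid reference are all invented.

```python
def handle_incoming(threat, human_approves):
    """Defensive intercept is automatic; return fire waits for a human."""
    log = []
    # No time to ask a human about an incoming shell: intercept immediately.
    log.append(f"intercepted shell from {threat['origin']}")
    # Offensive response still requires human approval under current doctrine.
    if human_approves(threat["origin"]):
        log.append(f"returned fire at {threat['origin']}")
    else:
        log.append("return fire withheld, awaiting operator")
    return log

print(handle_incoming({"origin": "grid 41S"}, human_approves=lambda origin: False))
```

    The "watering down" I expect amounts to replacing that `human_approves` check with an automatic one once there are too many simultaneous attackers for an operator to keep up.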

    It isn’t going to read a book on ethics and decide that we are killing the planet and should end human existence.