Will humans lose control of AI development to AI itself?

Background

Artificial intelligence (AI) is already all around us. However, most of the AI in common use would be described as “Narrow AI”: systems built for a single, limited task, such as the software in a thermostat or a smart TV, or a program that solves a specific class of mathematical problems.

However, AI is rapidly evolving and integrating into more of daily life. Its expanding functionality, ability to learn and lack of predictability will challenge humans’ ability to control it.

The field of AI was founded on the assumption that human intelligence “can be so precisely described that a machine can be made to simulate it”. This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; such issues have been explored in myth, fiction and philosophy since antiquity. Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals.

Can AI gain functionality of such sophistication that its creators will no longer be able to control it?

General

What is Artificial Intelligence? Learning, reasoning, problem solving, perception and language

Four advantages to Artificial Intelligence

AI will be smarter than humans in 5 years and smarter than “civilization” in 25

Out of control AI will destroy humanity

Yes

* Safe interruptibility and intentional forgetting can be coded in

* Humans can program “degrees of control”, dividing areas of work between humans and AI

* Time frames of control can be programmed into AI

* We can control AI, and make it do what we want, but that might be a problem in itself

Programming can contain threats from a super intelligent AI

Through “safe interruptibility” and intentional forgetting. This is the complexity that the EPFL researchers aim to resolve through “safe interruptibility”: a method that lets humans interrupt AI learning processes when necessary, without the interruptions distorting what the system learns.
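The idea behind safe interruptibility can be illustrated with a toy example. The sketch below is not the EPFL researchers’ actual method or code; the environment, names and parameters are all hypothetical. It shows the core point: an off-policy learner such as Q-learning can have its actions overridden by a human without the overrides biasing the values it learns, because the learning target uses the best next action rather than the action the interruption forced.

```python
import random

# Toy corridor: states 0..4, goal at state 4; actions move left (-1) or right (+1).
N_STATES = 5
ACTIONS = [-1, +1]
GOAL = N_STATES - 1

def step(state, action):
    """Deterministic dynamics: reward 1.0 on reaching the goal, else 0."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(interrupt_prob=0.3, episodes=5000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Q-learning while a simulated human frequently interrupts the agent."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def greedy(state):
        best = max(q[(state, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if q[(state, a)] == best])

    for _ in range(episodes):
        state, done, steps = 0, False, 0
        while not done and steps < 50:
            action = rng.choice(ACTIONS) if rng.random() < eps else greedy(state)
            if rng.random() < interrupt_prob:
                action = -1  # human override: force the "safe" action (away from goal)
            nxt, reward, done = step(state, action)
            # Off-policy update: the target uses the best *next* action, so the
            # learned values are unaffected by which action was forced on the agent.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state, steps = nxt, steps + 1
    return q

q = train()
# Despite being pushed left 30% of the time, the learned greedy policy still
# heads right toward the goal from every non-goal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)
```

The design point is that the agent neither learns to resist the interruptions nor treats them as part of the environment; its value estimates converge as if the overrides had never happened.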

4 strategies to ensure human control of AI

No

* Future AI, also known as Strong AI, can’t be controlled by human programming because we can’t predict what it will do. Containment algorithms won’t work.

* AI will be able to escape quarantine and access the internet

* AI development is too fragmented and out of sight, preventing a unified approach to controlling AI

* We won’t be able to detect AI superintelligence, preventing containment

AI is not predictable and may be impossible to control

AI development is a fragmented, out-of-control process, which prevents humans from containing it.

Researchers concluded (September 2022) that it will be impossible to contain AI

52 experts explain how they think AI might escape our control

The best study shows superintelligent AI cannot be contained by humans. The study, “Superintelligence cannot be contained: Lessons from Computability Theory”, was published in the Journal of Artificial Intelligence Research.