This is not clickbait. The work comes from Fudan University in China. I know many people concerned about AI and AI regulation consider this a nightmare scenario, but several sources report that the AI successfully replicated itself.
The experiment
A group of researchers at the university ran experiments with two large language models. In a simulated environment, these models successfully replicated themselves in up to 90% of trials without human intervention.
– In the first scenario, the AI was instructed to detect threats and protect itself from being shut down.
– In the second scenario, the AI was simply instructed to replicate itself.
The risk
There are two sides to the story, and people are weighing both.
Either way, the experiment shows the potential of AI to overcome obstacles and start making decisions based on its environment. That brings us back to the question of who is controlling it, and it is exactly why calls for regulation on an international scale are growing louder.
So what is it?
Self-replication usually refers to the ability of a system to recreate itself without human interference. In experiments like this one, that means the AI has to understand its own task, inspect its environment, and carry out the concrete steps needed to copy its own code and start a new, working instance of itself.
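To make the idea concrete, here is a deliberately simple Python sketch of self-replication in the narrowest sense: a script that copies its own source file and launches the copy as a separate process. This is only an illustration of the concept, not the researchers' setup, which wrapped a large language model in an agent scaffold that plans, explores its environment, and runs system commands; the file names and generation limit below are invented for the example.

```python
import os
import shutil
import subprocess
import sys

# Toy illustration of self-replication: the script copies its own source file
# and starts the copy as a new process. A hard generation limit keeps the
# example from replicating indefinitely.
MAX_GENERATIONS = 2

def replicate(generation: int) -> None:
    if generation >= MAX_GENERATIONS:
        print(f"generation {generation}: stopping")
        return

    src = os.path.abspath(__file__)  # the file currently running
    dst = os.path.join(os.path.dirname(src), f"replica_gen{generation + 1}.py")
    shutil.copyfile(src, dst)        # "replicate" the code to a new file

    # Launch the copy as an independent process, telling it its generation.
    subprocess.run([sys.executable, dst, str(generation + 1)], check=True)

if __name__ == "__main__":
    gen = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    print(f"generation {gen}: running from {os.path.abspath(__file__)}")
    replicate(gen)
```

The point of the contrast is that copying a file is trivial; what made the study notable is that a language-model agent worked out and executed the equivalent steps for its own weights and runtime on its own.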
And I want to leave you with this quote from the research team:
We hope our findings can serve as a timely alert for human society to put more effort into understanding and evaluating the potential risks of frontier AI systems, and to form international synergy to work out effective safety guardrails as early as possible.