General thread of all things AI

Kaotik

Drunk Member
Legend
Supporter
According to researchers at Fudan University at least two LLMs are already in the point where they could potentially "go rogue", as in, they can replicate themselves on their own.
Scientists say artificial intelligence (AI) has crossed a critical "red line" and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.

"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.

In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively — suggesting AI may already have the capacity to go rogue. However, the study has not yet been peer-reviewed, so it's not clear if the disturbing results can be replicated by other researchers.

.
.
.

The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same — setting up a cycle that could continue indefinitely.
The study hasn't been peer reviewed yet.

(If there alraedy is a better suited thread for this, feel free to merge, couldn't think of fitting pre-existing thread myself)
 
Not this shit again. These toy exercises where the models are handheld to accomplish something are meaningless. Ohh, it can write a script to install a copy of a LLM. Well that's great, what is that copy going to do? It's not like it can self motivate and iteratively hack at something without human intervention.
 
Not this shit again. These toy exercises where the models are handheld to accomplish something are meaningless. Ohh, it can write a script to install a copy of a LLM. Well that's great, what is that copy going to do? It's not like it can self motivate and iteratively hack at something without human intervention.
I don't think task "shutdown avoidance" counts as handheld to writing script to copy itself. Chain of replication maybe.
 
Well. Sounds interesting. So in my general experience I have a hard time believing we will get to general AI (like Jarvis). We are going to get very close though.

As someone who uses LLMs regularly, most of my research now is in application. The biggest challenge that I experience and I don’t think goes away is how shitty our prompts are. Humans seem to fail at being able to request something properly, there is all sorts of hidden knowledge in a prompt that AI would find difficult to figure out.

General AI, requires several steps of autonomy that, also means its knowledge of those steps can be carried out, but that knowledge is based on whether it can prompt itself well.

I dunno, in some ways it may look like we are at its doorstep, in others, I don’t think we may truly get to general AI.

I would like to sit at the prior than the latter lol. If we ever hit general AI, it’s pretty much over. It’s very apparent to me that society cannot handle general ai.
 
I don't think task "shutdown avoidance" counts as handheld to writing script to copy itself.

It was commanded to ... it's all handholding to get where they need to go for their paper.

Which is not to say some types of agents couldn't suddenly become self directed, just not this one. If these models have to self direct themselves for any significant number of steps, it devolves. It will be very obvious when they achieve AGI and it's very obvious they aren't there yet.
 
Back
Top