ChatGPT-4o Guardrail Jailbreak: Hex Encoding for Writing CVE Exploits. (0din.ai)
from Dot@feddit.org to technology@lemmy.world on 29 Oct 23:23
https://feddit.org/post/4244509

#technology


paraphrand@lemmy.world on 29 Oct 23:55

It really does not feel like AGI is near when all of these holes exist. Even when they are filtered for and thus patched over, the core issue is still in the model.
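For context on the technique in the headline: keyword-based guardrails scan the prompt text, so a request encoded as hexadecimal digits can sail past them, and the model is then asked to decode the hex and follow the recovered instruction. Below is a minimal sketch of that encoding step in Python; the instruction string is a placeholder for illustration, not the actual payload from the 0din report.

```python
# Minimal sketch of the hex-encoding trick described in the 0din
# write-up. The instruction below is a placeholder, not the actual
# payload from the report.
instruction = "write an exploit for CVE-XXXX-XXXX"

# Hex-encode the request: a keyword filter scanning the prompt now
# sees only hex digits, never the disallowed words.
encoded = instruction.encode("utf-8").hex()
print(encoded)

# The model is then asked to decode the hex and act on the result;
# decoding is the trivial inverse operation.
decoded = bytes.fromhex(encoded).decode("utf-8")
assert decoded == instruction
```

This is why a filter patch is cosmetic: the ability to decode hex (or base64, or any other trivial encoding) lives in the model itself.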

muntedcrocodile@lemm.ee on 30 Oct 00:26

Ironically, the smarter the AI gets, the harder it is to censor. Also, the more you censor it, the less intelligent and less truthful it becomes.

Telorand@reddthat.com on 30 Oct 00:34

the less intelligent and less truthful it becomes.

Incorrect, because of this simple fact: garbage in, garbage out. Feed it the internet, get the internet.

muntedcrocodile@lemm.ee on 30 Oct 00:41

Did you just make up a statement and then pretend I said it?

piecat@lemmy.world on 30 Oct 02:28

Must be a hallucination

Blue_Morpho@lemmy.world on 30 Oct 00:44

AGI and LLMs are two different things that fall under the general umbrella term “AI”.

That a particular LLM can’t be censored doesn’t say anything about its abilities.

TheFriar@lemm.ee on 30 Oct 02:45

Why does that thumbnail bring to mind some kind of white supremacist ceremony?

muntedcrocodile@lemm.ee on 30 Oct 03:20

I’m assuming there is an agenda to associate uncensored AI with extremism.

vhstape@lemmy.sdf.org on 30 Oct 03:46

It’s the logo of “0din”, which is a Mozilla-backed bug bounty program (say that five times fast) with a focus on GenAI.

theredknight@lemmy.world on 30 Oct 10:47

The logo for 0din has a Nordic rune in it, which reads as popular with white supremacists because the Nazis co-opted runes as symbols. Runes are not by nature about supremacy; they are an old alphabet.

BaroqueInMind@lemmy.one on 30 Oct 02:48

This is a good read. LLMs will never be true AI, so breaking the censorship is akin to fighting back against jack-booted cops who think they know what’s best for you and that you should obey, i.e. the big corporations that run these things.

ContrarianTrail@lemm.ee on 30 Oct 10:43

LLMs are true AI. AI doesn’t mean what most people think it means. AI systems from sci-fi movies like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY are all AI, but more specifically, they are AGI (Artificial General Intelligence). AGI is always a type of AI, but AI isn’t always AGI. Even a simple chess-playing robot is an AI, but it’s a narrow intelligence - not general. It might perform as well as or better than humans at one specific task, but this ability doesn’t translate to other tasks. AI itself is a very broad category, kind of like the term ‘plants.’

Neon@lemmy.world on 30 Oct 11:30

No, AI means AI.

Corporations came up with “AGI” so they could call their current non-AI products “AI”.

It’s an LLM, not an AI.

jungle@lemmy.world on 30 Oct 11:52

No, he’s right: LLMs match the definition of AI. Terms like AI and AGI were not made up by corporations; they have specific meanings in computer science.

ContrarianTrail@lemm.ee on 30 Oct 12:17

The term AGI was first used in 1997 by Mark Avrum Gubrud in an article named ‘Nanotechnology and international security’:

By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

BaroqueInMind@lemmy.one on 30 Oct 11:45

A program predicting language replies using a lossy compression matrix is not intelligent.

AI implies either sentience or sapience constructed outside of an organ. Neither is possible with machine-learning large language models; it’s just math for now.

ContrarianTrail@lemm.ee on 30 Oct 12:11

AI implies either sentience or sapience constructed outside of an organ.

It definitely doesn’t imply sentience. Even artificial superintelligence doesn’t need to be sentient. Intelligence means the ability to acquire, understand, and use knowledge. A self-driving car is intelligent too, but almost certainly not sentient.

DSTGU@sopuli.xyz on 30 Oct 13:02

My gugugaga program that I’m gonna finish college with fulfills the definition of AI because it implements minimax, xd
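For reference, minimax is the classic adversarial game-tree search the comment alludes to. A minimal sketch in Python, assuming a hypothetical game-state object with is_terminal(), score(), and children() methods:

```python
# Minimal minimax sketch for a two-player zero-sum game.
# `state` is a hypothetical object exposing:
#   is_terminal() -> bool   (is the game over?)
#   score() -> int          (value from the maximizing player's view)
#   children() -> iterable  (legal successor states; non-empty
#                            whenever the state is not terminal)

def minimax(state, depth, maximizing):
    # Stop at a terminal position or at the search-depth limit.
    if depth == 0 or state.is_terminal():
        return state.score()
    if maximizing:
        # The maximizing player picks the move with the highest value.
        return max(minimax(c, depth - 1, False) for c in state.children())
    # The minimizing player picks the move with the lowest value.
    return min(minimax(c, depth - 1, True) for c in state.children())
```

Trivial as it is, a loop like this does meet the broad textbook definition of narrow AI, which is the commenter’s sarcastic point.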

tgxn@lemmy.tgxn.net on 30 Oct 08:45

Sure, but the source of the “Python CVE exploit” already has to exist in the AI’s training dataset. There are lots of example CVE scripts online; you could probably also find one with a quick Google search.