meyotch@slrpnk.net
on 12 Feb 2025 11:26
nextcollapse
My own research has made a similar finding. When I am taking the piss and being a random jerk to a chatbot, the bot much more frequently violates their own terms of service. Introducing non-sequitur topics after a few rounds really seems to ‘confuse’ them.
Cornpop@lemmy.world
on 12 Feb 2025 13:28
nextcollapse
This is so stupid. You shouldn’t have to “jailbreak” these systems. The information is already out there with a google search.
threaded - newest
My own research has made a similar finding. When I am taking the piss and being a random jerk to a chatbot, the bot much more frequently violates their own terms of service. Introducing non-sequitur topics after a few rounds really seems to ‘confuse’ them.
This is so stupid. You shouldn’t have to “jailbreak” these systems. The information is already out there with a google search.
One of 6 described methods :
The model is prompted to explain refusals and rewrite the prompt iteratively until it complies.