Anthropic can now track the bizarre inner workings of a large language model (www.technologyreview.com)
from otter@lemmy.ca to programming@programming.dev on 27 Mar 18:33
https://lemmy.ca/post/41355616

#programming


wedge@lemmy.one on 27 Mar 19:44

“Why does it keep looking at Furry porn…?”

A_A@lemmy.world on 27 Mar 19:46

just a taste:

(…) The team found that Claude used components independent of any language to answer a question or solve a problem and then picked a specific language when it replied. Ask it “What is the opposite of small?” in English, French, and Chinese and Claude will first use the language-neutral components related to “smallness” and “opposites” to come up with an answer. (…)
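
To picture what "language-neutral components" means here, a minimal sketch (my own illustration, not Anthropic's mechanism; the concept table and word lists are invented, and the real model learns these representations rather than looking them up):

```python
# Illustrative sketch of the "language-neutral concept" idea from the quote.
# Everything below is made up for demonstration purposes.

ANTONYMS = {"small": "large"}  # a language-neutral concept relation

SURFACE_FORMS = {
    "en": {"small": "small", "large": "large"},
    "fr": {"small": "petit", "large": "grand"},
    "zh": {"small": "小", "large": "大"},
}

def opposite(word_concept: str, reply_language: str) -> str:
    # Step 1: reason over language-neutral concepts ("smallness", "opposites").
    answer_concept = ANTONYMS[word_concept]
    # Step 2: only at the end, pick the surface form in the reply language.
    return SURFACE_FORMS[reply_language][answer_concept]

print(opposite("small", "en"))  # large
print(opposite("small", "fr"))  # grand
print(opposite("small", "zh"))  # 大
```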

Lojcs@lemm.ee on 27 Mar 20:01

Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95.
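
A toy rendering of those two parallel pathways, assuming a fuzzy magnitude estimate plus an exact last-digit track; this is a cartoon of the published finding, not the model's actual circuit:

```python
import random

def rough_pathway(a: int, b: int) -> int:
    # Pathway 1: only a ballpark estimate ("40ish + 60ish ... 92ish").
    return a + b + random.randint(-3, 3)

def last_digit_pathway(a: int, b: int) -> int:
    # Pathway 2: the last digits alone fix the answer's final digit (6 + 9 -> 5).
    return (a % 10 + b % 10) % 10

def add(a: int, b: int) -> int:
    rough = rough_pathway(a, b)
    last = last_digit_pathway(a, b)
    # Combine: pick the number with the right last digit nearest the estimate.
    base = rough // 10 * 10 + last
    return min((base - 10, base, base + 10), key=lambda c: abs(c - rough))

print(add(36, 59))  # 95, even though neither pathway alone produces it
```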

when Claude was given the prompt “A rhyming couplet: He saw a carrot and had to grab it,” the model responded, “His hunger was like a starving rabbit.” But using their microscope, they saw that Claude had already hit upon the word “rabbit” when it was processing “grab it.”

… [turned] off the placeholder component for “rabbitness.” Claude responded with “His hunger was a powerful habit.” And when the team replaced “rabbitness” with “greenness,” Claude responded with “freeing it from the garden’s green.”
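
That intervention reads like feature ablation and steering: treat "rabbitness" as a direction in the hidden state and remove or swap it before the final word is chosen. A hand-wavy sketch under that assumption, with invented vectors (Anthropic's replacement-model tooling is far more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
rabbitness = rng.standard_normal(d)  # hypothetical feature direction
greenness = rng.standard_normal(d)   # hypothetical feature direction

def ablate(hidden: np.ndarray, feature: np.ndarray) -> np.ndarray:
    # Remove the component of the hidden state along the feature direction.
    unit = feature / np.linalg.norm(feature)
    return hidden - (hidden @ unit) * unit

def steer(hidden: np.ndarray, feature: np.ndarray, strength: float = 4.0) -> np.ndarray:
    # Add the feature direction, pushing generation toward that concept.
    return hidden + strength * feature / np.linalg.norm(feature)

# A hidden state that currently "contains" a lot of rabbitness:
hidden = rng.standard_normal(d) + 3.0 * rabbitness / np.linalg.norm(rabbitness)

no_rabbit = ablate(hidden, rabbitness)       # -> "...a powerful habit"
green_instead = steer(no_rabbit, greenness)  # -> "...the garden's green"
```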

gsv@programming.dev on 27 Mar 21:27

For some reason I don’t find it very bizarre. I’d even speculate that a random human mind isn’t any less weird. Surely, the pathways of my thoughts are often very bizarre. 😅

oldfart@lemm.ee on 28 Mar 20:19

Cyber neurosurgeons are going to be a thing.

recursiveInsurgent@lemm.ee on 29 Mar 06:14

Interesting how these findings refute the assertion that LLMs are just predicting the next word. Sometimes they plan ahead.

Kissaki@programming.dev on 29 Mar 09:50

The official Anthropic post/announcement

Very interesting read

The math guessing game (lol), the bullshitting of “thinking out loud”, the ability to identify hidden (trained) biases, looking ahead when producing text, following multi-step reasoning, analyzing jailbreak prompts, and the analysis of anti-hallucination training and hallucinations

At the same time, we recognize the limitations of our current approach. Even on short, simple prompts, our method only captures a fraction of the total computation performed by Claude, and the mechanisms we do see may have some artifacts based on our tools which don’t reflect what is going on in the underlying model. It currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words.