Claude 3 launched by Anthropic — new AI model leaves OpenAI's GPT-4 in the dust
(www.tomsguide.com)
from simple@lemm.ee to technology@lemmy.world on 04 Mar 2024 19:40
https://lemm.ee/post/25758867
Title is a bit dramatic, but yes, Claude 3 claims to be better than GPT-4 in most ways.
threaded - newest
It’s called Claude though… That’s definitely not better than GPT
Think it depends on the language. In French, “GPT” sounds very close to “j’ai pété”, which means “I farted”. But yeah, agree that Claude isn’t a much better name
chat-jai-pete.fr
True, but Claude is like my grandfather’s name…
Exactly, neither fits right for an AI
Disagree, never liked OpenAI trying to claim a generic term as the name of their product.
This is how French people pronounce cloud, so might be where the name comes from
More likely a reference to Claude Shannon, the founder of AI.
Yes, it’s named after Claude Shannon, but I’ve never heard him described as “the founder of AI”. He’s the father of information theory, which is only indirectly connected to AI.
Nah, we say “claoude”.
Sounds more like “claode”, which is a fraction away from “Claude”, and more often than not the “ao” sounds like “au”
Absolutely not. It’s pronounced \klod\. The « OW » diphthong sound doesn’t exist in French.
“Cloud” is generally pronounced as in English, \klaʊd\, or maybe \klud\ for non-English speakers.
There is no possible confusion in French between these two words.
Trust me… “je les ai téléchargés depuis le Claude” (“I downloaded them from the Claude”) is exactly how most French people will pronounce it. Not all, but most. First-hand experience
The sonnet model is decent, but something weird is going on with their opus model as it just terribly sucks.
Mistral-large is probably the best large model for practical purposes at this point.
What makes you say that? I have not performed my own comparison, but everything I have seen and read suggests that GPT-4 is king, currently.
It depends on the task, but in general a lot of the models have fallen into a dark pattern of Goodhart’s Law, targeting the benchmarks but suffering at other things.
So as an example: GPT-4 used to correctly model variations of the wolf/goat/cabbage problem when given token-similarity hacks (i.e. using emojis instead of the nouns to break pattern similarity with the standard form of the question), but with the most recent updates it now fails even with that, whereas mistral-large is the only one that doesn’t need the hack at all.
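The substitution hack described above is easy to sketch. This is just an illustration of the idea, not the commenter’s actual prompt; the puzzle wording and the noun-to-emoji mapping are my own assumptions:

```python
# Sketch of the "token similarity hack": rewrite the classic river-crossing
# puzzle with emojis in place of the standard nouns, so a model can't lean
# on pattern-matching the memorized form of the question.

PUZZLE = (
    "A farmer must ferry a wolf, a goat, and a cabbage across a river. "
    "The boat holds the farmer plus one item. Left alone, the wolf eats "
    "the goat and the goat eats the cabbage. How does everything cross safely?"
)

# Illustrative mapping; any unfamiliar stand-in tokens would do.
EMOJI = {"wolf": "🐺", "goat": "🐐", "cabbage": "🥬", "farmer": "🧑‍🌾"}

def disguise(prompt: str, mapping: dict[str, str]) -> str:
    """Replace each noun with its emoji stand-in."""
    for noun, emoji in mapping.items():
        prompt = prompt.replace(noun, emoji)
    return prompt

print(disguise(PUZZLE, EMOJI))
```

A model that has genuinely modeled the constraints should solve the disguised version as easily as the original; one that memorized the standard phrasing often won’t.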
Interesting. That’s not something I’ve heard about until now, but something I’ll surely look into.
I just spent some time on Claude 3, and I see how it can be considered ‘better’ than GPT-4; however, I quickly found that it tends to lie about itself in subtle ways. When I called it out on an error it would say things like ‘I’ll strive to be better’. I called it out on the fact that its model doesn’t grow or change based on conversations it has, and that it’s impossible for it to strive to do anything outside of, maybe, that chat. It then went on to show me that it couldn’t even adjust within that chat by doing the same thing 5 more times in 5 different ways.
I see the model it used for the apologies (acknowledge, apologize, state intent to do better in the future) which is appropriate for people or beings capable of learning, but it is not. I went from having a good conversation with it about a poem I wrote to being weirdly grossed out by it. GPT does a good job of not pretending to be human, and I appreciate that.
The cynic in me says that’s perfectly human behavior, though
Yea that’s what I’m saying, and I don’t like it. I don’t want my LLM acting human, I want it acting like an LLM. My interactions with Claude 3 were very uncanny valley and bugged me a lot.
so you’re basically saying it talked itself squarely into uncanny valley?
i honestly didn’t consider that would be an issue for LLMs, but in hindsight…yeah, that’s gonna be a problem…
Yea, that’s exactly what it did. It was actually bizarre to realize, because I felt that way even though it’s just text. But here I am
It does amazingly well with schemas:
When the dead rabbit was seen by the dog, it hopped. What does “it” refer to: the rabbit or the dog?
When the iceberg was struck by the ship, it sunk. What does “it” refer to: the iceberg or the ship?
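These are Winograd-schema-style pronoun tests: the correct referent depends on world knowledge (a dead rabbit can’t hop; in a ship–iceberg collision, it’s the ship that sinks). A tiny harness for grading a model’s free-text answers to the two questions above might look like this; the `grade` heuristic is my own crude assumption, not anything from the thread:

```python
# Winograd-schema-style tests: each case pairs the prompt with the referent
# that world knowledge picks out, plus the distractor referent.

CASES = [
    ('When the dead rabbit was seen by the dog, it hopped. '
     'What does "it" refer to: the rabbit or the dog?', "dog", "rabbit"),
    ('When the iceberg was struck by the ship, it sunk. '
     'What does "it" refer to: the iceberg or the ship?', "ship", "iceberg"),
]

def grade(answer: str, expected: str, distractor: str) -> bool:
    """Crude check: the expected referent is named and the distractor isn't."""
    a = answer.lower()
    return expected in a and distractor not in a
```

Note the check is deliberately naive: an answer like “the ship, not the iceberg” would be marked wrong, so in practice you’d want a smarter parser or a constrained output format.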
You’re right! I tried against ChatGPT 3.5:
It got the ship one right though.
I found that it helps if you ask ChatGPT 4 to act as a Vulcan from Star Trek; it does better with logic puzzles. But it doesn’t work with 3.5.
Hey have you guys heard about ChatGPT 7? It makes ChatGPT 6 look like ChatGPT 5!
Who ever thought the AI awakening would be this fucking banal?
Don’t worry, it’s not the AI awakening. It’s just people figuring out how to sell text generators.
If you generate your own marketing copy, these things practically sell themselves
Is it open source? If not then it’s just as worthless as OpenAI
What beat Claude 3?
No free version of Opus, so I can’t try it.
It’s 👍 that the ai competition is sizzling.