I actually have wanted to try this out to see how accurate it can actually be. I already have conversations with myself, so I can truly compare the reality to an LLM. It's actually weird that even the supposed "unlocked and able to generate anything from anything" tools I've found still don't let you just input direct forum posts and shit to use. Though, I can totally understand why; most people probably aren't gonna use it with their own shit, but someone else's.
It wouldn't be that hard to write a tool to pull from all of your social media accounts, Twitter, Facebook, Lemmy, I assume Bluesky has an API, and just dump it into a big text file that you could give to an AI. I assume it understands CSV format.
Obviously Reddit is out of the question.
But I would only be comfortable giving an AI that much information about myself if it was 100% running locally. Which is probably why no one's done it yet.
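A rough sketch of what that aggregator could look like. Everything here is hypothetical: the fetcher functions just return canned rows standing in for real API calls (e.g. Bluesky's atproto endpoints, a Lemmy instance's API), and the whole thing runs locally, writing a single CSV an LLM could ingest.

```python
import csv

# Hypothetical fetchers: a real tool would call each platform's API here.
# These return canned (source, date, text) rows so the sketch is self-contained.
def fetch_bluesky_posts():
    return [("bluesky", "2024-01-05", "example skeet")]

def fetch_lemmy_posts():
    return [("lemmy", "2024-02-11", "example comment")]

def dump_posts_to_csv(path, fetchers):
    """Aggregate posts from every source into one CSV file."""
    rows = [row for fetch in fetchers for row in fetch()]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["source", "date", "text"])  # header an LLM can key on
        writer.writerows(rows)
    return rows

rows = dump_posts_to_csv("my_posts.csv", [fetch_bluesky_posts, fetch_lemmy_posts])
```

Swapping a stub for a real fetcher is the only per-platform work; the CSV format stays the same regardless of source.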
Telorand@reddthat.com
on 22 Nov 13:27
Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.
Okay, but can it embody my traumas?
moistclump@lemmy.world
on 22 Nov 15:04
Maybe some of the symptoms of the traumas that you exhibited during the interview.
conciselyverbose@sh.itjust.works
on 22 Nov 22:11
lol because people always behave in ways consistent with how they tell an interviewer they will.
If I can make a version of me that likes its job, then that will be a deviation from the template that's worth having. Assuming this technology actually worked, an exact digital replica of me isn't particularly useful; it's just going to automate the things I was going to do anyway, but if I was going to do them anyway they aren't really a hassle worth automating.
What I want is a version that has all of my knowledge, but infinitely more patience, and for preference one that actually understands tax law. I need an AI to do the things I hate doing, but I can see the advantage of customizing it with my values to a certain extent.
nimble@lemmy.blahaj.zone
on 22 Nov 13:47
But is it convincing enough to attend meetings for me
Telorand@reddthat.com
on 22 Nov 15:35
Asking the important questions
BertramDitore@lemm.ee
on 22 Nov 15:47
Ugh, someone recently sent me LLM-generated meeting notes for a meeting that only a couple colleagues were able to attend. They sucked, a lot. Got a number of things completely wrong, duplicated the same random note a bunch of times in bullet lists, and just didn't seem to reflect what was actually talked about. Luckily a coworker took their own real notes, and comparing them made it clear that LLMs are definitely causing more harm than good. It's not exactly the same thing, but no, we're not there yet.
nimble@lemmy.blahaj.zone
on 22 Nov 15:55
Wait until you hear about doctors using AI to summarize visits
Imgonnatrythis@sh.itjust.works
on 22 Nov 18:09
Imgonnatrythis@sh.itjust.works
on 22 Nov 21:53
Have you seen current doctor visit note summaries?
The bar is pretty low. A lot of these are made with conventional dictation software that has no sense of context when it misunderstands. I agree the consequences can be worse if the context is wrong, but I would guess a well-programmed AI could summarize better on average than most visit summaries do currently. With this sort of thing there will be errors, but let's not forget that there already ARE errors.
I hosted a meeting with about a dozen attendees recently, and one attendee silently joined with an AI note-taking bot and immediately went AFK.
It was in for about 5 minutes before we clocked it and kicked it out. It automatically circulated its notes. Amusingly, 95% of them were "is that a chat bot?", "Steve, are you actually on this meeting?", "I'm going to kick Steve out in a minute if nobody can get him to answer", etc. But even with that level of asinine, low-impact chat, it still managed to garble them to the point of being barely legible.
You just have to love that these assholes are so lazy that they first use an LLM to write their work, but then are also too lazy to quickly proofread what the LLM spat out.
People caught doing this should be fired on the spot; you're not doing your job.
ThePowerOfGeek@lemmy.world
on 22 Nov 18:38
Or family reunions.
…Asking for a friend.
stringere@sh.itjust.works
on 22 Nov 22:35
What does an AI look like in jorts?
ElectroLisa@lemmy.blahaj.zone
on 22 Nov 19:42
Great. Now I can see first hand how annoying I am
I like you.
I'm pretty sure we already explored this timeline in a Black Mirror episode
I was thinking more like White Christmas, but yeah.
What about it?
All the above would apply to doctor visit notes. Would you find that helpful?
Plus, they can hallucinate phrases or entire sentences
Also: what a dick move.
Just one or all of them? /s
I'm calling BS on this one. "Values and preferences" are such a far cry from actual personality that it's meaningless. Just more LLM hype.