I’m ready to deactivate it if it comes with any active component.
What do you mean by active component? Is processing the audio being played back to add subtitles active?
Sending the audio to an LLM in the sky. But I assume it would be local?
It says pretty explicitly that it only runs on the user’s machine.
Not sure where you are confused. If any part of this feature is active by default I will disable it.
Even non-AI subtitles are off by default, what exactly are you expecting to be on?
Find someone else to argue with.
This is the Internet, there’s no shortage of targets.
Exactly, this makes no sense. Which tool would force subtitles?
The way you wrote this, I thought you meant that if it required a cloud service you would turn it off. But now I think you’re just saying you wouldn’t use this feature.
I share the confusion over your definition of “active”. You got all defensive when someone asked, so now no one really knows what you meant.
It’s not every day that you see actually useful applications of AI, but this might be one.
While I hate the capitalist AI-apocalypse with a passion, I think this is great news for accessibility.
If it’s opt-in/opt-out then I am fine with that.
Yup. Easy uninstall otherwise.
Not only is it opt in, it’s also running fully locally on your machine.
Ohh, I assume it’s Mistral, because Llama uses an incompatible license.
It’s not an LLM, just a subtitles generator for video.
It’s Whisper.
OHHH okay
I wonder how powerful a device you need to run this live, à la YouTube auto-captions.
Does anyone have experience with this?
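For a feel for the resource trade-off, here’s a minimal local-transcription sketch using the open-source openai-whisper Python package. This is only an illustration of the general approach under the assumption of a Whisper-family model (as mentioned above), not VLC’s actual integration, and the file name is a placeholder:

```python
# Minimal local transcription sketch using the open-source
# openai-whisper package (pip install openai-whisper; needs ffmpeg).
# Illustrates the general approach, not VLC's actual integration.
import whisper

# Checkpoint size is the main knob for weaker hardware: "tiny" (~39M
# params) and "base" (~74M) run on modest CPUs; "small", "medium" and
# "large" need progressively more compute for better accuracy.
model = whisper.load_model("base")

# Everything runs on the local machine; nothing is uploaded anywhere.
result = model.transcribe("movie.mkv")  # placeholder file name
for seg in result["segments"]:
    print(f'[{seg["start"]:7.2f}s -> {seg["end"]:7.2f}s] {seg["text"]}')
```

The smaller checkpoints are what make "live, alongside playback" plausible on ordinary hardware; the larger ones are more accurate but noticeably slower than real time on CPU.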
My biggest issue with that is the amount of bloat a full local LLM implementation would add.
But if it’s an optional module that you can choose to add (or choose not to add) after the fact, I have no complaint.
Hold on to your butts!
I've seen some pretty piss poor implementations on streaming apps but if anyone can get it right it's VLC
“This means that they most likely went for lighter AI models that use fewer resources, so that they run smoothly without putting too much strain on the machine.”
Pretty good. Captions are one of the legitimate uses of “AI”.
If YouTube transcriptions are anything to go by, this won’t be great. But I’m optimistic.
YouTube transcriptions are surprisingly abysmal considering what technology Google already has at hand.
I find them pretty good for English spoken by native speakers. For anything else they’re horrible.
As long as they are talking about normal things and not playing D&D 😃
I actually disagree.
I’m consistently impressed whenever I have auto-subtitles turned on on Youtube.
I’m not impressed by the subtitles themselves (they’re just ok) but rather by how accessible it is. Like it being an option rather than it being a “tool for creators” or limited to premium or something
Or maybe YouTube has added so many dogshit features recently (like AI overviews, automatically adding info cards for anyone mentioned, and highlighting seemingly random words in comments to search for them out of context) that it makes me appreciate these things more lol
I’ve been messing with more recent open-source AI subtitling models via Subtitle Editor, which has a nice GUI for it. Quality is much better these days, at least for English. It still makes mistakes, but they’re on the level of “I misheard what they said and had little context for the conversation” or “the speaker has an accent which makes it hard to understand what they’re saying”, which is way better than most YouTube auto-transcriptions I’ve seen.
They’re helpful to my deaf ears; even when they’re wrong (50% of the words), they give me a solid idea of what is being said, together with what the audio sounds like.
With it, I get almost everything correct. Without it, I understand near to nothing.
This only goes for English spoken by Americans and sometimes London Britons, sadly; nothing else gets detected nearly as well, so I can’t enjoy YouTube in my native language (Dutch). But being able to consume English YouTube already helps a lot!
That is very true. It’s hard to find local subtitles to a lot of stuff. And the whole deaf angle :)
It is probably good that the open-source community is exploring this. However, I’m not sure the technology is ready (or ever will be, maybe), and it potentially undermines the labour-intensive activity of producing high-quality subtitling for accessibility.
I use them quite a lot, and I’ve noticed they really struggle on key things like regional/national dialects, subject-specific words, and situations where context would allow improvement (e.g. a word invented solely in the universe of the media). So it’s probably managing 95% accuracy, which is that danger zone where it’s good enough that no one checks it but bad enough that it can be really confusing if you are reliant on them. If we care about accessibility, we need to care about it being high quality.
While good-quality subtitles are essential, VLC can’t ensure that; it’s the responsibility of the production studio. AI subtitles in VLC are for those videos which don’t have any subs (which are a lot). The pushback shouldn’t be against VLC implementing AI, but against production studios replacing translators or transcribers with AI (like Crunchyroll tried last year).
Also, while transcribing and subtitle editing is a labour-intensive job, using AI to help the editors shouldn’t be discouraged; it can increase their productivity by automating repetitive tasks so that they can focus on better quality.
Agreed that the studios need to be held more accountable, and their usage of AI is more problematic than open-source, last-resort-type work. I have noticed a degradation of quality in the last five years on mainstream sources.
However, the existence of this last-resort tool will shift the dynamics of the “market” for the work that should be being done, even in the open-source community. There used to be an active community of people giving their voluntary labour to writing subtitles for those that lacked them (they may still be active, I don’t know). Are they as likely to do that if they think, oh well, it can be automatically done now?
The real challenge with the argument that it helps editors is the same as the challenge for automated driving. If something performs at 95%, you end up deskilled and paying less attention, which makes it more likely to miss the 5% that requires manual intervention. I think it also has a material impact on the wellbeing of those doing the labour.
To be clear, I’m not against this at all, but we need to think carefully about the structures and processes around it to ensure it leads to an improvement in quality, not just an improvement in quantity at the cost of quality.
Aaaaaand I drop VLC. Fucking shame.
Edit: “wtf i love ai now” - this thread
Why would you need to do that if it’s off by default and locally processed?
Because triggered and hate circlejerk.
Nuance is deader than Elvis.
uh huh-huh.
Is it off, or is it an optional module that doesn’t have to add bloat to my system if I don’t want to use it?
LLMs can take up a pretty big storage footprint.
Why don’t you ask them? They’re very responsive to their community of users.
I just took a spin through their news blog and changelog and didn’t see anything about it in the latest release, so it’s probably not out yet.
Cause we can no longer sit back and allow AI infiltration, AI indoctrination, AI subversion and the international AI conspiracy to sap and impurify all of our precious bodily fluids.
Braindead comment
You’re right! Your comment has added a tremendous amount of value to this thread.
You know AI can mean more than generative AI slop?
I wonder how good it is.
Does it translate from audio or from text?
Does it translate multiple languages? If a video has languages A, B, and C, does it translate all of them to X?
Does the user need to set the input language?
What would be actually cool is if it could translate foreign movies based on audio and add English subtitles to them.
Translating a transcription should be easy.
Yes, if the transcript feature works well for the original language.
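For what it’s worth, Whisper-family models already expose this as a built-in task: foreign speech in, English text out, with the source language auto-detected, so the user doesn’t have to set it. A sketch using the open-source openai-whisper package (file name hypothetical):

```python
import whisper

# Translation needs a multilingual checkpoint (one without the ".en" suffix).
model = whisper.load_model("small")

# task="translate" transcribes and translates to English in one pass;
# the spoken language is auto-detected unless pinned via language="fr" etc.
result = model.transcribe("foreign_film.mkv", task="translate")
print(result["language"])  # detected source language
print(result["text"])      # English rendering of the dialogue
```

One caveat: Whisper’s built-in translate task only targets English, so translating “all to X” for an arbitrary X would need a separate translation step on the transcript.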
Do one thing and do it well. Oh well…
VLC always had a ton of applications, network device playback, TV, streaming server, files, physical media, music player, effects, recording, AV format conversion, subtitles, plugins and so on.
“Do one thing well” is what gives you software like sendmail, which requires several other programs to be actually useful, all of which have to be configured separately to work together, with wildly different syntax.
And enables modular workflows and flexibility.
Meh, I’ll just stick with mpv.
How is mpv’s implementation? Does it work fairly well?
It’s a command-line multimedia player. Its implementation is ideal for minimalists, and easily understood by reading the man pages.
It works very well imo.
This is not by default a bad thing if it is something you only use when you decide to, when you don’t have other subtitles available, tbh. I hate AI slop too, but people just go into monkey-brain rage mode when they read “AI” and stop processing any further information.
I’d still always prefer human-translated subtitles if possible. However, right now I’m looking into translating an entire book via LLM because it would be the only way to read it, as it is not published in any language I speak. I speak English well enough, so I don’t really need subtitles; I just like to have them on so I won’t miss anything.
For English-language movies, I’d probably just watch without subtitles if those were AI, as I don’t really need them; they’re more of a nice-to-have in case I miss something. For languages I don’t understand, it might be good, although I wager it will be quite bad for less common languages.
There’s a difference between LLM slop (“write me an article about foo”) and using an LLM for something that’s actually useful (“listen to the audio from this file and transcribe everything that sounds like human speech”).
Exactly. I know someone who is really smart and works in machine learning, and when I listen to him in isolation, AI sounds like an actually useful thing. Most people are just not smart like that, and most applications of AI are not very useful.
One of the things I often think is that AI makes it very easy and fast to do things that shouldn’t be done, things that would previously have been too much effort or craft for some people; now they can easily make a website for whatever grift they are pushing.
The whole knee jerk reaction against anything AI related is tiresome and utterly irrational. This seems like a perfectly legitimate use of technology. If I have a movie in a language I don’t know and I can’t find subs for it, then I’d much rather have AI subs than nothing at all.
Yea. Sometimes I just can’t process what they are saying because of my adhd ass and subs really help.
I’m curious: what makes what VLC is doing qualify as artificial intelligence instead of just an automated transcription plugin?
Automated transcription software has been around for decades. I totally understand getting in on the AI hype train, but I guess I’m confused as to whether software from years past like Dragon NaturallySpeaking or Shazam are also LLMs that predate OpenAI, or whether the way those services identified things differs from how modern LLMs work.
Automated transcription is AI; neural networks are just better AI, sometimes.
LLMs are a very specific generative-AI subset. Not everything AI is an LLM; stuff like Shazam in particular is pretty traditional AI. It’s been around for a while already, and studied for even longer (even back in the 1960s we were already starting to have a field of study in this domain).
I have some older foreign films I’d like to watch that have like 0 subtitles, seems useful.
Pandora’s Box is already open. Might as well make use of it.
Oh so that wasn’t a joke from their booth.
This seems really out of place, but locally run auto-subtitles from ethically sourced AI would be great.
It’s just that there’s two very big conditions in that sentence there.
Which AI is the ethically sourced one?
There are a number of open weight open source models out there with all their data sourced from the public domain. Look up BLOOM and Falcon. There are others.
JetBrains’ AI code suggestions were only trained on code where authors gave explicit permission for it, but that’s the only one I know off the top of my head. Most chat-oriented LLMs (ChatGPT, Claude, Gemini…) were almost certainly trained using corporate piracy.
Not against this feature, but this quote made me laugh:
As if MTL will get anywhere near the nuance of a properly made human translation.
Personally, I would be happy even if it didn’t translate but could give a half-decent transcription of, at least, English voice into English text. I prefer having subtitles, even when I speak the language, because it helps in noisy environments and/or when the characters mumble or have weird accents.
However, even that would likely be difficult with a lightweight model. Even big companies like Google often struggle with their autogenerated subtitles. When there’s some very context-specific terminology, or uncommon names, it fumbles. And adding translation to an already incorrect transcript multiplies the nonsense, even if the translation were technically correct.
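One partial mitigation, at least in Whisper-style models: the decoder accepts a seed prompt listing domain vocabulary, which nudges it toward spelling context-specific names correctly. A sketch with the openai-whisper package (the file and character names below are made up for illustration); it helps with recurring proper nouns, though it does nothing for the compounding-translation problem:

```python
import whisper

model = whisper.load_model("base")

# initial_prompt biases the decoder toward the listed vocabulary, so
# in-universe names and jargon are more likely to come out right.
result = model.transcribe(
    "episode01.mkv",  # hypothetical file
    initial_prompt="Fantasy series. Characters: Geralt, Yennefer, Kaer Morhen.",
)
```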
It won’t be better than human-translated ones, but better than no subtitles. I don’t think even humans can make subtitles correctly without knowing the context.
Honestly, if it can generate subtitle files it’ll be a huge benefit to people creating subtitles. It’s way easier to start with bad subs and fix them than it is to write from scratch.
Yeah true. Good feature anyways
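As a sketch of that workflow: segments from an open model like Whisper can be dumped straight to a standard .srt file for a human subtitler to correct and re-time. Illustrative only, not something VLC ships, and the paths are placeholders:

```python
import whisper

def srt_time(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamp SRT expects."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02}:{int(m):02}:{int(s):06.3f}".replace(".", ",")

model = whisper.load_model("base")
result = model.transcribe("movie.mkv")  # placeholder path

# Write a draft .srt that an editor can fix up instead of starting blank.
with open("movie.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```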