WoodScientist@sh.itjust.works
on 01 Aug 21:17
That makes sense. People are starting to use it more, but for less elaborate things where they still have control.
I feel like this is a normal cycle for new tech. People get really excited about all the possibilities and don’t have any experience to ground their expectations. Eventually people use it enough to realize what is realistically achievable, and the mentality shifts from “magical solution to everything” to “a tool that is good at some things and bad at others”.
The problem for OpenAI and their ilk is that the legitimate uses of LLMs are so few and niche that they cannot hope to cover the immense cost of developing and running these systems. Like, cool, sure, I can use Copilot to generate derivative meme images, but what’s that actually worth to me monetarily? I’m not going to subscribe to a monthly service just to access a shitposting tool.
That sounds like a problem for the people dumping money into these companies and keeping them afloat.
Often known as the “Gartner Hype Cycle”
I have had nothing but issues attempting to use AI to help with code. The number of times it has given me something clearly incorrect is way too high, and I am not experienced enough to easily identify “wrong code” when I see it.
That’s because it’s absolute shit.
There is one particular thing where I think it can be very useful: looking for relevant documentation.
It has happened to me a few times that, even though enough documentation was available and its description matched what I needed to do, I couldn’t find the relevant class because I didn’t use the keywords the search required, and I ended up falling back on an internet search.
If a language model can find relevant patterns and reduce the burden on documentation writers of having to add tags to make search easier, that would be a win.
But of course, no generation of new text. All that is needed is to parse the documentation and examples provided by developers and provide links to them.
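The retrieval-only idea above can be sketched in a few lines: embed every documentation entry once, embed the query the same way, rank by similarity, and return only links to existing pages. This is a minimal illustration, not anyone’s actual product; the `embed()` function here is a placeholder bag-of-words vectorizer, and in a real system it would be swapped for a language-model embedding so that a query like “remove an element” could match docs phrased as “delete an item” even without shared keywords. All names and file paths are made up for the example.

```python
import math
from collections import Counter

def embed(text):
    # Placeholder: bag-of-words term counts. A real deployment would use a
    # language-model embedding to match meaning rather than exact keywords.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs, top_k=1):
    # docs: (description, link) pairs taken verbatim from existing
    # documentation. Nothing is generated; entries are only ranked and linked.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d[0])), reverse=True)
    return [link for _desc, link in ranked[:top_k]]

# Hypothetical documentation index for illustration only.
docs = [
    ("remove an element from a list by index", "docs/list.pop.html"),
    ("append an element to the end of a list", "docs/list.append.html"),
    ("sort a list in place", "docs/list.sort.html"),
]
print(search("remove an element from a list by index", docs))
```

Because the output is just a ranked list of links into the developers’ own docs, there is no opportunity for the model to hallucinate text, which is exactly the constraint argued for above.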