Yeah, this whole product is an exercise in doing things not because it’s more practical than what we already have but simply because we can… for $700 + $24/month… No thanks…
ramenshaman@lemmy.world
on 13 Nov 2023 05:15
I hate it. I don’t want to speak to people who are wearing one.
makingStuffForFun@lemmy.ml
on 13 Nov 2023 05:51
I feel like that with being photographed. Being in a room full of smartphones, and being associated with them in a large database somewhere. Having someone’s kids use TikTok on my home network. Etc etc. We are massively under corporate surveillance, and I despise it.
I’m going to actively avoid people doing so, and I feel like others will as well.
If someone walks up to you and they’re filming you on their phone, how would most people react?
nicetriangle@kbin.social
on 13 Nov 2023 11:17
Fortunately there's almost no chance of this thing getting off the ground
Eggyhead@kbin.social
on 13 Nov 2023 07:07
What if it looked like a Star Trek communicator?
ramenshaman@lemmy.world
on 13 Nov 2023 07:44
I… I’ve never really watched Star Trek. I think I could get into it but I’ve just never really sat down and watched it. I’ve probably only seen an episode or two in my whole life.
I’ll see myself out.
kalkulat@lemmy.world
on 13 Nov 2023 07:58
What if it looked like Iggy Pop? Zac Efron?
Eggyhead@kbin.social
on 13 Nov 2023 12:34
I'm just imagining people walking around with smart Iggy Pop badges and having AI conversations with them.
For some reason this is making me wish Jobs were still around.
I’d hope he’d have some subtle burns about this product … maybe about how we’re visual animals and you can’t just throw out decades of progress on screen tech and call that innovation. Maybe something about how we’ve got one voice but 10 fingers and two eyes.
I am calling bullshit on all of their points.
No screen, but a projector to project on your hand? WTF? So not only will it show far less information, it will also be a pain to use…
Voice commands? Meaning I will need to tell everyone around me what I am doing? Also calling bullshit on getting them to work in a busy area.
No it can’t, there are no ways to detect nutrition from a picture of a piece of food.
Privacy? Yeah, get back with me in 20 years when it has been proven to not sell, leak or have data stolen, then I’ll be impressed.
In conclusion, this is as real as the Skarp laser razor is.
morrowind@lemmy.ml
on 13 Nov 2023 06:11
No it can’t, there are no ways to detect nutrition from a picture of a piece of food.
Why not? At least to the extent that a human can. Some AI model recognizes the type of food, estimates the amount, and calculates nutrition based on that (hopefully verified with actual data, unlike in this demo).
All three of these functions already exist, all that remains is to put them together.
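Roughly like this sketch, where the classifier, the portion estimator and the nutrition table are all stand-ins I made up, not anything from the actual product:

```python
# Hypothetical sketch of the pipeline: recognize the food, estimate
# the portion, scale a generic nutrition entry. Every component here
# is a stand-in for illustration only.

NUTRITION_PER_100G = {  # generic reference values, not measured data
    "mashed potatoes": {"kcal": 88, "protein_g": 1.9, "carbs_g": 16.9},
    "apple": {"kcal": 52, "protein_g": 0.3, "carbs_g": 13.8},
}

def classify_food(image) -> str:
    """Stand-in for an image classifier (e.g. a fine-tuned CNN)."""
    raise NotImplementedError

def estimate_grams(image, label: str) -> float:
    """Stand-in for a portion-size estimator."""
    raise NotImplementedError

def estimate_nutrition(image) -> dict:
    label = classify_food(image)
    grams = estimate_grams(image, label)
    per_100g = NUTRITION_PER_100G[label]
    # Scale the generic per-100-g entry to the estimated portion.
    return {k: round(v * grams / 100.0, 1) for k, v in per_100g.items()}
```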
Ok, take any book and keep it closed: how many times do the letters s, q, d and r appear in the book?
There is no way to know without opening the book and counting. Sure, you could make some statistical analysis based on the language used, but that doesn’t take into account the font size and spacing, nor the number of pages.
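To make it concrete, a blind estimate can’t do better than something like this (the letter frequencies are typical published English-text values; the page and character counts are pure guesses, which is exactly the problem):

```python
# Blind estimate of letter counts in an unopened book. The frequencies
# are typical English-text values; the page and character counts are
# guesses, which is the whole problem.

ENGLISH_FREQ = {"s": 0.063, "q": 0.001, "d": 0.043, "r": 0.060}

guessed_pages = 300            # unknowable without opening the book
guessed_chars_per_page = 1800  # depends on font size and spacing

total_chars = guessed_pages * guessed_chars_per_page
for letter, freq in ENGLISH_FREQ.items():
    print(f"{letter}: ~{int(total_chars * freq):,}")
```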
Since the machine only has a photo to analyze, it can only give extremely generic results, making them effectively useless.
You would need to open the food up and actually analyze a part of the inside with something like a mass spectrometer to get any useful data.
KairuByte@lemmy.dbzer0.com
on 13 Nov 2023 07:51
I agree with you, but disagree with your reasoning.
If you take 1lb of potatoes, boil and mash them with no other add-ins, you can reasonably estimate the nutritional information through visual inspection alone, assuming you have enough reference to see there is about a pound of potatoes. There are many nutrition apps out there that utilize this, and it’s essentially just lopping off the extremes and averaging out the rest.
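Something like a trimmed mean, sketched here with invented database entries:

```python
# Sketch of "lop off the extremes and average the rest": a trimmed
# mean over whatever entries the database holds for a food. The
# kcal-per-100g entries below are invented for illustration.

def trimmed_mean(values, trim=0.2):
    """Drop the lowest and highest `trim` fraction, average the rest."""
    values = sorted(values)
    k = int(len(values) * trim)
    kept = values[k:len(values) - k] or values
    return sum(kept) / len(kept)

entries = [83, 88, 92, 106, 174, 210]  # the high ones are butter-heavy
print(trimmed_mean(entries))  # a generic, middle-of-the-road value
```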
The problem with this is, it’s impossible to accurately guess the recipe, and therefore the ingredients. Take the aforementioned mashed potatoes. You can’t accurately tell what variety of potato was used. Was water added back during the mashing? Butter? Cream cheese? Cheddar? Sour cream? There’s no way to tell visually, assuming uniform mashing, what is in the potatoes.
Not to mention, the pin sees two pieces of bread on top of each other… what is in the bread? Who the fuck knows!
I see what you mean, and while you raise a few excellent points, you seem to forget that a human looking at mashed potatoes has far more data than a computer looking at an image.
A human gets data about smell, temperature, texture and weight in addition to a simple visual impression.
This is why I picked a book/letter example, I wanted to reduce the variables available to a human to get closer to what a computer has from a photo.
adeoxymus@lemmy.world
on 13 Nov 2023 10:14
It needn’t be exact. A ballpark calorie/sugar estimate that’s 90% accurate would be sufficient.
There’s some research that suggests that’s possible: arxiv.org/pdf/2011.01082.pdf
But what use would it be then? You wouldn’t be able to compare one potato to another; both would register the same values.
adeoxymus@lemmy.world
on 13 Nov 2023 11:52
I think the use case is not people doing potato study but people that want to lose weight and need to know the amount of calories in the piece of cake that’s offered at the office cafeteria.
And that means the feature is useless; there are so many things in a cake that can’t be seen from a simple picture.
And if it is just a generic “cake” value, it will show incorrect data.
adeoxymus@lemmy.world
on 13 Nov 2023 13:44
The paper I showed earlier disagrees
webghost0101@sopuli.xyz
on 13 Nov 2023 13:54
You are correct, but you are speaking for yourself and not, for example, for the disabled community, who may lack senses or the capacity to calculate a result. While AI still improves its capabilities, they are the first to benefit.
I get what you are saying, but this specific device has no future.
knotthatone@lemmy.one
on 14 Nov 2023 04:06
It isn’t as magical (or accurate) as it looks. It’s just an extension of how various health tracking apps track food intake. There’s usually just one standard entry in the database for mashed potatoes based on whatever their data source thinks a reasonable default value should be. It doesn’t know if what you’re eating is mostly butter and cheese.
How useful a vague and not particularly accurate nutrition profile really can be is an open question, but it seems to be a popular feature for smartwatches.
MrScottyTay@sh.itjust.works
on 13 Nov 2023 14:51
If I had a big list or directory of a lot of well-known books and how many times s, q, d and r appear in them, then sure, I would be able to make a very good estimate of how many there are just from looking at the cover of the book, with a slight variance coming from the editing that version may have. Almost like how a specific type of food will likely have a certain amount of protein, fibre, etc., with slight variations based on how the cook prepared the food.
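In code terms it’s just a lookup table, built by someone who did open and count the books; the counts below are invented:

```python
# The "directory of well-known books" is just a lookup table built by
# someone who already opened and counted the books. Counts invented.

LETTER_COUNTS = {
    "Moby-Dick": {"s": 61000, "q": 1500, "d": 38000, "r": 52000},
    "Dracula": {"s": 53000, "q": 1200, "d": 33000, "r": 46000},
}

def estimate_counts(title):
    # Identify the book "from the cover" (its title) and look it up;
    # a different edition only introduces a small variance.
    return LETTER_COUNTS.get(title)

print(estimate_counts("Dracula"))
```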
But then you have opened the books, missing the point.
MrScottyTay@sh.itjust.works
on 13 Nov 2023 18:43
I didn’t open the book, someone else looked into the book and wrote it down for me to then read when needed, just like how someone would put in the data for a program to look it up when asked.
That changes nothing, you had the book inspected and got the data.
I think you’re missing what I’m trying to say.
No, you are hung up on trying to read the book without actually reading it.
That breaks the puzzle, since the device cannot analyze the inside of an item of food from a picture, and can only use highly generic data based on what it can assume from an image of the outside.
MrScottyTay@sh.itjust.works
on 14 Nov 2023 07:44
Re-read the first one I sent.
You can get a pretty good generalisation if you know what the food is. How do you think current apps for tracking nutrition work? All this will do is try to figure out what the food is from the picture rather than have the user type it in. With most foods you can tell what they are without “looking inside”. I’m pretty sure there are apps that do that now; this isn’t something new and groundbreaking.
And for nutrition you don’t need to be 100% exact when tracking it, because you can’t be 100% exact even if you do know the exact ingredients and how much of each one. Everything always has a variance. This method doesn’t need to be perfect to meet the needs of most people who will use it.
I agree that you can get a generic nutrition value from a photo of a simple fruit or vegetable, but a pie or cake contains so much stuff that looks identical to other stuff that any photographic analysis is useless.
So yes, you can get some idea of the nutrition of some foods, but the accuracy is way too low to be useful.
Usernameblankface@lemmy.world
on 13 Nov 2023 11:18
You have to talk aloud so that you’re included in the distinct lack of privacy this thing has.
Immersive_Matthew@sh.itjust.works
on 13 Nov 2023 06:02
Another tech that is going to be utterly irrelevant as AR glasses become a mainstream reality, like it or not.
Pons_Aelius@kbin.social
on 13 Nov 2023 06:34
as AR glasses become a mainstream reality, like it or not.
Just like the segway was going to change the way we build cities?
Or how cryptocurrencies will replace national fiat money?
Or how Google Glass has already become a mainstream reality?
Or how everyone will watch TV in 3D all the time?
Or how everyone will play games in VR?
Just because a technology is possible it does not mean it will become ubiquitous.
Eggyhead@kbin.social
on 13 Nov 2023 07:04
Hey, I still believe VR is gonna happen... someday... :'(
KairuByte@lemmy.dbzer0.com
on 13 Nov 2023 07:54
VR has already happened. Are you referring to “deep diving” or whatever word choice you want, where you don’t need to use controllers?
Eggyhead@kbin.social
on 13 Nov 2023 12:37
I don't know deep diving, I just want a standalone VR gaming headset that isn't owned by an invasive social-media corporation. Until I have something like that in my home, VR simply doesn't exist yet for me.
Immersive_Matthew@sh.itjust.works
on 13 Nov 2023 16:32
That device is coming in the next few years.
Eggyhead@kbin.social
on 13 Nov 2023 18:56
And I have full confidence that it’ll be one of the devices of all time!
KairuByte@lemmy.dbzer0.com
on 13 Nov 2023 19:10
So if I say… “I simply want an electric vehicle with 2000km range on a single charge, and a 20m charge time. Until that exists, EVs just don’t exist yet for me.”… that means EVs haven’t happened yet?
It’s well and good that you know what you want, but that doesn’t mean something hasn’t already been on the market for years that fits the bill for 99% of people.
Eggyhead@kbin.social
on 13 Nov 2023 19:55
Huh. I’m flattered that my opinion about what I want to spend my money on matters enough to necessitate such an astute clarification. Thank you. I’ll try to keep in mind what might fit the bill for 99% of people as I continue to disregard Meta and TikTok products moving forward.
Immersive_Matthew@sh.itjust.works
on 13 Nov 2023 16:31
I fully agree, but to think XR is not going to be the next computing platform is a little surprising. Only a little, as I remember a lot of people saying they would never carry a computer around with them, too.
VR is a fancy display tech for niche games. It’s at best another lane in the console war.
AR, actual AR with light fields, is not feasible. The tech will never get there. It’s just too computationally expensive, and the optics don’t pan out.
Immersive_Matthew@sh.itjust.works
on 14 Nov 2023 09:21
Never get there? What would make you say that in the face of constant technological progress? It is for sure going to come, and Meta and others already have light field display prototypes that they are trying to shrink down. Just a matter of time. VR/AR/MR are all going to be just one device, and games will be as big a segment on it as they are on flat screens, which is to say, only a small slice.
jackalope@lemmy.ml
on 14 Nov 2023 17:47
Technological progress is not generalizable in the abstract. For hundreds of years humans built faster and faster means of transportation, and yet the record for the fastest crewed vehicle is still held by the 1969 Apollo 10 mission.
There are hard limits to physics. It is not infinitely malleable.
Magic Leap said they were going to shrink down their light field tech a decade ago but gave up. The reality is that this technology may just not be physically possible.
just_another_person@lemmy.world
on 13 Nov 2023 06:17
Nah
NeoNachtwaechter@lemmy.world
on 13 Nov 2023 07:13
Ai Pin has a “prominent Trust Light” which turns on when the device is in use.
Let’s all place bets: What will be the first mod or patch for this device? 🥷🏽
KairuByte@lemmy.dbzer0.com
on 13 Nov 2023 07:44
Literally a piece of tape.
MrScottyTay@sh.itjust.works
on 13 Nov 2023 14:53
Those Ray-Bans that some have mentioned here will literally disallow you from recording or taking a pic if they detect something in the way of the activity light. We don’t know what physical tampering may accomplish, though.
MudMan@kbin.social
on 13 Nov 2023 07:20
I mean, it's fun that a techbro thought "isn't the TNG combadge cool?" and actually went and made it, but this was a Youtube video, not a product launch.
kalkulat@lemmy.world
on 13 Nov 2023 07:54
A built-in 13-megapixel ultra wide-angle camera can be used to capture photographs and videos
I bet other people won’t like the camera any more than they did with Google Glass.
Photos can be viewed using the “Center” website on any web browser.
Want to see photos? Gotta go to the website. All photos therefore shared. Along with notes, music listened to, reminders … Nuh-uh!
It’s a neat idea, but frankly, I don’t want or need other people hearing my business. It needs to pair with (smart?) earbuds.
It does support earbuds, going by a random YT video about it from the creators that popped up on my feed.
The Meta Ray-Bans have a better path to success than this. (Honestly, if they weren’t tied to Meta, I’d love to get a pair.)
Mike Elgan had a great piece on why this will not take off:
…substack.com/…/why-humanes-ai-pin-wont-succeed-6…
He makes a ton of great points.