Apple Vision Pro’s Eye Tracking Exposed What People Type
(www.wired.com)
from BrikoX@lemmy.zip to cybersecurity@sh.itjust.works on 12 Sep 2024 10:26
https://lemmy.zip/post/22604748
The Vision Pro uses 3D avatars on calls and for streaming. These researchers used eye tracking to work out the passwords and PINs people typed with their avatars.
Archived version: web.archive.org/…/apple-vision-pro-persona-eye-tr…
That should be an easy fix in a future software update: simply stop replicating eye movement whenever the user is looking at the virtual keyboard.
The solution is constant googly eyes.
Let’s be honest: the solution is always googly eyes.
youtu.be/zc7qJE9Nzo8
Sounds like what they already did: as soon as the virtual keyboard pops up the eye movement isn’t transmitted as part of the avatar.
Oh I see. According to the article:
Easy fix.
<img alt="" src="https://lemmy.ml/pictrs/image/2ad60e53-bef2-401d-b0cb-5aa26ac3ab40.gif">
Bet the same applies to video calls.
Sounds like you could do this to a person in a normal zoom call with no headset.
Most people don’t look at the keys while typing on a physical keyboard, especially for things typed from muscle memory like passwords. And a Zoom call doesn’t convey facial data in three dimensions. The unique nature of the virtual keyboard, plus the three-dimensional avatar, makes this new attack far more feasible.
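To make the idea concrete: a minimal sketch of the attack's core step, assuming an attacker has already estimated gaze fixation points and knows the virtual keyboard's layout. The key positions and fixation coordinates below are entirely invented for illustration; the actual research pipeline (extracting gaze from the avatar's rendered eyes, segmenting fixations, etc.) is far more involved.

```python
import math

# Invented 2D positions (x, y) for a few keys on a hypothetical
# virtual keyboard layout -- not the real Vision Pro geometry.
KEY_POSITIONS = {
    "p": (9.0, 0.0),
    "a": (0.5, 1.0),
    "s": (1.5, 1.0),
    "w": (1.0, 0.0),
    "o": (8.0, 0.0),
    "r": (3.0, 0.0),
    "d": (2.5, 1.0),
}

def nearest_key(fixation):
    """Return the key whose center is closest to a gaze fixation point."""
    return min(KEY_POSITIONS, key=lambda k: math.dist(KEY_POSITIONS[k], fixation))

def recover_text(fixations):
    """Map a sequence of gaze fixations to the inferred key sequence."""
    return "".join(nearest_key(f) for f in fixations)

# Fixations slightly offset from the true key centers, the way a noisy
# gaze estimate recovered from an avatar's eyes might be.
fixations = [
    (8.9, 0.1), (0.4, 1.1), (1.6, 0.9), (1.55, 1.05),
    (1.1, 0.1), (8.1, -0.1), (3.1, 0.2), (2.4, 1.1),
]
print(recover_text(fixations))  # password
```

Even this toy nearest-neighbor matcher recovers the typed string from noisy fixations, which is why suppressing eye movement while the keyboard is up closes the leak.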
Seems like we’re going to be stuck in the uncanny valley of telepresence. The more fidelity we add, the more we’re able to pick up on microexpressions, subtle eye movements, and breathing, which helps trigger oxytocin and promote trust. But also, the more fidelity we add, the more attack surface we open up for malicious actors to exploit.