A Researcher Figured Out How to Reveal Any Phone Number Linked to a Google Account (brutecat.com)
from Pro@programming.dev to technology@lemmy.world on 09 Jun 17:25
https://programming.dev/post/31911310

archive.today link

#technology


Stamets@lemmy.world on 09 Jun 17:54 next collapse

Well that’s not terrifying at all.

saltesc@lemmy.world on 09 Jun 18:09 collapse

Our names, numbers, and home addresses used to be in a book delivered to everyone’s door or found stacked in a phone booth on the street. That was normal for generations.

It’s funny how much fuckwits can change the course of society and how we can’t have nice things.

dmtalon@infosec.pub on 09 Jun 18:34 next collapse

Right, but when everyone got phone books, those were only shared locally in the town. It would be pretty hard to figure out someone's phone number from across the state/country without the internet unless you knew someone in the town.

You could also pay to be unlisted, which is a luxury long since gone. How cool would it be to make your data ‘unlisted’ by paying a small monthly fee.

[deleted] on 09 Jun 18:40 next collapse

.

Mongostein@lemmy.ca on 09 Jun 18:42 next collapse

It would be even cooler if we had a right to privacy

dmtalon@infosec.pub on 09 Jun 18:55 collapse

no doubt, lucky us, we get neither…

corsicanguppy@lemmy.ca on 09 Jun 18:55 next collapse

Phone books from outside my region were available at the library; that place where they store a consolidated collection of books for just anyone to sign out and use.

dmtalon@infosec.pub on 09 Jun 20:33 next collapse

I don't remember that; however, it doesn't surprise me, at least for a radius around your area. I'd be surprised if they had all of them from all the states.

Jerkface@lemmy.world on 09 Jun 22:53 collapse

You could just have them borrow one from whatever other library had it. Hell, you could just call the phone company and order the one you want yourself. Fuck, you could just call 411 and have them look it up for you right then.

Paradox@lemdro.id on 09 Jun 23:04 collapse

I once used one to look up my friend from summer camp. He lived in New York City and I didn’t live anywhere close

Library had a bunch of NYC phonebooks

Pyr_Pressure@lemmy.ca on 10 Jun 02:46 collapse

Proton estimates the average American's data is worth $700 per year.

Sign me up for $1000/year privacy fee and you will make more money by doing absolutely nothing.

Ibuthyr@lemmy.wtf on 10 Jun 10:56 collapse

Or, how about they fuck off and leave me alone with my private data? I don’t want to have to pay for something that should be an irrevocable right.

Even if you completely degoogle and whatnot, these cunts will still get hold of your data one way or the other. It's sickening.

Stamets@lemmy.world on 09 Jun 18:37 collapse

Still are. I got a phone book delivered a week ago, I shit thee not. Granted I’m on a small island and the book is small too. But like, you can pay to have your number removed from the book. Can you have it removed from this? Not to mention all the 2FA stuff that can be connected to the phone number. Someone clones your number or takes it and suddenly they’ve got access to a whole lot of your login stuff.

unphazed@lemmy.world on 09 Jun 18:42 next collapse

My phone book is smaller than a novel and only has yellow pages these days.

Imgonnatrythis@sh.itjust.works on 09 Jun 18:49 collapse

Pay to have it removed! That sounds like blackmail doxing.

dan@upvote.au on 09 Jun 18:00 next collapse

Most service providers like Vultr provide /64 IP ranges, which provide us with 18,446,744,073,709,551,616 addresses. In theory, we could use IPv6 and rotate the IP address we use for every request, bypassing this ratelimit.

This usually doesn't work, as IPv6 rate limiting is typically done per /64 range (which is the smallest subnet allowed per the IPv6 spec), not per individual IP.
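The per-/64 keying described above can be sketched with Python's stdlib `ipaddress` module. This is a minimal illustration, not any provider's actual implementation; the in-memory dict and the limit of 100 are made-up for the example:

```python
# Minimal sketch of per-/64 IPv6 rate limiting: every address inside a
# /64 collapses to the same bucket key, so rotating through the block's
# 2^64 addresses never resets the counter. In-memory dict for illustration.
import ipaddress
from collections import defaultdict

LIMIT = 100                      # made-up request budget per bucket
buckets: dict[str, int] = defaultdict(int)

def bucket_key(ip: str) -> str:
    """Map an IP to its rate-limit bucket: the /64 network for IPv6."""
    addr = ipaddress.ip_address(ip)
    if isinstance(addr, ipaddress.IPv6Address):
        # Key on the containing /64 network, not the individual address.
        return str(ipaddress.ip_network(f"{addr}/64", strict=False))
    return str(addr)             # IPv4: key on the single address

def allow(ip: str) -> bool:
    """Count a request against the bucket and report whether it's allowed."""
    key = bucket_key(ip)
    buckets[key] += 1
    return buckets[key] <= LIMIT
```

Any two addresses from the same /64 (say `2001:db8::1` and `2001:db8::dead:beef`) land in the `2001:db8::/64` bucket, which is why the rotation trick fails.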

Album@lemmy.ca on 09 Jun 18:07 collapse

Ipv6 catching strays

Glitchvid@lemmy.world on 09 Jun 20:16 collapse

Usually is. It's still common among network admins to hear dumb shit like IPv6 being less secure because there's no NAT. 🤦‍♂️

Archer@lemmy.world on 09 Jun 21:12 collapse

If NAT is your “firewall”, you have bigger problems!

IllNess@infosec.pub on 09 Jun 18:13 next collapse

Eventually, I had a PoC running, but I was still getting the captcha? It seemed that for whatever reason, datacenter IP addresses using the JS disabled form were always presented with a captcha, damn!

The simplest answer is probably the right one. They are used for bots.

JoMiran@lemmy.ml on 09 Jun 18:15 next collapse

I set up my GrandCentral, now Google Voice, account using a VoIP number from a company that went defunct many years ago. My Google accounts use said Google Voice phone number to validate because GrandCentral wasn't owned by Google back then. I assume this use case is so small, there is no point fixing it. So essentially, my accounts fall into a loop where Google leads to Google, etc.

heh

atrielienz@lemmy.world on 09 Jun 19:30 collapse

I did something of the opposite. I had a Verizon number. I moved it to Google Voice. I had a second Google Voice number that then became a Google Fi number. So now I have a Verizon-coded Google Voice number (that my bank accepts, etc.), and a Google Fi number that was originally a Google Voice number. I'm curious how this honestly affects me. My work numbers have never been associated with my personal accounts, so there's that.

rollmagma@lemmy.world on 09 Jun 18:24 next collapse

God, I hate security “researchers”. If I posted an article about how to poison everyone in my neighborhood, I’d be getting a knock on the door. This kind of shit doesn’t help anyone. “Oh but the state-funded attackers, remember Stuxnet”. Fuck off.

ryry1985@lemmy.world on 09 Jun 18:42 next collapse

I think the method of researching and then informing the affected companies confidentially is a good way to do it but companies often ignore these findings. It has to be publicized somehow to pressure them into fixing the problem.

rollmagma@lemmy.world on 09 Jun 23:34 collapse

Indeed, then it becomes a market and it incentivises more research in that area. Which I don't think is helpful for anyone. It's like your job description being “professional pessimist”. We could be putting that amount of effort into building more secure software to begin with.

cmnybo@discuss.tchncs.de on 09 Jun 18:46 next collapse

Without researchers like that, someone else would figure it out and use it maliciously without telling anyone. This researcher got Google to close the loophole that the exploit requires before publicly disclosing it.

rollmagma@lemmy.world on 09 Jun 23:29 collapse

That's the fallacy I'm alluding to when I mention Stuxnet. We have really well funded, well intentioned, intelligent people creating tools, techniques and overall knowledge in a field. Generally speaking, some of these findings are more makings than findings.

Imgonnatrythis@sh.itjust.works on 09 Jun 18:48 next collapse

I think it’s important for users to know how vulnerable they really are and for providers to have a fire lit under their ass to patch holes. I think it’s standard practice to alert providers to these finds early, but I’m guessing a lot of them already knew about the vulnerabilities and often don’t give a shit.

Compared to airing this dirty laundry I think the alternatives are potentially worse.

rollmagma@lemmy.world on 09 Jun 23:37 collapse

Hmm I don’t know… Users usually don’t pay much attention to security. And the disclosure method actively hides it from the user until it no longer matters.

For providers, I understand, but can’t fully agree. I think it’s a misguided culture that creates busy-work at all levels.

TipRing@lemmy.world on 09 Jun 18:59 collapse

This disclosure was from last year and the exploit was patched before the researcher published the findings to the public.

malloc@lemmy.world on 09 Jun 18:31 next collapse

Google, Apple, and the rest of big tech are pregnable despite their access to vast amounts of capital and labor.

I used to be a big supporter of using their “social sign on” (or more generally speaking, single sign on) as a federated authentication mechanism. They have access to brilliant engineers, thus I naively thought: “Well, these companies are well funded and security focused. What could go wrong having them handle a critical entry point for services?”

Well, as this position continues to age poorly, many fucking aspects can go wrong!

  1. These authentication services owned by big tech are much more attractive to attack. Finding that one vulnerability in their massive attack surface is difficult but not impossible.
  2. If you use big tech to authenticate to services, you are now subject to the vague terms of service of big tech. Oh, you forgot to pay your Google Store bill because the card on file expired? Now your Google account is locked and you lose access to hundreds of services that have no direct relation to Google/Apple.
  3. Using third-party auth mechanisms like Google often complicates the relationship between service provider and consumer. Support costs increase because when an 80-year-old forgets the password or 2FA method for their Google account, they will go to the service provider instead of Google to fix it. Then you spend inordinate amounts of time/resources trying to fix the issue. These costs are eventually passed on to the customer in some form or another.

Which is why my new position is for federated authentication protocols. Similar to how Lemmy and the fediverse work but for authentication and authorization.

Having your own IdP won't fix the 3rd issue, but at least it will alleviate the 1st and 2nd concerns.
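One concrete way this works in practice: with a self-hosted OpenID Connect IdP, relying parties don't hardcode Google's endpoints but discover any issuer's via the standard well-known document. A small sketch (the `idp.example.com` host is a made-up placeholder, and the network fetch is only illustrative):

```python
# Sketch of OIDC issuer discovery: relying parties build the standard
# well-known URL from the issuer, so any self-hosted IdP works the same
# way Google's does. The issuer host used below is hypothetical.
import json
import urllib.request

def discovery_url(issuer: str) -> str:
    """Build the OIDC discovery document URL for an issuer."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def discover(issuer: str) -> dict:
    """Fetch the discovery document (endpoints, supported flows, keys URL)."""
    with urllib.request.urlopen(discovery_url(issuer)) as resp:
        return json.load(resp)

# e.g. discover("https://idp.example.com") would return a dict containing
# "authorization_endpoint", "token_endpoint", "jwks_uri", etc.
```

The point being that the protocol itself is federated; the centralization is purely a product of everyone choosing the same handful of issuers.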

propitiouspanda@lemmy.cafe on 09 Jun 22:22 next collapse

They have access to brilliant engineers

Not really.

Paradox@lemdro.id on 10 Jun 01:13 collapse

The sad thing is, we had federated auth before social sign on. OpenID was a thing before oauth

Zacryon@feddit.org on 09 Jun 18:47 next collapse

Casually rotating 18,446,744,073,709,551,616 IP addresses to bypass rate limits.

I am not in IT security, but find it fascinating what clever tricks people use to break (into) stuff.

In a better world, we might use this energy for advancing humanity instead of looking at how we can hurt each other. (Not saying the author is doing that, just lamenting that IT security is necessary due to hostile actors in this world.)

Kolanaki@pawb.social on 09 Jun 19:00 next collapse

If you know how to hurt others, you can learn how to prevent that way of hurting others.

TheReturnOfPEB@reddthat.com on 10 Jun 03:04 collapse

is that how guns work ?

Attacker94@lemmy.world on 10 Jun 04:47 next collapse

I would say so. In my opinion, the US has an education problem when it comes to firearms. People are rightfully scared of what they don't know, but culturally, the people who don't know that much about them are adamant against learning about them. This, coupled with the lack of respect given to them by people who do know how to handle them, leads to the position we find ourselves in today.

untakenusername@sh.itjust.works on 10 Jun 04:56 collapse

Theoretically speaking, if you're a govt and you get everyone else to stop using guns, and you don't, then people won't get hurt by guns.

Tinidril@midwest.social on 09 Jun 19:34 next collapse

Those are IPv6 addresses that work a bit differently than IPv4. Most customers only get assigned a single IPv4 address, and even a lot of big data centers only have one or two blocks of 256 addresses. The smallest allocation of IPv6 for a single residential customer is typically a contiguous block of the 18,446,744,073,709,551,616 addresses mentioned.

If Google's security team is even marginally competent, they will recognize those contiguous blocks and treat them as they would a single IPv4 address. Every address in that block has the same prefix, and it's actually easier to rate limit on those prefixes than on entire individual addresses.

dan@upvote.au on 09 Jun 19:38 collapse

This doesn't really work in real life, since IPv6 rate limiting is done per /64 block, not per individual IP address. This is because /64 is the smallest subnet allowed by the IPv6 spec, especially if you want to use features like SLAAC and privacy extensions (which most home users would be using).

SLAAC means that devices on the network can assign their own IPv6 addresses. It's like DHCP but is stateless and doesn't need a server.

Privacy extensions mean that the IPv6 address is periodically changed to prevent any individual device from being tracked. All devices on an IPv6 network usually have their own public IP, which fixes some things (NAT and port forwarding aren't needed any more) but has potential privacy issues if one device keeps the same IP for a long time.
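The "stateless" part of SLAAC can be illustrated with the classic modified EUI-64 scheme, one way a host derives its own interface identifier from its MAC address with no server involved. A sketch, with a made-up MAC and router-advertised prefix:

```python
# Sketch of SLAAC's modified EUI-64 scheme: flip the universal/local bit
# of the MAC's first octet, insert ff:fe in the middle, and append the
# result to the router-advertised /64 prefix. Example values are made up.

def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface ID from a 48-bit MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

prefix = "2001:db8:1234:5678"        # the /64 prefix from the router advertisement
mac = "52:54:00:12:34:56"            # hypothetical example MAC
addr = f"{prefix}:{eui64_interface_id(mac)}"
print(addr)                          # the host's self-assigned address
```

Privacy extensions exist precisely because this EUI-64 identifier embeds the MAC and never changes; they swap it for a periodically regenerated random identifier, but always within the same /64 prefix.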

hansolo@lemmy.today on 09 Jun 19:09 next collapse

F. This will be moved to an OSINT tool within a week, and scraped into a darkweb database by next Friday.

Sandbar_Trekker@lemmy.today on 09 Jun 20:48 next collapse

I think you missed the part at the very end of the page that showed the timeline of them reporting the vulnerability back in April, being rewarded for finding the vulnerability, the vulnerability being patched in May, and being allowed to publicize the vulnerability as of today.

hansolo@lemmy.today on 10 Jun 07:35 collapse

Indeed I did! Thanks

HellieSkellie@lemmy.dbzer0.com on 09 Jun 20:48 collapse

Well, at the bottom of the article he shows the bug report timeline is complete, so it's likely already fixed.

x00z@lemmy.world on 09 Jun 22:19 next collapse

$5,000

This is like 1/10th of what a good blackhat hacker would have gotten out of it.

scarilog@lemmy.world on 09 Jun 23:58 collapse

I always wonder what’s stopping security researchers from selling these exploits to Blackhat marketplaces, getting the money, waiting a bit, then telling the original company, so they end up patching it.

It would probably break some contractual agreements, but if you're doing this as a career, surely you'd know how to hide your identity properly.

x00z@lemmy.world on 10 Jun 00:40 next collapse

The chances that such an old exploit gets found at the same time by a whitehat and a blackhat are very small. It would be hard not to be suspicious.

scarilog@lemmy.world on 10 Jun 00:50 collapse

Yes, but I was saying the blackhat marketplaces wouldn't really have much recourse if the person selling the exploit knew how to cover their tracks, i.e. they wouldn't have anyone to sue or go after.

x00z@lemmy.world on 10 Jun 01:21 collapse

I'm saying blackhat hackers can make far more money off the exploit by itself. I've seen far worse techniques being used to sell services for hundreds of dollars, and the people behind those are making thousands. An example is the slow bruteforcing of blocked words on a YouTube channel, as they might have blocked their name, phone number, or address.

What you’re talking about is playing both sides, and that is just not worth doing for multiple reasons. It’s very obvious when somebody is doing that. People don’t just find the same exploit at the same time in years old software.

filcuk@lemmy.zip on 10 Jun 03:53 collapse

It’s not worth the risk. If your job is border control, would you be smuggling goods? Maybe some would, but most would not.

They’re whitehat because they don’t want to take part in illegal activities, or already have and have grown from it.

propitiouspanda@lemmy.cafe on 09 Jun 22:21 next collapse

I keep telling people, it’s a matter of when, not if.

Do not trust corporations.

jballs@sh.itjust.works on 09 Jun 22:30 collapse

Damn that’s interesting. I like how they walked through step by step how they got the exploit to work. This is what actual real hacking is like, but much less glamorous than what you see in the movies.

turtlesareneat@discuss.online on 10 Jun 00:19 collapse

When do we get to the part where a bunch of UNIX logs get projected, backward, on someone’s face