slazer2au@lemmy.world
on 22 Dec 23:52
Neat. Most of that went over my head but always good to see more performance out of existing tech.
magic_lobster_party@fedia.io
on 23 Dec 10:01
Basically, it’s just an optimization of a double for loop: a way to avoid running the inner loop when it’s known there will be no hit.
This is useful when, for example, we want to find all product orders placed by customers in a particular country. One way to do this is to first filter the customers by country, and then match orders against the remaining customers. The matching step is the double for loop.
Something like this:
for order in orders:
    for customer in customers_in_country:
        if order.customer_id == customer.id:
            …
Many orders won’t match any customer in the query above, so we want to single out those orders before running the expensive inner loop. The way they do it is with a Bloom filter. I’d recommend looking it up, but in short it’s a probabilistic set-membership structure that’s fast and space efficient, at the cost of letting through some false positives. For this particular use case, false positives are acceptable: the worst that can happen is that the inner loop runs more times than necessary.
The final code is something like this:
bloom_filter = create_bloom(customers_in_country)
for order in orders:
    if bloom_filter.contains(order.customer_id):
        for customer in customers_in_country:
            if order.customer_id == customer.id:
                …
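For reference, create_bloom and contains above are stand-ins for whatever Bloom filter implementation you use. A minimal toy version matching that interface might look like the sketch below (the bit-array size and double-hashing scheme are illustrative choices on my part, not anything from the article):

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _positions(self, item):
        # Derive several bit positions from two hashes of the item
        # (double hashing), a common trick to simulate k hash functions.
        data = str(item).encode()
        h1 = int.from_bytes(hashlib.md5(data).digest()[:8], "big")
        h2 = int.from_bytes(hashlib.sha1(data).digest()[:8], "big")
        return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def contains(self, item):
        # False means "definitely not present"; True means "possibly present".
        return all(self.bits[pos] for pos in self._positions(item))

def create_bloom(customers):
    # Hypothetical helper matching the usage above: index each customer id.
    bf = BloomFilter()
    for customer in customers:
        bf.add(customer.id)
    return bf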
Edit: this comment probably contains some inaccuracies, as I’ve never done this kind of stuff in practice, so don’t rely too much on it.
Diplomjodler3@lemmy.world
on 23 Dec 14:59
That’s certainly not how I would implement any of this in Python.
SwordInStone@lemmy.world
on 23 Dec 15:44
that’s what’s in the article tho
magic_lobster_party@fedia.io
on 23 Dec 15:57
It’s just example code to demonstrate the idea of the optimization explained in the article. I also based my code on the code used in the article (and made some major changes to better fit my attempt at an explanation).
You can, of course, feel free to show us how you’d implement this in Python. It’s fine to say you would do it differently, but don’t stop there; show what you would do differently and how. Add to the discussion, like the person you were replying to did; don’t detract.
ShawiniganHandshake@sh.itjust.works
on 24 Dec 01:20
I haven’t read the article but I work with Bloom filters at work sometimes.
Bloom filters basically tell you “this thing might be present” or “this thing is definitely not present”.
If you’re looking for a piece of data in a set of large files, being able to say “this data is definitely not in this file” saves you a bunch of time: you can skip over the file entirely instead of searching through the whole thing just to find out that what you’re looking for isn’t there.
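In code, that skip-the-file pattern might look something like this sketch. It’s hypothetical (the file layout and the per-file filters dict are made up for illustration); the only assumption is a Bloom filter exposing a contains method, like the toy one sketched earlier in the thread.

def find_record(key, paths, filters):
    # filters maps each file path to a Bloom filter built over the keys
    # stored in that file.
    for path in paths:
        # "Definitely not present" -> skip the whole file, no scan needed.
        if not filters[path].contains(key):
            continue
        # "Might be present" -> pay the cost of an actual scan. A false
        # positive just means this scan comes up empty.
        with open(path) as f:
            for line in f:
                if line.startswith(key):
                    return line
    return None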
The results of this research have been applied to SQLite already and were released in v3.38.0.
tl;dr: Bloom filters were a great fit because of their minimal memory overhead, their good match for SQLite’s simple implementation, and the fact that they work within the existing query engine.
Bloom filters can be handy. They’re worth learning even if you don’t care about SQLite.
Do you happen to know of a few situations where Bloom filters are super useful? I’m trying to get a sense of when to use them.
Here’s a clever use of them: blog.mozilla.org/…/crlite-part-2-end-to-end-desig…
We use them for our password blacklisting. Crazy fast, even with millions of entries.
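As a hypothetical sketch of that kind of check (not their actual code, and assuming a Bloom filter with add/contains methods like the toy one earlier in the thread):

def build_blacklist(path, bloom):
    # Load each leaked password into the filter once, at startup.
    with open(path) as f:
        for line in f:
            bloom.add(line.strip())
    return bloom

def is_blacklisted(password, bloom):
    # A false positive rejects a password that wasn't actually leaked,
    # which is a harmless outcome; a leaked password is never let through.
    return bloom.contains(password)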