Over the last few days I've read an interesting post published by Uber, which has caught our attention here at CYBERTEC. You can read it for yourself here.
The idea behind geo-fencing is to provide users with information about an area. Somebody might want to find a taxi near a certain location, or somebody might simply want to order a pizza from a nearby restaurant.
According to the blog post, Uber has solved this problem using some hand-written Go code. Uber's implementation serves 95% of the queries in under 5 ms. Another blog also took a close look at it: https://medium.com/@buckhx/unwinding-uber-s-most-efficient-service-406413c5871d#.vdsg0fhoi
Of course, to a team of PostgreSQL professionals 5 ms is quite a lot, so we tried to do better than that.
The first thing we need in order to compete is some test data. A nice data set can be found here:
https://github.com/buckhx/gofence-profiling/tree/master/nyc
The data can then be loaded nicely with psql:
-- read the GeoJSON file into a psql variable
\set content `cat nyc_census_2010_tracts.geojson`

CREATE TEMPORARY TABLE geojson (data jsonb);
INSERT INTO geojson VALUES (:'content');

CREATE TABLE census_tracts (
    id        serial primary key,
    ntaname   text,
    boroname  text,
    shape     geometry
);

-- one row per feature in the GeoJSON FeatureCollection
INSERT INTO census_tracts (ntaname, boroname, shape)
SELECT geom->'properties'->>'NTAName',
       geom->'properties'->>'BoroName',
       st_geomfromgeojson(geom->>'geometry')
FROM   geojson,
       LATERAL jsonb_array_elements(data->'features') geom;
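If you want to verify the load, a quick sanity check such as the following (not part of the original post) does the job:

-- how many tracts and boroughs did we import?
SELECT count(*) AS tracts,
       count(DISTINCT boroname) AS boroughs
FROM   census_tracts;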
The data can already be queried. The following query does what Uber tries to achieve:
SELECT *
FROM   census_tracts
WHERE  ST_Contains(shape, ST_Point(-73.9590, 40.7830));
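As a small variation (not from the original post), the same containment test can return just the neighborhood and borough the point falls into:

-- which fence contains the given longitude/latitude?
SELECT ntaname, boroname
FROM   census_tracts
WHERE  ST_Contains(shape, ST_Point(-73.9590, 40.7830));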
However, to be fair: the sample set is not big enough yet. To increase the amount of data to roughly the size of Uber's database, the following SQL can do the job:
CREATE TABLE tracts_100k (LIKE census_tracts INCLUDING ALL);

INSERT INTO tracts_100k (ntaname, boroname, shape)
SELECT ntaname || ' ' || offs::text,
       boroname || ' ' || offs::text,
       ST_Translate(shape, offs, 0)
FROM   census_tracts,
       LATERAL generate_series(0, 360, 360/50) offs;
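Before measuring anything, a spatial index is needed. The post does not show the statement itself; a GiST index on the geometry column is the standard choice for this kind of containment search (a minimal sketch, the index name is ours):

-- spatial index on the enlarged data set (index name chosen for illustration)
CREATE INDEX tracts_100k_shape_idx ON tracts_100k USING gist (shape);
ANALYZE tracts_100k;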
After applying some nice indexing, we can already see how well our query behaves. The per-transaction latencies logged by pgbench can be analyzed directly in PostgreSQL using file_fdw:
CREATE EXTENSION file_fdw;
CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE geobench (
    client     int,
    tx_no      int,
    time       int,
    file_no    int,
    time_epoch int,
    time_us    int
) SERVER files OPTIONS (
    filename '/home/user/code/gis-build/gofence-profiling/nyc/pgbench_log.7665',
    format 'text',
    delimiter ' '
);

SELECT percentile_disc(ARRAY[0.5, 0.95, 0.99])
           WITHIN GROUP (ORDER BY ROUND(time/1000., 3)) AS latency_ms
FROM   geobench;
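To double-check that the containment query really uses the spatial index on the enlarged table, a routine EXPLAIN run (not part of the original post) can be added:

-- confirm the GiST index is used for the containment test
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   tracts_100k
WHERE  ST_Contains(shape, ST_Point(-73.9590, 40.7830));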
The results are very promising. Our version of the geo-fencing query is around 40 times faster than Uber's. Clearly, Uber should consider using PostgreSQL instead of custom code. Given that we invested around 30 minutes to get this done, even developing the business logic is faster with PostgreSQL.
Photo copyright: Uber
#ShotsFired
Pretty cool!
I think one key difference is that not only does Uber have a large amount of data, but the locations of the vehicles update at high frequency (every few seconds), which means that the index will constantly have to be rebuilt. Would this solution still perform as well if the indexes were less static?
Another key difference is that space partitioning optimizations (such as those needed for this kind of search) are much more efficient when data is scattered evenly in space instead of being clustered in small areas, like cities (and even inside cities, where it's probably clustered in a few main streets as well). The search tree may very well be 40 times deeper than with your random data set.
I've been doing a little work in this area using the geohash-36 algorithm in non-spatially-aware data stores. Results are looking good.