There are a lot of cool things you can do with the http extension, but putting them into production raises an important problem.
The amount of time an HTTP request takes, hundreds of milliseconds, is 10 to 20 times longer than the amount of time a normal database query takes.
This means an HTTP call can potentially jam up a query for a long time. I recently ran an HTTP function in an update against a relatively small 1,000-record table.
The query took 5 minutes to run, and for that whole time the table was locked to other access, since the update touched every row.
This was fine for me on my developer database on my laptop. In a production system, it would not be fine.
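To make the problem concrete, here is the general shape of that kind of query, with the table, column, and web service reduced to hypothetical placeholders. The http_get() function comes from the http extension and exposes the response body in its content column.

-- Hypothetical shape of the problem query: one HTTP round trip
-- per row, with the whole table locked until the UPDATE commits.
UPDATE my_table
  SET result = (
    SELECT content
    FROM http_get('https://api.example.com/lookup?key=' || my_key)
  );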
A really common table layout in a spatially enabled enterprise system is a table of addresses with an associated location for each address.
CREATE EXTENSION postgis;

CREATE TABLE addresses (
    pk serial PRIMARY KEY,
    address text,
    city text,
    geom geometry(Point, 4326),
    geocode jsonb
);

CREATE INDEX addresses_geom_x
    ON addresses USING GIST (geom);
INSERT INTO addresses (address, city)
VALUES ('1650 Chandler Avenue', 'Victoria'),
('122 Simcoe Street', 'Victoria');
New addresses get inserted without known locations. The system needs to call an external geocoding service to get locations.
SELECT * FROM addresses;
pk | address | city | geom | geocode
----+----------------------+----------+------+---------
8 | 1650 Chandler Avenue | Victoria | |
9 | 122 Simcoe Street | Victoria | |
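Filling in a missing location is then a matter of calling the geocoding service and converting its JSON answer into a point. Here is a sketch against a hypothetical geocoder (the URL and the lon/lat response fields are assumptions, not a real service); ST_MakePoint() and ST_SetSRID() are standard PostGIS.

-- Hypothetical: geocode one address and build a point from the
-- "lon"/"lat" fields we assume the service returns.
UPDATE addresses
  SET geocode = j,
      geom = ST_SetSRID(ST_MakePoint(
                 (j->>'lon')::float8,
                 (j->>'lat')::float8), 4326)
  FROM (
    SELECT content::jsonb AS j
    FROM http_get('https://geocoder.example.com/geocode?q='
                  || urlencode('1650 Chandler Avenue, Victoria'))
  ) AS response
  WHERE pk = 8;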
When a new address is inserted into the system, it would be great to geocode it right away. A trigger seems like the obvious tool, but a trigger runs in the same transaction as the insert, so the insert will block until the geocode call completes, as sketched below. That could take a while. If the system is under load, inserts will pile up, all waiting for their geocodes.
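For illustration, this is what that naive, blocking trigger might look like, reusing the same hypothetical geocoder URL and response fields as above. It is exactly the pattern to avoid.

-- A naive BEFORE INSERT trigger: the insert cannot commit until
-- the HTTP round trip finishes, so every insert pays the latency.
CREATE OR REPLACE FUNCTION geocode_on_insert()
RETURNS trigger AS $$
DECLARE
    j jsonb;
BEGIN
    SELECT content::jsonb INTO j
    FROM http_get('https://geocoder.example.com/geocode?q='
                  || urlencode(NEW.address || ', ' || NEW.city));
    NEW.geocode := j;
    NEW.geom := ST_SetSRID(ST_MakePoint(
                    (j->>'lon')::float8,
                    (j->>'lat')::float8), 4326);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER addresses_geocode
    BEFORE INSERT ON addresses
    FOR EACH ROW EXECUTE FUNCTION geocode_on_insert();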
A better-performing approach would be to insert the address right away
[...]