# postgres_as_app

Hybrid RDS / custom Postgres / PostgREST deployment templated using AWS CloudFormation.
This repository complements this blog post. See that post for a higher-level discussion of the system design.
- Make sure to create an AWS root account. You can find additional steps to do so via this AWS help page.
- Install necessary system dependencies. This project requires `curl`, `awscli`, `docker` and `docker-compose`, and `make`. I'm using Ubuntu 20.04 LTS, and my installation commands look something like this:

  ```bash
  $ sudo apt-get install -y curl
  $ sudo python -m pip install awscli
  $ sudo apt-get install -y docker.io
  $ sudo apt-get install -y docker-compose
  $ sudo apt-get install -y build-essential
  ```

  I'm currently running the following versions:

  ```
  $ aws --version
  aws-cli/1.18.35 Python/3.7.7 Linux/5.4.0-33-generic botocore/1.15.35
  $ docker --version
  Docker version 19.03.8, build afacb8b7f0
  $ docker-compose --version
  docker-compose version 1.23.2, build 1110ad01
  $ make --version
  GNU Make 4.2.1
  Built for x86_64-pc-linux-gnu
  Copyright (C) 1988-2016 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software: you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.
  ```
- Create a root account access key / secret key pair in the AWS console, and configure the default profile's key, secret, region, and output format on your local system:

  ```
  $ aws configure
  AWS Access Key ID [****************XXXX]: $YOUR_AWS_ACCESS_KEY_ID
  AWS Secret Access Key [****************XXXX]: $YOUR_AWS_SECRET_ACCESS_KEY
  Default region name [us-east-1]: $YOUR_AWS_REGION
  Default output format [json]: $YOUR_OUTPUT_FORMAT
  ```
- (Optional, required for IAM setup) Download a TOTP app like Google Authenticator or Authy; either should be fine. Set up your login. I combine Authy with the Bitwarden password manager for another layer of security.
- If downloading from GitHub, `git clone` this repository:

  ```bash
  $ git clone https://github.com/yingw787/postgres_as_app
  $ cd postgres_as_app
  ```
Assuming that all commands are run from `${BASEDIR}/services/`.
- Stand up the `docker-compose` stack:

  ```bash
  make up
  ```
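  To verify that the stack came up cleanly, one option (an extra check, not part of the repo's Makefile) is to list the running services:

  ```bash
  # Each of the stack's services should report an "Up" state.
  docker-compose ps
  ```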
- In your origin database (`make origindb-psql`), create a table and insert some data into it:

  ```
  somedb=# CREATE TABLE todos (col1 VARCHAR(256), col2 VARCHAR(256));
  CREATE TABLE
  somedb=# INSERT INTO todos (col1, col2) VALUES ('cat', 'dog');
  INSERT 0 1
  somedb=#
  ```
- In your custom database (`make customdb-psql`), create a `postgres_fdw` extension:

  ```
  mydb=# CREATE EXTENSION postgres_fdw;
  CREATE EXTENSION
  ```
- In your custom database (`make customdb-psql`), create a server referencing your origin database:

  ```
  mydb=# CREATE SERVER origindb FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'origindb', port '5432', dbname 'somedb');
  CREATE SERVER
  ```
- In your custom database (`make customdb-psql`), create a user mapping to map permissions between your user on the custom database and a user on the origin database:

  ```
  mydb=# CREATE USER MAPPING FOR myuser SERVER origindb OPTIONS (user 'someuser', password 'somepassword');
  CREATE USER MAPPING
  ```
- In your custom database (`make customdb-psql`), create a foreign table referencing the table in the origin database:

  ```
  mydb=# CREATE FOREIGN TABLE todos (col1 VARCHAR(256), col2 VARCHAR(256)) SERVER origindb OPTIONS (schema_name 'public', table_name 'todos');
  CREATE FOREIGN TABLE
  ```

  You can see the table using `make -f services/Makefile customdb-psql` and the `psql` command `\dE[S+]`:

  ```
  mydb=# \dE[S+]
                            List of relations
   Schema | Name  |     Type      | Owner  |  Size   | Description
  --------+-------+---------------+--------+---------+-------------
   public | todos | foreign table | myuser | 0 bytes |
  (1 row)

  mydb=#
  ```
- Open up `localhost:3000` in order to see the PostgREST server connected to your custom database up and running, and `localhost:3000/todos` in order to see the foreign table reference.
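  As a quick terminal-side check (an optional addition, assuming PostgREST is on its default port 3000 as in the `docker-compose` stack):

  ```bash
  # The root endpoint returns PostgREST's OpenAPI description of the exposed schema,
  # and /todos returns the rows proxied through the foreign table.
  curl -s localhost:3000/
  curl -s localhost:3000/todos
  ```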
- To tear down the `docker-compose` stack, run:

  ```bash
  make down
  ```
- (OPTIONAL) To publish the custom database Dockerfile to Docker Hub, run:

  ```bash
  make publish-customdb
  ```

  Take care to change `$(DOCKER_HUB_USERNAME)` to your Docker Hub username.
- (OPTIONAL) To publish the PostgREST Dockerfile to Docker Hub, run:

  ```bash
  make publish-postgrest
  ```
Assuming that all commands are run from `${BASEDIR}/infra-aws/`.

NOTE: Deployment targets (`deploy-$SOMETHING`) assume no changes in AWS CloudFormation input parameters. In order to override parameters, the command needs to be copied and pasted into the terminal with the `--parameter-overrides` flag passed in. See the documentation for `aws cloudformation deploy` for more details.
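For example, a minimal sketch of overriding the IAM password parameter when deploying the IAM stack (assuming the `postgresasapp-iam` stack name and `iam.yaml` template used below; `$YOUR_NEW_PASSWORD` is a placeholder):

```bash
aws cloudformation deploy \
    --stack-name postgresasapp-iam \
    --template-file iam.yaml \
    --parameter-overrides IAMPassword=$YOUR_NEW_PASSWORD \
    --capabilities CAPABILITY_NAMED_IAM
```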
- Copy the file `${BASEDIR}/infra-aws/iam.sample.json` to `${BASEDIR}/infra-aws/iam.json`, containing the following:

  ```json
  [
      {
          "ParameterKey": "IAMPassword",
          "ParameterValue": "$YOUR_PASSWORD_HERE"
      }
  ]
  ```
- Replace `$YOUR_PASSWORD_HERE` in `${BASEDIR}/infra-aws/iam.json` with your preferred IAM user password. Note that the password must follow IAM password policies. In order to create a password that passes, run this command in your terminal:

  ```bash
  aws secretsmanager get-random-password --include-space --password-length 20 --require-each-included-type --output text
  ```
- Run `make create-iam`:

  ```bash
  $ make create-iam
  ```

  You should get a response like:

  ```
  $ make create-iam
  aws cloudformation create-stack --stack-name postgresasapp-iam --template-body file://iam.yaml --parameters file://iam.json --capabilities CAPABILITY_NAMED_IAM
  {
      "StackId": "$SAMPLE_ARN"
  }
  ```

  The `StackId` is an "Amazon Resource Name" (ARN), an AWS-specific identifier describing the created resource.

  After AWS CloudFormation finishes creating the stack with response `CREATE_COMPLETE` (which you can see either by logging into the AWS CloudFormation console, or by running `make wait-iam`), you can begin setting up local references for this user.
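  If you'd rather poll from the CLI without the Makefile, `make wait-iam` presumably wraps something along these lines (an assumption based on the stack name above):

  ```bash
  aws cloudformation wait stack-create-complete --stack-name postgresasapp-iam
  ```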
- In the AWS console, use your root account ID, your newly created IAM user ID `postgresasapp-user`, and your IAM user password `IAMPassword` to log into a session of the AWS console.
- Open `Services | IAM`, and in `Users | postgresasapp-user | Security Credentials`, add an MFA device under `Assigned MFA Device`. Register your MFA device, then log out and log back in using the same root account ID, IAM user ID, and IAM user password. This time, the console should prompt you for an MFA code. Enter the code from your authenticator app and log in.
- On the right-hand side of the top navbar, click on "Switch Role", just above "Sign Out". Switch to role `postgresasapp-admin`, using your root AWS account ID and the role name `postgresasapp-admin`.

  You should now have access to all AWS resources after switching to this role, confirming that signing into your IAM user with an MFA device grants full access.
- In order to run further AWS commands in the terminal, you need a set of AWS credentials saved onto your local computer. In the AWS console, under `Services | IAM`, in `Users | postgresasapp-user | Security Credentials`, click on "Create Access Key". This should create a key/secret pair for you to download.
- On your local machine, configure your new user by running `aws configure --profile postgresasapp-user`. This should properly configure `~/.aws/credentials`.
- At this point, you need to configure `~/.aws/config` in order to enable MFA login within the terminal. Take the section of `~/.aws/config` that matches your IAM user:

  ```
  [profile postgresasapp-user]
  region = us-east-1
  output = json
  ```

  And replace it with this new section to properly configure MFA via the CLI:

  ```
  [profile postgresasapp-user]
  source_profile = postgresasapp-user
  role_arn = arn:aws:iam::${RootAWSAccountID}:role/postgresasapp-admin
  role_session_name = postgresasapp-user
  mfa_serial = arn:aws:iam::${RootAWSAccountID}:mfa/postgresasapp-user
  region = us-east-1
  output = json
  ```
- Finally, export `AWS_PROFILE` as `postgresasapp-user` to avoid passing `--profile` to every `aws` command:

  ```bash
  export AWS_PROFILE=postgresasapp-user
  ```

  You should now be able to lift into the admin role via MFA on the CLI.
- As a sanity check, run the following test command:

  ```bash
  aws s3 ls
  ```

  This command should prompt you for an MFA token. After successful MFA validation, it should give the appropriate response (no S3 buckets created) without erroring out.
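  For reference, the prompt typically looks something like this (the ARN depends on your account ID; an empty bucket list simply prints nothing):

  ```bash
  $ aws s3 ls
  Enter MFA code for arn:aws:iam::${RootAWSAccountID}:mfa/postgresasapp-user:
  $
  ```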
- To deploy changes to the IAM user, make your changes in `iam.yaml`, then run:

  ```bash
  make deploy-iam
  ```

- To tear down the IAM user, run:

  ```bash
  make terminate-iam
  ```

- To wait until the IAM user has been successfully created, run:

  ```bash
  make wait-iam
  ```
- To create the VPC, run:

  ```bash
  make create-vpc
  ```

- To deploy changes to the VPC, make your changes in `vpc.yaml`, then run:

  ```bash
  make deploy-vpc
  ```

- To tear down the VPC, run:

  ```bash
  make terminate-vpc
  ```

- To wait until the VPC has been successfully created, run:

  ```bash
  make wait-vpc
  ```
- Copy the file `${BASEDIR}/infra-aws/rds.sample.json` to `${BASEDIR}/infra-aws/rds.json`, containing the following:

  ```json
  [
      {
          "ParameterKey": "DatabaseUsername",
          "ParameterValue": "$YOUR_USERNAME_HERE"
      },
      {
          "ParameterKey": "DatabasePassword",
          "ParameterValue": "$YOUR_PASSWORD_HERE"
      }
  ]
  ```
- Replace `$YOUR_USERNAME_HERE` in `${BASEDIR}/infra-aws/rds.json` with your preferred RDS master username, and `$YOUR_PASSWORD_HERE` in `${BASEDIR}/infra-aws/rds.json` with your preferred RDS master password.
- Run `make create-rds`:

  ```bash
  $ make create-rds
  ```
- To deploy changes to the RDS instance, make your changes in `rds.yaml`, then run:

  ```bash
  make deploy-rds
  ```

- To tear down the RDS instance, run:

  ```bash
  make terminate-rds
  ```

- To wait until the RDS instance has been successfully created, run:

  ```bash
  make wait-rds
  ```

- To connect to the RDS instance via `psql`, run:

  ```bash
  make rds-psql
  ```
- To create the custom database persist layer, run:

  ```bash
  $ make create-persist
  ```

- To deploy changes to the custom database persist layer, run:

  ```bash
  $ make deploy-persist
  ```

- To tear down the custom database persist layer, run:

  ```bash
  $ make terminate-persist
  ```

- To wait until the custom database persist layer has been successfully created, run:

  ```bash
  $ make wait-persist
  ```
- Log in to the AWS EC2 console for your specific region.

- Under 'Network & Security', select 'Key Pairs' to see the 'Key Pairs' window. In the top-right corner, click on the orange button "Create key pair". This should open a new window.
- Give a name to your keypair, like `admin`. Select `.pem` for your file format, then click "Create key pair" in the bottom-right corner of the form.
- Download your keypair `.pem` file to your local computer, and save it as `/path/to/file.pem`.
- Copy the file `${BASEDIR}/infra-aws/compute.sample.json` to `${BASEDIR}/infra-aws/compute.json`, containing the following:

  ```json
  [
      {
          "ParameterKey": "EC2KeyPair",
          "ParameterValue": "$YOUR_EC2_KEYPAIR"
      }
  ]
  ```
- Replace `$YOUR_EC2_KEYPAIR` in `${BASEDIR}/infra-aws/compute.json` with the name of your EC2 keypair. This will allow you to SSH into the EC2 instance, which grants you access to the ECS Docker container runtimes, by running `ssh -i /path/to/file.pem ec2-user@$YOUR_EC2_IPV4_ADDRESS`.
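  Once connected, a quick way to confirm the container runtimes are visible (assuming the ECS-optimized AMI's bundled Docker daemon) is:

  ```bash
  # Run on the EC2 instance after SSHing in; the ECS agent and the
  # custom Postgres container should both appear in the list.
  docker ps
  ```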
- To create the custom database / app compute node, run:

  ```bash
  $ make create-compute
  ```

- To deploy changes to the custom database / app compute node, run:

  ```bash
  $ make deploy-compute
  ```

- To tear down the custom database / app compute node, run:

  ```bash
  $ make terminate-compute
  ```

- To wait until the custom database / app compute node has been successfully created, run:

  ```bash
  $ make wait-compute
  ```

- To connect to the custom database instance using `psql`, run:

  ```bash
  make compute-psql
  ```
Assuming that all commands are run from `${BASEDIR}/infra-aws/`.
- In your RDS instance (`make rds-psql`), create a table, and insert some data into it:

  ```
  $ make rds-psql
  psql (12.3 (Ubuntu 12.3-1.pgdg20.04+1), server 11.6)
  SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
  Type "help" for help.

  pgdb=> CREATE TABLE test (col1 VARCHAR(256), col2 VARCHAR(256));
  CREATE TABLE
  pgdb=> INSERT INTO test (col1, col2) VALUES ('cat', 'dog');
  INSERT 0 1
  pgdb=>
  ```
- In your custom database (`make compute-psql`), create a `postgres_fdw` extension:

  ```
  $ make compute-psql
  psql (12.3 (Ubuntu 12.3-1.pgdg20.04+1))
  Type "help" for help.

  mydb=# CREATE EXTENSION postgres_fdw;
  CREATE EXTENSION
  ```
- In your custom database (`make compute-psql`), create a server referencing your RDS instance:

  ```
  mydb=# CREATE SERVER origindb FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '$YOUR_RDS_HOST', port '5432', dbname '$YOUR_RDS_DB');
  CREATE SERVER
  ```

  Where `$YOUR_RDS_HOST` is the endpoint for your specific RDS instance (findable via the RDS console), and `$YOUR_RDS_DB` is the parameter `PostgresDBName` invoked in `rds.yaml`.
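  If you prefer the CLI over the console, one way to look up the endpoint (this simply lists every instance's endpoint in the current region) is:

  ```bash
  aws rds describe-db-instances --query "DBInstances[*].Endpoint.Address" --output text
  ```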
- In your custom database (`make compute-psql`), create a user mapping to map permissions between your user on the custom database and the user on the origin database:

  ```
  mydb=# CREATE USER MAPPING FOR $CUSTOMDB_USERNAME SERVER origindb OPTIONS (user '$RDS_USERNAME', password '$RDS_PASSWORD');
  CREATE USER MAPPING
  ```

  Where `$CUSTOMDB_USERNAME` is parameter `EnvVarPostgresUser` defined in `compute.yaml`, `$RDS_USERNAME` is parameter `DatabaseUsername` defined in `rds.yaml`, and `$RDS_PASSWORD` is parameter `DatabasePassword` defined in `rds.yaml`.
- In your custom database (`make compute-psql`), create a foreign table referencing the table in the origin database:

  ```
  mydb=# CREATE FOREIGN TABLE test (col1 VARCHAR(256), col2 VARCHAR(256)) SERVER origindb OPTIONS (schema_name 'public', table_name 'test');
  CREATE FOREIGN TABLE
  ```

  You should now be able to see the foreign table fetched correctly as part of the custom database:

  ```
  mydb=# SELECT * FROM test;
   col1 | col2
  ------+------
   cat  | dog
  (1 row)

  mydb=#
  ```
The PostgREST instance is set up on Heroku instead of AWS. You could probably set it up on AWS ECS as well, but attempting to do so surpassed my resource and time constraints for now, due to issues with inter-container networking and service discovery.

Also, since PostgREST simply lifts SQL into HTTP and does not hold any data, where it runs matters less; there's no lock-in whatsoever. You can continue to use your local system, or use any of the many deployment platforms available (Render, Netlify, Elastic Beanstalk, etc.) for this step.
- Create a Heroku account. You can do so on this Heroku page.
- Download the Heroku CLI. On Ubuntu, this might look like:

  ```bash
  sudo snap install --classic heroku
  ```
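  The CLI also needs to be authenticated against your account before it can create apps (if you haven't logged in before):

  ```bash
  heroku login
  ```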
- Create an app with the [`postgrest-heroku`](https://github.com/PostgREST/postgrest-heroku) buildpack:

  ```bash
  heroku apps:create --buildpack https://github.com/PostgREST/postgrest-heroku.git
  ```

  If you created an app already, switch to the new app by running:

  ```bash
  heroku git:remote -a $YOUR_NEW_HEROKU_APP
  ```
- Set configuration variables. It's very important to specify the config variable `POSTGREST_VER=0.5.0.0`, as it's hardcoded into the Heroku buildpack.

  ```bash
  heroku config:set POSTGREST_VER=0.5.0.0
  heroku config:set DB_URI=$YOUR_CUSTOMDB_URI
  heroku config:set DB_SCHEMA=$YOUR_CUSTOMDB_SCHEMA
  heroku config:set DB_ANON_ROLE=$YOUR_CUSTOMDB_ANON_ROLE
  ```

  You can find `$YOUR_CUSTOMDB_URI` by running the following command in your terminal, which should match parameter `$ECS_PSQL_CONN_URI` defined in `infra-aws/Makefile`:

  ```bash
  aws cloudformation describe-stacks --stack-name postgresasapp-compute --query "Stacks[0].Outputs[?OutputKey=='PostgresConnectionString'].OutputValue" --output text
  ```

  `$YOUR_CUSTOMDB_SCHEMA` should be `public`. `$YOUR_CUSTOMDB_ANON_ROLE` should be parameter `EnvVarPostgresUser` in `compute.yaml`.
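  As an optional shorthand (an assumption, not part of the steps above), you can wire the CloudFormation output straight into the Heroku config vars in one go:

  ```bash
  # Fetch the connection string output from the compute stack, then set all
  # PostgREST config vars in a single heroku invocation.
  DB_URI=$(aws cloudformation describe-stacks \
      --stack-name postgresasapp-compute \
      --query "Stacks[0].Outputs[?OutputKey=='PostgresConnectionString'].OutputValue" \
      --output text)
  heroku config:set POSTGREST_VER=0.5.0.0 DB_URI="$DB_URI" DB_SCHEMA=public DB_ANON_ROLE=$YOUR_CUSTOMDB_ANON_ROLE
  ```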
- Push the Heroku app to master:

  ```bash
  git push heroku master
  ```
- Open the app:

  ```bash
  heroku open
  ```
- Go to `/test` in order to see your table. It should print out:

  ```json
  [
    {
      "col1": "cat",
      "col2": "dog"
    }
  ]
  ```
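  For example, from the terminal (assuming the default `herokuapp.com` domain Heroku assigns to the app):

  ```bash
  curl -s https://$YOUR_NEW_HEROKU_APP.herokuapp.com/test
  ```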