Move deployment info to deployment.md (webrecorder#159)
* add deployment instructions
ikreymer authored Feb 24, 2022
1 parent 3fe3691 commit 92878de
Showing 3 changed files with 47 additions and 35 deletions.
43 changes: 43 additions & 0 deletions Deployment.md
@@ -0,0 +1,43 @@
# Deploying Browsertrix Cloud

Currently, Browsertrix Cloud can be deployed with both Docker and Kubernetes.

Some basic instructions are provided below; we plan to expand this into a more detailed tutorial in the future.

## Deploying to Docker

For testing out Browsertrix Cloud on a single, local machine, the Docker Compose-based deployment is recommended.

Docker Compose is required.

To deploy via a local Docker instance, copy `config.sample.env` to `config.env`.

Then, run `docker-compose build; docker-compose up -d` to launch.

To update/relaunch, use `./docker-restart.sh`.

The API should be available at: `http://localhost:8000/docs`
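
A minimal end-to-end sketch of the steps above (assuming you are in the repository root, `config.sample.env` is in your working directory, and Docker Compose is installed):

```sh
# Copy the sample configuration and adjust values as needed
cp config.sample.env config.env

# Build the images and start the stack in the background
docker-compose build
docker-compose up -d

# The API docs should now be reachable at http://localhost:8000/docs
```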


Note: When deployed in local Docker, failed crawls are currently not retried. Scheduling is handled by a subprocess, which stores the active schedule in the DB.



## Deploying to Kubernetes

For deploying in the cloud and across multiple machines, the Kubernetes (k8s) deployment is recommended.

To deploy to K8s, `helm` is required. Browsertrix Cloud comes with a helm chart, which can be installed as follows:

`helm install -f ./chart/values.yaml btrix ./chart/`

This will create a `browsertrix-cloud` service in the default namespace.
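
To confirm the release came up, a quick check might look like this (a sketch, assuming `kubectl` is pointed at the same cluster as `helm`):

```sh
# List the Helm release created above
helm list

# The chart should have created a browsertrix-cloud service in the default namespace
kubectl get svc browsertrix-cloud

# Check that the pods for the release are running
kubectl get pods
```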

For a quick update, the following is recommended:

`helm upgrade -f ./chart/values.yaml btrix ./chart/`


Note: When deployed in Kubernetes, failed crawls are automatically retried. Scheduling is handled via Kubernetes CronJobs, and crawl jobs are run in the `crawlers` namespace.
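
Since crawl jobs run in a separate namespace, inspecting them requires passing that namespace explicitly. A sketch (the `crawlers` namespace comes from the note above; the actual CronJob and job names depend on your configured crawl schedules):

```sh
# Scheduled crawls are managed as Kubernetes CronJobs
kubectl get cronjobs --all-namespaces

# Running crawl jobs live in the crawlers namespace
kubectl get jobs -n crawlers
kubectl get pods -n crawlers
```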


38 changes: 3 additions & 35 deletions README.md
@@ -14,44 +14,12 @@ The system is designed to run equally in Kubernetes and Docker.

See [Features](https://browsertrix.cloud/features) for a high-level list of planned features.

## Deployment

## Deploying to Docker
See the [Deployment](Deployment.md) page for information on how to deploy Browsertrix Cloud.

For testing out Browsertrix Cloud on a single, local machine, the Docker Compose-based deployment is recommended.

Docker Compose is required.

To deploy via a local Docker instance, copy `config.sample.env` to `config.env`.

Then, run `docker-compose build; docker-compose up -d` to launch.

To update/relaunch, use `./docker-restart.sh`.

The API should be available at: `http://localhost:8000/docs`


Note: When deployed in local Docker, failed crawls are currently not retried. Scheduling is handled by a subprocess, which stores the active schedule in the DB.



## Deploying to Kubernetes

For deploying in the cloud and across multiple machines, the Kubernetes (k8s) deployment is recommended.

To deploy to K8s, `helm` is required. Browsertrix Cloud comes with a helm chart, which can be installed as follows:

`helm install -f ./chart/values.yaml btrix ./chart/`

This will create a `browsertrix-cloud` service in the default namespace.

For a quick update, the following is recommended:

`helm upgrade -f ./chart/values.yaml btrix ./chart/`


Note: When deployed in Kubernetes, failed crawls are automatically retried. Scheduling is handled via Kubernetes CronJobs, and crawl jobs are run in the `crawlers` namespace.

## Status
## Development Status

Browsertrix Cloud is currently in pre-alpha stages and not ready for production. This is an ambitious project and there's a lot to be done!

1 change: 1 addition & 0 deletions build-backend.sh
@@ -0,0 +1 @@
docker buildx build --platform linux/amd64 --push -t registry.digitalocean.com/btrix/webrecorder/browsertrix-api ./backend/
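
For reference, here is the same `docker buildx` invocation with each flag spelled out, plus a hedged multi-architecture variant (the extra `linux/arm64` platform is an assumption, not something the script itself does):

```sh
# As in build-backend.sh: build the backend image for linux/amd64 and
# push it to the DigitalOcean registry after a successful build.
docker buildx build \
  --platform linux/amd64 \
  --push \
  -t registry.digitalocean.com/btrix/webrecorder/browsertrix-api \
  ./backend/

# Hypothetical multi-arch variant (assumes the backend image also builds on arm64)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --push \
  -t registry.digitalocean.com/btrix/webrecorder/browsertrix-api \
  ./backend/
```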
