A new blog post covering each of the main components of this project can be found here:
http://thelastpickle.com/blog/2018/01/23/docker-meet-cassandra.html
```
git clone [email protected]:thelastpickle/docker-cassandra-bootstrap.git
cd docker-cassandra-bootstrap
cp .env.template .env
docker-compose build
```

If you would like to see a hosted log service interact seamlessly with this Docker Compose stack, sign up for Papertrail.
Then find your specific port number by looking at your Log Destinations and update your .env setting accordingly.
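As an illustration, the entry might look like the following. The variable names here are hypothetical, so copy the ones actually defined in `.env.template`:

```shell
# Hypothetical .env entries -- use the real variable names from .env.template.
# The host and port come from your Papertrail Log Destinations page.
PAPERTRAIL_HOST=logs0.papertrailapp.com
PAPERTRAIL_PORT=12345
```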
To reset the stack at any point:

```
# turn off all running Docker containers
docker-compose down

# delete any persistent data
rm -rf data/

# rebuild the images
docker-compose build
```

Start our Docker-integrated logging connector:
```
# start Docker logging connector
docker-compose up logspout

# view logging HTTP endpoint
curl http://localhost:8000/logs
```

Start Cassandra and set up the required schema:
```
# start Cassandra
docker-compose up cassandra

# view cluster status
docker-compose run nodetool status

# create schema
docker-compose run cqlsh -f /schema.cql

# confirm schema
docker-compose run cqlsh -e "DESCRIBE SCHEMA;"
```

Start Reaper for Apache Cassandra and monitor your new cluster:
```
# start Reaper for Apache Cassandra
docker-compose up cassandra-reaper

# open the Reaper web UI
open http://localhost:8080/webui/

# from the web UI: add a one-off repair
# from the web UI: add a scheduled repair
```

Start Prometheus and become familiar with the UI:
```
# start Prometheus
docker-compose up prometheus

# open the Prometheus UI
open http://localhost:9090
```

Start Grafana, connect it to the Prometheus data source, and upload the TLP Dashboards.
```
# start Grafana
docker-compose up grafana

# create the Prometheus data source
./grafana/bin/create-data-sources.sh

# open the Grafana UI (user/pass: admin/admin)
open http://localhost:3000

# upload the dashboards
./grafana/bin/upload-dashboards.sh
```

Generate a fake workforce and some activity:
```
docker-compose run pickle-factory
```

Sample timesheets:

```
docker-compose run pickle-shop
```
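With the sample workload running, the cluster can also be watched from the command line as well as from Grafana. A minimal sketch using Prometheus's standard HTTP query API (`up` is a built-in metric reporting scrape-target health; this assumes the Prometheus container from this stack is listening on localhost:9090):

```shell
# Ask Prometheus which of its scrape targets are up (1) or down (0).
# Falls back to a message if the stack is not currently running.
PROM_URL="http://localhost:9090/api/v1/query"
curl -s "${PROM_URL}?query=up" || echo "Prometheus is not reachable"
```

The same query can be plotted over time at http://localhost:9090/graph.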