This guide covers deploying the Text-to-SQL application with a FastAPI backend and Streamlit frontend.
```
┌─────────────────┐     HTTP Requests      ┌──────────────────┐
│    Streamlit    │ ─────────────────────> │     FastAPI      │
│    Frontend     │                        │     Backend      │
│   (UI Layer)    │ <───────────────────── │   (API Layer)    │
└─────────────────┘     JSON Responses     └──────────────────┘
         │                                          │
         v                                          v
  Streamlit Cloud                       Render/Railway/Fly.io
  (Free hosting)                        (Free tier available)
                                                    │
                                                    v
                                           SQLite Database
                                              OpenAI API
```
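The contract between the two layers is a small JSON-over-HTTP surface. As a minimal sketch, assuming the backend exposes a `POST /query` endpoint taking `{"question": ...}` (the shape the curl test later in this guide uses) and that the frontend reads the backend address from `API_URL`:

```python
import json
import os
import urllib.request

# Base URL of the FastAPI backend; the frontend reads this from API_URL.
API_URL = os.getenv("API_URL", "http://localhost:8000")


def build_query_request(question: str) -> urllib.request.Request:
    """Build the JSON POST the Streamlit frontend sends to the backend."""
    payload = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        f"{API_URL}/query",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    # Requires a running backend; prints the JSON response body.
    with urllib.request.urlopen(build_query_request("How many customers do we have?")) as resp:
        print(json.loads(resp.read()))
```

The same round trip works against any of the three hosting options below, since only `API_URL` changes.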
You have three options for deploying the FastAPI backend: Render, Railway, or Fly.io.

Why Render?
- Free tier with persistent storage
- Easy setup with `.deploy/render.yaml`
- Automatic deployments from Git
Steps:

1. Create a Render account: https://render.com

2. Create a new Web Service:
   - Click "New +" → "Web Service"
   - Connect your GitHub repository
   - Render will auto-detect `.deploy/render.yaml`

3. Set environment variables:
   - Go to the "Environment" tab
   - Add `OPENAI_API_KEY` with your key

4. Deploy:
   - Render will deploy automatically
   - Your API will be at: `https://your-app-name.onrender.com`

5. Test your API:

   ```bash
   curl https://your-app-name.onrender.com/health
   ```
Important Notes:
- Free tier spins down after 15 minutes of inactivity
- First request after sleep takes ~30 seconds
- Database persists on a 1GB disk
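Because of the spin-down, a bare `curl` issued right after a quiet period can time out before the service wakes. A small polling sketch that tolerates the ~30-second cold start (the URL is a placeholder for your own service name):

```python
import time
import urllib.error
import urllib.request


def backoff_schedule(total_seconds: int = 45, step: int = 5) -> list[int]:
    """Retry delays that comfortably cover Render's ~30s cold start."""
    return [step] * (total_seconds // step)


def wait_until_healthy(url: str) -> bool:
    """Poll the /health endpoint, tolerating the free tier's wake-up delay."""
    for delay in backoff_schedule():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, TimeoutError):
            pass  # Service is still waking up; wait and retry.
        time.sleep(delay)
    return False


if __name__ == "__main__":
    # Placeholder URL; substitute your own Render service name.
    print(wait_until_healthy("https://your-app-name.onrender.com/health"))
```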
Why Railway?
- Very fast deployments
- Generous free tier ($5/month credits)
- Great developer experience
Steps:

1. Create a Railway account: https://railway.app

2. Create a new project:
   - Click "New Project" → "Deploy from GitHub repo"
   - Select your repository

3. Railway auto-detects the config from `.deploy/railway.json`

4. Set environment variables:
   - Go to the "Variables" tab
   - Add `OPENAI_API_KEY`
   - Railway auto-generates `PORT`

5. Generate a domain:
   - Go to "Settings" → "Generate Domain"
   - Your API will be at: `https://your-app.railway.app`

6. Test:

   ```bash
   curl https://your-app.railway.app/health
   ```
Why Fly.io?
- Best for global deployments
- Runs on edge locations
- Free tier includes 3 VMs
Steps:

1. Install the Fly CLI:

   ```bash
   # macOS
   brew install flyctl

   # Linux
   curl -L https://fly.io/install.sh | sh
   ```

2. Log in:

   ```bash
   fly auth login
   ```

3. Initialize:

   ```bash
   fly launch
   ```

4. Set secrets:

   ```bash
   fly secrets set OPENAI_API_KEY=your_key_here
   ```

5. Deploy:

   ```bash
   fly deploy
   ```

6. Your API will be at: `https://your-app.fly.dev`
Before deploying the Streamlit frontend, you need to enable CORS on your FastAPI backend so it can accept requests from the Streamlit app.
Edit `app/main.py`:
```python
from fastapi import FastAPI, HTTPException, status
from fastapi.middleware.cors import CORSMiddleware  # Add this import
import os

app = FastAPI(
    title="Text-to-SQL API",
    description="Convert natural language questions into SQL queries.",
    version="0.1.0",
)

# Add CORS middleware (configure via environment variable)
app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        origin.strip()
        for origin in os.getenv("CORS_ALLOW_ORIGINS", "http://localhost:8501,https://*.streamlit.app").split(",")
        if origin.strip()
    ],
    # Note: entries in allow_origins are matched literally, so a wildcard like
    # https://*.streamlit.app never matches a real subdomain. Use a regex for that:
    allow_origin_regex=r"https://.*\.streamlit\.app",
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Rest of your code...
```

You can set allowed origins by exporting `CORS_ALLOW_ORIGINS` as a comma-separated list:

```bash
# Local development defaults (already applied when unset)
export CORS_ALLOW_ORIGINS="http://localhost:8501,https://*.streamlit.app"

# Example: restrict to your deployed Streamlit app
export CORS_ALLOW_ORIGINS="https://your-app-name.streamlit.app"
```

Commit and push this change before deploying Streamlit.
1. Push your code to GitHub (if not already):

   ```bash
   git add streamlit_app/
   git commit -m "Add Streamlit frontend"
   git push
   ```

2. Go to Streamlit Cloud

3. Sign in with GitHub

4. Click "New app"

5. Configure the deployment:
   - Repository: `your-username/text2sql`
   - Branch: `main`
   - Main file path: `streamlit_app/streamlit_app.py`

6. Advanced settings → Environment variables:
   - `API_URL=https://your-fastapi-backend.onrender.com` (replace with your actual backend URL from Part 1)

7. Click "Deploy"

8. Wait for deployment (~2-3 minutes)

9. Your app will be live at: `https://your-app-name.streamlit.app`
1. Test the backend directly:

   ```bash
   curl -X POST "https://your-backend.onrender.com/query" \
     -H "Content-Type: application/json" \
     -d '{"question": "How many customers do we have?"}'
   ```

2. Test the Streamlit app:
   - Visit `https://your-app.streamlit.app`
   - Check the sidebar: "API Status" should show "✅ API is healthy"
   - Try an example question

3. If the API status shows red:
   - Check your `API_URL` environment variable in Streamlit Cloud
   - Verify CORS is configured on the backend
   - Check the backend logs on Render/Railway
| Service | Free Tier | Limitations |
|---|---|---|
| Render | 750 hours/month | Spins down after 15 min inactivity, 1GB storage |
| Railway | $5/month credits | No spin-down, usage-based billing |
| Fly.io | 3 shared VMs | 160GB data transfer/month |
| Streamlit Cloud | Unlimited apps | Community tier, 1GB RAM per app |
| OpenAI API | Pay-as-you-go | GPT-4o-mini: ~$0.15 per 1M input tokens |
Monthly Cost Estimate (Low Usage):
- Backend: $0 (free tier)
- Frontend: $0 (Streamlit free)
- OpenAI API: $1-5 (depends on usage)
Total: ~$1-5/month for light usage
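The OpenAI line item can be roughed out from token counts. A sketch of the arithmetic, where the per-query token figures and the output-token price are illustrative assumptions (only the $0.15/1M input price comes from the table above):

```python
# Back-of-the-envelope OpenAI cost estimate (assumed figures, not quoted rates).
PRICE_INPUT_PER_M = 0.15   # $ per 1M input tokens (gpt-4o-mini, per table above)
PRICE_OUTPUT_PER_M = 0.60  # $ per 1M output tokens (assumed)


def monthly_cost(queries: int, in_tokens: int = 800, out_tokens: int = 150) -> float:
    """Estimate monthly spend for a given query volume and per-query token usage."""
    input_cost = queries * in_tokens / 1_000_000 * PRICE_INPUT_PER_M
    output_cost = queries * out_tokens / 1_000_000 * PRICE_OUTPUT_PER_M
    return round(input_cost + output_cost, 2)


# ~3000 queries/month with a schema-heavy prompt stays well under $1:
print(monthly_cost(3000))
```

Even generous usage keeps the API bill inside the $1-5 range estimated above.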
Render:
- Dashboard: https://dashboard.render.com
- View logs: Click on service → "Logs" tab
- Metrics: CPU, Memory, Requests
Railway:
- Dashboard: https://railway.app/dashboard
- View logs: Click project → "Deployments" → "View Logs"
- Metrics: Memory, Network, Build times
Fly.io:

```bash
fly logs
fly status
```

Streamlit Cloud:
- Dashboard: https://streamlit.io/cloud
- View logs: Click app → "Manage app" → "Logs"
- Reboot app: "Reboot app" button
Set up automated monitoring with UptimeRobot (free):

- Create an account
- Add monitors for:
  - Backend health: `https://your-backend.onrender.com/health`
  - Frontend: `https://your-app.streamlit.app`
- Get email alerts if services go down
Render/Railway (auto-deploy enabled):

```bash
git add .
git commit -m "Update backend"
git push  # Automatically deploys
```

Fly.io:

```bash
git add .
git commit -m "Update backend"
git push
fly deploy
```

Streamlit Cloud (auto-deploy):

```bash
git add streamlit_app/
git commit -m "Update frontend"
git push  # Automatically deploys in ~2 minutes
```
Problem: Database not persisting
- Render: Check that the disk is mounted at `/opt/render/project/src/data`
- Railway: Add a volume in the settings
- Fly.io: Configure persistent volumes

Problem: 502 Bad Gateway
- Check the logs for errors
- Verify the `PORT` environment variable is used
- Ensure the app starts successfully

Problem: OpenAI API errors
- Verify `OPENAI_API_KEY` is set correctly
- Check that your OpenAI account has credits
- Review the rate limits

Problem: "API is unreachable"
- Check the `API_URL` environment variable
- Verify CORS is enabled on the backend
- Test the backend URL directly in a browser

Problem: Slow loading
- Free-tier backends spin down (the first request takes ~30 seconds)
- Consider upgrading to a paid tier for always-on service
- Add caching to the backend
Problem: App crashes
- Check the Streamlit Cloud logs
- Verify `requirements.txt` lists all dependencies
- Ensure `streamlit_app.py` has no syntax errors
Before going to production:
- `OPENAI_API_KEY` is stored as an environment variable (not in code)
- CORS origins are restricted (not `allow_origins=["*"]`)
- API has rate limiting (consider adding to FastAPI)
- SQLite is not exposed publicly (it's internal to the backend)
- HTTPS is enabled (automatic on Render/Railway/Fly.io/Streamlit)
- Error messages don't leak sensitive info
- Add authentication if needed (OAuth, API keys, etc.)
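One concrete way to keep error messages from leaking internals is to log the real exception server-side and return only a generic body to the client. A sketch under stated assumptions (`sanitize_error` and the message wording are introduced here, not the app's current code):

```python
import logging

logger = logging.getLogger("text2sql")


def sanitize_error(exc: Exception) -> dict:
    """Log the real exception server-side; return a generic client-facing body."""
    logger.error("Unhandled error: %r", exc)
    return {"detail": "Internal server error"}


# Wired into FastAPI it would look roughly like:
#
#   @app.exception_handler(Exception)
#   async def handler(request, exc):
#       return JSONResponse(status_code=500, content=sanitize_error(exc))
```

This keeps stack traces, file paths, and SQL fragments in the service logs (where Render/Railway/Fly.io surface them) rather than in API responses.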
1. Custom domain (optional):
   - Render: Settings → Custom Domain
   - Streamlit: Not available on the free tier

2. Analytics (optional):
   - Add Google Analytics to the Streamlit app
   - Track API usage with middleware

3. Scaling:
   - Upgrade to paid tiers when needed
   - Consider PostgreSQL for the production database
   - Add a caching layer (Redis) for frequently asked questions

4. CI/CD:
   - Your GitHub Actions already test the backend
   - Add Streamlit tests if needed
- Backend issues: Check GitHub Actions logs and backend service logs
- Frontend issues: Check Streamlit Cloud logs
- API issues: Review the FastAPI docs at the `/docs` endpoint
While the FastAPI stack is production-ready today, Phase 1.5 adds containerization + DevOps plumbing for the NestJS backend so it can mature in parallel.
1. Copy an env template:

   ```bash
   cp packages/backend/env-templates/.env.example packages/backend/.env.local
   ```

2. Start the TypeScript profile:

   ```bash
   docker compose --profile typescript up ts-postgres ts-backend
   ```

3. Hit the NestJS API: http://localhost:3000/health

4. Provide AI credentials by editing `.env.local`:
   - `AI_PROVIDER_DEFAULT=openai|anthropic|custom`
   - `OPENAI_API_KEY`, `OPENAI_MODEL`, `OPENAI_TEMPERATURE`
   - `ANTHROPIC_API_KEY`, `ANTHROPIC_MODEL`, `ANTHROPIC_TEMPERATURE`
   - `CUSTOM_API_KEY`, `CUSTOM_API_BASE_URL`

Each provider also supports `*_BASE_URL` and `*_MAX_TOKENS`. See `docs/ai-provider-layer.md` for the full matrix.
Services included in the profile:

| Service | Purpose | Notes |
|---|---|---|
| `ts-postgres` | PostgreSQL 15 w/ volume | Stores data in the `ts-postgres-data` volume |
| `ts-backend` | NestJS API with hot reload | Binds source directories for live editing |
| `ts-frontend` | Placeholder container | Keeps env wiring ready for the Next.js app |
- `packages/backend/Dockerfile` is multi-stage:
  - `builder` for pnpm workspace install and compilation
  - `development` target for Compose (`pnpm --filter @text2sql/backend dev`)
  - `production` image (<200MB) with health checks and a non-root `appuser`
- `.github/workflows/ts-ci.yml` mirrors Python's rigor with linting, Vitest, coverage, build verification, environment validation, performance smoke tests, and a PR summary.
- Branch protection should require the `TypeScript Required Checks` job alongside the Python ones.
See `docs/typescript-devops.md` for the full breakdown of commands and troubleshooting tips.
Good luck with your deployment! 🚀