We surface, rebuild, and release the tools merchants overpay for—free.
Option A (recommended):
- Run `droid`
- Enter `/settings`
- Toggle "Allow background processes" to true
- Wallet/store audit
- Rank apps by spend (highest → lowest)
- Select targets
- Scope definition (MVP vs parity)
- Success metrics (KPIs + acceptance)
- Source capture (landing, docs, demos, pricing, FAQs + tech stack docs via Context7 or other sources)
- Feature inventory
- Planning with AI (architecture, backlog, milestones)
- Building with AI
- Observability setup (optional: Sentry)
- Unit tests
- Integration tests
- Human QA / Acceptance testing
- Public launch
- Feedback loop + bug triage
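The wallet/store audit step above boils down to a spend ranking. A minimal sketch; the `subscriptions` records are made-up sample data standing in for a real app-store or billing export:

```python
# Rank apps by monthly spend, highest -> lowest, to pick rebuild targets.
# The entries below are hypothetical sample data, not real pricing.
subscriptions = [
    {"app": "Email Autoresponder", "monthly_usd": 49.0},
    {"app": "Review Widget", "monthly_usd": 19.0},
    {"app": "Upsell Popups", "monthly_usd": 79.0},
]

ranked = sorted(subscriptions, key=lambda s: s["monthly_usd"], reverse=True)
for rank, s in enumerate(ranked, 1):
    print(f"{rank}. {s['app']}: ${s['monthly_usd']:.0f}/mo")
```

The top of this list is the target selection shortlist; everything downstream (scope, KPIs, source capture) keys off it.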
We use the Ralph Loop to code while we sleep.
How we run it:
- Write a clear scope + acceptance criteria
- Loop build/test until acceptance passes or max iterations are reached
- Persist a summary + next actions for morning review
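The three steps above can be sketched as a single loop. This is a sketch under assumptions, not the real runner: `build_and_test` is a hypothetical stand-in for one agent work+test cycle, and the summary is persisted to a local JSON file for morning review:

```python
import json
import pathlib

def build_and_test(task, attempt):
    """Hypothetical stand-in for one agent build/test cycle.

    Returns (passed, note); here it fakes a pass on the third attempt.
    """
    return attempt >= 3, f"attempt {attempt}: worked on {task!r}"

def ralph_loop(task, max_iters=10):
    """Loop build/test until pass or max iterations; persist a summary."""
    notes = []
    for attempt in range(1, max_iters + 1):
        passed, note = build_and_test(task, attempt)
        notes.append(note)
        if passed:
            status = f"PASS after {attempt} iteration(s)"
            break
    else:
        status = f"STOPPED at max_iters={max_iters}"
    # Persist summary + next actions for morning review.
    pathlib.Path("loop_summary.json").write_text(
        json.dumps({"task": task, "status": status, "notes": notes}, indent=2)
    )
    return status

print(ralph_loop("wallet audit exporter"))
```

The `max_iters` cap is the safety valve: a blocked task burns a bounded number of cycles, then leaves a summary explaining where it stopped.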
Notes (distilled from sources below):
- Goal setting: one task per loop; define the desired end state; use checklist-style, testable success criteria; use a PRD/spec file for complex work; avoid exploratory tasks without clear tests.
- Exit criteria: reviewer gate (SHIP/REVISE) with tests passing; stop if blocked or if max iterations are reached.
- Loop steps: work phase reads the task + prior feedback, implements, and writes a summary; review phase verifies and returns SHIP or REVISE; each iteration resets context while state persists in files/git.
- Guardrails: rotate context before it gets polluted; record lessons/guardrails to prevent repeat failures; keep changes small and rerun the loop instead of producing massive diffs; watch the loop and fix recurring failure modes as they appear.
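The work/review cycle described in these notes might look like the following sketch. `work_phase` and `review_phase` are hypothetical stand-ins for the agent calls, and a plain variable stands in for the files/git state that persists while per-iteration context resets:

```python
MAX_ITERS = 5

def work_phase(task, feedback):
    """Hypothetical work phase: read task + prior feedback, write a summary."""
    done = feedback == "tests green"
    summary = "all acceptance criteria met" if done else "TODO: fix failing test"
    return summary, done

def review_phase(summary, done):
    """Reviewer gate: SHIP only when verification passes, else REVISE."""
    return "SHIP" if done and "TODO" not in summary else "REVISE"

feedback = None  # persisted state (files/git in practice); context resets each loop
for i in range(1, MAX_ITERS + 1):
    summary, done = work_phase("parity feature X", feedback)
    verdict = review_phase(summary, done)
    if verdict == "SHIP":
        break
    feedback = "tests green"  # stand-in for reviewer notes fed to the next loop
print(i, verdict)
```

The key property is that each iteration starts from the task plus persisted feedback, not from accumulated chat context, so a polluted context never survives past one loop.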