This tool restores failed or handled requests to a pristine state, letting you retry them cleanly without leftover retry counts or error states. It reliably recovers incomplete runs, giving every request a fresh second attempt. Designed for workflows that demand consistency, accuracy, and complete data coverage.
Created by Bitbash, built to showcase our approach to Scraping and Automation!
If you are looking for rebirth-failed-requests you've just found your team — Let’s Chat. 👆👆
The Rebirth Failed Requests Scraper resets previously failed requests so they can be reprocessed from scratch. By returning items to an unhandled and retry-free state, it ensures you can efficiently resurrect incomplete executions without manually managing request queues.
- Scans historical runs and identifies failed or handled requests.
- Reinitializes requests by clearing retry counters and error messages.
- Supports scanning by run IDs or task ID combined with time-based filters.
- Produces clean, fresh queues ready for a full retry cycle.
- Works seamlessly with workflows that rely on proper state continuity.
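The core "rebirth" step described above can be sketched as a simple transformation: clear the retry counter, wipe recorded errors, and drop the handled marker so the queue treats the request as brand new. This is a minimal illustration, assuming a request record with the fields shown in the output table; the `handledAt` field is a hypothetical stand-in for however the platform marks a request as handled.

```typescript
// Hypothetical request record shape, based on the output fields in this README.
interface QueuedRequest {
  requestId: string;
  retryCount: number;
  errorMessages: string[];
  handledAt?: string; // assumption: set once the platform marks the request handled
}

// Return a pristine copy: retries cleared, errors wiped, unhandled again.
function rebirthRequest(req: QueuedRequest): QueuedRequest {
  return {
    requestId: req.requestId,
    retryCount: 0,
    errorMessages: [],
    // handledAt intentionally omitted so the request counts as unhandled
  };
}
```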
| Feature | Description |
|---|---|
| Full failed-request detection | Automatically identifies requests with retry flags or recorded error messages. |
| Clean state rebirth | Converts qualified requests into pristine, unhandled queue entries. |
| Run scanning by ID | Point directly to previous runs to extract and reset failed requests. |
| Time-based task scanning | Select ranges of task executions to process multiple runs at once. |
| Build override support | Optionally execute retries using the latest build instead of the original. |
| Automated resurrection | Allows retrying runs with configurable concurrency for stability. |
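The scanning options in the table above (run IDs, task ID with time filters, build override, concurrency) can be pictured as a single input object. This is a hedged sketch only; the field names here are assumptions for illustration, not the actor's actual input schema.

```typescript
// Assumed input shape illustrating the options described in the feature table.
interface RebirthInput {
  runIds?: string[];        // scan specific runs directly by ID
  taskId?: string;          // or scan all runs belonging to a task...
  dateFrom?: string;        // ...optionally narrowed to a time range (ISO 8601)
  dateTo?: string;
  useLatestBuild?: boolean; // retry with the latest build instead of the original
  maxConcurrency?: number;  // cap parallel resurrections for stability
}

// Basic validation of the assumed schema: you must point the scanner at
// either explicit runs or a task, and date filters only make sense per task.
function validateInput(input: RebirthInput): string | null {
  if (!input.runIds?.length && !input.taskId) {
    return "Provide either runIds or a taskId to scan.";
  }
  if ((input.dateFrom || input.dateTo) && !input.taskId) {
    return "Date filters only apply when scanning by taskId.";
  }
  return null; // input is usable
}
```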
| Field Name | Field Description |
|---|---|
| requestId | Unique identifier of each request being restored. |
| retryCount | Number of retries recorded before rebirth. |
| errorMessages | Captured error messages associated with the failed request. |
| runId | The run from which the request was extracted. |
| status | Whether the request was rebirthed into pristine state. |
```json
[
  {
    "requestId": "abc123",
    "retryCount": 3,
    "errorMessages": ["Timeout exceeded"],
    "runId": "run_xyz",
    "status": "rebirthed"
  }
]
```
```
Rebirth failed requests/
├── src/
│   ├── index.ts
│   ├── services/
│   │   ├── queueScanner.ts
│   │   ├── rebirthEngine.ts
│   │   └── runFetcher.ts
│   ├── utils/
│   │   ├── logger.ts
│   │   └── validation.ts
│   └── config/
│       └── settings.example.json
├── data/
│   ├── sample-runs.json
│   └── failed-requests.json
├── package.json
├── tsconfig.json
└── README.md
```
- Data engineers use it to recover incomplete scraping batches so they can ensure full dataset accuracy.
- Automation teams use it to prevent wasted compute cycles by retrying only the necessary failed requests.
- QA analysts use it to validate resilience and reliability of large-scale request-driven workflows.
- Developers use it to maintain long-running processes that require stable retry mechanisms for critical tasks.
Q: What happens if a request was manually modified earlier? A: If retryCount or errorMessages were altered outside the default workflow, detection may be impacted and the request may not be rebirthed correctly.
Q: Can this tool process multiple runs at once? A: Yes. When using a task ID, you can optionally apply date filters to scan and rebirth all related runs in a chosen timeframe.
Q: Will this overwrite the existing build used for the run? A: Only if you choose to override it. Otherwise, it will use the original build associated with the execution.
Q: Does this work without a request queue? A: No, a queue is required since request lists do not support the necessary state tracking for rebirth operations.
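The first FAQ answer hinges on the detection rule: a request qualifies for rebirth when it carries a nonzero retry count or any recorded error messages, which is why manually altered fields can confuse detection. A minimal sketch of that predicate, using the field names from the output table (the exact logic is an assumption):

```typescript
// A request is considered failed/handled and eligible for rebirth when it
// has been retried at least once or has recorded error messages.
function needsRebirth(req: { retryCount: number; errorMessages: string[] }): boolean {
  return req.retryCount > 0 || req.errorMessages.length > 0;
}
```

This also explains the manual-modification caveat: if someone zeroes out `retryCount` and clears `errorMessages` by hand, the request looks pristine already and is skipped.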
Primary Metric: Restores an average of 10,000+ failed requests per minute, depending on queue size and environment throughput.
Reliability Metric: Demonstrated over 99% correct rebirth detection accuracy during repeated benchmark tests across various run histories.
Efficiency Metric: Maintains low memory overhead by scanning queues in controlled batches and supporting concurrency limits.
Quality Metric: Delivers near-complete data recovery by returning each rebirthed request to a pristine state identical to its condition before any retry.
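The efficiency claim above rests on scanning in controlled batches rather than loading an entire queue at once. The sketch below illustrates the batching pattern under simplified assumptions: requests arrive as an in-memory array (the real actor reads from a request queue), and the batch size doubles as the concurrency limit.

```typescript
// Process items in fixed-size batches so that at most `batchSize` handlers
// run concurrently and memory stays bounded by one batch at a time.
async function scanInBatches<T>(
  items: T[],
  batchSize: number,
  handle: (item: T) => Promise<void>
): Promise<number> {
  let processed = 0;
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    await Promise.all(batch.map(handle)); // at most batchSize in flight
    processed += batch.length;
  }
  return processed;
}
```

Waiting for each batch before starting the next trades a little throughput for a hard cap on concurrent work, which matches the stability goal stated in the feature table.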
