# CLI
## Installation

```sh
cargo install spidermedic
```

Or build from source:

```sh
git clone https://github.com/beekmanlabs/spidermedic
cd spidermedic
cargo build --release
```

## Usage

```sh
spidermedic --url=<URL> [OPTIONS]
```

### Flags
| Flag | Default | Description |
|---|---|---|
| `--url` | (required) | Starting URL to crawl |
| `--path` | `/` | Restrict crawl to this URL path prefix |
| `--port` | `80` | Port override |
| `--interval` | `300` | Milliseconds between requests |
| `--concurrency` | `10` | Maximum parallel in-flight requests |
| `--max-depth` | `0` | Maximum crawl depth (`0` = unlimited) |
| `--output` | `terminal` | Output format: `terminal`, `json`, or `csv` |
## Examples

```sh
# Crawl a site with default settings
spidermedic --url=https://example.com

# Faster crawl with more concurrency, limited to 3 levels deep
spidermedic --url=https://example.com --concurrency=20 --max-depth=3

# Only crawl the /docs section
spidermedic --url=https://example.com/docs --path=/docs

# JSON output piped to jq
spidermedic --url=https://example.com --output=json | jq '.[] | select(.status >= 400)'

# CSV output saved to a file
spidermedic --url=https://example.com --output=csv > results.csv
```

## Exit codes
| Code | Meaning |
|---|---|
| 0 | All crawled pages returned non-error responses |
| 1 | One or more 4xx / 5xx responses were found |
Use the exit code in shell scripts to gate deployments:
```sh
spidermedic --url=https://staging.example.com && deploy_to_production.sh
```
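When you want custom handling of a failed crawl rather than a bare `&&` chain, the same gate can be written as an explicit branch. A minimal sketch, with `false` standing in for a crawl that found errors (spidermedic itself is not invoked here):

```sh
#!/bin/sh
# Gate a deploy on the crawler's exit status. `false` stands in for a
# spidermedic run that found 4xx/5xx responses (exit code 1); in practice,
# replace it with: spidermedic --url=https://staging.example.com
if false; then
    echo "crawl clean: deploying"
else
    echo "crawl found errors: deploy skipped"
fi
```

Because spidermedic exits non-zero as soon as any 4xx/5xx response is recorded, this pattern works unchanged in CI pipelines that treat a non-zero step status as a failure.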