From URL to Full Site Audit in Minutes
ScreamingCAT is a desktop application that crawls your website like a search engine would — following links, analyzing pages, and flagging issues. Here’s how it works.
01
Configure Your Crawl
Paste your starting URL and adjust settings to match your needs:
- Crawl depth — how many link levels deep to go
- Speed controls — number of threads and max requests per second
- User-Agent — crawl as Googlebot, Bingbot, a real browser, or a custom agent
- URL filters — include or exclude URL patterns using regex
- Rendering mode — text-only (fast) or JavaScript rendering (headless Chrome)
- Robots.txt — respect, ignore, or report-only mode
- Authentication — log into password-protected areas before crawling
You can also upload a URL list instead of crawling from a starting page.
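The settings above can be pictured as a simple configuration object. This is an illustrative sketch, not ScreamingCAT's actual settings schema; the field names and the `url_allowed` helper are assumptions made for the example.

```python
import re

# Hypothetical crawl configuration. Field names are illustrative,
# not ScreamingCAT's real config format.
config = {
    "start_url": "https://example.com/",
    "max_depth": 3,              # how many link levels deep to go
    "threads": 8,                # concurrent workers
    "max_requests_per_sec": 10,  # politeness throttle
    "user_agent": "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "include": r"^https://example\.com/blog/",  # regex URL filters
    "exclude": r"\?sessionid=",
    "render_js": False,          # text-only crawl (fast)
    "robots_txt": "respect",     # or "ignore" / "report-only"
}

def url_allowed(url: str, cfg: dict) -> bool:
    """Apply the include/exclude regex filters to a candidate URL."""
    if cfg["include"] and not re.search(cfg["include"], url):
        return False
    if cfg["exclude"] and re.search(cfg["exclude"], url):
        return False
    return True
```

With this config, only blog URLs without a `sessionid` query parameter would be queued for crawling.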
02
Crawl and Extract
Once you start the crawl, the multi-threaded Rust engine begins visiting pages concurrently. For each URL, it extracts:
- HTTP response code and response time
- Page title, meta description, and H1/H2 tags
- Canonical URL and meta robots directives
- Word count, text ratio, and page size
- Internal and external link counts
- Images and missing alt text
- Structured data (JSON-LD) with validation
- Open Graph and Twitter Card tags
- Hreflang tags for international SEO
- Security headers (HSTS, CSP, X-Frame-Options)
- CSS, JavaScript, and inline resource counts
If JavaScript rendering is enabled, ScreamingCAT uses headless Chrome to fully render each page before extracting data — so you see what search engines see.
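To make the extraction step concrete, here is a minimal sketch of pulling a few of the fields above out of a page with Python's standard-library HTML parser. It is a toy illustration, not ScreamingCAT's Rust engine, and real-world parsing handles many more edge cases.

```python
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    """Toy per-page extraction: title, meta description, H1 count,
    and images missing alt text."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1_count = 0
        self.images_missing_alt = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

html = """<html><head><title>Example</title>
<meta name="description" content="A demo page"></head>
<body><h1>Hello</h1><img src="a.png"><img src="b.png" alt="B"></body></html>"""
page = PageExtractor()
page.feed(html)
```

Running the parser over this sample flags one image missing alt text while the image that carries `alt="B"` passes.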
03
Detect Issues Automatically
ScreamingCAT applies over 60 automated checks to every crawled page and categorizes issues by type and severity:
- Critical — broken pages (4xx, 5xx), server errors, no response
- Warning — missing titles, duplicate meta descriptions, redirect chains
- Notice — generic anchor text, excessive DOM depth, low text ratio
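A severity check boils down to rules applied per page. The sketch below shows the shape of such rules; the thresholds and field names are assumptions for illustration, not ScreamingCAT's exact check definitions.

```python
# Illustrative severity rules; thresholds are assumptions, not
# ScreamingCAT's actual check definitions.
def classify_page(page: dict) -> list:
    """Return (severity, issue) pairs for one crawled page record."""
    issues = []
    if page.get("status", 200) >= 400:
        issues.append(("critical", f"broken page ({page['status']})"))
    if not page.get("title"):
        issues.append(("warning", "missing title"))
    if page.get("redirect_hops", 0) > 1:
        issues.append(("warning", "redirect chain"))
    if page.get("text_ratio", 1.0) < 0.1:
        issues.append(("notice", "low text ratio"))
    return issues

result = classify_page({"status": 404, "title": "", "text_ratio": 0.05})
```

A page that 404s, has no title, and is nearly all markup would surface one issue at each severity level.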
04
Enrich with Google Data (Optional)
Connect your Google accounts to layer real performance data onto crawl results:
- PageSpeed Insights — Lighthouse scores and Core Web Vitals (LCP, FCP, CLS) for each URL
- Google Search Console — clicks, impressions, CTR, and average position
- Google Analytics 4 — sessions, users, bounce rate, and conversions
This lets you prioritize fixes based on actual traffic impact, not just technical severity.
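Traffic-weighted prioritization can be as simple as joining issue counts with click data. The numbers and the ranking formula below are made up for illustration; in practice the click figures would come from the Search Console API.

```python
# Sketch of prioritizing fixes by traffic impact. Data is invented;
# real clicks would come from Google Search Console.
crawl_issues = {
    "https://example.com/a": 3,   # issue count per URL
    "https://example.com/b": 5,
    "https://example.com/c": 1,
}
gsc_clicks = {
    "https://example.com/a": 1200,
    "https://example.com/b": 40,
    "https://example.com/c": 9000,
}

# Rank by clicks x issues: a lightly broken page with heavy traffic
# can outrank a badly broken page nobody visits.
priority = sorted(
    crawl_issues,
    key=lambda u: gsc_clicks.get(u, 0) * crawl_issues[u],
    reverse=True,
)
```

Here `/c` ranks first despite having only one issue, because it carries by far the most clicks.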
05
Analyze and Visualize
Results Table
A fast, virtualized table displaying all crawled URLs with every data point. Sort, filter, and search across columns.
Issues Panel
See all detected issues in one view. Filter by severity or category. Each issue links to the affected URLs.
Site Visualizations
Crawl Tree, Site Graph, and Crawl Depth Chart — see your site structure at a glance.
06
Export and Compare
Export your data as CSV, Excel (XLSX), or XML sitemap. Save crawl snapshots and compare any two to detect added URLs, removed URLs, and changed metadata. Save your entire crawl as a .sccat project file to reopen later.
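Snapshot comparison reduces to set operations over the two crawls' URL sets plus a field-level diff on the overlap. The snapshot shape below is an assumption for the sketch, not the `.sccat` file format.

```python
# Sketch of diffing two crawl snapshots: added URLs, removed URLs,
# and changed metadata. Snapshot shape is illustrative only.
old = {
    "https://example.com/":    {"title": "Home"},
    "https://example.com/old": {"title": "Old"},
}
new = {
    "https://example.com/":    {"title": "Home, v2"},
    "https://example.com/new": {"title": "New"},
}

added = sorted(set(new) - set(old))
removed = sorted(set(old) - set(new))
changed = sorted(u for u in set(old) & set(new) if old[u] != new[u])
```

Between these two snapshots, one URL was added, one removed, and the homepage's title changed.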