Technical SEO Audit Checklist: 60+ Points to Review
Stop using outdated technical SEO audit checklists. This guide has 60+ actionable points for pros, covering everything from crawl budget to Core Web Vitals.
Why Your Current Technical SEO Audit Checklist Is Probably Wrong
Let’s be direct. Most technical SEO audit checklists are either painfully basic or stuffed with outdated advice from 2015. They treat technical SEO like a paint-by-numbers exercise, which is why so many audits end up as expensive doorstops.
This is not that kind of checklist. This is a framework for thinking, built for technical SEOs, developers, and marketers who are tired of fluff. We’re going to cover the critical systems that dictate how search engines crawl, render, and index your website.
A proper audit requires a powerful crawler. While you can manually check a few pages, you can’t manually check 50,000. We built ScreamingCAT to be the fastest, most efficient way to gather the raw data you need. Grab that data, fire up this checklist, and let’s get to work. For a broader overview, see our guide on how to run a complete SEO audit.
The Foundational Technical SEO Audit Checklist: Crawling & Indexing
If search engines can’t find or understand your pages, nothing else matters. Crawling and indexing are the bedrock of technical SEO. Get this wrong, and you’re invisible.
1. Robots.txt Review: Is your `robots.txt` file blocking important resources? Check for overly aggressive `Disallow` directives that might be preventing crawlers from accessing critical CSS, JS, or entire site sections. A single misplaced slash can de-index your whole site.
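Point 1 can be spot-checked programmatically. A minimal sketch using Python’s standard-library robots.txt parser, with hypothetical rules and URLs:

```python
# Sketch: test whether specific URLs are blocked, using Python's stdlib
# robots.txt parser. The rules and URLs below are hypothetical examples.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /admin/
Disallow: /assets/js/
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(rules)

# An overly aggressive Disallow blocks rendering resources:
print(parser.can_fetch("*", "https://example.com/assets/js/app.js"))  # False
print(parser.can_fetch("*", "https://example.com/products/widget"))   # True
```

Run your most important URLs through a check like this whenever the `robots.txt` file changes.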
2. Meta Robots & X-Robots-Tag: Run a crawl with ScreamingCAT to find pages with `noindex` or `nofollow` tags. Are they intentional? The `X-Robots-Tag` HTTP header is a common culprit for accidental de-indexing, especially on PDFs or non-HTML files.
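A quick sketch of how you might flag `noindex` in an `X-Robots-Tag` value pulled from a crawl export (the header value here is a hypothetical example, and real headers can also carry per-user-agent prefixes):

```python
# Sketch: flag noindex/nofollow in an X-Robots-Tag header value from a
# crawl export. The header value below is a hypothetical example.
def robots_directives(header_value: str) -> set[str]:
    """Split an X-Robots-Tag value into its individual directives."""
    return {token.strip().lower() for token in header_value.split(",")}

# A PDF accidentally served with a noindex header:
headers = {"X-Robots-Tag": "noindex, nofollow"}
directives = robots_directives(headers.get("X-Robots-Tag", ""))
print("noindex" in directives)  # True
```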
3. XML Sitemap Validation: Your sitemap should be a clean, curated list for search engines, not a digital junkyard. Check for 4xx/5xx errors, non-canonical URLs, and pages blocked by robots.txt. Ensure it’s submitted in Google Search Console and actually up-to-date.
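To audit a sitemap at scale, first extract its URLs; each one can then be checked for status codes, canonicals, and robots.txt blocks. A sketch with the stdlib XML parser and a hypothetical sitemap:

```python
# Sketch: pull <loc> URLs out of an XML sitemap with the stdlib parser.
# The sitemap content below is a hypothetical example.
import xml.etree.ElementTree as ET

SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/old-page</loc></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP)
urls = [loc.text for loc in root.findall("sm:url/sm:loc", NS)]
print(urls)
```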
4. Crawl Budget Analysis: Are search engines wasting time on low-value pages like filtered navigation or infinite calendar archives? Check GSC’s Crawl Stats report. For a real diagnosis, you need log file analysis to see exactly where Googlebot is spending its time.
5. HTTP Status Codes: Find and fix all 4xx (client errors) and 5xx (server errors). A high number of 404s can signal a poor user experience and waste crawl budget. ScreamingCAT’s status code reports make this easy. Don’t forget to find and fix broken links, both internal and external.
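Once you have a crawl export, summarizing status codes is trivial. A sketch where the `(url, status)` pairs stand in for real crawl data:

```python
# Sketch: summarize crawl status codes and surface error URLs. The
# (url, status) pairs below stand in for a real crawl export.
from collections import Counter

crawl = [
    ("https://example.com/", 200),
    ("https://example.com/old", 404),
    ("https://example.com/api", 500),
    ("https://example.com/blog", 200),
]

by_class = Counter(f"{status // 100}xx" for _, status in crawl)
errors = [url for url, status in crawl if status >= 400]
print(by_class)  # Counter({'2xx': 2, '4xx': 1, '5xx': 1})
print(errors)
```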
6. Canonicalization: The `rel="canonical"` tag is your primary weapon against duplicate content. Audit for incorrect implementations: pointing to a 404 page, canonicalizing to a non-indexable URL, or creating canonical chains. Every indexable page should have a self-referencing canonical tag by default.
Here are two `robots.txt` foot-guns (see point 1) to hunt for:

```
User-agent: *
# This blocks all crawlers from the entire site. Usually a mistake.
Disallow: /

User-agent: Googlebot
# This blocks Google from crawling critical rendering resources.
Disallow: /assets/js/
Disallow: /assets/css/
```
The On-Page Technical SEO Audit Checklist
Once you’ve confirmed your pages are crawlable and indexable, it’s time to inspect the on-page elements. These signals help search engines understand content hierarchy and relevance. While not as catastrophic as a `Disallow: /`, getting these wrong is a slow bleed of performance.
7. Title Tags: Check for missing, duplicate, or truncated titles. Your title is a heavy-hitting ranking factor, so make sure it’s unique and descriptive.
8. Meta Descriptions: While not a direct ranking factor, a compelling meta description improves click-through rates. Find missing or duplicate descriptions and rewrite them.
9. Heading Hierarchy (H1-H6): A logical heading structure (one H1 per page, followed by H2s, H3s, etc.) helps both users and search engines. ScreamingCAT can export your heading structure for easy review. A messy hierarchy is a sign of messy content.
10. Structured Data (Schema): Is your schema valid? Use the Rich Results Test to validate. Check for deployment across all relevant page types (products, articles, recipes) to earn rich snippets and enhance visibility.
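For reference, here is a minimal `Article` JSON-LD sketch built in Python; the `@type` and property names come from schema.org, but the values are hypothetical, and real markup should still be run through the Rich Results Test:

```python
# Sketch: emit minimal Article structured data as JSON-LD. Property
# names follow schema.org; all values below are hypothetical examples.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Technical SEO Audit Checklist",
    "datePublished": "2024-01-15",
    "author": {"@type": "Organization", "name": "ScreamingCAT"},
}
json_ld = json.dumps(article, indent=2)
print(f'<script type="application/ld+json">\n{json_ld}\n</script>')
```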
11. Image Optimization: Audit for missing alt text, which is critical for accessibility and image search. Also, check for enormous file sizes that kill page speed. Serve next-gen formats like WebP where possible.
12. Internal Linking & Page Depth: Identify orphan pages (pages with no internal links pointing to them). Ensure your most important pages are no more than 3-4 clicks from the homepage. Excessive click depth signals low importance to search engines.
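Click depth from point 12 is just a breadth-first search over the internal-link graph. A sketch with a hypothetical graph:

```python
# Sketch: compute click depth from the homepage over an internal-link
# graph and spot orphan pages (point 12). The graph is hypothetical.
from collections import deque

links = {
    "/": ["/products", "/blog"],
    "/products": ["/products/widget"],
    "/blog": [],
    "/products/widget": [],
    "/orphan": [],  # no page links to it
}

def click_depths(graph: dict, start: str = "/") -> dict:
    depths, queue = {start: 0}, deque([start])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

depths = click_depths(links)
orphans = [page for page in links if page not in depths]
print(depths["/products/widget"])  # 2
print(orphans)                     # ['/orphan']
```

Any page absent from the BFS result is unreachable from the homepage, i.e. an orphan.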
13. URL Structure: URLs should be clean, logical, and use hyphens instead of underscores. Avoid keyword stuffing and unnecessary parameters. Consistency is key.
Site Architecture and Performance
Now we zoom out from individual pages to the site as a whole. Architecture and performance issues are systemic, affecting user experience and crawl efficiency across the board. These are often the most impactful fixes you can make.
14. Core Web Vitals (CWV): Speed is not a suggestion. Use PageSpeed Insights and the GSC Core Web Vitals report to identify issues with LCP (Largest Contentful Paint), INP (Interaction to Next Paint, which replaced FID as a Core Web Vital in March 2024), and CLS (Cumulative Layout Shift). ScreamingCAT can connect to the PSI API to pull this data for all your URLs at scale.
15. Mobile-Friendliness: This should be a given, but you’d be surprised. With mobile-first indexing, your mobile site *is* your site. Note that Google retired the standalone Mobile-Friendly Test and the GSC Mobile Usability report in late 2023, so check responsiveness with Lighthouse and crawl with a smartphone user-agent to catch mobile-specific layout and content-parity issues.
16. HTTPS & Security: Ensure your entire site uses HTTPS. Run a crawl to find and fix mixed content (HTTP resources on an HTTPS page). Implement a strong HSTS policy to enforce secure connections.
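Mixed content can be found with a simple scan of each page’s HTML. A sketch using the stdlib HTML parser on a hypothetical snippet (strictly speaking, only loaded resources like `src` are mixed content; insecure `href` links are flagged here too because they are also worth fixing):

```python
# Sketch: find insecure http:// references in an HTML page fetched over
# HTTPS (point 16). The HTML snippet below is a hypothetical example.
from html.parser import HTMLParser

class MixedContentFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        # Collect any src/href attribute that still uses plain http://
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                self.insecure.append(value)

HTML = '<img src="http://example.com/logo.png"><script src="https://example.com/app.js"></script>'
finder = MixedContentFinder()
finder.feed(HTML)
print(finder.insecure)  # ['http://example.com/logo.png']
```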
17. JavaScript SEO: How reliant is your site on client-side JavaScript? View your site with JavaScript disabled. If the content disappears, you may have rendering issues. Compare the raw HTML to the rendered DOM in ScreamingCAT to see what search engines can (and can’t) see.
18. Pagination Handling: `rel="next"`/`rel="prev"` is dead as an indexing signal. Ensure your paginated series are linked with plain, crawlable `<a href>` anchor tags and that each page has a self-referencing canonical tag. Avoid canonicalizing all pages to the first page in the series; that’s a classic mistake.
19. Faceted Navigation: Complex filtering systems can generate a near-infinite number of URLs with duplicate or thin content. Use `robots.txt` to block crawlers from parameter combinations that offer no value, and use `rel="canonical"` to point filtered pages back to their main category page where appropriate.
Pro Tip: GSC data is great, but it’s sampled and delayed. For the ground truth on how Googlebot interacts with your site architecture, nothing beats log file analysis. It tells you what Google is *actually* doing, not what you *think* it’s doing.
The Advanced Technical SEO Audit Checklist
You’ve handled the fundamentals. Now it’s time to tackle the complex issues that separate the pros from the amateurs. These points often require a deeper understanding of server configurations and international SEO.
20. Hreflang Implementation: For international sites, `hreflang` is non-negotiable. It’s also incredibly easy to mess up. You must ensure that tags are reciprocal (if Page A links to Page B, Page B must link back to Page A) and use correct language-country codes.
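Checking return tags lends itself well to scripting. A minimal sketch: map each page to the alternates it declares, then verify every declared pair exists in both directions (all URLs and codes here are hypothetical):

```python
# Sketch: verify hreflang return tags (point 20). Each page maps to the
# alternates it declares; URLs and codes below are hypothetical.
annotations = {
    "https://example.com/en/": {"en-gb": "https://example.com/en/",
                                "de-de": "https://example.com/de/"},
    "https://example.com/de/": {"de-de": "https://example.com/de/"},
    # Mistake: the de page does not link back to the en page.
}

def missing_return_tags(pages: dict) -> list:
    """Return (page, expected_target) pairs where the return tag is absent."""
    problems = []
    for page, alternates in pages.items():
        for target in alternates.values():
            if target != page and page not in pages.get(target, {}).values():
                problems.append((target, page))
    return problems

print(missing_return_tags(annotations))
# [('https://example.com/de/', 'https://example.com/en/')]
```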
Common `hreflang` mistakes to hunt for include:
- Using incorrect country or language codes (e.g., `en-UK` instead of `en-GB`).
- Missing return tags (non-reciprocal `hreflang`).
- Pointing `hreflang` URLs to pages that are non-indexable or canonicalized elsewhere.
- Omitting the `hreflang` link to the current page’s own URL (the set should be self-referencing).
21. Log File Analysis: We mentioned it before, but it’s worth its own point. Analyzing server logs allows you to verify Googlebot’s identity, identify which pages it crawls most (and least) frequently, find orphan pages it’s hitting, and diagnose crawl budget waste with surgical precision.
22. Redirect Auditing: Map out your redirect chains and loops. A long chain of 301s dilutes link equity and slows down users and bots. Use 301s for permanent moves and 302s for temporary ones; don’t mix them up.
23. Crawl Configuration & Emulation: Are you serving different content to mobile and desktop users? Configure your crawler to use different user-agents (e.g., Googlebot Smartphone) to see your site exactly as the search engine does. ScreamingCAT’s configuration options are built for this. If you’re new to the tool, our getting started guide will get you up to speed.
24. Content Duplication & Parameter Handling: Go beyond basic canonicals. How does your site handle URL parameters for tracking, sorting, or filtering? Google retired the GSC URL Parameters tool in 2022, so control parameters with canonical tags, consistent internal linking, and targeted `robots.txt` rules to prevent the creation of thousands of duplicate URLs.
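The redirect chains and loops from point 22 can be surfaced by walking the redirect map from a crawl. A sketch with a hypothetical source-to-target map:

```python
# Sketch: walk a redirect map (point 22) to surface chains and loops.
# The source->target map below stands in for a real crawl's redirect report.
redirects = {
    "/old": "/older",
    "/older": "/oldest",
    "/oldest": "/final",
    "/a": "/b",
    "/b": "/a",  # loop
}

def follow(start: str, mapping: dict, limit: int = 10):
    """Return (chain, loop_detected) for a starting URL."""
    seen, chain = {start}, [start]
    while chain[-1] in mapping and len(chain) <= limit:
        nxt = mapping[chain[-1]]
        if nxt in seen:
            return chain + [nxt], True  # loop detected
        seen.add(nxt)
        chain.append(nxt)
    return chain, False

chain, loop = follow("/old", redirects)
print(len(chain) - 1, loop)        # 3 False  (a 3-hop chain, no loop)
print(follow("/a", redirects)[1])  # True     (a redirect loop)
```

Anything longer than one hop is a candidate for collapsing into a single direct 301.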
Putting It All Together: Tools and Reporting
A successful technical SEO audit doesn’t end with a spreadsheet of errors. The final, critical step is to translate your findings into an actionable plan. This is where you provide real value.
25. The Right Tools for the Job: You can’t do this by hand. Your essential toolkit should include a crawler like ScreamingCAT, Google Search Console for performance and indexing data, PageSpeed Insights for performance metrics, and a log file analyzer for deep crawl insights.
26. Prioritization Framework: Not every issue is a top priority. Create a simple framework to rank issues based on their potential impact (e.g., how many pages are affected, how much revenue is at stake) versus the effort required to fix them. A site-wide `noindex` tag is a P1; a few missing alt tags are a P3.
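An impact-versus-effort ranking like the one in point 26 can be as simple as a ratio. A sketch where the 1-5 scores are hypothetical judgment calls, not measured values:

```python
# Sketch: a simple impact-versus-effort ranking for audit findings
# (point 26). The issues and 1-5 scores below are hypothetical.
issues = [
    {"issue": "site-wide noindex on category pages", "impact": 5, "effort": 1},
    {"issue": "missing alt text on 30 images",       "impact": 2, "effort": 2},
    {"issue": "redirect chains on migrated URLs",    "impact": 4, "effort": 3},
]

for item in issues:
    item["priority"] = item["impact"] / item["effort"]

ranked = sorted(issues, key=lambda i: i["priority"], reverse=True)
print(ranked[0]["issue"])  # the noindex issue ranks first
```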
27. Clear, Actionable Reporting: Don’t just deliver a data dump. Your report should clearly explain what the problem is, why it matters for business goals (traffic, leads, revenue), and what the specific next steps are. Screenshots, examples, and clear language are your friends.
A technical SEO audit checklist is a living document. Websites change, search engines evolve, and entropy is relentless. Running regular, focused audits is the only way to stay on top of your site’s technical health and maintain a competitive edge.
The goal of a technical audit is not to find problems. It’s to find solutions that drive business results.
The ScreamingCAT Team
Key Takeaways
- Technical SEO audits must go beyond basic checklists; focus on core systems like crawling, indexing, and rendering.
- Start with foundational checks: `robots.txt`, meta tags, sitemaps, and status codes. If search engines can’t access your site properly, nothing else matters.
- Site architecture and performance (Core Web Vitals, mobile-friendliness, JS rendering) have a systemic impact on both user experience and SEO.
- Use a powerful crawler like ScreamingCAT combined with Google Search Console and log file analysis for a complete data picture.
- The final output of an audit should be a prioritized action plan, not just a list of errors. Explain the ‘why’ behind each recommendation.
Ready to audit your site?
Download ScreamingCAT for free. No limits, no registration, no cloud dependency.