All Features
Everything Crawly can do - in one place.
Crawl Engine
Fast, configurable crawling
Built for professionals who need control over how their site is crawled.
Multi-threaded
Crawls up to 10 pages concurrently by default for fast site coverage.
Configurable depth & limits
Set a maximum crawl depth and page limit per session.
robots.txt support
Respect or bypass robots.txt rules depending on your workflow needs.
User agent picker
Crawl as Googlebot, Google Smartphone, Bingbot, Chrome 128, Crawly, or a fully custom user agent string.
List mode
Paste a list of URLs for a targeted one-shot audit without following links - ideal for checking specific pages after a migration.
Multi-tab crawling
Run crawls for multiple sites simultaneously in separate tabs without losing any previous crawl data.
Save & reopen crawls
Full crawl history persisted to disk. Reopen any previous crawl instantly - no need to re-crawl.
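Under the hood, a depth- and page-limited crawl like the one described above can be sketched as a breadth-first traversal of the site's link graph. This is an illustrative sketch, not Crawly's actual implementation: `get_links` stands in for fetching a page over HTTP and extracting its internal links.

```python
from collections import deque

def crawl(start_url, get_links, max_depth=3, max_pages=500):
    """Depth-limited BFS over a site's link graph.

    `get_links(url)` is a stand-in for fetching a page and
    extracting its internal links; a real crawler would do this
    over HTTP, respecting robots.txt and a concurrency limit.
    """
    seen = {start_url}
    queue = deque([(start_url, 0)])
    order = []
    while queue and len(order) < max_pages:
        url, depth = queue.popleft()
        order.append(url)
        if depth >= max_depth:
            continue  # don't follow links past the depth limit
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return order

# In-memory link graph standing in for a real site.
graph = {
    "/": ["/a", "/b"],
    "/a": ["/c"],
    "/b": ["/"],
    "/c": [],
}
pages = crawl("/", lambda u: graph.get(u, []), max_depth=1)
```

With `max_depth=1`, pages two hops away (like `/c`) are never visited, which is exactly what a configurable depth limit buys you.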
Issues
Auto-categorised SEO issues
Every problem surfaced and grouped the moment a crawl completes. No manual filtering required.
Missing page titles
Pages with no <title> tag.
Duplicate page titles
Multiple pages sharing an identical title.
Title too long / too short
Titles outside the recommended character range.
Missing meta descriptions
Pages without a meta description tag.
Duplicate meta descriptions
Multiple pages sharing the same meta description.
Meta description too long
Descriptions likely to be truncated in search results.
Missing H1
Pages with no H1 heading.
Duplicate H1
Multiple pages sharing the same H1 text.
H1 / H2 length warnings
Headings that are unusually short or long.
4xx broken pages
Client errors including 404 Not Found.
5xx server errors
Server-side errors returned during the crawl.
Redirect chains
Multi-hop redirect sequences that waste crawl budget and slow page load.
Non-indexable pages
Pages excluded from Google's index via a noindex directive or a canonical pointing elsewhere.
Images missing alt text
Pages where one or more images have no alt attribute.
Mixed content
HTTP resources loaded on HTTPS pages.
Missing HSTS
Pages not sending the Strict-Transport-Security header.
Missing CSP
Pages not sending a Content-Security-Policy header.
Near-duplicate content
Pages whose body text produces an identical MD5 hash - exact duplicate content.
hreflang missing x-default
International pages without an x-default hreflang tag.
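Several of the checks above boil down to grouping pages by a field and flagging groups with more than one member. A minimal sketch of duplicate-title detection (illustrative, not Crawly's code):

```python
from collections import defaultdict

def find_duplicate_titles(pages):
    """Group crawled pages by <title> text and return groups
    shared by more than one URL. `pages` maps URL -> title
    (None when the tag is missing)."""
    by_title = defaultdict(list)
    for url, title in pages.items():
        if title:  # missing titles are a separate issue
            by_title[title].append(url)
    return {t: urls for t, urls in by_title.items() if len(urls) > 1}

pages = {
    "/a": "Home",
    "/b": "Home",
    "/c": "About",
    "/d": None,
}
dupes = find_duplicate_titles(pages)
```

The same pattern covers duplicate meta descriptions and duplicate H1s, just keyed on a different field.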
On-Page SEO
Complete on-page data per URL
Every crawled page shows the full picture: title, meta, headings, canonical, robots directives, word count, and more.
Page titles
Title tag text, length, and an estimated pixel width to predict truncation in Google results.
Meta descriptions
Full text and length check for every page.
H1 & H2 headings
H1 text per page, all H2s listed, with length warnings on both.
Canonical tags
Canonical URL per page - see whether it matches, is missing, or points elsewhere.
Robots meta directives
noindex, nofollow, nosnippet, and any other robots meta values.
Word count
Body word count per page visible in the main pages table.
Response time
Page load time recorded per URL during the crawl.
Page size
Raw HTML byte size per page.
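Pixel-width estimation works by summing approximate glyph widths rather than counting characters. The per-character widths and the ~600px cutoff below are rough assumptions for illustration, not Crawly's actual tables:

```python
# Rough pixel-width estimate for a title in a SERP-like font.
# The glyph widths and the 600px cutoff are assumptions.
WIDE, NARROW = set("mwMW"), set("iljtf.,'| ")

def estimate_px(title, avg=9, wide=14, narrow=4):
    return sum(wide if c in WIDE else narrow if c in NARROW else avg
               for c in title)

def likely_truncated(title, limit_px=600):
    return estimate_px(title) > limit_px
```

This is why a title can pass a character-count check yet still be truncated: wide letters eat the pixel budget faster.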
Images
Per-image alt text audit
Every image across every crawled page, with filtering and bulk export.
Per-image table
Every image found during the crawl, showing its src URL, source page, and alt attribute value (or flagging it as missing).
Filter by missing alt
Instantly filter the image table to show only images with missing or empty alt text.
Image stats bar
Total images crawled, unique sources, and missing-alt count shown at a glance.
Bulk copy & export
Copy or export all image URLs for bulk remediation or client reporting.
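An alt-text audit like this amounts to walking every `<img>` tag and checking its `alt` attribute. A self-contained sketch using Python's standard-library HTML parser (illustrative only):

```python
from html.parser import HTMLParser

class ImageAudit(HTMLParser):
    """Collect every <img> with its src and alt, flagging
    missing or empty alt text."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        self.images.append({
            "src": a.get("src", ""),
            "alt": a.get("alt"),
            "missing_alt": not (a.get("alt") or "").strip(),
        })

parser = ImageAudit()
parser.feed('<img src="/logo.png" alt="Logo"><img src="/hero.jpg">')
missing = [i["src"] for i in parser.images if i["missing_alt"]]
```

Note that an empty `alt=""` and an absent attribute are both flagged here; a stricter audit might treat intentional empty alt (decorative images) differently.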
Headings
H1 & H2 audit across every page
Spot heading issues site-wide without opening individual pages.
H1 per page
See the H1 for every crawled URL in a single sortable table.
All H2s listed
Every H2 heading on a page shown inline.
Length warnings
Headings flagged if they fall outside recommended length ranges.
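A length warning is a simple range check per heading. The 10-70 character bounds below are illustrative assumptions, not Crawly's actual thresholds:

```python
def heading_warning(heading, min_len=10, max_len=70):
    """Flag a heading outside an assumed recommended length range.
    The 10-70 character bounds are illustrative defaults."""
    n = len(heading.strip())
    if n == 0:
        return "missing"
    if n < min_len:
        return "too short"
    if n > max_len:
        return "too long"
    return "ok"
```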
Response Codes
Full status code breakdown
Understand exactly what your server is returning for every URL.
Status breakdown
Summary count of 2xx, 3xx, 4xx, and 5xx responses.
Filterable URL table
Click any status group to see all matching URLs with their response codes and titles.
Redirect chain detail
See the full chain of redirects for any URL that redirected during the crawl.
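The summary bar is just a bucketed count of status codes by their leading digit. A sketch of that aggregation (not Crawly's code):

```python
from collections import Counter

def status_breakdown(responses):
    """Bucket per-URL status codes into 2xx/3xx/4xx/5xx groups."""
    return Counter(f"{code // 100}xx" for code in responses.values())

responses = {"/": 200, "/old": 301, "/gone": 404, "/err": 500, "/ok": 200}
counts = status_breakdown(responses)
```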
External Links
All outbound links in one place
Audit every external link across your site without opening a spreadsheet.
Full external link table
Every outbound link found across your site with its destination URL, source page, and anchor text.
Anchor text
See what text you're using to link out - useful for auditing link context.
Export
Export the full external link list for off-page or link audit analysis.
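Collecting outbound links with their anchor text means comparing each `<a href>` host against your own and capturing the text inside the tag. A stdlib sketch (illustrative, not Crawly's implementation):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ExternalLinks(HTMLParser):
    """Collect <a href> links whose host differs from the site's,
    along with their anchor text."""
    def __init__(self, site_host):
        super().__init__()
        self.site_host = site_host
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            host = urlparse(href).netloc
            if host and host != self.site_host:
                self._href = href
                self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

p = ExternalLinks("example.com")
p.feed('<a href="/about">About</a> <a href="https://other.org/x">Read this</a>')
```

Relative links like `/about` have no host, so they're treated as internal and skipped.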
Site Structure
URL architecture at a glance
Understand how your site is organised without a spreadsheet.
Folder tree
Expandable tree view grouped by URL path segment - see exactly how pages are nested.
Status indicators
Indexability and response code shown per branch of the tree.
Crawl depth
See how many hops it takes to reach each section of the site.
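A folder tree like this falls out of nesting URLs by their path segments. A minimal sketch of that grouping (illustrative only):

```python
from urllib.parse import urlparse

def folder_tree(urls):
    """Nest URLs by path segment into a dict-of-dicts tree,
    mirroring an expandable folder view."""
    tree = {}
    for url in urls:
        node = tree
        for segment in urlparse(url).path.strip("/").split("/"):
            if segment:
                node = node.setdefault(segment, {})
    return tree

urls = [
    "https://example.com/blog/post-1",
    "https://example.com/blog/post-2",
    "https://example.com/about",
]
tree = folder_tree(urls)
```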
Crawl Comparison
Track what changed between crawls
Diff two crawls to see what appeared, disappeared, or changed - down to individual fields.
Added URLs
Pages that exist in the new crawl but not the previous one.
Removed URLs
Pages that existed before but are gone in the new crawl.
Changed URLs
Pages present in both crawls where the title, H1, status code, or indexability changed.
Migration support
Compare a pre-migration crawl against post-migration to catch regressions quickly.
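Conceptually, a crawl diff is a set comparison on URLs plus a field-by-field comparison on the overlap. A sketch under assumed field names (`title`, `h1`, `status` are placeholders, not Crawly's schema):

```python
def diff_crawls(old, new, fields=("title", "h1", "status")):
    """Compare two crawls (URL -> field dict) and report added,
    removed, and changed URLs, field by field."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = {}
    for url in set(old) & set(new):
        diffs = {f: (old[url].get(f), new[url].get(f))
                 for f in fields if old[url].get(f) != new[url].get(f)}
        if diffs:
            changed[url] = diffs
    return added, removed, changed

old = {"/a": {"title": "A", "status": 200}, "/b": {"title": "B", "status": 200}}
new = {"/a": {"title": "A!", "status": 200}, "/c": {"title": "C", "status": 200}}
added, removed, changed = diff_crawls(old, new)
```

After a migration, `removed` is your regression list: every URL that existed before and no longer resolves.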
Technical SEO
Security, hreflang & structured data
Advanced technical signals audited across every page.
Security headers
Per-page audit of HSTS, Content-Security-Policy, X-Frame-Options, Referrer-Policy, and X-Content-Type-Options.
hreflang auditing
Detects missing x-default, incorrect or malformed language codes, and other common hreflang errors.
Structured data extraction
JSON-LD and microdata @type values extracted per page - see which schema types are in use site-wide.
Near-duplicate detection
MD5 hash comparison across body text to surface exact duplicate pages.
Canonical analysis
See whether each canonical points to the current page, is missing, or redirects to another URL.
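The MD5-based duplicate check described above can be sketched as grouping pages by the hash of their body text; any group with more than one URL is a byte-for-byte duplicate. Illustrative only:

```python
import hashlib
from collections import defaultdict

def exact_duplicates(pages):
    """Group pages by the MD5 hash of their body text; groups
    with more than one URL are byte-for-byte duplicates."""
    by_hash = defaultdict(list)
    for url, body in pages.items():
        digest = hashlib.md5(body.encode("utf-8")).hexdigest()
        by_hash[digest].append(url)
    return [urls for urls in by_hash.values() if len(urls) > 1]

pages = {"/a": "same body", "/b": "same body", "/c": "different"}
dupe_groups = exact_duplicates(pages)
```

Because MD5 is exact-match, a single changed character puts two pages in different groups - this catches true duplicates, not near-duplicates.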
Custom Extraction
Scrape any data from any page
Define your own data extraction rules and run them across the entire crawl.
CSS selectors
Write any CSS selector to extract text content or an attribute value per page. Results appear as a named column in the pages table.
Regex source search
Define regex patterns matched against the raw HTML of every crawled page. Match counts are stored per URL.
Multiple extractions
Stack multiple named selectors and patterns in a single crawl - each becomes its own column.
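The regex side of custom extraction amounts to counting matches of each named pattern against each page's raw HTML, with every pattern becoming its own column. A sketch (not Crawly's code):

```python
import re

def regex_counts(pages, patterns):
    """Count matches of each named regex against the raw HTML
    of each page; each pattern becomes its own per-URL column."""
    compiled = {name: re.compile(p) for name, p in patterns.items()}
    return {url: {name: len(rx.findall(html))
                  for name, rx in compiled.items()}
            for url, html in pages.items()}

pages = {"/a": '<script src="ga.js"></script><script>x()</script>',
         "/b": "<p>no scripts</p>"}
counts = regex_counts(pages, {"scripts": r"<script\b"})
```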
XML Sitemap
One-click sitemap generation
Export a production-ready sitemap straight from your crawl data.
All indexable pages
Generates a sitemap.xml containing every indexable page discovered during the crawl.
One-click export
Download and submit to Google Search Console immediately - no formatting required.
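A sitemap is just the indexable URL list serialized into the sitemaps.org XML format. A minimal stdlib sketch of that serialization (illustrative only):

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Serialize indexable URLs into a minimal sitemap.xml body."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for u in urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = u
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap(["https://example.com/", "https://example.com/about"])
```

A production sitemap can also carry optional `<lastmod>` and `<priority>` children per `<url>`, following the same pattern.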
Claude Code MCP
The only SEO crawler with a built-in AI interface
Crawly ships with a Model Context Protocol server that connects your crawl data to Claude Code.
Built-in MCP server
No plugins, no third-party tools. The MCP server is part of Crawly and activates as soon as you run a crawl.
Natural language queries
Ask Claude which pages are missing meta descriptions, which have duplicate H1s, what redirect chains exist - in plain English from your terminal.
Live crawl data
Queries run against your most recent crawl stored in the local database. No export step needed.
Works with any Claude Code setup
Add the Crawly MCP server to your Claude Code config and it's available immediately across all your projects.
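Claude Code reads project-level MCP servers from a `.mcp.json` file. The shape below follows that format, but the server name and command path are illustrative placeholders - check Crawly's own setup instructions for the real values:

```json
{
  "mcpServers": {
    "crawly": {
      "command": "/path/to/crawly-mcp",
      "args": []
    }
  }
}
```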
Looking for a desktop SEO spider? See Crawly as a free SEO spider for Mac

Start crawling smarter
Download Crawly for free. Connect to Claude Code via MCP and start auditing your site in minutes.
Download free