Screaming Frog SEO Spider 9.2
The Screaming Frog SEO Spider is a website crawler that allows you to crawl websites’ URLs and fetch key onsite elements to analyse onsite SEO. Download for free, or purchase a licence for additional advanced features. The SEO Spider is lightweight and flexible, and can crawl extremely quickly, allowing you to analyse the results in real time. It gathers key onsite data to allow SEOs to make informed decisions.
Find Broken Links
Crawl a website instantly and find broken links (404s) and server errors. Bulk export the errors and source URLs to fix, or send to a developer.
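As a rough illustration of the idea (not Screaming Frog’s own implementation), the sketch below checks a handful of placeholder URLs and reports 4XX/5XX responses and URLs that return no response at all, assuming the Python `requests` library is available.

```python
# Minimal sketch: flag broken links (4XX), server errors (5XX) and no responses.
# Assumes the `requests` library; the URLs are placeholders.
import requests

urls = [
    "https://example.com/",
    "https://example.com/missing-page",
]

for url in urls:
    try:
        response = requests.head(url, allow_redirects=False, timeout=10)
        if response.status_code >= 400:
            print(f"{response.status_code}  {url}")
    except requests.RequestException as exc:
        # No response: DNS failure, timeout, connection refused, etc.
        print(f"NO RESPONSE  {url}  ({exc})")
```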
Audit Redirects
Find temporary and permanent redirects, identify redirect chains and loops, or upload a list of URLs to audit in a site migration.
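For redirects, the `requests` library records each hop of a chain in `response.history`, which is enough to sketch the idea of auditing a redirect chain (the URL below is a placeholder):

```python
# Minimal sketch: print every hop of a redirect chain and the final destination.
import requests

response = requests.get("https://example.com/old-page", timeout=10)

for hop in response.history:
    print(f"{hop.status_code}  {hop.url}  ->  {hop.headers.get('Location')}")
print(f"{response.status_code}  {response.url}  (final)")
```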
Analyse Page Titles & Meta Data
Analyse page titles and meta descriptions during a crawl and identify those that are too long, short, missing, or duplicated across your site.
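A minimal sketch of the same checks for a single placeholder URL, assuming `requests` and `beautifulsoup4` and using the character limits mentioned later in this article (65 for titles, 156 for descriptions):

```python
# Minimal sketch: flag missing or over-length page titles and meta descriptions.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

title = soup.title.string.strip() if soup.title and soup.title.string else ""
meta = soup.find("meta", attrs={"name": "description"})
description = (meta.get("content") or "").strip() if meta else ""

if not title:
    print("Page title missing")
elif len(title) > 65:
    print(f"Page title over 65 characters ({len(title)})")

if not description:
    print("Meta description missing")
elif len(description) > 156:
    print(f"Meta description over 156 characters ({len(description)})")
```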
Discover Duplicate Content
Discover exact duplicate URLs with an MD5 algorithmic check, find partially duplicated elements such as page titles, descriptions or headings, and identify low-content pages.
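The exact-duplicate check boils down to hashing each page’s body and grouping URLs that share a digest. A minimal sketch of that idea, assuming `requests` and placeholder URLs:

```python
# Minimal sketch: group URLs whose response bodies share an MD5 digest.
import hashlib
from collections import defaultdict

import requests

urls = [
    "https://example.com/page-a",
    "https://example.com/page-b",
]

pages_by_hash = defaultdict(list)
for url in urls:
    body = requests.get(url, timeout=10).content
    pages_by_hash[hashlib.md5(body).hexdigest()].append(url)

for digest, group in pages_by_hash.items():
    if len(group) > 1:
        print(f"Exact duplicates ({digest}): {group}")
```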
Extract Data with XPath
Collect any data from the HTML of a web page using CSS Path, XPath or regex. This might include social meta tags, additional headings, prices, SKUs or more!
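A minimal sketch of that kind of extraction, assuming `requests` and `lxml`; the URL and the XPath expressions are illustrative only and would need to match the markup of the site being crawled:

```python
# Minimal sketch: pull an Open Graph title and a hypothetical price element via XPath.
import requests
from lxml import html

page = html.fromstring(requests.get("https://example.com/product", timeout=10).content)

og_title = page.xpath('//meta[@property="og:title"]/@content')
prices = page.xpath('//span[@class="price"]/text()')

print(og_title, prices)
```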
Review Robots & Directives
View URLs blocked by robots.txt, meta robots or X-Robots-Tag directives such as ‘noindex’ or ‘nofollow’, as well as canonicals and rel=“next” and rel=“prev”.
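Those directives can arrive either as an X-Robots-Tag HTTP header or as a meta robots tag in the HTML. A minimal sketch of reading both for a single placeholder URL, assuming `requests` and `beautifulsoup4`:

```python
# Minimal sketch: read indexing directives from the response header and the HTML head.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/", timeout=10)

# Directives sent as an HTTP response header, e.g. "noindex, nofollow".
print("X-Robots-Tag:", response.headers.get("X-Robots-Tag"))

# Directives embedded in the page itself.
soup = BeautifulSoup(response.text, "html.parser")
meta_robots = soup.find("meta", attrs={"name": "robots"})
print("meta robots:", meta_robots.get("content") if meta_robots else None)
```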
Generate XML Sitemaps
Quickly create XML Sitemaps and Image XML Sitemaps, with advanced configuration over URLs to include, last modified, priority and change frequency.
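For reference, the sitemap protocol itself is a small XML format. The sketch below builds a two-URL sitemap with the Python standard library (URLs, dates and priorities are placeholders); this is the structure the generator writes out:

```python
# Minimal sketch: write a tiny XML sitemap with loc, lastmod, changefreq and priority.
import xml.etree.ElementTree as ET

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")

entries = [
    ("https://example.com/", "2018-05-01", "weekly", "1.0"),
    ("https://example.com/about", "2018-04-20", "monthly", "0.5"),
]
for loc, lastmod, changefreq, priority in entries:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod
    ET.SubElement(url, "changefreq").text = changefreq
    ET.SubElement(url, "priority").text = priority

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```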
Integrate with Google Analytics
Connect to the Google Analytics API and fetch user data, such as sessions, bounce rate, conversions, goals, transactions and revenue, for landing pages against the crawl.
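Outside the SEO Spider, the same kind of data can be pulled directly from the Google Analytics Reporting API v4. The sketch below is a hedged example, assuming `google-api-python-client` and `google-auth` are installed, a service-account key file exists at the placeholder path, and VIEW_ID is replaced with a real Analytics view ID:

```python
# Hedged sketch: landing-page sessions and bounce rate from the GA Reporting API v4.
from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path to a service-account key
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=credentials)

report = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "VIEW_ID",  # placeholder
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"}, {"expression": "ga:bounceRate"}],
        "dimensions": [{"name": "ga:landingPagePath"}],
    }]
}).execute()

for row in report["reports"][0]["data"].get("rows", []):
    print(row["dimensions"][0], row["metrics"][0]["values"])
```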
The SEO Spider Tool Crawls & Reports On...
The Screaming Frog SEO Spider is an SEO auditing tool, built by real SEOs with thousands of users worldwide. A quick summary of some of the data collected in a crawl includes:
Errors – Client errors such as broken links & server errors (No responses, 4XX, 5XX).
Redirects – Permanent, temporary redirects (3XX responses) & JS redirects.
Blocked URLs – View & audit URLs disallowed by the robots.txt protocol.
Blocked Resources – View & audit blocked resources in rendering mode.
External Links – All external links and their status codes.
Protocol – Whether the URLs are secure (HTTPS) or insecure (HTTP).
URI Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
Duplicate Pages – Hash value / MD5 checksum algorithmic check for exact duplicate pages.
Page Titles – Missing, duplicate, over 65 characters, short, pixel width truncation, same as h1, or multiple.
Meta Description – Missing, duplicate, over 156 characters, short, pixel width truncation or multiple.
Meta Keywords – Mainly for reference, as they are not used by Google, Bing or Yahoo.
File Size – Size of URLs & images.
Page Depth Level.
H1 – Missing, duplicate, over 70 characters, multiple.
H2 – Missing, duplicate, over 70 characters, multiple.
Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc.
Meta Refresh – Including target page and time delay.
Canonical link element & canonical HTTP headers.
rel=“next” and rel=“prev”.
Follow & Nofollow – At page and link level (true/false).
hreflang Attributes – Audit missing confirmation links, inconsistent & incorrect languages codes, non canonical hreflang and more.
Rendering – Crawl JavaScript frameworks like AngularJS and React by crawling the rendered HTML after JavaScript has executed.
AJAX – Select to obey Google’s now deprecated AJAX Crawling Scheme.
Inlinks – All pages linking to a URI.
Outlinks – All pages a URI links out to.
Anchor Text – All link text. Alt text from images with links.
Images – All URIs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters.
User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
Configurable Accept-Language Header – Supply an Accept-Language HTTP header to crawl locale-adaptive content (see the request-header sketch after this list).
Redirect Chains – Discover redirect chains and loops.
Custom Source Code Search – Find anything you want in the source code of a website! Whether that’s Google Analytics code, specific text or code snippets.
Custom Extraction – Scrape any data from the HTML of a URL using XPath, CSS Path selectors or regex.
Google Analytics Integration – Connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
Google Search Console Integration – Connect to the Google Search Analytics API and collect impression, click and average position data against URLs.
External Link Metrics – Pull external link metrics from Majestic, Ahrefs and Moz APIs into a crawl to perform content audits or profile links.
XML Sitemap Generator – Create an XML sitemap and an image sitemap using the SEO Spider.
Custom robots.txt – Download, edit and test a site’s robots.txt using the new custom robots.txt.
Rendered Screen Shots – Fetch, view and analyse the rendered pages crawled.
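As a small illustration of the user-agent and Accept-Language options in the list above, the sketch below fetches a placeholder URL with a custom user-agent string and a German Accept-Language header, assuming the `requests` library:

```python
# Minimal sketch: request a page with a custom user-agent and Accept-Language header.
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Accept-Language": "de-DE,de;q=0.9",
}

response = requests.get("https://example.com/", headers=headers, timeout=10)
print(response.status_code, len(response.text))
```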