8.1 Robots.txt doesn't block content we want in the index
Robots.txt can be easily misconfigured so that it blocks content that should be indexed.
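As a quick sanity check, Python's standard-library robots parser can confirm that pages you want indexed aren't blocked. A minimal sketch (the robots.txt content and URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content; substitute your own site's file.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /cart/
"""

# Pages we want in the index; the parser should report them as fetchable.
WANTED = ["https://example.com/", "https://example.com/blog/post-1"]

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for url in WANTED:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "allowed" if allowed else "BLOCKED")
```

In practice you would point the parser at your live robots.txt with `set_url()` and `read()`, then loop over the URLs in your sitemap.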
8.2 Meta directives don't block content we want in the index
Meta directives, especially if there’s a technical misconfiguration, can block content from being indexed.
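A stray noindex is easy to catch by scanning page markup for meta robots tags. A sketch using Python's standard-library HTML parser (the sample page is illustrative):

```python
from html.parser import HTMLParser

class MetaRobotsFinder(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

# Illustrative page that should be indexed but carries a stray noindex.
page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'

finder = MetaRobotsFinder()
finder.feed(page)
blocked = any("noindex" in d for d in finder.directives)
print("BLOCKED from index" if blocked else "indexable")
```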
8.3 URL structure makes sense and is consistent
URL structure should be easy for users to understand, e.g. https://homepage.com/category/item
8.4 URLs don't contain unnecessary parameters
URLs shouldn’t contain lots of parameters that aren’t used – decommissioned analytics tools are a common source of this problem. Extra parameters can cause traffic tracking issues.
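One hedge against parameter creep is an allow-list of the query parameters the site actually uses, with everything else stripped before URLs reach your analytics. A sketch (the KEEP set and URL are illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative allow-list: parameters this site actually uses.
KEEP = {"page", "q"}

def strip_params(url):
    """Drop any query parameters that are not on the allow-list."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in KEEP]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_params("https://example.com/list?page=2&utm_source=old_tool"))
```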
8.5 Pages are not "stuffed" with keywords
To the extent keywords are used in an article, they should be incorporated naturally.
8.6 All pages are on same subdomain
Splitting content up into multiple subdomains – for example, blog.yoursite.com – can hurt search performance and cause inconsistencies between sites.
8.7 Only using 301 redirects
As 302 (and 307) redirects signal a temporary move, there shouldn’t be many (or any) of them on your site. Use a permanent 301 instead.
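As a sketch, assuming an nginx server (the paths are illustrative), a permanent redirect looks like:

```nginx
# Permanent (301) redirect from a retired path to its replacement.
location = /old-page {
    return 301 /new-page;
}
```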
8.8 Number of pages in Google's index approximately matches site crawl and sitemap
Google Search Console will tell you how many pages on your site are indexed. This number should be roughly equal to the number of pages that are actually on your site.
8.9 Google thinks the site is reasonably fast
Speed is a critical ranking factor in search, and Google’s perception of your site speed is also a pretty good proxy for your users’ experience. Google provides tools that let you see how you score on a number of metrics, including speed. Generally, we find Google’s reports to be extremely pessimistic – it isn’t necessary to score 100, or even 90, on each measure. But Google can flag some issues for you that might be useful (and sometimes easy) to fix.
8.10 Non-valuable content listings (e.g. archived content) are followed, noindexed
Archives and other content that isn’t useful should be removed from Google’s index, though you can still allow search engines to follow these links to discover other content.
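The usual way to express this is a meta robots tag on the archive pages themselves:

```html
<!-- Keep this page out of the index, but let crawlers follow its links. -->
<meta name="robots" content="noindex, follow">
```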
8.11 Pagination is specified using rel=next and rel=prev
Using pagination attributes is helpful for search engines indexing your site, as well as screen readers.
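The attributes go in the head of each paginated page; a sketch for page 2 of a series (URLs illustrative):

```html
<link rel="prev" href="https://example.com/blog/page/1">
<link rel="next" href="https://example.com/blog/page/3">
```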
8.12 Pagination results in reasonable number of URLs, or extra URLs are noindexed
Pagination schemes can sometimes create lots of unnecessary URLs, which can make your analytics more difficult to parse, and make your site more difficult to crawl.
8.13 Navigation results in reasonable number of URLs, or extra URLs are noindexed
Navigation schemes can sometimes create lots of unnecessary URLs, which can make your analytics more difficult to parse, and make your site more difficult to crawl.
8.14 Canonical tag or alternative tag is used for duplicate content
If you have duplicate content on your site, use a canonical tag to indicate which version search engines should rank.
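The tag goes in the head of every duplicate, pointing at the version that should rank (URL illustrative):

```html
<link rel="canonical" href="https://example.com/category/item">
```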
8.15 Canonical tag is not used unless necessary
Don’t use the canonical tag unless it’s needed; an unnecessary or incorrect canonical can cause search engines to drop the wrong pages from the index or consolidate ranking signals to the wrong URL.
8.16 Error pages return error status codes
Error pages should return the matching error status code to the crawler (e.g. a missing page should return 404, not 200); otherwise search engines may index broken pages as real content.
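Given crawl results, "soft 404s" (error pages served with a success code) are easy to flag. A sketch with illustrative sample data:

```python
# Illustrative crawl results: (url, observed status, looks like an error page).
crawl_results = [
    ("https://example.com/", 200, False),
    ("https://example.com/missing", 200, True),   # soft 404: should be a 404
    ("https://example.com/gone", 404, True),      # correct behavior
]

def soft_404s(results):
    """Return error pages that were served with a non-error status code."""
    return [url for url, status, is_error in results
            if is_error and status < 400]

print(soft_404s(crawl_results))
```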
8.17 Meta refresh tags are not used
Meta refresh tags are embedded in pages to reload them automatically or to forward visitors to another page. These should be avoided in favor of server-side redirects.
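For reference, this is the pattern to avoid: a client-side forward via meta refresh (URL illustrative). Prefer a server-side 301.

```html
<!-- Avoid: forwards the visitor client-side after 0 seconds. -->
<meta http-equiv="refresh" content="0; url=https://example.com/new-page">
```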
8.18 No redirect loops
Redirect loops bounce visitors endlessly between pages that redirect to each other.
8.19 No redirect chains
Redirect chains don’t go on forever, but they force visitors (and crawlers) to wait while they’re forwarded through several URLs on the way to the most up-to-date one.
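Given a map of redirects, a short traversal can surface both chains and loops. A sketch with an illustrative redirect map:

```python
# Illustrative redirect map: source URL -> target URL.
redirects = {
    "/a": "/b",
    "/b": "/c",      # chain: /a -> /b -> /c
    "/x": "/y",
    "/y": "/x",      # loop: /x <-> /y
}

def follow(url, redirects, limit=10):
    """Return the final URL, or None on a loop (or an overly long chain)."""
    seen = [url]
    while url in redirects:
        url = redirects[url]
        if url in seen or len(seen) > limit:
            return None
        seen.append(url)
    return url

print(follow("/a", redirects))  # chain resolves
print(follow("/x", redirects))  # loop detected
```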
8.20 Googlebot does not see different content
Googlebot should see the same content as human visitors; serving it different content (cloaking) violates Google’s guidelines and can lead to penalties.
8.21 Pages are served with a single canonical URL
Common errors include serving both http:// and https:// versions of a page, www and non-www hostnames, inconsistent letter case in URLs, and versions with and without a trailing slash.
8.22 301 redirects are in place for non-preferred URL versions
URL versions that use http:// instead of https:// (for example) should redirect automatically to the correct version.
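Normalization like this is usually done in server config, but the policy itself can be sketched in Python (the preference for https, no www, lowercase, and no trailing slash is an illustrative choice):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Normalize to one preferred form: https, no www, lowercase host and
    path, no trailing slash (except the root). Illustrative policy."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    path = parts.path.lower().rstrip("/") or "/"
    return urlunsplit(("https", host, path, parts.query, ""))

print(canonicalize("http://WWW.Example.com/Category/Item/"))
```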
8.23 All user agents receive same content
The same content should be served to all browsers and devices.
8.24 Sitemap exists
It’s useful to have a sitemap.xml file, and most CMSes will generate one automatically.
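If your CMS doesn't generate one, a minimal sitemap.xml can be built with Python's standard library (URLs illustrative):

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Build a minimal sitemap.xml document as a string."""
    root = ET.Element("urlset", xmlns=NS)
    for url in urls:
        entry = ET.SubElement(root, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(root, encoding="unicode")

sitemap = build_sitemap(["https://example.com/", "https://example.com/about"])
print(sitemap)
```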
8.25 Sitemap is known to and validated by Search Console
Google Search Console should know about your sitemap, and it will also tell you if the sitemap has errors.
8.26 Sitemap URLs all return "success" status codes
All URLs in the sitemap should be functional.
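Cross-checking sitemap entries against crawl results catches dead URLs. A sketch with an illustrative sitemap and status map:

```python
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

# Illustrative sitemap and crawl results (url -> observed status code).
SITEMAP = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/old-page</loc></url>
</urlset>"""
statuses = {"https://example.com/": 200, "https://example.com/old-page": 404}

locs = [e.text for e in ET.fromstring(SITEMAP).iter(NS + "loc")]
broken = [u for u in locs if statuses.get(u, 0) >= 400]
print(broken)
```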
8.27 Nofollow links, if present, are used deliberately
Nofollow attributes aren’t usually necessary, but if you are using them, make sure they’re not interfering with crawling of your site.