Redirects
URL redirection is the process of forwarding site visitors (and search engine crawlers) from one URL to another, typically when the page they requested is no longer live on the site. Redirects are commonly implemented during site migrations and re-architecture projects, and when pages naturally expire. They can also be used to consolidate ranking signals. Our SEO Office Hours notes below cover the different redirection types and explore how Google understands these.
Further reading: The ABCs of HTTP Status Codes
Google May Choose a Redirect URL Instead of the Target
Canonical URL selection takes redirects into account, alongside internal and external links and sitemaps, but even when a redirect is in place Google might still choose to index the redirect source instead of the target.
302 Redirects Won’t Be Cached by the Browser
302 (temporary) redirects won’t be cached by the browser by default, so the original URL is re-requested on every visit; 301 (permanent) redirects, by contrast, may be cached.
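As a minimal, standard-library sketch (the paths and target URL are placeholders, not from the note), the handler below issues a 302 from /temp, which browsers re-request every time by default, and a 301 from /perm, which they may cache and skip on subsequent visits:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/temp":
            # 302: temporary, not cached by default; the browser
            # re-requests this URL on every visit
            self.send_response(302)
            self.send_header("Location", "https://example.com/new-page")
        elif self.path == "/perm":
            # 301: permanent; the browser may cache the redirect and
            # go straight to the target next time
            self.send_response(301)
            self.send_header("Location", "https://example.com/new-page")
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()
```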
Domain Redirects Should Remain for at Least a Year
Site move domain redirects should remain in place for at least a year, and ideally for as long as possible.
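A minimal sketch of what a site-move redirect layer might look like, assuming a move from old-domain.example to new-domain.example (both placeholders): every path and query string is forwarded one-to-one with a permanent 301, and the app stays in place for at least a year:

```python
from wsgiref.simple_server import make_server

NEW_ORIGIN = "https://new-domain.example"  # placeholder for the new domain

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    query = environ.get("QUERY_STRING", "")
    # Preserve the full path and query string so each old URL maps
    # one-to-one to its equivalent on the new domain
    target = NEW_ORIGIN + path + ("?" + query if query else "")
    # 301 signals the move is permanent
    start_response("301 Moved Permanently", [("Location", target)])
    return [b""]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```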
Excessive URL Parameters and Rewrites Can Cause Problems
Google can have problems crawling your site if your URL structure produces an excessive number of URL parameter variations and rewrites that all redirect to a small number of pages.
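One common mitigation is to normalise query strings before (or instead of) redirecting, so that many parameter permutations collapse to a single URL. A hedged sketch, where the TRACKING set of parameters is an assumption for illustration:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that don't change page content (assumed list for illustration)
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def canonicalise(url: str) -> str:
    parts = urlsplit(url)
    # Drop tracking parameters, then sort the rest so reordered
    # variants of the same URL all normalise to one form
    params = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    params.sort()
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(params), ""))

print(canonicalise("https://example.com/shoes?utm_source=x&size=9&colour=red"))
# https://example.com/shoes?colour=red&size=9
```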
Separate Non-WWW/WWW and HTTP to HTTPS Redirects Are OK
It’s OK to use separate redirects for non-WWW to WWW and HTTP to HTTPS, which can result in a two-step chain. Ideally, you would redirect in a single step where possible.
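A sketch of computing the single-step target, assuming www.example.com over HTTPS is the canonical origin (a placeholder): the scheme and host are fixed together, so the user agent follows only one redirect:

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.example.com"  # placeholder canonical host

def single_step_target(url: str) -> str | None:
    parts = urlsplit(url)
    if parts.scheme == "https" and parts.netloc == CANONICAL_HOST:
        return None  # already canonical, no redirect needed
    # Fix scheme and host in one hop instead of chaining two redirects
    return urlunsplit(("https", CANONICAL_HOST, parts.path, parts.query, ""))

print(single_step_target("http://example.com/page?a=1"))
# https://www.example.com/page?a=1
```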
Last-Modified Header Used for 304
The Last-Modified header is taken into account when serving a 304 status code in response to a request that includes an If-Modified-Since header.
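A sketch of this conditional behaviour using only the standard library, assuming the resource’s last-modified time is tracked in LAST_MODIFIED (a placeholder date): if the client’s copy is not older, the server answers 304 with no body:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from email.utils import parsedate_to_datetime, format_datetime
from datetime import datetime, timezone

LAST_MODIFIED = datetime(2024, 1, 1, tzinfo=timezone.utc)  # placeholder

class ConditionalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ims = self.headers.get("If-Modified-Since")
        if ims:
            try:
                since = parsedate_to_datetime(ims)
            except (TypeError, ValueError):
                since = None  # malformed date: fall through to a full response
            if since and since >= LAST_MODIFIED:
                # Client's cached copy is still current: headers only, no body
                self.send_response(304)
                self.end_headers()
                return
        body = b"full response body"
        self.send_response(200)
        self.send_header("Last-Modified",
                         format_datetime(LAST_MODIFIED, usegmt=True))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ConditionalHandler).serve_forever()
```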
Set up Image Redirects when URLs Change
If you change image URLs, set up redirects from the old URLs so the new ones get picked up more quickly.
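For example, a simple old-path to new-path map can drive the redirects; the entries in IMAGE_MOVES below are illustrative only:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder mapping of renamed image URLs
IMAGE_MOVES = {
    "/img/hero.jpg": "/assets/images/hero.jpg",
    "/img/logo.png": "/assets/images/logo.png",
}

class ImageRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        new_path = IMAGE_MOVES.get(self.path)
        if new_path:
            # 301 signals the image has moved permanently
            self.send_response(301)
            self.send_header("Location", new_path)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ImageRedirectHandler).serve_forever()
```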
Site: Returns URLs from Redirected Domains
URLs from redirected domains may still show up in a site: search.
Add Last Modified to Redirects in Sitemaps
When redirecting URLs, include them in a sitemap with a last modified date set after the redirect was put in place; this will encourage them to be crawled more quickly.
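A minimal sketch of such a sitemap, generated with the standard library; the URLs and dates are placeholders, with each lastmod set after the redirect went live:

```python
from xml.etree.ElementTree import Element, SubElement, tostring

# Placeholder redirecting URLs, each paired with a date after its redirect
REDIRECTED = [
    ("https://example.com/old-page-1", "2024-06-01"),
    ("https://example.com/old-page-2", "2024-06-01"),
]

urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for loc, lastmod in REDIRECTED:
    url = SubElement(urlset, "url")
    SubElement(url, "loc").text = loc
    SubElement(url, "lastmod").text = lastmod  # dated after the redirect

print(tostring(urlset, encoding="unicode"))
```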
Solve Duplication with Redirects, Canonical and Linking
John recommends using redirects, canonical tags, and consistent internal linking to the primary page to solve duplication. He says Google advises against using robots.txt to prevent content duplication, because Google can’t recognise that pages are duplicates if it cannot crawl them.
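As a tiny illustration (the URL is a placeholder), the same rel="canonical" tag would be emitted on the primary page and on each duplicate variant, while leaving all of them crawlable:

```python
from html import escape

def canonical_tag(primary_url: str) -> str:
    # Emit the same tag on the primary page and on every duplicate variant,
    # so Google can crawl the duplicates and consolidate them
    return f'<link rel="canonical" href="{escape(primary_url, quote=True)}">'

print(canonical_tag("https://example.com/shoes"))
# <link rel="canonical" href="https://example.com/shoes">
```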