JavaScript Rendering & SEO
Search engines treat JavaScript content on a website differently from typical HTML content and render it separately. As the use of JavaScript on the web increases due to the number of features it makes possible, it is important to understand how search engines view this content and how to optimize for it. Check out our “Hamburger Analogy”, an introduction to client-side vs. server-side JavaScript rendering.
Our SEO Office Hours notes and video clips compiled here cover JavaScript-related SEO best practices from Google, along with notes on the latest advancements to their rendering engine.
Google Will Not Render JavaScript Content if The Page Returns a Redirect or Error Code
If a page contains JavaScript content but returns a redirect or an error code, Google will not spend time rendering that content. For example, if you use JavaScript on a 404 page to display an error message or links, that content will not be rendered. With redirects, Google does not need to render the content in order to follow the redirect to the new page.
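The decision described above can be sketched as a simple status-code check; this is an illustrative model of the behaviour, not Google's actual implementation:

```javascript
// Sketch of the rendering decision: Google only spends rendering
// resources on pages that return a successful status code.
function shouldRender(statusCode) {
  // Redirects (3xx) are followed without rendering; error pages
  // (4xx/5xx) such as a 404 are not rendered, so JS-injected error
  // text or links on them are never seen.
  return statusCode >= 200 && statusCode < 300;
}

console.log(shouldRender(200)); // true
console.log(shouldRender(301)); // false
console.log(shouldRender(404)); // false
```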
Google Needs to Access JS Files & Server End Points Used For AJAX Requests
If the JavaScript files or server endpoints used for AJAX requests on page load are blocked in robots.txt, then Googlebot won’t be able to see or index any of the content that these requests generate.
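As an illustration, a robots.txt rule like the one below would block a hypothetical `/api/` endpoint used by AJAX requests, hiding that content from Googlebot; the `/api/` and `/assets/js/` paths are assumptions for the example:

```
# BAD: blocking the endpoint hides AJAX-loaded content from Googlebot
User-agent: *
Disallow: /api/

# BETTER: let crawlers fetch JS files and the endpoints they call
User-agent: *
Allow: /api/
Allow: /assets/js/
```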
JavaScript SEO Will Evolve to be More About Debugging Issues When JS Works by Default
Martin sees JavaScript SEO as evolving from working around the pitfalls with today’s technologies to knowing what can go wrong when JavaScript works out of the box and how to debug issues. Google can help by providing troubleshooting tools, but technical understanding will still be required.
JavaScript SEO Will Continue to be Necessary Due to Changing Frameworks, Complex Issues & Poor Implementations
John and Martin believe that JavaScript SEO won’t go away as Google’s rendering capabilities improve, because of the continual changes to frameworks and new elements in Chrome, poor technical implementations, and the complexity of debugging issues.
Google is Rendering More Pages as it is Cheaper Than Running a Rendering Heuristic
Googlebot is putting increasingly more pages through the render phase, even if they don’t run JavaScript, because it is cheaper resource-wise for Google to render the page than to run a complex heuristic to decide whether it should be rendered.
Google Determines if Pages Need to be Rendered by Comparing Content Found in Initial HTML & Rendered DOM
Google compares the content of the raw HTML of a page from the initial crawl to the rendered DOM after rendering to see if there is new content and to determine if it needs to be rendered going forward.
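A rough sketch of this kind of comparison, assuming the two snapshots are available as HTML strings; the tag-stripping here is deliberately naive and only illustrates the idea of diffing initial HTML against the rendered DOM:

```javascript
// Naive visible-text extraction: strip scripts, styles and tags.
// A real rendering pipeline is far more sophisticated than this.
function visibleText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

// Returns true if rendering produced content absent from the raw HTML,
// i.e. the page likely needs the render phase to be indexed fully.
function renderingAddsContent(initialHtml, renderedHtml) {
  const before = visibleText(initialHtml);
  const after = visibleText(renderedHtml);
  return after.length > before.length && !before.includes(after);
}

const initial =
  '<html><body><div id="app"></div><script src="app.js"></script></body></html>';
const rendered =
  '<html><body><div id="app"><h1>Product list</h1></div></body></html>';
console.log(renderingAddsContent(initial, rendered)); // true
```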
More or Less Every New Website is Rendered When Google Crawls it For the First Time
Nearly every website goes through the two waves of indexing when Google sees it for the first time, meaning it isn’t indexed before it has been rendered.
Ensure Hidden Content is Set Up With CSS Rather Than JS if You Want the Content to be Indexed
If you have hidden content which you want to be indexed, ensure it is implemented using CSS rather than injected with JavaScript, so that Google can see the content has been loaded when they crawl and render the page.
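For example, an expandable section can keep its content in the initial DOM and merely toggle visibility with CSS; the class names and copy here are illustrative:

```html
<!-- The content is present in the DOM on load, so it can be indexed; -->
<!-- CSS only controls whether it is displayed. -->
<style>
  .accordion-panel { display: none; }
  .accordion-panel.is-open { display: block; }
</style>

<button onclick="this.nextElementSibling.classList.toggle('is-open')">
  Delivery information
</button>
<div class="accordion-panel">
  <p>Orders ship within two working days.</p> <!-- indexable on first crawl -->
</div>
```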
An Updated User Agent is Expected to Reflect The New Modern Rendering Infrastructure
Google has been experimenting with the current user agent settings and is rethinking the setup. John expects some changes to be announced in the future around an updated user agent so that it reflects the new modern rendering infrastructure.
Look Into Server-side Rendering For Improved UX as Dynamic Rendering is a Temporary Workaround for Crawlers
Dynamic rendering is a temporary workaround that allows search engine crawlers and social media crawlers to access content even if they can’t render JavaScript. John foresees dynamic rendering becoming less useful in a few years as all crawlers get better at processing JavaScript, so look into server-side rendering for an improved experience for users.
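A minimal sketch of the user-agent check behind dynamic rendering, assuming a prerendered snapshot exists for crawler traffic; the bot list is illustrative, not exhaustive:

```javascript
// Illustrative list of crawler substrings; a production setup would
// maintain a fuller, regularly updated list.
const BOT_PATTERN =
  /googlebot|bingbot|twitterbot|facebookexternalhit|linkedinbot/i;

function isCrawler(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// Decide which variant of the page to serve: crawlers get the
// prerendered HTML, regular users get the client-side app.
function chooseVariant(userAgent) {
  return isCrawler(userAgent) ? "prerendered-html" : "client-side-app";
}

console.log(
  chooseVariant("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
); // "prerendered-html"
console.log(chooseVariant("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"));
// "client-side-app"
```

Note that serving crawlers a prerendered snapshot of the same content is treated as dynamic rendering, not cloaking, as long as the content matches what users see.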