
JavaScript Rendering & SEO

Search engines treat JavaScript content on a website differently from typical HTML content and render it separately. As the use of JavaScript on the web grows because of the features it makes possible, it is important to understand how search engines view this content and how to optimize for it. Check out our “Hamburger Analogy”, an introduction to client-side vs. server-side JavaScript rendering.

Our SEO Office Hours notes and video clips compiled here cover JavaScript-related SEO best practices from Google, along with notes on the latest advancements to their rendering engine.

Googlebot Crawls URLs in JavaScript Code But Treats Them as Nofollow

Googlebot crawls URLs found in JavaScript code but automatically treats them as nofollow. If the JavaScript modifies the HTML, then Google will respect a nofollow attribute.
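
As a hypothetical illustration (not from the clip itself), the difference looks roughly like this: a URL that only appears inside JavaScript code may be discovered but treated as nofollow, while a nofollow attribute on a link the script actually writes into the HTML is respected as usual.

```typescript
// Hypothetical example: this URL only exists inside JavaScript code, so
// Googlebot may discover it but will treat it as nofollow.
fetch('/api/related-products').then((res) => res.json());

// By contrast, if the script modifies the HTML and adds a real anchor,
// Google respects any nofollow attribute on that link as usual.
const link = document.createElement('a');
link.href = '/related-products';
link.rel = 'nofollow'; // respected once the link is part of the HTML
link.textContent = 'Related products';
document.body.appendChild(link);
```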

30 Jun 2017

Links Which Are Missing on Mobile Versions of Pages Won’t be Seen

Google can crawl links added by JavaScript on mobile pages when it renders the page, but if links that exist on the desktop pages are missing entirely from the mobile pages, then Google won’t see them.

27 Jun 2017

Google Recommends Pre-Rendered HTML With JavaScript Frameworks

Google recommends using isomorphic JavaScript or, for Angular, Angular Universal, so the first page is pre-rendered as HTML. This means the page loads very quickly and search engines don’t have to process JavaScript.
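
As a rough sketch of the isomorphic approach, here is an Express server using React’s renderToString (the note itself mentions Universal for Angular; the framework differs but the principle is the same). The App component and client.js bundle are hypothetical placeholders.

```typescript
// Minimal isomorphic rendering sketch: the server sends fully rendered HTML
// on the first request, and the client-side bundle hydrates it afterwards.
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App'; // hypothetical application root component

const server = express();

server.get('*', (req, res) => {
  // Pre-render the app to plain HTML so crawlers and first-time visitors
  // get content without having to execute any JavaScript.
  const html = renderToString(React.createElement(App, { url: req.url }));
  res.send(`<!DOCTYPE html>
<html>
  <head><title>Pre-rendered page</title></head>
  <body>
    <div id="root">${html}</div>
    <script src="/client.js"></script>
  </body>
</html>`);
});

server.listen(3000);
```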

2 Jun 2017

Google Doesn’t Handle All Onclick Events

Google tries to process onclick JavaScript events, but can only go so far. Some actions, such as infinite scroll implementations that change the URL, won’t be run because they would appear to be redirects. For JavaScript sites, make sure the DOM contains the normal elements and href links.
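
A hypothetical before/after for a simple category link: the plain anchor gives Googlebot an href it can pick up from the DOM, while the onclick-only element gives it nothing to follow.

```typescript
// Crawlable: a normal anchor element with an href in the DOM.
const goodLink = document.createElement('a');
goodLink.href = '/category/shoes';
goodLink.textContent = 'Shoes';
document.body.appendChild(goodLink);

// Not reliably crawlable: navigation only happens inside an onclick handler,
// so there is no href for Googlebot to discover.
const badLink = document.createElement('div');
badLink.textContent = 'Shoes';
badLink.onclick = () => {
  window.location.href = '/category/shoes';
};
document.body.appendChild(badLink);
```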

16 May 2017

Google is Trying to Render All Pages

Google is trying to render every page they crawl, so they are not planning on providing any data on which pages are being rendered.

16 May 2017

Google Doesn’t Fetch All Resources When Rendering a Page

Google caches a lot of the CSS and JavaScript on pages to reduce the load on servers.

16 May 2017

Google Doesn’t Always Crawl Lazy-loaded Images

Google will not always crawl images which are implemented using lazy loading.
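
A common lazy-loading pattern of the time is sketched below, using IntersectionObserver and a hypothetical data-src attribute. Because Google may not execute this script, keeping a plain, crawlable img src fallback for important images is the safer option.

```typescript
// Lazy-loading sketch: swap data-src into src only when an image scrolls
// into view. The data-src attribute and selector are assumptions made for
// the sake of the example.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? ''; // load the real image URL
    obs.unobserve(img);
  }
});

document.querySelectorAll<HTMLImageElement>('img[data-src]').forEach((img) => {
  observer.observe(img);
});
```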

5 May 2017

Content that Fades in Onload With ~2s Transition is Not Considered Hidden

Content is considered hidden when it is not visible at all. Content which fades in within a couple of seconds during a transition or is slightly transparent is likely to be indexed. You can test a transition using Fetch and Render.
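
For reference, a fade-in of this kind is usually just an opacity animation over content that is already in the DOM; a minimal sketch using the Web Animations API, with a hypothetical selector and a ~2 second duration, is shown below.

```typescript
// Fade the content in over ~2 seconds. The text is present in the DOM the
// whole time; only its opacity changes, so it is not treated as hidden.
const content = document.querySelector<HTMLElement>('.intro-text'); // hypothetical selector

if (content) {
  content.animate(
    [{ opacity: 0 }, { opacity: 1 }],
    { duration: 2000, easing: 'ease-in', fill: 'forwards' }
  );
}
```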

2 May 2017

JavaScript Links May be Treated as Nofollow

If a JavaScript link relies on any server-side processing, it probably won’t be crawled correctly. Scripts which merely generate links will be treated as nofollow, but if you use JavaScript to build an actual link in the page, it will be followed.

4 Apr 2017

Test HTML Parsing Issues with a Headless Browser

Google uses a normal parser to determine which tags fall outside of the head. John recommends using a headless browser or a W3C validator to check pages.
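
A sketch of that kind of check, using Puppeteer as one possible headless browser (the note doesn’t name a specific tool): load the page and list which elements the parser actually kept inside the head, so anything pushed into the body stands out.

```typescript
import puppeteer from 'puppeteer';

// Report which tags ended up inside <head> after the browser parsed the page.
async function checkHead(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });

  const headTags = await page.evaluate(() =>
    Array.from(document.head.children).map((el) => el.tagName.toLowerCase())
  );
  console.log(`Tags parsed into <head> for ${url}:`, headTags);

  await browser.close();
}

checkHead('https://example.com/');
```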

7 Mar 2017
