# Google Webmaster Hangouts Notes – 22 March 2019 – Part 2

Welcome to MarketingSyrup! This post is part of my Google Webmaster Hangouts Notes. I cover them regularly to save you time.

This is the second part of the notes from March 22nd; you can find the first part here. And here is the full video of the Google Webmaster Hangout; the timestamps of the answers are in brackets.

Google hasn’t been using rel=”prev/next” for some time now (15:21)

But in most cases, you don’t need to worry: if pagination is working well for you now, there’s nothing you need to change.

  • Don’t remove rel=”prev/next” as other search engines use it
  • Make sure paginated URLs can stand on their own
  • Link between paginated content.

You can find more info on this in my recent post on rel=”prev/next”.
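
For illustration, here’s a minimal Python sketch of how rel=”prev”/rel=”next” link tags could be generated for paginated URLs. The URL pattern and function name are made up for the example; this is just one way to keep the markup around for other search engines.

```python
# Minimal sketch: build rel="prev"/"next" link tags for a paginated archive.
# The URL pattern (example.com/blog/page/N/) is a made-up assumption.

def pagination_link_tags(page: int, total_pages: int,
                         base_url: str = "https://example.com/blog") -> list[str]:
    """Return <link> tags pointing to the previous and next pages, if they exist."""
    tags = []
    if page > 1:
        # Page 1 usually lives at the base URL rather than /page/1/.
        prev_url = base_url + "/" if page == 2 else f"{base_url}/page/{page - 1}/"
        tags.append(f'<link rel="prev" href="{prev_url}">')
    if page < total_pages:
        tags.append(f'<link rel="next" href="{base_url}/page/{page + 1}/">')
    return tags

print("\n".join(pagination_link_tags(page=2, total_pages=5)))
# <link rel="prev" href="https://example.com/blog/">
# <link rel="next" href="https://example.com/blog/page/3/">
```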

Redirecting an image URL to an HTML page won’t pass PageRank (31:35)

Google picks up images for Image Search. If an image URL is not available, Google will not index it, nor will it pass anything through a redirect from an image URL to an HTML URL.

For example, redirecting an infographic’s image URL to a web page won’t pass any PageRank. Only a redirect from an HTML page to another HTML page will do that.
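
If you want to spot image URLs that now redirect to HTML pages, a quick check like the sketch below can help; the URL is hypothetical and the use of the requests library is an assumption.

```python
# Minimal sketch: flag image URLs that redirect to an HTML page instead of
# serving an image. The URL and the requests library are assumptions.
import requests

def check_image_redirect(image_url: str) -> None:
    response = requests.get(image_url, allow_redirects=True, timeout=10)
    content_type = response.headers.get("Content-Type", "")
    if response.history and "text/html" in content_type:
        print(f"{image_url} redirects to an HTML page ({response.url}); "
              "this won't pass PageRank to the target page.")
    elif not content_type.startswith("image/"):
        print(f"{image_url} no longer serves an image (Content-Type: {content_type}).")
    else:
        print(f"{image_url} serves an image as expected.")

check_image_redirect("https://example.com/infographic.png")  # hypothetical URL
```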

Google can rewrite title tags if they’re stuffed with keywords (35:17)

Google recognizes keyword-stuffed title tags and can rewrite them in the search results.

Kristina’s note: But this doesn’t mean that if your titles get rewritten, they are over-optimized. There are many other factors that come into play here.
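
There is no public formula for when Google rewrites a title, but as a rough, purely illustrative sketch, something like this could flag obviously repetitive titles. The repetition threshold is an arbitrary assumption, not a Google rule.

```python
# Rough illustration only: flag titles that repeat the same word many times.
# The threshold of 3 repetitions is an arbitrary assumption.
from collections import Counter
import re

def looks_keyword_stuffed(title: str, max_repeats: int = 3) -> bool:
    words = re.findall(r"[a-z0-9]+", title.lower())
    counts = Counter(w for w in words if len(w) > 3)  # ignore very short words
    return any(count > max_repeats for count in counts.values())

print(looks_keyword_stuffed(
    "Cheap shoes – buy cheap shoes online, cheap shoes store, best cheap shoes"))  # True
print(looks_keyword_stuffed("10 Tips for Choosing Running Shoes"))  # False
```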

Google uses the URL parameter handling tool found in GSC to decide what pages should not be crawled (36:10)

Google uses the directives you can set up in the URL parameter handling tool. They’re especially useful for large websites.

But before making any changes to the default settings, make sure you understand how it works as you can mistakenly prevent your important pages from being crawled.
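
Conceptually, the tool lets you tell Google which query parameters don’t change page content. Here is a minimal sketch of that idea, with made-up parameter names; the real settings live in the GSC interface, not in code.

```python
# Minimal sketch: decide which query parameters change the content and which
# can be ignored. The parameter names below are example assumptions.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

IGNORABLE_PARAMS = {"sessionid", "utm_source", "utm_medium", "utm_campaign"}

def canonical_form(url: str) -> str:
    """Drop parameters that don't change page content, keep the rest."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORABLE_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonical_form("https://example.com/shoes?sort=price&sessionid=abc123&utm_source=news"))
# https://example.com/shoes?sort=price
```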

Google will not highlight disavowed links in Google Search Console (37:00)

Removing or highlighting disavowed links might make sense, but it’s not worth the effort for Google. Moreover, putting too much visible weight on disavowing links might encourage webmasters to use the tool when, in reality, most websites don’t need it.

Once you remove links from the disavow file, they’ll be seen by Google as regular links (38:56)

If you remove links from the disavow file (un-disavow them 🙂), Google will treat them as normal incoming links after it re-crawls them. There’s no fixed timeframe for that; it can take days, weeks or even months.
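
If you keep your disavow file locally or in version control, un-disavowing is just removing lines before re-uploading the file in Search Console. A minimal sketch, with hypothetical file names and entries:

```python
# Minimal sketch: "un-disavow" by filtering entries out of a disavow file
# before re-uploading it. File names and entries are assumptions.

def undisavow(input_path: str, output_path: str, entries_to_remove: set[str]) -> None:
    with open(input_path) as src, open(output_path, "w") as dst:
        for line in src:
            entry = line.strip()
            # Keep comments, blank lines, and everything not being un-disavowed.
            if entry.startswith("#") or entry == "" or entry not in entries_to_remove:
                dst.write(line)

undisavow("disavow.txt", "disavow-updated.txt", {"domain:spammy-links.example"})
```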


If you’re using server-side JavaScript rendering, make sure the rendered pages don’t have references to JS files (44:45)

If you’re using server-side rendering and JavaScript is still referenced from the rendered page, Googlebot assumes that the JS is still relevant and tries to execute it. So it’s better to render the page completely on the server and leave no JS references on the rendered pages.

If that’s problematic, you can at least use fewer JS files, reduce version indicators in JS file URLs, and enable caching.
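
To check whether your rendered pages still reference external JS, you can scan the server-rendered HTML for script tags. A minimal sketch, assuming Python with requests and BeautifulSoup and a hypothetical URL:

```python
# Minimal sketch: list external JS files referenced by a server-rendered page.
# The URL and the requests/BeautifulSoup libraries are assumptions.
import requests
from bs4 import BeautifulSoup

def external_scripts(page_url: str) -> list[str]:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [tag["src"] for tag in soup.find_all("script", src=True)]

scripts = external_scripts("https://example.com/rendered-page")  # hypothetical URL
print(f"{len(scripts)} external JS reference(s) found:")
for src in scripts:
    print(" -", src)
```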

Make sure that your displayed date on a blog post and the date in structured data for that post are consistent (47:07)

Google uses the visible date on a page as well as the date in the structured data you provide to determine which one should be displayed in the search results. Make sure this info is consistent across your website and that there are no time zone issues in your structured data (e.g. 9 am GMT vs 9 am EST).

Further reading: Provide a publication date to Google search
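
Here is a minimal sketch of such a consistency check, comparing the visible date with datePublished from JSON-LD using timezone-aware values; the example values are assumptions.

```python
# Minimal sketch: compare the visible publication date with datePublished in
# JSON-LD, using timezone-aware datetimes. Example values are assumptions.
import json
from datetime import datetime, timezone

structured_data = json.loads("""
{"@type": "BlogPosting", "datePublished": "2019-03-22T09:00:00+00:00"}
""")

displayed_date = datetime(2019, 3, 22, 9, 0, tzinfo=timezone.utc)  # what the page shows
declared_date = datetime.fromisoformat(structured_data["datePublished"])

if displayed_date != declared_date:
    print("Mismatch: visible date and structured data disagree (check time zones).")
else:
    print("Dates are consistent.")
```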

Don’t use intrusive pop-ups or interstitials (52:47)

This is especially relevant for mobile websites as they’re accessed from smaller screens. Intrusive pop-ups or interstitials on mobile websites are flagged as issues by Google, so avoid using them.

Kristina’s note: it sounds pretty straightforward. But the main problem seems to be understanding when it’s too much and how to find a middle ground. John suggested performing user testing to get feedback on how people see your website; if the ads there are too distracting, get rid of them.

500 errors can reduce a website’s crawl frequency and make pages drop out of search (53:56)

If Googlebot sees pages returning 500 errors, it might assume the errors are caused by its crawling and reduce its crawl frequency. Such pages might also be removed from Google’s index, so 500 errors should be taken seriously.
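
A quick way to spot-check a handful of URLs for 5xx responses, assuming Python with the requests library and hypothetical URLs:

```python
# Minimal sketch: spot-check a list of URLs for 5xx responses.
# The URL list and the requests library are assumptions.
import requests

urls = [
    "https://example.com/",
    "https://example.com/category/shoes/",
]

for url in urls:
    try:
        status = requests.get(url, timeout=10).status_code
    except requests.RequestException as exc:
        print(f"{url}: request failed ({exc})")
        continue
    if 500 <= status < 600:
        print(f"{url}: server error {status}; fix this before Googlebot slows its crawling.")
    else:
        print(f"{url}: {status}")
```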

Previous episodes

Subscribe to get the notes and other useful tips delivered directly to your inbox!