# Google Webmaster Hangouts Notes – 22 March 2019 – Part 1

Welcome to MarketingSyrup! This post is part of my Google Webmaster Hangouts Notes. I cover them regularly to save you time.

Quick Info

This is the first part of the notes from March 22nd. You can find the full video below; the timestamps of the answers are given in brackets.

Are low-traffic pages thin and should they be removed? (1:30)

If pages don’t get a lot of traffic from Google, this doesn’t automatically mean they are considered thin content. Some search queries simply aren’t popular, so the pages targeting them can’t attract many users. In this scenario, there’s no need to remove low-traffic pages from the website.

Similarly, for large websites with only a small percentage of thin content (e.g. old news pages), it’s not critical to remove or change those pages.

But if a large part of the website is thin, then it might make sense to clean it up. You can:

  • Update pages
  • Consolidate content
  • Redirect old content to a more relevant URL
  • Remove the page

The path to choose greatly depends on your situation. But in any case, it’s also worth rethinking your approach to content creation to avoid thin content in the future.

Make sure your image URLs are not dynamically changed, and have redirects in place if you modify them (4:19)

Google treats images as more static content than text, so it doesn’t re-crawl them very often. This means that if an image URL changes, it’ll take Google time to re-crawl the image and show it again in image search.

Here are general rules of thumb if you care about Google image search:

  • Make sure your CMS doesn’t generate new image URLs for existing images (e.g. for every session)
  • Don’t change image URLs unless it’s absolutely necessary
  • If you do change image URLs, set up 301 redirects from the old URLs to the new ones (see the sketch below).
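
To illustrate the last point, here’s a minimal sketch of such a redirect, assuming an Apache server where you can edit .htaccess; the image paths are hypothetical and your setup may differ:

```apache
# .htaccess: permanently redirect a renamed image to its new URL
# (hypothetical paths; adjust to your own structure)
Redirect 301 /images/old-photo.jpg /images/new-photo.jpg

# Or redirect a whole renamed image directory via mod_rewrite:
RewriteEngine On
RewriteRule ^img/(.*)$ /images/$1 [R=301,L]
```

The 301 status tells Google the move is permanent, so the old image URL’s signals are passed to the new one.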

Related post:
How to optimize images for SEO

Personalizing content for different users is not treated as ‘bad cloaking’ by Google (7:51)

Cloaking is showing one version of content to Googlebot and a different version to users. This is against Google’s guidelines, and a website can get a manual action for it. For example, showing Google structured data which is significantly different from the information people see on a page is deemed cloaking.
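
As a hypothetical illustration of the kind of mismatch that counts as cloaking, consider a page whose visible price differs significantly from the price in its structured data (the product and values are made up):

```html
<!-- Visible content tells users one price... -->
<p>Price: $149</p>

<!-- ...while the structured data tells Google a significantly
     lower one. This kind of mismatch can earn a manual action. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": { "@type": "Offer", "price": "49", "priceCurrency": "USD" }
}
</script>
```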

In contrast, personalization is providing different content to users based on some criteria, e.g. location. In this case, Google will only see the content for US users (as Googlebot crawls from the USA).

Thus, content personalization is not the same as cloaking, and Google is OK with it. For example, you can localize your website’s information based on the user’s location.

But you should understand that Google will only index the information that is available to it. In the case of localization, Googlebot will see the US content only and will neither see nor index content for any other country.

A general recommendation for such situations is to have a sufficient amount of universal static content which is available to all users regardless of their location. Then add blocks with personalized content. In this case you don’t need to worry about which part of a page gets indexed as the static part will be picked up by Google anyway.
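
As a hypothetical sketch of this setup, the page below keeps the universal content in plain HTML and injects only the location-specific block on the client side; the element ID and endpoint are invented for the example:

```html
<!-- Universal content: identical for every visitor (including Googlebot),
     so this is the part Google reliably sees and indexes. -->
<main>
  <h1>Our Product</h1>
  <p>Feature descriptions, pricing, and FAQs: the same for everyone.</p>

  <!-- Personalized block: filled in per visitor based on location.
       Googlebot, crawling from the US, will only ever see the US version. -->
  <section id="local-offers"></section>
</main>

<script>
  // Hypothetical endpoint returning location-specific HTML.
  fetch('/api/local-offers')
    .then(response => response.text())
    .then(html => {
      document.getElementById('local-offers').innerHTML = html;
    });
</script>
```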

It’s OK if a domain is not reachable via the same IP address all the time (11:18)

This usually happens if you’re using CDNs or services that respond to users’ requests and allocate users dynamically to different servers with different IP addresses. It’s perfectly fine for Google.


Google ignores content on pages returning 404 or soft 404 errors (12:00)

Google won’t use content on a page which returns a 404 (or soft 404) error and will drop that URL from search. A soft 404 is a page whose content says it doesn’t exist (or is empty) while the server still returns a 200 status code.
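
A minimal sketch of returning a real 404 instead of a soft one, assuming a Flask app with a made-up route and product list:

```python
from flask import Flask, abort

app = Flask(__name__)

# Hypothetical in-memory "catalog" standing in for a real data source.
PRODUCTS = {"1": "Blue widget", "2": "Red widget"}

@app.route("/products/<product_id>")
def product(product_id):
    if product_id not in PRODUCTS:
        # A soft 404 would be: return "Product not found", 200
        # A real 404 status tells Google to drop the URL from search.
        abort(404)
    return PRODUCTS[product_id]
```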

What happens if a 200 page has a canonical pointing to a noindex page? (13:16)

If you have a page returning a 200 HTTP response whose canonical tag points to a page with noindex, multiple things can happen. There’s no single answer that applies to all situations.

Google uses many factors to understand which page is canonical. If internal links point to the noindexed page, it may signal to Google that you really want the noindexed page to be seen as canonical.

If internal links go to the normal page, then Google might ignore the canonical.

But Google will never index a noindex page even if there are canonicals pointing to it.
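
To make the conflicting signals concrete, here’s a hypothetical pair of pages in exactly this situation (the URLs are made up):

```html
<!-- Page A: https://example.com/page-a
     Returns 200, but declares Page B as the canonical version. -->
<head>
  <link rel="canonical" href="https://example.com/page-b">
</head>

<!-- Page B: https://example.com/page-b
     Asks not to be indexed, contradicting the canonical hint from Page A. -->
<head>
  <meta name="robots" content="noindex">
</head>
```

Google then has to resolve the contradiction using other signals, such as internal links, as described above.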

Related reply:
Why Google treats links with UTM parameters as canonical

Mobile-friendly alerts in GSC might be caused by issues with Google fetching JS and CSS (23:55)

If Google can’t fetch JavaScript or CSS while rendering a page, the layout of the page might look different. This can trigger GSC alerts saying that the page is not mobile-friendly.

It might be a temporary error, so test the page with the live Mobile-Friendly Test and see what it says.
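
Beyond temporary fetch errors, one persistent cause worth ruling out is a robots.txt file that blocks crawlers from the script and stylesheet directories (the paths below are hypothetical):

```
# Problematic: crawlers can't fetch the assets needed to render the page.
User-agent: *
Disallow: /assets/js/
Disallow: /assets/css/
```

Removing such rules (or explicitly allowing those paths) lets Google render the page the way users see it.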


Pages disallowed in robots.txt can still be indexed and shown in Google search (24:54)

The robots.txt directives tell Google which pages should not be crawled. But this doesn’t prevent Google from indexing them.

If there are many links pointing to a page which is disallowed in the robots.txt file, Google can still index that page. But since it can’t see the content there, the page will not be ranked and can be shown in search only when someone explicitly searches for it.

An exception is disallowing a page when Google already has enough information about it (e.g. the homepage). In such a situation, Google might still rank it for some time until it’s sure that this page is gone forever.
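
To illustrate the distinction, here’s a hypothetical robots.txt rule next to the meta tag that actually keeps a page out of the index. Note that the noindex tag only works on pages Google is allowed to crawl, since it has to fetch the page to see the tag:

```
# robots.txt: blocks crawling, but the URL can still get indexed
# if enough links point to it.
User-agent: *
Disallow: /private-page/
```

```html
<!-- On the page itself: blocks indexing, but requires that
     the page remain crawlable so Google can see this tag. -->
<meta name="robots" content="noindex">
```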


This is the end of Part 1 of my Google Webmaster Hangouts Notes from March 22. I’ll publish the second part soon, and I’m also working on a post on rel=”next/prev”, so stay tuned!


Subscribe to get the notes and other useful tips delivered directly to your inbox!