# Google Webmaster Hangouts Notes – 5 April 2019

Welcome to MarketingSyrup! This post is part of my Google Webmaster Hangouts Notes. I cover them regularly to save you time.

Here are the notes from April 5th; the timestamps of the answers are in brackets. Let’s dive in!

Having comments for blog posts is not a ranking factor but still can indirectly help with rankings (1:40)

Relevant and helpful comments can add more keywords and overall value to a blog post as well as help to build a community of people interested in your content.

But this doesn’t mean that having a comments section automatically makes a website better and increases its rankings. It’s rather an indirect factor that can potentially influence how a website is seen by Google.

Kristina’s note: you’re welcome to leave comments on this blog 🙂

Google doesn’t provide specific reporting on voice queries (6:42)

There are 2 types of voice queries.

  1. Voice queries done like a normal search but using voice instead of typing the keywords. Such searches return the same results as normal ‘typed’ searches would. Thus, Google doesn’t separate these queries from others in GSC Performance reports.
  2. Voice searches made with assistant devices – Google Home, Alexa, etc. These work more like a featured snippet, and they are not counted or reported by Google anywhere specifically.

Google values quality over quantity (12:12)

There’s no ranking benefit to having more pages rather than fewer. It’s more important to make sure the quality of your content is high, even if you have just a few pages.

Think about what information you have that users are searching for and how you can provide it in a way that answers their needs. Don’t count words and pages; focus on the content instead.

SEO is not going anywhere (15:08)

There’s a growing number of featured snippets and answers available directly on Google’s search results pages, so for many websites the number of clicks is decreasing: people no longer need to visit them to get answers.

But this doesn’t mean that SEO is dying; it’s just a natural shift. Things like making sure a website is crawlable and indexable, as well as promoting your content so it matches what people are actually searching for, are not going away. #SEOisAlive

Pages with the same content but targeting different countries are treated as duplicates but hreflang can help to guide users to the right version (25:08)

Google will treat multiple pages with exactly the same content as duplicates even if they target different countries. It will just use a single URL as the canonical and show it in the search results.

If you have hreflang in place, Google will still treat only one of the duplicate pages as the canonical. But it will swap the URLs in search to show the more relevant country version, which is good for users.
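As a sketch, hreflang annotations for two identical English pages targeting different countries might look like this (the URLs are hypothetical):

```html
<!-- In the <head> of every version, list all country variants plus a fallback -->
<link rel="alternate" hreflang="en-us" href="https://example.com/us/" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/uk/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```

Google may then show the /us/ URL to searchers in the United States even if the /uk/ page was chosen as the canonical.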


Google can’t read text on images, so use a traditional way to provide more context (27:36)

Google uses many things to understand what is displayed on an image. But it still can’t read text on images.

Alt attributes, captions, the content around images, and file names all provide direct context to Google and help it understand and rank images in Image Search. Use them!
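For example, a product image could be given text-based context like this (the file name and copy are made up):

```html
<figure>
  <!-- Descriptive file name and alt text give Google direct textual context -->
  <img src="blue-running-shoes.jpg" alt="Blue lightweight running shoes, side view">
  <figcaption>Our lightest running shoe, shown in blue.</figcaption>
</figure>
```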

Also, check out this post on Image SEO.

If you want to see if Google picked up your JS or other types of content, use URL Inspection tool in GSC, not page cache (29:38)

A cached page is the cached HTML that Googlebot saw when it crawled and indexed the page. But it doesn’t reflect what Google actually uses for indexing, because it wouldn’t include things like the changes JavaScript makes to a page when it’s executed. So the cached page is only static HTML.

In the URL Inspection tool you’ll see the rendered version that Google uses for indexing. Sometimes you’ll see a static version there instead, when Google hasn’t had a chance to render the page yet.

If you want to see how Google would theoretically render a page, use the live test in the URL Inspection tool. It shows how the page rendered: a screenshot as well as the HTML and any JavaScript errors.

If an image URL is blocked in robots.txt, it will prevent that image from showing up in Image Search (31:57)

Google needs to crawl image files directly in order to use them in Image Search. It also needs a landing page for each image. So don’t block images in your robots.txt, and make sure each image appears on an HTML page that Google can use as its landing page.

Note that not all images need to appear in Google Image Search (e.g. theme graphics and buttons), so it’s totally fine to prevent Google from crawling those. It won’t negatively influence the performance of the pages containing those images in web search.
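A minimal robots.txt sketch (the paths are hypothetical) that keeps content images crawlable while blocking only decorative theme assets:

```
User-agent: Googlebot-Image
# Decorative assets that don't need to appear in Image Search
Disallow: /theme/icons/
# Content images stay crawlable (anything not disallowed is allowed by default)
Allow: /images/
```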

Jump links don’t give any ranking benefits (35:33)

Jump links are links that point from one part of a page to another part of the same page (for example, a table of contents in this post that links to each individual question). There’s no ranking benefit associated with them; they are used mostly for usability.
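For instance, a table-of-contents entry that jumps to a section further down the same page might look like this (the id is made up):

```html
<!-- Table of contents at the top of the post -->
<a href="#voice-queries">Google doesn’t provide specific reporting on voice queries</a>

<!-- The matching section heading further down the page -->
<h2 id="voice-queries">Google doesn’t provide specific reporting on voice queries</h2>
```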

Kristina’s note: I like how jump links look in search, and I frequently click them as they take me directly to the part of the content I’m looking for. Here’s an example from the search page:

While using dynamic rendering, make sure that your server response time is quick (37:07)

If you’re using dynamic rendering and your server takes 5–10 seconds to serve HTML files to Google, Google will slow down crawling and reduce your crawl budget so as not to overload your server.

Thus, if you have a large website, it’s better to make sure your prerendered responses to Google are as fast as possible. You can utilize caching to achieve that.

Use <lastmod> date in your XML sitemap to let Google know that many of your pages have been changed and need to be re-crawled (39:27)

You can specify the <lastmod> date in your XML sitemap and ping Google with the sitemap. Google usually fetches the sitemap file right away, so it will see the last modification dates and re-crawl the pages as quickly as possible.

Also note that Google needs to be able to trust your modification dates: change them only for the URLs that really need to be re-crawled, not for the whole website.
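A minimal sitemap entry with <lastmod> might look like this (the URL and date are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/updated-post/</loc>
    <!-- Only update this when the page content has actually changed -->
    <lastmod>2019-04-05</lastmod>
  </url>
</urlset>
```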

Note that you can also use the URL Inspection tool for occasional re-indexing requests. This is done manually, so it works best when you only have a couple of pages.

Google doesn’t always ignore duplicate content but it avoids duplication in the search results (43:50)

If there are 2 pages which are exactly the same, Google will choose only one page as a canonical and will ignore the other.

If there are only certain blocks or parts of content that are identical on multiple pages, Google will pick up all these pages, i.e. it won’t ignore any of them. But when it comes to ranking, Google will use other factors to determine which page is more relevant and should be shown to a user.

Google still supports Unavailable After Meta Tag (47:37)

The unavailable_after meta tag is not gone; you can still use it.

Kristina’s note: this tag lets you specify the date and time after which your content should no longer be shown in Google’s search results. Honestly, I’ve never used this tag. Have you?
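For reference, the tag goes in the page’s <head> and takes a date in RFC 850 format; this example (with a made-up date) tells Googlebot to stop showing the page after that moment:

```html
<meta name="googlebot" content="unavailable_after: 25-Jun-2019 15:00:00 EST">
```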

Keyword stuffing on a page may cause Google to completely ignore this page for these keywords (49:25)

Having keywords that address user intent is good. But mentioning a keyword so many times that the text becomes artificial and unreadable might come back to bite you. So do use keywords, but keep it natural.

Don’t block JS files from crawling in robots.txt (51:55)

Access to JS files helps Google render your pages properly, so don’t disallow them in your robots.txt.
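For example, a robots.txt sketch like the following (hypothetical paths) blocks a private section but explicitly keeps the scripts Google needs for rendering crawlable:

```
User-agent: Googlebot
Disallow: /assets/
# Keep JS (and CSS) crawlable so Google can render the pages;
# the more specific Allow rules win over the broader Disallow
Allow: /assets/*.js$
Allow: /assets/*.css$
```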

This also applies when you use server-side rendering. With that configuration Google should not need to process JS on your pages, so if it still does, it means your server-side rendering is not set up properly.

Kristina’s note: This ties back to one of the previous answers on referencing JS files from the server-rendered pages.

Google sees sitelinks as normal search results (55:06)

Sometimes Google shows multiple pages from your website in the form of sitelinks. Those sitelinks are usually internal pages, and Google counts impressions, clicks and position for them the same way it does for a normal URL.

But the Queries report shows the data on a per-website basis: only 1 impression is reported for a query, while each URL shown for that query gets 1 impression in the Pages report.

Kristina’s note: Here is a quick way to see what pages are shown as sitelinks. Go to the performance report and filter the query by your brand name:

Previous episodes

Subscribe to get the notes and other useful tips delivered directly to your inbox!