#Google Webmaster Hangouts Notes – 16 April 2019

Welcome to MarketingSyrup! This post is part of my Google Webmaster Hangouts Notes. I cover them regularly to save you time.

Find the notes from April 16th below; as always, the timestamps of the answers are in brackets.

This session talks a lot about redirects, and some of the info might be new to you. Let’s dive in! But first, the video (hey Barry on the thumbnail!).


If you nofollow links to facet pages, make sure there’s another way Google can discover all your products (1:29)

It’s OK for category pages to nofollow links to the facet pages. But you should make sure that there are other ways Googlebot can find and index your product URLs without visiting the facets.

By nofollowing facet links, you can potentially save crawl budget.

Kristina’s note: oh boy, eCommerce facets are a very interesting and hot topic. Nofollowing facet links is not always an optimal solution, since link juice is lost through those links anyway: it’s simply not directed anywhere.

And there were previous statements by Google recommending against using nofollow attributes for internal linking. But there may be different scenarios, so there’s no actual contradiction in these recommendations, just different cases.

In short, you want your facets to be valuable pages, in which case there’s no need to nofollow links pointing to them. And if you have filters, it’d be ideal to use AJAX for them so they don’t create unnecessary URLs in the first place.

Keep your structured data relevant to the primary object of each page instead of having a single set of structured data across all website pages (14:55)

Structured data on a website should be specific to the primary object on each page. For example, structured data on a product page should be specific to this particular product. So having one set of structured data that you apply to all pages of your website generally would be incorrect and can be ignored by Google.

The advice here is to use different sets of structured data for different types of pages to make sure structured data is relevant for those pages.
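As a sketch of what “page-specific structured data” means in practice, here’s how you might generate a JSON-LD block for one particular product page (the product fields and values are placeholders, not from the hangout):

```python
import json

def product_jsonld(name, sku, price, currency="USD"):
    """Build a JSON-LD block describing one specific product.

    Each product page gets its own block with its own values,
    rather than one generic block shared across the whole site.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
        },
    }
    # Wrap in a script tag ready to embed in that product page only.
    return '<script type="application/ld+json">{}</script>'.format(
        json.dumps(data)
    )

snippet = product_jsonld("Blue Widget", "BW-001", 19.99)
print(snippet)
```

A category page or an article would get a different type (e.g. ItemList or Article) with its own relevant fields, instead of reusing the Product block.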


If your new content is not picked up by Google as fast as it could be, there are a few things you can do to speed up indexing (16:47)

Ways to speed up indexing:

  • Reduce the number of pages to save crawl budget (Kristina’s note: do this only if it really makes sense!)
  • Make sure Google can reach new URLs as quickly as possible: link them from a more visible place on your website (e.g. the homepage) so that Googlebot can find them faster
  • Add the URLs of your new content to the XML sitemap.
  • And of course, make sure there are no technical issues that are blocking Google from crawling your new content (server is responsive, it returns a 200 HTTP status code, etc.).
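The XML sitemap step can be sketched with Python’s standard library; the URLs below are placeholders for your own pages:

```python
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap listing the given URLs.

    Submit the resulting file via Google Search Console (or reference
    it in robots.txt) so Googlebot can discover new URLs quickly.
    """
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        loc = ET.SubElement(entry, "loc")
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    "https://www.example.com/",
    "https://www.example.com/new-article",  # fresh content you want crawled fast
])
print(sitemap)
```

Keeping the sitemap updated whenever new content is published gives Google a direct list of URLs, complementing the internal links from visible pages.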

Events structured data should only be used for physical events (18:27)

You can use Events structured data only for events that physically take place at a specified time and place. Online events or promotions are not eligible for this type of markup.

Google doesn’t give preference to expired domains (20:39)

Redirecting expired domains to a website in order to get link juice from this domain is against Google guidelines. But still, some shady tactics can slip through the cracks and websites can benefit from them for some time. Google tries to catch such things, and if you spot something like that, you can submit a webspam report form.

Having self-referencing canonicals is best practice (21:50)

Self-referencing canonicals help Google understand which URL is preferred when people use URLs with parameters.

So having self-referencing canonicals is kind of a best practice.
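As an illustration, here’s a small Python sketch that strips query parameters to produce the canonical tag for a page. It assumes every parameter is a tracking or sorting parameter; keep any parameters that actually change the page content:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_tag(url):
    """Emit a self-referencing canonical tag for the parameter-free URL.

    Any parameterised variant of the page would carry this same tag,
    pointing Google at the clean version.
    """
    parts = urlsplit(url)
    # Drop query string and fragment, keep scheme, host and path.
    clean = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return '<link rel="canonical" href="{}" />'.format(clean)

tag = canonical_tag("https://www.example.com/shoes?utm_source=news&sort=price")
print(tag)  # <link rel="canonical" href="https://www.example.com/shoes" />
```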


Google uses Quality Raters Guidelines to test algorithmic changes before rolling them out (26:26)

Before any updates are rolled out, they are tested by Google quality raters using the Guidelines Google put together. This is done to make sure that the algorithms work as expected and really improve the search results.

The quality raters don’t fine-tune rankings, they just provide feedback on the updates that are being tested.

When you use hreflang, make sure that each page in a sequence can be accessed and indexed by Google (29:11)

Hreflang is always set up between individual pages, and each of these pages needs to be indexed individually. Once that’s done, Google is able to swap the URLs to show the relevant version in search.

So if for some reason Google doesn’t understand the hreflang connection between the pages (e.g. one of them is blocked from indexing), it might ignore the hreflang.
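A hreflang set means every page in the group carries the full block of reciprocal link tags, including a self-reference. A sketch of generating that block (the language codes and URLs are placeholders):

```python
def hreflang_tags(versions):
    """Render reciprocal hreflang link tags for a set of language versions.

    Every indexed page in the set should carry this identical block,
    including the tag pointing at itself.
    """
    return "\n".join(
        '<link rel="alternate" hreflang="{}" href="{}" />'.format(lang, url)
        for lang, url in sorted(versions.items())
    )

versions = {
    "en": "https://www.example.com/page",
    "de": "https://www.example.com/de/page",
    "x-default": "https://www.example.com/page",  # fallback for other locales
}
block = hreflang_tags(versions)
print(block)
```

If any URL in `versions` is noindexed or blocked, the reciprocal links can’t be confirmed and, as noted above, Google may ignore the whole setup.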


What happens if you have a 301 redirect chain (30:50)

Every time a redirect happens, Google needs to request a new URL (the target URL). And if you have too many steps in a redirect chain (e.g. Page 1 -> Page 2-> Page 3 -> The Final Page), Google tends to look at the rest of the chain in the next cycle which means crawling and indexing of that page will be delayed.

The limit is ~5 redirects that Googlebot follows immediately; the rest are left for the next cycle. So ideally, you should set up a redirect straight to the final destination (i.e. Page 1 -> The Final Page).

If redirects keep being added to the chain over time, Google might eventually stop indexing the earlier URLs and go directly to the final destination.

Kristina’s note: Also, make sure to link internally to the final URLs, or at least to those URLs which are nearer to the final URL in the redirect chain (e.g. to The Final Page or Page 3 as opposed to Page 1 in the example above).
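The hop-limit behaviour described above can be sketched with a simple chain resolver. The `redirect_map` dict stands in for the 301s your server would return, and `max_hops=5` mirrors the rough limit Googlebot follows in one go:

```python
def resolve_redirects(start, redirect_map, max_hops=5):
    """Follow a chain of redirects up to max_hops hops.

    Returns the final URL and the number of hops taken; raises if the
    chain is too long (i.e. would be deferred to a later crawl cycle)
    or loops back on itself.
    """
    url, hops = start, 0
    seen = {start}
    while url in redirect_map:
        if hops >= max_hops:
            raise RuntimeError(
                "chain too long; point {} straight at the final URL".format(start)
            )
        url = redirect_map[url]
        if url in seen:  # guard against redirect loops
            raise RuntimeError("redirect loop detected at {}".format(url))
        seen.add(url)
        hops += 1
    return url, hops

chain = {
    "/page-1": "/page-2",
    "/page-2": "/page-3",
    "/page-3": "/final-page",
}
print(resolve_redirects("/page-1", chain))  # ('/final-page', 3)
```

Collapsing the chain would mean mapping every old URL directly to `/final-page`, so each request resolves in a single hop.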

Keep redirects in place for at least a year (32:34)

 This ties into the previous answer but deserves a separate subheading 🙂

Keeping a redirect in place for at least a year will ensure that Google sees your redirects multiple times and after that may go and index the final URL instead of the redirecting one.

But there’s no need to maintain redirects made many years ago.

Kristina’s note: this is applicable to those situations where your redirected URLs don’t have high authority and external links. If they do, I wouldn’t remove the redirects. But you can reduce the number of ‘links’ in the redirect chain.

301 vs 302 redirects (33:54)

With a permanent (301) redirect, Google indexes the final destination, while with a temporary (302) redirect, Google indexes the initial URL.

If you have a redirect chain consisting of 301 and 302 redirects, you’re not doing yourself a favour (34:41)

Having both permanent and temporary redirects in a chain sends conflicting signals to Google, so it uses other ways to determine the canonical page (hint: it might not be the one you see as canonical).


After a redesign, Google re-evaluates your website which may result in higher or lower rankings (39:38)

After a redesign, Google takes into account the final version of the website, not the previous one. It re-evaluates the website architecture, content, technology used and all other signals of the redesigned website.

It means that not all redesigns are treated equally. Sometimes you may see a boost in traffic, sometimes your traffic can drop. It depends on many factors. But again, if the website used to perform well before the redesign, it doesn’t guarantee high rankings after the switch to the new design.

Note also that it can take Google some time to re-process your website redesign. So rankings fluctuations in search might be temporary.

Having parameter URLs indexed doesn’t automatically influence your website in a negative way (48:06)

If some URLs you don’t want to be indexed are indexed anyway (e.g. URLs with parameters), this doesn’t mean that your website will be devalued. Google will just not rank such URLs.

If you want to remove them, use the URL Removal tool in Google Search Console, and Google will stop showing those URLs in the search results and drop them from the index.

Kristina’s note: I’d still recommend keeping the ‘unwanted’ URLs out of Google’s index whenever possible, as they might still cause duplication issues.

If you redirect one site to another, make sure the initial website is not blocked from crawling and indexing (50:49)

Don’t remove the old URLs from search or block the old website in robots.txt as Google needs to access it to pick up the redirects and transfer the signals correctly.

A site migration to a domain that was problematic in the past can negatively influence your rankings (51:10)

If you migrate your website to a domain which used to be spammy or had a manual action, this might negatively influence the migrated website. You will need to clean everything up to make sure Google understands that your website is not spammy.


Subscribe to get the notes and other useful tips delivered directly to your inbox!
