Welcome to MarketingSyrup! This post is part of my Google Webmaster Hangouts Notes. I cover them regularly to save you time.
Here are the notes from the May 10th hangout. This is Part 2; you can find Part 1 here. The timestamps of the answers are given in brackets.
How-to and FAQ reports are coming to GSC (3:00)
You need to have the How-to or FAQ structured data markup on your pages to see these reports in Google Search Console.
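For reference, here's a minimal sketch of what FAQ markup can look like in JSON-LD. The question and answer text are placeholders, and Google's structured data documentation lists the required properties in full:

```html
<!-- Minimal FAQPage markup sketch; the question and answer text are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What are Google Webmaster Hangouts?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Regular video sessions where John Mueller answers SEO questions."
    }
  }]
}
</script>
```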
There are new features coming to Google Search Console (3:30)
- The opt-in for large images
If you have large images on your website and want them to be shown in search, you’ll be able to use this feature.
- Duplex on the web settings
This is a way to streamline your checkout flow so that people will be able to buy something using Google Assistant. The setting there is mostly a test account: you’ll be able to specify a username and password for that test account, and the machine learning system will use it to learn your checkout flow.
Evergreen Googlebot is live but some tools still need to be updated (7:16)
Google has recently announced the evergreen Googlebot, which will be better at rendering content, especially JS-based content.
This new Googlebot is 100% live (though it still shows the old name – Chrome 41 – in the logs). But the testing tools haven’t been updated yet, e.g. the mobile-friendly test, the URL Inspection tool, and the Structured Data Testing Tool.
There shouldn’t be any fluctuations in rankings due to the switch to the new Googlebot (8:05)
Those websites that Google could index before shouldn’t see any changes. The update is aimed at the websites using modern features and content which could have been missed by the previous Googlebot.
Googlebot doesn’t use HTTP/2 for crawling and indexing (9:20)
HTTP/2 makes a lot of sense for browsers when you have multiple streams of content that need to be rendered. But it’s not really needed for Googlebot, as Google caches a lot of the content and reuses it when needed.
If you’re using JS, make sure to also provide static content if you need your pages to be indexed quickly (10:50)
When it comes to indexing JavaScript, Google first picks up the HTML content and can index it right away. But rendering takes a bit longer (up to a few days). That’s why websites that need their content indexed as quickly as possible (for example, news publishers) should provide some kind of static content. This guarantees that indexing won’t be delayed by rendering.
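As a simplified illustration (the markup and file names here are made up), the article text below is already in the initial HTML response, so Google can index it on the first pass instead of waiting for rendering:

```html
<!-- The served HTML already contains the article text, so it's indexable immediately -->
<article id="story">
  <h1>Breaking news headline</h1>
  <p>The full story text is present in the initial HTML response.</p>
</article>

<!-- A client-side-only setup would ship an empty container instead,
     and the text would only show up after rendering:
<article id="story"></article>
<script src="/assets/render-story.js"></script>
-->
```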
It’s better to have a single page for your product instead of many pages which are variations of the same item (12:40)
Make your product pages stand on their own: they should be unique products and not variations of the same item. So if you have a product in different sizes and/or colors, it might make sense to have a single page for it.
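If your platform does generate separate URLs for each variation, one common approach (hypothetical URLs below) is to point them all at the main product page with a canonical tag:

```html
<!-- On https://example.com/t-shirt?color=blue&size=m (hypothetical variant URL) -->
<link rel="canonical" href="https://example.com/t-shirt">
```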
You can use ‘noindex’ to handle duplicate/thin content (16:43)
Noindexing a page is a good way to handle duplicate or thin content. It works particularly well when you still want users to be able to access the page but don’t want Google to index it.
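The tag itself goes into the page’s head (sketch below); the page stays accessible to users, and Google drops it from the index once it recrawls the page:

```html
<!-- In the <head> of the duplicate/thin page -->
<meta name="robots" content="noindex">
```

For non-HTML files, the same signal can be sent as an X-Robots-Tag: noindex HTTP header.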
Google recognizes and ignores spammy websites copying content from legitimate websites or linking to them (18:13)
There are many situations when spammers copy content from high-quality websites, spin it, and link to those websites. Google is pretty good at recognizing this type of spam and ignores it, which means that websites whose content has been copied, or which have been linked to from spammy websites, shouldn’t worry about this.
Crawl budget is not about crawl depth but about the number of requests Google is able to make (29:25)
For Google, crawl budget means how many URLs it will fetch from a website in a day. Crawl budget might be an issue for really large websites, while small and medium ones are safe here.
Usually, the hard part with crawl budget is not the limit itself but balancing between indexing new content and updating the index of the existing content.
Reducing the size of your page won’t increase crawl budget (31:27)
What would help, though, is a quick server response time. Otherwise, Google will slow down crawling and get to fewer pages than it potentially could.
3rd party resources don’t influence the crawl budget of your website (32:49)
Google allocates crawl budget at the server (host) level. So if your content is served from a CDN, for example, crawling it counts against that host’s crawl budget rather than your website’s.
Content hidden in tabs is used for ranking but isn’t shown in snippets (35:07)
Google uses the content hidden in tabs for indexing and ranking but it won’t be shown in a snippet.
By showing content in the snippet, Google ‘promises’ users that they’ll find this content on a page if they click on the result. But if the content is hidden, users won’t see it directly on a page.
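A typical tab setup keeps all of the text in the HTML and only toggles visibility (simplified sketch; the class names are made up), which is why Google can still pick it up for ranking:

```html
<!-- Both tab panels are in the HTML; CSS/JS only toggles which one is visible -->
<div class="tabs">
  <div class="tab-panel" id="description">Product description text goes here.</div>
  <div class="tab-panel" id="specs" hidden>Technical specifications go here.</div>
</div>
```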
Don’t disallow a page in robots.txt and add a noindex tag to that page at the same time (36:16)
If a page is blocked by robots.txt, Googlebot won’t crawl it. This means that if you also have a noindex tag there, Google won’t see it. What can happen next is that Google still indexes that page, but without its content.
So if you want a page to be noindexed, you need to allow its crawling in robots.txt. Once Google revisits the page and sees your noindex tag, it’ll drop the page from the index.
In general, combining a robots.txt disallow with other directives for the same page (including a canonical tag) is a bad idea in most cases.
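In other words, a setup like this (hypothetical path) defeats itself, because the disallow rule stops Googlebot from ever fetching the page that carries the noindex tag:

```
# robots.txt: this rule stops Googlebot from requesting the page at all
User-agent: *
Disallow: /old-campaign/
```

```html
<!-- On /old-campaign/ : Googlebot never sees this tag while the URL is disallowed -->
<meta name="robots" content="noindex">
```

To get the page dropped from the index, remove the Disallow rule and keep only the noindex tag.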
Use the URL Inspection tool to see the canonical for a page (39:11)
As the info: operator has been sunsetted, you can use the URL Inspection tool in Google Search Console to see which URL Google has picked as the canonical. Note that site: doesn’t show you canonicals.

After 10+ years in SEO, I founded MarketingSyrup Academy where I teach smart SEOs. More than 500 people have gone through my courses, including the SEO Challenge and Tech SEO Pro.
I’m also the creator of the SEO Pro extension, which has 30K active users worldwide.