# Google Webmaster Hangouts Notes – 25 June 2019

Hey hey!

I’ve been moving to a new place lately, so I made these notes literally sitting on the boxes 😀

This post is part of my Google Webmaster Hangouts Notes. I cover the hangouts regularly to save you time. You might want to subscribe so you don’t miss anything!

Here is the full video; you’ll find the timestamps in brackets after each question. Let’s go!

Split content into multiple pages only when it makes sense (2:31) 

In general, having more pages can be beneficial, as each page can be more tightly focused. But on the other hand, each of those pages individually carries less value.

So it’s better to start with fewer, stronger pages and then gradually increase the number of pages to address more niche user intents.

Kristina’s note: Obviously, creating fewer pages doesn’t mean that you should list all of your services or products on a single page. There should be a balance between the number of pages and products/services you’re offering.

Google is able to crawl non-Latin characters in URLs (7:26)

There’s no need to translate non-Latin characters in your URLs or remove them, as Google can crawl such URLs without any problems.
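For illustration, a Cyrillic URL and its percent-encoded equivalent (which is what browsers and crawlers actually transmit) point to the same page – the domain and path below are made up:

```
https://example.com/категория/товар
https://example.com/%D0%BA%D0%B0%D1%82%D0%B5%D0%B3%D0%BE%D1%80%D0%B8%D1%8F/%D1%82%D0%BE%D0%B2%D0%B0%D1%80
```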

Your meta description tags may have as many characters as you want, but… (7:50)

Google doesn’t set any strict character limit for meta descriptions. This means you can have as many characters as you want. But here are a few things you should keep in mind:

  • If the meta description is too long, Google will truncate it in the search results.
  • If the meta description tag is too short, Google may rewrite it by taking a snippet from the page content.
  • Google does not penalize websites/pages for having too long or too short meta descriptions.
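For reference, the meta description is a plain HTML tag in the page’s <head>. The example below is hypothetical, and aiming for roughly 150–160 characters is a common rule of thumb rather than a Google requirement:

```html
<head>
  <!-- Hypothetical example: descriptive and specific, short enough not to be truncated in most snippets -->
  <meta name="description" content="Hand-made leather wallets: 20+ designs, free shipping over $50, 2-year warranty. Browse the full collection and find the right fit for you.">
</head>
```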

If your page is not indexed, it’s most likely a technical issue with your website (11:40)

If you see that a page is not indexed by Google, there’s most likely a technical issue behind it.

Check it by searching for the exact URL in Google. If it’s not found, you’re dealing with an indexing issue. You can use the Index Coverage report in Google Search Console to find the cause.

Kristina’s note: Common technical issues are listed below (with example snippets after the list):

  • Meta-robots noindex tag
  • The page is blocked by robots.txt (it might still get indexed without its content, but it won’t rank well, so you won’t see traffic to it anyway).
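For reference, these two blockers typically look like this (the path in the robots.txt example is hypothetical):

```html
<!-- In the page's <head>: tells Google not to index this page -->
<meta name="robots" content="noindex">
```

```
# In robots.txt at the site root: stops Google from crawling anything under /private/
User-agent: *
Disallow: /private/
```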

If you need help identifying such issues, just let me know – I’ll be happy to help!

If the page is found on Google, it’s not an indexing issue. There might be different reasons why your pages don’t get traffic, including overall website authority and quality.

Having structured data implemented doesn’t guarantee rich snippets. There are a few more things to it (14:52)

In order to get rich snippets, you should have a few things in place:

  • Structured data should be properly implemented from a technical point of view. You can check it with Google’s Structured Data Testing Tool.
  • Structured data needs to comply with the policies.
  • Google needs to be sure that a page and the website in general are of high quality. 

With that being said, adding markup doesn’t guarantee getting rich snippets. 
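As a rough illustration, a minimal JSON-LD Product markup could look like the sketch below (all values are hypothetical). Even when it validates, it still has to match the visible page content and comply with Google’s policies to be eligible for rich snippets:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Hypothetical Leather Wallet",
  "image": "https://example.com/images/wallet.jpg",
  "description": "Hand-made leather wallet with a 2-year warranty.",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```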

Only those website pages that Google can crawl and index influence the overall quality of a website (17:06)

When Google evaluates the quality of a website, it only looks at the pages it can index. This means that preventing Google from crawling or indexing particular pages won’t influence how Google sees the quality of your website.

The initial question was about an eCommerce website blocking Google’s access to the cart page. According to John Mueller, Google doesn’t need to be able to crawl the cart and checkout pages, so it’s fine to disallow them in your robots.txt file.
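If your cart and checkout live under dedicated paths (the paths below are hypothetical), the robots.txt rules could look like this:

```
User-agent: *
Disallow: /cart/
Disallow: /checkout/
```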

Use the <lastmod> date in your XML sitemap to help Google pick up your page changes more quickly (19:34)

When Google sees a <lastmod> date that differs from the date when a page was last indexed, it tries to re-index that page as quickly as possible. So <lastmod> really helps keep Google updated on your website changes, which is especially useful if they are frequent and/or significant.

The <changefreq> and <priority> tags are not used by Google anymore, so you can omit them in your XML sitemap.
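A sitemap entry then only needs <loc> and <lastmod>; the URL and date in this sketch are made up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/post-title/</loc>
    <!-- Update this date whenever the page content meaningfully changes -->
    <lastmod>2019-06-25</lastmod>
  </url>
</urlset>
```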

It will take time to recover rankings of a website that has been hacked (22:17)

If a website has been hacked, Google might see that it was changed in a harmful way and drop its rankings.

Even after the hack is resolved, it will take Google time to re-evaluate the website and rank it again.   

Domain history doesn’t influence the priority of reviewing the request for removing a manual action for unnatural incoming links (23:43)

In general, Google doesn’t consider domain history while reviewing reconsideration requests. 

But if it turns into a back-and-forth game – a website gets penalized, adds a disavow file to resolve the manual action, and then removes the file once the penalty is lifted – this will catch the attention of Google’s webspam team and might influence how the website is evaluated in the future.

You can use lazy-loaded images just like normal images if you make sure that Google can pick them up (26:50)

Lazy loading is a way of embedding images on a page so that they are not loaded by default when the page opens but only when you scroll down to them. It positively influences page speed.

Google can pick up such lazy-loaded images if they are embedded in a standard way (Kristina’s note: within <img> tags, with the image URL in the src attribute, visible in the page source code). In this case, you can add such images to your XML sitemap and use them in structured data.
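One common way to keep a lazy-loaded image visible to Google is a standard <img> tag with a real src attribute (the file name and dimensions below are hypothetical); loading="lazy" here is the native browser attribute, just one of several lazy-loading approaches:

```html
<!-- The image URL sits in the src attribute of a normal <img> tag, so Google can pick it up -->
<img src="https://example.com/images/product-photo.jpg"
     alt="Hand-made leather wallet"
     loading="lazy"
     width="800" height="600">
```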

Google doesn’t take into account duplicate images for normal ranking, only for Image search (30:48)

When it comes to normal ranking, Google doesn’t take into account photos and images you have on your website. This means that using photos that are also found on other websites won’t harm you.

For Image search, Google tries to recognize when the same photo is used across multiple websites and picks only one of those websites to show in the Image search results. In this respect, it might pay off to use unique photos rather than stock photos, for example.

HTML and CSS errors don’t negatively impact website rankings (34:19)

Even though a page might have some invalid HTML and/or CSS, it might still work well in browsers and for users. So Google doesn’t decrease rankings for having such errors.

The old infinite scroll recommendations are still relevant (41:03)

Having paginated URLs still helps, as does changing the URL as you scroll.

The only thing you can ignore is rel=prev/next.

Kristina’s note: Martin Splitt from Google also recommended using Intersection Observer for AJAX scrolling.
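A minimal sketch of that combination, assuming your paginated URLs exist as real, crawlable pages (the selectors and the /blog/page/N/ URL pattern here are hypothetical): an IntersectionObserver watches a sentinel element at the bottom of the list, appends the next page’s items, and updates the address bar so each “page” keeps its own URL.

```ts
// Sentinel placed after the item list, e.g. <div id="load-more"></div>
const sentinel = document.querySelector<HTMLElement>('#load-more');
const list = document.querySelector<HTMLElement>('#item-list');
let nextPage = 2; // page 1 is rendered server-side

const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting || !sentinel || !list) return;

  // The paginated URL also works as a standalone page that Google can crawl
  const url = `/blog/page/${nextPage}/`;
  const html = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(html, 'text/html');
  doc.querySelectorAll('#item-list > article').forEach((item) => list.appendChild(item));

  // Reflect the current "page" in the address bar as the user scrolls
  history.pushState({ page: nextPage }, '', url);
  nextPage += 1;
});

if (sentinel) observer.observe(sentinel);
```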

Google doesn’t always treat subdomains as separate websites (51:02)

When it comes to displaying multiple results from a single website, Google treats subdomains as separate websites in some situations (e.g. Blogspot sites are completely independent) and as parts of the same website in others. Google tries to figure out algorithmically how each particular domain should be treated.

That’s it for today! Don’t forget to subscribe for the updates!