Welcome to MarketingSyrup! This post is part of my Google Webmaster Hangouts Notes. I cover them regularly to save you time. You might want to subscribe so you don't miss anything!
Quick Info
The full video is here, and the timestamps for the answers are in brackets. Let's dive in!
Table of Contents
There's no need for hreflang annotations on non-canonical URLs (3:17)
You don't have to use hreflang annotations for your non-canonical URLs, as these URLs shouldn't show up in search anyway.
Kristina’s note: But make sure that these URLs have been made non-canonical intentionally, not by mistake.
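A minimal sketch of what this looks like in practice (the example.com URLs are hypothetical): hreflang annotations live on the canonical URLs, while a non-canonical variant just points at its canonical and carries no hreflang at all:

```html
<!-- On the canonical URL, e.g. https://example.com/en/page/ -->
<link rel="canonical" href="https://example.com/en/page/">
<link rel="alternate" hreflang="en" href="https://example.com/en/page/">
<link rel="alternate" hreflang="de" href="https://example.com/de/page/">

<!-- On a non-canonical variant, e.g. https://example.com/en/page/?ref=nav:
     just point at the canonical; no hreflang annotations needed. -->
<link rel="canonical" href="https://example.com/en/page/">
```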
Don’t use rel=nofollow for internal links (4:54)
If your internal links have a rel=nofollow attribute, Google won't follow them, crawl the linked pages, or pass any authority through them. Google won't treat the linked pages as important either.
In most cases, it’s not something you want on your website, so it’s better not to use rel=nofollow for internal links.
Kristina's note: This ties in with an old video where Matt Cutts explains how nofollowing internal links works:
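For illustration, here's a minimal hypothetical example of the difference:

```html
<!-- Normal internal link: Google follows it, crawls the target
     and passes signals through it. -->
<a href="/pricing/">Pricing</a>

<!-- Nofollowed internal link: Google won't follow it or pass
     authority through it. Usually not what you want internally. -->
<a href="/pricing/" rel="nofollow">Pricing</a>
```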
Google may find and crawl images even if you use lazy loading or defer images in some other ways (6:55)
Lazy loading or other ways of deferring images will still let Google find and crawl those images, as long as their URLs appear in <img> tags in the page source.
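As a quick illustration (the file paths are made up), the deciding factor is whether the image URL is visible in the markup:

```html
<!-- Discoverable: the URL sits in the src attribute of an <img> tag,
     even though the browser defers loading it. -->
<img src="/images/hero.jpg" loading="lazy" alt="Product photo">

<!-- Risky: the URL lives only in a data attribute and is swapped in
     by JavaScript, so Google may miss it if the script doesn't run. -->
<img data-src="/images/hero.jpg" class="lazyload" alt="Product photo">
```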
There’s nothing special you can do to be shown in the Top Stories carousel in Google (11:05)
The Top Stories carousel is a normal organic search feature in Google, so there’s nothing specific you can implement on your website (like structured data) to be shown there.
Domain age is not a ranking factor (18:14)
Domain age is not a ranking factor, so it’s possible for a new website to successfully compete with old websites.
But it’s also important how these old websites have been maintained and optimized all these years. If they have strong backlink profiles, a solid structure and optimization in place, it’ll be very hard to compete with them. If they were more like ‘set and forget’ websites, a new site can outrank them if it’s optimized properly.
The idea here is that having an old domain doesn't automatically mean high rankings.
Google can still show and rank pages blocked in the robots.txt (22:20)
Google sometimes shows pages blocked in robots.txt in the search results if it sees that they work really well (e.g. other people link to them).
It means that disallowing a page in the robots.txt file doesn't really prevent it from showing up in the search results; it just prevents Google from accessing the content on that page.
Kristina's note: Disallowing a page in robots.txt prevents only crawling of this page, not indexing. So if you absolutely don't want your page to show up in search, use meta robots noindex instead of robots.txt.
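A minimal sketch of the noindex approach. One catch to keep in mind: Google has to be able to crawl the page to see the tag, so the page must not also be disallowed in robots.txt:

```html
<!-- In the <head> of the page you want kept out of search results.
     The page must remain crawlable, or Google will never see this tag. -->
<meta name="robots" content="noindex">
```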
When you change content on a page, Google needs to re-evaluate it (24:30)
When page content is changed, Google takes signals associated with this page and looks at the new content to find new signals it can pick up.
It means that changing content influences page rankings.
If the changes are minor, the re-evaluation shouldn't change much about how Google sees and ranks the page. Bigger changes can lead to more significant shifts.
While changing the content, keep the old URLs whenever possible (25:52)
If you’re changing your content, try to keep the same URLs as before. It makes it a lot easier for Google to associate the old signals with the new content.
If you can’t keep the old URLs, make sure to 301 redirect them to the new ones.
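A minimal sketch of such a redirect, written here as a hypothetical nginx rule (the old and new paths are made up):

```nginx
# Permanently (301) redirect a retired URL to the page that replaced it,
# so the old URL's signals are consolidated onto the new one.
location = /old-guide/ {
    return 301 /new-guide/;
}
```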
Nothing changes from Google's point of view if the website owner changes and the website stays the same (26:28)
If the owner of a website changes but the website stays the same, it doesn’t influence how Google sees or ranks this website in any way.
But if a domain name expires, somebody buys this domain name and puts a new website there, that’s essentially a new website. Google will need to re-evaluate all the signals for it.
Don't use automatic IP redirects based on users' location if you want Google to index all the language versions you have (28:22)
Automatic IP redirects based on the user's location are not the best option from an SEO perspective. Googlebot always crawls your website from the USA, which means Google only sees the English version of your website and never gets access to the other language versions, so it can't index them.
It’s better to have individual landing pages for the individual languages and set up hreflang tags so that Google will be able to show the right language versions of the URLs in the search results.
From a UX perspective, it’s also valuable to give people an option to switch the versions manually if they need to.
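Sketching the recommended setup with a hypothetical example.com: each language gets its own URL, and every version lists the full set of alternates (including itself):

```html
<!-- Placed on every language version of the page. -->
<link rel="alternate" hreflang="en" href="https://example.com/en/">
<link rel="alternate" hreflang="fr" href="https://example.com/fr/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">
```

The x-default entry tells Google which version to show users whose language doesn't match any of the annotated ones.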
A few words on the alt attribute of an image link (47:08)
Google uses the alt attributes found in image links in two ways:
- As a part of the page where this image is found
- As anchor text for the page the image links to
So the alt text should ideally describe the image and be relevant to both pages.
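A quick hypothetical example that satisfies both roles:

```html
<!-- The alt text describes the image and doubles as anchor text
     for the linked page (/winter-boots/). -->
<a href="/winter-boots/">
  <img src="/img/boots-category.jpg" alt="Waterproof winter boots">
</a>
```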
That’s it for today. What was the most interesting part for you? Share in the comments!
After 10+ years in SEO, I founded MarketingSyrup Academy, where I teach smart SEOs. More than 500 people have gone through my courses, including the SEO Challenge and Tech SEO Pro.
I’m also a creator of the SEO Pro extension with 30K active users worldwide.