How to create a Google Search Console account: second part

Google Index

The Google Index tab is where you can view all of the URLs indexed from your website. These are the pages that will be visible in Google's search results. You should expect the number of URLs indexed to be lower than the number crawled, as Google won't index certain URLs, such as duplicates and pages that have been given noindex tags.
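If a page you expected to be indexed is missing, one quick check is whether it carries a noindex directive. Below is a minimal Python sketch of that check; the URL is a placeholder, and the string match is deliberately naive (a proper check would parse the HTML rather than search it as text):

```python
import urllib.request

# Placeholder URL; substitute a page from your own site.
URL = "https://www.example.com/some-page/"

req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    # A noindex directive can arrive as an X-Robots-Tag header...
    header = (resp.headers.get("X-Robots-Tag") or "").lower()
    # ...or as a robots meta tag in the page itself.
    html = resp.read().decode("utf-8", errors="replace").lower()

if "noindex" in header or '<meta name="robots" content="noindex"' in html:
    print("Page asks not to be indexed")
else:
    print("No obvious noindex directive found")
```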

Next is the Content Keywords section, where you'll find a list of keywords Google found significant when crawling your site.

Blocked Resources lists anything that your robots.txt has blocked from Googlebot. For Google to properly index your site, it will need access to files such as images and CSS.
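You can preview what your robots.txt blocks without waiting on Search Console by using Python's standard-library robots.txt parser. A small sketch, with a placeholder domain and resource paths:

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain; point this at your own robots.txt.
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()

# Resources Googlebot needs in order to render the page properly.
for resource in ("/css/style.css", "/images/logo.png"):
    allowed = rp.can_fetch("Googlebot", "https://www.example.com" + resource)
    print(resource, "-> allowed" if allowed else "-> BLOCKED")
```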

Remove URLs Tool – This tool allows you to temporarily hide a URL from the search engine. It will remove the URL from Google for about 90 days, which gives you time to make any changes needed. Most people, though, won't need to use this tool.

Crawl

The Crawl Errors section shows any problems Google had crawling your site or its URLs in the last 90 days. A site error will appear if Googlebot had problems accessing the entire site, while URL errors are page-specific.

Site Errors

For the majority of working sites out there, you won't have any site errors. When they do occur, they will be some sort of DNS error, server error, or a failure to retrieve your robots.txt. Errors are shown on a percentage scale; if your error rate is at or near 100%, your site is most likely down or your robots.txt is blocking it from being crawled. Here are a few things you could check (a quick script for testing all three failure modes follows the list):

Check that the file permissions of your site haven’t changed recently.
Make sure links aren’t linking to non-existent pages.
Check that any new features or plugins you added recently are running correctly.
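
The three site-level failure modes (DNS, server response, robots.txt retrieval) can also be tested directly from your own machine. A rough Python sketch, using a placeholder hostname:

```python
import socket
import urllib.request

HOST = "www.example.com"  # placeholder hostname

# 1. DNS: can the hostname be resolved at all?
try:
    socket.getaddrinfo(HOST, 443)
    print("DNS OK")
except socket.gaierror as e:
    print("DNS error:", e)

# 2. Server: does the site answer an ordinary request?
try:
    with urllib.request.urlopen(f"https://{HOST}/", timeout=10) as resp:
        print("Server responded with HTTP", resp.status)
except OSError as e:
    print("Server/connectivity error:", e)

# 3. robots.txt: can it be retrieved? Googlebot backs off from
#    crawling if a robots.txt exists but cannot be fetched.
try:
    with urllib.request.urlopen(f"https://{HOST}/robots.txt", timeout=10) as resp:
        print("robots.txt fetched, HTTP", resp.status)
except OSError as e:
    print("robots.txt error:", e)
```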

URL Errors

These kinds of errors are more common and usually easier to fix. The Search Console lists any errors you have in order of importance, from most important at the top to least at the bottom. A common error websites face is the 404, which tells both users and bots that the page they are trying to reach does not exist. That is the correct way to signal a missing page, but if you have misspelt a link or removed a page while other links still point to it, those links will generate 404 errors. Another thing to watch out for is making sure that any redirects you have are clean and don't pass through too many hops before reaching the final destination, as long redirect chains can cause problems for Googlebot following your links. A quick way to spot both issues is sketched below.
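Both problems, broken links returning 404s and over-long redirect chains, can be caught with a short script. A sketch using the third-party requests library; the URL is a placeholder and the two-hop threshold is just a rule of thumb:

```python
import requests

# Placeholder URL; test any internal link you suspect is broken.
url = "https://www.example.com/old-page/"

resp = requests.get(url, allow_redirects=True, timeout=10)

if resp.status_code == 404:
    print("Broken link: 404 Not Found")

# resp.history holds every redirect hop that was followed.
hops = len(resp.history)
print(f"{hops} redirect hop(s) before the final URL: {resp.url}")
if hops > 2:
    print("Consider pointing the link straight at the final URL")
```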

Crawl Rate

The crawl rate shows how often the bots crawl your site for new information. From an SEO standpoint, you want to see a fairly even line with a gradual increase as the site adds more content.

Fetch as Google

You can fetch each of your pages as if you were Google yourself. At the top of the page is a search bar that lets you choose the path (page) that Google fetches. For example, you might fetch the home, "treatments" and "news" pages. The console already provides the root URL, so all you have to do is enter the path for each page, like so: /treatments/ or /news/, or leave the field blank for the home page. Once you've entered the path, you can choose to 'FETCH' or 'FETCH AND RENDER'.

Fetch – This will simply get an HTTP response from your target URL. It's a quick process that lets you know, first and foremost, whether Google can connect to your site.

Fetch and Render – Along with fetching, this will also render your page as it would appear on a desktop, loading all of the elements of your page, including pictures, videos and so on. This gives you a good idea of how Googlebot views your page compared to how a user sees it.
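Outside Search Console, you can approximate the 'FETCH' step by requesting a page with a Googlebot user agent and checking the HTTP response. A minimal sketch with a placeholder URL; bear in mind a server can still treat the real Googlebot differently (for example, via reverse-DNS verification), so this is only an approximation:

```python
import urllib.request
import urllib.error

url = "https://www.example.com/treatments/"  # placeholder URL

# Googlebot's commonly published desktop user-agent string.
req = urllib.request.Request(url, headers={
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                  "+http://www.google.com/bot.html)",
})

try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("HTTP", resp.status)  # 200 means the fetch succeeded
except urllib.error.HTTPError as e:
    print("HTTP", e.code)  # e.g. 404 or 500
```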

Once the pages are successfully fetched, you can submit a request to have Google crawl them. You can choose between crawling the URL only or also crawling any direct links on that URL. There is a monthly allowance of 500 crawl requests for URL-only crawls and 10 for crawls that include direct links.

Once you submit this, Google will crawl the page and its content shortly afterwards. If your page follows the appropriate guidelines, then it will be considered for indexing.

robots.txt Tester

This allows you to test your site's robots.txt against multiple bots (Googlebot, Googlebot-Image, Googlebot-Mobile etc.). You can even make changes to the file here and then upload it to your site.
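Python's standard library can reproduce a basic version of this test locally. A sketch with a placeholder domain and path, using the same crawler names mentioned above:

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain; point this at your own robots.txt.
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()

url = "https://www.example.com/private/page.html"  # placeholder path
for agent in ("Googlebot", "Googlebot-Image", "Googlebot-Mobile"):
    verdict = "allowed" if rp.can_fetch(agent, url) else "blocked"
    print(f"{agent}: {verdict}")
```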

