Search engine optimization, in its most fundamental sense, relies on one thing above all others: search engine crawlers crawling and indexing your site.
But nearly every website has pages you don't want included in that crawl.
At best, these pages do nothing to actively drive traffic to your site; at worst, they can divert traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it's a plain text file that lives in your website's root directory and follows the Robots Exclusion Protocol (REP).
Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags carry directions for specific pages.
Some meta robots tags you might employ include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
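For reference, a meta robots tag sits in the HTML head of a page; a hypothetical tag combining two of the directives above looks like this:

```html
<meta name="robots" content="noindex, nofollow">
```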
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it can control indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While you can set robots-related directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of at the page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to use a comma-separated list of directives.
Perhaps you don't want a particular page to be cached and also want it to be unavailable after a certain date. You can use a combination of "noarchive" and "unavailable_after" tags to instruct search engine bots to follow these directions.
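Using Apache's mod_headers module, for example, such a combination could be set in a single header; the date below is purely illustrative:

```apache
# Hypothetical example: prevent caching and drop the page from
# search results after the given date (date value is illustrative)
Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2025 15:00:00 PST"
```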
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as to apply parameters on a larger, global level.
To help you understand the difference between these directives, it's helpful to classify them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet to explain:
| Crawler Directives | Indexer Directives |
| --- | --- |
| Robots.txt: uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl. | Meta robots tag: allows you to specify and prevent search engines from showing particular pages of a site in search results.<br>Nofollow: allows you to specify links that should not pass on authority or PageRank.<br>X-Robots-Tag: allows you to control how specified file types are indexed. |
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via a .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
That sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
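Assuming Apache's mod_headers module is enabled, a typical .htaccess or server config rule for this would be:

```apache
# Apply a noindex, nofollow X-Robots-Tag header to every .pdf file
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```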
In Nginx, it would look like the below:

```nginx
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
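Again assuming mod_headers on Apache, a sketch of such a rule:

```apache
# Apply a noindex X-Robots-Tag header to common image file types
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
```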
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
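To make that concrete, suppose a hypothetical robots.txt contains:

```
User-agent: *
Disallow: /downloads/
```

A compliant crawler will never request anything under /downloads/, so an X-Robots-Tag: noindex header served on a file in that directory is never seen, and the URL can still end up indexed (for example, via external links pointing at it). For the header to be obeyed, the URL must remain crawlable.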
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information for the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
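You can also fetch headers from the command line (for example, with curl -sI) and inspect them yourself. Here is a minimal Python sketch that parses a captured header block; the raw headers below are hypothetical, not from a real site:

```python
from email.parser import Parser

# Hypothetical raw response headers, as you might capture them
# with `curl -sI <url>`; the values are illustrative only.
raw_headers = (
    "Content-Type: application/pdf\r\n"
    "X-Robots-Tag: noindex, nofollow\r\n"
)

# HTTP headers follow the RFC 822 "Name: value" shape, so the
# stdlib email parser can split them; lookups are case-insensitive.
parsed = Parser().parsestr(raw_headers, headersonly=True)
directives = [d.strip() for d in parsed.get("X-Robots-Tag", "").split(",")]
print(directives)  # ['noindex', 'nofollow']
```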
Another method that can be used at scale, to pinpoint issues on websites with a million pages, is Screaming Frog. After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog report, X-Robots-Tag column, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. As long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.

More Resources:

Featured Image: Song_about_summer/