Search engine optimization, in its most basic sense, relies on one thing above all others: search engine crawlers crawling and indexing your website.
However, almost every website has pages that you don't want included in this exploration.
In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Fortunately, Google lets webmasters tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely check out.
But in high-level terms, it's a plain text file that lives in your site's root directory and follows the Robots Exclusion Protocol (REP).
Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags contain directives for specific pages.
Some of the meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
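A robots meta tag sits in a page's `<head>`. As a quick illustrative sketch, to keep a page out of the index while still letting crawlers follow its links, you might use:

```html
<meta name="robots" content="noindex, follow">
```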
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by crawlers. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
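For example, the response for a PDF you want kept out of the index might carry a header like this (status line shown, unrelated headers trimmed):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```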
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, naturally, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While you can set robots.txt-related directives in the headers of an HTTP response with both the meta robots tag and X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide instead of at the page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify rules.
Maybe you don't want a certain page to be cached and want it to be unavailable after a certain date. You can use a combination of "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
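A sketch of such a header, combining both directives in one comma-separated list (the date here is purely illustrative):

```http
X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST
```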
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply directives at a larger, global level.
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet to explain:
|Crawler Directives|Indexer Directives|
|---|---|
|Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.|Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.<br><br>Nofollow – allows you to specify links that should not pass on authority or PageRank.<br><br>X-Robots-Tag – allows you to control how specified file types are indexed.|
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or an .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we want search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
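A minimal .htaccess sketch, assuming the mod_headers module is enabled (the expression matches any file ending in .pdf and attaches the header to its responses):

```apache
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```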
In Nginx, it would look like the below:

```nginx
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
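A sketch following the same .htaccess pattern, again assuming mod_headers (the regular expression covers the image extensions mentioned above):

```apache
<Files ~ "\.(jpg|jpeg|gif|png)$">
  Header set X-Robots-Tag "noindex"
</Files>
```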
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from robots.txt, then the indexing and serving directives cannot be discovered and will not be followed.
If you want directives to be followed, then the URLs containing them cannot be disallowed from crawling.
Checking For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
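If you'd rather script the check, any HTTP client can read the same response headers. Here's a minimal, self-contained Python sketch that spins up a throwaway local server (standing in for a real site, so the example runs anywhere) and reads the X-Robots-Tag back; the URL path and directives are illustrative:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # The directive we want crawlers to see for this URL.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/page.pdf"
with urllib.request.urlopen(url) as resp:
    tag = resp.headers.get("X-Robots-Tag")

print(tag)  # -> noindex, nofollow
server.shutdown()
```

Swapping the local URL for a real one is the same check a browser extension or `curl -I` performs.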
Another method that can be used to scale in order to identify issues on websites with a million pages is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog report, X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Website

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: it's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/