How to Create a Robots.txt File & Submit It to Google

A robots.txt file is a file at the root of your site that tells search engine crawlers which parts of your site you don't want them to access.
The file uses the Robots Exclusion Standard, which is a protocol with a small set of commands that can be used to indicate access to your site by section and by specific kinds of web crawlers (such as mobile crawlers vs desktop crawlers).

In the simplest sense, a robots.txt file tells crawlers which of a site's URLs they may crawl and which are blocked, so it is an advantage for any individual site to have one, as in the sketch below.
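As a minimal sketch, assuming a hypothetical /private/ directory you want to keep crawlers out of, a complete robots.txt file can be as short as this:

# Allow everything except the hypothetical /private/ directory
User-agent: *
Disallow: /private/

This tells every crawler that it may fetch any URL on the site except those under /private/.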

What is Robots.txt used for?

• Non-image files
For non-image files (that is, web pages), robots.txt should be used only to control crawling traffic, typically because you don't want your server to be overwhelmed by Google's crawler or to waste crawl budget on unimportant or similar pages on your site.
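For instance, a common use of this kind of crawl-traffic control is keeping crawlers away from auto-generated, low-value pages. A sketch, assuming your site produces such pages under hypothetical /search/ and /tag/ paths:

# Keep crawlers away from low-value, auto-generated pages
User-agent: *
Disallow: /search/
Disallow: /tag/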

You should not use robots.txt as a means to hide your web pages from Google Search results. Other pages might point to your page, and your page could get indexed that way, bypassing the robots.txt file. If you want to block your page from search results, use another method, such as password protection or a noindex tag or directive.

• Image files
You can use robots.txt to prevent image files from appearing in Google search results. (However, it does not prevent other pages or users from linking to your image.)

• Resource files
You can use robots.txt to block resource files such as unimportant image, script, or style files if you think that pages loaded without these resources will not be significantly affected by the loss. However, if the absence of these resources makes the page harder for Google's crawler to understand, you should not block them, or else Google won't do a good job of analyzing the pages that depend on them.
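For example, if you are confident that your pages remain understandable without certain purely decorative assets, a sketch like the following (the paths are hypothetical) would block them:

# Block resource files only if pages still render meaningfully without them
User-agent: *
Disallow: /assets/decorations/
Disallow: /scripts/widgets/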

Understand the Limitations of Robots.txt

Before you build your robots.txt, you should know the risks of this URL blocking method. At times, you might want to consider other mechanisms to ensure your URLs are not findable on the web.

• Robots.txt instructions are directives only
The instructions in robots.txt files cannot enforce crawler behavior on your site; instead, they act as directives to the crawlers accessing it. While Googlebot and other respectable web crawlers obey the instructions in a robots.txt file, other crawlers might not.

Therefore, if you want to keep information secure from web crawlers, it’s better to use other blocking methods, such as password-protecting private files on your server.

• Different crawlers interpret syntax differently
Although respectable web crawlers follow the directives in a robots.txt file, each crawler might interpret the directives differently. You should know the proper syntax for addressing different web crawlers as some might not understand certain instructions.
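For example, the wildcard * and the end-of-URL anchor $ are understood by Googlebot but were not part of the original Robots Exclusion Standard, so simpler crawlers may ignore rules that use them. A sketch with hypothetical paths:

# Googlebot understands this pattern rule
User-agent: Googlebot
Disallow: /*.pdf$

# A plain path prefix is the safest syntax for all other crawlers
User-agent: *
Disallow: /pdfs/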

• Your robots.txt directives can't prevent references to your URLs from other sites
While Google won't crawl or index the content blocked by robots.txt, it might still find and index a disallowed URL from other places on the web. As a result, the URL address and, potentially, other publicly available information, such as anchor text in links to the site, can still appear in Google search results.

You can stop your URL from appearing in Google Search results completely by using other URL blocking methods, such as password-protecting the files on your server or using the noindex meta tag (for example, <meta name="robots" content="noindex">) or response header.

Create a Robots.txt file

To create a robots.txt file, you need access to the root of your domain. If you're unsure how to access the root, contact your web hosting service provider. If you can't access the root of the domain, use alternative blocking methods, such as password-protecting the files on your server or inserting meta tags into your HTML.

You can create a robots.txt file, or edit an existing one, using the robots.txt Tester tool in Google Search Console. This lets you test your changes as you adjust your robots.txt.

Learn robots.txt syntax
The simplest robots.txt file uses two keywords, User-agent and Disallow. User-agents are search engine robots (or web crawler software); most user-agents are listed in the Web Robots Database. Disallow is a command that tells the user-agent not to access a particular URL. To give a crawler access to a particular URL that is a child directory within a disallowed parent directory, you can use a third keyword, Allow.

Google uses several user-agents, such as Googlebot for Google Search and Googlebot-Image for Google Image Search. Most Google user-agents follow the rules you set up for Googlebot, but you can override this default and make specific rules for certain Google user-agents, as in the sketch below.
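For example, Googlebot-Image obeys a group written specifically for it instead of the general Googlebot rules, as in this sketch (the /archive/ path is hypothetical):

# Applies to Googlebot for Google Search
User-agent: Googlebot
Disallow: /archive/

# Googlebot-Image follows this more specific group instead
User-agent: Googlebot-Image
Disallow: /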

The syntax for using the keywords is as follows:

User-agent: [the name of the robot the following rule applies to]

Disallow: [the URL path you want to block]

Allow: [the URL path of a subdirectory, within a blocked parent directory, that you want to unblock]

A User-agent line and the rule lines below it are together considered a single entry in the file, where the Disallow rules apply only to the user-agent(s) specified above them. You can include as many entries as you want, and multiple Disallow lines can apply to multiple user-agents, all in one entry (see the example below).

You can set the User-agent command to apply to all web crawlers by listing an asterisk (*) as in the example: User-agent: *
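For instance, here is a single entry in which two Google user-agents share the same set of rules (the paths are hypothetical):

# One entry: both user-agents below obey both Disallow lines
User-agent: Googlebot
User-agent: Googlebot-News
Disallow: /drafts/
Disallow: /tmp/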

URL blocking commands to use in your robots.txt file
• To block the entire site, use a forward slash (/):
Disallow: /

• To block a directory and its contents, follow the directory name with a forward slash:
Disallow: /sample-directory/

• To block a webpage, list the page after the slash:
Disallow: /private_file.html

• To block a specific image from Google Images:
User-agent: Googlebot-Image
Disallow: /images/dogs.jpg

• To block all images on your site from Google Images:
User-agent: Googlebot-Image
Disallow: /

• To block files of a specific file type (for example, .gif):
User-agent: Googlebot
Disallow: /*.gif$

• To block pages on your site while still showing AdSense ads on them, disallow all web crawlers other than Mediapartners-Google. This implementation hides your pages from search results, but the Mediapartners-Google web crawler can still analyze them to decide what ads to show visitors to your site.
User-agent: *
Disallow: /

User-agent: Mediapartners-Google
Allow: /


Note that path values are case-sensitive. For instance, Disallow: /file.asp would block http://www.example.com/file.asp but would allow http://www.example.com/File.asp. Googlebot also ignores whitespace and unknown directives in robots.txt.
Save and Test your Robots.txt file
You must apply the following saving conventions so that Googlebot and other web crawlers can find and identify your robots.txt file:

• You must save your robots.txt code as a text file,
• You must place the file in the highest-level directory of your site (the root of your domain), and
• The robots.txt file must be named robots.txt.
As an example, a robots.txt file saved at the root of example.com, at the URL address http://www.example.com/robots.txt, can be discovered by web crawlers, but a robots.txt file at http://www.example.com/not_root/robots.txt cannot be found by any web crawler.

To test your robots.txt file:

1. Open the tester tool for your site, and scroll through the robots.txt code to locate the highlighted syntax warnings and logic errors. The number of syntax warnings and logic errors is shown immediately below the editor.

2. Type the URL of a page on your site in the text box at the bottom of the page.

3. Select the user-agent you want to simulate in the dropdown list to the right of the text box.

4. Click the TEST button to test access.

5. Check whether the TEST button now reads ACCEPTED or BLOCKED to find out if the URL you entered is blocked from Google's web crawlers.

6. Edit the file on the page and retest as necessary. Note that changes made in the page are not saved to your site! See the next step.

7. Copy your changes to your robots.txt file on your site. This tool does not make changes to the actual file on your site; it only tests against the copy hosted in the tool.
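As an illustration, suppose your live robots.txt contained the following (the paths are hypothetical); the comments show the result the Tester would be expected to report for each check:

User-agent: *
Disallow: /private/

User-agent: Googlebot-Image
Disallow: /

# /private/notes.html tested as Googlebot        -> BLOCKED
# /blog/post.html tested as Googlebot            -> ACCEPTED
# /images/photo.jpg tested as Googlebot-Image    -> BLOCKED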

Submit your Updated Robots.txt to Google
The Submit function of the robots.txt Tester tool lets you put a new robots.txt file in place and ask Google to crawl and index it more quickly. Update and notify Google of changes to your robots.txt file by following the steps below.

1. Click Submit in the bottom-right corner of the robots.txt editor. This action opens up a Submit dialog.
2. Download your edited robots.txt code from the robots.txt Tester page by clicking Download in the Submit dialog.
3. Upload your new robots.txt file to the root of your domain as a text file named robots.txt (the URL for your robots.txt file should be /robots.txt).
If you do not have permission to upload files to the root of your domain, you should contact your domain manager to make changes.

For example, if your site's home page resides under subdomain.example.com/site/example/, you likely cannot update the robots.txt file at subdomain.example.com/robots.txt. In this case, contact the owner of example.com/ to make any necessary changes to the robots.txt file.

4. Click Verify live version to see that your live robots.txt is the version that you want Google to crawl.
5. Click Submit live version to notify Google that changes have been made to your robots.txt file and request that Google crawl it.
6. Check that your newest version was successfully crawled by Google by refreshing the page in your browser to update the tool’s editor and see your live robots.txt code. After you refresh the page, you can also click the dropdown above the text editor to view the timestamp of when Google first saw the latest version of your robots.txt file. 

Warning: use these features with caution. An incorrect robots.txt file can result in your blog being ignored by search engines.

Conclusion
By performing the steps above, you can create a robots.txt file very easily, and you now know how to submit it to Google. The robots.txt file plays an important role in search engine optimization for bloggers, so every site should have one to manage how it is crawled and to help it earn organic traffic from SERPs.

If you liked this post, please share it with your friends via Facebook, Twitter & Google Plus. Thank you.
