How to Create a Robots.txt File & Submit It to Google

A robots.txt file is a file at the root of your site that indicates which parts of your site you don't want search engine crawlers to access.


The file uses the Robots Exclusion Standard, which is a protocol with a small set of commands that can be used to indicate access to your site by section and by specific kinds of web crawlers (such as mobile crawlers vs desktop crawlers).

In the simplest sense, a robots.txt file tells crawlers which of a site's URLs they may crawl and which are blocked, so it is an advantage for any site to have one.
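
For example, a minimal robots.txt might allow every crawler to access everything except a single directory (the /private/ path here is purely illustrative):

    # Applies to all crawlers; the /private/ path is illustrative
    User-agent: *
    Disallow: /private/

The User-agent line names which crawler the group applies to (* matches all crawlers), and each Disallow line lists a URL path that crawler should not fetch.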

What is Robots.txt used for?

• Non-image files
For non-image files (that is, web pages) robots.txt should only be used to control crawling traffic, typically because you don't want your server to be overwhelmed by Google's crawler or to waste crawl budget crawling unimportant or similar pages on your site.
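
As a sketch, suppose your site generates endless internal search result pages that waste crawl budget; you could disallow just that path (the /search path is a hypothetical example):

    # Block only internal search result pages (hypothetical /search path)
    User-agent: *
    Disallow: /search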

You should not use robots.txt as a means to hide your web pages from Google Search results. This is because other pages might link to your page, and your page could get indexed that way, bypassing the robots.txt file. If you want to block your page from search results, use another method, such as password protection or noindex tags or directives.
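
For reference, the noindex directive mentioned above is placed on the page itself rather than in robots.txt, either as an HTML meta tag in the page's head or as an HTTP response header:

    <!-- Page-level meta tag, placed in the page's <head> -->
    <meta name="robots" content="noindex">

    X-Robots-Tag: noindex

Keep in mind that a crawler must be able to fetch the page to see a noindex tag, so such a page should not also be disallowed in robots.txt.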

• Image files
robots.txt can prevent image files from appearing in Google search results. (However, it does not prevent other pages or users from linking to your image.)
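
For instance, to keep Google's image crawler out of a particular folder (the /images/private/ path is illustrative):

    # Applies only to Google's image crawler; the path is illustrative
    User-agent: Googlebot-Image
    Disallow: /images/private/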

Understand the Limitations of Robots.txt

Before you build your robots.txt, you should know the risks of this URL blocking method. At times, you might want to consider other mechanisms to ensure your URLs are not findable on the web.

Robots.txt instructions are directives only
The instructions in robots.txt files cannot enforce crawler behavior on your site; instead, these instructions act as directives to the crawlers accessing it. While Googlebot and other reputable web crawlers obey the instructions in a robots.txt file, other crawlers might not.

Therefore, if you want to keep information secure from web crawlers, it’s better to use other blocking methods, such as password-protecting private files on your server.

Different crawlers interpret syntax differently
Although reputable web crawlers follow the directives in a robots.txt file, each crawler might interpret the directives differently. You should know the proper syntax for addressing different web crawlers, as some might not understand certain instructions.
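
A robots.txt file can therefore carry separate groups for specific crawlers alongside a catch-all group; a crawler obeys the most specific group that matches its user-agent name. A brief sketch (the paths are placeholders):

    # Group for Googlebot only
    User-agent: Googlebot
    Disallow: /not-for-google/

    # Catch-all group for every other crawler
    User-agent: *
    Disallow: /not-for-anyone/

Also note that some directives are extensions with uneven support; Crawl-delay, for example, is honored by some crawlers but ignored by Googlebot.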

Your robots.txt directives can’t prevent references to your URLs from other sites
While Google won't crawl or index the content blocked by robots.txt, it might still find and index a disallowed URL from other places on the web. As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results.

You can stop your URL from appearing in Google Search results completely by using other URL blocking methods, such as password-protecting the files on your server or using the noindex meta tag or response header.

Creating a Robots.txt file

In order to make a robots.txt file, you need access to the root of your domain. If you're unsure about how to access the root, you can contact your web hosting service provider.

Also, if you can't access the root of the domain, you can use alternative blocking methods, such as password-protecting the files on your server or inserting meta tags into your HTML.

You can create a new robots.txt file or edit an existing one using the robots.txt Tester tool, which lets you test your changes as you adjust your robots.txt.

Save and Test your Robots.txt file
You must apply the following saving conventions so that Googlebot and other web crawlers can find and identify your robots.txt file:

• You must save your robots.txt code as a plain text file,
• You must place the file in the highest-level directory of your site (the root of your domain), and
• The file must be named robots.txt.

As an example, a robots.txt file saved at the root of example.com, at the URL http://www.example.com/robots.txt, can be discovered by web crawlers, but a robots.txt file at http://www.example.com/not_root/robots.txt cannot be found by any web crawler.
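
Putting these conventions together, a complete robots.txt saved at http://www.example.com/robots.txt might look like the following (the disallowed path and sitemap URL are placeholders):

    # Saved as a plain text file at http://www.example.com/robots.txt
    User-agent: *
    Disallow: /admin/

    # Optional but widely supported sitemap reference
    Sitemap: http://www.example.com/sitemap.xml

The optional Sitemap line is widely supported and helps crawlers discover your XML sitemap.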

Open the tester tool for your site and scroll through the robots.txt code to locate the highlighted syntax warnings and logic errors; the number of warnings and errors is shown immediately below the editor. Then type the URL of a page on your site in the text box at the bottom of the page and follow these steps:

1. Select the user-agent you want to simulate in the drop-down list to the right of the text box.

[Screenshot: Test your robots.txt file, Step 1]

2. Click the TEST button to test access.

[Screenshot: Test your robots.txt file, Step 2]

3. Check whether the TEST button now reads ACCEPTED or BLOCKED to find out if the URL you entered is blocked from Google's web crawlers.

[Screenshot: Test your robots.txt file, Step 3]

4. Edit the file on the page and retest as necessary. Note that changes made in the page are not saved to your site! See the next step.

5. Copy your changes to your robots.txt file on your site. This tool does not make changes to the actual file on your site; it only tests against the copy hosted in the tool.

Submit your Updated Robots.txt to Google
The Submit function of the robots.txt Tester tool lets you put a new robots.txt file in place and ask Google to crawl and index it more quickly. Update and notify Google of changes to your robots.txt file by following the steps below.

1. Click Submit in the bottom-right corner of the robots.txt editor. This action opens up a Submit dialog.

[Screenshot: Submit robots.txt file to Google, Step 1]

2. Download your edited robots.txt code from the robots.txt Tester page by clicking Download in the Submit dialog.

[Screenshot: Submit robots.txt file to Google, Step 2]

3. Upload your new robots.txt file to the root of your domain as a text file named robots.txt (the URL for your robots.txt file should be /robots.txt).

[Screenshot: Submit robots.txt file to Google, Step 3]

If you do not have permission to upload files to the root of your domain, you should contact your domain manager to make changes.

For example, if your site's home page resides under subdomain.example.com/site/example/, you likely cannot update the robots.txt file at subdomain.example.com/robots.txt. In this case, you should contact the owner of the domain to make any necessary changes to the robots.txt file.
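
Before verifying, a quick way to confirm the upload worked is to fetch the file directly, for example with curl (substitute your own domain):

    # Fetch the live robots.txt and print it to the terminal
    curl http://www.example.com/robots.txt

If the command prints your new directives, crawlers will see the same content.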

4. Click Verify live version to see that your live robots.txt is the version that you want Google to crawl.

[Screenshot: Submit robots.txt file to Google, Step 4]

5. Click Submit live version to notify Google that changes have been made to your robots.txt file and request that Google crawl it.

[Screenshot: Submit robots.txt file to Google, Step 5]

6. Check that your newest version was successfully crawled by Google by refreshing the page in your browser to update the tool’s editor and see your live robots.txt code. After you refresh the page, you can also click the drop-down above the text editor to view the timestamp of when Google first saw the latest version of your robots.txt file. 

[Screenshot: Submit robots.txt file to Google, Step 6]

Note: Use these features with caution. Incorrect use can result in your blog being ignored by search engines.

Conclusion
By following the steps above, you can create a robots.txt file very easily, and you now know how to submit it to Google. The robots.txt file plays an important role in Search Engine Optimization for bloggers.

That is why a well-configured robots.txt file is an important asset for any site seeking organic traffic from SERPs.

If you liked this post, please share it with your friends via Facebook, Twitter & Google Plus. Thank you.
