How to use the robots.txt file to improve the way search bots crawl your website?

The purpose of the robots.txt file is to tell search bots which files on your website should and should not be indexed. Most often it is used to list the files and folders that should not be indexed by search engines.

To allow search bots to crawl and index the entire content of your website, add the following lines to your robots.txt file:

User-agent: *
Disallow:

On the other hand, if you wish to prevent your entire website from being indexed, use the lines below:

User-agent: *
Disallow: /

For more advanced configurations you will need to understand the sections of the robots.txt file. The “User-agent:” line specifies which bots the settings apply to. Use “*” as the value to apply the rule to all search bots, or enter the name of the specific bot you wish to create rules for.
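
For example, to tell only Google's crawler (which identifies itself with the user-agent name Googlebot) not to index a hypothetical “drafts” folder, while leaving all other bots unrestricted, you could use rules like these:

User-agent: Googlebot
Disallow: /drafts

User-agent: *
Disallow:

A bot that matches a more specific “User-agent:” group follows that group instead of the “*” group.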

The “Disallow:” lines define the files and folders that should not be indexed by search engines. Each folder or file must be listed on its own line. For example, the lines below will tell all search bots not to index the “private” and “security” folders in your public_html folder:

User-agent: *
Disallow: /private
Disallow: /security

Note that the “Disallow:” statement treats your website root folder as the base directory, so the path to a file should be written as /sample.txt, for example, and not as /home/user/public_html/sample.txt.
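
As an illustration, to keep a file named sample.txt that sits inside the “private” folder (stored on the server as /home/user/public_html/private/sample.txt) out of the index, the rule would reference only the path relative to the website root:

User-agent: *
Disallow: /private/sample.txt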
