How to Stop Search Engines from Crawling your Website

For your website to be found by other people, search engine crawlers, also known as bots or spiders, will crawl each page of your website looking for changes to text and links so they can update their search indexes.

How to Control Search Engine Crawlers with a robots.txt File

Website owners can use a file called robots.txt to define instructions for how search engine crawlers should crawl a website. When a search engine starts crawling a site, it should request the robots.txt file first and follow the rules within it. We say "should" because there is nothing to force crawlers to look for this file and obey its instructions; most "bad" crawlers either don't read the file or simply ignore any instructions they want to!
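To illustrate how a well-behaved crawler applies these rules, Python's standard library ships a robots.txt parser, urllib.robotparser. The sketch below uses a hypothetical robots.txt that blocks two example directories; the paths and the crawler name are assumptions for illustration, not rules your site needs:

```python
# A compliant crawler downloads /robots.txt before fetching anything else
# and checks each URL against its rules.
from urllib.robotparser import RobotFileParser

# Contents of a hypothetical robots.txt file (example paths only):
robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler asks before fetching each URL:
print(rp.can_fetch("ExampleBot", "https://example.com/private/page.html"))  # False (blocked)
print(rp.can_fetch("ExampleBot", "https://example.com/"))                   # True (allowed)
```

As the article notes, this check is entirely voluntary: nothing stops a crawler from fetching the blocked URLs anyway.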

