/ˈroʊˌbɑts dɑt tɛkst/
A file in a website's root directory that tells search engine crawlers which pages on the site they may or may not visit.
Robots.txt tells spiders whether or not they are allowed to crawl certain web pages. Spiders are bots commonly used by search engines to gather and organize the information that allows those search engines to find and display a website. Usually, website owners want their ecommerce web pages displayed as prominently as possible on search engines. However, a business may not want web pages meant for internal use only to be crawled. Note that robots.txt controls crawling rather than indexing: a disallowed page can still appear in search results if other sites link to it. Compliance with robots.txt is voluntary, but the file is honored by all major search engines and, historically, by the Wayback Machine.
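As an illustration, a minimal robots.txt (a hypothetical example, not from the source) might block every spider from an internal-only directory while leaving the rest of the site crawlable. Python's standard urllib.robotparser module can check what such a file permits:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block every spider ("*") from the
# internal-only /admin/ area, but allow the rest of the site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# An ecommerce product page stays crawlable...
print(parser.can_fetch("*", "https://shop.example.com/products/widget"))  # True
# ...while the internal admin page is off limits to compliant spiders.
print(parser.can_fetch("*", "https://shop.example.com/admin/orders"))  # False
```

A compliant spider performs this same check before requesting any URL; a non-compliant bot can simply ignore the file, which is why robots.txt is a convention rather than an access control.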
Incremental Crawl, Full Crawl, Search Engine Optimization, Spider