Robots.txt
n.
Pronunciation
/ˈroʊˌbɑts dɑt tɛkst/
Definition
A plain-text file in the root directory of a website that tells search engine crawlers which pages they may or may not crawl.
Robots.txt tells spiders whether they are allowed to crawl and index certain web pages. Spiders (also called crawlers or bots) are programs that search engines use to gather and organize the information that lets them find and display websites. Website owners usually want their ecommerce pages displayed as prominently as possible in search results, but a business may not want pages meant for internal use to be indexed. Note that robots.txt is advisory: major search engine crawlers honor it voluntarily, but it does not actually block access, so it should not be relied on to protect sensitive pages.
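As a sketch of how this works in practice, the snippet below uses Python's standard-library robots.txt parser against a hypothetical ecommerce site's rules (the `/admin/` path and example URLs are illustrative, not from the original entry):

```python
from urllib import robotparser

# Hypothetical robots.txt for an ecommerce site: all crawlers
# ("User-agent: *") are asked not to crawl the internal /admin/ area.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Public product pages are crawlable; the internal area is not.
print(rp.can_fetch("*", "https://example.com/products/widget"))  # True
print(rp.can_fetch("*", "https://example.com/admin/orders"))     # False
```

A well-behaved spider performs the same check before requesting each URL; a spider that ignores the file can still fetch the page, which is why robots.txt is a crawling convention rather than an access control.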
Category
Technology
Related Terms
Incremental Crawl, Full Crawl, Search Engine Optimization, Spider
Index
Product Information Encyclopedia