Robots.txt

n.

Pronunciation

/ˈroʊˌbɑts dɑt tɛkst/

Definition

A plain-text file in a website's root directory that tells search engine crawlers which pages they may and may not crawl.

Robots.txt tells spiders which web pages they are allowed to crawl. Spiders (also called crawlers or bots) are programs commonly used by search engines to gather and organize the information that lets those search engines find and display a website. Usually, website owners want their ecommerce web pages displayed as prominently as possible in search results. However, a business may not want pages meant for internal use to be crawled and indexed. Compliance with robots.txt is voluntary: all major search engines honor it, but it does not block bots that choose to ignore it. (The Internet Archive's Wayback Machine, which once honored robots.txt, announced in 2017 that it would begin disregarding it.)
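
As a sketch of how this works in practice, the example below builds a hypothetical robots.txt for an ecommerce site and evaluates it with Python's standard urllib.robotparser module, performing the same check a compliant spider makes before fetching a page. The paths (/admin/, /internal/, /products/) and the example.com URLs are invented for illustration.

    from urllib import robotparser

    # Hypothetical rules: block internal and admin pages,
    # leave public product pages open to all crawlers.
    rules = """\
    User-agent: *
    Disallow: /admin/
    Disallow: /internal/
    Allow: /products/
    """.splitlines()

    parser = robotparser.RobotFileParser()
    parser.parse(rules)

    # A compliant spider checks each URL before crawling it.
    print(parser.can_fetch("*", "https://example.com/products/widget"))  # True
    print(parser.can_fetch("*", "https://example.com/admin/orders"))     # False

A real crawler would fetch the file from the site's root (e.g. https://example.com/robots.txt, using the parser's set_url and read methods) rather than parsing an inline string.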

Category

Technology

Related Terms

Incremental Crawl, Full Crawl, Search Engine Optimization, Spider

