Web crawling is the process performed by a search engine crawler when searching for relevant websites to add to the index. For instance, Google is constantly sending out "spiders" or "bots", a search engine's automatic navigators, to discover which websites contain the most relevant information for certain keywords.
So there are basically three steps involved in the web crawling procedure, as the short sketch after the list illustrates.
First, the search bot starts by crawling the pages of your site.
Then it indexes the words and content of the site.
Finally, it visits the links (web page addresses, or URLs) found on your site.
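To make those three steps concrete, here is a minimal sketch of a crawler in Python using only the standard library. The start URL, the page limit, and the word-to-URL index structure are illustrative assumptions for this sketch, not how Google's or Bing's crawlers actually work.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects the text and outgoing links of a single page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.words = []
        self.links = []

    def handle_data(self, data):
        self.words.extend(data.split())

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def crawl(start_url, max_pages=10):
    index = {}             # word -> set of URLs containing it (assumed structure)
    to_visit = [start_url]
    seen = set()
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        # Step 1: crawl (fetch) the page.
        try:
            html = urlopen(url).read().decode("utf-8", errors="ignore")
        except OSError:
            continue  # unreachable pages are skipped
        parser = PageParser(url)
        parser.feed(html)
        # Step 2: index the words and content found on the page.
        for word in parser.words:
            index.setdefault(word.lower(), set()).add(url)
        # Step 3: queue the links found on the page for later visits.
        to_visit.extend(parser.links)
    return index


if __name__ == "__main__":
    # "https://example.com" is a placeholder start URL.
    index = crawl("https://example.com")
    print(sorted(index)[:20])
```

Real crawlers add politeness delays, duplicate detection, and robots.txt checks (see below), but the fetch, index, follow loop is the same.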
When the spider doesn't find a page, that page will eventually be deleted from the index. However, some spiders will check a second time to verify that the page really is offline before removing it.
The first thing a spider is supposed to do when it visits your website is look for a file called "robots.txt". This file contains instructions for the spider on which parts of the website to index and which parts to ignore, and it is the primary way to control what a spider crawls on your site. All spiders are supposed to follow these rules, and the major search engines do follow them for the most part. Fortunately, major search engines like Google and Bing are finally working together on standards.
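For illustration, here is a minimal sketch of how a well-behaved spider might consult robots.txt before fetching a page, using Python's standard urllib.robotparser. The example rules and URLs are hypothetical, not taken from any real site.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content: ignore /private/, index everything else.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A polite spider skips URLs the file disallows and may fetch the rest.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/blog/post.html"))     # True
```

In practice a spider downloads robots.txt from the site's root and runs a check like this before every request.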