First search engine?

Search engines retrieve information from databases based on user-defined criteria. The first search engine, Archie, indexed FTP servers in the late 1980s. The World Wide Web Wanderer began indexing URLs in 1993, and that same year Excite incorporated page content analysis. WebCrawler and Lycos, both released in 1994, were hugely successful; Lycos ranked its results by relevance.

A search engine is a computer program that retrieves information from a database based on certain user-defined criteria. Modern search engines query databases that hold huge amounts of data culled from the World Wide Web, newsgroups, and directory projects.

The first search engine was created before the World Wide Web existed, but after the Internet had become popular on the university circuit. At that point in history, the late 1980s and early 1990s, one of the major protocols used on the Internet was the File Transfer Protocol (FTP). FTP servers existed all over the world, usually at university campuses, research facilities, or government agencies. Some students at McGill University in Montreal decided that a centralized database of the files available on various popular FTP servers would save time and provide a great service to others. This was the origin of the Archie search engine.

Archie, short for archive, was a program that regularly accessed the FTP servers on its list and created an index of the files on each server. Since processor time and bandwidth were still valuable commodities, Archie only checked for updates every month or so. At first, the index Archie created was meant to be searched with the Unix grep command, but a better user interface was soon developed to make searching the index easy. Following Archie, a handful of search engines sprang up to cover the similar Gopher protocol; two of the most famous were Jughead and Veronica. Archie became relatively obsolete with the advent of the World Wide Web and the search engines that followed, but Archie servers still exist.
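To give a feel for that grep-style lookup, here is a minimal Python sketch of searching a flat-file index of the kind Archie built; the file name, the one-entry-per-line index format, and the matching rule are illustrative assumptions, not Archie's actual implementation.

```python
# Minimal sketch of a grep-style search over a flat-file index,
# in the spirit of Archie's early interface. The index format
# (one "server<TAB>path" entry per line) is a hypothetical
# assumption for illustration.
import sys

def search_index(index_path, pattern):
    """Return every index line containing the pattern, case-insensitively,
    much like running `grep -i pattern index_file` at the shell."""
    pattern = pattern.lower()
    with open(index_path, encoding="utf-8") as index:
        return [line.rstrip("\n") for line in index if pattern in line.lower()]

if __name__ == "__main__":
    # Usage: python archie_search.py ftp_index.txt xmodem
    for entry in search_index(sys.argv[1], sys.argv[2]):
        print(entry)
```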

In 1993, not long after the World Wide Web was created, Matthew Gray developed the World Wide Web Wanderer, the first web robot. The Wanderer indexed the websites then in existence by capturing their URLs, but did not track the actual content of the pages. The index associated with the Wanderer, an early type of search engine, was called Wandex.
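As an illustration of what a URL-only robot involves, here is a minimal Python sketch of a crawler that records the URLs it encounters while discarding page content; the seed URL, the page limit, and the error handling are illustrative choices, not the Wanderer's actual behavior.

```python
# Minimal sketch of a URL-only crawler in the spirit of the Wanderer:
# it records the URLs it visits but keeps none of the page content.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=20):
    """Breadth-first walk that indexes URLs only."""
    seen, queue = set(), [seed]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable pages are simply skipped
        collector = LinkCollector()
        collector.feed(html)
        queue.extend(urljoin(url, link) for link in collector.links)
    return seen

if __name__ == "__main__":
    for url in sorted(crawl("https://example.com")):
        print(url)
```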

A few other small projects followed the Wanderer and edged closer to the modern search engine. These included the World Wide Web Worm, the Repository-Based Software Engineering (RBSE) spider, and JumpStation. All three used data collected by web robots to return information to users. The information was returned mostly unfiltered, although RBSE did attempt to rank the value of pages.

In 1993, Excite, a company founded by Stanford students, released what was probably the first search engine to actually incorporate page content analysis. This initial offering was intended for searching within a single site, however, not the web as a whole.
In 1994, however, the world of search engines took a major turn. A company called WebCrawler went live with a search engine that grabbed not only the title and header of pages on the Internet, but all of their content as well. WebCrawler was hugely successful, so successful that much of the time it could not even be used because its system resources were exhausted.
A little later that year Lycos was released, which included many of the same features as WebCrawler and built on them. Lycos ranked its results by relevance and allowed the user to adjust a number of settings to get better-fitting results. Lycos was also huge: it had archived over a million websites in its first year, and within two years it had reached 60 million.
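To make the difference between URL capture and full-content indexing concrete, here is a minimal Python sketch of an inverted index with simple term-frequency scoring; the toy documents and the scoring rule are illustrative assumptions, not how WebCrawler or Lycos actually ranked pages.

```python
# Minimal sketch of full-content indexing with relevance ranking:
# an inverted index mapping each word to the documents containing it,
# scored by raw term frequency. Real engines used far more
# sophisticated ranking signals.
from collections import Counter, defaultdict

def build_index(docs):
    """Map each word to a Counter of {doc_id: occurrences}."""
    index = defaultdict(Counter)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word][doc_id] += 1
    return index

def search(index, query):
    """Rank documents by total occurrences of the query words."""
    scores = Counter()
    for word in query.lower().split():
        scores.update(index.get(word, Counter()))
    return scores.most_common()

if __name__ == "__main__":
    docs = {
        "page1": "search engines index the web",
        "page2": "the web crawler fetches web pages for the search engine",
    }
    print(search(build_index(docs), "search web"))
    # page2 outranks page1 because "web" appears in it twice
```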
