Wikipedia content is licensed under the GNU Free Documentation License.
A search engine is a program designed to help the user access files stored
on a computer, for example on the World Wide Web, by allowing the user to
ask for documents meeting certain criteria (typically those containing a
given word, a set of words, or a phrase) and retrieving files that match
those criteria. Unlike an index document that organizes files in a
predetermined way, a search engine looks for files only after the user has
entered search criteria.
In the context of the Internet, search engines usually refer to the World
Wide Web and not other protocols or areas. Because the data collection is
automated, they are distinguished from Web directories, which are maintained by human editors.
How search engines work
Web search engines work by storing information about a large number of web
pages which they retrieve from the WWW itself. These pages are retrieved by
a web crawler -- an automated web browser which follows every link it sees.
The contents of each page are then analyzed to determine how it should be
indexed (for example, words are extracted from the titles, headings, or
special fields called meta tags). This data about the web pages is stored in
some form of an index database for use in later queries. Some search
engines, such as Google, store all or part of the source page (referred to
as a cache) in addition to the information about the web pages.
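The index described above can be pictured as a simple inverted index: a mapping from each word to the set of pages that contain it. The following is a minimal sketch of that idea (the page names and text are invented sample data, not any real engine's format):

```python
# A toy inverted index: maps each word to the set of page IDs containing it.
def build_index(pages):
    """pages: dict mapping page ID -> page text."""
    index = {}
    for page_id, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(page_id)
    return index

# Hypothetical sample pages for illustration.
pages = {
    "a.html": "search engines index the web",
    "b.html": "the web grows quickly",
}
index = build_index(pages)
# index["web"] now contains both page IDs.
```

A real engine would also record word positions, extract text from titles and meta tags, and store the index on disk, but the word-to-pages mapping is the core structure.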
When a user comes to the search engine and makes a query, typically by
giving some key words, the engine looks up the index and provides a listing
of best-matching web pages according to its criteria, usually with a short
summary having at least the document's title and sometimes parts of the text.
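Query evaluation against such an index can be sketched as a set intersection: each query word selects a set of pages, and only pages containing every word match. This is an illustrative sketch with AND semantics over a small hypothetical index, not a description of any particular engine:

```python
# Sketch of keyword lookup against an inverted index (AND semantics).
def search(index, query):
    """Return the page IDs whose text contains every query word."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Hypothetical sample index for illustration.
index = {
    "search":  {"a.html"},
    "engines": {"a.html", "c.html"},
    "web":     {"a.html", "b.html"},
}
```

Ranking the matches, as discussed below, is a separate step applied to this candidate set.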
The usefulness of a search engine to most people is based on the relevance
of results it gives back. While there may be millions of Web pages that
include a particular word or phrase, often particular pages are more
relevant, popular, or authoritative. Most search engines employ methods to
rank the results to provide the "best" results first. How a search engine
decides which pages are the best matches, and what order the results should
be shown in, varies widely from one engine to another. The methods also
change over time as Internet usage changes and techniques improve.
Most Web search engines are commercial ventures supported by advertising
revenue, and as a result some employ the controversial practice of allowing
advertisers to pay money to have their listings ranked higher in search results.
The first Web search engine was Lycos which started at Carnegie Mellon
University as a research project in 1994.
Soon after, many search engines vied for popularity and gained and lost top
standing, such as Lycos, WebCrawler, HotBot, Excite, Infoseek, Inktomi, and
AltaVista. In some ways they competed with popular directories such as
Yahoo!. Later, the directories integrated or added on search engine
technology for greater functionality.
Search engines were also among the brightest stars of the Internet investing
frenzy of the late 1990s. Several companies
entered the market spectacularly, posting record gains during their
initial public offerings.
Prior to the Web, there were search engines for other protocols or uses,
such as the Archie search engine for anonymous FTP sites and the Veronica
search engine for the Gopher protocol.
Osmar R. Zaïane's "From Resource Discovery to Knowledge Discovery on the
Internet" (1998) is a good pre-Google history of search-engine technology.
Recent additions to the list of search engines are Ask Jeeves, Vivisimo and Kartoo.
Around 2001-2002, the Google search engine rose to prominence. Its
success was based in part on the concept of link popularity and PageRank.
Each page is ranked by how many pages link to it, on the premise that good
or desirable pages are linked to more than others. The PageRank of linking
pages and the number of links on these pages contribute to the PageRank of
the linked page. This makes it possible for Google to first present pages
that are highly linked to by quality websites.
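The iteration behind PageRank can be sketched in a few lines. This is a minimal illustration using a tiny invented three-page graph; the damping factor of 0.85 is the conventional value from the PageRank literature, and the sketch omits the optimizations a real implementation would need:

```python
# Minimal PageRank iteration (illustrative sketch, not Google's implementation).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Each outlink receives an equal share of this page's rank.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page graph: "home" is linked to by both other pages.
graph = {
    "home":  ["about", "news"],
    "about": ["home"],
    "news":  ["home", "about"],
}
ranks = pagerank(graph)
```

Note how a page's rank depends both on who links to it and on how many links those pages cast, matching the description above: a link from a page with few outlinks contributes more than a link from a page with many.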
Researchers at NEC Research Institute claim to have improved upon Google's
patented PageRank technology by using web crawlers to find "communities" of
websites. Instead of ranking pages, this technology uses an algorithm that
follows links on a webpage to find other pages that link back to the first
one and so on from page to page. The algorithm "remembers" where it has been
and indexes the number of cross-links and relates these into groupings. In
this way virtual communities of webpages are found.
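The cross-linking idea can be illustrated with a sketch that grows a group outward from a starting page, admitting a new page only if it links back into the group. This is a simplified illustration of mutual-link grouping, not the NEC algorithm itself, and the graph is invented:

```python
# Illustrative sketch of finding a "community" of mutually linking pages.
def find_community(links, start):
    """links: dict mapping each page to the set of pages it links to."""
    community = {start}
    frontier = [start]
    while frontier:
        page = frontier.pop()
        for target in links.get(page, set()):
            # Admit a page only if it links back into the community.
            if target not in community and community & links.get(target, set()):
                community.add(target)
                frontier.append(target)
    return community

# Hypothetical graph: a and b link to each other; c and d form a separate pair.
graph = {
    "a": {"b", "c"},
    "b": {"a"},
    "c": {"d"},
    "d": {"c"},
}
community = find_community(graph, "a")
```

Here "c" is reachable from "a" but never links back into the group, so it is excluded; the sketch keeps only pages bound by reciprocal links.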
Challenges faced by search engines
* The web is growing much faster than any present-technology search
engine can possibly index (see distributed crawling).
* Many web pages are updated frequently, which forces the search engine
to revisit them periodically.
    * The queries one can make are currently limited to searching for key
words, which may result in many false positives.
    * Dynamically generated sites may be slow or difficult to index, or may
produce an excessive number of results from a single site.
* Some search engines do not order the results by relevance, but rather
according to how much money the sites have paid them.