* Wrox Real World SharePoint 2010 – Chapter 17

[Chapter 17: Understanding SharePoint 2010 Search]

07/27/2011, 07:05PM, Page 605

Two major engines can be used for SharePoint 2010 search:
(1) The built-in SharePoint 2010 search engine
(2) FAST Search Server 2010 for SharePoint (a separate, more expensive product)

How to deploy and configure a SharePoint Search service application:
(1) Go to Central Administration –> Manage Service Applications, and create a new Search service application.
(2) Associate your web application with the new Search service application: Central Administration –> Application Management –> Configure service application associations.
(3) In the service application association settings, check/uncheck the Search service application.

Search engine components:
(1) Crawler: browses the content automatically on a regular basis to provide up-to-date data from the data source to the indexer.
(2) Indexer: collects and stores relevant data about the crawled content, so that fast and precise information is available at query time.
(3) Query: provides the UI for entering user queries and presents the result set to end users. It communicates with the indexer directly to get the results for the user query and to put the proper result set together.
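The three components above can be sketched as a minimal pipeline. This is a generic illustration of the crawler –> indexer –> query flow, not SharePoint's internal implementation; all names and URLs here are made up:

```python
from collections import defaultdict

def crawl(content_source):
    """Crawler: visit each item in the content source and yield (url, text)."""
    for url, text in content_source.items():
        yield url, text

def build_index(crawled_items):
    """Indexer: build an inverted index mapping each term to the URLs containing it."""
    index = defaultdict(set)
    for url, text in crawled_items:
        for term in text.lower().split():
            index[term].add(url)
    return index

def query(index, term):
    """Query component: look the term up in the index and return the matches."""
    return sorted(index.get(term.lower(), set()))

# A dict stands in for a content source such as a SharePoint site or file share.
content_source = {
    "http://intranet/page1": "SharePoint search overview",
    "http://intranet/page2": "Configuring search crawls",
}
index = build_index(crawl(content_source))
print(query(index, "search"))  # both pages contain "search"
```

The key point: the query component never touches the content source directly; it only consults the index the crawler/indexer produced.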

Configuring crawling and building index files is the first step in building a SharePoint 2010 search architecture.

** Content Sources: (where to search?) Three important pieces of information about a content source: its type (SharePoint site? web site? file share?), its location/address, and its crawl schedule. By default there is one content source — the local SharePoint site. You can add multiple content sources for the crawlers to crawl.
** Crawl Rules: include or exclude a path/address, and provide authentication if needed. Use the Test button to check whether an address matches the rule.
** Crawl Log: check each content source — what was crawled, what warnings/errors occurred, etc.
** Server Name Mappings: override how URLs are displayed in the results.
** Host Distribution Rules: only apply to a farm with more than one crawl database; associate a host with a specific crawl database.
** File Types: which file types/extensions to include in the content index.
** Crawler Impact Rules: adjust the load the crawler applies to a specific content source (a URL/address), including the number of simultaneous requests and the time to wait between requests.
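The crawl-rule idea above can be sketched as a first-match-wins check over wildcard patterns. This is a hedged illustration of the concept, assuming first-match ordering and simple `*` wildcards; it does not reproduce SharePoint's exact rule syntax or semantics:

```python
from fnmatch import fnmatch

# Each rule pairs a wildcard path with an include/exclude action.
# URLs and rule order here are illustrative assumptions.
crawl_rules = [
    ("http://intranet/private/*", "exclude"),
    ("http://intranet/*", "include"),
]

def should_crawl(url, rules):
    """Return True if the first rule matching the URL says 'include'."""
    for pattern, action in rules:
        if fnmatch(url, pattern):
            return action == "include"
    return False  # no matching rule: skip the address

print(should_crawl("http://intranet/private/hr.aspx", crawl_rules))  # False
print(should_crawl("http://intranet/news.aspx", crawl_rules))        # True
```

Because the first matching rule wins, a narrow exclude rule placed before a broad include rule lets you crawl a site while skipping one of its subtrees.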

The role of the indexer is complicated — it includes the following responsibilities:
(1) processing the crawls
(2) making the indexes available to the query servers
(3) managing content source properties such as location and scheduling
(4) creating and maintaining an index database

Under Queries and Results:
* Authoritative Pages: first, second, and third level; the higher the level (first is highest), the higher the rank. You can also demote a web site.
* Federated Locations: usually remote search engines that provide results for queries initiated in SharePoint 2010, but they can also be local if you want to run simultaneous searches on the same content.
* Metadata Properties: properties that users can use in their queries, such as title: “xxxxxx”.
* Scopes: e.g., search “All Sites” or “People” (the values in the drop-down next to the search box).
* Search Result Removal: URLs to remove from the search results.
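A property filter like title: “xxxxxx” can be pulled out of a query string with a small parser. This is a rough sketch that only approximates the keyword query idea; the regex and behavior are assumptions, not SharePoint's actual query parser:

```python
import re

# Matches property:"value" pairs, e.g. title:"Annual Report".
PROPERTY_FILTER = re.compile(r'(\w+):"([^"]*)"')

def parse_query(query):
    """Split a query into {property: value} filters and the remaining free text."""
    filters = dict(PROPERTY_FILTER.findall(query))
    free_text = PROPERTY_FILTER.sub("", query).strip()
    return filters, free_text

filters, text = parse_query('title:"Annual Report" budget')
print(filters)  # {'title': 'Annual Report'}
print(text)     # budget
```

The filters would then restrict matching to the named metadata property, while the free text is matched against the full content index.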

**** Customizing Search UI:
Page 637, 07/27/2011, 09:58PM
