Relevance Ranking for Vertical Search Engines, FIRST EDITION (2014)

Chapter 4. Visual Search Ranking

Abstract

This chapter introduces some fundamental and practical technologies as well as some major emerging trends in visual search ranking. We first describe the generic visual search system, in which three categories of visual search are presented: text-based, query example-based, and concept-based visual search ranking. Then we discuss the three categories in detail, including a review of various popular algorithms. To further improve the performance of initial search results, four paradigms of visual search reranking are presented: (1) self-reranking, which focuses on detecting relevant patterns from initial search results without any external knowledge; (2) example-based reranking, in which the query examples are provided by users so that the relevant patterns can be discovered from these examples; (3) crowd reranking, which mines relevant patterns from crowdsourcing information available on the Web; and (4) interactive reranking, which utilizes user interaction to guide the reranking process. In addition, we discuss the relationship between learning and visual search, since most recent visual search ranking frameworks are developed based on machine learning technologies. Last, we conclude with several promising directions for future research.

Keywords

Text-based search ranking

query example-based search ranking

concept-based search ranking

visual search reranking

learning to rank

Introduction

With rapid advances in data capture, storage devices, networks, and social communication technologies, large-scale multimedia data has become available to the general public. Flickr [123], the most popular photo-sharing site on the Web, reached 5 billion photo uploads in 2011, with thousands of photos uploaded every minute (about 4.5 million daily). As of 2011, Facebook held more than 60 billion photos shared by its communities [117]. YouTube streamed more than 1 billion videos per day worldwide in 2012 [390].

Such explosive growth and widespread accessibility of visual content have led to a surge in research activities related to visual search. The key problem is to retrieve visual documents (such as images, video clips, and Web pages containing images or videos) that are relevant to a given query or the user’s search intent from a large-scale database. Unlike text search, visual search is a more challenging task because it requires semantic understanding of visual content. There are two main issues in creating an efficient visual search mechanism. One is how to represent queries and index visual documents. The other is how to match the representations of queries and visual documents and measure the relevance between them. In the last decade, visual search has attracted much attention, though it has been studied since the early 1990s (then referred to as content-based image/video retrieval [214,217,297]). All of this research has aimed at addressing these two issues.

This chapter introduces the basic techniques for visual search ranking. We first briefly introduce the generic visual search system, and then we categorize the approaches to visual search ranking. Specifically, we present several basic techniques for visual search ranking: (1) text-based search ranking, which leverages query keywords and documents’ textual features; (2) query example-based search ranking, which examines query examples because they may provide rich information about the user’s intent; and (3) concept-based search ranking, which utilizes the results from concept detection to aid search. Due to the great success of text document retrieval, most popular image and video search engines rely only on the surrounding text associated with the images or videos. However, visual relevance cannot be merely judged by text search techniques, since the textual information is usually too noisy to precisely describe visual content, or it could even be unavailable. To address this problem, visual search reranking has received increasing attention in recent years. Therefore, we present the current approaches to visual search reranking as well. In addition, we discuss search models based on machine learning techniques, including classification-based ranking and learning to rank. We conclude this chapter with a list of promising directions for future research.

4.1 Generic Visual Search System

A typical visual search system consists of several main components, including query preprocessing, visual feature extraction, semantic analysis, search models (e.g., text-, visual-, and concept-based search), and reranking. Figure 4.1 shows a generic visual search framework. Usually the query in a visual search system consists of a piece of textual query (e.g., “find shots in which a boat moves past”) and possibly a set of query examples (e.g., images or video keyframes/clips). Query preprocessing is mainly used to obtain more accurate text-based search results based on a given speech recognition transcript, the closed captions available from a program channel, or the captions embedded in video frames recognized through optical character recognition (OCR). Visual feature extraction is used to detect a set of low-level visual features (global, local, and region features) to represent the query examples. Query analysis is used to map the query to relevant high-level concepts with pretrained classifiers (e.g., “boat,” “water,” “outdoor”) for concept-based search. These multimodal queries are fed into the individual search models, such as text-, concept-, and visual-based search, respectively. Based on these initial search results as well as some knowledge, a reranking module is applied to aggregate the search results and reorder the initial document list to improve search performance. Learning-based search, a popular approach that includes classification and learning to rank, has attracted considerable research attention in recent years. Therefore we further discuss related work on learning-based visual search.


FIGURE 4.1 A generic visual search system.

4.2 Text-Based Search Ranking

Due to the great success of text search, most popular image and video search engines, such as Google [141], Yahoo! [377], and Bing [32], build on text search techniques by using text information associated with media content. The text search module aims to retrieve a number of top-ranked documents based on the similarity between query keywords and documents’ textual features. The textual features can be extracted from a number of information sources such as speech recognition transcripts, closed captions (CC), and video OCR. Although textual features are unavailable for some visual documents (such as surveillance videos), where present they are usually the most reliable source for handling semantic queries in visual search systems. In this section, we describe several key components of the text-based search module, including text search models, query preprocessing approaches, and text sources.

4.2.1 Text Search Models

In a visual search system, the text search module is often used to generate the initial ranked results, which are then used for multimodal fusion or reranking. Thus nearly all existing text-based information retrieval (IR) models can be applied. The classic IR models can be classified into three kinds: set-theoretic, algebraic, and probabilistic models. In set-theoretic models, the documents are represented as sets of words or phrases, and similarities are usually derived from set-theoretic operations on these sets. The standard Boolean model [208], extended Boolean model [137], and fuzzy model [125,195,394] are popular set-theoretic models. Algebraic models represent documents and queries as vectors, matrices, or tuples, and the similarity of the query vector and document vector is represented as a scalar value. The vector space model [301], generalized vector space model [351,371], latent semantic indexing [93,109], and neural network models [287] are common algebraic models. Probabilistic models treat the process of document retrieval as probabilistic inference; similarities are computed as probabilities that a document is relevant to a given query. The binary independence model [289,295], language models [245,250], and the divergence-from-randomness model [9] are some of the main probabilistic models. Probabilistic theorems such as Bayes’ theorem are often used in these models [29]. A more complete survey of text retrieval models can be found in [205,307].
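
As a concrete illustration of the algebraic family above, the following sketch ranks a few toy captions against a query using a TF-IDF vector space model and cosine similarity. It uses scikit-learn, and the captions, query, and library choice are illustrative assumptions rather than part of any system described in this chapter.

```python
# A minimal vector-space-model sketch: documents and the query become TF-IDF
# vectors and are ranked by cosine similarity. Captions are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

captions = [
    "a boat moves past the harbor",          # hypothetical ASR/caption text
    "people reading a newspaper indoors",
    "a small boat on the water at sunset",
]
query = "boat moves past"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(captions)      # |D| x |V| sparse matrix
query_vector = vectorizer.transform([query])          # 1 x |V|

scores = cosine_similarity(query_vector, doc_vectors)[0]
ranking = sorted(enumerate(scores), key=lambda x: -x[1])
print(ranking)  # document 0 first, then 2, then 1
```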

4.2.2 Textual Query Preprocessing

Using textual queries in practical visual search engines is generally the most popular way to express users’ intent; however, the user-supplied queries are often complex and subjective, so it is unreliable to use them directly. The following steps are the typical preprocessing in generic visual search systems.

4.2.2.1 Query Expansion

Query expansion refers to adding related words to a query in order to increase the number of returned documents and thus improve recall. Typically, all the keywords in each query are first extracted, and then for each keyword the synonyms and acronyms are automatically selected [360]. In addition, content-based [162,225] and conceptual suggestions [163] have proven effective in visual content search during the past decade.
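
As a small hedged sketch of synonym-based expansion, the snippet below adds WordNet synonyms to the query keywords via NLTK; the use of WordNet, the sense and synonym limits, and the example keywords are assumptions for illustration, not the method of the cited works.

```python
# Synonym-based query expansion with WordNet (requires nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def expand_query(keywords, max_senses=2, max_synonyms=3):
    expanded = set(keywords)
    for word in keywords:
        for synset in wn.synsets(word)[:max_senses]:           # a few senses only
            for lemma in synset.lemma_names()[:max_synonyms]:  # a few synonyms per sense
                expanded.add(lemma.replace("_", " "))
    return expanded

print(expand_query(["boat", "newspaper"]))
# adds terms such as "paper" (the exact set depends on WordNet senses)
```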

4.2.2.2 Stemming Algorithm

A stemming algorithm, a procedure that reduces all words with the same stem to a common form, is useful in many areas of computational linguistics and information retrieval; for example, “stemmer,” “stemming,” and “stemmed” can all be reduced to “stem.” The first published stemmer was written by J. B. Lovins in 1968 [236]; it was one of the first of its kind and had great influence on later work in this area. A later stemmer, written by Martin Porter [282], is very widely used and became the standard algorithm for English stemming. More details of Porter’s stemmer and other stemming algorithms can be found in [310].

4.2.2.3 Stopword Removal

All stopwords—for example, common words such as “a” and “the”—are removed from multiple-word queries to increase search performance. Stopword removal relies on a stopword list for the language, which can be built by sorting the vocabulary of a text corpus by frequency and picking off the most frequent terms to be discarded. Given a query like “find shots of one or more people reading a newspaper” (a typical query in TRECVID search tasks [350]), the key terms (“people,” “read,” and “newspaper” in this example) are retained after stemming (such as converting “reading” to “read”) and removing stopwords (such as “a” and “of”).
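
A minimal sketch of the two preprocessing steps from Sections 4.2.2.2 and 4.2.2.3, applied to the TRECVID example query. It assumes NLTK with its English stopword list downloaded; the extra task-specific stopwords are an assumption.

```python
# Porter stemming plus stopword removal for the example query.
from nltk.corpus import stopwords            # requires nltk.download("stopwords")
from nltk.stem import PorterStemmer

query = "find shots of one or more people reading a newspaper"
stop = set(stopwords.words("english")) | {"find", "shots", "one"}  # task-specific additions (assumption)
stemmer = PorterStemmer()

terms = [stemmer.stem(w) for w in query.lower().split() if w not in stop]
print(terms)  # e.g., ['peopl', 'read', 'newspap'], depending on the stopword list
```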

4.2.2.4 N-Gram Query Segmentation

The query strings are segmented into term sequences based on the N-gram method [46] before being input to the search engine. For our “find shots of one or more people reading a newspaper” example, with three levels of N-gram (i.e., N from 1 to 3), seven query segments in total will be generated: (1) uni-gram: people, read, and newspaper; (2) bi-gram: people read, people newspaper, and read newspaper; and (3) tri-gram: people read newspaper. These segments are submitted to the search engine as different forms of the query, and the relevance scores of visual documents retrieved by the different query segments are aggregated with different weights, which can be set empirically [225]; the higher the gram order of a segment, the higher the weight assigned to it.
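
The sketch below generates the uni-, bi-, and tri-gram segments for the example key terms and attaches the empirically set, gram-dependent weights described above; the specific weight values are an assumption.

```python
# N-gram query segmentation with higher weights for higher-order segments.
from itertools import combinations

def ngram_segments(terms, max_n=3):
    segments = {}
    for n in range(1, max_n + 1):
        weight = float(n)                       # assumed weights: 1.0, 2.0, 3.0
        for combo in combinations(terms, n):    # term combinations, as in the example above
            segments[" ".join(combo)] = weight
    return segments

print(ngram_segments(["people", "read", "newspaper"]))
# 7 segments: 3 uni-grams (weight 1.0), 3 bi-grams (2.0), 1 tri-gram (3.0)
```

The per-segment relevance scores returned by the search engine would then be aggregated using these weights.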

4.2.2.5 Part-of-Speech Tagging

In corpus linguistics, part-of-speech tagging (POS tagging, or POST), also called grammatical tagging or word-category disambiguation, is the process of marking a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context, i.e., its relationship with adjacent and related words in a phrase, sentence, or paragraph. Some current major algorithms for part-of-speech tagging include the Viterbi algorithm [191], Brill Tagger [43], Constraint Grammar [200], and the Baum-Welch algorithm [366] (also known as the forward-backward algorithm). Many machine learning methods have also been applied to the problem of POS tagging. Methods such as the support vector machine (SVM) [75], maximum entropy classifier [28], perceptron [127], and nearest neighbor [42] have all been tried, and most can achieve accuracy above 95%. A simplified method is to identify query terms as nouns, verbs, adjectives, adverbs, and so on. For a long and complex query [225,255], we can label the query topic with POS tags and extract the terms with noun or noun-phrase tags as the “targeted objects,” since nouns and noun phrases often describe the central objects that the query is seeking. For example, given the query “find shots of one or more people reading a newspaper,” “people” and “newspaper” will be tagged as nouns and extracted as the targeted objects in the query.
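
A hedged sketch of the simplified noun-extraction strategy: tag the query with an off-the-shelf POS tagger and keep the noun terms as targeted objects. It uses NLTK (with the punkt tokenizer and averaged_perceptron_tagger models downloaded), which is an assumed tool choice rather than the one used in the cited works.

```python
# Extract "targeted objects" (nouns) from a query via POS tagging.
import nltk

query = "find shots of one or more people reading a newspaper"
tokens = nltk.word_tokenize(query)
tagged = nltk.pos_tag(tokens)        # e.g., ('people', 'NNS'), ('newspaper', 'NN')

targets = [word for word, tag in tagged if tag.startswith("NN")]
print(targets)  # expected to include 'shots', 'people', 'newspaper'
```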

4.2.3 Text Sources

The text data in a visual document collection are not always generated from a single source. Instead, many visual document corpora, such as broadcast news, are associated with multiple text sources that can be extracted via manual annotation as well as well-established automatic techniques such as audio signal processing or visual appearance analysis [378]. Given the many retrieval sources available, it is interesting to study the distinctive properties of each type of text information and which text sources contribute most to retrieval. Generally speaking, the text sources processed in multimedia corpora span the following dimensions:

• Automatic speech recognition (ASR) transcripts, which are converted from raw audio signals by speech recognizers

• Closed captioning (CC), which contains accurate spoken text written by a person, but usually no time markers for individual words

• Video optical character recognition (VOCR) extracted from the text visible on the screen

• Production metadata such as titles, tags, and published descriptions of the visual documents (e.g., surrounding text on Web pages)

4.3 Query Example-Based Search Ranking

Query example-based search, often called query by example (QBE), aims to use query example image/video data (e.g., video clips and keyframes), given by the user as input, to find visual documents in the database that are most visually, semantically, or perceptually similar to the query examples. The setting of using query examples is very similar to traditional content-based visual retrieval (CBVR), where a user is required to provide a visual example [87]. Typical QBE systems are often built on a vector space model in which both documents and queries are represented by a set of visual features, and the similarity between two documents is measured through a distance metric between their feature vectors. In the remainder of this section, we discuss these two intrinsic elements of QBE systems, i.e., visual features and distance metrics.

4.3.1 Low-Level Visual Features

As stated earlier, QBE systems rely on multiple visual features to describe the content of visual documents. These features can be categorized into three types according to the pixels used: global, region, and local features. Global features are extracted over the entire image or subimage based on grid partition; regional features are computed based on the results of image segmentation, which attempts to segment the image into different homogenous regions or objects; and local features focus on robust descriptors invariant to scale and orientation based on local maxima.

4.3.1.1 Global Features

In published research on the subject, there are three main types of (low-level) visual features that have been applied: color-based features, texture-based features, and shape-based features [378].

Color has been an active area of research in image retrieval, more than in any other branch of computer vision [138]. Color-based features have also proven to be the most effective features in TRECVID evaluation video search tasks [255]. This is because color features maintain strong cues that capture human perception in a low dimensional space, and they can be generated with less computational effort than other advanced features. Most of them are independent of variations of view and resolution, and thus they possess the power to locate target images robustly [138,378].

The simplest representation of color-based features is the color histogram, where each component in the color histogram is the percentage of pixels that are most similar to the represented color in the underlying color space [118,321]. Another type of color-based image feature is the color moment, which computes only the first two or three central moments of the color distributions in the image [323,330], with other information discarded; the aim is to create a compact and effective representation for image retrieval. Beyond these independent color representations, Pass et al. [278] developed color coherence histograms (CCHs) to address the matching problems of standard color histograms; they include spatial information along with color density information in a single representation. Other color-based features include color coherence, the color correlogram, and so on.
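
The following sketch computes two of the global color features just described, a joint RGB color histogram and the first three color moments; the bin count and the random stand-in image are assumptions for illustration.

```python
# Global color features: a normalized RGB histogram and per-channel color moments.
import numpy as np

def color_histogram(image, bins=8):
    """image: H x W x 3 uint8 array; returns a normalized joint RGB histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / hist.sum()

def color_moments(image):
    """Mean, standard deviation, and (cube-rooted) skewness per channel: 9 values."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    skew = np.cbrt(((pixels - mean) ** 3).mean(axis=0))
    return np.concatenate([mean, std, skew])

image = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in image
print(color_histogram(image).shape, color_moments(image).shape)   # (512,) (9,)
```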

Texture-based features aim to capture the visual characteristics of homogeneous regions that do not come from the presence of a single color or intensity [322]. These regions may have unique visual patterns or spatial arrangements of pixels, and the gray levels or color features in a region may not sufficiently describe them. The basic texture features include Tamura, multi-resolution simultaneous auto-regressive model (MRSAR), Gabor filter, and wavelet transform [124,320].

To capture the information of object shapes, a huge variety of shape-based features have been proposed and evaluated [68,215,254,395]. Shape features can be either generated from boundary-based approaches that use only the outer contour of the shape segments or generated from region-based approaches that consider the entire shape regions [298]. Typical shape features include normalized inertia [62], moment invariants, Fourier descriptors [124], and so on.

Using global features, an image can be represented by a single vector corresponding to a single modality, or by a set of vectors and weights corresponding to multiple feature modalities and their importance.

4.3.1.2 Region Features

Region features are similar to global features except that they are computed over a local region with homogeneous texture rather than over the whole image. Therefore, an image is often segmented into several regions and represented as a set of visual feature vectors, each of which represents one homogeneous region. The underlying motivation of region-based image representation is that many objects, such as “cat,” “tiger,” and “plane,” usually occupy only a small portion of an image. If a satisfying segmentation could be achieved, i.e., each object could be segmented as a homogeneous and distinctive region, region-based representation would be very useful. The most widely used features for describing a region include color moments [62], the color correlogram [121], wavelet transform texture, and normalized inertia [62].

4.3.1.3 Local Features

Local invariants, such as salient points from which descriptors are derived, were traditionally used for stereo matching and object recognition and are now also being used for image similarity. For example, the algorithm proposed by Lowe [237] constructs a scale-space pyramid using difference-of-Gaussian (DoG) filters and finds the local 3D maxima (i.e., salient points) on the pyramid.

A robust scale-invariant feature transform (SIFT) descriptor is computed for each point. An image is thus represented by a set of salient points and their 128-dimensional SIFT features, or by a histogram of code words built on a large visual vocabulary [318].
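
A hedged sketch of this local-feature pipeline: detect SIFT keypoints and descriptors with OpenCV and quantize them into a bag-of-visual-words histogram with k-means. It assumes an OpenCV build that ships SIFT (cv2.SIFT_create in recent releases); the tiny vocabulary size and the random stand-in images are assumptions, and a real codebook would be trained on descriptors from many images.

```python
# SIFT descriptors quantized into a bag-of-visual-words histogram.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(gray_image):
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray_image, None)       # N x 128 array, or None
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bow_histogram(desc, codebook):
    words = codebook.predict(desc.astype(np.float64))        # nearest code word per descriptor
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

images = [np.random.randint(0, 256, (200, 200), dtype=np.uint8) for _ in range(3)]
all_desc = np.vstack([sift_descriptors(im) for im in images])
codebook = KMeans(n_clusters=32, n_init=10).fit(all_desc)     # tiny toy vocabulary
print(bow_histogram(sift_descriptors(images[0]), codebook))
```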

4.3.2 Distance Metrics

The other intrinsic element of QBE is the distance metric between query examples and indexed visual documents. Therefore, a large number of distance metrics have been proposed and tested in past research. In the rest of this section, we discuss several common distance metrics under the assumption that only one query image is available.

Let $q_i$ denote the $i$-th feature of the query example $\mathbf{q}$ and $d_i$ denote the $i$-th feature of the indexed visual document $\mathbf{d}$ to be compared, where $i = 1, \ldots, N$. The most widely adopted similarity is computed based on the Minkowski-form distance, defined by

$D_p(\mathbf{q}, \mathbf{d}) = \left( \sum_{i=1}^{N} |q_i - d_i|^p \right)^{1/p}$  (4.1)

For $p = 2$, this yields the Euclidean distance. The histogram intersection is a special case of the L1 distance, defined as

$D_{\cap}(\mathbf{q}, \mathbf{d}) = 1 - \dfrac{\sum_{i=1}^{N} \min(q_i, d_i)}{\sum_{i=1}^{N} d_i}$  (4.2)

It has been shown that histogram intersection is fairly sensitive to changes of image resolution, occlusion, and viewing point [124]. A distance robust to noise is the Jeffrey divergence (JD), which is based on the Kullback-Leibler (KL) divergence given by

$D_{KL}(\mathbf{q} \,\|\, \mathbf{d}) = \sum_{i=1}^{N} q_i \log \dfrac{q_i}{d_i}$  (4.3)

Although it is often intuited as a distance metric, the KL divergence is not a true metric since it is not symmetric.

The JD is defined as

$D_{J}(\mathbf{q}, \mathbf{d}) = \sum_{i=1}^{N} \left( q_i \log \dfrac{q_i}{m_i} + d_i \log \dfrac{d_i}{m_i} \right)$  (4.4)

where $m_i = (q_i + d_i)/2$. In contrast to the KL divergence, the JD is symmetric and numerically stable when comparing two empirical distributions.

The Hausdorff distance [87] is another matching method; it is symmetrized by computing the directed distance with $\mathbf{q}$ and $\mathbf{d}$ reversed and choosing the larger of the two distances

$D_{H}(\mathbf{q}, \mathbf{d}) = \max\!\left( \max_{i} \min_{j} d(q_i, d_j),\; \max_{j} \min_{i} d(q_i, d_j) \right)$  (4.5)

where $d(\cdot, \cdot)$ can be any form of distance, such as the L1 or L2 distance. The Mahalanobis distance metric deals with the case in which the dimensions of the feature vector are dependent and of different importance; it is given by

$D_{M}(\mathbf{q}, \mathbf{d}) = \sqrt{(\mathbf{q} - \mathbf{d})^{T} \Sigma^{-1} (\mathbf{q} - \mathbf{d})}$  (4.6)

where $\Sigma$ is the covariance matrix of the feature vectors. The preceding distance measures are all defined in a linear feature space, which has been noted as a main limitation in measuring perceptual or semantic image distance.
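
The sketch below implements several of the metrics in Eqs. (4.1), (4.2), (4.4), and (4.6) over normalized feature histograms; the toy vectors and the diagonal covariance are assumptions for illustration.

```python
# Minkowski, histogram intersection, Jeffrey divergence, and Mahalanobis distances.
import numpy as np

def minkowski(q, d, p=2):                       # Eq. (4.1); p = 2 gives Euclidean
    return float((np.abs(q - d) ** p).sum() ** (1.0 / p))

def histogram_intersection(q, d):               # Eq. (4.2), as a dissimilarity
    return 1.0 - np.minimum(q, d).sum() / d.sum()

def jeffrey_divergence(q, d, eps=1e-12):        # Eq. (4.4), symmetric and stable
    m = (q + d) / 2.0
    return float((q * np.log((q + eps) / (m + eps))).sum()
                 + (d * np.log((d + eps) / (m + eps))).sum())

def mahalanobis(q, d, cov):                     # Eq. (4.6); cov estimated over the corpus
    diff = q - d
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

q = np.array([0.2, 0.5, 0.3])
d = np.array([0.3, 0.4, 0.3])
cov = np.eye(3) * 0.05                          # assumed covariance for the toy example
print(minkowski(q, d), histogram_intersection(q, d),
      jeffrey_divergence(q, d), mahalanobis(q, d, cov))
```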

Manifold ranking replaces the traditional Euclidean distance with the geodesic distance along a nonlinear manifold [408]. The similarity is often estimated based on a distance measure $d(\cdot, \cdot)$ and a positive radius parameter $\sigma$ along the manifold

$s(\mathbf{q}, \mathbf{d}) = \exp\!\left( -\dfrac{d(\mathbf{q}, \mathbf{d})}{\sigma} \right)$  (4.7)

where the L1 distance is usually selected for $d(\cdot, \cdot)$.
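
A hedged sketch of manifold ranking in the spirit of Eq. (4.7): an affinity graph is built from pairwise L1 distances, and relevance is propagated from the query node with the usual iteration f ← αSf + (1 − α)y; the parameter values and random stand-in features are assumptions.

```python
# Manifold ranking over an L1-based affinity graph.
import numpy as np

def manifold_rank(features, query_idx, sigma=0.5, alpha=0.9, iters=50):
    n = len(features)
    l1 = np.abs(features[:, None, :] - features[None, :, :]).sum(axis=2)  # pairwise L1
    W = np.exp(-l1 / sigma)                      # similarity as in Eq. (4.7)
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    S = d_inv_sqrt @ W @ d_inv_sqrt              # symmetrically normalized affinity
    y = np.zeros(n)
    y[query_idx] = 1.0                           # the query example seeds the propagation
    f = y.copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y
    return f                                     # higher score = more relevant

features = np.random.rand(20, 8)                 # stand-in feature vectors
scores = manifold_rank(features, query_idx=0)
print(np.argsort(-scores)[:5])                   # top-5 documents (query itself first)
```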

In a more general situation, an image is represented by a set of vectors and weights. Each vector (also referred to as a distribution) corresponds to a specific region or modality, and the weight indicates the significance of that vector relative to the others. The earth mover’s distance (EMD) provides a soft matching scheme for features in the form of a set of vectors [391]; the EMD “lifts” the distance from individual features to full distributions. The EMD is given by

$D_{EMD}(\mathbf{q}, \mathbf{d}) = \dfrac{\sum_{i}\sum_{j} f_{ij}\, g_{ij}}{\sum_{i}\sum_{j} f_{ij}}$  (4.8)

where $g_{ij}$ is the ground distance between the $i$-th vector of $\mathbf{q}$ and the $j$-th vector of $\mathbf{d}$, which can be defined in diverse ways depending on the system, and the flow $f_{ij}$ minimizes the value of Eq. (4.8) subject to the following constraints:

$f_{ij} \ge 0, \quad \sum_{j} f_{ij} \le w_{q_i}, \quad \sum_{i} f_{ij} \le w_{d_j}, \quad \sum_{i}\sum_{j} f_{ij} = \min\!\left(\sum_{i} w_{q_i}, \sum_{j} w_{d_j}\right)$  (4.9)

When the weights $w_{q_i}$ and $w_{d_j}$ are probabilities, the EMD is equivalent to the Mallows distance [87]. Another matching-based distance is the integrated region matching (IRM) distance [87]. The IRM distance uses the most similar highest priority (MSHP) principle to match different modalities or regions. The matching weights are subject to the same constraints as in the Mallows distance, except that they are not computed by minimization. Another way to adjust the weights in image similarity is relevance feedback, which captures the user’s precise needs through iterative feedback and query refinement. The goal of relevance feedback is to find the appropriate weights to model the user’s information need [392]. The weights are classified into intra-weights and inter-weights. The intra-weights represent the different contributions of the components within a single vector (i.e., region or modality), whereas the inter-weights represent the contributions of the different vectors. Intuitively, the intra-weights are decided based on the variance of the corresponding vector components over the relevant feedback examples, whereas the inter-weights are updated directly according to user feedback on the similarity based on each vector. For a comparison among these distance metrics, refer to [87] for more details.

In the context of an image being represented by a set of salient points and their corresponding local descriptors, image similarity is computed based on the Euclidean distance between each pair of salient points [237]. An alternative method is to represent each image as a bag of code words obtained through unsupervised learning of local appearance [120]. A large vocabulary of code words is built by clustering a large number of local features, and the distribution of these code words is then obtained by counting all the points or patches within an image. Since the image is described by a set of distributions, histogram intersection and the EMD, defined in Eq. (4.2) and Eq. (4.8), can be employed to compute the similarity.

4.4 Concept-Based Search Ranking

For visual search by QBE, the visual features are used to find visual documents in the database that are most similar to the query image. A drawback, however, is that these low-level visual features are often too restricted to describe visual documents on a conceptual or semantic level, which constitutes the so-called semantic gap problem.

To alleviate this problem, visual search with a set of high-level concept detectors has attracted increasing attention in recent years [201,222,230,234,265,325,363]. The basic idea of concept-based methods is to utilize the results from concept detection to aid search, thereby leveraging human annotation on a finite concept lexicon to help answer infinite search queries. As a fundamental point, the rich set of predefined concepts and their corresponding training and testing samples available in the community have made it possible to explore the semantic description of a query in a large concept space. For example, 101 concepts are defined in MediaMill [324], 374 in LSCOM-Light [380], 834 in LSCOM [262], 17,624 in ImageNet [96], and so on. Beyond the concept detectors themselves, the key factors in concept-based search are how to recognize related concepts and how to search with the recognized concepts.

4.4.1 Query-Concept Mapping

Intuitively, if queries can be automatically mapped to related concepts, search performance will benefit significantly. For example, the “face” concept can benefit people-related queries, and the “sky” concept can be highly weighted for outdoor-related queries. Motivated by these observations, the problem of recognizing related concepts, also called “query-concept mapping,” has been the focus of many researchers. For example, Kennedy et al. [201] mine the top-ranked and bottom-ranked search results to discover related concepts by measuring mutual information. The basic idea is that if a concept has high mutual information with the top-ranked results and low mutual information with the bottom-ranked results, it is considered a related concept. To avoid the ambiguity problem, Li et al. [222] and Liu et al. [230] leverage a few query examples to find related concepts; specifically, Li et al. [222] use a tf-idf-like scheme, and Liu et al. [230] explore mutual information measurement. Both methods are motivated by an information-theoretic point of view: the more information about a concept the query examples carry, the more related the concept is to the corresponding query.

However, these methods leverage only the visual information extracted from either the top-ranked results or the query examples. The other types of information, such as text, are entirely neglected. To solve this problem, Mei et al. use WordNet to compute the lexical similarity between the textual query and the descriptions for each concept detector [255]. Wang et al. linearly combine the text and visual information extracted from the text query and visual examples, respectively [363].

Nevertheless, most practical text queries are very short, often consisting of only one or two words or phrases, from which it is difficult to obtain robust concept-relatedness information. In addition, the problem of ambiguity cannot be avoided either; for example, the query “jaguar” may be related to both “animal” and “car,” yet the two concepts have little relation to each other.

To address this problem, Liu et al. first fed the text query to a commercial visual Web search engine and collected the visual documents along with their associated text; to avoid the ambiguity problem, query examples were utilized to filter the Web results so that cleaner “Web examples” could be obtained. By combining the filtered visual Web examples and the associated text, the following two methods are explored to detect the related concepts [232]:

• Using pretrained concept detectors over Web examples.

The detectors are trained by SVM over three visual features: color moments on a 5-by-5 grid, an edge distribution histogram, and wavelet textures. The confidence scores of the three SVM models over each visual document are then averaged to generate the final concept detection confidence. The details of the features and concept detection can be found in [255], in which a set of concept detectors are built mainly based on the low-level visual features and SVM for “high-level feature detection task.”

• Mining the surrounding text of Web examples.

Standard stemming and stopword removal [255] are first performed as preprocessing; then the terms with the highest frequency are selected to form a keyword set, which is matched against the concepts in the lexicon. Here, Google Distance (GD) [72] is adopted to measure the relatedness of two textual words:

$GD(x, y) = \dfrac{\max\{\log f(x), \log f(y)\} - \log f(x, y)}{\log N - \min\{\log f(x), \log f(y)\}}$  (4.10)

where $f(x)$ and $f(y)$ are the numbers of images containing words $x$ and $y$, respectively, and $f(x, y)$ is the number of images containing both $x$ and $y$. These numbers can be obtained by performing a search of the textual words on the Google image search engine [141]. $N$ is the total number of images indexed by the Google search engine. The logarithm is used to avoid extremely large values. We can see that GD is a measure of semantic interrelatedness derived from the number of hits returned by the Google search engine for a given set of keywords. Keywords with the same or similar meanings in a natural language sense tend to be “close” in units of GD, whereas words with dissimilar meanings tend to be separated far from each other. If two search terms never occur together on the same Web page but do occur separately, the GD between them is infinite. If both terms always occur together, their GD is zero.
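
A small numeric sketch of Eq. (4.10); the hit counts below are hypothetical rather than real Google statistics.

```python
# Google Distance from (hypothetical) image-search hit counts.
import math

def google_distance(fx, fy, fxy, total):
    """fx, fy: hits for each word; fxy: hits for both; total: index size."""
    if fxy == 0:
        return float("inf")                      # never co-occur: infinite distance
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(total) - min(lx, ly))

# Hypothetical counts for the words "jaguar" and "car".
print(google_distance(fx=4_200_000, fy=310_000_000, fxy=1_900_000, total=10_000_000_000))
```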

By combining these two methods, the relatedness of a concept $c$ to a given query, $r(c)$, is given by:

$r(c) = \lambda \cdot \dfrac{1}{|E|} \sum_{e \in E} p(c \mid e) + (1 - \lambda) \cdot r_{\text{text}}(c)$  (4.11)

where $p(c \mid e)$ is the confidence score of concept $c$ on Web example $e$ obtained from the pretrained concept detectors, $E$ is the set of Web examples, $r_{\text{text}}(c)$ is the GD-based relatedness between $c$ and the mined keyword set, and $\lambda$ is a parameter that tunes the contribution of the concept detectors relative to the surrounding text. Empirically, a relatively lower $\lambda$ is more suitable for concept detectors with limited performance.

4.4.2 Search with Related Concepts

Researchers have proven that when the number of semantic concepts is relatively large, even if the accuracy of the concept detectors is low, semantic concepts can still significantly improve the accuracy of the search results [232,325].

The straightforward way is to represent the query (with the query examples) as well as the visual documents as multiple related concepts and perform the search with text-based technologies. For example, Li et al. have shown that when provided with a visual query example, searching through the concept space is a good supplement to searching in the text and low-level feature spaces [222,235]. They first built a concept space (with 311 concepts) over the whole dataset, where each document was associated with multiple relevant concepts (called visual terms). Given a query, they employed concept detectors over the query example to obtain the presence of concepts, and then they adopted c-tf-idf, a tf-idf-like scheme, to measure the usefulness of the concepts to the query. The c-tf-idf is used in a traditional text-based search pipeline, e.g., a vector model or a language model, to measure the relevance between the given query and a document. These concept-based search results are finally combined with those from other modalities (e.g., text and visual) in a linear way. Liu et al. propose a multi-graph-based, query-independent learning approach for video search using a set of attributional features and relational features based on the LSCOM-Lite lexicon (composed of 39 concepts) [234,235]. The attributional features are generated using detection scores from concept detectors, whereas the relational features indicate the relationship between query and video shots by viewing video shots as visual documents and the concepts as visual terms, such as “visual TFIDF,” “visual BM25,” and “visual query term distribution.” Using these concept-based features, they propose a query-independent learning framework for video search, under which various machine learning technologies can be explored for visual search.

Mei et al. first obtained confidence scores from the concept detectors and treated them as the weights for the corresponding concepts (i.e., hidden text), then further used them in a text-like search (e.g., an inverted index based on term and document frequency) or as a feature vector in a concept space for searching via QBE [255]. Ngo et al. present a concept-driven multimodality fusion approach in their automatic video search system [266]. They first generated a set of concepts for a given query. To obtain the optimal weight for combining the search results based on each concept, they conducted a simulated search evaluation, in which each concept is treated as a simulated query associated with related concepts and 10 randomly chosen positive visual samples, and the unimodal search performance for the concept and its related visual samples is evaluated against a manually labeled training dataset. With the simulated search evaluation, given a testing query, they estimated the concept-based fusion weights by jointly considering the query-concept relatedness and the simulated search performance of all concepts.

4.5 Visual Search Reranking

Due to the great success of text search, most popular image and video search engines, such as Google [141], Yahoo! [377], and Bing [32], build on text search techniques by using text associated with visual content, such as the title, description, user-provided tags, and surrounding text of the visual content. However, this kind of visual search approach has proven unsatisfying, since it often entirely ignores the visual content as a ranking signal [57,87,164,296].

To address this issue, search reranking has received increasing attention in recent years [164,165,201,230,264,284,304,343,346,379]. It is defined as reordering visual documents based on multimodal cues to improve search performance. The documents might be images or video shots. The research on visual search reranking has proceeded along four paradigms from the perspective of the knowledge exploited: (1) self-reranking, which mainly focuses on detecting relevant patterns (recurrent or dominant patterns) from the initial search results without any external knowledge; (2) example-based reranking, in which the query examples are provided by users so that the relevant patterns can be discovered from these examples; (3) crowd reranking, which mines relevant patterns from the crowdsourcing knowledge available on the Web, e.g., the multiple image/video search engines or sites or user-contributed online encyclopedias like Wikipedia [368]; and (4) interactive reranking, which involves user interaction to guide the reranking process. Figure 4.2 illustrates the four paradigms for visual search reranking.


FIGURE 4.2 The four paradigms for visual search reranking.

4.5.1 First Paradigm: Self-Reranking

The topmost flowchart of Figure 4.2 shows the first paradigm, self-reranking. In this paradigm, the reranking objective is to discover recurrent patterns from the initial ranked list that can provide relevant clues for reranking. Although quite “noisy” due to the unsatisfying text-only search performance, the initial search results, especially the top-ranked documents, can be regarded as the main resource for mining relevant clues, since analysis of clickthrough data from a very large search engine log shows that users are usually interested in the top-ranked portion of the search results [337].

Generally, the key problem in self-reranking is how to mine relevant patterns from the noisy initial ranked list and how to use these documents for reranking. In many cases, we can assume that the top-ranked documents are relevant (called pseudo-relevant) and can be viewed as “positive.” These pseudo-relevant samples can be further used in any learning method to classify the remaining documents as relevant or irrelevant, used as query examples to compute distances to the remaining documents, or fed back to the system for query term reweighting or reformulation. For example, Hsu et al. formulate the reranking process as a random walk over a context graph, where video stories are nodes and the edges between them are weighted by multimodal similarities [164]. Zisserman et al. first performed visual clustering on the initially returned images using probabilistic latent semantic analysis (pLSA) to learn the visual object category and then reranked the images according to their distance to the learned categories [284].
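
A minimal sketch in this pseudo-relevance-feedback spirit: the top-k documents of the initial text ranking are treated as pseudo-positives, a visual prototype is built from them, and every document is rescored by fusing its initial score with its visual similarity to the prototype. The top-k size, the centroid prototype, and the fusion weight are all assumptions, not any particular published method.

```python
# Self-reranking with pseudo-positives from the top of the initial list.
import numpy as np

def self_rerank(text_scores, visual_feats, top_k=10, beta=0.5):
    order = np.argsort(-text_scores)
    prototype = visual_feats[order[:top_k]].mean(axis=0)            # pseudo-positive centroid
    visual_sim = -np.linalg.norm(visual_feats - prototype, axis=1)  # higher = more similar

    def norm(x):                                                    # rescale to [0, 1]
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    fused = beta * norm(text_scores) + (1 - beta) * norm(visual_sim)
    return np.argsort(-fused)                                       # reranked document indices

text_scores = np.random.rand(100)          # stand-in initial text-search scores
visual_feats = np.random.rand(100, 64)     # stand-in visual features
print(self_rerank(text_scores, visual_feats)[:10])
```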

Most existing self-reranking methods mainly exploit the visual cues from initial search results. In addition to the commonly employed visual information, other modalities associated with the documents, such as text (e.g., caption, keywords, and surrounding text), audio (e.g., speech and music in video shots), and linkage in the Web pages are also worth taking into account for judging the relevance. To address this issue, co-reranking and circular reranking have recently been proposed to leverage multimodal cues via mutual reinforcement [384,385]. Different from existing techniques, the reranking procedure encourages interaction among modalities to seek consensus that is useful for reranking.

Although self-reranking relies little on external knowledge, it cannot deal with the ambiguity problem that is derived from the text queries. Taking the query “jaguar” as an example, the search system cannot determine what the user is really searching for, whether it is an animal or a car. As illustrated in Figure 4.3, results with different meanings but all related to “jaguar” can be found in the top-ranked results of “jaguar.” To address this problem, other paradigms leverage some auxiliary knowledge, aiming to better understand the query.


FIGURE 4.3 Examples of top 30 ranked results from a commercial Web image search engine.

4.5.2 Second Paradigm: Example-Based Reranking

The second paradigm, example-based reranking, leverages a few query examples to mine the relevant information. The search performance can be improved due to relevant information derived from these query examples.

There are many ways to use these query examples. The most common are QBE-like approaches, which aim to find documents that are visually similar to the query examples and rank them higher [348]. Another popular way is to use the query examples to mine relevant semantics, often represented by a set of high-level concepts. The motivation is similar to that of concept-based search: if the documents have semantics correlated with the query examples, they are ranked higher than the others.

For example, a search query like “Find shots of bridge” could be handled by searching against the transcript to find occurrences of “bridge,” but also by giving positive weight to shots that are positive for the concepts “bridge,” “water,” and “river” (since “bridge,” “water,” and “river” are highly correlated concepts) and negative weight to shots that are positive for “indoor.” The key problems here are how to select relevant concepts from a predefined lexicon with hundreds of concepts and how to leverage these concept detectors for reranking. The first problem is often called query-concept mapping, which is explained in Section 4.4.1. For the second problem, Kennedy et al. leveraged a large pool of 374 concept detectors for unsupervised search reranking [201]. Each document in the database was represented by a vector of concept confidence scores produced by running the 374 concept detectors. They formed a feature vector for each document consisting of the related concept confidence scores and trained an SVM model based on pseudo-positive and pseudo-negative samples from the initial search results. The SVM outputs were finally averaged with the initial ranking scores to rerank the documents. Liu et al. [232] formulated reranking as an optimization problem in which a ranked list is globally optimal only if any two documents in the list are correctly ranked in terms of relevance. To find the optimal ranked list, they convert the individual documents into document pairs, each represented as an ordinal relation. They then detect the optimal document pairs that maximally preserve the initial rank order while keeping consistency with the auxiliary knowledge represented by the mined concept relatedness.
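
A hedged sketch of this kind of concept-based reranking: documents are represented by concept-detector confidence scores, pseudo-positives and pseudo-negatives are taken from the top and bottom of the initial list, an SVM is trained on them, and its output is averaged with the initial score. The numbers of pseudo samples, the RBF kernel, and the equal-weight fusion are assumptions, not the exact settings of the cited works.

```python
# Example-based reranking over concept confidence scores with a pseudo-labeled SVM.
import numpy as np
from sklearn.svm import SVC

def concept_svm_rerank(initial_scores, concept_scores, n_pos=20, n_neg=20):
    order = np.argsort(-initial_scores)
    pos, neg = order[:n_pos], order[-n_neg:]                 # pseudo-positives / negatives
    X = np.vstack([concept_scores[pos], concept_scores[neg]])
    y = np.array([1] * n_pos + [0] * n_neg)
    svm = SVC(kernel="rbf", probability=True).fit(X, y)
    relevance = svm.predict_proba(concept_scores)[:, 1]      # P(relevant | concept scores)

    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    fused = 0.5 * norm(initial_scores) + 0.5 * relevance     # simple average fusion
    return np.argsort(-fused)

initial_scores = np.random.rand(200)                         # stand-in initial ranking scores
concept_scores = np.random.rand(200, 374)                    # stand-in detector outputs
print(concept_svm_rerank(initial_scores, concept_scores)[:10])
```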

4.5.3 Third Paradigm: Crowd Reranking

Considering that the approaches in the second paradigm often suffer from a lack of query examples, the third paradigm, crowd reranking, aims to leverage crowdsourcing knowledge collected from multiple search engines [228]. This idea was inspired by two observations: (1) Different search engines return different search results, since they may have different data sources and metadata for indexing as well as different search and filtering methods for ranking. Using search results from different engines can inform and complement the relevant visual information in a single engine, so reranking performance can be significantly improved due to the richer knowledge involved. (2) Although a single search engine may not always have enough cues for reranking, it is reasonable to assume that across the search results from multiple engines there exist common visual patterns relevant to a given query. The basis of crowd reranking is thus to find the representative visual patterns, as well as their relations, in multiple search results.

First, a textual query is fed into multiple search engines to obtain lists of initial search results. Then representative visual words are constructed based on the local image patches from these search results. Two kinds of visual patterns are explicitly detected from the visual words through a graph propagation process: salient and concurrent patterns. The former indicates the importance of each visual word; the latter expresses the interdependence among the visual words. Intuitively, a visual word with a strong salient pattern for a given query indicates that other concurring words (i.e., those with strong concurrent patterns) should be prioritized. Reranking is then formalized as an optimization problem on the basis of the mined visual patterns and the initial ranked list. The objective is to maximize the consistency $\mathrm{Cons}(\mathcal{K}, \mathbf{r})$ between the learned knowledge (i.e., the visual patterns) $\mathcal{K}$ and the reranked list $\mathbf{r}$ while minimizing the disagreement $\mathrm{Dist}(\bar{\mathbf{r}}, \mathbf{r})$ between the initial ranked list $\bar{\mathbf{r}}$ and the reranked list $\mathbf{r}$, as follows:

$\mathbf{r}^{*} = \arg\min_{\mathbf{r}} \; \mathrm{Dist}(\bar{\mathbf{r}}, \mathbf{r}) - \lambda \cdot \mathrm{Cons}(\mathcal{K}, \mathbf{r})$  (4.12)

where $\lambda$ tunes the contribution of the learned knowledge $\mathcal{K}$ to the reranked list. The distance function can be formalized as either a pointwise or a pairwise distance, whereas the consistency is defined as the cosine similarity between a document and the mined visual patterns.
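
As one simple instantiation of the objective in Eq. (4.12), assume a pointwise squared distance to the initial score vector and a linear consistency term with pattern-based scores; the minimizer then has the closed form r = r̄ + (λ/2)k. Both these modeling choices and the toy numbers are assumptions made purely for illustration.

```python
# A closed-form pointwise instantiation of the reranking objective in Eq. (4.12).
import numpy as np

def crowd_rerank(initial_scores, pattern_scores, lam=1.0):
    r0 = np.asarray(initial_scores, dtype=float)
    k = np.asarray(pattern_scores, dtype=float)
    r = r0 + 0.5 * lam * k          # argmin_r ||r - r0||^2 - lam * (k . r)
    return np.argsort(-r)           # reranked order

initial_scores = [0.9, 0.8, 0.7, 0.6, 0.5]        # initial ranked-list scores
pattern_scores = [0.1, 0.0, 0.9, 0.8, 0.2]        # consistency with mined visual patterns
print(crowd_rerank(initial_scores, pattern_scores, lam=1.0))  # [2 3 0 1 4]
```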

4.5.4 Fourth Paradigm: Interactive Reranking

Although automatic reranking methods improve over the initial search results, visual search systems with a human user in the loop have consistently outperformed fully automatic search systems. This has been validated every year through the search performance evaluation in the TRECVID video search evaluation forum [350], since human beings can provide more precise information to guide the ranking procedure.

In the interactive reranking procedure, a user is required either to issue additional queries or to annotate part of the initial results as relevant or not. This setting is very similar to traditional relevance feedback, in which users are asked at each iteration to annotate a subset of the initial search results as relevant or not [4,347,392,412]. The works in [27,101,166,192,347] leverage relevance feedback to identify the relevant clusters and improve browsing efficiency. They first employ clustering techniques such as Bregman bubble clustering (BBC) [101,166] and latent Dirichlet allocation (LDA) [27] to cluster the top image search results, and then they ask users to label the relevance of those clusters. The images within the clusters are then ranked according to their similarities to the cluster centers. Cui et al. developed a real-time search reranking system [80,81] that enables users to first indicate a query image from the initial search results and then classify the query image into one of several predefined categories. The feature weights for measuring visual similarity within each category are learned by minimizing a rank loss for all query images in a training set through RankBoost.

The MediaMill system introduces the concept of video threads to visualize the reranked video search results [324]. A thread is a linked sequence of shots in specific order types, including query result thread, visual thread, semantic thread, top-rank thread, textual thread, and time thread. The system further supports two models for displaying threads: CrossBrowser and RotorBrowser. The former is limited to show only two fixed threads (the query result and time threads), whereas the latter shows all possible relevant threads for each retrieved shot to users. Users can browse along any thread that catches their interest.

Different from conventional sketch- or shape-based systems, which retrieve images with similar sketch or shape [53,113,383], the search-by-color-map system enables users to indicate how their preferred colors are spatially distributed in the desired image by scribbling a few color strokes or dragging an image and highlighting a few regions of interest in an intuitive way [364]. The concept-based interactive visual search system leverages human interaction to reformulate a new query, leading to a better search performance based on the spatial layout of concepts [376,397]. Given a textual query, CuZero, developed by Columbia University, is able to automatically discover relevant visual concepts in real time and allows users to navigate seamlessly in the concept space at will [397]. The search results are then reranked according to the arbitrary permutations of multiple concepts given by users.

4.6 Learning and Search Ranking

4.6.1 Ranking by Classification

Most current approaches to visual search ranking and reranking take classification performance as the optimization objective: ranking and reranking are formulated as a binary classification problem that determines whether or not a visual document is relevant, and the relevant documents are then moved up in the ranking order.

According to whether unlabeled data are utilized in the training stage, these methods can be classified as inductive or transductive. The goal of an inductive method is to create a classifier that separates the relevant and irrelevant images and generalizes well to unseen examples. For instance, Tong et al. [328] first computed a large number of highly selective features and then used boosting to learn a classification function in this feature space. Similarly, the relevance feedback method proposed in [403] trains an SVM model from labeled examples, aiming for a small generalization error by maximizing the margin between the two classes of images. To speed up convergence to the target concept, active learning methods are also utilized to select the most informative images to be presented to and marked by the user. For example, the support vector machine active learning algorithm (SVMActive) proposed by Tong et al. [328] selects the points near the SVM boundary so as to maximally shrink the size of the version space. Another active learning scheme, the maximizing expected generalization algorithm (MEGA) [216], judiciously selects samples in each round and uses positive examples to learn the target concept while using negative examples to bound the uncertain region. One major problem with inductive methods is the insufficiency of labeled examples, which can greatly degrade the performance of the trained classifier.

On the other hand, transductive methods aim to accurately predict the relevance of unlabeled images that are available during the training stage. For example, the discriminant-EM (DEM) algorithm proposed by Wu et al. [373] makes use of unlabeled data to construct a generative model, which is then used to measure relevance between the query and database images. However, as pointed out in [373], if the components of the data distribution are mixed up, which is often the case in CBIR, the performance of DEM is compromised. Despite the immaturity of transductive methods, we see great potential in them, since they provide a way to solve the small-sample-size problem by utilizing unlabeled data to make up for the insufficiency of labeled data. He et al. [151] propose a transductive learning framework called manifold-ranking-based image retrieval (MRBIR), inspired by a manifold-ranking algorithm [408,409]. In MRBIR, relevance between the query and database images is evaluated by exploring the relationships of all the data points in the feature space, which addresses the limitation of similarity metrics based on pairwise distance. However, manifold ranking has its own drawbacks in handling large-scale datasets: it is computationally expensive in both the graph construction and the ranking computation stages, which significantly limits its applicability to very large datasets. Xu et al. [375] extend the original manifold-ranking algorithm and propose a new framework named efficient manifold ranking (EMR). Specifically, an anchor graph is built on the dataset instead of the traditional k-nearest-neighbor graph, and a new form of adjacency matrix is utilized to speed up the ranking computation.

A similar idea to transductive learning has been implemented for visual search reranking [164,169,181,182,225].

4.6.2 Classification vs. Ranking

Although some ranking-by-classification systems have achieved high search performance, it is known that optimal classification performance cannot guarantee optimal search performance [393]. In this section, we examine the relationship between classification and ranking models.

Suppose that we have a hypothesis space with two hypothesis functions, $h_1$ and $h_2$. The two hypotheses each predict a ranking for a query over a document corpus, and a binary label indicates each document’s relevance (relevant or irrelevant) to the query. Using the example shown in Tables 4.1 and 4.2, we can demonstrate that models that optimize for classification performance are not directly concerned with ranking, which is often measured by average precision (AP) [350].

Table 4.1

Classification and reranking.


Table 4.2

Performance comparison between classification and ranking.


When we learn models that optimize for classification performance (such as accuracy), the objective is to find a threshold such that documents scoring above it are classified as relevant and documents scoring below it as irrelevant. Specifically, with $h_1$, a threshold between documents 8 and 9 gives two errors (documents 1–2 incorrectly classified as relevant), yielding an accuracy of 0.80. Similarly, with $h_2$, a threshold between documents 3 and 4 gives three errors (i.e., documents 7–9 are incorrectly classified), yielding an accuracy of 0.70. Therefore, a learning method that optimizes for classification performance would choose $h_1$, since it results in higher accuracy. However, this leads to a suboptimal ranking as measured by the AP scores and the number of correctly ranked pairs. In other words, conventional approaches to ranking by classification fail to provide a globally optimal ranking list.
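
The small numeric check below illustrates the same phenomenon with made-up relevance labels (not necessarily those of Table 4.1): the ranking with the higher best-threshold accuracy has the lower average precision.

```python
# Accuracy-optimal is not AP-optimal: h1 wins on accuracy, h2 wins on AP.
import numpy as np

def average_precision(relevance):          # relevance labels in ranked order (1/0)
    hits, precisions = 0, []
    for i, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return float(np.mean(precisions))

def best_accuracy(relevance):              # best classification threshold over the list
    rel = np.array(relevance)
    return max((np.sum(rel[:k] == 1) + np.sum(rel[k:] == 0)) / len(rel)
               for k in range(len(rel) + 1))

h1 = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]        # hypothetical relevance under ranking h1
h2 = [1, 1, 1, 0, 0, 0, 1, 1, 1, 0]        # hypothetical relevance under ranking h2
print(best_accuracy(h1), round(average_precision(h1), 2))   # 0.8 0.59
print(best_accuracy(h2), round(average_precision(h2), 2))   # 0.7 0.81
```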

4.6.3 Learning to Rank

To address the issues of ranking by classification, learning to rank has attracted much attention in recent years in the computer vision and visual search fields. The main task is to learn a ranking model from the ranking orders of the documents. The objective is to automatically create a ranking model by using labeled training data and machine learning techniques. Learning to rank is mostly studied in a supervised learning setting, which typically includes a learning stage followed by a ranking stage [48,54]. In the learning stage, a ranking function is built using training samples with relevance degrees or ranking orders; in the ranking stage, documents are ranked according to the relevance predicted by the ranking function. When applied to visual search, the ranking order represents the relevance degree of the visual documents (i.e., images or video shots) with respect to the given query.

Generally, existing methods for learning to rank fall into three dimensions: pointwise, pairwise, and listwise learning. For a given query and a set of retrieved documents, pointwise learning tries to directly estimate the relevance label of each query-document pair. For example, Prank [77] aims to find a rank-prediction rule that assigns each sample a rank as close as possible to the sample’s true rank. Although these approaches can be shown to be consistent for a variety of performance measures [76], they ignore relative information within collections of documents.

Pairwise learning, as proposed in [48,156,185], takes the relative nature of the scores into account by comparing pairs of documents. For example, ranking SVM is a method that reduces ranking to classification on document pairs. Pairwise learning approaches ensure that we obtain the correct order of documents even when we cannot obtain good estimates of the ratings directly [76]. Content-aware ranking has been proposed to simultaneously leverage textual and visual information in the learning process [136]. It is formulated based on large-margin structured output learning, with the visual information modeled as a regularization term. Direct optimization of the learning problem is nearly infeasible, since the number of constraints is huge, so an efficient cutting-plane algorithm is adopted to learn the model by iteratively adding the most violated constraints.
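
A hedged sketch of pairwise learning in the ranking-SVM style: document pairs with different relevance become feature-difference vectors, a linear classifier learns which document of each pair should rank higher, and its weight vector then scores individual documents. The synthetic features, graded labels, and use of scikit-learn's LinearSVC are assumptions for illustration.

```python
# Pairwise learning to rank: train a linear classifier on feature-difference vectors.
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def pairwise_training_data(features, relevance):
    X_pairs, y_pairs = [], []
    for i, j in combinations(range(len(relevance)), 2):
        if relevance[i] == relevance[j]:
            continue                                        # only pairs with a preference
        diff = features[i] - features[j]
        label = 1 if relevance[i] > relevance[j] else -1
        X_pairs.append(diff)
        y_pairs.append(label)
        X_pairs.append(-diff)
        y_pairs.append(-label)                              # mirrored pair keeps classes balanced
    return np.array(X_pairs), np.array(y_pairs)

rng = np.random.default_rng(0)
features = rng.random((50, 10))                             # per-document features for one query
relevance = rng.integers(0, 3, size=50)                     # graded relevance labels
X, y = pairwise_training_data(features, relevance)
model = LinearSVC().fit(X, y)
scores = features @ model.coef_.ravel()                     # linear scoring function
print(np.argsort(-scores)[:5])                              # top-5 documents by learned ranking
```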

Finally, listwise learning, as proposed in [54,341,374], treats the ranking in its totality. It takes ranked lists as instances and trains a ranking function through the minimization of a listwise loss function defined on the predicted list and the ground-truth list [374]. Each of these dimensions focuses on a different aspect of the data while largely ignoring the others. Recently, Moon et al. proposed a new learning-to-rank algorithm, IntervalRank, that uses isotonic regression to balance the tradeoff among the three dimensions [259].

Most of these methods for learning to rank are based on the assumption that a large collection of “labeled” data (training samples) is available in the learning stage. However, labeled data are usually too expensive to obtain, since users are reluctant to provide enough query examples (which can be regarded as “labeled” data) while searching. To alleviate the lack of labeled data, it is desirable to leverage the vast amount of “unlabeled” data (i.e., all the documents to be searched). For this purpose, a graph-based semisupervised learning-to-rank method (GLRank) was proposed for visual search [229]. Specifically, the query examples are first combined with randomly selected samples to form labeled pairs, and unlabeled pairs are formed with all the samples to be searched. Second, a relation graph is constructed in which sample pairs, instead of individual samples, are used as vertices. Each vertex represents the relevance relation of a pair in a semantic space, which is defined by the vector of confidence scores of concept detectors [222,235,342]. The relevance relationships are then propagated from the labeled pairs to the unlabeled pairs. When all the unlabeled pairs have received the propagated relevance relations, a round-robin criterion is explored to obtain the final ranking list. Clearly, GLRank belongs to pairwise learning.

In recent years, some approaches to learning to rerank have been proposed. For instance, Liu et al. claim that the best ranked list cannot be obtained until any two arbitrary documents from the list are correctly ranked in terms of relevance [233]. They first cluster the initial search results. Then they propose to incrementally discover so-called “pseudo-preference pairs” from the initial search results by considering both the cluster typicality and the local typicality within the cluster. Here, typicality (i.e., the visual representativeness of a visual document with respect to a query) is a higher-level notion than relevance. For example, an image with a boat may be relevant to the query “find the images with boat” but may not be typical if the boat object is quite small in the image. The ranking support vector machine (RSVM) is then employed to perform pairwise classification [156]. The documents are finally reranked according to the probabilities predicted by the RSVM. In [233], the optimal pairs are identified purely based on the low-level visual features from the initial search results. Later, the authors observed that leveraging concept detectors to associate a set of relevant high-level concepts with each document improves the discovery of optimal pairs [232]. Kang et al. [197] proposed exploring multiple pairwise relationships between documents in a learning setting to rerank search results. In particular, a set of pairwise features is utilized to capture various kinds of pairwise relationships, and two machine-learned reranking methods are designed to effectively combine these features with a base ranking function: a pairwise comparison method and a pairwise function decomposition method. Jain et al. [172] hypothesized that images clicked in response to a query are most relevant to the query and thus reranked the initial search results to promote images that are likely to be clicked to the top of the ranked list. Their reranking algorithm employs Gaussian process regression to predict the normalized click count for each image and combines it with the initial ranking score.

4.7 Conclusions and Future Challenges

In this chapter we presented an overview of visual search ranking along three main dimensions: text-based, query example-based, and concept-based search ranking. The related technologies and representative works in each dimension were also introduced. To address the issues of text-based visual search approaches, visual search reranking has received increasing attention in recent years, and we therefore also introduced the main paradigms and techniques for visual search reranking. In addition, as a promising direction, learning-based visual search ranking has attracted considerable attention in the literature, and we discussed the relationship between learning and ranking problems. Although significant progress in visual search ranking has been made during the past few decades, many emerging topics deserve further investigation and research. We summarize the future challenges here:

• Personalized visual search ranking. Generally speaking, the primary goal of a search system is to understand the user’s needs. Rich personal clues mined from search logs, community-based behaviors, or mobile devices can be used to better understand user preferences and guide the ranking process. Keeping users in the search loop is another way to perform human-centered search.

• Efficient visualization tools for visual search ranking. An efficient and user-friendly interaction tool is key to an efficient search: it makes it convenient for users to express their search intent, and visual data are particularly suited to interaction because a user can quickly grasp vivid visual information and judge relevance at a glance.

• Context-aware visual search ranking. When a user is conducting a search task, she actually provides rich context to the search system, e.g., past behaviors in the same session, the browsed Web pages if a search is triggered from browsing behavior, geographic location and time of the user, social network if the user remains signed in, and so on. All these contexts provide valuable cues for contextual search.

• Ranking aided by crowdsourcing. Crowdsourcing is a popular concept in the so-called Web 2.0 era, reflecting the trend of enlisting an open community as content creators or knowledge providers. Specifically, crowdsourcing may include the heterogeneous relationships between various entities in a social network, search results from the multiple visual search engines available on the Internet, well-organized online knowledge sites such as Wikipedia [368] and Mediapedia [253], and user profiles on mobile devices. All these sources provide valuable cues for ranking, and crowdsourcing-aided reranking brings a new challenge to the computer vision community.