This is an extended version of (the only) post in my personal blog.

In this post I will introduce locality-sensitive hashing (for which Andrei Broder, Moses Charikar and Piotr Indyk have recently been awarded the Paris Kanellakis Theory and Practice Award) and sketch recent developments by Alexandr Andoni, Piotr Indyk, Nguyen Le Huy and myself (see the video by Alexandr Andoni, where he explains the same result from a somewhat different perspective).

One problem that one encounters a lot in machine learning, databases and other areas is the near neighbor search problem (NN). Given a set $P$ of points in a $d$-dimensional space and a threshold $r > 0$, the goal is to build a data structure that, given a query $q$, reports any point from $P$ within distance at most $r$ from $q$.

Unfortunately, all known data structures for NN suffer from the so-called “curse of dimensionality”: if the query time is $o(n)$ (hereinafter we denote by $n$ the number of points in our dataset $P$), then either the space or the query time is $2^{\Omega(d)}$.

To overcome this obstacle one can consider the approximate near neighbor search problem (ANN). Now in addition to $P$ and $r$ we are also given an approximation parameter $c > 1$. The goal is, given a query $q$, to report a point from $P$ within distance $cr$ from $q$, provided that the neighborhood of radius $r$ around $q$ is not empty.

It turns out that one can overcome the curse of dimensionality for ANN (see, for example, this paper and its references). If one insists on having near-linear (in $n$) memory and query time subexponential in the dimension, then the only known technique for ANN is locality-sensitive hashing. Let us give some definitions. Say a hash family $\mathcal{H}$ on a metric space $\mathcal{M} = (X, D)$ is $(r, cr, p_1, p_2)$-sensitive, if for every two points $x, y \in X$:

- if $D(x, y) \le r$, then $\Pr_{h \sim \mathcal{H}}[h(x) = h(y)] \ge p_1$;
- if $D(x, y) \ge cr$, then $\Pr_{h \sim \mathcal{H}}[h(x) = h(y)] \le p_2$.

Of course, for $\mathcal{H}$ to be meaningful, we should have $p_1 > p_2$. Informally speaking, the closer two points are, the larger the probability of their collision is.

Let us construct a simple LSH family for the hypercube $\{0, 1\}^d$, equipped with the Hamming distance. We set $\mathcal{H} = \{h_1, h_2, \ldots, h_d\}$, where $h_i(x) = x_i$. It is easy to check that this family is $(r, cr, 1 - r/d, 1 - cr/d)$-sensitive.
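To make this concrete, here is a tiny Python sketch of the bit-sampling family (all function names are mine and purely illustrative): a hash function is just a random coordinate, so two points at Hamming distance $t$ collide with probability exactly $1 - t/d$.

```python
import random

def sample_bit_hash(d):
    """Sample a function h_i from H = {h_1, ..., h_d}, where h_i(x) = x_i."""
    i = random.randrange(d)
    return lambda x: x[i]

def empirical_collision_probability(x, y, trials=100_000):
    """Estimate Pr_{h ~ H}[h(x) = h(y)]; for bit sampling this is 1 - dist(x, y) / d."""
    d = len(x)
    hits = 0
    for _ in range(trials):
        h = sample_bit_hash(d)
        if h(x) == h(y):
            hits += 1
    return hits / trials

# Two points of {0, 1}^100 at Hamming distance 20 collide with probability ~0.8.
d = 100
x = (0,) * d
y = (1,) * 20 + (0,) * (d - 20)
print(empirical_collision_probability(x, y))  # ~0.80
```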

In 1998 Piotr Indyk and Rajeev Motwani proved the following theorem. Suppose we have a $(r, cr, p_1, p_2)$-sensitive hash family $\mathcal{H}$ for the metric we want to solve ANN for. Moreover, assume that we can sample and evaluate a function from $\mathcal{H}$ relatively quickly, store it efficiently, and that $p_1 \ge n^{-o(1)}$. Then, one can solve ANN for this metric with space roughly $O(n^{1 + \rho})$ and query time $O(n^{\rho})$, where $\rho = \ln(1/p_1) / \ln(1/p_2)$. Plugging in the family from the previous paragraph, we are able to solve ANN for the Hamming distance in space around $O(n^{1 + 1/c})$ and query time $O(n^{1/c})$. More generally, in the same paper it was proved that one can achieve $\rho \le 1/c$ for the case of $\ell_p$ norms for $1 \le p \le 2$ (via an embedding by William Johnson and Gideon Schechtman). In 2006 Alexandr Andoni and Piotr Indyk proved that one can achieve $\rho \le 1/c^2 + o(1)$ for the $\ell_2$ norm.
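As a quick sanity check of the bound $\rho \le 1/c$ for bit sampling, one can plug $p_1 = 1 - r/d$ and $p_2 = 1 - cr/d$ into $\rho = \ln(1/p_1)/\ln(1/p_2)$ for a few concrete parameters; the snippet below is only such an illustration, the parameter values are arbitrary.

```python
from math import log

def rho_bit_sampling(d, r, c):
    """rho = ln(1/p1) / ln(1/p2) for the bit-sampling family: p1 = 1 - r/d, p2 = 1 - c*r/d."""
    p1 = 1 - r / d
    p2 = 1 - c * r / d
    return log(1 / p1) / log(1 / p2)

for d, r, c in [(1000, 10, 2), (1000, 50, 2), (1000, 10, 4)]:
    print(d, r, c, round(rho_bit_sampling(d, r, c), 4), "<=", 1 / c)
```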

Thus, the natural question arises: how optimal are the abovementioned bounds on $\rho$ (provided that $p_1$ is not too tiny)? This question was resolved in 2011 by Ryan O’Donnell, Yi Wu and Yuan Zhou: they showed a lower bound $\rho \ge 1/c - o(1)$ for $\ell_1$ and $\rho \ge 1/c^2 - o(1)$ for $\ell_2$, matching the upper bounds. Thus, the above simple LSH family for the hypercube is, in fact, optimal!

Is it the end of the story? Not quite. The catch is that the definition of LSH families is actually too strong. The real property that is used in the ANN data structure is the following: for every pair of points $x \in P$, $y \in X$ we have

- if $D(x, y) \le r$, then $\Pr_{h \sim \mathcal{H}}[h(x) = h(y)] \ge p_1$;
- if $D(x, y) \ge cr$, then $\Pr_{h \sim \mathcal{H}}[h(x) = h(y)] \le p_2$.

The difference with the definition of a $(r, cr, p_1, p_2)$-sensitive family is that we now restrict one of the points to be in a prescribed set $P$ (and allow the family to depend on $P$). And it turns out that one can indeed exploit this dependency on data to get a slightly improved LSH family. Namely, we are able to achieve $\rho \le \frac{7}{8c^2} + O(1/c^3) + o(1)$ for $\ell_2$, which by a simple embedding of $\ell_1$ into $\ell_2$-squared gives $\rho \le \frac{7}{8c} + O(1/c^{3/2}) + o(1)$ for $\ell_1$ (in particular, for the Hamming distance over the hypercube). This is nice for two reasons. First, we are able to overcome the natural LSH barrier of $1/c^2$. Second, this result shows that what “practitioners” have been doing for some time (namely, data-dependent space partitioning) can give an advantage in theory, too.

In the remaining text let me briefly sketch the main ideas of the result. From now on, assume that our metric is $\ell_2$. The first ingredient is an LSH family that simplifies and improves upon the bound $\rho \approx 1/c^2$ for the case when all data points and queries lie in a ball of radius $O(cr)$. This scheme has strong parallels with an SDP rounding scheme of David Karger, Rajeev Motwani and Madhu Sudan.
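I will not describe that family here, but to convey the Karger–Motwani–Sudan flavor, here is a hedged sketch of carving the unit sphere into random “caps”: repeatedly sample a Gaussian direction and cut off the not-yet-assigned points whose inner product with it exceeds a threshold. The actual family and parameters from the paper are more delicate; everything below, including the threshold `eta`, is purely illustrative.

```python
import numpy as np

def cap_carving_hash(points, eta, rng):
    """Partition unit vectors into 'caps': repeatedly draw a Gaussian direction g
    and assign every still-unassigned point x with <g, x> >= eta to a new part.
    Close points tend to fall into the same cap more often than far ones."""
    n, d = points.shape
    labels = -np.ones(n, dtype=int)
    part = 0
    while (labels == -1).any():
        g = rng.standard_normal(d)
        mask = (labels == -1) & (points @ g >= eta)
        if mask.any():
            labels[mask] = part
            part += 1
    return labels

rng = np.random.default_rng(0)
pts = rng.standard_normal((500, 50))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project onto the unit sphere
print(cap_carving_hash(pts, eta=1.0, rng=rng)[:20])
```

A query point would then be assigned to the first cap (in the order the directions were sampled) that contains it, which is what makes such a partition usable as a hash function for both data and queries.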

The second (and the main) ingredient is a two-level hashing scheme that leverages the abovementioned better LSH family. First, let us recall how the standard LSH data structure works. We start from a $(r, cr, p_1, p_2)$-sensitive family $\mathcal{H}$ and then consider the following simple “tensoring” operation: we sample $k$ functions $h_1, h_2, \ldots, h_k$ from $\mathcal{H}$ independently and then we hash a point $x$ into the tuple $(h_1(x), h_2(x), \ldots, h_k(x))$. It is easy to see that the new family is $(r, cr, p_1^k, p_2^k)$-sensitive. Let us denote this family by $\mathcal{H}^k$. Now we choose $k$ to have the following collision probabilities:

- $n^{-\rho}$ at distance $r$;
- $1/n$ at distance $cr$

(actually, we cannot set $k$ to achieve these probabilities exactly, since $k$ must be an integer; that’s exactly why we need the condition $p_1 \ge n^{-o(1)}$). Now we hash all the points from the dataset using a random function from $\mathcal{H}^k$, and to answer a query $q$ we hash $q$ and enumerate all the points in the corresponding bucket, until we find anything within distance $cr$. To analyze this simple data structure, we observe that the average number of “outliers” (points at distance more than $cr$) we encounter is at most one, due to the choice of $k$. On the other hand, we find any near neighbor (within distance at most $r$) with probability at least $n^{-\rho}$, so, to boost it to constant, we build $n^{\rho}$ independent hash tables. As a result, we get a data structure with space $O(n^{1 + \rho})$ and query time $O(n^{\rho})$.
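For completeness, here is a compact Python sketch of this classical construction for the Hamming case: $L$ independent tables, each keyed by $k$ sampled coordinates, with the query scanning its $L$ buckets. The parameter values in the toy example are arbitrary, not the tuned $k \approx \log n / \log(1/p_2)$ and $L \approx n^{\rho}$ from the analysis.

```python
import random
from collections import defaultdict

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

class HammingLSH:
    """Classic LSH for {0,1}^d: L tables, each keyed by k randomly sampled coordinates."""

    def __init__(self, d, k, L, seed=0):
        rng = random.Random(seed)
        self.coords = [[rng.randrange(d) for _ in range(k)] for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _key(self, table_id, x):
        return tuple(x[i] for i in self.coords[table_id])

    def insert(self, x):
        for t in range(len(self.tables)):
            self.tables[t][self._key(t, x)].append(x)

    def query(self, q, cr):
        # Scan the L buckets q falls into; return the first point within distance cr.
        for t in range(len(self.tables)):
            for x in self.tables[t].get(self._key(t, q), []):
                if hamming(x, q) <= cr:
                    return x
        return None

# Toy usage: plant a near neighbor at distance 3 among random points.
d, n = 64, 200
rng = random.Random(1)
points = [tuple(rng.randrange(2) for _ in range(d)) for _ in range(n)]
q = list(points[0]); q[:3] = [1 - b for b in q[:3]]; q = tuple(q)
index = HammingLSH(d, k=12, L=20)
for p in points:
    index.insert(p)
print(index.query(q, cr=10))
```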

Now let us show how to build a similar two-level data structure, which achieves somewhat better parameters. First, we apply the LSH family of Andoni and Indyk for $\ell_2$ with $\rho \approx 1/c^2$, but only partially. Namely, we choose a constant parameter $\tau > 1$ and $k$ such that the collision probabilities are as follows:

- $n^{-1/(\tau c)^2}$ at distance $r$;
- $n^{-1/\tau^2}$ at distance $cr$;
- $1/n$ at distance $\tau c r$.

Now we hash all the data points with this partial family and argue that with high probability every bucket has diameter $O(\tau c r)$. But now, given this “bounded buckets” condition, we can utilize the better family designed above! Namely, we hash every bucket using our new family to achieve the following probabilities (see the schematic sketch right after this list):

- $n^{-\left(1 - \frac{1}{\tau^2}\right)\left(1 - \frac{1}{2\tau^2}\right)/c^2}$ at distance $r$;
- $n^{-\left(1 - \frac{1}{\tau^2}\right)}$ at distance $cr$.
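Schematically, one repetition of the resulting two-level structure might look as follows. Both hash families are abstract callables here, and the sketch deliberately omits the pruning of points that land too far from their bucket’s center, as well as all parameter choices.

```python
from collections import defaultdict

class TwoLevelLSH:
    """Schematic two-level hashing: an outer (data-independent) hash splits the data
    into buckets of bounded diameter; each bucket is then indexed with a second,
    bucket-specific hash tuned to that bounded radius. Both families are passed in
    as callables and are placeholders here, not the actual constructions."""

    def __init__(self, points, sample_outer_hash, make_inner_hash):
        self.outer = sample_outer_hash()                  # g : point -> outer key
        buckets = defaultdict(list)
        for p in points:
            buckets[self.outer(p)].append(p)
        # One inner hash table per outer bucket; the inner family may depend on the bucket.
        self.inner = {}
        for key, bucket in buckets.items():
            h = make_inner_hash(bucket)                   # h : point -> inner key
            table = defaultdict(list)
            for p in bucket:
                table[h(p)].append(p)
            self.inner[key] = (h, table)

    def candidates(self, q):
        entry = self.inner.get(self.outer(q))
        if entry is None:
            return []
        h, table = entry
        return table.get(h(q), [])

# Toy usage with stand-in hash functions (nothing to do with the real families):
index = TwoLevelLSH([(0.1, 0.2), (0.9, 0.8), (0.11, 0.19)],
                    sample_outer_hash=lambda: (lambda p: round(p[0])),
                    make_inner_hash=lambda bucket: (lambda p: round(10 * p[1])))
print(index.candidates((0.12, 0.21)))
```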

Overall, the data structure consists of an outer hash table that uses the LSH family of Andoni and Indyk, and then every bucket is hashed using the new family. Due to independence, the collision probabilities multiply, and we get

- $n^{-\frac{1}{\tau^2 c^2} - \left(1 - \frac{1}{\tau^2}\right)\left(1 - \frac{1}{2\tau^2}\right)\frac{1}{c^2}}$ at distance $r$;
- $1/n$ at distance $cr$.

Then we argue as before and conclude that we can achieve $\rho \le \frac{1}{\tau^2 c^2} + \left(1 - \frac{1}{\tau^2}\right)\left(1 - \frac{1}{2\tau^2}\right)\frac{1}{c^2}$.
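To see where the constant $7/8$ below comes from, write $u = 1/\tau^2$; the leading term of $\rho c^2$ is then $u + (1 - u)(1 - u/2)$, and a quick numerical minimization (a sketch that ignores all lower-order terms) confirms that it is minimized at $u = 1/2$, i.e. $\tau = \sqrt{2}$:

```python
# Leading term of rho * c^2 as a function of u = 1 / tau^2 (lower-order terms dropped).
def exponent(u):
    return u + (1 - u) * (1 - u / 2)

best_u = min((i / 10_000 for i in range(1, 10_000)), key=exponent)
print(best_u, exponent(best_u))  # 0.5 0.875  ->  tau = sqrt(2), i.e. rho ~ 7 / (8 c^2)
```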

After carefully optimizing all the parameters, we achieve, in fact, $\rho \le \frac{7}{8c^2} + O(1/c^3) + o(1)$. Then we go further, and consider a multi-level scheme with several distance scales. Choosing these scales carefully, we achieve $\rho \le \frac{1}{2 \ln 2} \cdot \frac{1}{c^2} + O(1/c^3) + o(1)$.


Nice! To make sure I understand the final result: the eventual constant is $1/(2\ln 2)\simeq 0.721$ (last paragraph) for multilevel hashing, and the $7/8=0.875$ is an intermediate result if one stops at two-level hashing?

Also, is there any lower bound or intuition of lower bound with this new, weaker definition of LSH? And what part of your approach (if it’s obvious, I apologize) actually hinges on the new definition (restricting to the set $P$)?

Thanks!

You are right.

As for the lower bound, there is a paper by Motwani, Naor and Panigrahy (http://arxiv.org/pdf/cs/0510088.pdf), where they prove a data-dependent lower bound of 1/2 for a concrete instance (basically, for a bunch of random points). For this instance our LSH family can be shown to achieve 1/2, too.

As for the dependency on data, we “refuse” to hash a point on the second level if it is too far from the corresponding center. For the ANN application this is fine, since we are allowed to fail to find a near neighbor with small probability anyway.

Thanks! How do you implement such a “refusal”? You first hash all the points in your $P$ and compute the corresponding medium-level “ball”, and then, on a query point, you do the first layer and check whether it landed within the ball (and only proceed if so)?

Yes, basically. We will land in the ball with high probability (if there is a near neighbor), but there is still a chance that we will have to prune.

Two questions here:

(1) Why is a bounded (small) diameter preferred? Or, put another way, why is the first-level hashing necessary? In my current understanding, the first-level hashing is used to generate clusters with small bounded diameter and to introduce randomness. Are there any other purposes?

(2) Why drop points before the second-level hashing? What’s the intuition behind that?

First of all, sorry for such a late reply!

(1) Your intuition is correct: the outer level is used to achieve the low-diameter condition (it is not used for introducing randomness though).

(2) We drop points that we cannot handle. If a point is too far from the respective center, then the inner family will not achieve a good $\rho$. But, as we can see, we drop every fixed point with low probability.