Beyond Locality-Sensitive Hashing

This is an extended version of (the only) post in my personal blog.

In this post I will introduce locality-sensitive hashing (for which Andrei Broder, Moses Charikar and Piotr Indyk were recently awarded the Paris Kanellakis Theory and Practice Award) and sketch recent developments by Alexandr Andoni, Piotr Indyk, Huy L. Nguyen and myself (see also the video by Alexandr Andoni, where he explains the same result from a somewhat different perspective).

One problem that comes up a lot in machine learning, databases and other areas is the near neighbor search problem (NN). Given a set of points P in a d-dimensional space and a threshold r > 0, the goal is to build a data structure that, given a query q, reports any point from P within distance at most r from q.
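
To make the problem concrete, here is a minimal brute-force baseline (a sketch of my own, not from the post): it answers a query by a linear scan in O(nd) time, which is exactly the kind of dependence on n we would like to beat.

    import numpy as np

    def near_neighbor_brute_force(P, q, r):
        """Return any point of P within distance r of q, or None.
        Linear scan: O(n * d) per query -- the baseline to beat."""
        q = np.asarray(q, dtype=float)
        for p in P:
            if np.linalg.norm(np.asarray(p, dtype=float) - q) <= r:
                return p
        return None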

Unfortunately, all known data structures for NN suffer from the so-called “curse of dimensionality”: if the query time is o(n) (hereinafter we denote by n the number of points in our dataset P), then either the space or the query time is 2^{\Omega(d)}.

To overcome this obstacle one can consider the approximate near neighbor search problem (ANN). Now, in addition to P and r, we are also given an approximation parameter c > 1. The goal is, given a query q, to report a point from P within distance cr from q, provided that the neighborhood of q of radius r is not empty.

It turns out that one can overcome the curse of dimensionality for ANN (see, for example, this paper and its references). If one insists on near-linear (in n) memory and time subexponential in the dimension, then the only known technique for ANN is locality-sensitive hashing (LSH). Let us give some definitions. We say that a hash family \mathcal{H} on a metric space \mathcal{M} = (X, D) is (r, cr, p_1, p_2)-sensitive if for every two points x, y \in X:

  • if D(x, y) \leq r, then \mathrm{Pr}_{h \sim \mathcal{H}}[h(x) = h(y)] \geq p_1;
  • if D(x, y) \geq cr, then \mathrm{Pr}_{h \sim \mathcal{H}}[h(x) = h(y)] \leq p_2.

Of course, for \mathcal{H} to be meaningful, we should have p_1 > p_2. Informally speaking, the closer two points are, the higher their collision probability should be.

Let us construct a simple LSH family for the hypercube \{0, 1\}^d equipped with the Hamming distance. We set \mathcal{H} = \{h_1, h_2, \ldots, h_d\}, where h_i(x) = x_i (so a uniformly random function from \mathcal{H} just reads a random coordinate). It is easy to check that this family is (r, cr, 1 - r / d, 1 - cr / d)-sensitive.
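
A minimal sketch of this bit-sampling family (the function names are mine): sampling a hash function amounts to picking a uniformly random coordinate, and two points at Hamming distance t collide with probability exactly 1 - t/d, which gives the stated sensitivity.

    import random

    def sample_bit_hash(d, rng=random):
        """Sample h_i from the family: h_i(x) = x[i] for a uniformly random coordinate i."""
        i = rng.randrange(d)
        return lambda x: x[i]

    def collision_probability(x, y):
        """Pr over the family that h(x) = h(y): the fraction of coordinates on which
        x and y agree, i.e. 1 - (Hamming distance) / d."""
        d = len(x)
        return 1 - sum(a != b for a, b in zip(x, y)) / d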

In 1998 Piotr Indyk and Rajeev Motwani proved the following theorem. Suppose we have an (r, cr, p_1, p_2)-sensitive hash family \mathcal{H} for the metric we want to solve ANN for. Moreover, assume that we can sample and evaluate a function from \mathcal{H} relatively quickly, store it efficiently, and that p_1 = 1 / n^{o(1)}. Then one can solve ANN for this metric with space roughly O(n^{1 + \rho}) and query time O(n^{\rho}), where \rho = \ln(1 / p_1) / \ln(1 / p_2). Plugging in the family from the previous paragraph, for which \rho \leq 1 / c, we are able to solve ANN for the Hamming distance with space around O(n^{1 + 1 / c}) and query time O(n^{1/c}). More generally, the same paper proved that one can achieve \rho \leq 1 / c for the \ell_p norms with 1 \leq p \leq 2 (via an embedding by William Johnson and Gideon Schechtman). In 2006 Alexandr Andoni and Piotr Indyk proved that one can achieve \rho \leq 1 / c^2 for the \ell_2 norm.
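
As a quick numerical sanity check (the function name and the toy numbers below are mine), one can compute \rho = \ln(1 / p_1) / \ln(1 / p_2) for the bit-sampling family and see that it indeed stays slightly below 1/c:

    from math import log

    def lsh_exponent(p1, p2):
        """rho = ln(1/p1) / ln(1/p2) from the Indyk--Motwani theorem."""
        return log(1 / p1) / log(1 / p2)

    # Bit-sampling family on the hypercube: p1 = 1 - r/d, p2 = 1 - c*r/d.
    d, r, c = 1000, 10, 2
    print(lsh_exponent(1 - r / d, 1 - c * r / d))  # ~0.497, just below 1/c = 0.5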

Thus, a natural question arises: how optimal are the above bounds on \rho (provided that p_1 is not too tiny)? This question was resolved in 2011 by Ryan O’Donnell, Yi Wu and Yuan Zhou: they showed lower bounds of \rho \geq 1/c - o(1) for \ell_1 and \rho \geq 1/c^2 - o(1) for \ell_2, matching the upper bounds. Thus, the simple LSH family for the hypercube above is, in fact, optimal!

Is it the end of the story? Not quite. The catch is that the definition of LSH families is actually too strong. The real property that is used in the ANN data structure is the following: for every pair of points x \in P, y \in X we have

  • if D(x, y) \leq r, then \mathrm{Pr}_{h \sim \mathcal{H}}[h(x) = h(y)] \geq p_1;
  • if D(x, y) \geq cr, then \mathrm{Pr}_{h \sim \mathcal{H}}[h(x) = h(y)] \leq p_2.

The difference from the definition of an (r, cr, p_1, p_2)-sensitive family is that we now restrict one of the points to lie in the prescribed set P. It turns out that one can indeed exploit this dependence on the data to get a slightly improved LSH family. Namely, we are able to achieve \rho \leq 7 / (8c^2) + O(1 / c^3) + o(1) for \ell_2, which by a simple embedding of \ell_1 into \ell_2-squared gives \rho \leq 7 / (8c) + O(1 / c^{3/2}) + o(1) for \ell_1 (in particular, for the Hamming distance over the hypercube, where the distance between binary vectors equals its squared Euclidean distance). This is nice for two reasons. First, we are able to overcome the natural LSH barrier. Second, this result shows that what “practitioners” have been doing for some time (namely, data-dependent space partitioning) can give an advantage in theory, too.

In the remaining text let me briefly sketch the main ideas of the result. From now on, assume that our metric is \ell_2. The first ingredient is an LSH family that simplifies and improves upon \rho = 1 / c^2 for the case when all data points and queries lie in a ball of radius O(cr). This scheme has strong parallels with an SDP rounding scheme of David Karger, Rajeev Motwani and Madhu Sudan.
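
To give a flavor of this ingredient, here is a heavily simplified “ball carving” sketch in the spirit of the Karger-Motwani-Sudan rounding: hash a point (normalized to the unit sphere) to the index of the first random Gaussian direction that forms a small enough angle with it. The threshold and all other details below are illustrative choices of mine, not the actual family or parameters from the paper.

    import numpy as np

    def spherical_cap_hash(x, seed, threshold=2.0, max_caps=100000):
        """Hash a point on the unit sphere to the index of the first random Gaussian
        direction g_t with <g_t, x> >= threshold.  The seed fixes the sequence
        g_1, g_2, ..., so all points see the same caps; nearby points on the sphere
        tend to enter the same cap first."""
        rng = np.random.default_rng(seed)      # same seed => same caps for every point
        x = np.asarray(x, dtype=float)
        x = x / np.linalg.norm(x)
        for t in range(max_caps):
            g = rng.standard_normal(x.shape[0])
            if g @ x >= threshold:
                return t
        return max_caps                        # overflow bucket; rare for a sane threshold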

The second (and the main) ingredient is a two-level hashing scheme that leverages the above better LSH family. First, let us recall how the standard LSH data structure works. We start from an (r, cr, p_1, p_2)-sensitive family \mathcal{H} and consider the following simple “tensoring” operation: we sample k functions h_1, h_2, \ldots, h_k from \mathcal{H} independently and hash a point x into the tuple (h_1(x), h_2(x), \ldots, h_k(x)). It is easy to see that the new family is (r, cr, p_1^k, p_2^k)-sensitive. Let us denote this family by \mathcal{H}^k. Now we choose k so as to get the following collision probabilities:

  • 1/n at distance cr;
  • 1/n^{\rho} at distance r

(actually, we cannot set k to achieve these probabilities exactly, since k must be an integer; that is exactly why we need the condition p_1 = 1 / n^{o(1)}). Now we hash all the points of the dataset using a random function from \mathcal{H}^k, and to answer a query q we hash q and enumerate all the points in the corresponding bucket until we find anything within distance cr. To analyze this simple data structure, we observe that, by the choice of k, the expected number of “outliers” (points at distance more than cr) we encounter is at most one. On the other hand, any fixed near neighbor (a point within distance at most r) collides with q with probability at least n^{-\rho}, so, to boost the probability of success to a constant, we build O(n^{\rho}) independent hash tables. As a result, we get a data structure with space O(n^{1 + \rho}) and query time O(n^{\rho}).
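
Putting these pieces together, here is a minimal single-level LSH index for the hypercube with bit sampling as the underlying family (a sketch under my own naming and parameter choices; it rounds k and the number of tables and ignores the implementation details a real data structure would care about):

    import math
    import random
    from collections import defaultdict

    class HammingLSHIndex:
        """Single-level LSH for {0,1}^d: L hash tables, each keyed by a tuple of
        k sampled coordinates (the "tensoring" H^k from the text)."""

        def __init__(self, points, r, c, seed=0):
            rng = random.Random(seed)
            n, d = len(points), len(points[0])
            p1, p2 = 1 - r / d, 1 - c * r / d            # collision probs at distances r and cr
            self.k = max(1, math.ceil(math.log(n) / math.log(1 / p2)))   # p2^k ~ 1/n
            rho = math.log(1 / p1) / math.log(1 / p2)                    # query exponent
            self.L = max(1, math.ceil(n ** rho))         # ~n^rho tables boost success to a constant
            self.r, self.c = r, c
            self.coords = [[rng.randrange(d) for _ in range(self.k)] for _ in range(self.L)]
            self.tables = [defaultdict(list) for _ in range(self.L)]
            for p in points:
                for table, coords in zip(self.tables, self.coords):
                    table[tuple(p[i] for i in coords)].append(p)

        def query(self, q):
            """Return some point within distance c*r of q, or None."""
            for table, coords in zip(self.tables, self.coords):
                for p in table.get(tuple(q[i] for i in coords), []):
                    if sum(a != b for a, b in zip(p, q)) <= self.c * self.r:
                        return p
            return None

The query may scan several far points in a bucket before giving up on a table, but, as argued above, the expected number of such outliers is at most one per table.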

Now let us show how to build a similar two-level data structure, which achieves somewhat better parameters. First, we apply the LSH family \mathcal{H} for \ell_2 with \rho \approx 1/c^2, but only partially. Namely, we choose a constant parameter \tau > 1 and k such that the collision probabilities are as follows:

  • 1/n at distance \tau cr;
  • 1/n^{1/\tau^2} at distance cr;
  • 1/n^{1/(\tau c)^2} at distance r.

Now we hash all the data points with \mathcal{H}^k and argue that with high probability every bucket has diameter O(cr). Given this “bounded buckets” condition, we can utilize the better family designed above: namely, we hash every bucket using our new family to achieve the following probabilities:

  • 1/n^{1 - 1/\tau^2} at distance cr;
  • 1/n^{(1 - \Omega_{\tau}(1))(1 - 1 / \tau^2) / c^2} at distance r.

Overall, the data structure consists of an outer hash table that uses the LSH family of Andoni and Indyk, and then every bucket is hashed using the new family. Due to independence, the collision probabilities multiply, and we get

  • 1/n at distance cr;
  • 1/n^{(1 - \Omega_{\tau}(1)) / c^2} at distance r.

Then we argue as before and conclude that we can achieve \rho \leq (1 - \Omega(1)) / c^2.
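
Structurally, the two-level scheme can be pictured roughly as follows (a very rough sketch of mine: outer_hash stands for a function drawn from \mathcal{H}^k above, ball_family_for for the new family instantiated inside a bucket of diameter O(cr), is_within_cr for the distance check, and all the careful parameter choices from the lists above are omitted):

    from collections import defaultdict

    def build_two_level(points, outer_hash, ball_family_for):
        """Outer level: hash everything with the data-independent LSH.
        Inner level: each bucket (diameter O(cr) w.h.p.) gets its own hash
        function from the ball family, instantiated for that bucket."""
        outer = defaultdict(list)
        for p in points:
            outer[outer_hash(p)].append(p)
        index = {}
        for key, bucket in outer.items():
            ball_hash = ball_family_for(bucket)
            inner = defaultdict(list)
            for p in bucket:
                inner[ball_hash(p)].append(p)
            index[key] = (ball_hash, inner)
        return index

    def query_two_level(index, q, outer_hash, is_within_cr):
        """Follow the same two hashes for the query and scan the inner bucket."""
        entry = index.get(outer_hash(q))
        if entry is None:
            return None
        ball_hash, inner = entry
        for p in inner.get(ball_hash(q), []):
            if is_within_cr(p, q):
                return p
        return None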

After carefully optimizing all the parameters, we in fact achieve \rho \approx 7 / (8c^2). Then we go further and consider a multi-level scheme with several distance scales. Choosing these scales carefully, we achieve \rho \approx 1 / (2 c^2 \ln 2).

Ilya Razenshteyn


5 thoughts on “Beyond Locality-Sensitive Hashing”

  1. Welcome! – Not so Great Ideas in Theoretical Computer Science

  2. Nice! To make sure I understand the final result: the eventual constant is $1/(2\ln 2)\simeq 0.721$ (last paragraph) for multilevel hashing, and the $7/8=0.875$ is an intermediate result if one stops at two-level hashing?

    Also, is there any lower bound or intuition of lower bound with this new, weaker definition of LSH? And what part of your approach (if it’s obvious, I apologize) actually hinges on the new definition (restricting to the set $P$)?

    Thanks!

    • You are right.

      As for the lower bound, there is a paper by Motwani, Naor and Panigrahy (http://arxiv.org/pdf/cs/0510088.pdf), where they prove a data-dependent lower bound of 1/2 for a concrete instance (basically, for a bunch of random points). For this instance our LSH family can be shown to achieve 1/2, too.

      As for the dependence on data, we “refuse” to hash a point on the second level if it is too far from the corresponding center. For the ANN application this is fine, since in that case we would fail to find a near neighbor anyway.

      • Thanks! How do you implement such a “refusal”? You first hash all the points in your $P$ and compute the corresponding medium-level “ball”, and then on a point query do the first layer and then check whether it landed within the ball (and only proceed if so)?
