In this talk, I will discuss some of my recent and surprising findings on the use of hashing algorithms for large-scale statistical estimation. Locality Sensitive Hashing (LSH) is a hugely popular algorithm for sub-linear near-neighbor search. However, if we look at the proofs behind LSH, it turns out that LSH is fundamentally a constant-time (amortized) adaptive sampler, which happens to be used for near-neighbor search. This observation adds another feather to LSH's cap: LSH offers a unique capability for smart sampling and statistical estimation. In particular, if we view the (K, L) LSH algorithm as one that adaptively, and very efficiently, samples a data point x with probability 1 - (1 - p^K)^L, where p is the collision probability between the query and x, then it can be turned into efficient unbiased estimation algorithms. This sampling process, which we call Locality Sensitive Sampling (LSS), has near-constant cost and is significantly more efficient than near-neighbor queries. I will demonstrate how this dynamic and efficient sampling breaks the computational barriers in adaptive estimation: for the first time, it is possible to pay the cost of uniform sampling while getting the benefits of adaptive sampling. I will show how this one fundamental idea can break what we call the computational chicken-and-egg loop on three favorite problems: 1) partition function estimation, 2) adaptive gradient estimation, and 3) Deep Learning.
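To make the sampling-to-estimation step concrete, here is a minimal illustrative sketch (not the speaker's implementation): it assumes each data point x is retrieved by a (K, L) LSH structure with inclusion probability q_x = 1 - (1 - p_x^K)^L and shows how inverse-probability (Horvitz-Thompson style) weighting yields an unbiased estimate of a sum over the data. The collision probabilities and the quantity being summed are synthetic, and the LSH retrieval is simulated with Bernoulli draws.

```python
import numpy as np

# Minimal sketch (assumed setup, not the speaker's code): unbiased sum
# estimation under Locality Sensitive Sampling. Each item x is assumed
# to be retrieved with probability q_x = 1 - (1 - p_x^K)^L, where p_x is
# the query-item collision probability; a real system would obtain this
# retrieval from (K, L) LSH buckets at near-constant amortized cost.
rng = np.random.default_rng(0)

K, L = 4, 32                       # assumed LSH hyperparameters
n = 10_000
p = rng.uniform(0.05, 0.95, n)     # synthetic collision probabilities p_x
f = rng.normal(size=n)             # synthetic quantity f(x) whose sum we want

q = 1.0 - (1.0 - p**K) ** L        # inclusion probability of item x

sampled = rng.random(n) < q        # simulate which items the LSH tables return
# Inverse-probability weighting makes the estimator unbiased:
# E[sum f(x)/q_x over retrieved x] = sum f(x).
estimate = np.sum(f[sampled] / q[sampled])

print("true sum    :", f.sum())
print("LSS estimate:", estimate)
```

The same weighting idea underlies the unbiased estimators mentioned above; only the source of the inclusion probabilities (the LSH data structure rather than a simulation) changes.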