In many situations we want to search and navigate a collection of objects in a space whose underlying relationships between objects are unknown. More precisely, consider a database with some notion of similarity or distance between objects that cannot be quantified directly. Moreover, the similarity relationship does not necessarily satisfy nice properties such as the triangle inequality. Further, tagging the objects is either too costly or simply not possible, because meaningful labels are hard to define. Such situations are common when human perception is involved, for instance in comparing images, music, videos, and websites. A fundamental challenge here is to design perceptual, human-assisted search algorithms. Sorting and ranking are much harder tasks for human beings than pairwise comparison. This motivates designing efficient algorithms that find the object most similar to a given query by asking human users questions of the type: ``does object A look more like object B or object C?''
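As a minimal sketch of this query model (not the paper's actual algorithm), the snippet below treats the human as a comparison oracle that only answers "which of A and B is closer to the query?" and finds the most similar object by tournament-style pairwise comparisons. The oracle here is simulated with a hidden Euclidean distance, purely as a stand-in for human judgment, which in reality cannot be quantified; all names are hypothetical.

```python
def comparison_oracle(points, query):
    """Simulated human oracle. Given two object ids, it answers which one
    is closer to the query. The hidden Euclidean distance stands in for
    human perceptual judgment; a real system would ask a user instead."""
    def dist(obj_id):
        return sum((p - q) ** 2 for p, q in zip(points[obj_id], query))

    def closer(a, b):
        return a if dist(a) <= dist(b) else b

    return closer

def find_most_similar(object_ids, oracle):
    """Find the object most similar to the query using only pairwise
    comparisons: keep a current champion and challenge it with every
    other object, one oracle question per challenger (n - 1 questions)."""
    best = object_ids[0]
    for challenger in object_ids[1:]:
        best = oracle(best, challenger)
    return best

# Toy database of objects with hidden coordinates (hypothetical data).
points = {"a": (0.0, 0.0), "b": (1.0, 1.0), "c": (5.0, 5.0)}
oracle = comparison_oracle(points, query=(0.9, 1.1))
print(find_most_similar(list(points), oracle))  # → b
```

This linear scan asks n - 1 questions per query; the point of a search algorithm in this setting is to exploit structure (e.g. a precomputed hierarchy) so that far fewer questions suffice.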
We proposed a new algorithm and analyzed its performance in terms of the number of questions users must answer as well as the amount of memory required to store the data. We believe that progress on this problem would lead to a large number of novel applications, ranging from criminal identification systems for law enforcement to company image phonebooks, and perhaps to social networks, recommendation systems, or new e-commerce applications.