After basic text, images are the content type most frequently sought through web search engines. “People want to find pictures,” says Chris Sherman, executive editor of Searchwise. However, he adds, “image search in general is pretty bad.” Today, searching for an image relies entirely on text, whether in the image’s file name or somewhere in a description, which means the searcher is dependent on how an image happens to be labeled. Search engines cannot “see” an image and report back on what it actually contains.
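To make that dependence concrete, consider a minimal sketch of keyword-based image lookup along the lines described above; the file names and descriptions are hypothetical. An image with no matching text is effectively invisible to the engine.

```python
# A toy model of text-based image search: the query is matched against
# human-supplied labels, never against pixel content.
images = [
    {"file": "IMG_0417.jpg", "description": "sunset over the beach"},
    {"file": "IMG_0532.jpg", "description": ""},  # an unlabeled photo of the sky
    {"file": "sky_clouds.jpg", "description": "blue sky with clouds"},
]

def text_search(query, records):
    """Return files whose name or description mentions the query term."""
    q = query.lower()
    return [r["file"] for r in records
            if q in r["file"].lower() or q in r["description"].lower()]

print(text_search("sky", images))
# -> ['sky_clouds.jpg']; the unlabeled sky photo is never found
```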
Researchers at the University of California, San Diego plan to change that. A team led by Nuno Vasconcelos is developing content-based image retrieval software called Supervised Multiclass Labeling (SML). “We want to teach the computer how to recognize what is shown in images,” says Vasconcelos. “Right now you get a lot of false positives because of the lack of description.”
“The idea behind the software is to use noisy annotated data to train a classifier capable of annotating, retrieving, and segmenting images,” explains Gustavo Carneiro, a former UCSD postdoctoral researcher now with Siemens Corporate Research. “Currently, with tools like Google image search, it is possible to gather quite a number of images of a specific visual class. For example, if a user types the query ‘sky,’ several images containing sky will be returned. However, these images rarely show the sky alone. We call this collection of images noisy annotated data. A classifier is an algorithm that looks for statistical regularities in data. If we give it enough images of a specific class, such as sky, the classifier will be able to identify what makes images containing sky different from other types of images.”
The classifier would “learn” the specifics of a particular class (e.g., a sky is usually blue, but can contain clouds or stars or take on other colors). Once this learning process has taken place, the classifier can take previously unseen images and determine whether they contain that class.
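That pipeline can be pictured in miniature. The following sketch illustrates the general idea, not the team’s SML code: the color-histogram feature, the off-the-shelf logistic-regression classifier, and the file names are all stand-in assumptions. It trains on weakly tagged images and then reports class probabilities for a previously unseen one.

```python
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def color_histogram(path, bins=8):
    """Crude global feature: a joint RGB histogram of the whole image.
    SML itself uses far richer localized features; this is only a stand-in."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / pixels.shape[0]

# Noisy annotated data: each image carries one weak, image-level tag,
# even though the tagged class typically fills only part of the frame.
# (File names are placeholders for a real collection.)
training_set = [("sky1.jpg", "sky"), ("sky2.jpg", "sky"),
                ("chair1.jpg", "chair"), ("chair2.jpg", "chair")]

X = np.stack([color_histogram(f) for f, _ in training_set])
y = [label for _, label in training_set]
classifier = LogisticRegression(max_iter=1000).fit(X, y)

# Annotate a previously unseen image with class probabilities.
probs = classifier.predict_proba([color_histogram("unseen.jpg")])[0]
for label, p in zip(classifier.classes_, probs):
    print(f"{label}: {p:.2f}")
```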
SML software is one step toward improving computer image recognition. Vasconcelos is quick to say that it won’t solve every problem. “There are many interpretations of what an image really is. There are many different ideas of what a chair is, for example. A person can make that visualization, but it doesn’t transfer well to the computer.”
While the software will be a boon to anyone who needs to search for specific digital images online, Vasconcelos sees it having benefits beyond the internet; the software could be used for surveillance, to help blind people, and to develop robots that could work in homes. “Those are long-term goals,” Vasconcelos says. “Improving and understanding computer vision is the core. The short-term goal is to help improve image searches online.”
“Besides the advantages of using visual information for searching image databases, our system is one of the few showing great potential to work on large-scale image databases,” Carneiro adds. “Also, unlike the current state-of-the-art image search systems in academic circles, ours has superior accuracy in annotation and retrieval, and runs faster than its competitors. We are currently working to make it scalable to gigantic databases, such as the entire internet.”
The improvements to image search can be used now in a limited capacity, but Vasconcelos says it will take a while to truly harness the technology. He likens it to speech-recognition software, which has been around for years but is still being perfected.
However, once it reaches its full potential, Vasconcelos believes it will revolutionize computer use, much like the internet did. “This may be the last variable for computers,” he says. “Once they can sense the world through vision, they will be even more useful.”
(www.ucsd.edu; www.vonliebig.ucsd.edu)