Crowdsourcing for Multimedia Research

Crowdsourcing of paid research work is generally used to accomplish tasks that are easy for humans but difficult or impossible for computers. However, we are demonstrating that it is also feasible to train and qualify workers for more specialized work through a crowdsourcing service like Mechanical Turk; in other words, crowdsourcing can also be used to accomplish tasks that are difficult for both humans and computers. In fact, for some types of tasks, targeted crowdsourcing may produce better results than hiring interns or undergraduate assistants, for example: setting a high bar for crowd qualification can effectively compete with the direct-hire advantages of in-person training and monitoring.

The Audio and Multimedia research group has successfully experimented with crowdsourcing as part of the Multimodal Location Estimation project. In that study, we developed a tutorial for Mechanical Turk workers and a qualification task to screen their performance, then asked qualified workers to determine the location of non-GPS-tagged videos, providing a human baseline against which to assess our automatic system. We are investigating other areas in which targeted crowdsourcing can be used to gather and annotate large datasets for multimedia research, leading the multimedia field in exploring applications for this approach.
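
As a rough sketch of how such a gated workflow can be set up programmatically (this is not the study's actual code, and every name, file path, reward, and passing score below is hypothetical), the current boto3 MTurk API lets a requester publish a scored qualification test and then require a passing score on the annotation task:

# Illustrative sketch only: gate annotation HITs behind a scored
# qualification test using the boto3 MTurk client. All names, file
# paths, rewards, and thresholds are hypothetical.
import boto3
from pathlib import Path

mturk = boto3.client("mturk", region_name="us-east-1")

# Create a qualification type whose Test is the screening task; MTurk scores
# each submission against the answer key and assigns that score to the worker.
qual = mturk.create_qualification_type(
    Name="Video location estimation screening",                    # hypothetical name
    Description="Tutorial-based test of location-estimation skill",
    QualificationTypeStatus="Active",
    Test=Path("qualification_test.xml").read_text(),               # QuestionForm XML (assumed file)
    AnswerKey=Path("qualification_answer_key.xml").read_text(),    # answer key XML (assumed file)
    TestDurationInSeconds=3600,
)

# Only workers whose test score meets the threshold may accept the real task.
hit = mturk.create_hit(
    Title="Estimate where this video was recorded",
    Description="Guess the recording location of a video without GPS metadata",
    Reward="0.50",                                                  # illustrative reward
    MaxAssignments=3,
    AssignmentDurationInSeconds=1800,
    LifetimeInSeconds=7 * 24 * 3600,
    Question=Path("location_task.xml").read_text(),                 # QuestionForm XML (assumed file)
    QualificationRequirements=[{
        "QualificationTypeId": qual["QualificationType"]["QualificationTypeId"],
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [80],                                      # example passing score
        "ActionsGuarded": "Accept",
    }],
)
print("Created HIT", hit["HIT"]["HITId"])

Workers who take the test are scored automatically against the answer key, so only those clearing the chosen threshold ever see the location-estimation HITs.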


Recent Projects

Crowdsourcing Location Estimation:

We used crowdsourced labor to provide a human baseline against which to evaluate automatic location-estimation systems. How well does a human do at this task? And what can that tell us about how to improve computational approaches?
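
Human and automatic guesses alike are typically scored by how far they fall from the ground-truth coordinates; the minimal sketch below computes that great-circle (haversine) error, with the example coordinates chosen purely for illustration.

# Illustrative sketch (not the project's evaluation code): score a location
# guess by its great-circle distance from the ground-truth coordinates, as is
# common in location-estimation benchmarks.
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Hypothetical example: a worker guesses Berkeley, CA for a video shot in San Francisco.
guess = (37.8716, -122.2727)   # hypothetical worker answer
truth = (37.7749, -122.4194)   # hypothetical ground truth
print(f"error: {distance_km(*guess, *truth):.1f} km")  # roughly 17 km
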


Honors

Members of the Audio & Multimedia team were selected as winners in an Ideas Competition sponsored by the ACM Workshop on Crowdsourcing for Multimedia, and we participated in organizing the Crowdsourcing Task for the 2013 MediaEval Benchmark.