Thursday May 15, 2014
I was exploring Google's research papers the other day and came across Quizz: Targeted Crowdsourcing with a Billion (Potential) Users by Ipeirotis and Gabrilovich. Downside: it occasionally reads like a Google ad. Upside: really interesting results from an experimental Q&A system that is still live. It's very cool. Here are some quotes with my commentary:
... the strong self-selection of high-quality users to continue contributing, while low-quality users self-select to drop out. ... there is little incentive for unpaid users to continue participating when there is no monetary reward and they are not good at the task.
The goal of the system was not educational, so they celebrate the fact that it isn't fun if you suck.
These results indicate that users may be more interested in learning about the topic rather than just knowing whether they answered correctly.
One result was that people answer more questions when the interface shows the correct answer as feedback, rather than just telling them "correct" or "incorrect." This section of experimental results was particularly interesting, including commentary on possible failures of leaderboards.
... as more and more users participate, the achievements of the top users are difficult to match, effectively discouraging users from trying harder.
They did say that a leaderboard including only the last week's worth of results was more effective.
I'm less interested in applying this kind of system to crowd-sourcing information and more interested in educational applications, but there is clear overlap, and cited papers such as The multidimensional wisdom of crowds seem very interesting. Also, through Ipeirotis' blog I found out about Smarterer, which is interesting as well. There's some sort of spectrum, or multi-dimensional thing going on, with education, crowdsourcing, and evaluation all in the mix.
The authors' application of information gain and a Markov Decision Process is also interesting.
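To make the information-gain idea concrete, here is a minimal sketch of one common pattern in this kind of system: prefer asking the question whose current answer distribution is most uncertain, since its answer is expected to be most informative. This is my own illustrative toy, not the paper's actual algorithm; the function names and the tally data are hypothetical.

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of an empirical answer distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def pick_question(answer_tallies):
    """Pick the question whose answer tallies are most uncertain (highest
    entropy) -- a simple proxy for expected information gain."""
    return max(answer_tallies, key=lambda q: entropy(answer_tallies[q]))

# Hypothetical tallies: question -> votes so far for each candidate answer.
tallies = {
    "q1": [9, 1],  # near-consensus, low entropy (~0.47 bits)
    "q2": [5, 5],  # maximally uncertain, 1.0 bit
}
print(pick_question(tallies))  # -> "q2"
```

A real system would weight each vote by an estimate of the answerer's quality, which is where the user-quality modeling discussed above comes in.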
This post was originally hosted elsewhere.