Ink: Increasing Worker Agency to Reduce Friction in Hiring Crowd Workers

Abstract
The web affords connections by which end-users can receive paid, expert help (such as programming, design, and writing) to reach their goals. While a number of online marketplaces have emerged to facilitate such connections, most end-users do not approach a market to hire an expert when faced with a challenge. To reduce friction in hiring from peer-to-peer expert crowd work markets, we propose Ink, a system that crowd workers can use to showcase their services by embedding tasks inside web tutorials, a common destination for users with information needs. Workers have agency to define and manage tasks, through which users can request their help to review or execute each step of the tutorial, for example, to give feedback on a paper outline, perform a statistical analysis, or host a practice programming interview. In a public deployment, over 25,000 pageviews led 168 tutorial readers to pay crowd workers for their services; most of these readers had not previously hired from a crowdsourcing marketplace. A field experiment showed that users were more likely to hire crowd experts when the task was embedded inside the tutorial than when they were redirected to the same worker’s Upwork profile to hire them. Qualitative analysis of interviews showed that Ink framed hiring expert crowd workers within users’ well-established information seeking habits and gave workers more control over their work.