DOI: 10.1145/2463676.2465307

An online cost sensitive decision-making method in crowdsourcing systems

Published: 22 June 2013

ABSTRACT

Crowdsourcing has created a variety of opportunities for many challenging problems by leveraging human intelligence. For example, applications such as image tagging, natural language processing, and semantic-based information retrieval can exploit crowd-based human computation to supplement existing computational algorithms. Naturally, human workers in crowdsourcing solve problems based on their knowledge, experience, and perception. It is therefore not clear which problems are better solved by crowdsourcing than by traditional machine-based methods alone. A cost-sensitive, quantitative analysis method is therefore needed.

In this paper, we design and implement a cost-sensitive method for crowdsourcing. We estimate the profit of a crowdsourcing job online, so that questions with no expected future profit from crowdsourcing can be terminated. Two models are proposed to estimate the profit of a crowdsourcing job: a linear value model and a generalized non-linear model. Using these models, the expected profit of obtaining new answers for a specific question is computed from the answers already received. A question is terminated in real time if the marginal expected profit of obtaining more answers is not positive. We also extend the method to publish a batch of questions in a single HIT. We evaluate the effectiveness of the proposed method using two real-world jobs on AMT. The experimental results show that our method outperforms the state-of-the-art methods.
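The termination rule described above (stop buying answers for a question once the marginal expected profit of one more answer is not positive) can be illustrated with a minimal sketch. This is not the paper's actual linear or non-linear model; it assumes binary questions, a single known worker accuracy p, and a hypothetical linear value function (value_per_conf times the confidence in the majority answer, minus a per-answer cost). All function names and parameters below are illustrative.

```python
def posterior_yes(answers, p=0.7, prior=0.5):
    """Posterior probability that the true label is 'yes', assuming each
    worker is independently correct with probability p (assumed known)."""
    like_yes, like_no = prior, 1.0 - prior
    for a in answers:
        like_yes *= p if a == "yes" else 1.0 - p
        like_no *= (1.0 - p) if a == "yes" else p
    return like_yes / (like_yes + like_no)

def confidence(answers, p=0.7):
    """Confidence in the current majority decision."""
    q = posterior_yes(answers, p)
    return max(q, 1.0 - q)

def expected_marginal_profit(answers, p=0.7, value_per_conf=1.0, cost=0.05):
    """Expected change in profit from buying one more answer, under a
    simple linear value model: value_per_conf * confidence - total cost."""
    q = posterior_yes(answers, p)
    # Probability that the next worker answers 'yes'.
    pr_yes = q * p + (1.0 - q) * (1.0 - p)
    exp_conf_next = (pr_yes * confidence(answers + ["yes"], p)
                     + (1.0 - pr_yes) * confidence(answers + ["no"], p))
    return value_per_conf * (exp_conf_next - confidence(answers, p)) - cost

def should_terminate(answers, **kw):
    """Terminate the question when the marginal expected profit of one
    more answer is not positive."""
    return expected_marginal_profit(answers, **kw) <= 0.0

if __name__ == "__main__":
    answers = ["yes", "yes", "no"]
    print(expected_marginal_profit(answers))  # <= 0 here, so stop buying answers
    print(should_terminate(answers))          # True
```

The sketch only captures the decision rule itself; in practice the value model, worker accuracy estimation, and batching of questions into HITs would follow the models proposed in the paper.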


Published in

        SIGMOD '13: Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data
        June 2013
        1322 pages
ISBN: 9781450320375
DOI: 10.1145/2463676

        Copyright © 2013 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 22 June 2013


        Qualifiers

        • research-article

        Acceptance Rates

SIGMOD '13 Paper Acceptance Rate: 76 of 372 submissions, 20%. Overall Acceptance Rate: 785 of 4,003 submissions, 20%.
