Research Article

Providing Web Credibility Assessment Support

Published: 01 September 2014

ABSTRACT

The abundance of information from multiple sources on the internet requires users to evaluate its credibility before use. Researchers have found that internet users have difficulty locating the information they need and do not pay enough attention to its credibility. We present the design and implementation of an automated Web Credibility Assessment Support Tool (WebCAST) that considers multiple factors (type of website, popularity, sentiment, date of last update, reputation, and reviews based on users' ratings reflecting personal experience) to assess the credibility of information and returns a summary indication of a website's credibility. We use the Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA) method of Multi-Criteria Decision Analysis (MCDA) to assign weights to the scale values on each factor, representing the relative importance of the attributes. An empirical evaluation of the tool was conducted by computing the correlation between the tool-generated credibility scores and those of human judges. The correlation was found to be 0.89, supporting the validity of the tool. In the future, the proposed tool could support students in learning credibility assessment.
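The additive multi-attribute scoring the abstract describes can be sketched as follows. The factor names, scale values, and weights here are illustrative assumptions only; in the paper the actual weights are elicited with the PAPRIKA pairwise-ranking method, and the tool's evaluation compares scores to human judges via correlation:

```python
import math

# Hypothetical factor weights for a weighted additive credibility model.
# These values are placeholders, not the PAPRIKA-derived weights from the paper.
WEIGHTS = {
    "site_type":   0.25,
    "popularity":  0.20,
    "sentiment":   0.10,
    "last_update": 0.15,
    "reputation":  0.15,
    "user_rating": 0.15,
}

def credibility_score(factors):
    """Weighted sum of normalized factor scale values in [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

def pearson(xs, ys):
    """Pearson correlation, of the kind used to compare tool scores
    against human judges' ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Example website with assumed normalized scale values per factor.
site = {
    "site_type":   1.0,  # e.g. a .gov or .edu domain
    "popularity":  0.8,
    "sentiment":   0.6,
    "last_update": 0.9,
    "reputation":  0.7,
    "user_rating": 0.5,
}
print(round(credibility_score(site), 3))  # 0.785 for these example values
```

The summary score is a single number in [0, 1] because the weights sum to one and each factor value is normalized, which is what lets the tool return one credibility indication per website.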


Published in

ECCE '14: Proceedings of the 2014 European Conference on Cognitive Ergonomics
September 2014, 191 pages
ISBN: 9781450328746
DOI: 10.1145/2637248

        Copyright © 2014 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

Acceptance Rates

Overall Acceptance Rate: 56 of 91 submissions, 62%
