ABSTRACT
The elements of a document surrogate serve as clues to relevance: they can be seen as operationalized relevance criteria by which users judge the relevance of a search result against their information need. In addition to short textual summaries, today's academic search systems integrate further data into their results presentation, for example, the number of citations or the number of downloads. Such data can be described as popularity data, and it also serves as a set of factors incorporated into search engines' ranking algorithms. Past research shows that diverse criteria and factors are involved in relevance judgments from the user perspective. However, previous empirical studies on relevance criteria and clues examined surrogates that did not include popularity data. The goal of my doctoral research is to determine the criteria by which users in an academic search situation make relevance judgments based on surrogates that include popularity data. This paper describes the current state of the experimental research design and the method of data collection.
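To make the notion of popularity data as ranking factors concrete, the following is a minimal sketch of how a ranking function might blend a query-document text score with popularity signals such as citation and download counts. It is illustrative only: the function name, the weights, and the log-scaling are assumptions for the sake of the example, not the formula of any particular search system or of the study described here.

```python
import math

def popularity_aware_score(text_score: float, citations: int, downloads: int,
                           w_text: float = 0.7, w_cite: float = 0.2,
                           w_dl: float = 0.1) -> float:
    """Blend a textual relevance score with popularity signals.

    Hypothetical example: weights and log-scaling are assumptions,
    not a documented production ranking formula.
    """
    # Log-scale raw counts so a handful of very popular documents
    # does not dominate the ranking.
    cite_signal = math.log1p(citations)
    dl_signal = math.log1p(downloads)
    return w_text * text_score + w_cite * cite_signal + w_dl * dl_signal

# Two results with equal textual relevance are separated by popularity.
print(popularity_aware_score(0.80, citations=120, downloads=3400))
print(popularity_aware_score(0.80, citations=3, downloads=40))
```

A linear combination is chosen here purely for readability; real systems may combine such signals in more complex, learned ways, which is part of what makes their effect on users' relevance judgments worth studying.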