Research article
DOI: 10.1145/3404835.3462888

Long-Tail Hashing

Published: 11 July 2021

ABSTRACT

Hashing, which represents data items as compact binary codes, has become an increasingly popular technique for tasks such as large-scale image retrieval, owing to its very fast search speed and extremely economical memory consumption. However, existing hashing methods learn binary codes from artificially balanced datasets, which are rarely available in real-world scenarios. In this paper, we propose the Long-Tail Hashing Network (LTHNet), a novel two-stage deep hashing approach that addresses the problem of learning to hash on more realistic datasets whose labels roughly follow a long-tail distribution. In the first stage, an end-to-end deep neural network learns relaxed embeddings of the given dataset while taking its long-tail characteristic into account; in the second stage, those embeddings are binarized. A critical component of LTHNet is its dynamic meta-embedding module, extended with a determinantal point process, which adaptively realizes visual knowledge transfer between head and tail classes and thus enriches image representations for hashing. Our experiments show that LTHNet achieves dramatic performance improvements over all state-of-the-art competitors on long-tail datasets, with little or no sacrifice on balanced datasets. Further analyses reveal that, somewhat surprisingly, directly manipulating class weights in the loss function has little effect, whereas the extended dynamic meta-embedding module, the use of cross-entropy loss instead of squared loss, and a relatively small training batch size all contribute to LTHNet's success.
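To make the two-stage recipe above concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea only, not the authors' implementation: stage one trains a network end-to-end to produce relaxed real-valued embeddings under a cross-entropy objective, and stage two binarizes those embeddings with the sign function. All names (HashEncoder, code_length, the feature and class dimensions) are illustrative assumptions, and the dynamic meta-embedding / determinantal point process module is omitted.

    # Hypothetical sketch of a generic two-stage deep hashing pipeline
    # (relaxed embedding learning, then binarization); not the paper's code.
    import torch
    import torch.nn as nn

    class HashEncoder(nn.Module):
        """Image features -> relaxed hash embedding -> class logits."""
        def __init__(self, feat_dim: int, code_length: int, num_classes: int):
            super().__init__()
            self.hash_layer = nn.Sequential(
                nn.Linear(feat_dim, code_length),
                nn.Tanh(),                      # keeps relaxed codes in (-1, 1)
            )
            self.classifier = nn.Linear(code_length, num_classes)

        def forward(self, feats: torch.Tensor):
            relaxed_codes = self.hash_layer(feats)
            logits = self.classifier(relaxed_codes)
            return relaxed_codes, logits

    # Stage 1: learn relaxed embeddings end-to-end with a cross-entropy loss
    # (the abstract reports cross-entropy working better than squared loss).
    model = HashEncoder(feat_dim=2048, code_length=64, num_classes=100)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    feats = torch.randn(32, 2048)               # a relatively small batch of image features
    labels = torch.randint(0, 100, (32,))
    relaxed_codes, logits = model(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Stage 2: binarize the learned embeddings into compact binary codes.
    with torch.no_grad():
        binary_codes = torch.sign(model(feats)[0])   # entries in {-1, +1}

In a long-tail setting, the abstract suggests that the gains come mainly from enriching tail-class representations (via the extended dynamic meta-embedding) before this binarization step, rather than from reweighting classes in the loss.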


Supplemental Material

fp0434-Sigir2021-[Long-Tail Hashing]-Shotcut.mp4 (MP4, 56.9 MB)


Index Terms

  1. Long-Tail Hashing

Published in

SIGIR '21: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2021, 2998 pages
ISBN: 9781450380379
DOI: 10.1145/3404835

        Copyright © 2021 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States


Acceptance Rates

Overall acceptance rate: 792 of 3,983 submissions, 20%
