DOI: 10.1145/967900.968016
Article

On support thresholds in associative classification

Published: 14 March 2004

ABSTRACT

Associative classification is a well-known technique for structured data classification. Most previous work on associative classification uses support-based pruning for rule extraction and usually sets the threshold value to 1%. This threshold keeps rule extraction tractable and, on average, yields good accuracy. We believe that this threshold may not be appropriate in some cases, since it does not take the class distribution of the dataset into account. In this paper we investigate the effect of the support threshold on classification accuracy. Lower support thresholds are often infeasible with current extraction algorithms, or may cause the generation of a huge rule set. To observe the effect of varying the support threshold, we first propose a compact form to encode a complete rule set. We then develop a new classifier, named L3G, based on the compact form. Taking advantage of the compact form, the classifier can also be built with rather low-support rules. We ran a variety of experiments with different support thresholds on datasets from the UCI machine learning repository. The experiments showed that the optimal accuracy is obtained for variable threshold values, sometimes lower than 1%.
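To make the role of the support threshold concrete, the following is a minimal Python sketch of support-based pruning for class association rules. It is not the paper's L3G classifier or its compact rule-set encoding; the toy data, the function name mine_class_rules, and the parameters min_support and max_len are illustrative assumptions only.

# Minimal sketch of support-based pruning for class association rules.
# Each record is assumed to be (set_of_items, class_label); names and
# parameters are illustrative, not taken from the paper.
from itertools import combinations
from collections import Counter

def mine_class_rules(records, min_support=0.01, max_len=2):
    """Return rules (itemset -> class) whose relative support meets min_support."""
    n = len(records)
    counts = Counter()
    for items, label in records:
        for k in range(1, max_len + 1):
            for subset in combinations(sorted(items), k):
                counts[(subset, label)] += 1
    # Keep only rules whose relative support reaches the threshold.
    return {rule: c / n for rule, c in counts.items() if c / n >= min_support}

if __name__ == "__main__":
    data = [({"a", "b"}, "yes"), ({"a", "c"}, "yes"),
            ({"b", "c"}, "no"), ({"a", "b", "c"}, "no")]
    print(len(mine_class_rules(data, min_support=0.5)))   # fewer rules
    print(len(mine_class_rules(data, min_support=0.25)))  # larger rule set

Lowering min_support in the example enlarges the extracted rule set, which illustrates the trade-off between coverage and tractability that motivates the compact form discussed in the abstract.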


Published in

SAC '04: Proceedings of the 2004 ACM Symposium on Applied Computing
March 2004
1733 pages
ISBN: 1581138121
DOI: 10.1145/967900

Copyright © 2004 ACM
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 14 March 2004


        Qualifiers

        • Article

        Acceptance Rates

Overall Acceptance Rate: 1,650 of 6,669 submissions, 25%
