
Constructive Induction of Cartesian Product Attributes

  • Chapter
Feature Extraction, Construction and Selection

Part of the book series: The Springer International Series in Engineering and Computer Science ((SECS,volume 453))

Abstract

Constructive induction is the process of changing the representation of examples by creating new attributes from existing ones. In classification, the goal of constructive induction is to find a representation that facilitates learning a concept description by a particular learning system. Typically, the new attributes are Boolean or arithmetic combinations of existing attributes, and the learning algorithms used are decision tree or rule learners. We describe the construction of new attributes that are the Cartesian product of existing attributes. We consider the effects of this operator on three learning algorithms and compare two different methods for determining when to construct new attributes with this operator.
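As a minimal sketch of the operator the abstract describes, the following replaces two categorical attributes with a single joint attribute whose values range over the Cartesian product of the originals' value sets. The function and attribute names are illustrative, not taken from the chapter; the XOR-style toy data simply shows why a learner might benefit: neither attribute alone predicts the class, but the joint attribute does.

```python
def cartesian_product_attribute(examples, attr_a, attr_b):
    """Replace attributes attr_a and attr_b in each example with one
    constructed attribute whose value is the pair (attr_a, attr_b)."""
    joined = f"{attr_a}x{attr_b}"  # illustrative name for the new attribute
    new_examples = []
    for ex in examples:
        ex = dict(ex)  # copy so the original examples are untouched
        ex[joined] = (ex.pop(attr_a), ex.pop(attr_b))
        new_examples.append(ex)
    return new_examples

# Toy XOR-like data: each original attribute is individually
# uninformative, but the joint attribute determines the class.
data = [
    {"a": 0, "b": 0, "class": "-"},
    {"a": 0, "b": 1, "class": "+"},
    {"a": 1, "b": 0, "class": "+"},
    {"a": 1, "b": 1, "class": "-"},
]
transformed = cartesian_product_attribute(data, "a", "b")
```

A wrapper around this operator would then compare a learner's estimated accuracy before and after the substitution to decide whether to keep the constructed attribute.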




Copyright information

© 1998 Springer Science+Business Media New York

About this chapter

Cite this chapter

Pazzani, M.J. (1998). Constructive Induction of Cartesian Product Attributes. In: Liu, H., Motoda, H. (eds) Feature Extraction, Construction and Selection. The Springer International Series in Engineering and Computer Science, vol 453. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5725-8_21


  • DOI: https://doi.org/10.1007/978-1-4615-5725-8_21

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7622-4

  • Online ISBN: 978-1-4615-5725-8

  • eBook Packages: Springer Book Archive
