References
Amarel, S. (1968). On the representation of problems of reasoning about actions. In D. Michie (Ed.), Machine Intelligence (Vol. 3). Edinburgh: U. of Edinburgh Press.
Blumer, A., Ehrenfeucht, A., Haussler, D., & Warmuth, M.K. (1987). Occam's razor. Information Processing Letters, 24, 377–380.
Ehrenfeucht, A., Haussler, D., Kearns, M., & Valiant, L. (1988). A general lower bound on the number of examples needed for learning. COLT 88: Proceedings of the Conference on Learning Theory (pp. 110–120). Cambridge, MA: Morgan Kaufmann.
Fisher, D. (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2, 139–172.
Fisher, D.H., & McKusick, K.B. (1989). An empirical comparison of ID3 and back-propagation. IJCAI-89: Eleventh International Joint Conference on Artificial Intelligence (pp. 788–793). Detroit, MI: Morgan Kaufmann.
Hunt, E.B., Marin, J., & Stone, P.J. (1966). Experiments in induction. New York: Academic Press.
Judd, J.S. (1987). Learning in networks is hard. Proceedings of the First International Conference on Neural Networks (pp. 685–692). San Diego, CA: IEEE.
Kearns, M., & Valiant, L.G. (1988). Learning Boolean formulae or finite automata is as hard as factoring (Technical Report No. 14-88). Cambridge, MA: Harvard University, Aiken Computation Laboratory.
Michalski, R.S. (1969). On the quasi-minimal solution of the general covering problem. Proceedings of the Fifth International Federation on Automatic Control, 27, 109–129.
Mingers, J. (1989). An empirical comparison of selection measures for decision-tree induction. Machine Learning, 3, 319–342.
Mitchell, T.M. (1978). Version spaces: An approach to concept learning (Technical Report No. STAN-CS-78-711). Stanford, CA: Stanford University, Department of Computer Science.
Mitchell, T.M., Mahadevan, S., & Steinberg, L.I. (1985). LEAP: A learning apprentice for VLSI design. IJCAI-85: Ninth International Joint Conference on Artificial Intelligence (pp. 573–580). Los Angeles, CA: Morgan Kaufmann.
Mitchell, T.M., Utgoff, P.E., & Banerji, R.B. (1983). Learning by experimentation: Acquiring and refining problem-solving heuristics. In R.S. Michalski, J.G. Carbonell, & T.M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (Vol. 1). San Mateo, CA: Morgan Kaufmann.
Mooney, R., Shavlik, J., Towell, G., & Gove, A. (1989). An experimental comparison of symbolic and connectionist learning algorithms. IJCAI-89: Eleventh International Joint Conference on Artificial Intelligence (pp. 775–780). Detroit, MI: Morgan Kaufmann.
Pitt, L., & Warmuth, M.K. (1989). The minimum consistent DFA problem cannot be approximated within any polynomial. Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing (pp. 421–432). ACM.
Quinlan, J.R. (1983). Learning efficient classification procedures and their application to chess endgames. In R.S. Michalski, J.G. Carbonell, & T.M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (Vol. 1). San Mateo, CA: Morgan Kaufmann.
Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.
Quinlan, J.R. (1988). An empirical comparison of genetic and decision-tree classifiers. Proceedings of the Fifth International Conference on Machine Learning (pp. 135–141). Ann Arbor, MI: Morgan Kaufmann.
Rendell, L. (1983). A new basis for state-space learning systems and a successful implementation. Artificial Intelligence, 20, 369–392.
Rissanen, J. (1978). Modeling by shortest data description. Automatica, 14, 465–471.
Rosenblatt, F. (1957). The perceptron: A perceiving and recognizing automaton (Technical Report No. 85-460-1). Ithaca, NY: Project PARA, Cornell Aeronautical Laboratory.
Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). Learning internal representations by error propagation. In D.E. Rumelhart & J.L. McClelland (Eds.), Parallel distributed processing (Vol. 1). Cambridge, MA: MIT Press.
Schlimmer, J.C., & Fisher, D. (1986). A case study in incremental concept formation. Proceedings of the National Conference on Artificial Intelligence, AAAI-86 (pp. 496–501). Philadelphia, PA: Morgan Kaufmann.
Schlimmer, J.C., & Granger, R.H., Jr. (1986). Beyond incremental processing: Tracking concept drift. Proceedings of the National Conference on Artificial Intelligence, AAAI-86 (pp. 502–507). Philadelphia, PA: Morgan Kaufmann.
Schlimmer, J.C., & Granger, R.H., Jr. (1986). Incremental learning from noisy data. Machine Learning, 1, 317–354.
Utgoff, P.E. (1988). ID5: An incremental ID3. Proceedings of the Fifth International Conference on Machine Learning (pp. 107–120). Ann Arbor, MI: Morgan Kaufmann.
Valiant, L.G. (1984). A theory of the learnable. Communications of the ACM, 27, 1134–1142.
Weiss, S., & Kapouleas, I. (1989). An empirical comparison of pattern recognition, neural nets, and machine learning classification methods. IJCAI-89: Eleventh International Joint Conference on Artificial Intelligence (pp. 781–787). Detroit, MI: Morgan Kaufmann.
Additional information
The author is with the Department of Computer Science, Oregon State University.
Cite this article
Dietterich, T.G. Editorial: Exploratory research in machine learning. Mach Learn 5, 5–9 (1990). https://doi.org/10.1007/BF00115892