Abstract
On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context reappears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions, such as different levels of noise and different extents and rates of concept drift.
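The three mechanisms named in the abstract can be sketched as a small windowed learner. This is only an illustrative toy under assumed names and thresholds (the class name `FloraSketch`, the 0.5 accuracy cutoff, and majority-label "concept descriptions" are all inventions for this sketch), not the paper's actual FLORA algorithms:

```python
from collections import deque

class FloraSketch:
    """Toy sketch of the abstract's three mechanisms: (1) a window of
    currently trusted examples, (2) a store of old concept descriptions
    for re-use, and (3) a heuristic that monitors recent performance.
    All names and thresholds are illustrative assumptions."""

    def __init__(self, monitor_len=10, min_window=5, max_window=100):
        self.window = deque()                  # (1) trusted examples
        self.store = []                        # (2) saved descriptions
        self.hits = deque(maxlen=monitor_len)  # (3) performance monitor
        self.min_window = min_window
        self.max_window = max_window

    def _description(self, examples):
        # Toy "concept description": the majority label in the examples.
        labels = [y for _, y in examples]
        return max(set(labels), key=labels.count) if labels else None

    def predict(self, x):
        return self._description(self.window)

    def learn(self, x, y):
        # Monitor performance before learning from the new example.
        self.hits.append(self.predict(x) == y)
        self.window.append((x, y))
        if len(self.window) > self.max_window:
            self.window.popleft()
        acc = sum(self.hits) / len(self.hits)
        if acc < 0.5 and len(self.window) > self.min_window:
            # Heuristic signals a context change: remember the current
            # description for possible re-use, then narrow the window to
            # the most recent examples so the new context can take over.
            old = self._description(self.window)
            if old is not None and old not in self.store:
                self.store.append(old)
            while len(self.window) > self.min_window:
                self.window.popleft()
```

When the monitored accuracy collapses, the learner shrinks its window and archives the old description; a fuller treatment would also match incoming examples against the store to recognize a reappearing context.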
Cite this article
Widmer, G., Kubat, M. Learning in the Presence of Concept Drift and Hidden Contexts. Machine Learning 23, 69–101 (1996). https://doi.org/10.1023/A:1018046501280