Abstract
This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances.
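The core idea — scoring entire candidate segmentations under a prior that favors a compact lexicon reused across the corpus — can be illustrated with a toy two-part, description-length-style scorer. This is a simplified sketch for intuition only, not the paper's actual model (MBDP-1): the cost function, the uniform per-character spelling cost, and the alphabet size are illustrative assumptions, and the exhaustive search below is only feasible for very short strings.

```python
from itertools import combinations
from math import log2

def segmentations(s):
    """Yield every segmentation of s as a list of words
    (one choice per subset of the n-1 internal cut points)."""
    n = len(s)
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            pieces, prev = [], 0
            for c in cuts:
                pieces.append(s[prev:c])
                prev = c
            pieces.append(s[prev:])
            yield pieces

def description_length(words, alphabet_size=26):
    """Two-part cost in bits: spell each distinct word once in the
    lexicon, then encode the corpus as draws from the empirical
    word-frequency distribution. Lower is better."""
    lexicon = set(words)
    # each character plus an end-of-word marker, uniformly coded
    lex_bits = sum((len(w) + 1) * log2(alphabet_size + 1) for w in lexicon)
    total = len(words)
    corpus_bits = -sum(log2(words.count(w) / total) for w in words)
    return lex_bits + corpus_bits

def best_segmentation(s):
    """Exhaustively pick the segmentation with minimal total cost."""
    return min(segmentations(s), key=description_length)
```

On a short repetitive string such as `"thedogthedog"`, the scorer prefers reusing one lexical entry (`["thedog", "thedog"]`) over either a single unanalyzed word or a character-by-character split, which is the qualitative behavior the generative prior described above is designed to produce.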
Brent, M.R. An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery. Machine Learning 34, 71–105 (1999). https://doi.org/10.1023/A:1007541817488