- AK91. Dana Angluin and Michael Kharitonov. When won't membership queries help? In Proceedings of the 23rd Annual ACM Symposium on Theory of Computing, pages 444-454, 1991.
- BI91. Gyora M. Benedek and Alon Itai. Learnability with respect to fixed distributions. Theoretical Computer Science, 86(2):377-390, 1991.
- CT91. Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley and Sons, Inc., 1991.
- HKLW91. David Haussler, Michael Kearns, Nick Littlestone, and Manfred K. Warmuth. Equivalence of models for polynomial learnability. Information and Computation, 95(2):129-161, 1991.
- HM91. Thomas Hancock and Yishay Mansour. Learning monotone kμ DNF formulas on product distributions. In Proceedings of COLT '91, pages 179-183. Morgan Kaufmann, 1991.
- Kea93. Michael Kearns. Efficient noise-tolerant learning from statistical queries. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 392-401, 1993.
- KM93. Daphne Koller and Nimrod Megiddo. Constructing small sample spaces satisfying given constraints. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 268-277, 1993.
- KN95. Joe Kilian and Moni Naor. On the complexity of statistical reasoning. In Proceedings of the 3rd Israel Symposium on Theory of Computing and Systems, pages 209-217, 1995.
- KV89. Michael Kearns and Leslie G. Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pages 433-444, 1989.
- KLV94. Michael Kearns, Ming Li, and Leslie G. Valiant. Learning Boolean formulas. Journal of the ACM, 41(6):1298-1328, 1994.
- Nat87. B. K. Natarajan. On learning Boolean functions. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 296-304, May 1987.
- Pap91. Athanasios Papoulis. Probability, Random Variables, and Stochastic Processes, Chapter 15. McGraw-Hill, third edition, 1991.
- PV88. Leonard Pitt and Leslie G. Valiant. Computational limitations on learning from examples. Journal of the ACM, 35(4):965-984, 1988.
- SWG85. C. Ray Smith and W. T. Grandy, Jr., editors. Maximum-Entropy and Bayesian Methods in Inverse Problems. D. Reidel Publishing Company, 1985.
- Val84. Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, November 1984.
Index Terms
- Learning with maximum-entropy distributions