Reviewed by Susan Windisch Brown, University of Colorado
Eneko Agirre and Philip Edmonds have done an admirable job of providing a comprehensive look at the natural language processing task of word sense disambiguation (WSD). Rather than gathering papers on individuals’ recent research, the editors have commissioned the top names in the field to present overviews of major issues, methods, and research directions.
The first several chapters establish the context for word sense disambiguation: why it is seen as necessary for natural language processing (NLP) applications, what the parameters of the task are, and how to evaluate a system’s performance. Adam Kilgarriff (29–46) explores the fundamental question of the nature of word senses and the difficulty of establishing a definitive word sense inventory. Nancy Ide and Yorick Wilks (47–74) question the assumption that WSD systems must determine fine-grained sense distinctions. The need to shift to more coarse-grained sense distinctions is echoed by Martha Palmer, Hwee Tou Ng, and Hoa Trang Dang (75–106), within the context of explaining the current methods of WSD evaluation.
Diving into the nuts and bolts of implementing WSD systems, the next three chapters review several broad categories of methodology. Rada Mihalcea (107–32) explores knowledge-based methods, starting with the influential Lesk algorithm and continuing through measures of semantic similarity, selectional restrictions, and heuristic-based methods. Ted Pedersen (133–66) reviews unsupervised corpus-based methods, which he characterizes as ‘knowledge lean’ (134). Supervised corpus-based methods are described by Lluís Màrquez, Gerard Escudero, David Martínez, and German Rigau (167–216), including such algorithms as Naïve Bayes and Support Vector Machines. Each of these chapters covers the history of the approach, state-of-the-art research, and future directions for the methodology, and explores the approach’s strengths and weaknesses.
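To give a flavor of the knowledge-based tradition that Mihalcea’s chapter opens with, the following is a minimal sketch of the simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the surrounding context. The tiny sense inventory and stopword list here are invented for illustration; a real system would draw glosses from a resource such as WordNet.

```python
# A minimal sketch of the simplified Lesk algorithm. The sense
# inventory below is a hypothetical two-sense entry for "bank",
# not taken from any real lexical resource.

SENSES = {
    "bank": {
        "bank.n.01": "sloping land beside a body of water such as a river",
        "bank.n.02": "a financial institution that accepts deposits and lends money",
    }
}

# A toy stopword list so function words do not inflate overlap counts.
STOPWORDS = {"a", "an", "the", "of", "and", "such", "as", "that", "at", "from"}

def simplified_lesk(word: str, context: str) -> str:
    """Return the sense id whose gloss overlaps most with the context."""
    context_words = {w.lower() for w in context.split()} - STOPWORDS
    best_sense, best_overlap = None, -1
    for sense_id, gloss in SENSES[word].items():
        gloss_words = {w.lower() for w in gloss.split()} - STOPWORDS
        overlap = len(gloss_words & context_words)
        if overlap > best_overlap:
            best_sense, best_overlap = sense_id, overlap
    return best_sense
```

For example, a context mentioning a river should select the first sense, while one mentioning deposits and money should select the second. The chapter’s point is that such gloss-overlap methods need no training data, only a machine-readable dictionary, which is both their appeal and their limitation.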
Researchers have hoped to increase the performance of WSD systems with the addition of various linguistic features. Eneko Agirre and Mark Stevenson (217–52) identify and evaluate the types of linguistic knowledge that could be helpful for the task and match them to concrete features available from lexical resources. Given the enormous task of gathering such features by hand, Julio Gonzalo and Felisa Verdejo (253–74) explore methods for automatically acquiring lexical information. Paul Buitelaar, Bernardo Magnini, Carlo Strapparava, and Piek Vossen (275–98) discuss using domain-specific information, such as subject codes or topic signatures.
Philip Resnik (299–338) concludes the book with an excellent discussion of the ultimate goal of creating WSD systems: improvement of real natural language processing applications. He critiques some of the usual arguments for WSD, asserting that the only true test of WSD is assessing a system’s benefit to actual applications. He describes the level of success in current NLP applications and considers emerging applications and potential reformulations of the WSD task.
Lecturers, researchers, and students will benefit from this well-organized and highly informative book. Whether looking for an introduction to the field, an extension of their knowledge of methods and resources, or insight into future approaches, readers will be satisfied with this book.