text (string, ~1k chars) | year (string, 12 classes) | No (string, 1–3 chars)
---|---|---|
J. Norwood Crout Artificial Intelligence Corporation The INTELLECT natural language database query system, a product of Artificial Intelligence Corporation, is the only commercially available system with true English query capability. Based on experience with INTELLECT in the areas of quality assurance and customer support, a number of issues in evaluating a natural language database query system, particularly the INTELLECT system, will be discussed. A. I. Corporation offers licenses for customers to use the INTELLECT software on their computers, to access their databases. We now have a number of customer installations, plus reports from companies that are marketing INTELLECT under agreements with us, so that we can begin to discuss user reactions as possible criteria for evaluating our system. INTELLECT's basic function is to translate typed English queries into retrieval commands for a database management
|
1981
|
7
|
SELECTIVE PLANNING OF INTERFACE EVALUATIONS William C. Mann USC Information Sciences Institute 1 The Scope of Evaluations The basic idea behind evaluation is a simple one: An object is produced and then subjected to trials of its performance. Observing the trials reveals things about the character of the object, and reasoning about those observations leads to statements about the "value" of the object, a collection of such statements being an "evaluation." An evaluation thus differs from a description, a critique or an estimate. For our purposes here, the object is a database system with a natural language interface for users. Ideally, the trials are an instrumented variant of normal usage. The character of the users, their tasks, the data, and so forth are representative of the intended use of the system. In thinking about evaluations we need to be clear about the intended scope. Is it the whole system that is to be evaluated,
|
1981
|
8
|
FIELD TESTING THE TRANSFORMATIONAL QUESTION ANSWERING (TQA) SYSTEM S. R. Petrick IBM T.J. Watson Research Center PO Box 218 Yorktown Heights, New York 10598 The Transformational Question Answering (TQA) system was developed over a period of time beginning in the early part of the last decade and continuing to the present. Its syntactic component is a transformational grammar parser [1, 2, 3], and its semantic component is a Knuth attribute grammar [4, 5]. The combination of these components provides sufficient generality, convenience, and efficiency to implement a broad range of linguistic models; in addition to a wide spectrum of transformational grammars, Gazdar-type phrase structure grammar [6] and lexical functional grammar [7] systems appear to be cases in point, for example. The particular grammar which was, in fact,
|
1981
|
9
|
TRANSLATING ENGLISH INTO LOGICAL FORM Stanley J. Rosenschein Stuart M. Shieber ABSTRACT A scheme for syntax-directed translation that mirrors compositional model-theoretic semantics is discussed. The scheme is the basis for an English translation system called PATR and was used to specify a semantically interesting fragment of English, including such constructs as tense, aspect, modals, and various lexically controlled verb complement structures. PATR was embedded in a question-answering system that replied appropriately to questions requiring the computation of logical entailments. I INTRODUCTION When contemporary linguists and philosophers speak of "semantics," they usually mean model-theoretic semantics--mathematical devices for associating truth conditions with sentences. Computational linguists, on the other hand, often use the term "semantics" to denote a phase of processing in which a data structure (e.g., a formula
|
1982
|
1
|
ENGLISH WORDS AND DATA BASES: HOW TO BRIDGE THE GAP Remko J.H. Scha Philips Research Laboratories Eindhoven The Netherlands ABSTRACT If a q.a. system tries to transform an Eng- lish question directly into the simplest possible formulation of the corresponding data base query, discrepancies between the English lexicon and the structure of the data base cannot be handled well. To be able to deal with such discrepancies in a systematic way, the PHLIQAI system distinguishes different levels of semantic representation; it contains modules which translate from one level to another, as well as a module which simplifies expressions within one level. The paper shows how this approach takes care of some phenomena which would be problematic in a more simple-minded set-up. I INTRODUCTION If a question-answering system is to cover a non-trivial fragment of its natural input-language, and to allow for an arbitrarily structured data
|
1982
|
10
|
Problems with Domain-Independent Natural Language Database Access Systems Steven P. Shwartz Cognitive Systems Inc. 234 Church Street New Haven, Ct. 06510 In the past decade, a number of natural language database access systems have been constructed (e.g. Hendrix 1976; Waltz et al. 1976; Sacerdoti 1978; Harris 1979; Lehnert and Shwartz 1982; Shwartz 1982). The level of performance achieved by natural language database access systems varies considerably, with the more robust systems operating within a narrow domain (i.e., content area) and relying heavily on domain-specific knowledge to guide the language understanding process. Transporting a system constructed for one domain into a new domain is extremely resource-intensive because a new set of domain-specific knowledge must be encoded. In order to reduce the cost o
|
1982
|
11
|
ISSUES IN NATURAL LANGUAGE ACCESS TO DATABASES FROM A LOGIC PROGRAMMING PERSPECTIVE David H D Warren Artificial Intelligence Center SRI International, Menlo Park, CA 94025, USA I INTRODUCTION I shall discuss issues in natural language (NL) access to databases in the light of an experimental NL questlon-answering system, Chat, which I wrote with Fernando Perelra at Edinburgh University, and which is described more fully elsewhere [8] [6] [5]. Our approach was strongly influenced by the work of Alaln Colmerauer [2] and Veronica Dahl [3] at Marseille University. Chat processes a NL question in three main stages: translation planning execution English .... > logic .... > Prolog .... > answer corresponding roughly to: "What does the question mean?", "How shall I answer it?", "What is the answe
|
1982
|
12
|
NATURAL LANGUAGE DATABASE UPDATES Sharon C. Salveter David Maier Computer Science Depar=ment SUNY Stony Brook Stony Brook, NY 11794 ABSTRACT Although a great deal of research effort has been expended in support of natural language (NL) database querying, little effort has gone to NL database update. One reason for this state of affairs is that in NL querying, one can tie nouns and stative verbs in the query to database objects (relation names, attributes and domain values). In many cases this correspondence seems sufficient to interpret NL queries. NL update seems to require database counterparts for active verbs, such as "hire," "schedule" and "enroll," rather than for stative entities. There seem to be no natural can- didates to fill this role. We suggest a database counterpart for active verbs, which we call verbsraphs. The verbgraphs may be used to support NL update. A verbgraph is a structure fo
|
1982
|
13
|
PROCESSING ENGLISH WITH A GENERALIZED PHRASE STRUCTURE GRAMMAR Jean Mark Gawron, Jonathan King, John Lamping, Egon Loebner, Eo Anne Paulson, Geoffrey K. Pullum, Ivan A. Sag, and Thomas Wasow Computer Research Center Hewlett Packard Company 1501 Page Mill Road Palo Alto, CA 94304 ABSTRACT This paper describes a natural language processing system implemented at Hewlett-Packard's Computer Research Center. The system's main components are: a Generalized Phrase Structure Grammar (GPSG); a top-down parser; a logic transducer that outputs a first-order logical representation; and a "disambiguator" that uses sortal information to convert "normal-form" first-order logical expressions into the query language for HIRE, a relational database hosted in the SPHERE system. We argue that theoretical developments in GPSG syntax and in Mo
|
1982
|
14
|
Experience with an Easily Computed Metric for Ranking Alternative Parses George E. Heidorn Computer Sciences Department IBM Thomas J. Watson Research Center Yorktown Heights, New York 10598 Abstract This brief paper, which is itself an extended abstract for a forthcoming paper, describes a metric that can be easily com- puted during either bottom-up or top-down construction of a parse tree for ranking the desirability of alternative parses. In its simplest form, the metric tends to prefer trees in which constituents are pushed as far down as possible, but by appro- priate modification of a constant in the formula other behavior can be obtained also. This paper includes an introduction to the EPISTLE system being developed at IBM Research and a discussion of the results of using this metric with that system. Introduction Heidorn (1976) described a technique for computing a number for each node during the bottom-up construction
|
1982
|
15
|
An Improved Heuristic for Ellipsis Processing* Ralph M. Welschedel Department of Computer & Information Sciences University of Delaware Newark, Delaware 19711 and Norman K. Sondheimer Software Research Sperry Univac MS 2G3 Blue Bell, Pennsylvania 19424 I. Introduction Robust response to ellipsis (fragmen- tary sentences) is essential to acceptable natural language interfaces. For in- stance, an experiment with the REL English query system showed 10% elliptical input (Thompson, 1980). In Quirk, et al. (1972), three types of contextual ellipsis have been identi- fied: I. repetition, if the utterance is a fragment of the previous sentence. 2. replacement, if the input replaces a structure in the previous sentence. 3. expansion, if the input adds a new type of structure to those used in the previous sentence. Instances of th
|
1982
|
16
|
REFLECTIONS ON 20 YEARS OF THE ACL AN INTRODUCTION Donald E. Walker Artificial In~elligence Center SRI International Menlo Park, California 94025, USA Our society was founded on 13 June 1962 as the Association for Machine Translation and Computational Linguistics. Consequently, this 1982 Annual Meeting represents our 20th anniversary. We did, Of course, change our name to the Association for Computational Linguistics in 1968, but that did not affect the continuity of the organization. The date of this panel, 17 June, misses the real anniversary by four days, but no matter; the occasion still allows us to reflect on where we have been and where we are going. I seem to be sensitive to opportunities for celebrations. In looking through my AMTCL/ACL correspondence over the years, I came across a copy
|
1982
|
17
|
OUR DOUBLE ANNIVERSARY Victor H. Yngve University of Chicago Chlcngo, 1111nols 60637 USA ABSTRACT In June of 1952, ten years before the founding of the Association, the first meeting ever held on computational linguistics took place. This meet- ing, the succeeding ten years, and the first year of the Association are discussed. Some thoughts are offered as to what the future may bring. I THE EARLY YEARS When the suggestion came from Don Walker to celebrate our twentieth anniversary by a panel discussion I responded with enthusiasm at the op- portunlty for us all to reminisce. Much has hap- pened in those twenty years to look back on, and there have been many changes: Not many here will remember that founding meeting. As our thoughts go back to the beginnings it must also be with a note of sadness, for some of our most illustrious early members can no longer be counted among
|
1982
|
18
|
2002: ANOTHER SCORE David G. Hays Metagram 25 Nagle Avenue, Apartment 3-G New York, NY 10040 Twenty years is a long time to spend in prison, but it is a short time in intellectual history. In the few years Just prior to the foundation of this Association, we had come from remarkably complex but nevertheless rather superficial analysis of text as strings of characters or, perhaps, lexlcal units to programs for parsing that operated on complex grammatical symbols but according to rather simple general principles; the programs could be independent of the language. And at the moment of foundation, we had--In the words of D. R. Swanson--run up against the stone wall of semantics. No one at the time could say whether it was the wall of a prison yard or another step in the old intellectual pyramid. On my reading, the
|
1982
|
19
|
LINGUISTIC AND COMPUTATIONAL SEMANTICS* Brian Cantwell Smith XEROX Palo Alto Research Center 3333 Coyote Hill Road, Palo Alto, CA 94304 ABSTRACT We argue that because the very concept of computation rests on notions of interpretation, the semantics of natural languages and the semantics of computational formalisms are in the deepest sense the same subject. The attempt to use computational formalisms in aid of an explanation of natural language semantics, therefore, is an enterprise that must be undertaken with particular care. We describe a framework for semantical analysis that we have used in the computational realm, and suggest that it may serve to underwrite computadonally-oriented linguistic ser.antics as well. The major feature of this framework is the explicit recognition of both the declarative and the procedural import of meaningful expressions; we argue that whereas these two viewpoints have traditionally been taken as alt
|
1982
|
2
|
MY TERM Winfred P. Lehmann Department of Linguistics The University of Texas Austin, Texas 78712 My term came at the time of the New York World Fair. The Association, still of MT as well as CL, was trying to crash the club that shared profits from the annual meetings of AFIFS. These were producing something over $20,000, a sum which in those days would do more than pay a fraction of one's annual overhead on an NSF grant. ACM of course was grabbing the bulk of this, but ÁEEE wasn't doing badly, to Judge by the magnificence of its Journal. I attended the powwows of the powers national and international. In spite of our run-down heels, they treated me courteously. Among other actlvltfes we Journeyed out for a preview of the World Fair exhibits. IBM's massive show, with a que~tlon-answer demonstration as its high
|
1982
|
20
|
A SOCIETY IN TRANSITION Donald E. Walker Artificial Intelligence Center SRI International Menlo Park, California 94025, USA I was President in 1968, the year during which the Association for Machine Translation and Computational Linguistics became the Association for Computational Linguistics. Names always create controversy, and the founding name, selected in 1962, was chosen in competition with others, not the least of which was the one that subsequently replaced it. In fact, a change of name to the Association for Computational Linguistics was actually approved in 1963, at what has been described as an "unofficial meeting." However, that action was subsequently ruled out of order, since it did not result from a constitutional amendment. Five years later, proper procedure having been followed, the change was made official
|
1982
|
21
|
THEMES FROM 1972 Robert F. Simmons Department of Computer Sciences University of Texas at Austin Austin, TX 78712 Although 1972 was the year that Winograd published his now classic natural language Study of the blocks world, that fact had not yet penetrated to the ACL. At that time people with AI computational interests were strictly in a minority in the association and it was a radical move to appoint Roger Schank as program chairman for the year's meeting. That was also the year that we didn't have a presidential banquet, and my "speech" was a few informal remarks at the roadhouse restaurant somewhere in North Carolina reassuring a doubtful few members that computational understanding of natural language was certainly progressing and that applied natural language systems were distinctly
|
1982
|
22
|
TWENTY YEARS OF REFLECTIONS* Aravind K. Joshi Department of Computer and Information Science R. 268 Moore School University of Pennsylvania Philadelphia, PA 19104 As I was reflecting deeply in front of the statue of the Bodhisattva of Gr~ and Wisdom in the University Museum, I was startled to see Jane. Having heard from Don that he had asked the old cats to reflect on the 20 years of ACL, Jane had decided that she should drop in on some of them to seek their advice concerning the future of ACL. *This work is supported by the unfunded grant FSN-RND-HIN-82-57. I wish to thank Alice, the Cheshire Puss, and Lewis Carroll for their help in the preparation of this paper. "What brings you here?" I asked with a grin. Jane thought for a while and said: "Would you tell me, please, which way I ought to take ACL in the future?" "That depends a good deal on where you think it should go," I replied. "I don't much care where ,
|
1982
|
23
|
ACL IN 1977 Paul G. Chapin National Science Foundation 1800 G St. NW Washington, D.C. 20550 As I leaf through my own "ACL (Historical)" file (which, I am frightened to observe, goes back to the Fourth Annual Meeting, in 1966), and focus in particular on 1977, when I was President, it strikes me that pretty much everything significant that happened in the Association that year was the work of other people. Don Walker was completing the mammoth task of transferring all of the ACL's records from the East Coast to the West, paying off our indebtedness to the Center for Applied Linguistics, and in general getting the Associa- tion onto the firm financial and organizational footing which it has enjoyed to this day. Dave Hays was seeing to it that the microfiche journal kept on coming, and George Heldorn Joined him as Associate Editor that year to begin the move toward hard copy publication. That was the year when we h
|
1982
|
24
|
P~FLECTIONS ON TWENTY YEARS OF THE ACL Jonathan Allen Research Laboratory of Electronics and Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Cambridge, MA 02139 I entered the field of computational linguistics in 1967 and one of my earliest recollections is of studying the Harvard Syntactic Analyzer. To this date, this parser is one of the best documented programs and the extensive discussions cover a wide range of English syntax. It is sobering to recall that this analyzer was implemented on an IBM 7090 computer using 32K words of memory with tape as its mass storage medium. A great deal of attention was focussed on means to deal with the main memory and mass storage limitations. It is also interesting to reflect back on the decision made in the Harvard Syntactic Analyzer to use a large number of parts of speech, presumably, to aid the refinement of the analysis.
|
1982
|
25
|
ON THE PRESENT Norman K. Sondheimer Sperry Univac Blue Bell, PA 19424 USA The Association for Computational Linguistics is twenty years old. We have much to be proud of: a fine journal, significant annual meetings, and a strong presence in the professional community. Computational Linguistics, itself, has much to be proud of: influence in the research community, courses in universities, research support in government and industry, and attention in the popular press. Not to spoil the fun, but the same was true twenty years ago and the society and the field has had to go through some difficult times since then. To be sure, much has changed. The ACL has over 1200 members. Computational Linguistics has many new facets and potential applications. However to an outsider, we still appear to be a field with potential rather than one with achievement. Why is that? There are certainly many reasons. One is the
|
1982
|
26
|
PLANNING NATURAL LANGUAGE REFERRING EXPRESSIONS Douglas E. Appelt SRI International Menlo Park, California ABSTRACT This paper describes how a language-planning system can produce natural-language referring expressions that satisfy multiple goals. It describes a formal representation for reasoning about several agents' mutual knowledge us- ing possible-worlds semantics and the general organization of a system that uses the formalism to reason about plans combining physical and linguistic actions at different levels of abstraction. It discusses the planning of concept ac- tivation actions that are realized by definite referring ex- pressions in the planned utterances, and shows how it is possible to integrate physical actions for communicating intentions with linguistic actions, resulting in plans that include pointing as one of the communicative actions avail- able to the speaker. I. INTRODUCTION One of the mo~t impor
|
1982
|
27
|
THE TEXT SYSTEM FOR NATURAL LANGUAGE GENERATION: AN OVERVIEW* Kathleen R. McKeown Dept. of Computer & Information Science The Moore School University of Pennsylvania Philadelphia, Pa. 19104 ABSTRACT Computer-based generation of natural language requires consideration of two different types of problems: 1) determining the content and textual shape of what is to be said, and 2) transforming that message into English. A computational solution to the problems of deciding what to say and how to organize it effectively is proposed that relies on an interaction between structural and semantic processes. Schemas, which encode aspects of discourse structure, are used to guide the generation process. A focusing mechanism monitors the use of the schemas, providing constraints on what can be said at any point. These mechanisms have been implemented as part of a
|
1982
|
28
|
AUGMENTING A DATABASE KNOWLEDGE REPRESENTATION FOR NATURAL LANGUAGE GENERATION* Kathleen F. McCoy Dept. of Computer and Information Science The Moore School University of Pennsylvania Philadelphia, Pa. 19104 ABSTRACT The knowledge representation is an important factor in natural language generation since it limits the semantic capabilities of the generation system. This paper identifies several information types in a knowledge representation that can be used to generate meaningful responses to questions about database structure. Creating such a knowledge representation, however, is a long and tedious process. A system is presented which uses the contents of the database to form part of this knowledge representation automatically. It employs three types of world knowledge axioms to ensure that the representation formed is meaningful and contains salient infor
|
1982
|
29
|
THE REPRESENTATION OF INCONSISTENT INFORMATION IN A DYNAMIC MODEL-THEORETIC SEMANTICS Douglas B. Moran Department of Computer Science Oregon State University Corvallis, Oregon 97331 ABSTRACT Model-theoretic semantics provides a computationally attractive means of representing the semantics of natural language. However, the models used in this formalism are static and are usually infinite. Dynamic models are incomplete models that include only the information needed for an application and to which information can be added. Dynamic models are basically approximations of larger conventional models, but differ is several interesting ways. The difference discussed here is the possibility of inconsistent information being included in the model. If a computation causes the model to expand, the result of that computation may be different than the result of performing that same computatio
|
1982
|
3
|
SALIENCE: THE KEY TO THE SELECTION PROBLEM IN NATURAL LANGUAGE GENERATION E. Jeffrey Conklin David D. McDonald Department of Computer and Information Science University of Massachusetts Amherst, Massachusetts 01003 USA I ABSTRACT We argue that in domains where a strong notion of salience can be defined, it can be used to provide: (I) an elegant solution to the selection problem, i.e. the problem of how to decide whether a given fact should or should not be mentioned in the text; and (2) a simple and direct control framework for the entire deep generation process, coordinating proposing, planning, and realization. (Deep generation involves reasoning about conceptual and rhetorical facts, as opposed to the narrowly linguistic reasoning that takes place during realization.) We report on an empirical study of salience in pictures of natural scenes, and its use in a
|
1982
|
30
|
A KNOWLEDGE ENGINEERING APPROACH TO NATURAL LANGUAGE UNDERSTANDING Stuart C. Shapiro & Jeannette G. Neal Department of Computer Science State University of New York at Buffalo Amherst, New York 14226 ABSTRACT This paper describes the results of a preliminary study of a Knowledge Engineering approach to Natural Language Understanding. A computer system is being developed to handle the acquisition, representation, and use of linguistic knowledge. The computer system is rule-based and utilizes a semantic network for knowledge storage and representation. In order to facilitate the interaction between user and system, input of linguistic knowledge and computer responses are in natural language. Knowledge of various types can be entered and utilized: syntactic and semantic; assertions and rules. The inference tracing facility is also being developed as a part of the rule-based system with output in natural language
|
1982
|
31
|
A Model of Early Syntactic Development Pat Langley The Robotics Institute Carnegie-Mellon University Pittsburgh, Pennsylvania 1521,3 USA ABSTRACT AMBER is a model of first language acquisition that improves its performance through a process of error recovery. The model is implemented as an adaptive production system that introduces new condition-action rules on the basis of experience. AMBER starts with the ability to say only one word at a time, but adds rules for ordering goals and producing grammatical morphemes, based on comparisons between predicted and observed sentences. The morpheme rules may be overly general and lead to errors of commission; such errors evoke a discrimination process, producing more conservative rules with additional conditions. The system's performance improves gradually, since rules must be relearned many times before they are used. AMBER'S learning mechanisms account for some of the major development
|
1982
|
32
|
BUILDING NON-NORMATIVE SYSTEMS - THE SEARCH FOR ROBUSTNESS AN OVERVIEW Mitchell P. Marcus Bell Laboratories Murray Hill, New Jersey 07974 Many natural language understanding systems behave much like the proverbial high school english teacher who simply fails to understand any utterance which doesn't conform to that teacher's inviolable standard of english usage. But while the teacher merely pretends not to understand, our systems really don't. The teacher consciously stonewalls when confronted with non-standard usage to prescribe quite rigidly what is acceptable linguistic usage and what is not. What is so artificial about this behavlour, of course, is that our implicit linguistic models are descriptive and not prescriptive; they model what we expect, not what we demand. People are quite good at understanding language which they, when asked, would consider to he non-standard in some way or o
|
1982
|
33
|
DESIG~ DIMENSIONS FOR NON-NORMATIVE ONDERSTARDING SYSTEMS Robert J. Bobrow Madelelne Bates Bolt Beranek and Newman Inc. 10 Moulton Street Cambridge, Massachusetts 02238 I. Introduction This position paper is not based upon direct experience with the design and implementation of a "non-normative" natural language system, but rather draws upon our work on cascade [11] architectures for understanding systems in which syntactic, semantic and discourse processes cooperate to determine the "best" interpretation of an utterance in a given discourse context. The RUS and PSI-KLONE systems [I, 2, 3, 4, 5], which embody some of these principles, provide a strong framework for the development of non-normatlve systems as illustrated by the work of Sondhelmer and Welschedel [8, 9, 10]and others. Here we pose a number of questions in order to clarify the theoretical and practical is
|
1982
|
34
|
Scruffy Text Understanding: Design and Implementation of 'Tolerant' Understanders Richard H. Granger Artificial Intelligence Project Computer Science Department University of California Irvine, California 92717 AB STRACT Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably "neat" form, e.g., newspaper stories and other edited texts. However, a great deal of natural language texts e.g.~ memos, rough drafts, conversation transcripts~ etc., have features that differ significantly from "neat" texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic constructlon, missing periods, etc. Our solution to these problems is to make use of exoectations, based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and sem
|
1982
|
35
|
ON THE LINGUISTIC CHARACTER OF NON-STANDARD INPUT Anthony S. Kroch and Donald Hindle Department of Linguistics University of Pennsylvania Philadelphia, PA 19104 USA ABSTRACT If natural language understanding systems are ever to cope with the full range of English language forms, their designers will have to incorporate a number of features of the spoken vernacular language. This communication discusses such features as non-standard grammatical rules, hesitations and false starts due to self-correction, systematic errors due to mismatches between the grammar and sentence generator, and uncorrected true errors. There are many ways in which the input to a natural language system can be non-standard without being uninterpretable ~ Most obviously, such input can be the well-formed output of a grammar o
|
1982
|
36
|
Ill-Formed and Non-Standard Language Problems Stan Kwasny Computer Science Department Indiana University Bloomington, IN 47405 Abstract Prospects look good for making real improve- ments in Natural Language Processing systems with regard to dealing with unconventional inputs in a practical way. Research which is expected to have an influence on this progress as well as some predictions about accomplishments in both the short and long term are discussed. i. Intr~ductio~ Developing Natural Language Understanding systems which permit language in expected forms in anticipated environments having a well-defined semantics is in many ways a solved problem with today's technology. Unfortunately, few interest- ing situations in which Natural Language is useful live up to this description. Even a modicum of machine intelligence is not pcsslbl
|
1982
|
37
|
"NATURAL LANGUAGE TEXTS ARE NOT NECESSARILY GRAMMATICAL AND UNAMBIGUOUS OR EVEN COMPLETE." Lance A. Miller Behavioral Sciences and Linguistics Group IBM Watson Research Center P. O. Box 218 Yorktown Heights, NY 10598 The EPISTLE system is being developed in a research project for exploring the feasibility of a variety of intelligent applications for the processing of business and office text (!'Z; the authors of are the project workers). Although ultimately intended functions include text generation (e.g., 4), present efforts focus on text analysis: devel- oping the capability to take in essentially unconstrained business text and to output grammar and style critiques, on a sentence by sentence basis. Briefly, we use a large on-line dictionary and a bottom-up parser in connection with an Augmented Phrase Structure Grammar (5) to obtain an approxi- mately correct
|
1982
|
38
|
SOLUTIONS TO ISSUES DEPEND ON THE KNOWLEDGE REPRESENTATION Frederick B. Thompson California Institute of Technology Pasadena, California In organizing this panel, our Chairman, Bob Moore, expressed the view that too often discussion of natural language access to data bases has focused on what particular systems can or cannot do, rather than on underlying issues. He then admirably proceeded to organize the panel around issues rather than systems. In responding, I attempted to frame my remarks on each of his five issues in a general way that would not reflect my own parochial experience and interest. At one point I thought that I had succeeded quite well. However, after taking a clearer eyed view, it was apparent that my remarks reflected assumptions about knowledge representation that were b
|
1982
|
39
|
What's in a Semantic Network? James F. Allen Alan M. Frisch Computer Science Department The University of Rochester Rochester, NY 14627 Abstract Ever since Woods's "What's in a Link" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the first-order predicate calculus (FOPC). The typical argument proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic FOPC representation. For the past two years we have been studying the formalization of knowledge retrievers as well as the representation languages that they operate on. This paper presents a representation language in the notation of FOPC whose form facilitates the design of a semantic-network-like retriever. I. Introduction We are engaged in a lo
|
1982
|
4
|
DEPENDENCIES OF DISCOURSE STRUCTURE ON THE MODALITY OF COMMUNICATION: TELEPHONE vs. TELETYPE Philip R. Cohen Dept. of Computer Science Oregon State University Corvallis, OR 97331 Scott Fertig Bolt, Beranek and Newman, Inc. Cambridge, MA 02239 Kathy Starr Bolt, Beranek and Newman, Inc. Cambridge, MA 02239 ABSTRACT A desirable long-range goal in building future speech understanding systems would be to accept the kind of language people spontaneously produce. We show that people do not speak to one another in the same way they converse in typewritten language. Spoken language is finer-grained and more indirect. The differences are striking and pervasive. Current techniques for engaging in typewritten dialogue will need to be extended to accommodate the structure of spoken language. I. INTRODUCTION If a machine could listen, how would we talk to it? This question will be hard to answ
|
1982
|
5
|
TOWARDS A THEORY OF COMPREHENSION OF DECLARATIVE CONTEXTS Fernando Gomez Department of Computer Science University of Central Florida Orlando, Florida 32816 ABSTRACT An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the infe- rential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmen- tation of the sentences in a case frame structure, thus determininig the meaning of prepositions, polysemous verbs, noun group etc. The function of this parser is not to produce an output to be in- terpreted by semantic routi
|
1982
|
6
|
NATURAL-LANGUAGE ACCESS TO DATABASES--THEORETICAL/TECHNICAL ISSUES Robert C. Moore Artificial Intelligence Center SRI International, Menlo Park, CA 94025 I INTRODUCTION Although there have been many experimental systems for natural-language access to databases, with some now going into actual use, many problems in this area remain to be solved. The purpose of this panel is to put some of those problems before the conference. The panel's motivation stems partly from the fact that, too often in the past, discussion of natural-language access to databases has focused, at the expense of the underlying issues, on what particular systems can or cannot do. To avoid this, the discussions of the present panel will be organized around issues rather than systems. Below are descriptions of five problem areas that seem to me not to be adequately handled by any exist
|
1982
|
7
|
TRANSPORTABLE NATURAL-LANGUAGE INTERFACES: PROBLEMS AND TECHNIQUES Barbara J. Grosz Artificial Intelligence Center SRI International, Menlo Park, CA 94025 Department of Computer and Information Science 1 University of Pennsylvania, Philadephia, PA 19104 I OVERVIEW I will address the questions posed to the panel from wlthln the context of a project at SRI, TEAM [Grosz, 1982b], that is developing techniques for transportable natural-language interfaces. The goal of transportability is to enable nonspeciallsts to adapt a natural-language processing system for access to an existing conventional database. TEAM is designed to interact with two different kinds of users. During an acquisition dlalogue, a database expert (DBE) provides TEAM with information about the files and fields in the conventlonal database for which a natural-langua
|
1982
|
8
|
THEORETICAL/TECHNICAL ISSUES IN NATURAL LANGUAGE ACCESS TO DATABASES S. R. Petrick IBM T.J. Watson Research Center INTRODUCTION In responding to the guidelines established by the session chairman of this panel, three of the five topics he set forth will be discussed. These include aggregate functions and quantity questions, querying semantically complex fields, and multi-file queries. As we will make clear in the sequel, the transformational apparatus utilized in the TQA Ques- tion Answering System provides a principled basis for handling these and many other problems in natural language access to databases. In addition to considering some subset of the chairman's five problems, each of the panelists was invited to propose and choose one issue of his/her own choosing. If time and space permitted, I would have chosen the subject of extensibility of natural language systems to new applications. In light of e
|
1982
|
9
|
CONTEXT-FREENESS AND THE COMPUTER PROCESSING OF HUMAN LANGUAGES Geoffrey K. Pullum Cowell College University of California, Santa Cruz Santa Cruz, California 95064 ABSTRACT Context-free grammars, far from having insufficient expressive power for the description of human languages, may be overly powerful, along three dimensions; (1) weak generative capacity: there exists an interesting proper subset of the CFL's, the profligate CFL's, within which no human language appears to fall; (2) strong generative capacity: human languages can be appropriately described in terms of a proper subset of the CF-PSG's, namely those with the ECPO property; (3) time complexity: the recent controversy about the importance of a low deterministic polynomial time bound on the recognition problem for human languages is misdirected, since an appropriately restrictive theory would guarantee even more, namely a linear bound. 0. INTRODUCTIO
|
1983
|
1
|
A FOUNDATION FOR SEMANTIC INTERPRETATION Graeme Hirst Department of Computer Science Brown University Providence, RI 02912 Abstract Traditionally, translation from the parse tree representing a sentence to a semantic representation (such as frames or procedural semantics) has always been the most ad hoc part of natural language understanding (NLU) systems. However, recent advances in linguistics, most notably the system of formal semantics known as Montague semantics, suggest ways of putting NLU semantics onto a cleaner and firmer foundation. We are using a Montague-inspired approach to semantics in an integrated NLU and problem-solving system that we are building. Like Montague's, our semantics are compositional by design and strongly typed, with semantic rules in one-to-one correspondence with the meaning-affecting rules of a Marcus-style parser. We have replaced Montague's semantic objects, functors and truth c
|
1983
|
10
|
TELEGRAM: A GRAMMAR FORMALISM FOR LANGUAGE PLANNING Douglas E. Appelt Artificial Intelligence Center SRI International Menlo Park, California O. Abstract Planning provides the basis for a theory of language generation that considers the communicative goals of the speaker when producing utterances. One central problem in designing a system based on such a theory is specifying the requisite linguistic knowledge in a form that interfaces well with a planning system and allows for the encoding of discourse information. The TELEGRAM (TELEological GRAMmar'} system described in this paper solves this prob- lem by annotating a unification grammar with assertions about how grammatical choices are used to achieve various goals, and by enabling the planner to augment the func- tional description of an utterance as it is being unified. The control structures of the planner and the grammar unifier are then merged in a manner
|
1983
|
11
|
AN OVERVIEW OF THE NIGEL TEXT GENERATION GRAMMAR William C. Mann USC/Information Sciences institute 4676 Admiralty Way # 1101 Marina del Rey, CA 90291 Abstract Research on the text generation task has led to creation of a large systemic grammar of English, Nigel, which is embedded in a computer program. The grammar and the systemic framework have been extended by addition of a semantic stratum. The grammar generates sentences and other units under several kinds of experimental control. This paper describes augmentations of various precedents in the systemic framework. The emphasis is on developments which control the text to fulfill a purpose, and on characteristics which make Nigel relatively easy to embed in a larger experimental program. 1 A Grammar for Text Generation - The Challenge Among the various uses for grammars, text generation at first seems to be relatively new. The organizing goal of text gener
|
1983
|
12
|
Automatic Recognition of Intonation Patterns Janet B. Pierrehumbert Bell Laboratories Murray Hill, New Jersey 07974 1. Introduction This paper is a progress report on a project in linguistically based automatic speech recognition, The domain of this project is English intonation. The system I will describe analyzes fundamental frequency contours (F0 contours) of speech in terms of the theory of melody laid out in Pierrehumbert (1980). Experiments discussed in Liberman and Pierrehumbert (1983) support the assumptions made about intonational phonetics, and an F0 synthesis program based on a precursor to the present theory is described in Pierrehumbert (1981). One aim of the project is to investigate the descriptive adequacy of this theory of English melody. A second motivation is to characterize cases where F0 may provide useful information about stress and phrasing. The third, and to my mind the most important, motivation depends on
|
1983
|
13
|
A Finite-State Parser for Use in Speech Recognition Kenneth W. Church NE43-307 Massachusetts Institute of Technology Cambridge, MA 02139 This paper is divided into two parts. The first section motivates the application of finite-state parsing techniques at the phonetic level in order to exploit certain classes of contextual constraints. In the second section, the parsing framework is extended in order to account for 'feature spreading' (e.g., agreement and co-articulation) in a natural way. I. Parsing at the Phonetic Level It is well known that phonemes have different acoustic/phonetic realizations depending on the context. For example, the phoneme /t/ is typically realized with a different allophone (phonetic variant) in syllable initial position than in syllable final position. In syllable initial position (e.g., Tom), /t/ is almost always released (with a strong burst of energy) and aspirated (with h-like noise), whereas in syll
|
1983
|
14
|
On the Mathematical Properties of Linguistic Theories C. Raymond Perrault Dept. of Computer Science University of Toronto Toronto, Ontario, Canada M5S 1A4 ABSTRACT Meta-theoretical results on the decidability, generative capacity, and recognition complexity of several syntactic theories are surveyed. These include context-free grammars, transformational grammars, lexical functional grammars, generalized phrase structure grammars, and tree adjunct grammars. 1. Introduction. The development of new formalisms in which to express linguistic theories has been accompanied, at least since Chomsky and Miller's early work on context-free languages, by the study of their meta-theory. In particular, numerous results on the decidability, generative capacity, and more recently the complexity of recognition of these formalisms have been published (and rumoured!). Strangely enough, much less attention seems to hav
|
1983
|
15
|
A Framework for Processing Partially Free Word Order* Hans Uszkoreit Artificial Intelligence Center SRI International 333 Ravenswood Avenue Menlo Park, CA 94025 Abstract The partially free word order in German belongs to the class of phenomena in natttral language that require a close in- teraction between syntax and pragmatics. Several competing principles, which are based on syntactic and on discourse in- formation, determine the [ineac order of noun phrases. A solu- tion to problems of this sort is a prerequisite for high-quality language generation. The linguistic framework of Generalized Phrase Structure Grammar offers tools for dealing with-word order variation. Some slight modifications to the framework al- low for an analysis of the German data that incorporates just the right, degree of interaction between syntactic and pragmatic components and that can account for conflicting ordering state- ments. I. Introduction
|
1983
|
16
|
Sentence Disambiguation by a Shift-Reduce Parsing Technique* Stuart M. Shieber Abstract Artificial Intelligence Center SRI International 333 Ravenswood Avenue Menlo Park, CA 94025 Native speakers of English show definite and consistent preferences for certain readings of syntactically ambiguous sen- tences. A user of a natural-language-processing system would naturally expect it to reflect the same preferences. Thus, such systems must model in some way the linguistic performance as well as the linguistic competence of the native speaker. We have developed a parsing algorithm--a variant of the LALR(I} shift.-reduce algorithm--that models the preference behavior of native speakers for a range of syntactic preference phenomena reported in the psycholinguistic literature, including the recent data on lexical preferences. The algorithm yields the preferred parse deterministically, without building multiple parse trees and choosi
|
1983
|
17
|
SYNTACTIC CONSTRAINTS AND EFFICIENT PARSABILITY Robert C. Berwick Room 820, MIT Artificial Intelligence Laboratory 545 Technology Square, Cambridge, MA 02139 Amy S. Weinberg Department of Linguistics, MIT Cambridge, MA 02139 ABSTRACT A central goal of linguistic theory is to explain why natural languages are the way they are. It has often been supposed that computational considerations ought to play a role in this characterization, but rigorous arguments along these lines have been difficult to come by. In this paper we show how a key "axiom" of certain theories of grammar, Subjacency, can be explained by appealing to general restrictions on on-line parsing plus natural constraints on the rule-writing vocabulary of grammars. The explanation avoids the problems with Marcus' [1980] attempt to account for the same constraint. The argument is robust with respect to machine implementation, and t
|
1983
|
18
|
Deterministic Parsing of Syntactic Non-fluencies Donald Hindle Bell Laboratories Murray Hill, New Jersey 07974 It is often remarked that natural language, used naturally, is unnaturally ungrammatical.* Spontaneous speech contains all manner of false starts, hesitations, and self-corrections that disrupt the well-formedness of strings. It is a mystery then, that despite this apparent wide deviation from grammatical norms, people have little difficx:lty understanding the non-fluent speech that is the essential medium of everyday life. And it is a still greater mystery that children can succeed in acquiring the grammar of a language on the basis of evidence provided by a mixed set of apparently grammatical and ungrammatical strings. I. Sell-correction: a Rule-governed System In this paper I present a system of rules for resolving the non-fluencies of speech, implemented as part of
|
1983
|
19
|
FACTORING RECURSION AND DEPENDENCIES: AN ASPECT OF TREE ADJOINING GRAMMARS (TAG) AND A COMPARISON OF SOME FORMAL PROPERTIES OF TAGS, GPSGS, PLGS, AND LPGS * Aravind K. Joshi Department of Computer and Information Science R. 268 Moore School University of Pennsylvania Philadelphia, PA 19104 1. INTRODUCTION During the last few years there is vigorous activity in constructing highly constrained grammatical systems by eliminating the transformational component either totally or partially. There is increasing recognition of the fact that the entire range of dependencies that transformational grammars in their various incarnations have tried to account for can be satisfactorily captured by classes of rules that are non-transformational and at the same time highly constrained in terms of the classes of grammars and languages that they define. Two types of dependencies are especially
|
1983
|
2
|
D-Theory: Talking about Talking about Trees Mitchell P. Marcus Donald Hindle Margaret M. Fleck Bell Laboratories Murray Hill, New Jersey 07974 Linguists, including computational linguists, have always been fond of talking about trees. In this paper, we outline a theory of linguistic structure which talks about talking about trees; we call this theory Description theory (D-theory). While important issues must be resolved before a complete picture of D-theory emerges (and also before we can build programs which utilize it), we believe that this theory will ultimately provide a framework for explaining the syntax and semantics of natural language in a manner which is intrinsically computational. This paper will focus primarily on one set of motivations for this theory, those engendered by attempts to handle certain syntactic phenomena within the framework of deterministic parsing. 1. D-Theory: An Introduction The key idea of D-th
|
1983
|
20
|
PARSING AS DEDUCTION l Fernando C. N. Pereira David H. D. Warren Artificial Intelligence Center SRI International 333 Ravenswood Ave., Menlo Park CA 04025 Abstract By exploring the relationship between parsing and deduction, a new and more general view of chart parsing is obtained, which encompasses parsing for grammar formalisms based on unification, and is the basis of the Earley Deduction proof procedure for definite clauses. The efficiency of this approach for an interesting class of grammars is discussed. 1. Introduction The aim of this paper is to explore the relationship between parsing and deduction. The basic notion, which goes back to Kowaiski (Kowalski, 1980} and Colmerauer {Colmeraucr, 1978), h'zs seen a very efficient, if limited, realization in tile use of the logic programming language Prolog for parsing {Colmerauer, 1978; Pereira and Warren, 1980). The connection between parsing and
|
1983
|
21
|
DESIGN OF A KNOWLEDGE-BASED REPORT GENERATOR Karen Kukich University of Pittsburgh Bell Telephone Laboratories Murray ~tll, NJ 07974 ABSTRACT Knowledge-Based Report Generation is a technique for automatically generating natural language reports from computer databases. It is so named because it applies knowledge-based expert systems software to the problem of text generation. The first application of the technique, a system for generating natural language stock reports from a daily stock quotes database, is par- tially implemented. Three fundamental principles of the technique are its use of domain-specific semantic and linguistic knowledge, its use of macro-level semantic and linguistic constructs (such as whole messages, a phrasal lexicon, and a sentence-combining grammar), and its production system approach to knowledge representa- tion. I. WHAT IS KNOWLEDGE-BASED REPORT GENERATION A knowledge-based report gener
|
1983
|
22
|
MENU-BASED NATURAL LANGUAGE UNDERSTANDING Harry R. Tennant, Kenneth M. Ross, Richard M. Saenz, Craig W. Thompson, and James R. Miller Computer Science Laboratory Central Research Laboratories Texas Instruments Incorporated Dallas, Texas ABSTRACT This paper describes the NLMenu System, a menu-based natural language understanding system. Rather than requiring the user to type his input to the system, input to NLMenu is made by selec- ting items from a set of dynamically changing menus. Active menus and items are determined by a predictive left-corner parser that accesses a semantic grammar and lexicon. The advantage of this approach is that all inputs to the NLMenu System can be understood thus giving a 0% failure rate. A companion system that can automatically generate interfaces to relational databases is also discussed. relatively straightforward queries that PLANES could understand. Additionally, users did
|
1983
|
23
|
Knowledge Structures in UC, the UNIX* Consultantt David N. Chin Division of Computer Science Department of EECS University of California, Berkeley Berkeley, CA. 94720 ABSTRACT The knowledge structures implemented in UC, the UNLX Consultant are sufficient for UC to reply to a large range of user queries in the domain of the UNIX operating sys- tem. This paper describes how these knowledge struc- tures are used in the natural language tasks of parsing, reference, planning, goal detection, and generation, and ~ow they are organized to enable efficient access even with the large database of an expert system. The struc- turing of knowledge to provide direct answers to common queries and the high usability and efficiency of knowledge structures allow UC to hold an interactive conversation with a user. 1. Introduction UC is a natural language program that converses in English with users in the domain of the UNIX operating sy
|
1983
|
24
|
Discourse Pragmatics and Ellipsis Resolution in Task-Oriented Natural Language Interfaces Jaime G. Carbonell Computer Science Department Carnegie-Mellon University. P!ttsburgh, PA 15213 Abstract This paper reviews discourse phenomena that occur frequently in task.oriented man.machine dialogs, reporting on a~n empirical study that demonstrates the necessity of handling ellipsis, anaphora, extragrammaticality, inter-sentential metalanguage, and other abbreviatory devices in order to achieve convivial user interaction. Invariably, users prefer to generate terse or fragmentary utterances instead of longer, more complete "stand- alone" expressions, even when given clear instructions tO the contrary. The XCALIBUR exbert system interface is designed to meet these needs, including generalized ellipsis resolution by means of a rule-based caseframe method superior tO previous semantic grammar approaches. 1. A Summary of Task-Oriented
|
1983
|
25
|
Crossed Serial Dependencies: i low-power parseable extension to GPSG Henry Thompson Department of Artificial Intelligence and Program in Cognitive Science University of Edinburgh Hope Park Square, Meadow Lane Edinburgh EH8 9NW SCOTLAND ABSTRACT An extension to the GPSG grammatical formalism is proposed, allowing non-terminals to consist of finite sequences of category labels, and allowing schematic variables to range over such sequences. The extension is shown to be sufficient to provide a strongly adequate grammar for crossed serial dependencies, as found in e.g. Dutch subordinate clauses. The structures induced for such constructions are argued to be more appropriate to data involving conjunction than some previous proposals have been. The extension is shown to be parseable by a simple extension to an existing parsing method for GPSG. I. INTRODUCTION There has
|
1983
|
3
|
Formal Constraints on Metarules* Stuart M. Shieber, Susan U. Stucky, Hans Uszkoreit, and Jane J. Robinson SRI International 333 Ravenswood Avenue Menlo Park, California Abstract Metagrammaticai formalisms that combine context-free phrase structure rules and metarules (MPS grammars) allow con- cise statement of generalizations about the syntax of natural lan- guages. Unconstrained MPS grammars, tmfortunately, are not cornputationally "safe." We evaluate several proposals for con- straining them, basing our amae~ment on computational trac- tability and explanatory adequacy. We show that none of them satisfies both criteria, and suggest new directions for research on alternative metagrammatical formalisms. 1. Introduction The computational-linguistics community has recently shown interest in a variety of metagrammatical formalisms for encoding grammars of natural language. A common technique found in these formalisms involves the no
|
1983
|
4
|
A PROLEGOMENON TO SITUATION SEMANTICS David J. Israel Bolt Beranek and Newman Inc. Cambridge, MA 02238 ABSTRACT An attempt is made to prepare Computational Linguistics for Situation Semantics. I INTRODUCTION The editors of the AI Journal recently hit upon the nice notion of correspondents' columns. The basic idea was to solicit experts in various fields, both within and outside of Artificial Intelligence, to provide "guidance to important, interesting current literature" in their fields. For Philosophy, they made the happy choice of Dan Dennett; for natural language processing, the equally happy choice of Barbara Grosz. Each has so far contributed one column, and these early contributions overlap in one, and as it happens, only one, particular; to wit: Situation~manties. Witness Dennett: " " ~ t ~oplcln " Cis] the hottest new
|
1983
|
5
|
A Modal Temporal Logic for Reasoning about Change Eric Mays Department of Computer and Information Science Moore School of Electrical Engineering/D2 University of Pennsylvania Philadelphia, PA 19104 ABSTRACT We examine several behaviors for query systems that become possible with the ability to represent and reason about change in data bases: queries about possible futures, queries about alternative histories, and offers of monitors as responses to queries. A modal temporal logic is developed for this purpose. A completion axiom for history is given and modelling strategies are given by example. I INTRODUCTION In this paper we present a modal temporal logic that has been developed for reasoning about change in data bases. The basic motivation is as follows. A data base contains information about the world: as the world changes, so does the data base -- probably maintaining some description of what the world was like b
|
1983
|
6
|
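The entry above only names the query behaviours it supports (queries about possible futures, alternative histories, and monitor offers). As a rough illustration of what evaluating simple temporal operators over a history of database states can look like, here is a minimal Python sketch; it is not the paper's modal logic, and the function names (`holds`, `once`, `eventually`) and facts are invented for the example.

```python
# Minimal illustrative sketch (not Mays' logic): evaluating simple
# past/future operators over a linear history of database states,
# where each state is a set of atomic facts. All names are invented.

def holds(fact, state):
    """Is the atomic fact true in this database state?"""
    return fact in state

def once(fact, history, now):
    """Past operator: was the fact true at some state up to 'now'?"""
    return any(holds(fact, s) for s in history[: now + 1])

def eventually(fact, history, now):
    """Future operator: will the fact be true at some later state?"""
    return any(holds(fact, s) for s in history[now + 1 :])

history = [
    {"enrolled(kim)"},                      # t0
    {"enrolled(kim)", "passed(kim, cs1)"},  # t1
    {"graduated(kim)"},                     # t2, a possible future
]

now = 1
print(once("enrolled(kim)", history, now))         # True
print(eventually("graduated(kim)", history, now))  # True
```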
PROVIDING A UNIFIED ACCOUNT OF DEFINITE NOUN PHRASES IN DISCOURSE Barbara J. Grosz Artificial Intelligence Center SRI International Menlo Park, CA Aravind K. Joshi Dept. of Computer and Information Science University of Pennsylvania Philadelphia, PA Scott Weinstein Dept. of Philosophy University of Pennsylvania Philadelphia, PA 1. Overview Linguistic theories typically assign various linguistic phenomena to one of the categories, syntactic, semantic, or pragmatic, as if the phenomena in each category were relatively independent of those in the others. However, various phenomena in discourse do not seem to yield comfortably to any account that is strictly a syntactic or semantic or pragmatic one. This paper focuses on particular phenomena of this sort - the use of various referring expressions such as definite noun phrases and pronouns - and examines their interaction with mechani
|
1983
|
7
|
USING λ-CALCULUS TO REPRESENT MEANINGS IN LOGIC GRAMMARS* David Scott Warren Computer Science Department SUNY at Stony Brook Stony Brook, NY 11794 ABSTRACT This paper describes how meanings are represented in a semantic grammar for a fragment of English in the logic programming language Prolog. The conventions of Definite Clause Grammars are used. Previous work on DCGs with a semantic component has used essentially first-order formulas for representing meanings. The system described here uses formulas of the typed λ-calculus. The first section discusses general issues concerning the use of first-order logic or the λ-calculus to represent meanings. The second section describes how λ-calculus meaning representations can be constructed and manipulated directly in Prolog. This 'programmed' representation motivates a suggestion, discussed in the third section, for an extension to Prolog so that the language
|
1983
|
8
|
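The entry above contrasts first-order and typed λ-calculus meaning representations. As a loose illustration of higher-order meaning composition in that general style (using Python closures in place of λ-terms, not the paper's Prolog/DCG encoding), a minimal sketch follows; the denotations and determiner definitions are invented.

```python
# Illustrative sketch only: higher-order meaning composition in the
# style of typed lambda-calculus semantics, with Python closures
# standing in for lambda-terms. Not the paper's Prolog representation.

MAN   = {"al", "bo"}
WALKS = {"al", "bo", "cy"}

# A determiner maps a noun denotation to a generalized quantifier,
# i.e. a function from VP denotations to truth values.
every = lambda noun: lambda vp: noun <= vp          # subset test
some  = lambda noun: lambda vp: bool(noun & vp)     # non-empty overlap

# "every man walks" is composed as (every(man))(walks)
print(every(MAN)(WALKS))   # True: every man is among the walkers
print(some(MAN)(WALKS))    # True: some man walks
```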
AN IMPROPER TREATMENT OF QUANTIFICATION IN ORDINARY ENGLISH Jerry R. Hobbs SRI International Menlo Park, California 1. The Problem Consider the sentence In most democratic countries most politicians can fool most of the people on almost every issue most of the time. In the currently standard ways of representing quantification in logical form, this sentence has 120 different readings, or quantifier scopings. Moreover, they are truly distinct, in the sense that for any two readings, there is a model that satisfies one and not the other. With the standard logical forms produced by the syntactic and semantic translation components of current theoretical frameworks and implemented systems, it would seem that an inferencing component must process each of these 120 readings in turn in order to produce a best rea
|
1983
|
9
|
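The 120 readings mentioned above are just the 5! orderings of the five quantified phrases in the example sentence. The throwaway Python snippet below makes the count concrete; it illustrates only the combinatorics, not the paper's proposal.

```python
# Illustrative check of the counting claim: five quantified phrases,
# hence 5! = 120 distinct left-to-right scope orderings.
from itertools import permutations

quantifiers = [
    "most democratic countries",
    "most politicians",
    "most of the people",
    "almost every issue",
    "most of the time",
]

scopings = list(permutations(quantifiers))
print(len(scopings))   # 120
print(scopings[0])     # one of the orderings
```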
Multilingual Text Processing in a Two-Byte Code Lloyd B. Anderson Ecological Linguistics 316 "A" St. S.E. Washington, D.C. 20003 ABSTRACT National and international standards committees are now discussing a two-byte code for multilingual information processing. This provides for 65,536 separate character and control codes, enough to make permanent code assignments for all the characters of all national alphabets of the world, and also to include Chinese/Japanese characters. This paper discusses the kinds of flexibility required to handle both Roman and non-Roman alphabets. It is crucial to separate information units (codes) from graphic forms, to maximize processing power. Comparing alphabets around the world, we find that the graphic devices (letters, digraphs, accent marks, punctuation, spacing, etc.) represent a very limited number of information units. It is possible to arrange alphabet codes
|
1984
|
1
|
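The 65,536 figure above is simply 2^16, the size of a two-byte code space. The sketch below packs an arbitrary 16-bit code point into two bytes and recovers it; the big-endian layout is a generic assumption for illustration, not the committees' actual assignment scheme.

```python
# Illustrative only: a two-byte (16-bit) code space holds 2**16 = 65,536
# distinct values. The packing below is a generic big-endian layout,
# not the specific code assignment scheme under discussion in 1984.

print(2 ** 16)  # 65536

def pack(code_point: int) -> bytes:
    assert 0 <= code_point < 2 ** 16
    return bytes([code_point >> 8, code_point & 0xFF])

def unpack(two_bytes: bytes) -> int:
    high, low = two_bytes
    return (high << 8) | low

cp = 0x4E2D                      # an arbitrary 16-bit value
assert unpack(pack(cp)) == cp
print(pack(cp).hex())            # '4e2d'
```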
DENORMALIZATION AND CROSS REFERENCING IN THEORETICAL LEXICOGRAPHY Joseph E. Grimes DMLL, Morrill Hall, Cornell University Ithaca NY 14853 USA Summer Institute of Linguistics 7500 West Camp Wisdom Road Dallas TX 75236 USA ABSTRACT A computational vehicle for lexicography was designed to keep to the constraints of meaning-text theory: sets of lexical correlates, limits on the form of definitions, and argument relations similar to lexical-functional grammar. Relational data bases look like a natural framework for this. But linguists operate with a non-normalized view. Mappings between semantic actants and grammatical relations do not fit actant fields uniquely. Lexical correlates and examples are polyvalent, hence denormalized. Cross referencing routines help the lexicographer work toward a closure state in which every term of a definition traces back to zero level terms defined extralinguistically or circularly.
|
1984
|
10
|
EXPERT SYSTEMS AND OTHER NEW TECHNIQUES IN MT SYSTEMS Christian BOITET - René GERBER Groupe d'Etudes pour la Traduction Automatique BP n° 68 Université de Grenoble 38402 Saint-Martin d'Hères FRANCE ABSTRACT Our MT systems integrate many advanced concepts from the fields of computer science, linguistics, and AI : specialized languages for linguistic programming based on production systems, complete linguistic programming environment, multilevel representations, organization of the lexicons around "lexical units", units of translation of the size of several paragraphs, possibility of using text-driven heuristic strategies. We are now beginning to integrate new techniques : unified design of an "integrated" lexical data-base containing the lexicon in "natural" and "coded" form, use of the "static grammars" formalism as a specification language, addition of expert systems equipped with "extralinguistic" or "metal
|
1984
|
100
|
ROBUST PROCESSING IN MACHINE TRANSLATION Doug Arnold, Rod Johnson, Centre for Cognitive Studies, University of Essex, Colchester, CO4 3SQ, U.K. Centre for Computational Linguistics UMIST, Manchester, M60 8QD, U.K. ABSTRACT In this paper we provide an abstract characterisation of different kinds of robust processing in Machine Translation and Natural Language Processing systems in terms of the kinds of problem they are supposed to solve. We focus on one problem which is typically exacerbated by robust processing, and for which we know of no existing solutions. We discuss two possible approaches to this, emphasising the need to correct or repair processing malfunctions. ROBUST PROCESSING IN MACHINE TRANSLATION This paper is an attempt to provide part of the basis for a general theory of robust processing in Machine Translation (MT) with relevance to other areas of Natur
|
1984
|
101
|
Disambiguating Grammatically Ambiguous Sentences By Asking Masaru Tomita Computer Science Department Carnegie-Mellon University Pittsburgh, PA 15213 Abstract The problem addressed in this paper is to disambiguate grammatically ambiguous input sentences by asking the user, who need not be a computer specialist or a linguist, without showing any parse trees or phrase structure rules. Explanation List Comparison (ELC) is the technique that implements this process. It is applicable to all parsers which are based on phrase structure grammar, regardless of the parser implementation. An experimental system has been implemented at Carnegie-Mellon University, and it has been applied to English-Japanese machine translation at Kyoto University. 1. Introduction A large number of techniques using semantic information have been developed to resolve natural language ambiguity. However, not all ambiguity prob
|
1984
|
102
|
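As a very rough sketch of the idea behind Explanation List Comparison as described above (each parse is paired with a list of plain-language explanations, and the user is shown only the explanations on which the parses differ), the Python fragment below uses invented parses and explanation strings; it is a simplification, not Tomita's actual algorithm.

```python
# Rough sketch of asking the user to disambiguate: each parse has a
# list of plain-language "explanations", and only the explanations on
# which the parses differ are shown. Simplified illustration only.

parses = {
    "parse-1": ["'with a telescope' modifies 'saw'",
                "'the man' is the object of 'saw'"],
    "parse-2": ["'with a telescope' modifies 'the man'",
                "'the man' is the object of 'saw'"],
}

shared = set.intersection(*(set(v) for v in parses.values()))

print("Which did you mean?")
for name, explanations in parses.items():
    distinctive = [e for e in explanations if e not in shared]
    print(f"  {name}: " + "; ".join(distinctive))
```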
AMBIGUITY RESOLUTION IN THE HUMAN SYNTACTIC PARSER: AN EXPERIMENTAL STUDY Howard S. Kurtzman Department of Psychology Massachusetts Institute of Technology Cambridge, MA 02139 (This paper presents in summary form some major points of Chapter 3 of Kurtzman, 1984.) Models of the human syntactic parsing mechanism can be classified according to the ways in which they operate upon ambiguous input. Each mode of operation carries particular requirements concerning such basic computational characteristics of the parser as its storage capacities and the scheduling of its processes, and so specifying which mode is actually embodied in human parsing is a useful approach to determining the functional organization of the human parser. In Section 1, a preliminary taxonomy of parsing models is presented, based upon a consideration of modes of handling ambiguities; and then, in Section 2, psycholinguistic evidence is presented whi
|
1984
|
103
|
Conceptual Analysis of Garden-Path Sentences Michael J. Pazzani The MITRE Corporation Bedford, MA 01730 ABSTRACT By integrating syntactic and semantic processing, our parser (LAZY) is able to deterministically parse sentences which syntactically appear to be garden path sentences although native speakers do not need conscious reanalysis to understand them. LAZY comprises an extension to conceptual analysis which yields an explicit representation of syntactic information and a flexible interaction between semantic and syntactic knowledge. 1. INTRODUCTION The phenomenon we wish to model is the understanding of garden path sentences (GPs) by native speakers of English. Parsers designed by Marcus [81] and Shieber [83] duplicate a reader's first reaction to a GP such as (1) by rejecting it as ungrammatical, even though the sentence is, in some sense, grammatical. (1) The horse raced past the barn fell. Thinking fi
|
1984
|
104
|
LANGUAGE GENERATION FROM CONCEPTUAL STRUCTURE: SYNTHESIS OF GERMAN IN A JAPANESE/GERMAN MT PROJECT J. Laubsch, D. Roesner, K. Hanakata, A. Lesniewski Projekt SEMSYN, Institut fuer Informatik, Universitaet Stuttgart Herdweg 51, D-7000 Stuttgart 1, West Germany This paper describes the current state of the SEMSYN project, whose goal is to develop a module for generation of German from a semantic representation. The first application of this module is within the framework of a Japanese/German machine translation project. The generation process is organized into three stages that use distinct knowledge sources. The first stage is conceptually oriented and language independent, and exploits case and concept schemata. The second stage employs realization schemata which specify choices to map from meaning structures into German linguistic constructs. The last stage constructs the surface string using knowle
|
1984
|
105
|
NAtural Language driven Image Generation Giovanni Adorni, Mauro Di Manzo and Fausto Giunchiglia Department of Communication, Computer and System Sciences University of Genoa Via Opera Pia 11 A - 16145 Genoa - Italy ABSTRACT In this paper the experience made through the development of a NAtural Language driven Image Generation is discussed. This system is able to imagine a static scene described by means of a sequence of simple phrases. In particular, a theory for equilibrium and support will be outlined together with the problem of object positioning. 1. Introduction A challenging application of the AI techniques is the generation of 2D projections of 3D scenes starting from a possibly unformalized input, as a natural language description. Apart from the practically unlimited simulation capabilities that a tool of this kind could give people working
|
1984
|
106
|
Conceptual and Linguistic Decisions in Generation Laurence DANLOS LADL (CNRS) Université de Paris 7 2, Place Jussieu 75005 Paris, France ABSTRACT Generation of texts in natural language requires making conceptual and linguistic decisions. This paper shows first that these decisions involve the use of a discourse grammar, secondly that they are all dependent on one another but that there is a priori no reason to give priority to one decision rather than another. As a consequence, a generation algorithm must not be modularized in components that make these decisions in a fixed order. 1. Introduction To express in natural language the information given in a semantic representation, at least two kinds of decisions have to be made: "conceptual decisions" and "linguistic decisions". Conceptual decisions are concerned with questions such as: in what order must the information appear in the text? which
|
1984
|
107
|
A Computational Analysis of Complex Noun Phrases in Navy Messages Elaine Marsh Navy Center for Applied Research in Artificial Intelligence Naval Research Laboratory - Code 7510 Washington, D.C. 20375 ABSTRACT Methods of text compression in Navy messages are not limited to sentence fragments and the omissions of function words such as the copula be. Text compression is also exhibited within "grammatical" sentences and is identified within noun phrases in Navy messages. Mechanisms of text compression include increased frequency of complex noun sequences and also increased usage of nominalizations. Semantic relationships among elements of a complex noun sequence can be used to derive a correct bracketing of syntactic constructions. I INTRODUCTION At the Navy Center for Applied Research in Artificial Intelligence, we have begun computer-analyzing and processing the compact text in Navy equipment failure messages, sp
|
1984
|
108
|
ANOTHER LOOK AT NOMINAL COMPOUNDS Pierre Isabelle Département de linguistique Université de Montréal C.P. 6128, Succ. A, Montréal, Qué., Canada H3C 3J7 ABSTRACT We present a progress report on our research on nominal compounds (NC's). Recent approaches to this problem in linguistics and natural language processing (NLP) are reviewed and criticized. We argue that the notion of "role nominal", which is at the interface of linguistic and extralinguistic knowledge, is crucial for characterizing NC's as well as other linguistic phenomena. We examine a number of constraints on the semantic interpretation rules for NC's. Proposals are made that should improve the capability of NLP systems to deal with NC's. I INTRODUCTION A. Problem Statement As a first approximation, we define a "nominal compound" (NC) as a string of two or more nouns having the same distribution as a single noun, as in example (1):
|
1984
|
109
|
Lexicon Features for Japanese Syntactic Analysis in Mu-Project-JE Yoshiyuki Sakamoto Electrotechnical Laboratory Sakura-mura, Niihari-gun, Ibaraki, Japan Masayuki Satoh The Japan Information Center of Science and Technology Nagata-cho, Chiyoda-ku, Tokyo, Japan Tetsuya Ishikawa Univ. of Library & Information Science Yatabe-machi, Tsukuba-gun, Ibaraki, Japan 0. Abstract In this paper, we focus on the features of a lexicon for Japanese syntactic analysis in Japanese-to-English translation. Japanese word order is almost unrestricted and Kakujo-shi (postpositional case particle) is an important device which acts as the case label (case marker) in Japanese sentences. Therefore case grammar is the most effective grammar for Japanese syntactic analysis. The case frame governed by Yougen and having surface case (Kakujo-shi), deep case (case label) and se
|
1984
|
11
|
SEMANTIC PARSING AS GRAPH LANGUAGE TRANSFORMATION - A MULTIDIMENSIONAL APPROACH TO PARSING HIGHLY INFLECTIONAL LANGUAGES Eero Hyvönen Helsinki University of Technology Digital Systems Laboratory Otakaari 5A 02150 Espoo 15 FINLAND ABSTRACT The structure of many languages with "free" word order and rich morphology like Finnish is rather configurational than linear. Although non-linear structures can be represented by linear formalisms it is often more natural to study multidimensional arrangement of symbols. Graph grammars are a multidimensional generalization of linear string grammars. In graph grammars string rewrite rules are generalized into graph rewrite rules. This paper presents a graph grammar formalism and parsing scheme for parsing languages with inherent configurational flavor. A small experimental Finnish
|
1984
|
110
|
HANDLING SYNTACTICAL AMBIGUITY IN MACHINE TRANSLATION Vladimir Pericliev Institute of Industrial Cybernetics and Robotics Acad. G. Bonchev Str., bl. 12 1113 Sofia, Bulgaria ABSTRACT The difficulties to be met with the resolution of syntactical ambiguity in MT can be at least partially overcome by means of preserving the syntactical ambiguity of the source language into the target language. An extensive study of the correspondences between the syntactically ambiguous structures in English and Bulgarian has provided a solid empirical basis in favor of such an approach. Similar results could be expected for other sufficiently related languages as well. The paper concentrates on the linguistic grounds for adopting the approach proposed. 1. I
|
1984
|
111
|
ARGUMENTATION IN REPRESENTATION SEMANTICS * Pierre-Yves RACCAH ERA 430 - C.N.R.S. Conseil d'Etat Palais Royal 75100 Paris RP ABSTRACT It seems rather natural to admit that language use is governed by rules that relate signs, forms and meanings to possible intentions or possible interpretations, in function of utterance situations. Not less natural should seem the idea that the meaning of a natural language expression conveys enough material to the input of these rules, so that, given the situation of utterance, they determine the appropriate interpretation. If this is correct, the semantic description of a natural language expression should output not only the 'informative content' of that expression, but also all sorts of indications concerning the way this expression may be used or interprete
|
1984
|
112
|
VOICE SIMULATION: FACTORS AFFECTING QUALITY AND NATURALNESS B. Yegnanarayana Department of Computer Science and Engineering Indian Institute of Technology, Madras-600 036, India J.M. Naik and D.G. Childers Department of Electrical Engineering University of Florida, Gainesville, FL 32611, U.S.A. ABSTRACT In this paper we describe a flexible analysis-synthesis system which can be used for a number of studies in speech research. The main objective is to have a synthesis system whose characteristics can be controlled through a set of parameters to realize any desired voice characteristics. The basic synthesis scheme consists of two steps: Generation of an excitation signal from pitch and gain contours and excitation of the linear system model described by linear prediction coefficients. We show that a number of basic studies such as time expansion/compression, pitch modifications
|
1984
|
113
|
INTERPRETING SYNTACTICALLY ILL-FORMED SENTENCES Leonardo LESMO and Pietro TORASSO Dipartimento di Informatica - Universita' di Torino Corso Massimo D'Azeglio 42 - 10125 Torino - ITALY ABSTRACT The paper discusses three different kinds of syntactic ill-formedness: ellipsis, conjunctions, and actual syntactic errors. It is shown how a new grammatical formalism, based on a two-level representation of the syntactic knowledge is used to cope with ill-formed sentences. The basic control structure of the parser is briefly sketched; the paper shows that it can be applied without any substantial change both to correct and to ill-formed sentences. This is achieved by introducing a mechanism for the hypothesization of syntactic structures, which is largely independent of the rules defining the well-formedness. On the contrary, the second level of syntactic knowledge embodies those rules and is used to validate the hyp
|
1984
|
114
|
AN INTERNATIONAL DELPHI POLL ON FUTURE TRENDS IN "INFORMATION LINGUISTICS" Rainer Kuhlen Universität Konstanz Informationswissenschaft Box 6650 D-7750 Konstanz 1, West Germany ABSTRACT The results of an international Delphi poll on information linguistics which was carried out between 1982 and 1983 are presented. As part of conceptual work being done in information science at the University of Constance an international Delphi poll was carried out from 1982 to 1983 with the aim of establishing a mid-term prognosis for the development of "information linguistics". The term "information linguistics" refers to a scientific discipline combining the fields of linguistic data processing, applied computer science, linguistics, artificial intelligence, and information science. A Delphi poll is a written poll of experts - carried out in this case in two phases. The results of the fir
|
1984
|
115
|
Machine Translation: its History, Current Status, and Future Prospects Jonathan Slocum Abstract Elements of the history, state of the art, and probable future of Machine Translation (MT) are discussed. The treatment is largely tutorial, based on the assumption that this audience is, for the most part, ignorant of matters pertaining to translation in general, and MT in particular. The paper covers some of the major MT R&D groups, the general techniques they employ(ed), and the roles they play(ed) in the development of the field. The conclusions concern the seeming permanence of the translation problem, and potential re-integration of MT with mainstream Computational Linguistics. Introduction Siemens Communications Systems, Inc. Linguistics Research Center University of Texas Austin, Texas We are now into the fourth decade of MT, and the
|
1984
|
116
|
Toward a Redefinition of Yes/No Questions Julia Hirschberg Department of Computer and Information Science Moore School/D0 University of Pennsylvania Philadelphia, PA 19104 ABSTRACT While both theoretical and empirical studies of question-answering have revealed the inadequacy of traditional definitions of yes-no questions (YNQs), little progress has been made toward a more satisfactory redefinition. This paper reviews the limitations of several proposed revisions. It proposes a new definition of YNQs based upon research on a type of conversational implicature, termed here scalar implicature, that helps define appropriate responses to YNQs. By representing YNQs as scalar queries it is possible to support a wider variety of system and user responses in a principled way. I INTRODUCTION If natural language interfaces to question-answering systems are to support a broad range of res
|
1984
|
12
|
THE SYNTAX AND SEMANTICS OF USER-DEFINED MODIFIERS IN A TRANSPORTABLE NATURAL LANGUAGE PROCESSOR Bruce W. Ballard Dept. of Computer Science Duke University Durham, N.C. 27708 ABSTRACT The Layered Domain Class system (LDC) is an experimental natural language processor being developed at Duke University which reached the prototype stage in May of 1983. Its primary goals are (1) to provide English-language retrieval capabilities for structured but unnormalized data files created by the user, (2) to allow very complex semantics, in terms of the information directly available from the physical data file; and (3) to enable users to customize the system to operate with new types of data. In this paper we shall discuss (a) the types of modifiers LDC provides for; (b) how information about the syntax and semantics of modifiers is obtained from users; and (c) how this information is used to process E
|
1984
|
13
|
Interaction of Knowledge Sources in a Portable Natural Language Interface Carole D. Hafner Computer Science Department General Motors Research Laboratories Warren, MI 48090 Abstract This paper describes a general approach to the design of natural language interfaces that has evolved during the development of DATALOG, an English database query system based on Cascaded ATN grammar. By providing separate representation schemes for linguistic knowledge, general world knowledge, and application domain knowledge, DATALOG achieves a high degree of portability and extendability. 1. Introduction An area of continuing interest and challenge in computational linguistics is the development of techniques for building portable natural language (NL) interfaces (See, for example, [9,3,12]). The investigation of this problem has led to several NL systems, including TEAM [7], IRUS [1], and INTELLECT [10], which separate domain-
|
1984
|
14
|
USES OF C-GRAPHS IN A PROTOTYPE FOR AUTOMATIC TRANSLATION, Marco A. CLEMENTE-SALAZAR Centro de Graduados e Investigación, Instituto Tecnológico de Chihuahua, Av. Tecnológico No. 2909, 31310 Chihuahua, Chih., MEXICO. ABSTRACT This paper presents a prototype, not completely operational, that is intended to use c-graphs in the translation of assemblers. Firstly, the formalization of the structure and its principal notions (substructures, classes of substructures, order, etc.) are presented. Next section describes the prototype which is based on a Transformational System as well as on a rewriting system of c-graphs which constitutes the nodes of the Transformational System. The following part discusses a set of operations on the structure. Finally, the implementation in its present state is shown. 1. INTRODUCTION. In the past [10,11], several kinds of representation have been used (strings, labelled trees, t
|
1984
|
15
|
QUASI-INDEXICAL REFERENCE IN PROPOSITIONAL SEMANTIC NETWORKS William J. Rapaport Department of Philosophy, SUNY Fredonia, Fredonia, NY 14063 Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260 Stuart C. Shapiro Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260 ABSTRACT We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself. In particular, we examine the representation of first-person beliefs of others (e.g., the system's representation of a user's belief that he himself is rich). Such beliefs have as an essential component "quasi-indexical pronouns" (e.g., 'he himself'), and, hence, require for their analysis a method of representing these pronominal constructions and performing valid inferences with them. The the
|
1984
|
16
|
The Costs of Inheritance in Semantic Networks Rob't F. Simmons The University of Texas, Austin Abstract Questioning texts represented in semantic relations requires the recognition that synonyms, instances, and hyponyms may all satisfy a questioned term. A basic procedure for accomplishing such loose matching using inheritance from a taxonomic organization of the dictionary is defined in analogy with the unification algorithm used for theorem proving, and the costs of its application are analyzed. It is concluded that inheritance logic can profitably be included in the basic questioning procedure. AI Handbook Study In studying the process of answering questions from fifty pages of the AI Handbook, it is striking that such subsections as those describing problem representations are organized so as to define conceptual dictionary entries for the terms. First, c
|
1984
|
17
|
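The "loose matching" described above, in which synonyms, instances, and hyponyms may all satisfy a questioned term, can be illustrated with a toy upward-inheritance check over a small taxonomy. The sketch below is illustrative only; the dictionary data and function name are invented, and the paper's actual procedure is defined by analogy with unification.

```python
# Illustrative sketch of "loose matching" through a taxonomy: a
# questioned term is satisfied by a synonym, an instance, or anything
# whose hypernym chain reaches the term. Data and names are invented.

HYPERNYM = {           # child -> parent
    "dog": "mammal",
    "mammal": "animal",
    "fido": "dog",     # an instance treated like a child node
}
SYNONYMS = {"canine": "dog"}

def satisfies(candidate: str, questioned: str) -> bool:
    candidate = SYNONYMS.get(candidate, candidate)
    questioned = SYNONYMS.get(questioned, questioned)
    node = candidate
    while node is not None:
        if node == questioned:
            return True
        node = HYPERNYM.get(node)      # inherit upward
    return False

print(satisfies("fido", "animal"))     # True
print(satisfies("canine", "mammal"))   # True
print(satisfies("animal", "dog"))      # False: no downward inheritance
```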
FUNCTIONAL UNIFICATION GRAMMAR: A FORMALISM FOR MACHINE TRANSLATION Martin Kay Xerox Palo Alto Research Center 3333 Coyote Hill Road Palo Alto California 94304 and CSLI, Stanford Abstract Functional Unification Grammar provides an opportunity to encompass within one formalism and computational system the parts of machine translation systems that have usually been treated separately, notably analysis, transfer, and synthesis. Many of the advantages of this formalism come from the fact that it is monotonic allowing data structures to grow differently as different nondeterministic alternatives in a computation are pursued, but never to be modified in any way. A striking feature of this system is that it is fundamentally reversible, allowing a to translate as b only if b could translate as a. I Overview A. Machine Translation A classical translating machine stands with one foot on the input text and one on the output. Th
|
1984
|
18
|
COMPUTER SIMULATION OF SPONTANEOUS SPEECH PRODUCTION Bengt Sigurd Dept of Linguistics and Phonetics Helgonabacken 12, S-223 62 Lund, SWEDEN ABSTRACT This paper pinpoints some of the problems faced when a computer text production model (COMMENTATOR) is to produce spontaneous speech, in particular the problem of chunking the utterances in order to get natural prosodic units. The paper proposes a buffer model which allows the accumulation and delay of phonetic material until a chunk of the desired size has been built up. Several phonetic studies have suggested a similar temporary storage in order to explain intonation slopes, rhythmical patterns, speech errors and speech disorders. Small-scale simulations of the whole verbalization process from perception and thought to sounds, hesitation behaviour, pausing, speech errors, sound changes and speech disorders are presented. 1. Introduction Several text producti
|
1984
|
19
|
CONVEYING IMPLICIT CONTENT IN NARRATIVE SUMMARIES Malcolm E. Cook, Wendy G. Lehnert, David D. McDonald Department of Computer and Information Science University of Massachusetts Amherst, Massachusetts 01003 ABSTRACT One of the key characteristics of any summary is that it must be concise. To achieve this the content of the summary (1) must be focused on the key events, and (2) should leave out any information that the audience can infer on their own. We have recently begun a project on summarizing simple narrative stories. In our approach, we assume that the focus of the story has already been determined and is explicitly given in the story's long-term representation; we concentrate instead on how one can plan what inferences an audience will be able to make when they read a summary. Our conclusion is that one should think about inferences as following from the audience's recognition of the central
|
1984
|
2
|
LIMITED DOMAIN SYSTEMS FOR LANGUAGE TEACHING S G Pulman, Linguistics, EAS University of East Anglia, Norwich NR4 7TJ, UK. This abstract describes a natural language system which deals usefully with ungrammatical input and describes some actual and potential applications of it in computer aided second language learning. However, this is not the only area in which the principles of the system might be used, and the aim in building it was simply to demonstrate the workability of the general mechanism, and provide a framework for assessing developments of it. BACKGROUND The really hard problem in natural language processing, for any purpose, is the role of non-linguistic knowledge in the understanding process. The correct treatment of even the simplest type of non-syntactic phenomena seems to demand a formidable amount of encyclopedic knowledge, and complex inferences therefrom. To date, the only systems which have simulated
|
1984
|
20
|
GTT : A GENERAL TRANSDUCER FOR TEACHING COMPUTATIONAL LINGUISTICS P. Shann J.L. Cochard Dalle Molle Institute for Semantic and Cognitive Studies University of Geneva Switzerland ABSTRACT The GTT-system is a tree-to-tree transducer developed for teaching purposes in machine translation. The transducer is a specialized production system giving the linguists the tools for expressing information in a syntax that is close to theoretical linguistics. Major emphasis was placed on developing a system that is user friendly, uniform and legible. This paper describes the linguistic data structure, the rule formalism and the control facilities that the linguist is provided with. 1. INTRODUCTION The GTT-system (Geneva Teaching Transducer) is a general tree-to-tree transducer developed as a tool for training linguists in machine translation and computational linguistics. The transducer is a specialized production system tail
|
1984
|
21
|
A PARSING ARCHITECTURE BASED ON DISTRIBUTED MEMORY MACHINES Jon M. Slack Department of Psychology Open University Milton Keynes MK7 6AA ENGLAND ABSTRACT The paper begins by defining a class of distributed memory machines which have useful properties as retrieval and filtering devices. These memory mechanisms store large numbers of associations on a single composite vector. They provide a natural format for encoding the syntactic and semantic constraints associated with linguistic elements. A computational architecture for parsing natural language is proposed which utilises the retrieval and associative features of these devices. The parsing mechanism is based on the principles of Lexical Functional Grammar and the paper demonstrates how these principles can be derived from the properties of the memory mechanisms. I INTRODUCTION Recently, interest has focussed on computational architectures employing massiv
|
1984
|
22
|
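One common way to illustrate "storing large numbers of associations on a single composite vector", as in the entry above, is to bind random ±1 role and filler vectors elementwise and superpose the results, retrieving a filler by unbinding with its role and taking the most similar known item. The sketch below shows only that generic distributed-memory idea; it is not Slack's specific architecture, and all names and parameters are invented.

```python
# Generic illustration (not Slack's architecture): many role-filler
# associations superposed on one composite vector, using random +/-1
# vectors, elementwise binding, and similarity-based lookup.
import random

DIM = 2048
random.seed(0)

def rand_vec():
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):                  # elementwise product
    return [x * y for x, y in zip(a, b)]

def superpose(vectors):          # elementwise sum
    return [sum(vals) for vals in zip(*vectors)]

def similarity(a, b):
    return sum(x * y for x, y in zip(a, b)) / DIM

items = {name: rand_vec() for name in ("subj", "verb", "kim", "runs")}
memory = superpose([bind(items["subj"], items["kim"]),
                    bind(items["verb"], items["runs"])])

# Unbinding with the "subj" role should be most similar to "kim".
probe = bind(memory, items["subj"])
best = max(("kim", "runs"), key=lambda n: similarity(probe, items[n]))
print(best)   # expected: 'kim'
```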
AUTOMATED DETERMINATION OF SUBLANGUAGE SYNTACTIC USAGE Ralph Grishman and Ngo Thanh Nhan Courant Institute of Mathematical Sciences New York University New York, NY 10012 Elaine Marsh Navy Center for Applied Research in Artificial Intelligence Naval Research Laboratory Washington, DC 20375 Lynette Hirschman Research and Development Division System Development Corporation / A Burroughs Company Paoli, PA 19301 Abstract Sublanguages differ from each other, and from the "standard language," in their syntactic, semantic, and discourse properties. Understanding these differences is important if we are to improve our ability to process these sublanguages. We have developed a semi-automatic procedure for identifying sublanguage syntactic usage from a sample of text in the sublanguage. We describe the results of applying this procedure to three text samples: two sets of medical documents and a set of equipment failure messages.
|
1984
|
23
|
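The semi-automatic procedure mentioned above amounts, at its core, to tallying how often grammatical constructions occur in different text samples and comparing the resulting profiles. The Python sketch below conveys only that general idea; the construction labels and counts are invented, and the real procedure works from parses of actual sublanguage corpora.

```python
# Sketch of the general idea only: tally how often syntactic
# constructions occur in different text samples and compare the
# profiles. Categories and counts are invented for illustration.
from collections import Counter

samples = {
    "medical notes":    ["NP->N N", "S->NP VP", "NP->N N", "FRAG->NP"],
    "failure messages": ["FRAG->NP", "FRAG->NP", "NP->N N", "S->NP VP"],
}

for name, constructions in samples.items():
    counts = Counter(constructions)
    total = sum(counts.values())
    profile = {c: round(n / total, 2) for c, n in counts.items()}
    print(name, profile)
```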