Abstract Wikipedia/Related and previous work/Natural language generation
Abstract Wikipedia will generate natural language text from an abstract representation. This is not a novel idea, and it has been tried a number of times before.
This page aims to collect different existing approaches. It tries to summarize the core ideas of the different approaches, their advantages and disadvantages, and points to existing implementations. This page (by and for the community) will help to choose which approach to focus on first.
Implementations
- Arria NLG
- Wikipedia: Arria NLG [ de ] [ en ] [ nn ]
- Website: https://www.arria.com/
- License: Proprietary, 30 patents apply
- Supported languages: English
- ASTROGEN
- Chimera
- Website: https://github.com/AmitMY/chimera
- License: MIT License
- Elvex
- FUF/SURGE
- GenI
- Website: http://kowey.github.io/GenI/
- GoPhi
- Website: https://github.com/rali-udem/gophi
- Grammar Explorer
- Grammatical Framework
- Wikipedia: Grammatical Framework [ en ] [ nn ]
- Website: https://www.grammaticalframework.org/
- License: GNU General Public License: see text
- Supported languages: Afrikaans, Amharic (partial), Arabic (partial), Basque (partial), Bulgarian, Catalan, Chinese, Czech (partial), Danish, Dutch, English, Estonian, Finnish, French, German, Greek ancient (partial), Greek modern, Hebrew (fragments), Hindi, Hungarian (partial), Interlingua, Italian, Japanese, Korean (partial), Latin (partial), Latvian, Maltese, Mongolian, Nepali, Norwegian bokmål, Norwegian nynorsk, Persian, Polish, Punjabi, Romanian, Russian, Sindhi, Slovak (partial), Slovene (partial), Somali (partial), Spanish, Swahili (fragments), Swedish, Thai, Turkish (fragments), and Urdu.
- jsRealB
- KPML
- Website: http://www.fb10.uni-bremen.de/anglistik/langpro/kpml/README.html
- Supported languages (as of 2014):
- More advanced: Czech, English, German?, Spanish
- Prototype: Bulgarian, Chinese, Dutch, Portuguese, Russian
- Less advanced: French, Greek, Japanese
- Linguistic Knowledge Builder
- Website: http://moin.delph-in.net/LkbTop
- Multimodal Unification Grammar
- NaturalOWL
- NLGen and NLGen2
- OpenCCG
- Website: http://openccg.sourceforge.net/
- rLDCP
- RosaeNLG
- Website: https://rosaenlg.org/
- Supported languages: English, French, German and Italian
- Semantic Web Authoring Tool (SWAT)
- Wikipedia: WYSIWYM [ en ] [ nn ]
- A SWAT is a tool that implements the WYSIWYM ("what you see is what you meant") interaction technique, in which humans develop formal representations by successively refining NLG outputs.
- Website: http://mcs.open.ac.uk/nlg/SWAT/
- Supported languages: OWL Simplified English
- SimpleNLG
- Website: https://github.com/simplenlg/simplenlg
- Supported languages: English, French
- SPUD
- Suregen-2
- Website: http://www.suregen.de/index.html
- Supported languages: German, English
- Syntax Maker
- Website: https://github.com/mikahama/syntaxmaker
- Supported languages: Finnish
- TGen
- Website: https://github.com/UFAL-DSG/tgen
- Universal Networking Language
- UralicNLP
- Website: https://uralicnlp.com/
- Code: https://github.com/mikahama/uralicNLP
- Supported languages: Finnish, Russian, German, English, Norwegian, Swedish, Arabic, Ingrian, Meadow & Eastern Mari, Votic, Olonets-Karelian, Erzya, Moksha, Hill Mari, Udmurt, Tundra Nenets, Komi-Permyak, North Sami, South Sami and Skolt Sami[1]
Theoretical background
Natural language generation [ de ] [ en ] [ es ] [ fr ] [ 日本語 ] [ nn ] [ 中文 ] is a sub-field of natural language processing. See the broader topic on Scholia.[2]
Pipeline model
In their 2018 survey,[3] Gatt[4] and Krahmer[5] begin by describing natural language generation as the "task of generating text or speech from non-linguistic input." They identify six sub-problems (after Reiter & Dale 1997, 2000[6]) (section 2, "NLG Tasks", pp. 70–82):[3]
- Content determination (content determination (Q5165077))
- Text structuring (document structuring (Q5287648))
- Sentence aggregation (aggregation (Q4692263))
- Lexicalisation (lexical choice (Q6537688))
- Referring expression generation (referring expression generation (Q7307185))
- Linguistic realisation (realization (Q7301282))
As of 24 July 2020, the six topics listed above had articles only in the English Wikipedia.
These six sub-problems segment the "pipeline": the "early" tasks are aligned with the purpose of the linguistic output, while the "late" tasks are aligned with its final linguistic form. In summary form: what (1), ordered (2) and segmented (3) how, with which words (4 & 5), in which forms (6). This summary does not clearly distinguish lexicalisation (4) from referring expression generation (REG) (5). The key idea in REG is to avoid repetition and ambiguity, or rather to manage the tension between these conflicting aims. This corresponds to the Gricean maxim (Grice, 1975[7]) that "speakers should make sure that their contributions are sufficiently informative for the purposes of the exchange, but not more so" (or, as Roger Sessions put it (1950), after Albert Einstein (1933): "everything should be as simple as it can be, but not simpler!").
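The six-stage pipeline can be sketched as toy Python functions, one per sub-problem. All names and the example data below are invented for illustration; none of this reflects the actual API of any system listed above.

```python
# Illustrative sketch of the six-stage Reiter & Dale NLG pipeline.
# All function names and data are hypothetical, for exposition only.

FACTS = {
    "name": "Marie Curie",
    "occupation": "physicist",
    "birth_year": 1867,
    "awards": ["Nobel Prize in Physics", "Nobel Prize in Chemistry"],
    "favourite_colour": "blue",  # filtered out by content determination
}

def determine_content(facts):
    # 1. Content determination: decide which facts to express.
    wanted = ("name", "occupation", "birth_year", "awards")
    return {k: v for k, v in facts.items() if k in wanted}

def structure_text(content):
    # 2. Text structuring: fix the order of the messages.
    order = ["occupation", "birth_year", "awards"]
    return [(k, content[k]) for k in order if k in content]

def aggregate(messages):
    # 3. Aggregation: merge occupation and birth year into one sentence plan.
    sentences, buffer = [], []
    for key, value in messages:
        if key in ("occupation", "birth_year"):
            buffer.append((key, value))
        else:
            sentences.append([(key, value)])
    if buffer:
        sentences.insert(0, buffer)
    return sentences

def lexicalise(key, value):
    # 4. Lexicalisation: map message types to words and phrases.
    if key == "occupation":
        return f"was a {value}"
    if key == "birth_year":
        return f"born in {value}"
    if key == "awards":
        return "won " + " and ".join(f"the {a}" for a in value)
    return str(value)

def refer(entity, first_mention):
    # 5. Referring expression generation: full name first, pronoun later.
    return entity["name"] if first_mention else "She"

def realise(subject, phrases):
    # 6. Linguistic realisation: assemble a grammatical sentence.
    return subject + " " + ", ".join(phrases) + "."

def generate(facts):
    content = determine_content(facts)
    sentence_plans = aggregate(structure_text(content))
    text, first = [], True
    for plan in sentence_plans:
        phrases = [lexicalise(k, v) for k, v in plan]
        text.append(realise(refer(facts, first), phrases))
        first = False
    return " ".join(text)

print(generate(FACTS))
# Marie Curie was a physicist, born in 1867.
# She won the Nobel Prize in Physics and the Nobel Prize in Chemistry.
```

Real systems differ mainly in how much of this pipeline is explicit: grammar-based realisers such as SimpleNLG cover only stage 6, while end-to-end neural systems collapse all six stages into one model.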
Content determination
Document structuring
Aggregation
Lexical choice
Referring expression generation
Realization
- “In linguistics, realization is the process by which some kind of surface representation is derived from its underlying representation; that is, the way in which some abstract object of linguistic analysis comes to be produced in actual language. Phonemes are often said to be realized by speech sounds. The different sounds that can realize a particular phoneme are called its allophones.”
- “Realization is also a subtask of natural language generation, which involves creating an actual text in a human language (English, French, etc.) from a syntactic representation.”
- English Wikipedia (Wikipedia contributors, "Realization", Wikipedia, The Free Encyclopedia, 26 May 2020, 02:46 UTC, <https://en.wikipedia.org/w/index.php?title=Realization&oldid=958866516> [accessed 31 August 2020].)
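As a toy illustration of the NLG sense of realisation, a few lines of Python can derive a surface string from an abstract sentence specification. The feature names and the naive English morphology here are assumptions of this sketch, not the API of any realiser listed above.

```python
# Hedged sketch: surface realisation from a tiny syntactic representation.
# The feature names ("subject", "verb", "tense", "number") are invented here.

def realise(spec):
    """Derive a surface string from an abstract sentence specification."""
    subj = spec["subject"]
    if spec.get("number") == "plural":
        subj = subj + "s"            # naive English plural
        verb = spec["verb"]          # plural agreement: bare form
    else:
        verb = spec["verb"] + "s"    # 3rd person singular: "the dog barks"
    if spec.get("tense") == "past":
        verb = spec["verb"] + "ed"   # naive regular past, no agreement needed
    return f"The {subj} {verb}."

print(realise({"subject": "dog", "verb": "bark", "number": "plural"}))
# -> "The dogs bark."
```

A production realiser such as SimpleNLG replaces these string hacks with full morphological lexicons and syntactic rules, but the input/output contract is the same: abstract specification in, grammatical surface text out.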
Black-box approach
In a later survey, Gârbacea and Mei[8] suggested “Neural language generation” as an emerging sub-field of NLG. Eleven of the papers cited in their survey have titles with “neural language” in them, the earliest from 2016 (Édouard Grave, Armand Joulin, and Nicolas Usunier)[9]. The earliest citation in which “neural language generation” appears is from 2017 (Jessica Ficler and Yoav Goldberg)[10].
As of mid-2020, "neural language generation" was not mature enough to generate natural-language renditions of language-neutral content.
References
External links
- ACL Special Interest Group on Natural Language Generation. The ACL is the Association for Computational Linguistics.
- Ehud Reiter's Blog. Ehud Reiter has no English Wikipedia article (apart from his user page).
- Natural Language Generation (CLAN Group), School of Natural and Computing Sciences, The University of Aberdeen.
- Institute for Language, Cognition and Computation (ILCC), School of Informatics, The University of Edinburgh.
- Harvard NLP, Harvard University.
- The Interaction Lab, School of Mathematical and Computer Sciences, Heriot-Watt University.
- Institute of Linguistics and Language Technology, University of Malta (Albert Gatt, Director).
- The Open University Natural Language Generation Group.
- TALN Research Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona.
- The Natural Language Processing Group, The University of Sheffield.
- The Natural Language Group, Information Sciences Institute, University of Southern California.
- SyNaLP (Symbolic and statistical NLP), Laboratoire Lorrain d'Informatique et ses Applications (LORIA).
- Paul G. Allen School of Computer Science and Engineering, University of Washington.
Notes
- ↑ https://models.uralicnlp.com/nightly/
- ↑ The Scholia view on Natural-language generation lacked the standard sources and leading authors on 27 July 2020. Instead, see Google Scholar.
- ↑ a b c Gatt, Albert; Krahmer, Emiel (January 2018), "Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation", Journal of Artificial Intelligence Research 61: 65–170, archived from the original on 2020-06-23, retrieved 2020-07-24
- ↑ Gatt's publications
- ↑ Emiel Krahmer (Q51689943) selected publications
- ↑ a b Reiter, EB; Dale, R (2000), Building Natural Language Generation Systems (PDF), Cambridge University Press, archived from the original (PDF) on 2019-07-11, retrieved 2020-07-27
- ↑ a b Grice, H. Paul (1975), Logic and conversation (PDF), retrieved 2020-08-10
- ↑ a b Gârbacea, Cristina; Mei, Qiaozhu, Neural Language Generation: Formulation, Methods, and Evaluation (PDF), pp. 1–70, retrieved 2020-08-08,
Compared to the survey of (Gatt and Krahmer, 2018), our overview is a more comprehensive and updated coverage of neural network methods and evaluation centered around the novel problem definitions and task formulations.
- ↑ a b Grave, Édouard; Joulin, Armand; Usunier, Nicolas (2016), Improving neural language models with a continuous cache (PDF)
- ↑ a b Ficler, Jessica; Goldberg, Yoav (2017), "Controlling linguistic style aspects in neural language generation" (PDF), Proceedings of the Workshop on Stylistic Variation: 94–104. Published slightly earlier that year: Tran, Van-Khanh; Nguyen, Le-Minh (2017), Semantic Refinement GRU-based Neural Language Generation for Spoken Dialogue Systems (PDF)
- ↑ Gardent, Claire; Shimorina, Anastasia; Narayan, Shashi; Perez-Beltrachini, Laura (2017), "The WebNLG Challenge: Generating Text from RDF Data" (PDF), Proceedings of the 10th International Conference on Natural Language Generation: 124–133