A Systematic Literature Review on Text Generation Using Deep Neural Network Models

In recent years, significant progress has been made in text generation. The latest text generation models are revolutionizing the domain by generating human-like text. Text generation has recently gained wide popularity in many domains such as news, social networks, movie script writing, and poetry composition, to name a few. Its application in various fields has attracted considerable interest from the scientific community. To the best of our knowledge, there is a lack of an extensive and up-to-date review of deep learning models for text generation. Therefore, this paper brings together the relevant work in a systematic mapping study, highlighting key contributions from various researchers over the years and focusing on past, present, and future trends. In this work, we identified 90 primary studies from 2015 to 2021 using the PRISMA framework. We also identified research gaps that need further exploration by the research community. Finally, we provide future directions for researchers and guidelines for practitioners based on the findings of this review.


I. INTRODUCTION
Text generation is a field of study in Natural Language Processing (NLP) that combines computational linguistics and artificial intelligence to generate new text. It is the process of generating grammatically and semantically correct synthetic text. This process involves training a model that takes input data, learns the context from the input, and generates new text related to the domain of the input data. The generated text should satisfy the basic language structure and convey the desired message [1]. Generating and evaluating grammatically, semantically, and syntactically correct text is challenging because both text generation and its evaluation are open-ended. Thus, this SLR discusses five research aspects associated with text generation: the deep learning approaches for text generation, the quality metrics for evaluating generated text, the training datasets used in the domain, the languages on which text generation is performed, and the application areas of text generation.
Text generation can be performed at different granularities of text, i.e., at the character, word, and sentence level [2]. Sentence-level text generation analyzes the entire text at a fine-grained level and learns the relationships between sentences and their context. Word-based text generation analyzes the structure of a sequence and predicts the probability of the next word in a given text. Similarly, at the character level, the model operates on individual characters rather than the entire document.
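The three granularities can be sketched with a naive tokenization example (the splitting below is purely illustrative; real models use trained tokenizers and sentence segmenters):

```python
text = "Cat likes milk"

# Character-level: the model predicts one character at a time
char_tokens = list(text)

# Word-level: the model predicts the probability of the next word
word_tokens = text.split()

# Sentence-level: whole sentences are the units of analysis
# (a naive split on ". " stands in for a real sentence segmenter)
sentences = "Cat likes milk. It purrs.".split(". ")
```
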
Automatic text generation became possible due to recent developments in computational resources coupled with advancements in deep learning techniques. Deep learning is a field of machine learning that uses artificial neural networks and representation learning. Text generation approaches can broadly be categorized into three types of deep learning models: 1) Vector-Sequence Model - the input is a fixed-size vector, whereas the output can vary; for instance, this model can be used for caption generation from images [3]. 2) Sequence-Vector Model - the input is of variable size and the output is a fixed-size vector; classification is an example of this model [4]. 3) Sequence-to-Sequence Model - both input and output are of variable size; it is the most widely used variant of text generation models, and language translation belongs to this type [5]. Above all, deep learning has contributed immensely to different aspects of natural language generation for various tasks including dataset balancing [6], [7], next-word prediction and text suggestion in chatting, answer generation in question answering systems and chatbots [8], [9], machine translation [10], [11], text summarization [12]-[14], text classification [15], [16], text generation for topic modeling [17], dialogue generation [18], sentiment analysis [19], [20], poetry writing [21], script writing for movies [1], [22], and others.
The quality of generated text is essential to evaluate. Evaluating the quality of the generated text governs the model's performance and measures the diversity between the generated and original text. The quality metrics are also known as evaluation methods. There are two ways to assess the quality of generated text: human-centric (HC) and machine-centric (MC) [23]. The human-centric evaluation method involves language and domain experts who evaluate the generated text. It is expensive in terms of both time and cost and is prone to human error. On the other hand, the machine-centric evaluation method, also known as objective quality assessment, is widely adopted in the literature. It includes various evaluation metrics: Metric for Evaluation of Translation with Explicit ORdering (METEOR), Bilingual Evaluation Understudy (BLEU), Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Consensus-based Image Description Evaluation (CIDEr), the National Institute of Standards and Technology (NIST) score, Word Error Rate (WER), word perplexity, and BERTScore. The machine-centric method saves time and cost, but the quality of an objective evaluation metric is highly language-specific.

Various deep learning architectural frameworks are widely used in the literature to implement deep learning models. The Recurrent Neural Network (RNN) [24] is one of them. It is a class of neural networks that uses the output of previous states as input to future states, and it was the first architecture to preserve the outputs of past states. One problem with RNNs is that they forget previous outputs over time due to the vanishing gradient.
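The recurrence can be sketched in a few lines of pure Python (scalar weights for readability; real RNNs use weight matrices learned by backpropagation):

```python
import math

def rnn_step(h_prev, x, W_h, W_x):
    """One recurrent step: the new hidden state mixes the previous
    state with the current input, h_t = tanh(W_h * h_prev + W_x * x)."""
    return math.tanh(W_h * h_prev + W_x * x)

# Feeding a sequence: each state carries a fading trace of the past.
# Repeated multiplication by |W_h| < 1 is the intuition behind the
# vanishing gradient: early inputs barely influence late states.
h = 0.0
for x in [1.0, 0.5, -0.3, 0.8]:
    h = rnn_step(h, x, W_h=0.4, W_x=0.7)
```
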
A Bidirectional RNN [25] uses two RNN layers that read the sequence in both directions, i.e., forward and backward, and combines their outputs. This is helpful when the current state depends not only on the previous state but also on the future state. One special class of RNN is the Long Short-Term Memory (LSTM) [26] network, which retains information from previous states over a very long period and forgets irrelevant information. The Gated Recurrent Unit (GRU) also overcomes the vanishing gradient problem of RNNs; the GRU is a simplified version of the LSTM.
The Generative Adversarial Network (GAN) works on the concept of a minimax game in which a discriminator predicts whether a sample comes from the training set or is produced by the generative network, while the generator tries to maximize the discriminator's mistakes.
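This minimax game is commonly written as the following objective (in the notation of the original GAN formulation):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here $D(x)$ is the discriminator's estimate that $x$ is real, and $G(z)$ is the generator's sample from noise $z$.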
GPT-2 was proposed by Radford et al. [27]. GPT-2 is a transformer-based model with 1.5 billion parameters. It is trained on 40 GB of Internet text scraped from eight million web pages. It is a revolutionary model in text processing, with an exceptional, human-like ability to generate long sequences.
In June 2020, OpenAI released the third version of GPT, which is 100 times larger than the previous model. GPT-3 is trained on 499 billion tokens of web data and has 175 billion parameters and 96 layers. Its generative power is such that it may outperform other models in many different tasks, such as text generation, zero-shot learning, and one-shot learning [28]. However, the model is not publicly available; instead, API access is provided only to those who pay for it [29].
Usually, these pre-trained (LSTM-based as well as GPT-based) models are used to generate text in different domains. For example, to generate a movie script, one can take a pre-trained GPT model and customize its generation capability by fine-tuning it on a movie-script dataset. Once the data is gathered and the model is customized to generate domain-specific text, the next step is to assess the quality of the generated text. The LSTM was originally introduced as a character-level text generation model.

A. BACKGROUND
The basic strategy of text generation is first to train a language model on a large amount of sequence or text data; the model is then capable of generating the next character, or multiple characters, in a sequence given the previous characters as input. For example, it generates the next character 'k' for the given sequence 'Cat likes mil', as shown in Figure 1. Following the example mentioned in [6], the conditioning text (the initial text fed to the LSTM network for predicting the next character in the sequence) is 'Cat likes mil', and the LSTM model is trained on Wikipedia text or a domain related to the conditioning text. The probability of the next character is predicted using the softmax activation function at the output layer, defined in Equation 1:

p_i = exp(x_i) / Σ_{j=1}^{n} exp(x_j)     (1)
where x_i is the LSTM score for character i being the next character in the conditioning text. Each x_i is not itself a probability score, so the softmax converts the LSTM scores into probabilities. The actual magic of text generation is hidden in the sampling strategy. The generated text would be almost identical to the original text if the next character were always the one with the highest softmax probability. Thus, to introduce novelty and creativity, some randomness is introduced into the generated text. The sampling strategy introduces such randomness using a temperature value.
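Equation 1 can be sketched directly in Python (the LSTM scores below are hypothetical values, for illustration only):

```python
import math

def softmax(scores):
    """Equation 1: p_i = exp(x_i) / sum_j exp(x_j).
    The max score is subtracted first for numerical stability."""
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical LSTM scores for candidate next characters after "Cat likes mil"
chars = ['k', 'd', 'e']
probs = softmax([4.0, 1.0, 0.5])
best = chars[probs.index(max(probs))]   # 'k' receives the highest probability
```
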
Suppose P_original is the original probability distribution at the softmax output. The α term is defined as

α_i = log(P_original,i) / temperature     (2)

Once α is computed, P_revised is defined as

P_revised,i = exp(α_i) / Σ_{j=1}^{n} exp(α_j)     (3)

where n is the number of elements in the original distribution and the temperature is an arbitrary value ranging from any non-zero value up to 1.
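The temperature reweighting above can be sketched as follows (the distribution is a made-up example; the α and renormalisation steps follow the definitions given in the text):

```python
import math
import random

def reweight(p_original, temperature):
    """Temperature reweighting: alpha_i = log(p_i) / T, followed by a
    softmax over the alphas to obtain the revised distribution."""
    alphas = [math.log(p) / temperature for p in p_original]
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

p = [0.6, 0.3, 0.1]
sharp = reweight(p, 0.2)   # low temperature: close to greedy decoding
same = reweight(p, 1.0)    # T = 1 reproduces the original distribution

# Sample the index of the next character from the revised distribution
idx = random.choices(range(len(p)), weights=sharp)[0]
```

Lower temperatures concentrate probability mass on the likeliest character (less novelty), while temperatures near 1 keep the original diversity.
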

B. RELATED SURVEYS
A handful of nine surveys on the topic of text generation have been published, as shown in Table 1. We found that five of these papers addressed a single aspect. Li et al. [30] provided an extensive systematic literature review of deep learning techniques along with their data types. The authors mainly focused on encoder- and decoder-based deep learning architectures. Besides that, different data types (unstructured, structured, and multimedia input) were discussed along with the best-fitting transformer-based models. Similarly, Gatt et al. [31] presented an extensive systematic review of text generation applications. Lu et al. [32] provided a systematic literature review of evaluation metrics for text generation. Lastly, the studies [33] and [34] conducted extensive reviews of text generation solely on the basis of quality metrics.
Besides that, a few researchers covered two aspects of text generation. The works in [1] and [31] provided extensive literature reviews of quality metrics and techniques. Another review focused on three aspects, namely quality metrics, techniques, and applications, and provided an overview of text generation [35]. Similarly, the study in [1] reviewed multiple aspects: quality metrics, datasets, techniques, and applications. These systematic literature reviews have two major limitations. First, seven of the articles were not peer-reviewed. Second, none of these attempts provided a comprehensive review of datasets, quality metrics, languages, deep learning approaches, and trends in deep learning based text generation within a single study.
The limitations and findings shown in Table 1 provide the basis for conducting a comprehensive review of text generation in deep learning. Therefore, our study focuses on articles published between 2015 and 2021. Ninety baseline articles are reviewed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol for systematic literature reviews. We investigate text generation from five different aspects, namely deep learning approaches, quality metrics, datasets, languages, and applications, as shown in Figure 2.
The main contributions of this study are as follows: 1) a systematic map of 90 primary studies based on the PRISMA framework; 2) an analysis of text generation from five different aspects, namely deep learning approaches, quality metrics, datasets, languages, and applications; and 3) an overview of the challenges, opportunities, and recommendations of the field for future research. The major significance of this Systematic Literature Review (SLR) is that it provides an in-depth analysis and the most extensive and up-to-date body of knowledge on text generation across these five research aspects. Moreover, this study also covers the major challenges and future research directions in the text generation domain. To the best of our knowledge, no existing SLR on text generation covers all of these aspects together in one comprehensive study.
The rest of the paper is organised as follows. Section II describes the research design of this SLR. Section III covers the findings for the RQs and presents the most relevant articles based on the quality assessment criteria. Section IV presents the identified challenges and research gaps. Section V presents recommendations and future research directions, and finally Section VI summarizes the SLR.

II. RESEARCH DESIGN
In this study, we applied systematic mapping as the research methodology for reviewing the literature. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines given by [38]. This SLR consists of four major steps: planning and searching for primary studies, collection of studies, data extraction, and synthesis of data. The first step identifies the research questions and objectives (stated in Section II-A). The search strategy step involves the criteria for selecting studies, the study selection procedure, keyword formulation for the search queries, and the quality assessment criteria for the extracted studies (addressed in Section II-B). The data extraction step involves strategies for extracting data from the selected studies (see Sections II-C and II-D for details). Finally, the last step involves quality assessment (see Section II-E for more details).

A. RESEARCH QUESTIONS
The main purpose of this SLR is to explore various techniques for text generation using deep learning. The following five research questions (RQs) were raised to achieve this aim, as shown in Table 2.

B. RESEARCH OBJECTIVES
This study has the following five research objectives:
• To investigate the existing traditional and advanced deep learning based text generation approaches/techniques
• To explore various performance metrics used for evaluating text generation models
• To investigate various evaluation methods for measuring the quality of generated text
• To review the recent application domains where text generation is being applied
• To discuss the major challenges and future research directions in the text generation domain

C. SEARCHING STRATEGY TO RETRIEVE PRIMARY STUDIES
The majority of studies describe their work using the terms text generation or automatic text generation. Thus, various search keywords were formulated to retrieve the related literature from six reliable and high-quality academic databases, namely Web of Science (WoS), Scopus, IEEE Xplore, SpringerLink, ScienceDirect, and the ACM Digital Library. Five of the authors prepared a list of relevant keywords for searching the literature on "text generation techniques in deep learning" in the selected databases. Table 3 shows the keywords used to build the queries. Keywords within a group are paired using the OR operator, whereas groups are paired using the AND operator (see Table 3) to form a search query. The last row of Table 3 shows how keywords from different groups are concatenated into the query that was executed in all six bibliographic databases. The query was applied to the article title, abstract, and keywords to find relevant articles published in English from January 2015 to October 2021. The search query identified 264 studies when applied to the six selected bibliographic databases, as shown in Figure 4. Identical studies retrieved from different databases were de-duplicated in EndNote, retaining a single distinctive copy of each primary sample. During the removal of duplicate records, 50 studies were excluded.
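The OR-within-group, AND-across-groups construction can be sketched as follows (the keyword groups here are illustrative; the actual groups are listed in Table 3):

```python
def build_query(groups):
    """Join keywords with OR inside each group, then join the
    parenthesised groups with AND, as described for Table 3."""
    clauses = ["(" + " OR ".join(f'"{kw}"' for kw in group) + ")"
               for group in groups]
    return " AND ".join(clauses)

# Hypothetical keyword groups
groups = [
    ["text generation", "natural language generation"],
    ["deep learning", "neural network"],
]
query = build_query(groups)
```
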

D. ARTICLE SCREENING AND SELECTION CRITERIA
The remaining 214 studies were analyzed after the removal of duplicate records. Screening was performed on the basis of the title, abstract, and keywords of the retrieved articles. Four authors screened these studies using the inclusion and exclusion criteria. A majority vote was used to resolve inconsistencies about including or removing articles, and a final decision was taken by all the authors in the event of a tie. Figure 4 shows the screening of the articles based on title, abstract, and keywords. In total, only 90 of the 264 studies were selected as primary studies; the remaining articles were excluded. The distribution of the reviewed conference and journal papers is shown in Figure 5.
Criteria were established for excluding 117 articles. First, the purpose of many excluded studies was to extract information rather than to generate text. Second, many of the studies were about text classification, which is out of our scope. Third, a number of articles were written in languages other than English. Lastly, studies that were not peer-reviewed were excluded from the analysis to maintain the quality of this SLR. We used the following inclusion criteria:
• The article must include a generative model for text only
• The article must be published from 2015 to 2021
• The article must be published in a journal or a conference
• The article must be published in the English language
We used the following exclusion criteria:
• Articles that used NLP or machine learning techniques but did not propose or use any text generation technique are excluded
• Articles published in languages other than English are excluded

E. QUALITY ASSESSMENT
Quality assessment criteria (QAC) were used to assess the quality of the 90 selected studies, i.e., whether our review objectives could be achieved by a selected primary study. To determine the consistency of the selected primary studies, a set of questions was prepared by all the authors. Table 4 lists the 10 questions used to check the quality of the studies. Each question can be answered Yes or No, with weights of 1 and 0 respectively. The selected primary studies were reviewed by a group of four authors, and the results were evaluated after the quality assessment of each primary study. Finally, before including any study in the review, every question was checked by all the authors of the current research for every study. The quality review process did not rule out any study, as all the studies satisfied the quality assessment questions; this review therefore included all 90 selected studies.
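The Yes/No weighting described above amounts to a simple sum (the answers below are hypothetical; the actual 10 questions are listed in Table 4):

```python
def quality_score(answers):
    """Each Yes carries weight 1 and each No weight 0, so a study's
    quality score is simply the number of Yes answers."""
    return sum(1 for a in answers if a)

# Hypothetical reviewer answers for one primary study (10 questions)
answers = [True, True, False, True, True, True, True, False, True, True]
score = quality_score(answers)
```
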

III. SYSTEMATIC MAPPING STUDY RESULTS
In this section, we critically analyze the 90 primary studies from five different aspects, namely deep learning approach, quality metric, dataset, language, and application.

RQ1: What are the existing traditional and advanced deep learning approaches for text generation?

Two categories of deep learning approaches are used to generate text: traditional deep learning approaches (TDLA) and advanced deep learning approaches (ADLA).
In traditional approaches, many deep learning based models along with NLP techniques were employed to generate text. The most common text generation models are RNN [39], LSTM [40], and CNN [41]-[43]. The text generation domain has seen some limitations due to the discrete nature of text [43]. Language generation requires a lot of effort, domain knowledge, and skill to learn the different semantic and contextual meanings of text, and every language has its own standard rules. Therefore, a CNN model built for English may not perform well on the Urdu language. Moreover, capturing the contextual meaning of generated text was a major issue for traditional deep learning based text generation algorithms. Thus, to overcome the problems of traditional approaches, many advanced deep learning models were introduced, such as the transformer [44], [45], BERT [46], GPT-2 [47], and GPT-3 [28]. These latest models are context-dependent algorithms with an attention mechanism [47]. Moreover, many approaches and models have been employed to generate text in different languages, which can be categorized into three main groups. Table 5 groups the papers into traditional approaches, advanced deep learning approaches, and combinations of both: 47 of the 90 papers employed advanced approaches, 40 papers used traditional approaches, and 3 papers used both. Moreover, we found a drastic increase in the use of both traditional and advanced text generation approaches after 2018, as shown in Figure 6. Many researchers have been working on text generation in various languages. The most often used algorithms for traditional approaches in the reviewed studies are LSTM and RNN.

RQ2: What are the various metrics for evaluating the performance of text generation models?

Generated text can be evaluated in two ways: human-centric and machine-centric.
In this SLR, we categorize studies into three groups based on the approach used to assess the quality of the generated text. As shown in Table 6, 64 of the 90 papers evaluated generated text using a machine-centric approach, 3 papers relied on human experts, and 14 studies used both human- and machine-centric approaches. However, we found 9 studies that did not take any measures to evaluate the generated text.
RQ3: What are the major standard datasets available in the literature for text generation?
The standard datasets for text generation have been extracted based on the following characteristics:
1) Availability: private/public
2) Size: number of words, sentences, reviews, etc.
3) Type: sentence-, paragraph-, document-level, and question/answer
4) Format: CSV, JSON, XML, files
5) Annotation: labeled/unlabelled
6) Quality: raw or pre-processed
The details of the datasets are given in Table 7. Nine of the 90 papers used private datasets that are not publicly available [17], [50], [67], [68], [88], [90], [96], 2 used both public and private datasets [6], [85], and 79 studies used publicly available datasets, as shown in Table 7. In addition, 33 datasets were at the sentence level, 14 at the paragraph level, 2 at the document level, 1 was of question/answer type, and 1 study did not mention the type of dataset explicitly.
RQ4: Which languages have been the focus of text generation in deep learning?

Many text generation models and approaches have been employed to generate text in different languages. Eleven languages are found in the literature, namely English, Chinese, Bengali, Arabic, Russian, Korean, Slovak, Spanish, Czech, German, and Macedonian. This information can help researchers identify which languages lack research in this domain, which languages need more focus, and what the possible deep learning approaches are for contributing to a specific language. As can be seen from Figure 7, 74% of the articles worked on the English language, 7% on Chinese, 4% on Bengali, 2% each on Arabic and Russian, and 1% on each of the remaining languages. Moreover, a detailed summary of languages by deep learning approach is shown in Figure 10.
Many resources are available for the English and Chinese languages, such as datasets, lexical, syntactic, and POS-tagging resources, and programming development support. Therefore, both languages are known as rich-resource languages. On the other hand, languages such as Bengali, Arabic, Russian, Korean, Slovak, Spanish, Czech, German, and Macedonian are known as low-resource languages because the availability of resources is scarce.
A brief summary of languages by deep learning technique is given in Table 8, Table 9, and Figure 10. A variety of traditional and advanced deep learning based approaches have been applied to English text generation, as shown in Table 9, yet the other languages need more attention, as summarized in Table 8.
In addition, a brief year-wise summary of languages is given in Table 8. Research on English text generation has been growing fast since 2018, with 74 studies found. After English, studies on Chinese text generation are growing steadily, with 6 studies found. To sum up, work on text generation is greatly needed for the remaining languages, as there is only a single study for each of them.

IV. CHALLENGES AND RESEARCH GAPS
We identified several challenges in the text generation domain and mapped them to the research questions, as shown in Table 10: complex language constructs (RQ1), the demand for diversity (RQ1), improper selection of quality metrics (RQ2), limited resources (RQ3), scarcity of datasets (RQ3), un-standardized sources of datasets (RQ3), and exploration of low-resource languages (RQ4).
1) Complex language constructs. A language construct is a piece of language syntax, and every language has its own constructs, so they vary from language to language. For example, the construct for an English sentence is Subject + Verb + Object, whereas for the Urdu language it is Subject + Object + Verb. There is no proper way to deal with complex languages that require rich construct morphology, delexicalised verbs, and abbreviations. Currently, many researchers adopt a translation method in which a low-resource language is converted to English. However, this mostly yields a rigid word order and relatively poor morphology, because techniques developed for English may not work for other low-resource languages [134].
2) The demand for diversity. Most of the text generation approaches discussed in this SLR suffer from redundant and poor-quality generated text [21], [50].
3) Improper selection of quality metrics. We observed in this survey that quality metrics were often selected improperly, so the quality of the generated text was not properly evaluated. For example, the BLEU metric measures the overlap between two sentences, so it works well for short sentence-based problems.
5) Scarcity of datasets. The studies on Bengali text generation [88], [96], [137] all used their own extracted datasets; the major reason is that no publicly available dataset exists for the Bengali language. Similarly, no dataset is available for the Czech language; we found only one paper for Czech, which used a multilingual dataset [122].
6) Un-standardized sources of datasets. We found a wide variety of dataset sources, such as Quora, GitHub, Kaggle, and researchers' own websites. These datasets often contain a large amount of noise, so researchers need to apply extensive preprocessing techniques to get the best results from them [138].
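The BLEU limitation noted under improper metric selection can be seen from its core computation, clipped n-gram precision; the sketch below shows the unigram case (real BLEU combines n-gram precisions up to length 4 with a brevity penalty):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision, the core of BLEU-1: each candidate
    word counts at most as often as it appears in the reference."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / max(sum(cand.values()), 1)

# Surface overlap is rewarded even when meaning differs, which is why
# BLEU can misjudge longer or paraphrased generated text.
score = unigram_precision("the cat sat on the mat", "the cat is on the mat")
```
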

V. RECOMMENDATIONS AND FUTURE RESEARCH DIRECTIONS
In this section, we highlight various research directions in the field that require considerable effort to improve the performance of the text generation domain. These directions are presented below.

A. STANDARDIZED DATASET
Researchers need to put effort into developing benchmark datasets for Arabic, Chinese, Bengali, Russian, Korean, Slovak, Spanish, Czech, German, Macedonian, and other low-resource languages. Standardized datasets can be formed at the document level, in question/answer form, and at the paragraph level. In addition, we recommend that researchers explore benchmark datasets such as Books3, Stack Exchange, PubMed Abstracts, and CC-2021-04 for text generation in various applications such as article summarization and dealing with imbalanced data.

B. QUALITY METRICS
Our study showed that researchers evaluated generated text using machine- and human-based approaches. Nonetheless, a considerable number of research articles failed to evaluate the quality of the generated text [17], [41], [50], [88], [118], [137], although they reported excellent results. These results may be biased, in that experiments that obtained low results may not have been reported. To deal with this issue, we recommend a standard way of evaluating generated text that depends on the nature of the generated text; for instance, for text summarization the ROUGE quality metric is recommended. Another issue found in the literature is the selection of inappropriate metrics for assessing text generation quality. For example, the BLEU metric measures the overlap between two sentences, so it works well for short sentence-based problems. However, it may not capture semantic meaning and does not map well to human judgement. With this in mind, AMR-based generation (sequence-to-sequence or graph-to-sequence) is known for its semantics, yet many authors have validated the quality of such generated text using the BLEU metric.
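For the summarization case recommended above, ROUGE-1 recall can be sketched as follows (full ROUGE also reports precision, F1, and longer n-gram variants):

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: the fraction of reference unigrams that the
    generated summary recovers."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    overlap = sum(min(ref[w], cand[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)

# The short candidate recovers 3 of the 6 reference tokens
score = rouge1_recall("the cat sat", "the cat sat on the mat")
```
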

C. TEXT GENERATION IN LOW-RESOURCE LANGUAGES
In this SLR, we observed a high demand and broad scope for text generation in low-resource languages. The majority of studies worked on English text generation; nonetheless, we found that 23% of researchers also worked on low-resource languages. Low-resource languages such as Arabic, Spanish, Turkish, Slovak, Hindi, Russian, Macedonian, Czech, Bengali, Korean, Urdu, and many others require more attempts at text generation, as loads of online text exist thanks to social media and news websites. Thus, future researchers may explore and benefit from these languages for text generation through deep neural network models. Moreover, advanced deep learning approaches for text generation such as GPT-2, BERT, and ELMo should be further explored by researchers in this field, as they have outperformed other approaches for the English language [6].

D. USE OF GPT3 FOR TEXT GENERATION
Existing studies used either traditional or advanced deep learning approaches for text generation. Researchers can thus focus on generating text using GPT-3. GPT-3 is trained on 499 billion tokens of web data and has 175 billion parameters and 96 layers. Its generative power is such that it may outperform other algorithms in many different tasks, such as text generation [28].

E. NLP BASIC OPERATIONS IN LOW-RESOURCE LANGUAGES
Standard NLP operations like POS-tagging, tokenization, lemmatization, stemming, word-sense resolution, and related tasks are extremely important for ensuring the quality of generated text. In low-resource languages there is an enormous scarcity of these standard basic capabilities. Researchers are highly encouraged to come forward and contribute in these areas to further democratize the Internet through increased use of local languages alongside English and other resource-rich languages. It is important to mention that many mature algorithms are available for these NLP tasks, and data availability is not an issue; only a few concentrated efforts are required to address basic NLP tasks in low-resource languages. These efforts would certainly promote low-resource languages on the Internet.

VI. CONCLUSION
Text generation is the creative side of AI. For decades, we computer scientists have been promising humanity artificial intelligence on par with artificial general intelligence. To fulfil these promises, we have to ensure that AI can generate text that passes the Turing test. Only in the past few years have we observed that the dream of synthetic text generation is very close to reality, albeit only for a few resource-rich languages.
Text generation has gained wide popularity because a profusion of applications use it and because text is abundantly available online, thanks to social media, news outlets, and other text-heavy sources. A few applications that benefit from text generation include character/word/sentence prediction while typing an email or chatting, chatbots, movie/drama script writing, poetry generation, and many others. Moreover, text generation has been attracting the attention of researchers in application areas such as education, industry, and social networks, providing insight into different aspects of the approaches. In this context, this systematic literature review analyzes 90 relevant papers (2015 to 2021) on text generation from five different aspects, namely text generation approaches, quality metrics, datasets, languages, and applications of text generation in deep learning.
After thoroughly mapping the primary articles, we reviewed them critically to explore different aspects of text generation: the diverse quality metrics applied to evaluate the generated text, the myriad approaches proposed for text generation, the variety of datasets that exist (which we reviewed with respect to their size and format), and the applications in which text generation has been applied.
We have provided an overall trend of publications investigating deep learning approaches for text generation over the studied years. We noticed significant growth in articles published from 2018 onwards, when advanced deep learning techniques were most represented. In addition, text generation for the English language has been explored in the literature far more than any other language.
This systematic literature review will help researchers, academicians, practitioners, and educators who are interested in text generation with its data sources, approaches, trends, techniques, and languages.