Hierarchical Multi-Granularity Attention-Based Hybrid Neural Network for Text Classification

Neural network-based approaches have become the driving forces for Natural Language Processing (NLP) tasks. Conventionally, there are two mainstream neural architectures for NLP tasks: the recurrent neural network (RNN) and the convolutional neural network (ConvNet). RNNs are good at modeling long-term dependencies over input texts but preclude parallel computation. ConvNets lack memory capability and must model sequential data as unordered features; they therefore fail to learn sequential dependencies over input texts, but they can carry out highly efficient parallel computation. As each neural architecture, such as the RNN and the ConvNet, has its own pros and cons, integrating different architectures is assumed to enrich the semantic representation of texts and thus enhance the performance of NLP tasks. However, little work has explored the reconciliation of these seemingly incompatible architectures. To address this issue, we propose a hybrid architecture based on a novel hierarchical multi-granularity attention mechanism, named the Multi-granularity Attention-based Hybrid Neural Network (MahNN). The attention mechanism assigns different weights to different parts of the input sequence to increase the computational efficiency and performance of neural models. In MahNN, two types of attention are introduced: the syntactical attention and the semantical attention. The syntactical attention computes the importance of syntactic elements (such as words or sentences) at the lower symbolic level, and the semantical attention computes the importance of the dimensions of the embedding space corresponding to the upper latent semantics. We adopt text classification as an exemplifying task to illustrate the ability of MahNN to understand texts. Experimental results on a variety of datasets demonstrate that MahNN outperforms most state-of-the-art models for text classification.


I. INTRODUCTION
Natural language understanding plays a critical role in machine intelligence, and it includes many challenging NLP tasks such as reading comprehension [1], machine translation [2], and question answering [3]. Among this wide spectrum of NLP tasks, text classification [4] is considered foundational because it measures the semantic similarities between texts. Traditional machine learning methods employ hand-crafted features to model the statistical properties of syntactical elements (usually words), which are then fed into classification algorithms such as k-Nearest Neighbors (k-NN), Random Forests, Support Vector Machines (SVM), or their probabilistic versions [5]-[7]. However, such hand-crafted features often suffer from the loss of semantic information and poor scalability. To overcome these drawbacks, automatic representation learning with neural networks was introduced into the NLP field. Word embedding is a prototype of automatic representation learning [8], [9]; it outperforms traditional methods by alleviating the sparsity problem and enhancing the semantic representation.
In recent years, the NLP community has conducted extensive investigations of neural-based approaches [10], [11]. There exists a diversity of deep neural network architectures with different modeling capabilities. The RNN is a widely used neural architecture for NLP tasks owing to its capability to model sequences with long-term dependencies [12]. When modeling a text, an RNN processes it word by word and generates a hidden state at each time step to represent all previous words. However, although the purpose of RNNs is to capture long-term dependencies, theoretical and empirical studies have revealed that it is difficult for RNNs to learn very long-term information.
To address this problem, the long short-term memory network (LSTM) [13] and other variants such as the gated recurrent unit (GRU) [14] and the simple recurrent unit (SRU) [15] were proposed to better remember and access memory. Another roadblock for RNNs is that, when they process a long sequence, the latest information dominates the earlier information, even though the earlier part might be the truly significant part of the sequence. In fact, the most important information can appear anywhere in a sequence rather than only at the end. Consequently, some researchers proposed to assign the same weight to all hidden states and average the hidden states of all time steps, spreading the focus equally over the whole sequence.
Inspired by the biological ability to focus on the most important information and ignore the irrelevant parts, the attention mechanism was introduced to assign different weights to the elements at different positions of a sequence and to select the informative ones for the downstream tasks [16]. Nowadays, the attention mechanism has become an integral part of sequence modeling, especially with RNNs [1]. It enables RNNs to maintain a variable-length memory and to compute outputs based on the importance weights of different parts of a sequence, and it has been empirically proven effective in NLP tasks such as neural machine translation [14]. However, the attention mechanism cannot capture the relationships between words or the word-ordering information, which carries important semantic information for downstream tasks. Take the sentences ''Tina likes Bob.'' and ''Bob likes Tina.'' as examples: the weighted sums of their hidden states encoded by an RNN are almost the same, yet the two sentences have different meanings.
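The order-insensitivity argument above can be checked numerically: if the attention scores depend only on word identity and the weighted sum is taken over position-unaware word vectors, permuting the sentence leaves the representation unchanged. A minimal sketch, where the embeddings and relevance scores are arbitrary illustrative values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 4-dimensional word vectors and per-word relevance scores (hypothetical).
emb = {w: rng.standard_normal(4) for w in ["Tina", "likes", "Bob"]}
scores = {"Tina": 0.5, "likes": 1.0, "Bob": 0.2}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attended_sum(words):
    """Attention-weighted sum of the word vectors of a sentence."""
    a = softmax(np.array([scores[w] for w in words]))
    return sum(a_i * emb[w] for a_i, w in zip(a, words))

s1 = attended_sum(["Tina", "likes", "Bob"])
s2 = attended_sum(["Bob", "likes", "Tina"])
# The two sentences mean different things, yet their representations coincide.
assert np.allclose(s1, s2)
```

With RNN hidden states instead of raw embeddings the two sums would differ only slightly, which is the paper's point: a weighted sum alone discards word order.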
The ConvNet is another widely adopted neural architecture for NLP tasks. The modeling power of ConvNets relies on four key factors: local connections, shared weights, pooling, and multiple layers. The fundamental assumption behind ConvNet approaches is that locally grouped data in natural signals are often highly correlated and that the compositional hierarchies in natural signals can be exploited by stacked convolutional layers. As a result, ConvNets are believed to be good at extracting informative semantic representations from the salient N-gram features of input word sequences by applying convolutional filters in parallel. For the above example, the 2-gram features ''Tina likes'' and ''likes Bob'', which contain the word-ordering information, can be captured by ConvNets. These features are more representative of the original sentence than the weighted sum of the hidden states. Therefore, ConvNets have been employed for a variety of NLP tasks and have achieved impressive results in sentence modeling [17], semantic parsing [18], and text classification [19]. Moreover, ConvNets can operate on different levels of lexical structure, such as characters, words, sentences, or even whole documents. For instance, research has shown that character-level text classification using ConvNets can achieve results competitive with the state of the art [20], [21]. However, basic ConvNets slide a fixed-width window over the input sequences, which limits the created representations to local semantic patterns and fails to capture long-term dependencies.
To take full advantage of both the ConvNet and the RNN, and to combine the superiorities of different neural architectures, researchers have explored hybrid structures of ConvNets and RNNs. For instance, the recurrent convolutional neural network [22] proposed a recurrent structure of convolutional filters to enhance the contextual modeling ability and avoid the problem of fixed-width sliding windows; it also applied a max-pooling layer to automatically determine the key components for text classification. However, even though this approach reduces noise by replacing the fixed-width sliding window of ConvNets with a recurrent mechanism, it still depends on max-pooling to determine the discriminative features and lacks a mechanism to selectively choose the dominant components as the attention mechanism can. Similarly, Wang et al. proposed the convolutional recurrent neural network [23], which stacks four types of neural layers: a word embedding layer, a bidirectional RNN layer, a convolutional layer, and a max-pooling layer. This approach functions very similarly to the one in [22], but with different applications in sentence classification and answer selection. This work, too, bypassed the attention mechanism when integrating the ConvNet and RNN structures.
As discussed above, every neural architecture has its own pros and cons, so it is reasonable to conjecture that consistently combining different architectures can benefit the extraction of different aspects of linguistic information from texts. However, to the best of our knowledge, there has been no effort to fully integrate the ConvNet, RNN, and attention architectures. Inspired by a proposition by LeCun et al. [24], we hypothesize that the attention mechanism can function as an adhesive that seamlessly integrates the ConvNet and RNN architectures, where the RNN layer represents the input word sequences and the ConvNet layer performs the classification. Furthermore, we assume that, besides attending to elements (typically words) at the syntactical or symbolic level, coarser-grained attention over the hidden-state vector space can improve the local N-gram coherence for ConvNets, as attention on the hidden-state vectors can select the salient dimensions that represent the most informative latent semantics, hence reducing the noise perturbation to the ConvNet layer and enhancing the classification performance.
Based on the above motivations, we propose a hybrid architecture based on a novel hierarchical multi-granularity attention mechanism, named the Multi-granularity Attention-based Hybrid Neural Network (MahNN). In MahNN, two types of attention are introduced: the syntactical attention and the semantical attention. The syntactical attention computes the importance of syntactic elements (such as words or sentences) at the lower symbolic level, and the semantical attention computes the importance of the dimensions of the embedding space corresponding to the upper latent semantics. We adopt text classification as an exemplifying task to illustrate the ability of MahNN to understand texts. Experimental results on a variety of datasets demonstrate that MahNN outperforms most state-of-the-art models for text classification.
The main contributions of our work are as follows:
1) We propose a hybrid neural architecture, MahNN, that, for the first time, seamlessly integrates the RNN architecture and the ConvNet with an attention mechanism. In this architecture, each neural structure learns a different aspect of semantic information from the linguistic structures, and together they strengthen the power of semantic understanding of texts.
2) We introduce a novel hierarchical multi-granularity attention mechanism, which includes the syntactical attention and the semantical attention. They compute importance weights at the lower symbolic level and the upper latent semantics level, respectively. This multi-granularity attention mechanism helps to learn semantic representations more precisely.
This article is organized as follows. Section II introduces the related work on ConvNets and attention mechanisms. Section III describes the proposed MahNN in detail. Section IV presents the datasets, baselines, experiments, and analysis. Finally, Section V concludes this article.
II. RELATED WORK
A ConvNet architecture with multiple filters to capture local correlations, followed by a max-pooling operation to extract dominant features, was proposed in [19]. This architecture performs well on text classification with few parameters. Character-level ConvNets were explored for text classification without word embeddings [20], where language was regarded as a kind of signal. Based on character-level representations, very deep convolutional networks (VDConvNet) [28] with up to 29 convolutional layers, much deeper than the single layer used in [19], were applied to text classification. To capture word correlations of different sizes, dynamic k-max pooling, a global pooling operation over linear sequences, was proposed to better retain features [17]. Convolutional models were also explored for tree-structured sentences [29]. The multichannel variable-size convolutional neural network (MVConvNet) [30] combined diverse versions of pre-trained word embeddings and used varied-size convolution filters to extract features.
An RNN is often employed to process temporal sequences. Besides RNNs, there are several other approaches to sequence learning, such as the echo state network and learning in the model space [31]-[33]. Learning in the model space transforms the original temporal series into an echo state network (ESN) and calculates the 'distance' between ESNs [34], [35]; distance-based learning algorithms can then be employed in the ESN space [36]. Chen et al. [37] investigated the trade-off between representation and discrimination abilities. Gong et al. proposed a multi-objective version of learning in the model space [38].
Other popular RNN architectures can deal with input sequences of varied length and capture long-term dependencies. The gated recurrent unit (GRU) [39] was proposed to model sequences and was later applied to model documents [12], showing that GRUs can encode relations between sentences in a document. To improve the performance of GRUs on large-scale text, hierarchical attention networks (HAN) [26] were proposed. HAN has a hierarchical structure comprising a word encoder and a sentence encoder with two levels of attention mechanisms.
As an auxiliary way to select inputs, the attention mechanism has recently been widely adopted in deep learning due to its flexibility in modeling dependencies and its parallelizable computation. The attention mechanism was introduced to improve encoder-decoder based neural machine translation [16]: it allows a model to automatically search for the parts of the input that are related to the target word. As an extension, global attention and local attention [1] were proposed for machine translation, and their alignment visualizations demonstrated the ability to learn dependencies. In HAN [26], hierarchical attention was used to generate document-level representations from word-level and sentence-level representations. That architecture simply sets a trainable context vector as a high-level representation of a fixed query, which may be unsuitable because the same words may count differently in different contexts. In a recent work [40], the calculation of the attention mechanism was generalized into the Query-Key-Value (Q-K-V) form.

III. MULTI-GRANULARITY ATTENTION-BASED HYBRID NEURAL NETWORK
The MahNN architecture is shown in Fig. 1. It consists of three parts: a bi-directional long short-term memory network (Bi-LSTM), an attention layer, and a convolutional neural network (ConvNet). The following sections describe how we utilize the Bi-LSTM to generate the syntactical attention and the semantical attention and to form the multichannel inputs for the ConvNet.

A. LONG SHORT-TERM MEMORY NETWORK
In many NLP tasks, an RNN processes the word embeddings of texts of variable length and generates a hidden state h_t at time step t by recursively transforming the previous hidden state h_{t−1} and the current input vector x_t:

h_t = f(W · [h_{t−1}; x_t] + b)
where W ∈ R^{l_h×(l_h+l_i)}, b ∈ R^{l_h}, l_h and l_i are the dimensions of the hidden state and the input vector respectively, and f(·) is an activation function such as tanh(·). However, the standard RNN is not a preferable choice due to the problem of exploding or vanishing gradients [41]. To address this problem, the long short-term memory network (LSTM) was introduced and has obtained remarkable performance. As a variant of the RNN, the LSTM architecture consists of a series of tandem modules whose parameters are shared. At time step t, the hidden state h_t is controlled by the previous hidden state h_{t−1}, the input x_t, the forget gate f_t, the input gate i_t, and the output gate o_t. These gates determine how the current memory cell c_t and the current hidden state h_t are updated. The LSTM transition functions are:

i_t = σ(W_i · [h_{t−1}; x_t] + b_i)
f_t = σ(W_f · [h_{t−1}; x_t] + b_f)
o_t = σ(W_o · [h_{t−1}; x_t] + b_o)
ĉ_t = tanh(W_c · [h_{t−1}; x_t] + b_c)
c_t = f_t ⊗ c_{t−1} + i_t ⊗ ĉ_t
h_t = o_t ⊗ tanh(c_t)

Here, σ is the logistic sigmoid function, defined over all real numbers with return values ranging from 0 to 1; tanh denotes the hyperbolic tangent function with return values ranging from −1 to 1; and ⊗ denotes element-wise multiplication. Intuitively, the forget gate f_t controls the extent to which the previous cell state c_{t−1} remains in the cell. The input gate i_t controls the extent to which new input flows into the cell. The output gate o_t controls the extent to which the cell state c_t is used to compute the current hidden state h_t. These gates enable the LSTM to capture long-term dependencies when dealing with time-series data.
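The transition functions above can be sketched in a few lines of NumPy. This is an illustrative implementation with randomly initialized weights; stacking the four gate weight matrices into a single W is a packing choice of ours, not something prescribed by the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM transition. W stacks the four gate weight matrices
    (input, forget, output, candidate), each of shape (l_h, l_h + l_i)."""
    l_h = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b     # all four pre-activations
    i_t = sigmoid(z[0 * l_h:1 * l_h])             # input gate
    f_t = sigmoid(z[1 * l_h:2 * l_h])             # forget gate
    o_t = sigmoid(z[2 * l_h:3 * l_h])             # output gate
    c_hat = np.tanh(z[3 * l_h:4 * l_h])           # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat              # update the memory cell
    h_t = o_t * np.tanh(c_t)                      # gated exposure of the cell
    return h_t, c_t

rng = np.random.default_rng(0)
l_h, l_i = 3, 2
W = rng.standard_normal((4 * l_h, l_h + l_i)) * 0.1
b = np.zeros(4 * l_h)
h, c = lstm_step(rng.standard_normal(l_i), np.zeros(l_h), np.zeros(l_h), W, b)
# h = o ⊗ tanh(c), so every component lies strictly inside (-1, 1).
assert h.shape == (l_h,) and np.all(np.abs(h) < 1.0)
```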
Though a unidirectional LSTM can in theory encode an unbounded sentence history, it is still constrained, since the hidden state at each time step cannot model the future words of a sentence. The Bi-LSTM provides a way to include both the previous and the future context by applying one LSTM to process the sentence forward and another LSTM to process it backward.
Given a sentence of n words {w_i}_{i=1}^{n}, we first transform each one-hot vector w_i into a dense vector x_i through an embedding matrix W_e, with x_i = W_e w_i. We then use the Bi-LSTM to obtain the annotations of words by processing the sentence from both directions: a forward LSTM reads the sentence from x_1 to x_i, and a backward LSTM reads it from x_n to x_i:

→h_i = →LSTM(x_i, →h_{i−1})
←h_i = ←LSTM(x_i, ←h_{i+1})

The annotation of word w_i is obtained by concatenating the two directions, h_i = [→h_i; ←h_i].
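The two-pass annotation scheme can be sketched as follows. For brevity, a plain tanh recurrence stands in for each LSTM direction; that substitution is an assumption made only to keep the example short, and the weight matrices are random placeholders:

```python
import numpy as np

def run_rnn(xs, W, b):
    """Simple tanh recurrence standing in for one LSTM direction (sketch)."""
    h = np.zeros(b.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W @ np.concatenate([h, x]) + b)
        states.append(h)
    return states

def bilstm_annotations(xs, Wf, bf, Wb, bb):
    """h_i = [forward state over x_1..x_i ; backward state over x_n..x_i]."""
    fwd = run_rnn(xs, Wf, bf)              # left-to-right pass
    bwd = run_rnn(xs[::-1], Wb, bb)[::-1]  # right-to-left pass, realigned
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
l_h, l_i, n = 3, 2, 5
Wf = rng.standard_normal((l_h, l_h + l_i)) * 0.1
Wb = rng.standard_normal((l_h, l_h + l_i)) * 0.1
xs = [rng.standard_normal(l_i) for _ in range(n)]
hs = bilstm_annotations(xs, Wf, np.zeros(l_h), Wb, np.zeros(l_h))
# One annotation per word, each twice the hidden size.
assert len(hs) == n and hs[0].shape == (2 * l_h,)
```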

B. HIERARCHICAL MULTI-GRANULARITY ATTENTIONS
For NLP tasks such as text classification and sentiment analysis, different words contribute unequally to the representation of a sentence. The attention mechanism reflects the importance weight of each input element so that the relevant elements contribute significantly to the merged output. Although the attention mechanism can model dependencies flexibly, it remains a crude process because of the loss of latent semantic information. We therefore apply attention mechanisms to the hidden states of the Bi-LSTM and splice them into a matrix.
Taking the form of a matrix rather than a weighted sum of vectors preserves the order information. Furthermore, by applying the syntactical attention and the semantical attention, we obtain several matrices and use them as multichannel inputs to the ConvNet.

1) SYNTACTICAL ATTENTION MECHANISM
We introduce the syntactical attention to calculate the importance weights of all input elements. M is the association matrix that represents the associations among the words of a text: the element in the i-th row and j-th column of M represents the degree of association between the i-th word and the j-th word. If L channels are needed, we set L channel mask matrices V; each element of the l-th mask matrix V^l obeys a binomial distribution. Given M^l_{i,j} and V^l_{i,j}, the l-th channel is computed, where c_{li} denotes the new representation of h_i in the l-th channel and ⊗ denotes the element-wise product. A pad symbol still carries little information after being encoded by the Bi-LSTM; therefore, if word x_k is a pad symbol, 99999 is subtracted from its syntactical attention score s_{lk} before the softmax operation, so that its weight a_{lk} is close to 0 after the softmax. By concatenating all c_{li}, we obtain the l-th channel C_l. The multichannel representations reflect the different contributions of different words to the semantics of a text, which can be regarded as a diversification of the input information caused by data perturbation.
The whole process of the syntactical attention is shown in Fig. 2.
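The pad-masking trick described above, subtracting 99999 from the scores of pad positions before the softmax, can be sketched as a small helper (a minimal illustration; the score values are arbitrary):

```python
import numpy as np

def masked_softmax(scores, is_pad):
    """Syntactical attention weights: pad positions get a large negative
    offset (the paper subtracts 99999) so their weight vanishes."""
    s = np.where(is_pad, scores - 99999.0, scores)
    e = np.exp(s - s.max())           # shift for numerical stability
    return e / e.sum()

scores = np.array([1.2, 0.3, 2.0, 0.0])
is_pad = np.array([False, False, False, True])   # last token is padding
a = masked_softmax(scores, is_pad)
# The pad position receives (numerically) zero attention weight.
assert a[-1] < 1e-6 and abs(a.sum() - 1.0) < 1e-9
```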

2) SEMANTICAL ATTENTION MECHANISM
Given that a syntactical element (a word or a sentence) is encoded into an n-dimensional vector (v_1, v_2, v_3, ..., v_n)^T, each dimension in the embedding vector space corresponds to a specific latent semantic factor. Analyzing the different impacts of these semantic factors and selecting the informative ones can improve the performance of downstream tasks.
Based on the above hypothesis, we propose the semantical attention mechanism to compute the semantical importance weight of each dimension of an input element, where c_{li} denotes the final representation of h_i in the l-th channel. By concatenating all c_{li} for i ∈ [1, n], we obtain the l-th channel C_l. Combining the syntactical attention and the semantical attention generates the multichannel inputs.
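Since the exact parameterization of the semantical attention is not reproduced here, the following sketch only illustrates the general idea of dimension-wise re-weighting: a per-dimension score followed by a softmax over the embedding dimensions. Both the scoring function and its parameters are assumptions made for illustration:

```python
import numpy as np

def semantical_attention(h, w, b):
    """Dimension-wise importance weighting of a hidden state (sketch; the
    paper's exact formula is not shown, so a per-dimension linear score
    followed by a softmax over dimensions is assumed)."""
    scores = w * h + b                 # one score per latent dimension
    e = np.exp(scores - scores.max())
    g = e / e.sum()                    # softmax over the k dimensions
    return g * h                       # re-weighted representation

rng = np.random.default_rng(0)
h = rng.standard_normal(8)             # a Bi-LSTM hidden state (toy size)
c = semantical_attention(h, rng.standard_normal(8), np.zeros(8))
# The output keeps the hidden-state shape; only dimension scales change.
assert c.shape == h.shape
```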

3) CONVOLUTIONAL NEURAL NETWORK
ConvNets utilize several sliding convolution filters to extract local features. Assume one channel is represented as C ∈ R^{n×k}, where n is the length of the input sequence and k is the embedding dimension of each input element. In a convolution operation, a filter m ∈ R^{l·k} is applied to l consecutive words to generate a new feature:

x_i = f(m · c_{i:i+l−1} + b)

where c_{i:i+l−1} is the concatenation of c_i, ..., c_{i+l−1}.
f is a non-linear activation function such as ReLU, and b ∈ R is a bias term. After the filter m slides across {c_{1:l}, c_{2:l+1}, ..., c_{n−l+1:n}}, we obtain a feature map:

x = [x_1, x_2, ..., x_{n−l+1}]

We apply a max-pooling operation over the feature map x and take the maximum value x̂ = max{x} as the final feature extracted by the filter m. This pooling scheme captures the most dominant feature for each filter. The ConvNet obtains multiple features by utilizing multiple filters with varied sizes. These features form a vector r = [x̂_1, x̂_2, ..., x̂_s] (s is the number of filters), which is passed to a fully connected softmax layer to output the probability distribution over labels. Given a training sample (x_i, y_i), where y_i ∈ {1, 2, ..., c} is the true label of x_i and ỹ_i^j ∈ [0, 1] is the probability our model assigns to label j ∈ {1, 2, ..., c}, the error is defined as:

E = −Σ_i Σ_{j=1}^{c} 1{y_i = j} log ỹ_i^j

Here, c denotes the number of possible labels of x_i, and 1{·} is an indicator function that equals 1 if its condition holds and 0 otherwise. We update the model parameters with stochastic gradient descent, using the Adam optimizer. The ConvNet layer is intended to enhance local N-gram coherence instead of merely averaging a weighted sum, thus improving the discriminative ability for text classification.
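The convolution-plus-max-over-time pipeline for a single filter can be sketched in NumPy (a minimal illustration with random inputs; the channel matrix and filter values are placeholders):

```python
import numpy as np

def conv_max_over_time(C, m, b):
    """Apply one filter of width l over a channel C (n tokens x k dims),
    then max-pool the resulting feature map down to a single scalar."""
    n, k = C.shape
    l = m.size // k                                       # filter width
    feats = [np.maximum(0.0, m @ C[i:i + l].ravel() + b)  # ReLU(m . c_{i:i+l-1} + b)
             for i in range(n - l + 1)]                   # slide over windows
    return max(feats)                                     # max-over-time pooling

rng = np.random.default_rng(0)
n, k, l = 7, 4, 3
C = rng.standard_normal((n, k))        # one channel from the attention layer
m = rng.standard_normal(l * k)         # one convolution filter
x_hat = conv_max_over_time(C, m, 0.0)
# ReLU guarantees a non-negative pooled feature.
assert x_hat >= 0.0
```

In the full model, s such filters of varied widths produce the vector r fed to the softmax layer.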

IV. EXPERIMENTAL STUDY A. EXPERIMENTS DATASETS
We evaluate our model against other baseline models on a variety of datasets. Summary statistics of the datasets are shown in Table 1.
• MR: Short movie review dataset with one sentence per review. Each review is labeled with its overall sentiment polarity (positive or negative).
• Subj: Subjectivity dataset containing sentences labeled with respect to their subjectivity status (subjective or objective).
• SST-1: Stanford Sentiment Treebank, an extension of MR but with train/dev/test splits provided and fine-grained labels (very positive, positive, neutral, negative, very negative).
• SST-2: Same as SST-1 but with neutral reviews removed and binary labels.
• MPQA: Opinion polarity detection subtask of the MPQA dataset.

B. EXPERIMENTS SETTINGS
• Padding: Let len denote the maximum sentence length in the training set. As the convolutional layer requires input of fixed length, we pad each sentence shorter than len with the UNK symbol (which indicates an unknown word) at the front of the sentence. Sentences in the test set shorter than len are padded in the same way; sentences longer than len are truncated at the end, so that all sentences have length len.
• Initialization: We use publicly available word2vec vectors to initialize the words in the dataset. The word2vec vectors are pre-trained on 100 billion words from Google News with an unsupervised neural language model. For words that are not present in the set of pre-trained vectors or that rarely appear in the datasets, we initialize each dimension from U[−0.25, 0.25] so that all word vectors have the same variance. Word vectors are fine-tuned along with the other parameters during training.
• Hyper-parameters: The feature representation of the Bi-LSTM is controlled by the size of its hidden states. We investigate our model with various hidden sizes and set the hidden size of each unidirectional LSTM to 100. We also investigate the impact of the number of channels: with one channel, our model is a single-channel network; increasing the number of channels yields richer semantic representations of the text. The convolutional filter size determines the n-gram features, which directly influence classification performance. We set the filter size per dataset and simply set the number of filter maps to 100. More details of the hyper-parameters are shown in Table 3.
• Other settings: We use only one Bi-LSTM layer and one convolutional layer. Dropout is applied on the word embedding layer, the ConvNet input layer, and the penultimate layer. Weight vectors are constrained by L2 regularization, and the model is trained to minimize the cross-entropy between the true labels and the predicted labels.
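The padding scheme from the settings above can be sketched as a small helper (the function name is ours; the paper only describes the behavior):

```python
def pad_or_truncate(tokens, max_len, pad="UNK"):
    """Pad short sentences at the front with the UNK symbol; cut long
    sentences at the end, so every sentence has length max_len."""
    if len(tokens) >= max_len:
        return tokens[:max_len]          # truncate at the end
    return [pad] * (max_len - len(tokens)) + tokens  # pad in front

assert pad_or_truncate(["a", "b"], 4) == ["UNK", "UNK", "a", "b"]
assert pad_or_truncate(["a", "b", "c", "d", "e"], 4) == ["a", "b", "c", "d"]
```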
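Likewise, the initialization rule, a pre-trained vector when available and otherwise U[−0.25, 0.25] per dimension, can be sketched as follows (the function name and toy pre-trained table are illustrative, not from the paper):

```python
import numpy as np

def init_embedding(vocab, pretrained, dim=300, low=-0.25, high=0.25, seed=0):
    """Use a pre-trained vector when available; otherwise sample each
    dimension from U[-0.25, 0.25] so all vectors have similar variance."""
    rng = np.random.default_rng(seed)
    E = np.empty((len(vocab), dim))
    for i, w in enumerate(vocab):
        if w in pretrained:
            E[i] = pretrained[w]               # keep the word2vec vector
        else:
            E[i] = rng.uniform(low, high, dim) # random init for OOV words
    return E

pre = {"good": np.full(300, 0.1)}              # toy stand-in for word2vec
E = init_embedding(["good", "zxqv"], pre)      # "zxqv": out-of-vocabulary
assert np.allclose(E[0], 0.1) and np.all(np.abs(E[1]) <= 0.25)
```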

C. BASELINES
We compare our model with several baseline methods which can be divided into the following categories:

1) TRADITIONAL MACHINE LEARNING
A statistical parsing framework was studied for sentence-level sentiment classification [42]. Simple Naive Bayes (NB) and Support Vector Machine (SVM) variants outperformed most published results on sentiment analysis datasets [43]. It was shown in [44] how to perform fast dropout training by sampling from, or integrating, a Gaussian approximation; these measures were justified by the central limit theorem and empirical evidence, and they resulted in an order-of-magnitude speedup and greater stability.

2) DEEP LEARNING
Word2vec [11] was extended with a new method called Paragraph-Vec, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of text, such as sentences, paragraphs, and documents. Various recursive networks were extended [45]-[47]. Generic and target-domain embeddings were incorporated into ConvNets [17]. A series of experiments with ConvNets trained on top of pre-trained word vectors was conducted for sentence-level classification tasks [19]. Desirable properties such as semantic coherence, the attention mechanism, and kernel reusability in ConvNets were empirically studied for sentence-level tasks [49]. Word embeddings created from both generic and target-domain corpora were utilized when a domain corpus is difficult to find [48]. A hybrid L-MConvNet model was proposed to represent the semantics of sentences [50].
The second sentence could not be confidently labeled positive or negative by focusing on a single informative word (''uplifting'', ''worst'', or ''best'') alone; only when all of these informative words are emphasized can the sentence be truly understood. ''Uplifting'' received more attention weight than other words in the first channel; ''worst'' received more attention weight in the second channel than in the third, and ''best'' received more weight in the third channel than in the second. If the second channel were used as an independent model, this sentence might be classified incorrectly, but MahNN-3 still labels it as positive.

D. RESULTS AND ANALYSIS
Multichannel representations essentially provide a way to view a sentence from different perspectives and thus provide diversification.
We also investigate the impact of the semantical attention on MahNN and find that it considerably improves performance. MahNN-rv denotes MahNN-3 without the semantical attention mechanism. We attribute the effectiveness of the semantical attention mechanism to its selection of the latent semantics that best represent the texts for the given task. In effect, the semantical attention mechanism discriminates perturbations of the hidden states and makes the whole model more robust. Another advantage is that it indirectly assigns a different learning speed to each dimension of the hidden state, so that informative dimensions can be tuned at a larger pace than less informative ones.

E. PARAMETER SENSITIVITY
We further evaluate how the parameters of MahNN impact its performance on the text classification task. In this experiment, we evaluate the effect of change of Hidden size, Channel, Filter size, and Filter map on MahNN performance with other parameters remaining the same.
• Impact of Hidden size: Fig. 4a shows the impact of the hidden size on classification accuracy. The classification accuracy increases with the hidden size; when the hidden size reaches 128, the accuracy curve flattens or even begins to decline. The hidden size of the Bi-LSTM thus affects the encoding of the document: too small a hidden size leads to underfitting, while too large a hidden size leads to overfitting.
• Impact of Channel: Fig. 4b shows the impact of the number of channels on classification accuracy. We observe that the performance first rises and then tends to decline. With three channels, the model (MahNN-3) performs best on the MPQA/SST-2/MR datasets; with four channels, the model (MahNN-4) performs best on the Subj dataset. This result shows that multichannel representations of texts help our model improve its performance. However, an increasing number of channels also enlarges the number of parameters, which might lead to overfitting.
• Impact of Filter size: Fig. 4c shows the impact of the filter size on classification accuracy. The optimal filter size differs across datasets, and the accuracy curve on the MR dataset is opposite to that on the other datasets. With a filter size between [10, 14], the model achieves high accuracy on the MPQA/Subj/SST-2 datasets, but this improvement is not significant compared with a filter size of 2. To reduce the number of parameters, the filter size is set between [4, 8] in the experiment.
• Impact of Filter map: Fig. 4d shows the impact of the number of filter maps on classification accuracy. The performance rises rapidly at first and then tends to flatten. The number of filter maps determines the number of feature maps generated by the convolution operation, each representing a certain feature of the text: the more feature maps, the more features the convolution operation can extract, and the higher the accuracy can be. But the number of textual features is finite, and increasing the number of filter maps also increases the number of trainable parameters, which may lead to overfitting.

V. CONCLUSION AND FUTURE WORK
In this article, we developed a hybrid architecture that extracts different aspects of semantic information from linguistic data with diverse types of neural structures. In particular, we proposed a novel hierarchical multi-granularity attention mechanism, consisting of the syntactical attention at the symbolic level and the semantical attention at the embedding level. The experimental results show that the MahNN model achieves impressive performance on a variety of benchmark datasets for the text classification task. Moreover, visualization of the attention distributions illustrates that the hierarchical multi-granularity attention mechanism is effective in capturing informative semantics from different perspectives. We can draw the following conclusions from our work: 1) Hybrid neural architectures integrating a diversity of neural structures can improve the power of representation learning from linguistic data. Richer semantic representations increase the capacity for deep understanding of texts and thus benefit downstream tasks in the NLP field. 2) The hierarchical multi-granularity attention mechanism plays a significant role in constructing the hybrid neural architecture. The fine-grained attention at the symbolic level diversifies the semantic representations of input texts, and the coarser-grained attention in the latent semantic space enhances the local N-gram coherence for the following ConvNet layers, thus increasing the performance of text classification.
There are several future directions to extend this work. First, we would investigate applying a generative model to obtain multichannel representations of texts; data augmentation in this way could represent much richer semantics. Second, ConvNets require fixed-length inputs and perform some unnecessary convolution operations for NLP tasks; it is worthwhile to explore novel ConvNet architectures that process variable-length input.
Moreover, we use simple methods to calculate the attention weights, which might not demonstrate the full potential of the hierarchical multi-granularity attention mechanism. It would be intriguing to compute the attention weights with more advanced approaches, such as transfer learning and reinforcement learning, to further improve performance.