Abstract: Most existing mainstream textual entailment models adopt recurrent neural networks to encode text and rely on complex attention mechanisms or manually extracted text features to improve recognition accuracy. Their training and inference are usually slow due to the complex network structure and the sequential nature of RNNs. In this paper, a Lightweight Text Entailment Model is proposed. In the proposed model, a self-attention encoder is adopted to encode text vectors; a dot-product attention mechanism is adopted to let the two texts interact; a convolutional neural network is adopted to infer over the interaction features; and the number of modules in the structure can be adjusted according to the reasoning difficulty of the data. Experiments on multiple datasets show that a single module of the model has only 665 K parameters and that, while maintaining high accuracy, the model's inference speed is at least twice that of other mainstream models.
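The dot-product attention interaction between the two texts can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `interact` and all shapes are assumptions, and the sketch only shows the generic soft-alignment step in which each token of one text attends over the encoded tokens of the other:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interact(a, b):
    # a: (len_a, d), b: (len_b, d) -- encoded token vectors of the two texts.
    scores = a @ b.T                          # (len_a, len_b) dot-product scores
    a_aligned = softmax(scores, axis=1) @ b   # each a-token attends over b
    b_aligned = softmax(scores.T, axis=1) @ a # each b-token attends over a
    return a_aligned, b_aligned
```

The aligned representations would then be combined with the originals (e.g. by concatenation or difference) and passed to the convolutional inference module.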