Abstract: The idiom cloze test is a subtask of Machine Reading Comprehension (MRC) that evaluates a model's ability to understand and apply idioms in Chinese text. Existing idiom cloze algorithms ignore the fact that idiom embeddings suffer from representational collapse, which leads to low accuracy and poor generalization on out-of-domain data. In this paper, the authors propose NeZha-CLofTN, which consists of four parts: an embedding layer, a fusion coding layer, a graph attention subnetwork, and a prediction layer. The fusion coding layer uses contrastive learning to prevent the network from outputting a constant embedding vector, thereby avoiding representational collapse. The prediction layer combines the outputs of multiple synonym subgraphs to obtain better predictions than any single subgraph and to improve the model's generalization performance. NeZha-CLofTN achieves accuracies of 80.3% and 85.3% on the ChID-Official and ChID-Competition datasets, respectively, and the effectiveness of each module is demonstrated by ablation experiments.