Abstract: Current machine reading comprehension approaches suffer from two defects: they either match the passage to the question alone, losing passage information, or match the passage to a sequence that concatenates the question and the candidate answer, losing the interaction information between question and answer. In addition, the traditional recurrent encoder parses the text sequentially and ignores intra-document relationships. To address these defects, a model is proposed that improves the passage encoder and jointly matches the question and candidate answer to the passage. First, the passage is chunked into blocks at multiple granularities, and the encoder takes the neural bag-of-words representation of each block, i.e., the sum of the embeddings of all words residing in that block. Next, the blocks are passed through fully connected layers and expanded back to the original sequence length. A gating function is then constructed with a two-layer feed-forward neural network that models the relationships among all the blocks each word resides in, giving every position a larger overview of the context and capturing intra-document relationships. Finally, an attention mechanism models the interaction of the passage with the question and with the candidate answer to select an answer. Experimental results on SemEval-2018 Task 11 show that the proposed approach improves over baselines such as Stanford AR and GA Reader by %-%, surpasses the recent SurfaceLR model by at least 3%, and outperforms TriAN by 1%. Moreover, pretraining the model on the RACE dataset further improves overall performance.
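To make the block-based encoder concrete, the following is a minimal PyTorch sketch of the mechanism summarized above: multi-granular chunking, a neural bag-of-words per block, fully connected projection with expansion back to the sequence length, and a two-layer feed-forward gate. The class name `BlockEncoder`, the granularities in `block_sizes`, and the layer dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlockEncoder(nn.Module):
    """Sketch of a multi-granular block encoder with a feed-forward gate.

    All names and shapes are assumptions for illustration.
    """

    def __init__(self, embed_dim: int, block_sizes=(2, 4, 8)):
        super().__init__()
        self.block_sizes = block_sizes
        # One fully connected projection per block granularity.
        self.projections = nn.ModuleList(
            [nn.Linear(embed_dim, embed_dim) for _ in block_sizes]
        )
        # Two-layer feed-forward network producing the gate from the
        # concatenation of all block views each word resides in.
        self.gate = nn.Sequential(
            nn.Linear(embed_dim * len(block_sizes), embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
            nn.Sigmoid(),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, seq_len, embed_dim) word embeddings of the passage.
        batch, seq_len, dim = emb.shape
        views = []
        for size, proj in zip(self.block_sizes, self.projections):
            # Pad so the sequence divides evenly into blocks of this size.
            pad = (size - seq_len % size) % size
            padded = F.pad(emb, (0, 0, 0, pad))
            # Neural bag-of-words: sum the embeddings inside each block.
            blocks = padded.view(batch, -1, size, dim).sum(dim=2)
            blocks = proj(blocks)
            # Expand each block back to the original sequence length.
            expanded = blocks.repeat_interleave(size, dim=1)[:, :seq_len]
            views.append(expanded)
        # Gate each word's embedding by the blocks it resides in, giving
        # every position a wider overview of the surrounding context.
        g = self.gate(torch.cat(views, dim=-1))
        return g * emb
```

Under these assumptions, the gated output can replace a recurrent encoder's hidden states before the passage-question and passage-answer attention steps described above.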