"Methods of text fragment relevance estimation based on the topic model analysis in the text summarization problem"
Mashechkin I.V., Petrovskiy M.I., Tsarev D.V.

State-of-the-art methods for estimating text fragment relevance (significance, importance) based on topic model analysis are considered. These methods are used for building summaries in the form of generic extracts, i.e., summaries consisting entirely of text fragments copied verbatim from the original document. The most popular text mining topic models are considered: models based on latent semantic analysis (LSA) and probabilistic topic models, such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). A new method of text fragment relevance estimation is proposed. The proposed sentence relevance estimate is based on normalizing the topic space produced by non-negative matrix factorization (NMF) and then weighting each topic using the sentence representations in that space; NMF serves as the matrix decomposition in the latent semantic analysis model. Experiments with the considered methods of text fragment relevance estimation show the superiority of LSA-based methods over probabilistic topic models in single-document summarization. The standard DUC 2001 and DUC 2002 datasets, together with the standard ROUGE metrics, are used in the comparative experiments. In addition, the proposed method achieves better summarization quality than the other text summarization methods considered. The work was supported by the Ministry of Education and Science of the Russian Federation (state contract no. 14.514.11.4016) and by the Russian Foundation for Basic Research (project nos. 11-07-00616 and 12-07-00585).
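The general NMF-based extraction scheme outlined above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact method: the toy sentences, the unit-length topic normalization, and the topic-mass weighting are all illustrative assumptions.

```python
import numpy as np

# Toy sentence set (hypothetical example, not from the DUC datasets).
sentences = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
    "stock prices fell sharply today",
    "markets reacted to falling prices",
]

# Build a term-by-sentence count matrix A.
vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(sentences)))
for j, s in enumerate(sentences):
    for w in s.split():
        A[index[w], j] += 1.0

def nmf(A, k, iters=300, seed=0):
    """Plain multiplicative-update NMF: A ~ W @ H with non-negative factors."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k)) + 1e-3  # term-by-topic matrix
    H = rng.random((k, n)) + 1e-3  # topic-by-sentence matrix
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
        W *= (A @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

k = 2  # number of latent topics (a free parameter)
W, H = nmf(A, k)

# Normalize the topic space: scale each topic vector in W to unit length
# and push the scale into H, so the product W @ H is unchanged.
norms = np.linalg.norm(W, axis=0) + 1e-9
W, H = W / norms, H * norms[:, np.newaxis]

# Weight each topic by its total mass over the sentence representations,
# then score each sentence as the weighted sum of its topic coordinates.
topic_weights = H.sum(axis=1) / H.sum()
scores = topic_weights @ H

# Generic extract: the top-scoring sentences form the summary.
summary = [sentences[j] for j in np.argsort(scores)[::-1][:2]]
```

In a real system the raw counts would typically be replaced by a weighted term-sentence representation (e.g., tf-idf), and the summary length would be set by a compression-rate parameter.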

Keywords: text fragment relevance, automatic text summarization, semantic text models, topic models, latent semantic analysis (LSA), singular value decomposition (SVD), non-negative matrix factorization (NMF), probabilistic topic models, probabilistic latent semantic analysis (PLSA), latent Dirichlet allocation (LDA)

Mashechkin I.V., e-mail: mash@cs.msu.su;   Petrovskiy M.I., e-mail: michael@cs.msu.su;   Tsarev D.V., e-mail: tsarev@mlab.cs.msu.su – Moscow State University, Faculty of Computational Mathematics and Cybernetics; Leninskiye Gory 1-52, Moscow, 119991, Russia