News-Summarizer

Category: Natural Language Processing
Development tool: Python
File size: 26KB
Downloads: 0
Upload date: 2018-12-14 17:19:36
Uploader: sh-1993
Description: News-Summarizer, extractive automatic multi-document news article summarization

File list:
LICENSE (1070, 2018-12-15)
docs (0, 2018-12-15)
docs\VW (0, 2018-12-15)
docs\VW\VW-ars.txt (4630, 2018-12-15)
docs\VW\VW-nyt.txt (4206, 2018-12-15)
docs\argentina (0, 2018-12-15)
docs\argentina\argentina-guardian.txt (6442, 2018-12-15)
docs\argentina\argentina-nyt.txt (5333, 2018-12-15)
docs\china (0, 2018-12-15)
docs\china\china-cnn.txt (3091, 2018-12-15)
docs\china\china-nyt.txt (7510, 2018-12-15)
docs\climate (0, 2018-12-15)
docs\climate\climate-npr.txt (4748, 2018-12-15)
docs\climate\climate-nyt.txt (4730, 2018-12-15)
driver.py (419, 2018-12-15)
summarize.py (7951, 2018-12-15)

# Multi-document news article summarizer

Final project for CSCI 4930 (Machine Learning).

## Usage

Run `python3 driver.py`, which passes a list of file paths containing docs into Summarizer:

```python
VW_articles = ["VW/VW-ars.txt", "VW/VW-nyt.txt"]
magic = Summarizer(VW_articles)
print(magic.generate_summaries())
```

This outputs a single paragraph containing a customizable number of sentences extracted from the documents.

**Runtime**: ~45 seconds

**Dependencies**: requires scikit-learn and NLTK

## Overview

This program uses an unsupervised machine learning algorithm to extract representative sentences from a series of articles and assemble them into a summary. Unlike generative (abstractive) summarization approaches, where new content is created, this program's output summary contains only sentences found in the source documents. Moreover, these summaries are "generic": they are not customized in response to a specific user or query.

With the goal of choosing informative yet non-redundant sentences, each sentence in each set of articles is given a score, weighted by the following features.

### Weighted features for sentence extraction:

1. Words in common with the headline (using stemming)
2. Sentence length (assuming longer sentences are more representative; target: ~20 words)
3. TF-IDF word frequency (using stemming), with 11k Reuters news articles as the background corpus
4. Relative sentence location within the article

Each of these features was weighted differently in computing the final sentence score; the weights were determined by trial-and-error manual testing. (Illustrative sketches of the background corpus and the scoring appear at the end of this page.)

### Design Notes:

- Uses NLTK (https://github.com/nltk/nltk) for tokenization and the stop-word corpus
- Uses scikit-learn for the TF-IDF vectorizer
- Currently handles English-language articles only
- Sentence-position weights are borrowed from the PyTeaser project (https://github.com/xiaoxu193/PyTeaser/)

### Potential Future Additions:

- While sentence position is a factor in scoring each sentence within a given article, the semantic ordering is lost once the highest-ranking sentences from each source are joined. The original position of each sentence could be persisted alongside its score, producing a final summary whose sentence order reflects that of the source articles.
- After initially selecting the highest-scoring sentences, TF-IDF scores for duplicated words in the remaining sentences (or in subsequent articles) could be discounted, to reduce repetitiveness in the summary (see the selection sketch below).
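
### Illustrative sketches

The sketches below are not part of this repository; they are hedged reconstructions of the ideas described above, with assumed names, weights, and parameters.

First, the background corpus for feature 3. NLTK ships a Reuters corpus of roughly 10.8k articles, close to the "11k Reuters news articles" described above; assuming a plain `TfidfVectorizer` rather than summarize.py's exact stemming pipeline:

```python
# Sketch only: fit IDF statistics on NLTK's Reuters corpus as a background
# model. summarize.py's actual tokenization/stemming is not reproduced here.
import nltk
from nltk.corpus import reuters
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("reuters", quiet=True)  # one-time corpus download

background = [reuters.raw(fid) for fid in reuters.fileids()]
vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(background)  # learn vocabulary and IDF weights from the background

# Aggregate TF-IDF mass of a candidate sentence under the background statistics.
row = vectorizer.transform(["Volkswagen admitted to cheating on emissions tests."])
tfidf_score = row.sum()
```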
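
Next, how the four weighted features might combine into a single sentence score. The weights and helper names here are illustrative assumptions, not the values tuned for summarize.py:

```python
# Minimal sketch of the four-feature weighted scoring; not the project's code.
import re
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

# Hypothetical weights; the project determined its weights by trial and error.
WEIGHTS = {"headline": 1.5, "length": 1.0, "tfidf": 2.0, "position": 1.0}

def tokens(text):
    # Simple tokenizer for the sketch; the project itself uses NLTK tokenization.
    return re.findall(r"[a-z']+", text.lower())

def headline_overlap(sentence, headline):
    """Feature 1: fraction of headline stems also present in the sentence."""
    sent = {stemmer.stem(t) for t in tokens(sentence)}
    head = {stemmer.stem(t) for t in tokens(headline)}
    return len(sent & head) / len(head) if head else 0.0

def length_score(sentence, ideal=20):
    """Feature 2: peaks at the ~20-word target and decays away from it."""
    n = len(tokens(sentence))
    return max(0.0, 1.0 - abs(n - ideal) / ideal)

def position_score(index, total):
    """Feature 4: crude stand-in for PyTeaser-style position weights."""
    return 1.0 - index / total

def sentence_score(sentence, headline, tfidf_score, index, total):
    """Weighted sum of all four features; tfidf_score supplies feature 3."""
    return (WEIGHTS["headline"] * headline_overlap(sentence, headline)
            + WEIGHTS["length"] * length_score(sentence)
            + WEIGHTS["tfidf"] * tfidf_score
            + WEIGHTS["position"] * position_score(index, total))
```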
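
Finally, the proposed redundancy discount amounts to a greedy selection that down-weights terms already covered by earlier picks. A sketch, with an assumed discount factor of 0.5:

```python
# Sketch of the proposed future addition: discount TF-IDF credit for terms a
# chosen sentence already covers, so later picks favor new information.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def select_non_redundant(sentences, k=3, discount=0.5):
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(sentences).toarray()  # (n_sentences, n_terms)
    term_weights = np.ones(tfidf.shape[1])          # full credit per term at start
    chosen = []
    for _ in range(min(k, len(sentences))):
        scores = tfidf @ term_weights               # score with discounted terms
        scores[chosen] = -np.inf                    # never re-pick a sentence
        best = int(np.argmax(scores))
        chosen.append(best)
        term_weights[tfidf[best] > 0] *= discount   # penalize covered terms
    return [sentences[i] for i in sorted(chosen)]   # keep original order
```

Returning the picks in their original order also hints at the first future addition: preserving source ordering in the final summary.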
