Few-Shot-News-Classification

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Others
File size: 0KB
Downloads: 0
Upload date: 2023-10-04 00:32:58
Uploader: sh-1993
Description: Few-Shot News Classification

File list:
Proposal.pages (365932, 2023-12-23)
Proposal.pdf (66032, 2023-12-23)
data/ (0, 2023-12-23)
data/dev.tsv (4048214, 2023-12-23)
data/dev_emotion.csv (4510737, 2023-12-23)
data/nli.csv (14265757, 2023-12-23)
data/test.tsv (2881102, 2023-12-23)
data/test_emotion.csv (3214966, 2023-12-23)
data/train.tsv (6880671, 2023-12-23)
data/train_emotion.csv (7795512, 2023-12-23)
depression-detection-classification.ipynb (216186, 2023-12-23)
final-report/ (0, 2023-12-23)
final-report/COMP 5900 project report.zip (143055, 2023-12-23)
final-report/COMP_5900_project_report.pdf (283578, 2023-12-23)
final-report/system-structure.png (171599, 2023-12-23)
fine-tune-bert-for-sentiment_analysis.ipynb (235872, 2023-12-23)
fine_tune_nli_model.ipynb (268287, 2023-12-23)
generating-emotion-scores.ipynb (48435, 2023-12-23)
masked-language-modeling.ipynb (238909, 2023-12-23)
voting-system.ipynb (343034, 2023-12-23)

# Depression Detection

This repository is for [Detecting Signs of Depression from Social Media Text-LT-EDI@ACL 2022](https://competitions.codalab.org/competitions/36410).

## Generating emotion features using a BERT model

We fine-tune BERT-base-uncased on argilla/twitter-coronavirus to generate emotion features. The fine-tuned model can be found [here](https://huggingface.co/kwang123/bert-sentiment-analysis).

- [Code for fine-tuning](https://github.com/KaishuoWang/Depression-Detection/blob/main/fine-tune-bert-for-sentiment_analysis.ipynb)
- [Code to generate emotion features](https://github.com/KaishuoWang/Depression-Detection/blob/main/generating-emotion-scores.ipynb)

Weighted F1 score: 0.8926, Macro F1 score: 0.8967

## Traditional classification approach

The model can be found [here](https://drive.google.com/file/d/107vHAbHNqG05WlDmNSzV7m9vET72AVe_/view?usp=sharing).

- [Code for fine-tuning](https://github.com/KaishuoWang/Depression-Detection/blob/main/depression-detection-classification.ipynb)

Weighted F1 score: 0.7686, Macro F1 score: 0.7278, Accuracy: 0.7679

## Prompt-based classification approach

The model can be found [here](https://huggingface.co/kwang123/MaskedLM-roberta-large).

- [Code for fine-tuning](https://github.com/KaishuoWang/Depression-Detection/blob/main/masked-language-modeling.ipynb)

Weighted F1 score: 0.6160, Macro F1 score: 0.3786, Accuracy: 0.6743

## Zero-shot classification approach

The model can be found [here](https://huggingface.co/kwang123/roberta-base-nli).

Weighted F1 score: 0.7410, Macro F1 score: 0.7410, Accuracy: 0.7411

## Voting System

[Code for voting system](https://github.com/KaishuoWang/Depression-Detection/blob/main/voting-system.ipynb)

Each model's vote is weighted by its macro F1 score, normalized over the models in the ensemble (see the sketch after this README).

### Classification + MaskedLM

Weight for MaskedLM model: $0.3786 \div (0.7278 + 0.3786)$
Weight for classification model: $0.7278 \div (0.7278 + 0.3786)$

Weighted F1 score: 0.8488, Macro F1 score: 0.8044, Accuracy: 0.8453

### Classification + Zero-shot Classification

Weight for zero-shot classification model: $0.7410 \div (0.7278 + 0.7410)$
Weight for classification model: $0.7278 \div (0.7278 + 0.7410)$

Weighted F1 score: 0.8358, Macro F1 score: 0.7909, Accuracy: 0.8351

### Classification + MaskedLM + Zero-shot Classification

Weight for zero-shot classification model: $0.7410 \div (0.7278 + 0.7410 + 0.3786)$
Weight for classification model: $0.7278 \div (0.7278 + 0.7410 + 0.3786)$
Weight for MaskedLM model: $0.3786 \div (0.7278 + 0.7410 + 0.3786)$

Weighted F1 score: 0.8358, Macro F1 score: 0.7909, Accuracy: 0.8351
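## Example sketches

The snippets below are minimal, illustrative sketches of the components above; they are not the notebooks' exact code. First, generating emotion scores with the fine-tuned sentiment checkpoint linked above, assuming it is a standard sequence-classification model. The example text is made up, and the emotion label names are read from the model config rather than assumed here.

```python
# A minimal inference sketch, assuming the fine-tuned checkpoint above
# is a standard sequence-classification model. The example text is
# made up; the emotion label names come from the model config.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "kwang123/bert-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

texts = ["I can't sleep and nothing feels worth doing anymore."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One probability per emotion class; these become the extra features.
probs = torch.softmax(logits, dim=-1)
for text, row in zip(texts, probs):
    scores = {model.config.id2label[i]: round(p, 4) for i, p in enumerate(row.tolist())}
    print(text, scores)
```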
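For the prompt-based approach, a masked LM fills a `<mask>` slot in a template and the predicted label is the one whose verbalizer token scores highest. The prompt template and the verbalizer words below are illustrative assumptions, not necessarily what masked-language-modeling.ipynb uses.

```python
# A sketch of prompt-based classification with a masked LM. The prompt
# template and the verbalizer mapping are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

model_name = "kwang123/MaskedLM-roberta-large"  # fine-tuned checkpoint linked above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Map each label to a single verbalizer token (assumed label set).
verbalizer = {"not depression": " not", "moderate": " moderate", "severe": " severe"}
label_token_ids = {
    label: tokenizer.encode(word, add_special_tokens=False)[0]
    for label, word in verbalizer.items()
}

text = "I can't sleep and nothing feels worth doing anymore."
prompt = f"{text} The level of depression is {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Predict the label whose verbalizer token gets the highest logit.
pred = max(label_token_ids, key=lambda label: logits[label_token_ids[label]].item())
print(pred)
```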
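The zero-shot approach can be exercised through the standard zero-shot classification pipeline, assuming the NLI checkpoint's config marks which output label is entailment (which the pipeline requires). The candidate labels and hypothesis template here are illustrative assumptions.

```python
# A sketch of NLI-based zero-shot classification with the checkpoint
# linked above. Candidate labels and template are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="kwang123/roberta-base-nli")
result = classifier(
    "I can't sleep and nothing feels worth doing anymore.",
    candidate_labels=["not depression", "moderate depression", "severe depression"],
    hypothesis_template="This post shows {}.",
)
print(result["labels"][0], result["scores"][0])
```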
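Finally, the voting weights above are each model's macro F1 divided by the sum of macro F1s over the chosen ensemble. Combining per-class probability vectors by a weighted sum (soft voting) is an assumption here; voting-system.ipynb may combine hard labels instead.

```python
# A sketch of the weighted voting described above: weight each model by
# macro F1 normalized over the ensemble, then soft-vote (an assumption).
import numpy as np

MACRO_F1 = {"classification": 0.7278, "maskedlm": 0.3786, "zero_shot": 0.7410}

def weighted_vote(probs_by_model):
    """probs_by_model: model name -> (n_samples, n_classes) probability array."""
    total = sum(MACRO_F1[name] for name in probs_by_model)
    combined = sum((MACRO_F1[name] / total) * probs for name, probs in probs_by_model.items())
    return combined.argmax(axis=1)

# Dummy example: three samples, three classes, all three models.
rng = np.random.default_rng(0)
dummy = {name: rng.dirichlet(np.ones(3), size=3) for name in MACRO_F1}
print(weighted_vote(dummy))  # predicted class index per sample
```

With all three models the normalizing sum is $0.7278 + 0.3786 + 0.7410 = 1.8474$, matching the denominators in the formulas above.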
