A small project that automatically crawls microblogs from Sina Weibo and tries to detect whether a specified post is a rumor.
- python >= 3.7
- torch >= 1.6.0
- other dependencies: see requirements.txt (install with pip install -r requirements.txt)
The dataset used in this project is merged from several small datasets. All of it has been uploaded to this repo under data/dataset/raw/.
Use extractraw.py to generate the train, valid, and eval datasets.
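For reference, a minimal sketch of how such a split could be produced; extractraw.py is the authoritative implementation, and the raw-file format, output paths, and 8:1:1 ratio here are assumptions:

```python
# Sketch only: assumes each raw file under data/dataset/raw/ is a JSON list
# of samples, and writes three JSON splits next to it. extractraw.py's actual
# logic may differ.
import glob
import json
import random

samples = []
for path in glob.glob("data/dataset/raw/*.json"):
    with open(path, encoding="utf-8") as f:
        samples.extend(json.load(f))

random.seed(42)
random.shuffle(samples)
n = len(samples)
splits = {
    "train": samples[: int(0.8 * n)],          # 80% for training
    "valid": samples[int(0.8 * n): int(0.9 * n)],  # 10% for validation
    "eval": samples[int(0.9 * n):],            # 10% held out for evaluation
}
for name, part in splits.items():
    with open(f"data/dataset/{name}.json", "w", encoding="utf-8") as f:
        json.dump(part, f, ensure_ascii=False)
```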
The raw pretrained vectors are downloaded from the Chinese-Word-Vectors repo via the Mixed-large 综合 (Word + Character + Ngram) Baidu Netdisk link.
In this project, to avoid huge memory usage, the raw vectors were processed into a binary data file pretrain_wv.vec.dat and an index file pretrain_wv.index.json; the PretrainedVector class in dataset.py loads them.
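For illustration, a minimal loader sketch that keeps the vectors on disk via a memory map; the actual PretrainedVector class in dataset.py may differ, and the row-major float32 layout and word-to-row index format are assumptions:

```python
# Sketch only: assumes pretrain_wv.vec.dat stores float32 vectors row by row
# and pretrain_wv.index.json maps each word to its row number. The real
# PretrainedVector class in dataset.py is the authoritative implementation.
import json
import numpy as np

class PretrainedVectorSketch:
    def __init__(self, dat_path="pretrain_wv.vec.dat",
                 index_path="pretrain_wv.index.json", dim=300):
        with open(index_path, encoding="utf-8") as f:
            self.word2row = json.load(f)  # word -> row number
        # memmap keeps the vectors on disk instead of loading them all into RAM
        self.vecs = np.memmap(dat_path, dtype=np.float32, mode="r",
                              shape=(len(self.word2row), dim))
        self.dim = dim

    def __getitem__(self, word):
        row = self.word2row.get(word)
        if row is None:
            return np.zeros(self.dim, dtype=np.float32)  # OOV -> zero vector
        return np.asarray(self.vecs[row])
```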
You can download pretrain_wv.vec.dat from the release page.
See train.py for details.
After training, it automatically evaluates the model on the eval dataset.
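A generic sketch of that train-then-evaluate flow; the real loop lives in train.py, and the optimizer choice, epoch count, and batch format here are assumptions:

```python
# Sketch only: assumes each batch is (origin, comments, label) tensors of
# word vectors plus a class label. See train.py for the actual loop.
import torch
import torch.nn as nn

def train_and_eval(model, train_loader, eval_loader, epochs=10, device="cpu"):
    model.to(device)
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for origin, comments, label in train_loader:
            opt.zero_grad()
            logits = model(origin.to(device), comments.to(device))
            loss_fn(logits, label.to(device)).backward()
            opt.step()
    # automatic evaluation on the eval set once training finishes
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for origin, comments, label in eval_loader:
            pred = model(origin.to(device), comments.to(device)).argmax(dim=1)
            correct += (pred == label.to(device)).sum().item()
            total += label.numel()
    print(f"eval accuracy: {correct / total:.4f}")
```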
See model.py for details.
This project trains the model with fixed hyperparameters; the architecture of the final uploaded rmdt.pt model is shown in the output below.
RumorDetectModel(
(origin_bilstm): LSTM(300, 32, batch_first=True, bidirectional=True)
(comment_lstm): LSTM(300, 64, batch_first=True)
(comment_dropout): Dropout(p=0.5, inplace=False)
(attn_U): Linear(in_features=64, out_features=32, bias=False)
(attn_W): Linear(in_features=64, out_features=32, bias=False)
(attn_v): Linear(in_features=32, out_features=1, bias=False)
(linear_dropout): Dropout(p=0.5, inplace=False)
(linear): Linear(in_features=128, out_features=2, bias=True)
)
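For a concrete picture, here is a forward-pass sketch reconstructed from the printed modules; the actual logic lives in model.py and may differ, and the attention wiring is an assumption inferred from the attn_U/attn_W/attn_v shapes:

```python
# Sketch only: a BiLSTM encodes the original post, an LSTM encodes the
# comments, additive attention scores each comment step against the post
# representation, and the concatenated 128-d feature goes to a 2-class head.
import torch
import torch.nn as nn

class RumorDetectModelSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.origin_bilstm = nn.LSTM(300, 32, batch_first=True, bidirectional=True)
        self.comment_lstm = nn.LSTM(300, 64, batch_first=True)
        self.comment_dropout = nn.Dropout(0.5)
        self.attn_U = nn.Linear(64, 32, bias=False)
        self.attn_W = nn.Linear(64, 32, bias=False)
        self.attn_v = nn.Linear(32, 1, bias=False)
        self.linear_dropout = nn.Dropout(0.5)
        self.linear = nn.Linear(128, 2)

    def forward(self, origin, comments):
        # origin: (B, T1, 300), comments: (B, T2, 300) pretrained word vectors
        origin_out, _ = self.origin_bilstm(origin)        # (B, T1, 64)
        origin_repr = origin_out[:, -1, :]                # (B, 64) last step
        comment_out, _ = self.comment_lstm(comments)      # (B, T2, 64)
        comment_out = self.comment_dropout(comment_out)
        # additive attention over comment steps, conditioned on the post
        scores = self.attn_v(torch.tanh(
            self.attn_U(comment_out) + self.attn_W(origin_repr).unsqueeze(1)))
        weights = torch.softmax(scores, dim=1)            # (B, T2, 1)
        comment_repr = (weights * comment_out).sum(dim=1) # (B, 64)
        feat = torch.cat([origin_repr, comment_repr], dim=1)  # (B, 128)
        return self.linear(self.linear_dropout(feat))     # (B, 2) logits
```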
See main.py and rmdt.py for details.
A simple example is in main.py and main.ipynb.
Due to model limitations, the input must contain both the original post text and at least one comment; otherwise an exception may be thrown.
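A hypothetical guard illustrating that constraint; predict_fn and its signature are illustrative, not the repo's actual API:

```python
# Sketch only: validate the inputs before inference, since the model needs
# the original post text plus at least one comment.
def safe_detect(predict_fn, origin_text, comment_texts):
    if not origin_text or not comment_texts:
        raise ValueError("need the original post text and at least one comment")
    return predict_fn(origin_text, comment_texts)
```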
If you find this project helpful, please star it so more people can see it. :)