Paper link: Pending
Insufficient or even unavailable training data for emerging classes is a major challenge in many classification tasks, including text classification. Recognising text documents of classes that have never been seen in the learning stage, so-called zero-shot text classification, is therefore difficult, and only a few previous studies have tackled this problem. In this paper, we propose a two-phase framework together with data augmentation and feature augmentation to solve this problem. Four kinds of semantic knowledge (word embeddings, class descriptions, class hierarchy, and a general knowledge graph) are incorporated into the proposed framework to deal with instances of unseen classes effectively. Experimental results show that each phase, and the combination of the two phases, clearly outperforms baseline and recent approaches in classifying real-world texts under the zero-shot scenario.

To run the full pipeline:

```shell
cd src_reject
sh run.sh
```
In order to run the code, please check the following items.
- Package dependencies:
- Python 3.5
- TensorFlow 1.11.0
- TensorLayer 1.11.0
- Numpy 1.14.5
- Pandas 0.21.0
- NLTK 3.2.5
- Download the original datasets
- Check config.py and update the locations of the data files accordingly. config.py also defines the locations of intermediate files.
- Intermediate files already provided in this repo:
- classLabelsDBpedia.csv: A summary of classes in DBpedia and linked nodes in ConceptNet.
- classLabels20news.csv: A summary of classes in 20news and linked nodes in ConceptNet.
- Selection of seen/unseen classes in DBpedia with unseen rate 0.25 and 0.5.
- Selection of seen/unseen classes in 20news with unseen rate 0.25 and 0.5.
- Note: seen/unseen classes are randomly selected 10 times. You may randomly generate another 10 groups of seen/unseen classes.
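A random seen/unseen selection like the ones provided can be pictured with the sketch below; the function name and seeding scheme are illustrative assumptions, not the repo's actual code:

```python
import random

def random_seen_unseen_split(class_labels, unseen_rate, seed=0):
    """Randomly split class labels into (seen, unseen) at the given unseen rate."""
    rng = random.Random(seed)
    labels = list(class_labels)
    rng.shuffle(labels)
    n_unseen = max(1, int(round(len(labels) * unseen_rate)))
    return labels[n_unseen:], labels[:n_unseen]

# e.g. 10 random groups for a 14-class dataset (DBpedia) at unseen rate 0.25
groups = [random_seen_unseen_split(range(14), 0.25, seed=s) for s in range(10)]
```

Using a distinct seed per group makes each selection reproducible.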
- Intermediate files that need to be generated manually:
- Run `combine_zhang15_dbpedia_train_test()` in playground.py: the generated `full.csv` is used to create the vocabulary for DBpedia.
- Run `combine_20news_train_test()` in playground.py: the generated `full.csv` is used to create the vocabulary for 20news.
- Other intermediate files should be generated automatically when they are needed.
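The vocabulary-creation step above can be sketched roughly as follows; the tokenisation, the special tokens, and the frequency cutoff are illustrative assumptions — the repo's actual logic lives in playground.py:

```python
import collections
import re

def build_vocab(texts, min_freq=2):
    """Count word frequencies over all texts and assign integer ids.

    Tokenisation and the min-frequency cutoff are illustrative choices;
    the repo's own vocabulary code may differ.
    """
    counter = collections.Counter()
    for text in texts:
        counter.update(re.findall(r"[a-z0-9']+", text.lower()))
    vocab = {"<pad>": 0, "<unk>": 1}
    for word, freq in counter.most_common():
        if freq >= min_freq:
            vocab[word] = len(vocab)
    return vocab

texts = ["The quick brown fox", "the lazy dog", "the quick dog"]
vocab = build_vocab(texts, min_freq=2)
# only words seen at least twice ("the", "quick", "dog") get ids
```

Building the vocabulary over the combined train+test `full.csv` ensures both splits share one word-id mapping.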
An example:

```shell
python3 train_seen.py \
    --data dbpedia \
    --unseen 0.5 \
    --model vw \
    --sepoch 1 \
    --train 1
```
The arguments of the command represent:

- `data`: Dataset, either `dbpedia` or `20news`.
- `unseen`: Rate of unseen classes, either `0.25` or `0.5`.
- `model`: The model to train. In Phase 1, this argument can only be `vw`: the inputs are embeddings of words (from text).
- `sepoch`: Repeat the training of each epoch several times. The ratio of positive/negative samples and the learning rate stay consistent within one epoch, no matter how many times the epoch is repeated.
- `train`: In Phase 1, this argument does not affect the program; training and testing run together.
- `rgidx`: Optional, random-group starting index, by default `1`: e.g. if 5, training starts from the 5th random group. This argument is used when the program is accidentally interrupted.
- `gpu`: Optional, GPU occupation percentage, by default `1.0`, which means full occupation of available GPUs.
- `baseepoch`: Optional, you may want to specify which epoch to test.
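The `rgidx` resume behaviour can be pictured with a small sketch; the function and group structure here are hypothetical, while the real loop lives in the training scripts:

```python
def run_from_group(groups, train_one_group, rgidx=1):
    """Run training over 1-indexed random groups, resuming at group rgidx.

    Groups before rgidx are skipped, matching the README's description of
    resuming after an accidental interruption.
    """
    done = []
    for i, group in enumerate(groups, start=1):
        if i < rgidx:
            continue  # already completed before the interruption
        train_one_group(group)
        done.append(i)
    return done

# Resume from the 5th of 10 random groups
ran = run_from_group(list(range(10)), lambda g: None, rgidx=5)
# ran == [5, 6, 7, 8, 9, 10]
```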
An example:

```shell
python3 train_unseen.py \
    --data 20news \
    --unseen 0.5 \
    --model vwvcvkg \
    --ns 2 --ni 2 --sepoch 10 \
    --rgidx 1 --train 1
```
The arguments of the command represent:

- `data`: Dataset, either `dbpedia` or `20news`.
- `unseen`: Rate of unseen classes, either `0.25` or `0.5`.
- `model`: The model to train. This argument can be (corresponding with Table 6 in the paper):
  - `kgonly`: the inputs are the relationship vectors extracted from the knowledge graph (KG).
  - `vcvkg`: the inputs contain the embeddings of class labels and the relationship vectors.
  - `vwvkg`: the inputs contain the embeddings of words (from text) and the relationship vectors.
  - `vwvc`: the inputs contain the embeddings of words and class labels.
  - `vwvcvkg`: all three kinds of inputs mentioned above.
- `train`: 1 for training, 0 for testing.
- `sepoch`: Repeat the training of each epoch several times. The ratio of positive/negative samples and the learning rate stay consistent within one epoch, no matter how many times the epoch is repeated.
- `ns`: Optional, integer, the ratio of positive to negative samples; the higher, the more negative samples. By default `2`.
- `ni`: Optional, integer, the speed of increasing negative samples during training per epoch, by default `2`.
- `rgidx`: Optional, random-group starting index, by default `1`: e.g. if 5, training starts from the 5th random group. This argument is used when the program is accidentally interrupted.
- `gpu`: Optional, GPU occupation percentage, by default `1.0`, which means full occupation of available GPUs.
- `baseepoch`: Optional, you may want to specify which epoch to test.
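How the `model` flag selects inputs can be sketched by concatenating the available feature vectors; the vector names and the use of plain concatenation are assumptions for illustration, so see the repo's model code for the actual fusion:

```python
import numpy as np

def select_inputs(model, v_w, v_c, v_kg):
    """Pick which feature vectors feed the classifier, per the model flag.

    v_w: word embeddings (from text), v_c: class-label embeddings,
    v_kg: relationship vectors from the knowledge graph (KG).
    Concatenation is an illustrative choice here.
    """
    parts = []
    if "vw" in model:
        parts.append(v_w)
    if "vc" in model:
        parts.append(v_c)
    if "kg" in model:
        parts.append(v_kg)
    return np.concatenate(parts)

v_w, v_c, v_kg = np.ones(300), np.ones(300), np.ones(10)
assert select_inputs("kgonly", v_w, v_c, v_kg).shape == (10,)
assert select_inputs("vwvc", v_w, v_c, v_kg).shape == (600,)
assert select_inputs("vwvcvkg", v_w, v_c, v_kg).shape == (610,)
```

Note that simple substring checks on the flag cover all five options listed above.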