Improved CRNN, ASTER, and DAN for different text domains: scene text, handwritten text, documents, Chinese/English, and even ancient books.
Date | Description |
---|---|
7/30 | Checkpoint for CRNN on the IAM dataset has been released. You can test your English handwriting now |
7/31 | Checkpoint for CRNN on CASIA-HWDB2.x has been released. You can test your Chinese handwriting now |
8/3 | New algorithm! ASTER is reimplemented here and a checkpoint for scene text recognition is released |
8/5 | Checkpoint for ASTER on the IAM dataset has been released. It's much more accurate than CRNN thanks to the attention model's implicit semantic information. You should not miss it 😃 |
8/8 | New algorithm! DAN (Decoupled Attention Network) is reimplemented. Checkpoints for both scene text and the IAM dataset are released |
8/11 | New algorithm! ACE (Aggregation Cross-Entropy). It's a new loss function for the text recognition task, like CTC and attention |
8/17 | Retrained ACE and DAN; added a powerful augmentation tool |
9/7 | Training SRN and more |
Now I'm focusing on a project to build a general OCR system that can recognize different text domains: scene text, handwriting, documents, Chinese and English, and even ancient books such as the Confucian classics. So far I don't have a clear idea of how to do it, but let's just do it step by step. This repository is suitable for beginners who are interested in text recognition (I am a beginner too 😂).
Part | Description |
---|---|
Datasets | Multiple datasets in LMDB form (a reading sketch follows the dataset tables below) |
Algorithms | CRNN, ASTER, DAN, ACE |
How to use | Training, testing, and inference scripts |
Checkpoints | Released checkpoints |
Dataset | Description | BaiduNetdisk link |
---|---|---|
SynthText | 9 million synthetic text instance images from a set of 90k common English words. Words are rendered onto natural images with random transformations | Scene text datasets (access code: emco) |
MJSynth | 6 million synthetic text instances, generated in a similar manner to SynthText | Scene text datasets (access code: emco) |
Dataset | Description | BaiduNetdisk link |
---|---|---|
IIIT5K-Words (IIIT5K) | 3000 test image instances, taken from street scenes and from originally digital images | Scene text datasets (access code: emco) |
Street View Text (SVT) | 647 test image instances. Some images are severely corrupted by noise, blur, and low resolution | Scene text datasets (access code: emco) |
StreetViewText-Perspective (SVT-P) | 639 test image instances. It is specifically designed to evaluate perspective-distorted text recognition. It is built on the original SVT dataset by selecting images at the same address on Google Street View but with different view angles, so most text instances are heavily distorted by the non-frontal view angle | Scene text datasets (access code: emco) |
ICDAR 2003 (IC03) | 867 test image instances | Scene text datasets (access code: mfir) |
ICDAR 2013 (IC13) | 1015 test image instances | Scene text datasets (access code: emco) |
ICDAR 2015 (IC15) | 2077 test image instances. As the text images were taken by Google Glass without ensuring image quality, most of the text is very small, blurred, and multi-oriented | Scene text datasets (access code: emco) |
CUTE80 (CUTE) | 288 test image instances. It focuses on curved text recognition. Most images in CUTE have a complex background, perspective distortion, and poor resolution | Scene text datasets (access code: emco) |
Dataset | Description | BaiduNetdisk link |
---|---|---|
IAM | The IAM dataset is based on handwritten English text copied from the LOB corpus. It contains 747 documents (6,482 lines) in the training set, 116 documents (976 lines) in the validation set, and 336 documents (2,915 lines) in the test set | IAM_line_level (access code: u2a3) |
CASIA-HWDB2.x | CASIA-HWDB is a large-scale Chinese handwritten database | HWDB2.x (access code: ozqu) |
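All of the datasets above are distributed in LMDB form. Below is a minimal reading sketch; it assumes the common `num-samples` / `image-%09d` / `label-%09d` key layout used by most text-recognition LMDB releases, so verify the keys against the actual data.

```python
import io
import lmdb
from PIL import Image

# Minimal reading sketch. The key names below are the usual convention for
# text-recognition lmdb datasets and are an assumption, not taken from this repo.
env = lmdb.open('path/to/lmdb_dataset', readonly=True, lock=False, readahead=False)
with env.begin(write=False) as txn:
    num_samples = int(txn.get(b'num-samples'))            # total number of samples
    img_bytes = txn.get(b'image-000000001')               # first image as encoded bytes
    label = txn.get(b'label-000000001').decode('utf-8')   # its transcription
    image = Image.open(io.BytesIO(img_bytes)).convert('RGB')
print(num_samples, label, image.size)
```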
- I reimplemented the most classic and widely deployed algorithm, CRNN. The original backbone is replaced by a modified ResNet, and the results below are trained on MJ + ST. A minimal sketch of the CTC objective follows the table.
Model (word accuracy, %) | IIIT5K | SVT | IC03 | IC13 | IC15 | SVTP | CUTE |
---|---|---|---|---|---|---|---|
CRNN(reimplemented) | 91.2 | 84.4 | 90.8 | 88.0 | 73.1 | 71.8 | 77.4 |
CRNN(original) | 78.2 | 80.8 | 89.4 | 86.7 | - | - | - |
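CRNN is trained with the CTC objective, which marginalizes over all valid alignments between the per-frame predictions and the label sequence. The snippet below only illustrates that objective with made-up shapes (26 frames, 36 characters plus a blank); it is not the repository's training code.

```python
import torch
import torch.nn as nn

T, N, C = 26, 4, 37                                     # frames, batch size, 36 chars + blank
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

logits = torch.randn(T, N, C, requires_grad=True)       # stand-in for the CRNN output
log_probs = logits.log_softmax(dim=2)                   # CTCLoss expects log-probabilities
targets = torch.randint(1, C, (N, 10))                  # padded label indices (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)   # every sample uses all frames
target_lengths = torch.randint(1, 11, (N,), dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```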
- Some recognition results
GT | Prediction |
---|---|
I am so sorry | 'iamsosory' |
I still love you | 'istilloveyou' |
Can we begin again | 'canwebeginagain' |
- Note that we only predict 0-9 and a-z: no upper case and no punctuation. If you want to predict them, you can modify the code (a hypothetical sketch follows).
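As a purely hypothetical illustration (the variable name and its location depend on the actual code), extending the alphabet could look like this. Note that a larger alphabet changes the output layer size, so the released checkpoints cannot be reused without retraining.

```python
import string

# Hypothetical sketch -- not the repository's actual configuration code.
alphabet = string.digits + string.ascii_lowercase                          # what the checkpoints predict
alphabet_full = string.digits + string.ascii_letters + string.punctuation  # add upper case + punctuation
num_classes = len(alphabet_full) + 1                                       # +1 for the CTC blank
print(len(alphabet), len(alphabet_full), num_classes)                      # 36, 94, 95
```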
- Related experiments are conducted on the IAM dataset and CASIA-HWDB
Dataset | Word Accuracy (%) |
---|---|
IAM (line level) | 67.2 |
CASIA-HWDB2.0-2.2 | 88.6 |
- Some recognition results
- Chinese handwriting recognition suffers from an imbalanced character distribution, so some rare characters are hard to recognize
- ASTER is a classic text recognition algorithm with a TPS rectification network and an attention decoder. A sketch of one attention decoding step follows the table.
Model (word accuracy, %) | IIIT5K | SVT | IC03 | IC13 | IC15 | SVTP | CUTE |
---|---|---|---|---|---|---|---|
ASTER(reimplemented) | 92.9 | 88.1 | 91.2 | 88.6 | 75.9 | 78.3 | 78.5 |
ASTER(original) | 91.93 | 88.76 | 93.49 | 89.75 | - | 74.11 | 73.26 |
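For reference, one step of an additive (Bahdanau-style) attention decoder of the kind ASTER runs on top of the TPS-rectified image features can be sketched as below. Layer sizes and module layout are illustrative assumptions, not the exact modules of this reimplementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecoderStep(nn.Module):
    """One decoding step of an additive-attention decoder (illustrative sizes)."""
    def __init__(self, num_classes, enc_dim=512, hid_dim=256, emb_dim=128, att_dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_classes, emb_dim)   # embedding of the previous symbol
        self.rnn = nn.GRUCell(enc_dim + emb_dim, hid_dim)
        self.w_enc = nn.Linear(enc_dim, att_dim)
        self.w_hid = nn.Linear(hid_dim, att_dim)
        self.score = nn.Linear(att_dim, 1)
        self.cls = nn.Linear(hid_dim, num_classes)

    def forward(self, prev_symbol, prev_hidden, enc_feats):
        # enc_feats: (N, T, enc_dim) feature sequence from the rectified image encoder
        e = self.score(torch.tanh(self.w_enc(enc_feats) + self.w_hid(prev_hidden).unsqueeze(1)))
        alpha = F.softmax(e, dim=1)                 # (N, T, 1) attention over feature columns
        glimpse = (alpha * enc_feats).sum(dim=1)    # (N, enc_dim) attended context vector
        hidden = self.rnn(torch.cat([glimpse, self.embed(prev_symbol)], dim=1), prev_hidden)
        return self.cls(hidden), hidden             # logits for the current character
```

At inference time the predicted character of each step is fed back as `prev_symbol` for the next step, which is one source of the implicit semantic information mentioned above.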
- Some recognition results
GT | Prediction |
---|---|
COLLEGE | 'COLLEGE' |
FOOTBALL | 'FOOTBALL' |
BURTON | 'BURTON' |
- Related experiments are conducted on the IAM dataset and CASIA-HWDB
Dataset | Word Accuracy (%) |
---|---|
IAM (line level) | 69.8 |
CASIA-HWDB2.0-2.2 | The model fails to converge and I am still training |
- Some recognition results
- DAN (Decoupled Attention Network) decouples attention alignment from the decoding history by computing attention maps with a convolutional alignment module.
Model (word accuracy, %) | IIIT5K | SVT | IC03 | IC13 | IC15 | SVTP | CUTE |
---|---|---|---|---|---|---|---|
DAN1D(reimplemented) | 91.2 | 83.8 | 89.4 | 88.7 | 72.1 | 70.2 | 74.7 |
DAN1D(original) | 93.3 | 88.4 | 95.2 | 94.2 | 71.8 | 76.8 | 80.6 |
- Related experiments are conducted on the IAM dataset and CASIA-HWDB
Dataset | Word Accuracy (%) |
---|---|
IAM (line level) | 74.0 |
CASIA-HWDB2.0-2.2 | |
- Some recognition results
- ACE is a simple yet effective loss function. However, there is still a large gap compared with CTC and attention. A sketch of the loss follows the table.
Model (word accuracy, %) | IIIT5K | SVT | IC03 | IC13 | IC15 | SVTP | CUTE |
---|---|---|---|---|---|---|---|
ACE(reimplemented) | 84.8 | 76.7 | 84.0 | 82.6 | 65.3 | 64.8 | 68.8 |
ACE(original) | 82.3 | 82.6 | 92.1 | 89.7 | - | - | - |
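As a rough illustration of the idea: ACE aggregates the per-frame class probabilities over time, normalizes them into a distribution, and matches it against the normalized character counts of the label with a cross-entropy. The sketch below follows that formulation with class 0 as the blank; it is an illustration under those assumptions, not the repository's exact implementation.

```python
import torch
import torch.nn.functional as F

def ace_loss(logits, char_counts):
    """Aggregation Cross-Entropy (illustrative sketch).

    logits:      (N, T, C) per-frame class scores; class 0 is treated as the blank.
    char_counts: (N, C) occurrence count of every character in each label,
                 with column 0 ignored (the blank count is derived below).
    """
    N, T, C = logits.shape
    probs = F.softmax(logits, dim=2)                  # per-frame class probabilities
    agg = probs.sum(dim=1) / T                        # aggregate over time and normalize
    counts = char_counts.float().clone()
    counts[:, 0] = T - counts[:, 1:].sum(dim=1)       # blank absorbs the unused frames
    counts = counts / T                               # normalized label distribution
    return -(counts * torch.log(agg + 1e-10)).sum(dim=1).mean()

# Example: batch of 2, 26 frames, 37 classes (blank + 36 characters).
logits = torch.randn(2, 26, 37, requires_grad=True)
counts = torch.zeros(2, 37)
counts[0, 1], counts[0, 2] = 2, 1                     # e.g. a label with two 'a' and one 'b'
counts[1, 3] = 4
ace_loss(logits, counts).backward()
```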
- It's easy to start the training process. First, download the required datasets.
- Check the directory layout:

```
scripts--
    ACE--
        CASIA_HWDB--
            train.sh
            test.sh
            inference.sh
        iam_dataset--
            train.sh
            test.sh
            inference.sh
        scene_text--
            train.sh
            test.sh
            inference.sh
    ASTER--
        ...
    CRNN--
        ...
    DAN--
        ...
```
- Let's say you want to train ACE on scene text. Change the training and testing dataset paths in `scripts/ACE/scene_text/train.sh` (the first two lines).
- Run:

```
bash scripts/ACE/scene_text/train.sh
```
- If you want to test the accuracy, follow the same steps as for training. You also need to set the resume parameter in the .sh file; it is the path to the checkpoint.
- Run:

```
bash scripts/ACE/scene_text/test.sh
```
- To test a single image, change the image path and the resume path in the corresponding .sh file.
- Then run:

```
bash scripts/ACE/scene_text/inference.sh
```
- CRNN on STR: Checkpoints (access code: axf7)
- CRNN on IAM: Checkpoints (access code: 3ajw)
- CRNN on CASIA_HWDB: Checkpoints (access code: ujpy)
- ASTER on STR: Checkpoints (access code: mcc9)
- ASTER on IAM: Checkpoints (access code: mqqm)