Pre-trained Chinese ELECTRA (Chinese ELECTRA pre-trained model)
Pre-trained Transformers for Arabic Language Understanding and Generation (Arabic BERT, Arabic GPT2, Arabic ELECTRA)
Pretrained ELECTRA Model for Korean
Implementations of common NLP tasks, including new-word discovery, plus PyTorch-based word embeddings, Chinese text classification, named entity recognition, abstractive summarization, sentence similarity, triple extraction, pre-trained models, and more.
Turkish BERT/DistilBERT, ELECTRA and ConvBERT models
Pretrain and fine-tune ELECTRA with fastai and huggingface. (Results of the paper replicated!)
Pytorch-Named-Entity-Recognition-with-transformers
"Intel Innovation Masters Cup" Deep Learning Challenge, Track 2: CCKS2021 Chinese NLP address element parsing
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (see the replaced-token-detection sketch after this list)
Baseline code for Korean open-domain question answering (ODQA)
GLUE benchmark code based on bert4keras
Pre-trained model loading and inference based on TensorFlow 1.x, supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision. Flexible training, validation, and prediction.
Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Character-level Korean ELECTRA model (syllable-level Korean ELECTRA)
Pre-training Language Models for Japanese
Transformers Pipeline with KoELECTRA (see the fill-mask pipeline sketch after this list)
Distilling Task-Specific Knowledge from a Teacher Model into a BiLSTM
AiSpace: Better practices for deep learning model development and deployment for TensorFlow 2.0
ALBERT for the Conversational Question Answering challenge
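ELECTRA, which many of the repositories above build on, trains a discriminator to tell original tokens apart from tokens substituted by a small generator. A minimal sketch of that replaced-token detection, assuming the `google/electra-small-discriminator` checkpoint from the Hugging Face Hub (any ELECTRA discriminator checkpoint works the same way):

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

name = "google/electra-small-discriminator"  # assumed checkpoint
tokenizer = ElectraTokenizerFast.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

# Corrupt one token ("jumps" -> "cooked") and let the discriminator flag it.
fake = "The quick brown fox cooked over the lazy dog"
inputs = tokenizer(fake, return_tensors="pt")

with torch.no_grad():
    # One logit per token; a positive logit means "this token was replaced".
    logits = model(**inputs).logits.squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, logit in zip(tokens, logits):
    print(f"{token:>10}  {'replaced' if logit > 0 else 'original'}")
```

A positive logit corresponds to a sigmoid output above 0.5, i.e. the discriminator believes the generator swapped that token in; this per-token binary objective is what distinguishes ELECTRA pre-training from masked-language-model pre-training.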
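The KoELECTRA pipeline entry similarly boils down to pointing the standard transformers pipeline at a Korean checkpoint. A sketch assuming the `monologg/koelectra-base-v3-generator` checkpoint from the KoELECTRA project (fill-mask needs the generator half, since the discriminator has no language-modeling head):

```python
from transformers import pipeline

# Assumed checkpoint: the KoELECTRA v3 generator released on the Hub.
fill_mask = pipeline("fill-mask", model="monologg/koelectra-base-v3-generator")

# Build a masked Korean sentence using the tokenizer's own mask token.
masked = f"한국어 자연어 처리는 재미있는 {fill_mask.tokenizer.mask_token}입니다."
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 4))
```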