EFFICIENT AND OPTIMIZED TOKENIZER ENGINE FOR LLM INFERENCE SERVING
Updated Apr 3, 2025 - C++
Fast and memory-efficient library for WordPiece tokenization as used by BERT.
Learning BPE embeddings by first learning a segmentation model and then training word2vec.
A curated list of tokenizer libraries for blazing-fast NLP processing.
Byte Pair Encoding (BPE)
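The BPE entry above can be illustrated with a minimal, toy sketch of the merge-learning loop: repeatedly count adjacent symbol pairs across the corpus and merge the most frequent one. The function name and corpus here are illustrative, not taken from any listed repository.

```python
from collections import Counter

def bpe_merges(corpus, num_merges):
    """Learn BPE merge rules from a whitespace-split corpus (toy sketch)."""
    # Represent each word as a tuple of symbols, weighted by frequency.
    vocab = Counter(tuple(word) for word in corpus.split())
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs across all words.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge: replace every occurrence of the winning pair.
        merged = best[0] + best[1]
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges
```

Production implementations differ mainly in pair-count bookkeeping (incremental updates instead of full recounts) and in byte-level pre-tokenization, but the merge rule itself is the same.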
WordPiece Tokenizer for BERT models.
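The WordPiece inference step used by BERT-style tokenizers can be sketched as greedy longest-match-first segmentation over a fixed vocabulary, with "##" marking word-internal pieces. The function name and the tiny vocabulary below are assumptions for illustration only.

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece tokenization of a single word."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        cur = None
        # Try the longest substring starting at `start` first, shrinking
        # until a vocabulary entry matches; non-initial pieces get "##".
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:
            return [unk]  # no piece matches: the whole word is unknown
        tokens.append(cur)
        start = end
    return tokens

vocab = {"un", "aff", "##aff", "##able"}
# wordpiece_tokenize("unaffable", vocab) → ["un", "##aff", "##able"]
```

Fast implementations avoid the quadratic substring scan with a trie over the vocabulary, which is where the memory and speed gains of the libraries listed here typically come from.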
Detect whether a text is AI-generated, either by training a new tokenizer and combining it with tree-based classification models, or by training language models on a large dataset of human- and AI-generated texts.
A framework for generating subword vocabulary from a tensorflow dataset and building custom BERT tokenizer models.