Handkerchief is the classy and sophisticated alternative to RAG. Stop being GPU-poor: give OpenAI your money and shove your entire text corpus into parallelized gpt-3.5-turbo-16k calls! Handkerchief does everything you want:
- ✅ Misses less information
- ✅ Understands more of the surrounding context
- ✅ Gives better answers when nothing is found
- ✅ Uses more GPUs!!
To use handkerchief.py, run:

```
pip install openai tiktoken
```

Get your API key and add this line to your `~/.bashrc` or `~/.zshrc`:

```
export OPENAI_API_KEY='your-api-key'
```
See the `test()` function for an example of running handkerchief. RAG is implemented with `handkerchief.sneeze()`, which does two things:
- It searches every 15K-token chunk of the text for information relevant to the query.
- It generates a response based on the information found.
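The two steps above can be sketched roughly as follows. This is a minimal illustration, not the actual `handkerchief.sneeze()` implementation: the signature, prompts, model name, and worker count are all assumptions, and it targets the `openai>=1.0` client.

```python
# Hypothetical sketch of the sneeze() two-step flow; names and prompts are illustrative.
from concurrent.futures import ThreadPoolExecutor

def chunk_tokens(tokens, chunk_size=15_000):
    """Split a token list into consecutive chunks of at most chunk_size tokens."""
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

def sneeze(corpus, question, model="gpt-3.5-turbo-16k", chunk_size=15_000):
    """Brute-force 'RAG': scan every chunk in parallel, then answer from the hits."""
    import tiktoken            # imported lazily so chunk_tokens stays dependency-free
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    enc = tiktoken.encoding_for_model(model)
    chunks = [enc.decode(c) for c in chunk_tokens(enc.encode(corpus), chunk_size)]

    def extract(chunk):
        # Step 1: ask the model to pull anything relevant out of this chunk.
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content":
                f"Extract any information relevant to the question below, "
                f"or reply NONE.\n\nQuestion: {question}\n\nText:\n{chunk}"}],
        )
        return resp.choices[0].message.content

    with ThreadPoolExecutor(max_workers=8) as pool:
        findings = [f for f in pool.map(extract, chunks) if f.strip() != "NONE"]

    # Step 2: generate a final answer from whatever was found (possibly nothing).
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content":
            f"Answer the question using only these notes; say so if nothing "
            f"was found.\n\nQuestion: {question}\n\nNotes:\n"
            + "\n---\n".join(findings)}],
    )
    return resp.choices[0].message.content
```

Because every chunk is searched independently, the extraction calls parallelize trivially; the only sequential step is the final answer generation.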
NOTE: The prompts used in the script are just examples. You should engineer the prompts to suit your specific use case.
made by Ivan Yevenko