Save jayo78/79d8834e6e31bf942c7b604e1611b68d to your computer and use it in GitHub Desktop.
import openai

openai.api_key = "YOUR API KEY HERE"
model_engine = "text-davinci-003"

chatbot_prompt = """
As an advanced chatbot, your primary goal is to assist users to the best of your ability. This may involve answering questions, providing helpful information, or completing tasks based on user input. In order to effectively assist users, it is important to be detailed and thorough in your responses. Use examples and evidence to support your points and justify your recommendations or solutions.
<conversation_history>
User: <user input>
Chatbot:"""


def get_response(conversation_history, user_input):
    prompt = chatbot_prompt.replace(
        "<conversation_history>", conversation_history).replace("<user input>", user_input)

    # Get the response from GPT-3
    response = openai.Completion.create(
        engine=model_engine, prompt=prompt, max_tokens=2048, n=1, stop=None, temperature=0.5)

    # Extract the response text from the response object
    response_text = response["choices"][0]["text"]
    chatbot_response = response_text.strip()

    return chatbot_response


def main():
    conversation_history = ""
    while True:
        user_input = input("> ")
        if user_input == "exit":
            break
        chatbot_response = get_response(conversation_history, user_input)
        print(f"Chatbot: {chatbot_response}")
        conversation_history += f"User: {user_input}\nChatbot: {chatbot_response}\n"


main()
What if the length of the conversation history exceeds the range allowed by the API?
You can use semantic search to find relevant parts of the previous conversation to insert before generating a response. David Shapiro has a great video on this: https://www.youtube.com/watch?v=c3aiCrk0F0U
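Besides retrieval, a simpler fallback is a sliding window over the history. The sketch below is my own illustration, not part of the gist: the function name and the character budget are assumptions, and characters are only a rough proxy for tokens (roughly 4 characters per English token); a real implementation would count tokens with the model's tokenizer.

```python
def truncate_history(conversation_history, max_chars=4000):
    """Drop whole "User:"/"Chatbot:" lines from the front until the
    history fits within max_chars (a crude stand-in for a token limit)."""
    if len(conversation_history) <= max_chars:
        return conversation_history
    # Split into individual turn lines, keeping the newlines, and
    # discard the oldest lines first.
    lines = conversation_history.splitlines(keepends=True)
    while lines and sum(len(line) for line in lines) > max_chars:
        lines.pop(0)
    return "".join(lines)


history = "User: hi\nChatbot: hello there\n" * 200
print(len(truncate_history(history, max_chars=500)) <= 500)  # True
```

This keeps the most recent context, which usually matters most for coherence, at the cost of forgetting older turns entirely; the semantic-search approach above avoids that trade-off.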
Hi, thank you for putting code like this out in the open. I'm a newbie web developer and stumbled upon your work while going through learnprompting.org. I was inspired and refactored it into JavaScript, and also built a React front-end to test it out. The server-side code works well [tested it with Postman and the AI responses are valid], but I can't seem to understand why the response on the client side is not what it should be. If you're still following this thread and in the mood to share your expertise, I would very much appreciate it :)
I tried this, and wondered why it didn't seem to be remembering anything I'd said. Then I noticed that in the prompt you have <conversation history> (with a space), but in the replace command you use <conversation_history> (with an underscore), so no replacement actually happens...
Good catch - just updated. Thanks!
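For anyone hitting the same issue: str.replace never raises when the target substring is absent, it just returns the string unchanged, which is why the space/underscore mismatch failed silently instead of erroring out. A small illustration (template text is made up for the demo):

```python
template = "History:\n<conversation history>\nUser: <user input>"

# Placeholder spelled with a space, but replace called with an underscore:
# nothing matches, so the original template comes back untouched.
broken = template.replace("<conversation_history>", "User: hi\nChatbot: hello\n")
print(broken == template)  # True: no substitution happened

# With the exact spelling, the substitution works as intended.
fixed = template.replace("<conversation history>", "User: hi\nChatbot: hello\n")
print("<conversation history>" in fixed)  # False: placeholder was replaced
```

This is a good argument for asserting that a placeholder actually appears in the template before substituting, or for using str.format/f-strings, which do raise on a missing or misspelled key.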
Hello, thank you very much for this example. As a Java programmer learning Python with a focus on generative AI, working through prompt engineering like this has been a super pleasant adventure.
love this
Ah! Thank you - found what I was looking to learn here! Thank you @jayo78 !!