Development Status :: 7 - Inactive
Transition to Gemini API
In February 2024, Bard changed its service name to Gemini.
- For some countries/regions where the __Secure-1PSID cookie value ends with a single dot: Bard API >= 0.1.40
- For all other countries/regions: starting from March 1st, 2024, please go to the Gemini API package.
Moving forward, updates will primarily focus on the Gemini API package. Alternatively, utilize the official Gemini API at Google AI Studio.
A Python package that returns Google Bard responses via cookie values.
Please exercise caution and use this package responsibly. This Python package is UNOFFICIAL.
I referred to this GitHub repository (github.com/acheong08/Bard), where the inference process of Bard was reverse-engineered. Using the __Secure-1PSID cookie, you can ask questions and get answers from Google Bard. Please note that bardapi is not a free service, but rather a tool provided to assist developers with testing certain functionalities due to the delayed development and release of Google Bard's API. It has been designed with a lightweight structure that can easily adapt to the emergence of an official API. Therefore, I strongly discourage using it for any other purposes. If you have access to a reliable official PaLM-2 API or Google Generative AI API, replace the provided response with the corresponding official code. Check out dsdanielpark#262.
- Google Bard API
What is Google Bard?
Bard is a conversational generative artificial intelligence chatbot developed by Google, based initially on the LaMDA family of LLMs (Large Language Models) and later on the PaLM LLM. Please check the official documents for updates on Bard, including available regions and languages.
$ pip install bardapi
$ pip install git+https://github.com/dsdanielpark/Bard-API.git
Because certain dependency packages are not compatible with 64-bit Windows, we are releasing a lightweight alpha version of Bard that only returns responses for simple requests. This release is a continuation of the PyPI 0.1.18 version, which was maintained with lightweight and simple functionality. See the alpha-release GitHub branch for more details.
$ pip install bardapi==0.1.23a
Warning: Do not expose the __Secure-1PSID cookie. It is for testing purposes only; avoid using it directly in applications. Cookie values change periodically (every 15-20 minutes). Frequent session changes may briefly block access; headless mode is challenging. Rate limiting applies and changes often. If the cookie changes, log out of your Google account, close the browser, and enter the new cookie value, or manually reset the cookie to get a new value. See the FAQ and issue pages for details.
- Visit https://gemini.google.com/
- Press F12 to open the developer console
- Session: Application → Cookies → copy the value of the __Secure-1PSID cookie, or try using SIDCC as the token.
Note that while I referred to the __Secure-1PSID or SIDCC value as an API key for convenience, it is not an officially provided API key.
The cookie value is subject to frequent changes; verify it again if an error occurs. Most errors occur when an invalid cookie value is entered.
If you need to set multiple cookie values:
- Multi-cookie Bard - After confirming that multiple cookie values are required to receive responses reliably in certain countries, I will deploy it for testing purposes. Please debug and create a pull request.
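If multiple cookies are required, the sketch below assumes the package's multi-cookie interface (BardCookies) accepts a dictionary of cookie names and values; the exact class name, parameters, and required cookies may differ by package version and region, so treat this as an assumption to verify against the current source.
from bardapi import BardCookies

# Assumed multi-cookie interface; the cookie names shown here are illustrative
# and may not all be required in your region.
cookie_dict = {
    "__Secure-1PSID": "xxxxxxx",
    "__Secure-1PSIDTS": "xxxxxxx",
    "__Secure-1PSIDCC": "xxxxxxx",
}

bard = BardCookies(cookie_dict=cookie_dict)
print(bard.get_answer("What is the weather like today?")['content'])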
Simple Usage
from bardapi import Bard
token = 'xxxxxxx'
bard = Bard(token=token)
bard.get_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")['content']
Or you can use it like this:
from bardapi import Bard
import os
os.environ['_BARD_API_KEY'] = "xxxxxxx"
Bard().get_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")['content']
To get the response dictionary:
import bardapi
# set your __Secure-1PSID value to key
token = 'xxxxxxx'
# set your input text
input_text = "나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘"
# Send an API request and get a response.
response = bardapi.core.Bard(token).get_answer(input_text)
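The call above returns a dictionary rather than plain text. The keys listed below are typical of recent package versions and are an assumption rather than a fixed schema; inspect the returned object in your environment to confirm.
# Typical keys (may vary by version): 'content', 'conversation_id',
# 'response_id', 'choices', 'links', 'images'
print(response['content'])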
To address errors caused by delayed responses in environments like Google Colab and containers, use the timeout argument if an error occurs despite following the proper procedure.
from bardapi import Bard
import os
os.environ['_BARD_API_KEY'] = "xxxxxxx"
bard = Bard(timeout=30) # Set timeout in seconds
bard.get_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")['content']
If you are working behind a proxy, use the following.
from bardapi import Bard
# Change 'http://proxy.example.com:8080' to your http proxy
# timeout in seconds
proxies = {
'http': 'http://proxy.example.com:8080',
'https': 'https://proxy.example.com:8080'
}
bard = Bard(token='xxxxxxx', proxies=proxies, timeout=30)
bard.get_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")['content']
If you want to avoid blocked requests and bans, use Smart Proxy by Crawlbase. It forwards your connection requests to a randomly rotating IP address in a pool of proxies before reaching the target website. The combination of AI and ML makes it more effective at avoiding CAPTCHAs and blocks.
from bardapi import Bard
# Get your proxy url at crawlbase https://crawlbase.com/docs/smart-proxy/get/
proxy_url = "http://xxxxxxxxxxxxxx:@smartproxy.crawlbase.com:8012"
proxies = {"http": proxy_url, "https": proxy_url}
bard = Bard(token='xxxxxxx', proxies=proxies, timeout=30)
bard.get_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")['content']
You can continue the conversation using a reusable session. However, this feature is limited, and it is difficult for a package-level feature to perfectly maintain conversation_id and context. You can try to maintain conversational consistency the same way as with other LLM services, for example by storing a summary of past conversations in a database and passing it along with each prompt; a rough sketch follows the code below.
from bardapi import Bard
import requests
# import os
# os.environ['_BARD_API_KEY'] = 'xxxxxxx'
token = 'xxxxxxx'
session = requests.Session()
session.headers = {
"Host": "gemini.google.com",
"X-Same-Domain": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
"Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
"Origin": "https://gemini.google.com",
"Referer": "https://gemini.google.com/",
}
# session.cookies.set("__Secure-1PSID", os.getenv("_BARD_API_KEY"))
session.cookies.set("__Secure-1PSID", token)
bard = Bard(token=token, session=session, timeout=30)
bard.get_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")['content']
# Continue the conversation without setting a new session
bard.get_answer("What is my last prompt??")['content']
Async Bard Code
from httpx import AsyncClient
from bardapi import BardAsync
import asyncio
import os
# Uncomment and set your API key as needed
# os.environ['_BARD_API_KEY'] = 'xxxxxxx'
token = 'xxxxxxx' # Replace with your actual token
SESSION_HEADERS = {
"Host": "gemini.google.com",
"X-Same-Domain": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
"Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
"Origin": "https://gemini.google.com",
"Referer": "https://gemini.google.com/",
}
timeout = 30 # Example timeout
proxies = {} # Replace with your proxies if needed
client = AsyncClient(
http2=True,
headers=SESSION_HEADERS,
cookies={"__Secure-1PSID": token},
timeout=timeout,
proxies=proxies,
)
bard_async = BardAsync(token=token, client=client)
# Asynchronous function to get the answer
async def get_bard_answer(question):
await bard_async.async_setup() # Ensure async setup is done
return await bard_async.get_answer(question)
# Top-level await only works in notebooks; in a plain script, run via asyncio
response = asyncio.run(get_bard_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘"))
print(response['content'])
Using browser_cookie3, we extract the __Secure-1PSID cookie from all browsers, so the API can be used without passing the token. However, there are still incomplete dependency packages and various variables, so please seek assistance in the related GitHub issues or adjust your browser version.
- Visit https://gemini.google.com/ in your browser and execute the following command while in the chat-enabled state. Refer to browser_cookie3 for details on how it works. If any issues arise, restart the browser or log in to your Google account again. It is recommended to keep the browser open.
from bardapi import Bard
bard = Bard(token_from_browser=True)
response = bard.get_answer("Do you like cookies?")
print(response['content'])
As an experimental feature, it is possible to ask questions with an image. It may not work for everyone, since it is limited to certain accounts and regions and subject to other restrictions; this functionality is only available for accounts with image upload capability in Bard's web UI.
from bardapi import Bard
bard = Bard(token='xxxxxxx')
image = open('image.jpg', 'rb').read() # (jpeg, png, webp) are supported.
bard_answer = bard.ask_about_image('What is in the image?', image)
print(bard_answer['content'])
Text-to-Speech (TTS) from Bard
Business users and high-traffic usage may be subject to account restrictions according to Google's policies. Please use the official Google Cloud API for any other purpose. The user is solely responsible for all code, and it is imperative to consult Google's official services and policies. Furthermore, the code in this repository is provided under the MIT license, which disclaims any liability, including express or implied legal responsibilities.
from bardapi import Bard
bard = Bard(token='xxxxxxx')
audio = bard.speech('Hello, I am Bard! How can I help you today?')
with open("speech.ogg", "wb") as f:
f.write(bytes(audio['audio']))
Starting from version 0.1.18, the GitHub version of BardAPI will be synchronized with the PyPI version and released simultaneously. However, the version undergoing QA can still be used from the GitHub repository.
$ pip install git+https://github.com/dsdanielpark/Bard-API.git
- Multi-cookie Bard
- Auto Cookie Bard
- TTS from Bard
- Multi-language Bard API
- Get image links
- ChatBard
- Export Conversation
- Export Code to Repl.it
- Executing Python code received as a response from Bard
- Using Bard Asynchronously
- Bard Cookies
- Fix Conversation ID (Fix Context)
- Max_token, Max_sentences
- Translation to another programming language
Amazing Bard Prompts Is All You Need!
- Helpful prompts for Google Bard
If you want to comfortably use open-source LLM models released under the Apache License (allowing free commercial use) in your native language, you can try the hf-transllm package. hf-transllm also supports multilingual inference for various LLMs stored in Hugging Face repositories.
Example code of hf-transllm
In case the Google package is no longer available due to policy restrictions, here is a simple code example for using open-source language models (LLMs) in both English and other languages.
You can easily use the decoder models provided by Hugging Face, either by following a simple approach or by overriding the inference method. You can explore various open-source language models at this link. Through the ranking information in the Open LLM Leaderboard Report repository, you can find information about good models.
from transllm import LLMtranslator
open_llama3b_kor = LLMtranslator('openlm-research/open_llama_3b', target_lang='ko', translator='google') # Korean
translated_answer = open_llama3b_kor.generate("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")
print(translated_answer)
Refer to https://github.com/openlm-research/open_llama or use it like this:
from transllm import LLMtranslator
open_llama3b = LLMtranslator('openlm-research/open_llama_3b')
answer = open_llama3b.generate("Tell me about the Korean girl group Newjeans.")
print(answer)
What is Google Gemini?
Gemini, formerly known as Bard, is an advanced multimodal AI model by Google DeepMind, capable of understanding and integrating various types of information, such as text, code, audio, images, and video.
- Paper: https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf
- Web: https://blog.google/technology/ai/google-gemini-ai/#capabilities
- Code Guide: https://ai.google.dev/tutorials/python_quickstart
- Official API On Google AI Studio.
Google AI Studio creates a new Google Cloud project for each new API key. You also can create an API key in an existing Google Cloud project. All projects are subject to the Google Cloud Platform Terms of Service.
- Web: https://makersuite.google.com/app/apikey
- Note: The Gemini API is currently in public preview. Production applications are not supported yet.
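For reference, a minimal sketch using the official google-generativeai package is shown below; the model name 'gemini-pro' and its availability are assumptions that depend on your API key, region, and the state of the public preview.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# 'gemini-pro' is an assumed model name available during the public preview.
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Who are you?")
print(response.text)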
The Bard API, sourcing responses from the official Gemini (formerly Bard) website, allows you to receive the same responses as the website. So, if Gemini answers are available on the web, you can also access Gemini through the Bard API. However, it's important to note that responses might also come from other models, not exclusively Gemini Pro or Ultra.
- There is no official Bard API or early access/waiting list for Gemini, although the PaLM2 API is available.
- Google's PaLM2 API differs from Bard, with some aspects of Bard being superior.
- It's speculated that after expert review, Bard Advanced lineup will likely provide an official API in 2024.
- Gemini and previous generative AI model responses are provided randomly on Bard Web.
- The Bard API, with its imperfect extension features (e.g., ask_about_image), occasionally demonstrates Gemini's capabilities. This behavior may vary by region, language, or Google account.
- More information in the FAQ.
For more on Gemini:
- Official API
- Introducing Gemini: our largest and most capable AI model
- How it's made: multimodal prompting
- YouTube Demo
Try demo at https://makersuite.google.com/app/prompts/new_text.
who are you?
>> I am powered by PaLM 2, which stands for Pathways Language Model 2, a large language model from Google AI.
Google Generative AI
- Official Page: https://blog.google/technology/ai/google-palm-2-ai-large-language-model/
- GitHub: https://github.com/GoogleCloudPlatform/generative-ai
- Try Demo: https://makersuite.google.com/app/prompts/new_text.
- Official Library: https://makersuite.google.com/app/library
- Get API Key: https://makersuite.google.com/app/apikey
- Quick Start Tutorial: https://developers.generativeai.google/tutorials/text_quickstart
$ pip install -q google-generativeai
import pprint
import google.generativeai as palm
palm.configure(api_key='YOUR_API_KEY')
models = [m for m in palm.list_models() if 'generateText' in m.supported_generation_methods]
model = models[0].name
print(model)
prompt = "Who are you?"
completion = palm.generate_text(
model=model,
prompt=prompt,
temperature=0,
# The maximum length of the response
max_output_tokens=800,
)
print(completion.result)
Use data scraping to train your AI models.
- Easy to use API to crawl and scrape millions of websites
- Use crawlbase for efficient data extraction for your LLMs
- Average success rate: 98%
- Uptime guarantee: 99.9%
- Simple docs to get started in minutes
- Asynchronous Crawling API if you need massive amounts of data
- GDPR and CCPA compliant
Used by 70k+ developers.
Please check the FAQ and open issues for similar questions before creating a new issue. Repeated questions will be kept as open issues. Too many requests can trigger a temporary account block (HTTP 429). Maintain proper intervals, using functions like sleep, to avoid rate limits. Policies may vary by country and language, so any user could face temporary or permanent errors when using the API.
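As noted above, keeping an interval between requests helps avoid HTTP 429 blocks. The sketch below uses a fixed 30-second pause, which is an arbitrary illustrative value rather than a documented limit; adjust it to your own observed behavior.
import time
from bardapi import Bard

bard = Bard(token='xxxxxxx')
prompts = ["First question", "Second question", "Third question"]

for prompt in prompts:
    print(bard.get_answer(prompt)['content'])
    time.sleep(30)  # arbitrary pause; tune to your own rate limits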
In the scripts folder, I have released a script to help you compare OpenAI-ChatGPT, Microsoft-EdgeGPT, and Google-Bard. I hope it will help more developers.
We would like to express our sincere gratitude to all the contributors.
MIT
We hold no legal responsibility; for more information, please refer to the bottom of the README file. We would simply appreciate a star for this project and the referenced repositories. This project is a personal initiative and is not affiliated with or endorsed by Google. It is recommended to use Google's official API.
The MIT License (MIT)
Copyright (c) 2023 Minwoo Park
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Bard's service status and Google's API interfaces are in constant flux. The number of replies is currently limited, but certain users, such as those utilizing VPNs or proxy servers, have reported slightly higher message caps. Adaptability is crucial in navigating these dynamic service policies. Please note that the cookie values used in this package are not official API values.
Sincerely grateful for any reports on new features or bugs. Your valuable feedback on the code is highly appreciated.
- Core maintainer:
[1] https://github.com/acheong08/Bard
Warning - Important Notice: The user assumes all legal responsibility associated with using the BardAPI package. This Python package merely facilitates easy access to Google Bard for developers. Users are solely responsible for managing data and using the package appropriately. For further information, please consult the Google Bard Official Document.
Warning - Caution: This Python package is not an official Google package or API service. It is not affiliated with Google and uses Google account cookies, which means that excessive or commercial usage may result in restrictions on your Google account. The package was created to support developers in testing functionalities due to delays in the official Google package. However, it should not be misused or abused. Please be cautious and refer to the README for more information.
Copyright (c) 2023 MinWoo Park, South Korea