Example code for building applications with LangChain, with an emphasis on applied, end-to-end examples beyond those in the main documentation.
Notebook | Description |
---|---|
agent_fireworks_ai_langchain_mongodb.ipynb | Build an AI agent with memory using MongoDB, LangChain, and Fireworks AI. |
mongodb-langchain-cache-memory.ipynb | Build a RAG application with a semantic cache using MongoDB and LangChain. |
LLaMA2_sql_chat.ipynb | Build a chat application that interacts with a SQL database using an open-source LLM (Llama 2), demonstrated on an SQLite database containing rosters. |
Semi_Structured_RAG.ipynb | Perform retrieval-augmented generation (RAG) on documents with semi-structured data, including text and tables, using Unstructured for parsing, a multi-vector retriever for storage, and LCEL for implementing chains. |
Semi_structured_and_multi_moda... | Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using Unstructured for parsing, a multi-vector retriever for storage and retrieval, and LCEL for implementing chains. |
Semi_structured_multi_modal_RA... | Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using Unstructured for parsing, a multi-vector retriever for storage, LCEL for implementing chains, and open-source models such as Llama 2, LLaVA, and GPT4All. |
amazon_personalize_how_to.ipynb | Retrieve personalized recommendations from Amazon Personalize and use custom agents to build generative AI apps. |
analyze_document.ipynb | Analyze a single long document. |
autogpt/autogpt.ipynb | Implement AutoGPT, an autonomous agent, with LangChain primitives such as LLMs, PromptTemplates, VectorStores, embeddings, and tools. |
autogpt/marathon_times.ipynb | Implement AutoGPT to find winning marathon times. |
baby_agi.ipynb | Implement BabyAGI, an AI agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vector stores and model providers. |
baby_agi_with_agent.ipynb | Swap out the execution chain in the BabyAGI notebook with an agent that has access to tools, aiming to obtain more reliable information. |
camel_role_playing.ipynb | Implement the CAMEL framework for creating autonomous cooperative agents with large-scale language models, using role-playing and inception prompting to guide chat agents toward task completion. |
causal_program_aided_language_... | Implement the causal program-aided language (CPAL) chain, which improves on program-aided language (PAL) by incorporating causal structure to prevent hallucination, particularly for complex narratives and math problems with nested dependencies. |
code-analysis-deeplake.ipynb | Analyze its own code base with the help of GPT and Activeloop's Deep Lake. |
custom_agent_with_plugin_retri... | Build a custom agent that can interact with AI plugins by retrieving tools and creating natural-language wrappers around OpenAPI endpoints. |
custom_agent_with_plugin_retri... | Build a custom agent with plugin retrieval functionality, utilizing AI plugins from the plugnplai directory. |
databricks_sql_db.ipynb | Connect to Databricks runtimes and Databricks SQL. |
deeplake_semantic_search_over_... | Perform semantic search and question answering over a group chat using Activeloop's Deep Lake with GPT-4. |
elasticsearch_db_qa.ipynb | Interact with Elasticsearch analytics databases in natural language and build search queries via the Elasticsearch DSL API. |
extraction_openai_tools.ipynb | Structured Data Extraction with OpenAI Tools |
forward_looking_retrieval_augm... | Implement the forward-looking active retrieval augmented generation (FLARE) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on those tokens, and retrieves relevant documents to continue generating the answer. |
generative_agents_interactive_... | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a LangChain retriever. |
gymnasium_agent_simulation.ipynb | Create a simple agent-environment interaction loop in simulated environments like text-based games with Gymnasium. |
hugginggpt.ipynb | Implement HuggingGPT, a system that connects language models like ChatGPT with the machine learning community via Hugging Face. |
hypothetical_document_embeddin... | Improve document indexing with hypothetical document embeddings (HyDE), an embedding technique that generates and embeds hypothetical answers to queries. |
learned_prompt_optimization.ipynb | Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences. |
llm_bash.ipynb | Perform simple filesystem commands using large language models (LLMs) and a bash process. |
llm_checker.ipynb | Create a self-checking chain using LLMCheckerChain. |
llm_math.ipynb | Solve complex word math problems using language models and Python REPLs. |
llm_summarization_checker.ipynb | Check the accuracy of text summaries, with the option to run the checker multiple times for improved results. |
llm_symbolic_math.ipynb | Solve algebraic equations with the help of LLMs (large language models) and SymPy, a Python library for symbolic mathematics. |
meta_prompt.ipynb | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly. |
multi_modal_output_agent.ipynb | Generate multi-modal outputs, specifically images and text. |
multi_modal_RAG_vdms.ipynb | Perform retrieval-augmented generation (RAG) on documents including text and images, using Unstructured for parsing, Intel's Visual Data Management System (VDMS) as the vector store, and chains. |
multi_player_dnd.ipynb | Simulate multi-player Dungeons & Dragons games, with a custom function determining the speaking schedule of the agents. |
multiagent_authoritarian.ipynb | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network. |
multiagent_bidding.ipynb | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example. |
myscale_vector_sql.ipynb | Access and interact with the MyScale integrated vector database, which can enhance the performance of large language model (LLM) applications. |
openai_functions_retrieval_qa.... | Structure response output in a question-answering system by incorporating OpenAI functions into a retrieval pipeline. |
openai_v1_cookbook.ipynb | Explore new functionality released alongside the V1 release of the OpenAI Python library. |
petting_zoo.ipynb | Create multi-agent simulations with simulated environments using the petting zoo library. |
plan_and_execute_agent.ipynb | Create plan-and-execute agents that accomplish objectives by planning tasks with a large language model (LLM) and executing them with a separate agent. |
press_releases.ipynb | Retrieve and query company press release data powered by Kay.ai. |
program_aided_language_model.i... | Implement program-aided language models as described in the provided research paper. |
qa_citations.ipynb | Different ways to get a model to cite its sources. |
rag_upstage_layout_analysis_groundedness_check.ipynb | End-to-end RAG example using Upstage Layout Analysis and Groundedness Check. |
retrieval_in_sql.ipynb | Perform retrieval-augmented generation (RAG) on a PostgreSQL database using pgvector. |
sales_agent_with_context.ipynb | Implement SalesGPT, a context-aware AI sales agent that can hold natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings. |
self_query_hotel_search.ipynb | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset. |
smart_llm.ipynb | Implement SmartLLMChain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output. |
tree_of_thought.ipynb | Query a large language model using the tree-of-thought technique. |
twitter-the-algorithm-analysis... | Analyze the source code of the Twitter algorithm with the help of GPT-4 and Activeloop's Deep Lake. |
two_agent_debate_tools.ipynb | Simulate multi-agent dialogues where the agents can utilize various tools. |
two_player_dnd.ipynb | Simulate a two-player Dungeons & Dragons game, where a dialogue simulator class coordinates the dialogue between the protagonist and the dungeon master. |
wikibase_agent.ipynb | Create a simple Wikibase agent that utilizes SPARQL generation, with testing done on http://wikidata.org. |
oracleai_demo.ipynb | Build an end-to-end RAG pipeline with Oracle AI Vector Search and LangChain, with step-by-step examples: load documents from various sources using OracleDocLoader, summarize them inside or outside the database with OracleSummary, generate embeddings with OracleEmbeddings, chunk documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally store and index the documents in a vector store for querying with OracleVS. |
rag-locally-on-intel-cpu.ipynb | Perform retrieval-augmented generation (RAG) with locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. Includes an example of applying RAG to a Llama 2 model so it can answer queries about Intel's Q1 2024 earnings release. |
visual_RAG_vdms.ipynb | Perform visual retrieval-augmented generation (RAG) using videos and scene descriptions generated by open-source models. |
contextual_rag.ipynb | Perform contextual retrieval-augmented generation (RAG) by prepending chunk-specific explanatory context to each chunk before embedding. |
rag-agents-locally-on-intel-cpu.ipynb | Build a RAG agent locally with open-source models that routes questions through one of two paths: the agent generates answers from documents retrieved from either a vector database or web search, opting for web search when the vector database lacks relevant information. Open-source LLM and embedding models run locally on an Intel Xeon CPU. |
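Many of the notebooks above share the same retrieve-then-generate (RAG) skeleton: fetch the documents most relevant to a query, stuff them into a prompt, and hand the prompt to a model. A minimal, library-free sketch of that flow (the word-overlap retriever and `fake_llm` below are toy stand-ins for illustration, not LangChain APIs):

```python
# Toy sketch of the RAG flow shared by many notebooks above.
# Retrieval is plain word-overlap scoring, and fake_llm just echoes
# the context it was handed; a real app would use an embedding-based
# vector store and an actual model call instead.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a hosted or local LLM)."""
    return "Answer based on: " + prompt.split("Context:\n", 1)[1]

def rag(query: str, docs: list[str]) -> str:
    # Retrieve, augment the prompt with the retrieved context, then generate.
    context = "\n".join(retrieve(query, docs))
    prompt = f"Use the context to answer.\nQuestion: {query}\nContext:\n{context}"
    return fake_llm(prompt)

docs = [
    "LangChain composes chains with LCEL.",
    "MongoDB can serve as a semantic cache.",
    "Deep Lake stores embeddings for code search.",
]
print(rag("How does LangChain compose chains?", docs))
```

The individual notebooks vary each stage of this loop: the parser (Unstructured), the store (MongoDB, VDMS, Deep Lake, pgvector), the retriever (multi-vector, self-query), and the model (OpenAI, Llama 2, GPT4All).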