Follow these steps to set up and run Ollama and your AutoGPT project:

**Run Ollama**

- Open a terminal.
- Execute the following command: `ollama run llama3`
- Leave this terminal running so the model stays available. (An optional check that the local Ollama server is up follows this list.)
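
Before moving on, you can optionally confirm that the local Ollama server is reachable. This is a minimal sketch, assuming Ollama is running on the same machine on its default port (11434):

```bash
# List the models the local Ollama server has available.
# Ollama's HTTP API listens on http://localhost:11434 by default.
curl http://localhost:11434/api/tags
```

If the JSON response lists `llama3`, Ollama is ready.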

**Run the Backend**

- Open a new terminal.
- Navigate to the backend directory in the AutoGPT project: `cd autogpt_platform/backend/`
- Start the backend using Poetry: `poetry run app` (a first-run setup sketch follows this list).
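
If this is a fresh clone, the backend's dependencies may not be installed yet. A minimal first-run sequence, assuming Poetry itself is already installed, would look like this:

```bash
cd autogpt_platform/backend/

# Install the backend's dependencies (first run only).
poetry install

# Start the backend inside the Poetry-managed environment.
poetry run app
```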

**Run the Frontend**

- Open another terminal.
- Navigate to the frontend directory in the AutoGPT project: `cd autogpt_platform/frontend/`
- Start the frontend development server: `npm run dev` (see the first-run note below).
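
As with the backend, the frontend's packages need to be installed the first time. A typical first-run sequence is sketched below; the exact local URL the dev server prints (often http://localhost:3000) depends on the frontend's configuration:

```bash
cd autogpt_platform/frontend/

# Install the frontend's dependencies (first run only).
npm install

# Start the development server; it prints the local URL to open in your browser.
npm run dev
```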

**Choose the Ollama Model**

- In the AutoGPT UI, add an `LLMBlock` to your agent.
- In the block's model selection dropdown, choose the Ollama model, which appears as the last option in the list. (A quick API sanity check follows this list.)
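
If the block fails to produce output, a quick way to rule out the Ollama side is to send a test prompt directly to its API. This sketch assumes the default local address and the `llama3` model pulled earlier:

```bash
# Send a single, non-streaming test prompt to the local Ollama server.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Reply with one short sentence.",
  "stream": false
}'
```

A JSON reply containing a `response` field means Ollama itself is working, and any remaining issue lies in the AutoGPT setup.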