Installation#
Choose the installation method that best fits your needs and environment.
Development Installation#
Recommended for: Contributing to the project or customizing the code.
Clone the repository and set up a development environment:
git clone https://github.com/jeipollack/euclid_rag
cd euclid_rag
python3 -m venv .venv
source .venv/bin/activate
make init
This setup includes:
Virtual environment for isolated dependencies
Development dependencies for testing and building
Editable installation so changes are immediately available
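If make is unavailable on your system, the steps it performs can usually be reproduced by hand. A minimal sketch, assuming the project declares its development dependencies in a "dev" extra (the Makefile is the authoritative recipe):

# Rough manual equivalent of `make init`; the "dev" extra name is an
# assumption — check the Makefile for the exact steps.
pip install --upgrade pip
pip install -e ".[dev]"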
Docker Installation#
Recommended for: Production deployment and containerized environments.
Clone the repository and start the stack with Docker Compose:
git clone https://github.com/jeipollack/euclid_rag
cd euclid_rag
docker compose up --build
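For long-running deployments you will typically want the stack in the background; Docker Compose supports this with the detached flag:

# Build and start the services in the background, then follow the logs.
docker compose up --build -d
docker compose logs -f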
Docker Features#
This setup includes:
Parallelized build stages for faster image builds
Optimized image size for efficient deployment
Separate Ollama container managed by Docker Compose
Dynamic package versioning for correct version tracking
Isolated services with automatic orchestration
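You can confirm that Compose started every service it orchestrates:

# Show the status of all services defined in the compose file.
docker compose ps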
Setting up the LLM Model#
After the containers are running, you need to pull the desired model:
docker exec -it euclid_rag-ollama-1 ollama pull mistral:latest
Note
The model is not downloaded automatically; it must be pulled explicitly
with the docker exec command after the containers have started.
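Once the pull completes, you can confirm that the model is available inside the Ollama container:

# List the models known to this Ollama instance.
docker exec -it euclid_rag-ollama-1 ollama list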
Verification#
Test your installation by importing the package:
import euclid.rag.chatbot
print("euclid_rag installed successfully!")
Next Steps#
After installation, proceed to:
Configuration - Set up your system configuration
Document Ingestion - Ingest documents into the vector store
Running the Chatbot - Run the chatbot interface