
Configuration

Every setting is an environment variable. The repo ships an .env.example file — copy it to .env and fill in what you need.

Required

| Variable | Description |
| --- | --- |
| `TRAKT_CLIENT_ID` | Trakt OAuth app client id |
| `TRAKT_CLIENT_SECRET` | Trakt OAuth app client secret |
| `TMDB_API_KEY` | TMDB v3 API key |
| `RECOMBEE_DATABASE_ID` | Recombee database id |
| `RECOMBEE_PRIVATE_TOKEN` | Recombee private token |
| `FERNET_KEY` | 32-byte base64 key used to encrypt Trakt tokens at rest |
| `SECRET_KEY` | Signing key for session cookies |
| `BASE_URL` | Public URL, no trailing slash (e.g. `https://reclio.example.com`) |
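
For example, a minimal `.env` covering only the required settings might look like the sketch below. Every value is a placeholder, not a working credential; a Fernet-compatible key can be generated with e.g. `python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"`.

```bash
# cp .env.example .env   # start from the shipped template, then edit

TRAKT_CLIENT_ID=your-trakt-client-id
TRAKT_CLIENT_SECRET=your-trakt-client-secret
TMDB_API_KEY=your-tmdb-v3-api-key
RECOMBEE_DATABASE_ID=your-recombee-database-id
RECOMBEE_PRIVATE_TOKEN=your-recombee-private-token
FERNET_KEY=base64-encoded-32-byte-key        # encrypts Trakt tokens at rest
SECRET_KEY=any-long-random-string            # signs session cookies
BASE_URL=https://reclio.example.com          # no trailing slash
```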

Optional

| Variable | Default | Description |
| --- | --- | --- |
| `RECOMBEE_REGION` | `us-west` | `us-west` · `eu-west` · `ap-se` |
| `ADMIN_TOKEN` | (blank = disabled) | Enables `/admin/*` endpoints when set |
| `PORT` | `8000` | Internal app port |
| `DATABASE_URL` | `sqlite+aiosqlite:///./data/db/reclio.db` | SQLAlchemy URL |
| `CHROMA_PERSIST_DIR` | `./data/chroma` | ChromaDB storage path |
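
As a sketch, overriding a few of the optional settings could look like this; the token value below is purely illustrative:

```bash
RECOMBEE_REGION=eu-west          # us-west · eu-west · ap-se
ADMIN_TOKEN=long-random-string   # set to enable the /admin/* endpoints
PORT=8080                        # defaults to 8000
```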

LLM provider (chat)

Reclio uses an LLM for row titles, the personality blurb, the Ask Reclio chat, and the conversational preference flow. Pick one provider, or set `LLM_PROVIDER=none` and the LLM-driven features fall back gracefully (plain f-strings for titles, a "chat offline" state).

| Variable | Default | Description |
| --- | --- | --- |
| `LLM_PROVIDER` | `ollama` | `ollama` · `claude` · `openai` · `openrouter` · `none` |
| `OLLAMA_BASE_URL` | `http://ollama:11434` | Ollama HTTP endpoint |
| `OLLAMA_MODEL` | `llama3.2:3b` | Ollama chat model |
| `ANTHROPIC_API_KEY` | | Required when `LLM_PROVIDER=claude` |
| `CLAUDE_MODEL` | `claude-haiku-4-5` | Anthropic model name |
| `OPENAI_API_KEY` | | Required when `LLM_PROVIDER=openai` or when `EMBEDDING_PROVIDER=openai` |
| `OPENAI_MODEL` | `gpt-4o-mini` | OpenAI chat model |
| `OPENROUTER_API_KEY` | | Required when `LLM_PROVIDER=openrouter` |
| `OPENROUTER_MODEL` | `anthropic/claude-3.5-haiku` | OpenRouter model in `vendor/model` form |
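
As an illustration, the default local setup and a hosted alternative might look like this; the API key is a placeholder:

```bash
# Default: local Ollama (no API key required)
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_MODEL=llama3.2:3b

# Alternative: Anthropic-hosted chat model
# LLM_PROVIDER=claude
# ANTHROPIC_API_KEY=your-anthropic-api-key
# CLAUDE_MODEL=claude-haiku-4-5
```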

See Model integrations for guidance on picking a provider and the workloads each one actually serves.

Embedding provider (vector similarity)

This is independent of the chat LLM. The default, `auto`, follows `LLM_PROVIDER` with sensible per-provider mappings (Ollama → `nomic-embed-text`, OpenAI → `text-embedding-3-small`, Claude/OpenRouter → local sentence-transformers MiniLM, `none` → null). Override it to mix and match, e.g. chat on Claude with embeddings on OpenAI for the higher-quality 1536-dimensional vectors.

| Variable | Default | Description |
| --- | --- | --- |
| `EMBEDDING_PROVIDER` | `auto` | `auto` · `openai` · `ollama` · `local` · `none` |
| `OLLAMA_EMBEDDING_MODEL` | `nomic-embed-text` | Ollama embedding model name |
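
For instance, the mix-and-match setup described above (chat on Claude, embeddings on OpenAI) might look like this, with placeholder keys:

```bash
LLM_PROVIDER=claude
ANTHROPIC_API_KEY=your-anthropic-api-key
EMBEDDING_PROVIDER=openai
OPENAI_API_KEY=your-openai-api-key   # required when EMBEDDING_PROVIDER=openai
```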

See Embeddings for the full quality/cost/footprint comparison.

Adaptive sync

Each user's taste profile re-syncs based on how active they are. The defaults work well; override if you want tighter or looser intervals.

| Variable | Default | Description |
| --- | --- | --- |
| `USER_SYNC_DEFAULT_INTERVAL_HOURS` | `8` | Baseline per-user cadence |
| `USER_SYNC_HOT_INTERVAL_HOURS` | `4` | Cadence for "hot" users (frequent `/feeds` hits) |
| `USER_SYNC_COLD_INTERVAL_HOURS` | `24` | Cadence for "cold" users (rare `/feeds` hits) |
| `USER_SYNC_HOT_THRESHOLD_PER_WEEK` | `14` | ≥ this many hits in 7d → hot |
| `USER_SYNC_COLD_THRESHOLD_PER_WEEK` | `3` | ≤ this many hits in 7d → cold |
| `USER_SYNC_SWEEP_INTERVAL_HOURS` | `1` | How often the scheduler checks for stale users |
| `CONTENT_SYNC_INTERVAL_HOURS` | `24` | Global TMDB catalog refresh |
| `TOKEN_REFRESH_INTERVAL_HOURS` | `6` | Trakt OAuth token refresh |
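
For example, a sketch that tightens the cadence for active users while letting inactive ones drift longer (values purely illustrative):

```bash
USER_SYNC_HOT_INTERVAL_HOURS=2       # re-sync very active users every 2h
USER_SYNC_COLD_INTERVAL_HOURS=48     # let inactive users wait up to 2 days
USER_SYNC_HOT_THRESHOLD_PER_WEEK=10  # fewer /feeds hits needed to count as "hot"
```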

In v1.5 the user-sync sweep also runs an hourly health check on every external dependency (DB, Trakt, TMDB, Recombee, LLM). It's silent on the happy path and logs WARNING with full diagnostic detail on any degradation.

See Adaptive sync for the full sync-cadence model and Watch-state machine for how Reclio learns from incomplete watches.