GitStarRecall uses environment variables to configure GitHub OAuth, embedding performance, and optional features. Variables prefixed with VITE_ are embedded in the client bundle and accessible in the browser; unprefixed variables are read only by the backend.

GitHub OAuth

Configure GitHub authentication for accessing starred repositories.
VITE_GITHUB_CLIENT_ID
string
required
GitHub OAuth app client ID. Obtained from your GitHub OAuth app settings. This value is embedded in the client bundle and used to initiate the OAuth flow.
GITHUB_OAUTH_CLIENT_ID
string
required
GitHub OAuth app client ID for server-side exchange. Used by the backend to verify OAuth requests. Keep this in the server environment only.
GITHUB_OAUTH_CLIENT_SECRET
string
required
GitHub OAuth app client secret. Never expose this value in the client bundle. Backend-only variable for OAuth token exchange.
VITE_GITHUB_REDIRECT_URI
string
required
OAuth callback URL that GitHub redirects to after authorization. Must exactly match one of the callback URLs configured in your GitHub OAuth app. Example: http://localhost:5173/auth/callback
GITHUB_OAUTH_REDIRECT_URI
string
required
Server-side OAuth callback URL for token exchange verification. Must match VITE_GITHUB_REDIRECT_URI exactly. Used by the backend to verify OAuth requests. Example: http://localhost:5173/auth/callback
VITE_GITHUB_OAUTH_EXCHANGE_URL
string
default:"/api/github/oauth/exchange"
Backend endpoint that exchanges the OAuth code + PKCE verifier for an access token. This keeps the client secret out of the browser and ensures secure token exchange.

Embedding Backend

Control which compute backend is used for generating embeddings.
VITE_EMBEDDING_BACKEND_PREFERRED
string
default:"webgpu"
Embedding backend preference with automatic fallback. Options:
  • webgpu - Try the GPU backend first, falling back to WASM if unavailable
  • wasm - Force CPU/WASM only (kill switch for GPU issues)
The app automatically falls back to WASM if WebGPU is unavailable or fails.
WebGPU provides significant performance improvements on modern devices but may not be available in all browsers. The WASM fallback ensures compatibility across all platforms.
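The selection logic can be sketched as follows (illustrative names, not the app's actual implementation):

```typescript
type EmbeddingBackend = "webgpu" | "wasm";

// Resolves the backend from VITE_EMBEDDING_BACKEND_PREFERRED.
// "wasm" acts as a kill switch; anything else tries WebGPU first and
// falls back to WASM when the WebGPU API is absent.
function resolveBackend(preferred: string | undefined): EmbeddingBackend {
  if (preferred === "wasm") return "wasm";
  const hasWebGpu = typeof navigator !== "undefined" && "gpu" in navigator;
  return hasWebGpu ? "webgpu" : "wasm";
}
```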

Embedding Performance Tuning

Fine-tune embedding generation throughput and resource usage. See Performance Tuning for detailed guidance.
VITE_EMBEDDING_POOL_SIZE
number
default:"1"
Number of parallel embedding workers. Clamped to 1..2. Higher values increase throughput but consume more memory. Recommendation: Use 1 for devices with limited RAM, 2 for modern laptops/desktops.
VITE_EMBEDDING_WORKER_BATCH_SIZE
number
default:"12"
Number of texts processed in each worker batch. Clamped to 1..32. Higher values improve throughput but increase memory pressure. Recommendation: 8-16 for most devices. The system adaptively downshifts on memory pressure.
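The clamping behavior of both variables can be sketched with a generic helper (illustrative, not the app's actual code):

```typescript
// Parses an env value and clamps it into [min, max], with a fallback for
// missing or non-numeric input, e.g. clampInt(raw, 1, 2, 1) for the pool
// size and clampInt(raw, 1, 32, 12) for the worker batch size.
function clampInt(
  raw: string | undefined,
  min: number,
  max: number,
  fallback: number,
): number {
  const n = Number.parseInt(raw ?? "", 10);
  if (Number.isNaN(n)) return fallback;
  return Math.min(max, Math.max(min, n));
}
```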
VITE_EMBEDDING_DB_WRITE_BATCH_SIZE
number
default:"512"
Number of embeddings buffered before writing to SQLite. Higher values reduce write frequency but widen the crash-loss window. Recommendation: Keep at 512 unless you experience write bottlenecks.
VITE_EMBEDDING_UI_UPDATE_MS
number
default:"350"
Throttle interval (milliseconds) for UI progress updates during indexing. Prevents main-thread pressure from excessive UI refreshes. Recommendation: 300-500 ms balances responsiveness and performance.
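The throttle can be sketched like this (illustrative; the injectable clock is only there for testability):

```typescript
// Returns a function that invokes `emit` at most once per `intervalMs`,
// matching the VITE_EMBEDDING_UI_UPDATE_MS semantics described above.
function makeThrottled(intervalMs: number, now: () => number = Date.now) {
  let last = -Infinity;
  return (emit: () => void): boolean => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      emit();
      return true; // update was emitted
    }
    return false; // suppressed by the throttle
  };
}
```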
VITE_EMBEDDING_LARGE_LIBRARY_MODE
number
default:"1"
Enable optimizations for large starred-repository libraries. Options:
  • 1 - Enable large library mode (recommended)
  • 0 - Disable optimizations
When enabled, uses priority ordering, resumable cursors, and adaptive batching.
VITE_EMBEDDING_LARGE_LIBRARY_THRESHOLD
number
default:"500"
Minimum number of repositories to trigger large library optimizations. Only applies when VITE_EMBEDDING_LARGE_LIBRARY_MODE=1.

README Processing

Configure README fetching and chunking behavior.
VITE_README_BATCH_PIPELINE_V2
number
default:"0"
Enable the experimental README batch pipeline v2. Options:
  • 1 - Enable v2 pipeline with adaptive concurrency and cooldown
  • 0 - Use standard pipeline
V2 improves rate-limit resilience and avoids aggressive retry bursts.
VITE_README_BATCH_SIZE
number
default:"40"
Number of READMEs to fetch in parallel. Higher values speed up initial sync but may trigger rate limits. Recommendation: 20-40 balances speed and API courtesy.
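Batched parallel fetching in this spirit can be sketched with a bounded-concurrency runner (illustrative, not the app's actual pipeline):

```typescript
// Processes items in sequential batches of `batchSize`; items within a
// batch run in parallel. `batchSize` would come from VITE_README_BATCH_SIZE.
async function runBatched<T, R>(
  items: T[],
  batchSize: number,
  work: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(work))));
  }
  return results;
}
```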
VITE_EMBED_TRIGGER_THRESHOLD
number
default:"256"
Minimum number of pending chunks before triggering an embedding batch. Prevents frequent small batches during initial sync.
VITE_EMBED_WINDOW_SIZE
number
default:"512"
Maximum chunk size (characters) for text chunking. READMEs are split into overlapping chunks of this size for embedding. Recommendation: Keep at 512 to balance context and retrieval granularity.
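A minimal sliding-window chunker in this spirit might look like the following; the 64-character overlap is an assumption, not a documented value:

```typescript
// Splits text into overlapping windows of `windowSize` characters.
// Consecutive chunks share `overlap` characters of context.
function chunkText(text: string, windowSize = 512, overlap = 64): string[] {
  const chunks: string[] = [];
  const step = Math.max(1, windowSize - overlap);
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + windowSize));
    if (start + windowSize >= text.length) break; // tail reached
  }
  return chunks;
}
```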

Ollama Integration (Optional)

Configure local Ollama embedding and LLM support. Ollama usage is controlled by an in-app user toggle.
Ollama endpoints are restricted to localhost addresses (localhost, 127.0.0.1, [::1]) for security. Non-local endpoints are rejected.
VITE_OLLAMA_BASE_URL
string
default:"http://localhost:11434"
Ollama server base URL. Security gate: must be a localhost/loopback address. Remote endpoints are blocked.
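The localhost gate described above can be sketched as follows (illustrative, not the app's actual validator):

```typescript
// Accepts only the loopback hostnames listed in the docs; note that the
// WHATWG URL parser keeps the brackets on IPv6 hostnames ("[::1]").
function isLoopbackUrl(raw: string): boolean {
  try {
    const { hostname } = new URL(raw);
    return (
      hostname === "localhost" ||
      hostname === "127.0.0.1" ||
      hostname === "[::1]"
    );
  } catch {
    return false; // unparseable URLs are rejected
  }
}
```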
VITE_OLLAMA_MODEL
string
default:"nomic-embed-text"
Default Ollama model for embeddings. Used when Ollama embedding is enabled via the in-app toggle. Popular models:
  • nomic-embed-text - Fast, high-quality embeddings
  • all-minilm - Lightweight alternative
VITE_OLLAMA_TIMEOUT_MS
number
default:"30000"
Request timeout (milliseconds) for Ollama API calls. Recommendation: 20000-60000 depending on model size and hardware.
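A timeout like this is commonly enforced by racing the request against a timer; a generic sketch (not the app's actual code):

```typescript
// Rejects with a timeout error if the wrapped promise does not settle
// within `ms` milliseconds (e.g. ms = VITE_OLLAMA_TIMEOUT_MS).
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms,
    );
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}
```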

WebLLM Browser Provider (Experimental)

Run LLM inference directly in the browser using WebGPU.
VITE_WEBLLM_ENABLED
number
default:"0"
Enable the browser-based WebLLM provider (feature flag). Options:
  • 1 - Enable WebLLM with explicit user consent before download
  • 0 - Disable WebLLM feature
When enabled, users can download and run models like Llama 3.2 1B directly in the browser.
WebLLM requires WebGPU support and downloads multi-gigabyte models. Users must explicitly consent before any download begins. The app recommends appropriate models based on device capabilities.

LLM API Key Encryption (Optional)

Protect stored LLM API keys with client-side encryption.
VITE_LLM_SETTINGS_ENCRYPTION_KEY
string
Encryption key for storing LLM API keys in localStorage. Format: 32-byte hex or base64 string. Security model:
  • If set: API keys are encrypted (AES-GCM) before being written to localStorage
  • If unset: API keys are not persisted across sessions
This key is embedded in the client bundle. It protects against casual exfiltration from localStorage, but not against attackers with access to the app source.
Recommendation: Leave unset for maximum security (keys are held in memory only), or use a strong random key if persistence is required.

Example Configuration

.env.local
# GitHub OAuth (required)
VITE_GITHUB_CLIENT_ID=Iv1.abc123def456
GITHUB_OAUTH_CLIENT_ID=Iv1.abc123def456
GITHUB_OAUTH_CLIENT_SECRET=your_client_secret_here
VITE_GITHUB_REDIRECT_URI=http://localhost:5173/auth/callback

# Embedding performance (recommended for 1000+ stars)
VITE_EMBEDDING_POOL_SIZE=2
VITE_EMBEDDING_WORKER_BATCH_SIZE=16
VITE_EMBEDDING_LARGE_LIBRARY_MODE=1

# Ollama local provider (optional)
VITE_OLLAMA_BASE_URL=http://localhost:11434
VITE_OLLAMA_MODEL=nomic-embed-text

# WebLLM browser provider (experimental)
VITE_WEBLLM_ENABLED=1

Next Steps