## Authentication Issues

### PAT 401 / Authentication Errors

**Symptoms:**

- “401 Unauthorized” when fetching stars
- “Bad credentials” error message
- Token validation fails

1. **Use the raw token only**
   - Paste the token directly, without a `Bearer` prefix
   - No quotes, no extra whitespace
2. **Verify token scopes**
   - Required: `public_repo` or `repo` (for private stars)
   - Optional: `read:user` (for user profile)
   - Check scopes at: https://github.com/settings/tokens
3. **Check token expiration**
   - Classic tokens can expire
   - Fine-grained tokens have explicit expiration dates
   - Generate a new token if expired
4. **Verify repository access**
   - Fine-grained tokens require explicit repository access
   - Classic tokens with the `repo` scope have full access
   - For private repos, ensure the token has appropriate permissions
5. **Test the token manually**
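The original command for step 5 is missing from this copy; a straightforward check is to call the GitHub API directly (`YOUR_TOKEN` is a placeholder for your PAT):

```bash
# 200 plus your profile JSON means the token is valid; 401 means bad credentials
curl -H "Authorization: Bearer YOUR_TOKEN" https://api.github.com/user

# For classic tokens, the X-OAuth-Scopes response header lists granted scopes
curl -sI -H "Authorization: Bearer YOUR_TOKEN" https://api.github.com/user | grep -i x-oauth-scopes
```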
### OAuth Callback 404 Error

**Symptoms:**

- After GitHub login, redirected to a 404 page
- URL is correct but the page doesn’t load
- “Cannot GET /auth/callback” error

1. **Verify the callback URL in the GitHub OAuth app**
   - Go to: https://github.com/settings/developers
   - Check “Authorization callback URL”; it must match exactly:
     - Local: `http://localhost:5173/auth/callback`
     - Production: `https://yourdomain.com/auth/callback`
2. **Check environment variables**
   - Both must match the GitHub app callback URL
   - Protocol (http/https) must match
   - Port must match (if applicable)
3. **Ensure SPA fallback routing (production)**
   - Vercel: check that `vercel.json` has an SPA rewrite
   - Netlify: check `_redirects` or `netlify.toml`
   - Other hosts: configure the server to serve `index.html` for all routes
4. **Clear the browser cache**
   - OAuth can cache redirects
   - Try an incognito/private window
   - Clear site data in DevTools
### OAuth Exchange Fails

**Symptoms:**

- “Failed to exchange code for token” error
- OAuth callback succeeds but no token is received
- Network error during the exchange

1. **Check server-side environment variables**
   - Client ID must match the GitHub OAuth app
   - Client secret must be correct (never expose it in the frontend)
   - Redirect URI must match exactly
2. **Verify the exchange endpoint is deployed**
   - Check that `/api/github/oauth/exchange` is accessible
   - Test it manually: it should return 400 or 401, not 404
3. **Check CORS configuration**
   - The exchange endpoint must allow requests from your domain
   - Vercel/Netlify serverless functions handle CORS automatically
   - Custom backends may need CORS headers
4. **Review server logs**
   - Check for errors in the OAuth exchange function
   - Look for GitHub API errors
   - Verify the client secret is being read correctly
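The manual test of the exchange endpoint can be a bare POST. The endpoint path is from this guide; the request body shape is an assumption:

```bash
# A 400/401 response means the function is deployed and rejecting an
# invalid code; a 404 means the route is not deployed/configured.
curl -i -X POST https://yourdomain.com/api/github/oauth/exchange \
  -H "Content-Type: application/json" \
  -d '{"code":"dummy"}'
```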
## Indexing and Search Issues

### Slow Embedding Performance

**Symptoms:**

- Indexing takes a very long time
- UI freezes during embedding
- Browser tab becomes unresponsive

1. **Reduce the worker pool size**
   - Default is 2 workers
   - Use 1 worker on memory-constrained systems
   - Reduces parallel load
2. **Adjust the batch size**
   - Default is adaptive (8-32, target 16)
   - Lower values reduce memory spikes
   - Try 8 or 12 for slower devices
3. **Switch to the WASM backend**
   - Default is `webgpu` (faster but less stable)
   - WASM is slower but more compatible
   - Use it if WebGPU causes crashes
4. **Enable large-library mode manually**
   - Auto-enables at 500 repos
   - Lower the threshold for slower machines
   - Prioritizes high-value repos first
5. **Check indexing telemetry in the UI**
   - Backend selection reason
   - Throughput (chunks/sec)
   - Checkpoint frequency
   - Queue depth
   - Worker pool downshift events
### No Search Results

**Symptoms:**

- Search returns zero results
- You know matching repos exist
- Indexing completed successfully

1. **Verify embeddings exist**
   - Check the embedding count in the UI
   - It should be > 0 after indexing completes
   - If it is 0, indexing may have failed silently
2. **Check the chunk count**
   - Chunks should exist for indexed repos
   - If chunks = 0, README fetching may have failed
   - Try “Fetch Stars” again
3. **Verify the query embedding**
   - The query must be embedded before search
   - Check the browser console for embedding errors
   - Try a shorter or simpler query
4. **Check for a model mismatch**
   - If you switched embedding models, old embeddings are incompatible
   - Solution: “Delete local data” and re-index
5. **Verify the vector index cache**
   - The cache rebuilds automatically when the embedding count changes
   - If stale, try refreshing the page
   - Check the browser console for cache errors
6. **Try an exact repository name**
   - Test search with a known repo name
   - If the exact name works but semantic search doesn’t, check embedding quality
### Indexing Interrupted / Resume Not Working

**Symptoms:**

- Closed the browser during indexing
- Indexing restarts from the beginning
- Resume cursor not saved

1. **Check the storage mode**
   - Resume only works with OPFS or localStorage
   - Memory-only mode loses resume state on close
   - Verify the storage mode in Settings
2. **Verify the checkpoint policy**
   - Checkpoints save the resume cursor
   - Default: every 256 embeddings or 3000ms
   - If the browser is closed between checkpoints, some progress is lost
3. **Check the `index_meta` table**
   - The resume cursor is stored under the `large_library_cursor` key
   - Query the table to confirm the cursor exists
   - If it is missing, resume is not enabled
4. **Enable large-library mode**
   - Resume requires large-library mode
   - Auto-enables at 500 repos
   - It can also be enabled manually
5. **Force a checkpoint before closing**
   - Wait for the checkpoint indicator in the UI
   - Don’t close during an active embedding batch
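The exact query for step 3 depends on the app’s schema; assuming `index_meta` is a key/value table in the embedded SQL database, it would look roughly like:

```sql
-- Returns the saved resume cursor, or no rows if resume was never enabled
SELECT value FROM index_meta WHERE key = 'large_library_cursor';
```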
### README Fetch Failures

**Symptoms:**

- Many repos show “README missing”
- GitHub API rate limit errors
- Partial indexing results

1. **Check the GitHub API rate limit**
   - Authenticated: 5,000 requests/hour
   - Unauthenticated: 60 requests/hour
   - Check your remaining quota via the GitHub API
2. **Wait for the rate limit to reset**
   - The rate limit resets every hour
   - The app automatically retries with backoff
   - Indexing resumes when the limit resets
3. **Enable the batched README pipeline**
   - Improves throughput with adaptive concurrency
   - Better rate limit handling
4. **Adjust the batch size**
   - Default is 40
   - A lower value means more conservative API usage
5. **Check for deleted/private repos**
   - Repos may have been deleted since you starred them
   - Private repos require appropriate token scopes
   - The app skips these and continues
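Remaining quota (step 1) can be read from GitHub’s `/rate_limit` endpoint, which does not itself count against the limit (`YOUR_TOKEN` is a placeholder):

```bash
# The "core" object shows remaining requests and the reset time (epoch seconds)
curl -s -H "Authorization: Bearer YOUR_TOKEN" https://api.github.com/rate_limit
```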
## Storage and Data Issues

### localStorage Quota Exceeded

**Symptoms:**

- “QuotaExceededError” in the console
- The app switches to memory-only mode
- Data lost on refresh

1. **Check whether OPFS is available**
   - OPFS has a much larger quota than localStorage
   - Browser compatibility:
     - Chrome 102+ ✅
     - Firefox 111+ ✅
     - Safari 15.2+ ✅
   - Update your browser if it is outdated
2. **Clear old localStorage data**
   - Open DevTools → Application → Local Storage
   - Remove old GitStarRecall data
   - Refresh the page
3. **Reduce the database size**
   - “Delete local data” in Settings
   - Re-index with fewer stars
   - Consider un-starring unused repos
4. **Request persistent storage**
   - Increases the quota in some browsers
   - The user must grant permission
5. **Use incremental sync**
   - Don’t clear data before re-indexing
   - Use “Fetch Stars” to update only changed repos
   - Preserves existing embeddings
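Step 4 maps to the standard Storage API (`navigator.storage.persist()`); a guarded sketch that is a no-op outside supporting browsers:

```javascript
// Request persistent storage so the browser avoids evicting site data
// under storage pressure. Resolves to false when the API is unavailable
// (non-browser environments, older browsers) or the request is denied.
async function requestPersistentStorage() {
  if (typeof navigator === "undefined" || !navigator.storage?.persist) {
    return false; // Storage API not available; nothing to do
  }
  // true if the browser granted (or had already granted) persistence
  return navigator.storage.persist();
}
```

Browsers typically tie the grant to site engagement or an explicit permission prompt, so a `false` result is normal and not an error.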
### /app Refresh 404 in Production

**Symptoms:**

- `/app` loads initially but 404s on refresh
- Direct navigation to `/app` fails
- Other routes also 404

1. **Configure SPA fallback (Vercel)**
   - Create or update `vercel.json` with an SPA rewrite
2. **Configure SPA fallback (Netlify)**
   - Create `_redirects` in `public/`, or use `netlify.toml`
3. **Configure SPA fallback (Nginx)**
   - Use `try_files` to fall back to `index.html`
4. **Configure SPA fallback (Apache)**
   - Create an `.htaccess` that rewrites to `index.html`
5. **Verify the build output**
   - Check that the `dist/` folder has `index.html`
   - Ensure assets are at the correct paths
   - Test locally with `pnpm preview`
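The configuration snippets referenced above are missing from this copy; the following are the standard SPA-fallback patterns for each host (they assume the app is served from the site root):

`vercel.json`:

```json
{ "rewrites": [{ "source": "/(.*)", "destination": "/index.html" }] }
```

`public/_redirects` (Netlify):

```
/*  /index.html  200
```

Nginx:

```nginx
location / {
  try_files $uri $uri/ /index.html;
}
```

`.htaccess` (Apache, requires `mod_rewrite`):

```apacheconf
RewriteEngine On
RewriteBase /
RewriteRule ^index\.html$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.html [L]
```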
### Data Lost After Browser Update

**Symptoms:**

- All repos and chats disappeared
- Storage mode changed
- Everything had to be re-indexed

1. **Check the chat backup**
   - Chat data is backed up to IndexedDB
   - It should survive most updates
   - Load the backup to restore sessions
2. **Request persistent storage**
   - Prevents the browser from clearing data
   - Must be done before data loss occurs
   - The user grants permission via a prompt
3. **Export the database periodically**
   - Future feature: manual export/import
   - Current workaround: the IndexedDB chat backup
4. **Check browser data-clearing settings**
   - Some browsers clear data on exit
   - Some clear it after inactivity
   - Disable automatic clearing for important sites
5. **Use multiple browsers**
   - Index in different browsers
   - Acts as a manual backup
   - Not ideal, but it works in an emergency
### Chat Sessions Not Persisting

**Symptoms:**

- Chat history lost on refresh
- Sessions disappear
- Can’t resume conversations

1. **Check the storage mode**
   - Memory-only mode doesn’t persist chats
   - OPFS or localStorage is required
   - See “localStorage Quota Exceeded” above
2. **Verify the chat backup is working**
   - Check IndexedDB in DevTools
   - Database: `gitstarrecall-chat-backup`
   - It should have `chat_sessions` and `chat_messages` stores
3. **Check the chat table schema**
   - The app auto-heals corrupt chat tables
   - Check the browser console for migration errors
   - If the problem persists, try “Delete local data” and re-index
4. **Verify foreign key constraints**
   - Messages require a valid session ID
   - Orphaned messages are deleted
   - Check the console for FK errors
5. **Check the localStorage fallback**
   - If IndexedDB fails, the app uses localStorage
   - Keys: `gitstarrecall.chat.backup.sessions.v1` and `gitstarrecall.chat.backup.messages.v1`
   - Check them in DevTools → Application → Local Storage
## LLM and Provider Issues

### WebLLM Suggests Wrong Model

**Symptoms:**

- A strong desktop was suggested the 360M model
- A mobile device was suggested the 1B model
- The recommendation doesn’t match the device

1. **Check the recommendation diagnostics**
   - Shown in the chat provider settings
   - Fields:
     - `reason`: why the model was chosen
     - `webgpu`: WebGPU availability
     - `cores`: CPU core count
     - `mem`: device memory (GB)
     - `perf`: performance probe result
2. **Safari/macOS memory detection**
   - `navigator.deviceMemory` is not available in Safari
   - Missing memory is treated as neutral, not weak
   - This is correct behavior
3. **Manual model selection**
   - Override the recommendation in settings
   - Available models:
     - Llama-3.2-1B (recommended for desktop)
     - SmolLM2-360M (recommended for mobile)
     - Qwen2.5-1.5B
     - Gemma-2-2B
     - Hermes-3-Llama-3-3B
     - Llama-3.1-3B
4. **Performance probe timeout**
   - A slow probe leads to a weak-device classification
   - This may not reflect actual GPU capability
   - Try manual selection if the device is misclassified
5. **WebGPU unavailable**
   - Falls back to the 360M model automatically
   - WASM on CPU is slow for 1B models
   - Correct behavior for compatibility
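For orientation, the diagnostics from step 1 might look like this (field names are from the list above; the values and their exact types are illustrative):

```json
{
  "reason": "webgpu-available, perf-probe-ok",
  "webgpu": true,
  "cores": 8,
  "mem": 16,
  "perf": "fast"
}
```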
### Ollama Connection Failed

**Symptoms:**

- “Ollama not available” error
- The connection test fails
- Embeddings or chat don’t work

1. **Verify Ollama is running**
2. **Enable CORS**
   - Ollama blocks browser requests by default
   - CORS must be configured before starting the server
3. **Check the endpoint URL**
   - Default: `http://localhost:11434`
   - Must be localhost/127.0.0.1/[::1]
   - The app rejects non-local endpoints for security
4. **Pull the required model**
   - The model must be downloaded before use
   - Check available models with `ollama list`
5. **Test the connection manually**
6. **Check the firewall**
   - Localhost should bypass the firewall
   - If using Docker, ensure the port mapping is correct
   - Try `127.0.0.1` instead of `localhost`
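The commands for steps 1, 2, and 5 are missing from this copy; a plausible reconstruction using Ollama’s real CLI and HTTP API follows (`OLLAMA_ORIGINS` is Ollama’s CORS allow-list variable; the origin and model name are examples):

```bash
# Step 1: is the server up? A JSON list of models means yes.
curl http://localhost:11434/api/tags

# Step 2: allow browser requests from your origin, then restart the server
OLLAMA_ORIGINS="https://yourdomain.com" ollama serve

# Step 5: a minimal end-to-end test against a pulled model
curl http://localhost:11434/api/generate \
  -d '{"model":"llama3.2","prompt":"hi","stream":false}'
```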
### Remote LLM API Errors

**Symptoms:**

- “API key invalid” errors
- 401/403 from the LLM provider
- Requests timing out

1. **Verify the API key**
   - Check that the key starts with the correct prefix:
     - OpenAI: `sk-`
     - Anthropic: `sk-ant-`
   - No extra whitespace
   - Key not expired
2. **Check provider status**
   - OpenAI: https://status.openai.com
   - Anthropic: https://status.anthropic.com
   - Outages affect all users
3. **Verify rate limits**
   - Free tiers have low limits
   - Check the provider dashboard for usage
   - Wait if the limit is exceeded
4. **Check CORS (browser requests)**
   - Direct browser-to-API requests may fail CORS
   - Use a proxy/backend if the provider doesn’t allow CORS
   - The app uses a backend for OAuth and can be extended for LLM calls
5. **Inspect the network tab**
   - Open DevTools → Network
   - Find the failed request
   - Check the request headers and response
   - Look for a specific error message
6. **Test the API key manually**
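Both providers expose a lightweight models endpoint that makes a good key check (`YOUR_KEY` is a placeholder):

```bash
# OpenAI: 200 with a model list means the key is valid; 401 means it is not
curl -s -H "Authorization: Bearer YOUR_KEY" https://api.openai.com/v1/models

# Anthropic uses a different auth header and requires a version header
curl -s -H "x-api-key: YOUR_KEY" \
     -H "anthropic-version: 2023-06-01" \
     https://api.anthropic.com/v1/models
```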
## Performance Issues

### Browser Tab Freezing

**Symptoms:**

- The tab becomes unresponsive
- The UI freezes during indexing
- Can’t click buttons or scroll

1. **Reduce the worker pool size**
2. **Lower the batch sizes**
3. **Increase the UI update interval**
   - Default: 300ms
   - A higher value means less frequent UI updates and less main-thread work
4. **Switch to the WASM backend**
   - WebGPU can block the main thread on some drivers
   - WASM is more stable
5. **Close other tabs**
   - Reduces overall browser memory usage
   - Dedicates resources to GitStarRecall
6. **Use a smaller model (future)**
   - Current: `all-MiniLM-L6-v2` (384 dims)
   - Future: an option for even smaller models
### High Memory Usage

**Symptoms:**

- The browser uses 2GB+ of RAM
- The system becomes slow
- Out-of-memory crashes

1. **Reduce the worker pool**
   - Each worker loads the full model
   - 2 workers = 2x the model memory
   - Use 1 worker to halve memory usage
2. **Lower the batch sizes**
   - Smaller batches mean less in-flight data
   - Try 8 or 12 chunks per batch
3. **Increase the checkpoint frequency**
   - Flushes data to disk more often
   - Reduces in-memory accumulation
4. **Index in smaller batches**
   - Don’t index all 1k stars at once
   - Clear data and re-fetch periodically
   - Not ideal, but it reduces peak memory
5. **Close other applications**
   - Frees up system RAM
   - Close unused browser tabs
6. **Upgrade device RAM**
   - 1k+ stars with embeddings needs roughly 2-4GB
   - Consider a hardware upgrade if the issue persists
## Related Documentation

- Architecture - System design and components
- Data Storage - Storage implementation details
- Usage Guide - `source/docs/Usage.md`