Chat sessions let you have interactive conversations about your search results. Ask questions, get clarifications, and explore repositories through natural dialogue.

Overview

Each search creates a persistent chat session that maintains context across multiple questions and answers.

Session Lifecycle

1. Search Creation

Performing a semantic search automatically creates a new chat session.

2. Context Loading

Top search results (up to 8) are loaded as context for the LLM.

3. Interactive Chat

Ask questions and receive responses grounded in your search results.

4. Persistence

All messages are saved locally and can be restored from history.
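The lifecycle above can be sketched in code. This is an illustrative model only; the `ChatSession` shape and the `createSessionFromSearch` helper are assumptions, not the actual API:

```typescript
// Illustrative model of the search → session lifecycle described above.
interface ChatSession {
  id: string;
  query: string;
  contextSnippets: string[];
  messages: { role: "user" | "assistant" | "system"; content: string }[];
}

// Hypothetical helper: performing a search creates a session and loads
// the top 8 results as LLM context.
function createSessionFromSearch(query: string, results: string[]): ChatSession {
  return {
    id: `session-${Date.now()}`,
    query,
    contextSnippets: results.slice(0, 8), // top 8 results become context
    messages: [], // filled in as the interactive chat proceeds
  };
}
```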

Session Storage

Chat sessions are stored in SQLite with automatic backups:
Session Schema
CREATE TABLE chat_sessions (
  id TEXT NOT NULL PRIMARY KEY,
  query TEXT NOT NULL,
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL
);

CREATE TABLE chat_messages (
  id TEXT NOT NULL PRIMARY KEY,
  session_id TEXT NOT NULL,
  role TEXT NOT NULL CHECK (role IN ('user','assistant','system')),
  content TEXT NOT NULL,
  sequence INTEGER NOT NULL,
  created_at INTEGER NOT NULL,
  FOREIGN KEY (session_id) REFERENCES chat_sessions(id) ON DELETE CASCADE
);
Sessions are backed up to localStorage for recovery after browser data clearing.

Message Ordering

Messages are ordered using a combination of created_at timestamp and sequence number:
Message Sorting
function sortChatMessages(messages: ChatMessageRecord[]): ChatMessageRecord[] {
  // Copy before sorting so the caller's array is not mutated in place.
  return [...messages].sort((a, b) => {
    const timeDiff = a.createdAt - b.createdAt;
    if (timeDiff !== 0) return timeDiff;
    // Tie-break on sequence when two messages share a timestamp.
    return a.sequence - b.sequence;
  });
}

Context Management

Chat responses are grounded in search results to prevent hallucination:

Context Construction

Building Context
function buildContextBlock(snippets: string[]): string {
  return snippets
    .slice(0, 8)  // Top 8 results
    .map((snippet, index) => `Context ${index + 1}:\n${snippet}`)
    .join("\n\n");
}

function buildMessages(prompt: string, snippets: string[]): Message[] {
  return [
    {
      role: "system",
      content: "You are a recommendation assistant for GitHub starred repositories. Use only provided context and be concise."
    },
    {
      role: "user",
      content: `${prompt}\n\n${buildContextBlock(snippets)}`
    }
  ];
}
LLMs can only reference repositories from your current search results. Perform a new search to change the context.

Message Types

Chat sessions support three message roles:

User

Questions and prompts from you
  • Displayed with right alignment
  • Plain text rendering
  • Submitted via Enter key or send button

Assistant

Responses from the LLM
  • Displayed with left alignment
  • Markdown rendering with syntax highlighting
  • Streamed token-by-token as they’re generated

System

Internal context instructions (not displayed)
  • Contains repository context
  • Defines assistant behavior
  • Not shown in the UI

Streaming Responses

Assistant responses stream in real-time as the LLM generates tokens:
Streaming Implementation
async function stream(
  config: LLMProviderConfig,
  request: LLMStreamRequest
): Promise<void> {
  const response = await fetch(`${config.baseUrl}/v1/chat/completions`, {
    method: "POST",
    signal: request.signal,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${config.apiKey}`
    },
    body: JSON.stringify({
      model: config.model,
      stream: true,
      messages: buildMessages(request.prompt, request.contextSnippets)
    })
  });

  await parseSseStream(response, request.onToken);
}
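The `parseSseStream` helper above is not shown in full. A minimal sketch, assuming the standard OpenAI-style SSE framing (`data: {...}` lines terminated by a `data: [DONE]` sentinel) and the usual chat-completions delta payload; this is not the actual implementation:

```typescript
// Minimal SSE parser sketch for OpenAI-style chat completion streams.
// Assumes each event is a `data: {...}` line and the stream ends with
// `data: [DONE]`.
async function parseSseStream(
  response: Response,
  onToken: (token: string) => void
): Promise<void> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Process only complete lines; keep any trailing partial line.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";

    for (const line of lines) {
      const trimmed = line.trim();
      if (!trimmed.startsWith("data:")) continue;
      const payload = trimmed.slice(5).trim();
      if (payload === "[DONE]") return; // end-of-stream sentinel
      const token = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (token) onToken(token); // deliver each generated token
    }
  }
}
```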

Cancellation

Streaming can be cancelled mid-generation:
  • Click the Cancel button
  • Uses AbortController to terminate the request
  • Partial response is preserved in the session
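The cancellation flow can be sketched as follows. The `startStream` name and the simulated token loop are illustrative; in the real flow the same `controller.signal` is the `request.signal` passed to `fetch`, so aborting also terminates the HTTP request:

```typescript
// Sketch of cancellable streaming with AbortController (names illustrative).
function startStream(onToken: (token: string) => void) {
  const controller = new AbortController();

  const done = (async () => {
    const tokens = ["Hello", ", ", "world"]; // simulated LLM output
    for (const t of tokens) {
      if (controller.signal.aborted) return; // stop mid-generation
      onToken(t); // tokens delivered so far form the preserved partial response
      await new Promise((resolve) => setTimeout(resolve, 10)); // token latency
    }
  })();

  // The Cancel button is wired to controller.abort().
  return { cancel: () => controller.abort(), done };
}
```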

Session History

Access previous chat sessions from the history panel:

Session List

All sessions, ordered by most recently updated. Shows:
  • Original search query
  • Last updated timestamp
  • Number of messages

Session Restore

Click any session to restore it. Restores:
  • Search results
  • All messages
  • Active filters

Backup & Recovery

Automatic Backup
// Method on the chat store (excerpt)
async upsertChatSession(session: ChatSessionRecord): Promise<void> {
  // Save to SQLite
  this.runChatSessionUpsert(session);
  
  // Backup to localStorage
  await backupChatSession({
    id: session.id,
    query: session.query,
    createdAt: session.createdAt,
    updatedAt: session.updatedAt
  });
  
  await this.persist();
}
Chat sessions survive browser refresh and tab closure. Clearing browser data triggers recovery from the localStorage backup.
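The recovery path can be sketched as follows. The backup key name and record shape are assumptions, not the actual implementation:

```typescript
// Recovery sketch: if the SQLite store comes up empty, session metadata
// is restored from the localStorage backup written by the upsert above.
const BACKUP_KEY = "chat-session-backup"; // assumed key name

interface SessionBackup {
  id: string;
  query: string;
  createdAt: number;
  updatedAt: number;
}

// Narrow interface so the sketch works with localStorage or any stand-in.
type BackupStore = { getItem(key: string): string | null };

function restoreSessionsFromBackup(storage: BackupStore): SessionBackup[] {
  const raw = storage.getItem(BACKUP_KEY);
  if (!raw) return []; // nothing backed up yet
  try {
    return JSON.parse(raw) as SessionBackup[];
  } catch {
    return []; // corrupt backup: start fresh rather than crash
  }
}
```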

Best Practices

Effective Prompting

1

Reference Results

“Which of these repositories is best for X?”

Forces the LLM to compare repositories in the current context.
2

Ask for Specifics

“What authentication methods does this library support?”

Directs the LLM to extract specific details from README content.
3

Request Comparisons

“Compare the features of the top 3 results”

Leverages multiple context chunks for comprehensive answers.

Managing Sessions

Clear Context

Start a new search to load different repositories into context

Refine Filters

Adjust filters to change which repositories are in context without losing chat history

Session Cleanup

Delete old sessions from history to free storage space

Export Conversations

Copy useful conversations from the UI (no built-in export yet)
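Until a built-in export exists, a small helper could flatten a conversation to copyable text. This is a hypothetical sketch; the message shape is assumed from the schema above, and system messages are skipped because they are never displayed:

```typescript
// Hypothetical export helper: flattens displayed messages to plain text.
function formatConversation(
  messages: { role: string; content: string }[]
): string {
  return messages
    .filter((m) => m.role !== "system") // system context is not shown in the UI
    .map((m) => `${m.role}: ${m.content}`)
    .join("\n\n");
}
```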

Provider Configuration

Configure which LLM provider to use for chat:
{
  id: "openai-compatible",
  baseUrl: "https://api.openai.com",
  model: "gpt-4o-mini",
  apiKey: "sk-..."
}
  • Requires API key
  • High quality responses
  • Usage costs apply
Provider settings are accessible via the gear icon in the chat composer.
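The configuration object above suggests a type along these lines. This is a sketch inferred from the example; in particular, making `apiKey` optional for local providers is an assumption:

```typescript
// Inferred shape of the provider configuration (sketch, not the actual type).
interface LLMProviderConfig {
  id: string;       // e.g. "openai-compatible"
  baseUrl: string;  // any OpenAI-compatible endpoint
  model: string;
  apiKey?: string;  // assumed optional: local providers may not need one
}

const config: LLMProviderConfig = {
  id: "openai-compatible",
  baseUrl: "https://api.openai.com",
  model: "gpt-4o-mini",
  apiKey: "sk-...",
};
```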

Privacy & Security

Local Storage

All chat sessions are stored locally in SQLite. Never sent to GitStarRecall servers.

Remote Providers

OpenAI and other remote providers receive:
  • Your questions
  • Repository context snippets
Choose local providers for full privacy

Local Providers

Ollama and WebLLM never leave your device. 100% private, no remote network requests.

API Keys

Stored in browser localStorage only. Never transmitted except to the configured provider.