
# AI Suite Overview

An overview of the entire AI Suite: the Unified AI architecture, key features, and canonical models.


AI Suite is Vista's integrated AI system, built on the Unified AI architecture. All AI calls go through the Tauri backend (`unified_ai_*` commands).


```mermaid
flowchart TB
    subgraph Frontend["Frontend Components"]
        Chat[AI Chat Panel]
        SOAP[SOAP Generator]
        Tasks[Task Suggestions]
        PDF[PDF Analyzer]
        Assistants[AI Specialists]
    end
    subgraph Hooks["React Hooks"]
        useChat[useChatHistory]
        useSOAP[useSOAPGeneration]
        useTasks[useAiTaskSuggestions]
        useNotes[useNotesAgent]
    end
    subgraph UnifiedAI["Unified AI Client"]
        Client[UnifiedAIClient.ts]
    end
    subgraph Backend["Tauri Backend"]
        Commands[unified_ai_*<br/>commands]
        Providers[AI Providers<br/>OpenAI, Anthropic]
    end
    Frontend --> Hooks
    Hooks --> Client
    Client --> Commands
    Commands --> Providers
```

| Feature | Description | Command |
| --- | --- | --- |
| SOAP Generation | Auto-generation of SOAP notes | `unified_ai_generate_soap` |
| AI Chat | Interactive chat with the AI | `unified_ai_chat` / `_stream` |
| Task Suggestions | Task suggestions after a visit | `unified_ai_generate_tasks` |
| PDF Analysis | Document analysis | `unified_ai_analyze_document` |
| AI Assistants | AI specialists | Chat with context |

File: `src/services/ai/UnifiedAIClient.ts`

```typescript
// Main entry point for all AI operations
const UnifiedAIClient = {
  // Chat
  chat(message, options),
  chatStream(message, options, onToken, onComplete, onError),
  chatBackground(message, options),
  cancelStream(requestId),
  // SOAP
  generateSOAP(visitContext, templateType, userId),
  // Documents
  analyzeDocument(content, type, prompt),
  // Audio
  transcribe(audioBlob, options),
  synthesize(text, options),
  // Health
  healthCheck(canonicalModel),
  getModelMapping(canonicalModel),
  testProviders(apiKeyOverride),
};
```
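As a minimal illustration of the call shape, the sketch below uses hypothetical `ChatOptions`/`ChatResponse` types and a stub in place of the real client (the actual definitions live in `UnifiedAIClient.ts`):

```typescript
// Hypothetical option/response shapes; the real types live in UnifiedAIClient.ts.
interface ChatOptions {
  canonicalModel?: string;
  temperature?: number;
}

interface ChatResponse {
  content: string;
  model: string;
}

// Stub client standing in for UnifiedAIClient, so the example runs offline.
const client = {
  async chat(message: string, options: ChatOptions = {}): Promise<ChatResponse> {
    return { content: `echo: ${message}`, model: options.canonicalModel ?? 'chat' };
  },
};

async function demo(): Promise<ChatResponse> {
  // Same call shape as UnifiedAIClient.chat(message, options)
  return client.chat('Summarize the last visit', { canonicalModel: 'chat' });
}
```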

AI Suite uses a “canonical models” abstraction to map generic names onto concrete models:

| Canonical | Use Case | Example Provider |
| --- | --- | --- |
| `chat` | General chat | Libraxis AI, OpenAI |
| `ai-suggestions` | Task/SOAP suggestions | Libraxis AI |
| `vision` | Image analysis | Vision LLM |
| `transcription` | Audio STT | Whisper |
| `tts` | Text-to-speech | TTS Provider |
```typescript
// Get the actual model for a canonical name
const model = await UnifiedAIClient.getModelMapping('chat');
// → { provider: 'libraxis', model: 'chat' }
```
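Conceptually, the indirection is a lookup table. In the sketch below only the `chat` entry matches the snippet above; every other provider/model value is an illustrative assumption (the real mapping is resolved in the Tauri backend):

```typescript
type CanonicalModel = 'chat' | 'ai-suggestions' | 'vision' | 'transcription' | 'tts';

interface ModelMapping {
  provider: string;
  model: string;
}

// Illustrative table; all entries except 'chat' are assumed values.
const mappings: Record<CanonicalModel, ModelMapping> = {
  chat: { provider: 'libraxis', model: 'chat' },                    // matches the snippet above
  'ai-suggestions': { provider: 'libraxis', model: 'suggestions' }, // assumed
  vision: { provider: 'openai', model: 'vision' },                  // assumed
  transcription: { provider: 'whisper', model: 'stt' },             // assumed
  tts: { provider: 'tts', model: 'tts' },                           // assumed
};

function getModelMapping(canonical: CanonicalModel): ModelMapping {
  return mappings[canonical];
}
```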

The AI needs context to generate responses:

```typescript
interface AINotesPatientData {
  name: string;
  species: string;
  breed: string;
  age: number;
  weight: number;
  owner: string;
  medicalHistory?: string[];
  allergies?: string[];
}

interface AINotesVisitData {
  visitType: string;
  reasonForVisit: string;
  transcript: string;
  vetName: string;
  previousVisits?: VisitSummary[];
}
```
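One plausible use of these shapes is flattening them into a prompt preamble. The helper below is a sketch, not the backend's actual prompt assembly; the interfaces are re-declared in trimmed form so the example stands alone:

```typescript
// Trimmed re-declarations of the context types above, for self-containment.
interface PatientData {
  name: string;
  species: string;
  breed: string;
  age: number;
  weight: number;
  owner: string;
  allergies?: string[];
}

interface VisitData {
  visitType: string;
  reasonForVisit: string;
  vetName: string;
}

// Flatten the context into a few prompt lines (hypothetical format).
function buildContextPreamble(p: PatientData, v: VisitData): string {
  const allergies = p.allergies?.length
    ? `Allergies: ${p.allergies.join(', ')}`
    : 'No known allergies';
  return [
    `Patient: ${p.name} (${p.species}, ${p.breed}), age ${p.age}, ${p.weight} kg, owner: ${p.owner}`,
    allergies,
    `Visit: ${v.visitType}, reason: ${v.reasonForVisit}, vet: ${v.vetName}`,
  ].join('\n');
}
```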

All AI calls require an active session:

```typescript
// invokeWithSession wraps all AI calls
invokeWithSession('unified_ai_chat', {
  message,
  options,
  session_id, // Auto-injected
});
```
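A sketch of what such a session-injecting wrapper could look like, with `invoke` and `getSessionId` as stand-ins for the real Tauri/Vista functions (assumed names, not the actual implementation):

```typescript
type Invoke = (cmd: string, args: Record<string, unknown>) => Promise<unknown>;

// Returns an invoke that merges session_id into every command payload.
function makeInvokeWithSession(invoke: Invoke, getSessionId: () => string): Invoke {
  return (cmd, args) => invoke(cmd, { ...args, session_id: getSessionId() });
}
```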

```typescript
try {
  const response = await UnifiedAIClient.chat(message);
} catch (error) {
  if (error instanceof AIProviderError) {
    // Provider-specific error (rate limit, invalid key)
  } else if (error instanceof SessionNotReadyError) {
    // Session expired
  } else {
    // Generic error
  }
}
```
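The branches above can be factored into a small classifier. The error classes are assumed from the AI Suite's API and re-declared here so the sketch is self-contained:

```typescript
// Stand-ins for the AI Suite's error classes.
class AIProviderError extends Error {}
class SessionNotReadyError extends Error {}

type AIErrorKind = 'provider' | 'session' | 'generic';

function classifyAIError(error: unknown): AIErrorKind {
  if (error instanceof AIProviderError) return 'provider';     // rate limit, invalid key
  if (error instanceof SessionNotReadyError) return 'session'; // session expired
  return 'generic';
}
```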

Chat supports streaming for a better UX:

```typescript
UnifiedAIClient.chatStream(
  message,
  options,
  (token) => {
    // Append token to response
    appendToResponse(token);
  },
  (response) => {
    // Complete response
    setFinalResponse(response);
  },
  (error) => {
    // Handle error
    showError(error);
  }
);
```
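Under the hood, the `onToken` callback typically appends each token to a buffer until the stream completes. A minimal accumulator sketch (no network involved; tokens are fed in manually):

```typescript
// Accumulates streamed tokens into the partial response text.
function createStreamAccumulator() {
  let buffer = '';
  return {
    onToken(token: string): void {
      buffer += token; // append each streamed token
    },
    current(): string {
      return buffer; // partial response so far
    },
  };
}
```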

The AI Suite has a “memory”, the Context Vault:

```typescript
// Save context snippets
await contextVaultApi.upsert([{
  namespace: 'visit',
  visit_id: visitId,
  label: 'patient_history',
  snippet: patientHistory,
}]);

// Query similar context
const similar = await contextVaultApi.query({
  namespace: 'visit',
  query: currentSymptoms,
  limit: 5,
});
```
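An in-memory sketch of the vault contract. The real store lives in the Rust backend and uses vector similarity; here “similarity” is a naive keyword match, purely to show the upsert/query shape:

```typescript
interface VaultEntry {
  namespace: string;
  visit_id?: string;
  label: string;
  snippet: string;
}

// Toy vault: stores entries and "matches" by keyword overlap, not embeddings.
function createVaultStub() {
  const entries: VaultEntry[] = [];
  return {
    upsert(batch: VaultEntry[]): void {
      entries.push(...batch);
    },
    query(q: { namespace: string; query: string; limit: number }): VaultEntry[] {
      const words = q.query.toLowerCase().split(/\s+/);
      return entries
        .filter((e) => e.namespace === q.namespace)
        .filter((e) => words.some((w) => e.snippet.toLowerCase().includes(w)))
        .slice(0, q.limit);
    },
  };
}
```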

AI features can be toggled on and off:

| Feature | Setting | Default |
| --- | --- | --- |
| SOAP Auto-generate | `ai_soap_enabled` | `true` |
| Task Suggestions | `ai_tasks_enabled` | `true` |
| Chat History | `ai_chat_history` | `true` |
| TTS | `tts_enabled` | `false` |
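A hypothetical helper that falls back to the defaults from the table when a setting is unset (the helper name and settings shape are assumptions for illustration):

```typescript
// Defaults from the feature-flag table above.
const featureDefaults: Record<string, boolean> = {
  ai_soap_enabled: true,
  ai_tasks_enabled: true,
  ai_chat_history: true,
  tts_enabled: false,
};

function isFeatureEnabled(
  settings: Partial<Record<string, boolean>>,
  key: string,
): boolean {
  // Explicit user setting wins; otherwise use the documented default.
  return settings[key] ?? featureDefaults[key] ?? false;
}
```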


With 149 files and 22,600 LOC, AI Suite is the largest module in Vista.

```mermaid
graph TB
    subgraph Frontend["AI Suite Frontend (149 files, 22.6k LOC)"]
        subgraph Engine["Engine Layer"]
            AIChatContext[AIChatContext]
            AIChatProvider[AIChatProvider]
            AssistantHostManager[AssistantHostManager]
        end
        subgraph Hosts["Host Components"]
            FloatingHost[AIFloatingHost<br/>Orb + Panel]
            SystemHost[AISystemHost<br/>Full-page]
            SpecialistHost[AISpecialistHost<br/>Visit context]
        end
        subgraph Hooks["Core Hooks"]
            useStreamingManager[useStreamingManager<br/>977 LOC]
            useUnifiedAI[useUnifiedAI<br/>439 LOC]
            useChatHistory[useChatHistory<br/>318 LOC]
            useChatSending[useChatSending<br/>380 LOC]
        end
        subgraph UI["UI Components"]
            InputArea[InputArea<br/>655 LOC]
            ChatCanvas[AIChatCanvas<br/>1,078 LOC]
            FloatingOrb[AIFloatingOrb<br/>570 LOC]
            MessageList[FloatingMessageList]
        end
        subgraph Agents["AI Agents"]
            NotesAgent[Notes Agent]
            SuggestionsAgent[Suggestions Agent]
            VoiceAgent[Voice Agent]
        end
    end
    subgraph Backend["Rust Backend (40 commands)"]
        UnifiedAI[unified_ai module]
        SOAP[soap_generation.rs<br/>24,961 LOC]
        ChatB[chat/service.rs]
        VectorMemory[vector_memory/]
        Diagnostics[diagnostics.rs<br/>28,669 LOC]
    end
    Engine --> Hosts
    Hooks --> Engine
    UI --> Hooks
    Agents --> Hooks
    Hooks --> UnifiedAI
```

| Category | Commands |
| --- | --- |
| Chat | `unified_ai_chat`, `unified_ai_chat_stream`, `unified_ai_poll_response` |
| SOAP | `start_soap_generation`, `get_soap_generation_status`, `unified_ai_generate_soap` |
| Suggestions | `unified_ai_generate_suggestions`, `unified_ai_generate_tasks`, `unified_ai_generate_differential` |
| Memory | `get_vista_memory`, `save_vista_memory`, `ai_context_vault_*` |
| Sessions | `ai_chat_session_save`, `ai_chat_sessions_list`, `ai_chat_session_load` |
| Diagnostics | `diagnostics_ai_config`, `diagnostics_ai_curl`, `unified_ai_health_check` |

| Component | LOC | Responsibility |
| --- | --- | --- |
| `useStreamingManager` | 977 | Token streaming, event handling |
| `AIChatCanvas` | 1,078 | Message rendering, scroll |
| `InputArea` | 655 | Message input, attachments |
| `AIFloatingOrb` | 570 | Floating AI button |
| `useUnifiedAI` | 439 | Main AI hook |
| `useChatSending` | 380 | Message sending logic |
| `useChatHistory` | 318 | Chat history management |

| Host | Usage | Context |
| --- | --- | --- |
| `AIFloatingHost` | Orb + sliding panel | Global, always available |
| `AISystemHost` | Full-page AI view | Standalone page |
| `AISpecialistHost` | Specialist mode | Visit context, patient data |