Shared LLM Execution

Explorer and companion agents need LLM-backed help for workflows such as completion, commit-message generation, and conflict resolution, but they should not embed provider-specific logic or model-invocation details in every caller.

Shared LLM Execution Boundary

llmAssistant centralizes that responsibility behind a fixed Model Context Protocol (MCP) contract defined in llmAssistant/mcp-config.json. The agent owns request normalization, input validation, and provider-facing execution, so callers consume one stable helper surface instead of reimplementing model orchestration.
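To make the shape of that contract concrete, the fragment below sketches what a tool declaration in a file like llmAssistant/mcp-config.json could look like. It follows the MCP convention of declaring each tool with a name, description, and JSON Schema input; the specific tool name and fields shown are illustrative assumptions, not the actual file contents.

```json
{
  "tools": [
    {
      "name": "commit_message",
      "description": "Generate a commit message from a staged diff (hypothetical tool)",
      "inputSchema": {
        "type": "object",
        "properties": {
          "diff": { "type": "string" }
        },
        "required": ["diff"]
      }
    }
  ]
}
```

Because the contract enumerates every tool and its input schema up front, callers can be validated against it without knowing which provider or model ultimately serves the request.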

Runtime Behavior

All MCP tool invocations execute through llmAssistant/tools/llm_tool.sh and llm_tool.mjs. Explorer and other agents pass structured requests into that boundary and receive helper outputs that fit their local workflows, while provider-specific runtime choices stay inside llmAssistant.
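The request-normalization and validation step described above might look like the following sketch. This is not the actual llm_tool.mjs; the tool names, field names, and default values are assumptions chosen to illustrate the pattern of accepting a structured request at the boundary and rejecting anything outside the declared contract.

```javascript
// Hypothetical normalization layer, in the style of llm_tool.mjs.
// Only tools declared in the MCP contract are accepted.
const ALLOWED_TOOLS = new Set([
  "completion",
  "commit_message",
  "conflict_resolution",
]);

// Validate a raw caller request and return a canonical request object.
// Provider-specific settings are applied here, never by the caller.
function normalizeRequest(raw) {
  if (!ALLOWED_TOOLS.has(raw.tool)) {
    throw new Error(`unknown tool: ${raw.tool}`);
  }
  if (typeof raw.input !== "string" || raw.input.length === 0) {
    throw new Error("input must be a non-empty string");
  }
  return {
    tool: raw.tool,
    input: raw.input,
    maxTokens: raw.maxTokens ?? 512, // hypothetical provider-side default
  };
}

// Example: a caller passes only the fields the contract requires.
const req = normalizeRequest({
  tool: "commit_message",
  input: "diff --git a/x b/x",
});
console.log(JSON.stringify(req));
```

Keeping defaults and validation on this side of the boundary is what lets provider choices change without touching Explorer or any other caller.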