Tool Execution Workflow

Each Model Context Protocol (MCP) invocation is process-isolated and starts from llmAssistant/tools/llm_tool.sh.

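For orientation, an mcp-config.json entry wiring a tool to this entry point might look like the following. The exact key names and the llm_summarize tool name are assumptions for illustration; only the script path and the TOOL_NAME variable come from the flow described below.

```json
{
  "tools": [
    {
      "name": "llm_summarize",
      "command": "llmAssistant/tools/llm_tool.sh",
      "env": { "TOOL_NAME": "llm_summarize" }
    }
  ]
}
```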
End-to-End Flow Diagram

Explorer user interface (UI) / peer agent
    |
    | callTool(name, arguments)
    v
Ploinky Router -> AgentServer (/mcp)
    |
    | execute command from mcp-config.json
    v
llmAssistant/tools/llm_tool.sh
    |
    | launch llm_tool.mjs
    v
llmAssistant/tools/llm_tool.mjs
    |
    | parse envelope and normalize args
    | resolve TOOL_NAME
    | validate required fields
    | build prompt / helper context
    | execute default LLM agent
    v
JSON result -> MCP response -> caller
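The "parse envelope and normalize args" step above can be sketched as follows. This is a minimal sketch, assuming three common envelope shapes; the function name and the exact shapes handled by the real llm_tool.mjs are assumptions, not its actual code.

```javascript
// Hypothetical sketch of envelope normalization in llm_tool.mjs.
// MCP clients deliver tool arguments in several wrappings; the dispatcher
// unwraps them into one plain arguments object.
function normalizeArgs(envelope) {
  if (envelope === null || typeof envelope !== "object") return {};
  // Shape 1: full JSON-RPC envelope { params: { arguments: {...} } }
  if (
    envelope.params &&
    envelope.params.arguments &&
    typeof envelope.params.arguments === "object"
  ) {
    return envelope.params.arguments;
  }
  // Shape 2: pre-unwrapped call { arguments: {...} }
  if (envelope.arguments && typeof envelope.arguments === "object") {
    return envelope.arguments;
  }
  // Shape 3: bare arguments object
  return envelope;
}
```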

Lifecycle Stages

  1. Dispatch: the tool name from the MCP config is mapped to a handler via TOOL_NAME.
  2. Envelope normalization: the dispatcher extracts arguments from the several envelope shapes MCP clients produce.
  3. Input validation: tool-specific required fields are enforced before any LLM call.
  4. Prompt shaping: context is clipped and structured according to tool semantics.
  5. LLM execution: the default Achilles LLM agent executes the prompt in fast mode, returning text responses.
  6. Output normalization: code fences and wrappers are stripped, and the response is returned as the contract payload.
  7. Error response: failures are returned as { "ok": false, "error": ... }.
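Stages 3 and 7 can be sketched together: validation runs before any LLM call, and a failure short-circuits into the { "ok": false, "error": ... } contract. The per-tool field lists here (and the llm_summarize tool name) are assumptions for illustration, not the real configuration.

```javascript
// Hypothetical sketch of required-field validation and the error contract.
// Field lists per tool are illustrative assumptions.
const REQUIRED_FIELDS = {
  llm_summarize: ["text"],
  llm_resolve_conflict: ["ours", "theirs"],
};

function validate(toolName, args) {
  const missing = (REQUIRED_FIELDS[toolName] || []).filter(
    (field) => args[field] === undefined || args[field] === null
  );
  if (missing.length > 0) {
    // Failure contract: { ok: false, error: ... }, returned before any LLM call.
    return { ok: false, error: `missing required fields: ${missing.join(", ")}` };
  }
  return { ok: true };
}
```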

Conflict Resolution Path

llm_resolve_conflict first attempts a deterministic merge using git merge-file. If conflict markers remain, it escalates to LLM resolution with explicit merge-strategy prompts.

This dual path reduces unnecessary model calls and preserves deterministic outcomes when possible.
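The deterministic half of that dual path can be sketched as below: run git merge-file over temporary files and treat surviving conflict markers as the signal to escalate. This is a sketch under assumptions (git on PATH, a three-way merge with a base version, invented helper name); it is not the actual llm_resolve_conflict implementation.

```javascript
// Hypothetical sketch of the deterministic merge attempt.
// Requires the `git` CLI on PATH.
import { spawnSync } from "node:child_process";
import { writeFileSync, readFileSync } from "node:fs";
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function tryDeterministicMerge(ours, base, theirs) {
  const dir = mkdtempSync(join(tmpdir(), "merge-"));
  const oursPath = join(dir, "ours.txt");
  const basePath = join(dir, "base.txt");
  const theirsPath = join(dir, "theirs.txt");
  writeFileSync(oursPath, ours);
  writeFileSync(basePath, base);
  writeFileSync(theirsPath, theirs);
  // git merge-file rewrites the "ours" file in place; a nonzero exit code
  // means conflicts were written into it as <<<<<<< / ======= / >>>>>>> markers.
  spawnSync("git", ["merge-file", oursPath, basePath, theirsPath]);
  const merged = readFileSync(oursPath, "utf8");
  const clean = !merged.includes("<<<<<<<");
  return { clean, merged };
}
```

When clean is false, the caller would forward the conflicted text to the LLM with an explicit merge-strategy prompt; when true, no model call is needed at all.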