Tool Execution Workflow
Each Model Context Protocol (MCP) invocation is process-isolated and starts from llmAssistant/tools/llm_tool.sh.
End-to-End Flow Diagram
Explorer user interface (UI) / peer agent
|
| callTool(name, arguments)
v
Ploinky Router -> AgentServer (/mcp)
|
| execute command from mcp-config.json
v
llmAssistant/tools/llm_tool.sh
|
| launch llm_tool.mjs
v
llmAssistant/tools/llm_tool.mjs
|
| parse envelope and normalize args
| resolve TOOL_NAME
| validate required fields
| build prompt / helper context
| execute default LLM agent
v
JSON result -> MCP response -> caller
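The router-to-script handoff above is driven by mcp-config.json. The document does not show that file's schema, so the entry below is a purely hypothetical sketch: the tool name, field names, and env mapping are all assumptions made for illustration.

```json
{
  "tools": [
    {
      "name": "llm_complete",
      "command": "llmAssistant/tools/llm_tool.sh",
      "env": { "TOOL_NAME": "llm_complete" }
    }
  ]
}
```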
Lifecycle Stages
- Dispatch: the tool name is mapped from the MCP config through TOOL_NAME.
- Envelope normalization: the dispatcher extracts arguments from multiple MCP envelope shapes.
- Input validation: tool-specific required fields are enforced before any LLM call.
- Prompt shaping: context is clipped and structured according to tool semantics.
- LLM execution: the default Achilles LLM agent executes the prompt with mode: fast, returning text responses.
- Output normalization: code fences and wrappers are stripped, and the response is returned as the contract payload.
- Error response: failures are returned as { "ok": false, "error": ... }.
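The dispatch, envelope-normalization, validation, output-normalization, and error-response stages above can be sketched together. This is a minimal illustration, not the actual llm_tool.mjs: the helper names (extractArguments, requireFields, stripFences, dispatch) and the exact envelope shapes handled are assumptions.

```javascript
// Illustrative sketch of the llm_tool.mjs dispatch pipeline (not the real code).

// Envelope normalization: accept several MCP envelope shapes and
// return a plain arguments object.
function extractArguments(envelope) {
  if (envelope?.params?.arguments) return envelope.params.arguments; // JSON-RPC style
  if (envelope?.arguments) return envelope.arguments;                // flat style
  return envelope ?? {};                                             // bare arguments
}

// Input validation: enforce tool-specific required fields before any LLM call.
function requireFields(args, fields) {
  const missing = fields.filter((f) => args[f] === undefined || args[f] === null);
  if (missing.length > 0) {
    throw new Error(`missing required fields: ${missing.join(", ")}`);
  }
}

// Output normalization: strip a surrounding Markdown code fence, if any.
function stripFences(text) {
  const m = text.trim().match(/^```[\w-]*\n([\s\S]*?)\n```$/);
  return m ? m[1] : text.trim();
}

// Dispatch: map the resolved TOOL_NAME to a handler. Success and failure
// both come back as the { ok, ... } contract payload.
function dispatch(envelope, handlers, toolName = process.env.TOOL_NAME) {
  try {
    const args = extractArguments(envelope);
    const handler = handlers[toolName];
    if (!handler) throw new Error(`unknown tool: ${toolName}`);
    return { ok: true, result: stripFences(String(handler(args))) };
  } catch (err) {
    return { ok: false, error: String(err.message ?? err) };
  }
}
```

The error path deliberately never throws out of dispatch, so the MCP response always carries a well-formed payload.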
Conflict Resolution Path
llm_resolve_conflict first attempts a deterministic merge using git merge-file. Only if conflict markers remain does it escalate to LLM resolution with explicit merge-strategy prompts.
This dual path avoids unnecessary model calls and preserves deterministic outcomes whenever possible.
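The two-stage path can be sketched as follows, using Node's child_process to invoke git merge-file -p, which prints the merge result to stdout and leaves standard conflict markers (and a non-zero exit status) when the three-way merge fails. The file-path parameters, the hasConflictMarkers helper, and the llmResolve callback are illustrative assumptions, not the real llm_resolve_conflict signature.

```javascript
import { execFileSync } from "node:child_process";

// Deterministic check: the standard marker lines git leaves behind
// when a three-way merge cannot be completed automatically.
function hasConflictMarkers(text) {
  return /^(<{7}|={7}|>{7})/m.test(text);
}

// Stage 1: deterministic three-way merge via `git merge-file -p`.
// Stage 2 (only if markers remain): escalate to the LLM resolver.
function resolveConflict(oursPath, basePath, theirsPath, llmResolve) {
  let merged;
  try {
    // -p prints the merged result to stdout instead of rewriting oursPath.
    merged = execFileSync(
      "git",
      ["merge-file", "-p", oursPath, basePath, theirsPath],
      { encoding: "utf8" }
    );
  } catch (err) {
    // git merge-file exits non-zero when conflicts remain; stdout still
    // holds the partially merged text, conflict markers included.
    merged = err.stdout ?? "";
  }
  if (!hasConflictMarkers(merged)) return merged; // deterministic outcome, no model call
  return llmResolve(merged);                      // escalate with a merge-strategy prompt
}
```

Checking for markers rather than relying solely on the exit status keeps the decision to escalate based on the actual text handed to the LLM.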