guv3 Architecture Reference

Key Files

  • gu/core/assistant.py: Router, RouterTool, CompleteOrEscalate, agent_invoke, delegation classes
  • gu/graphs/init_graph.py: Builder initialization, context_extractor, leave_skill node
  • gu/graphs/primary_assistant.py: Main graph compilation, route_to_workflow
  • gu/state/state.py: State TypedDict with dialog_state reducer
  • gu/utils/utilities.py: create_tool_node_with_fallback, update_state_in_db
  • gu/utils/check_providers.py: LLM configuration (llm1 = GPT-4.1-mini, llm2 = Claude via Vertex)

Message Flow

User Input (HumanMessage)
    |
    v
START --> context_extractor (extracts initial state)
    |
    v
route_to_workflow() [DETERMINISTIC - based on dialog_state, NOT LLM]
    |
    +---> dialog_state is empty --> primary_assistant
    +---> dialog_state == "appointment_assistant" --> appointment_assistant (subgraph)
    +---> dialog_state == "property_assistant" --> property_assistant (subgraph)
    +---> ... (one per registered agent)
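
The deterministic branch above can be sketched as a plain function; the State shape follows the TypedDict shown later in this document, and the node names come from the diagram (no LLM call is involved):

```python
def route_to_workflow(state: dict) -> str:
    """Route purely on dialog_state; deterministic, never calls an LLM."""
    dialog_state = state.get("dialog_state") or []
    if not dialog_state:
        return "primary_assistant"
    # dialog_state holds at most one active agent name (see State Management)
    return dialog_state[-1]
```

This is also why a stale dialog_state in MongoDB sends every new message straight into the wrong subgraph, as noted under Common Debugging Scenarios.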

Primary Assistant Flow

primary_assistant
    |
    v
Router.__call__() [checks tool_calls in last AIMessage]
    |
    +---> ToPropertyAssistant tool_call --> enter_property_assistant
    +---> ToAppointmentAssistant tool_call --> enter_appointment_assistant
    +---> ... (one per delegation class)
    +---> Regular tool_call --> primary_assistant_tools
    +---> No tool_calls --> guard_translator (respond to user)
    |
    v (if primary_assistant_tools)
RouterTool.__call__() [always returns to same assistant]
    |
    v
primary_assistant (loop continues)

Subgraph Internal Flow

enter_{agent_name} [create_entry_node - logging only]
    |
    v
{agent_name} subgraph:
    START --> {agent_name}_runnable [calls agent_invoke]
        |
        v
    route_{agent_name}() [checks for CompleteOrEscalate]
        |
        +---> CompleteOrEscalate detected --> END (exit subgraph)
        +---> Regular tool_calls --> {agent_name}_tools
        +---> No tool_calls --> END (exit subgraph)
        |
        v (if tools)
    {agent_name}_tools --> {agent_name}_runnable (loop)
    |
    v (subgraph exits)
route_after_subgraph() [in graphs/{agent_name}.py]
    |
    +---> CompleteOrEscalate in last message --> leave_skill
    +---> Otherwise --> guard_translator (respond to user)
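
A minimal sketch of that post-subgraph check, with messages modeled as plain dicts rather than LangChain message objects:

```python
def route_after_subgraph(state: dict) -> str:
    """Sketch: inspect the last message's tool_calls for CompleteOrEscalate."""
    last = state["messages"][-1]
    tool_calls = last.get("tool_calls", [])
    if any(tc["name"] == "CompleteOrEscalate" for tc in tool_calls):
        return "leave_skill"       # hand control back to primary_assistant
    return "guard_translator"      # otherwise respond to the user
```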

State Management

State Definition

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]  # Accumulates messages
    user_info: str        # User data from MongoDB
    context: str          # Extracted context
    dialog_state: Annotated[list[Literal[...]], update_dialog_stack]

Dialog State Transitions

The update_dialog_stack reducer maintains a single-element list (not a real stack):

  • Enter agent: input "appointment_assistant" -> ["appointment_assistant"] (e.g., user asks about a visit)
  • Exit agent: input "pop" -> [] (e.g., CompleteOrEscalate called)
  • No change: input None -> previous value kept (e.g., normal tool execution)
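
The update_dialog_stack reducer behavior described above can be sketched as:

```python
from typing import Optional

def update_dialog_stack(left: list, right: Optional[str]) -> list:
    """Reducer sketch: maintains a single-element list, not a real stack."""
    if right is None:
        return left      # no change: keep the previous value
    if right == "pop":
        return []        # exit agent: clear the active agent
    return [right]       # enter agent: replace with the new agent name
```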

How dialog_state Persists

update_state_in_db() saves the current dialog_state to MongoDB after each node execution. On next user message, context_extractor reads it back.

Core Components

CompleteOrEscalate

class CompleteOrEscalate(BaseModel):
    cancel: bool = True
    reason: str
  • Purpose: The ONLY way for a subgraph to return control to primary_assistant
  • Added automatically by agent_invoke() — never add it to a tool list manually
  • Exception: primary_assistant and only_owner_assistant do NOT get CompleteOrEscalate

Router (routing AFTER assistant node)

class Router:
    POSSIBLE_ROUTES = [
        "enter_property_assistant",
        "enter_appointment_assistant",
        "enter_visit_tracker_assistant",
        "enter_mortgage_loan_assistant",
        "enter_prospect_offer_assistant",
        "enter_searcher_assistant",
        "enter_owner_info_assistant",
        "leave_skill",
        "guard_translator",
        END,
    ]

Key methods:

  • __call__() — Main routing logic, saves AI message to DB
  • map_tool_calls() — Maps delegation class names to node names
  • get_role_tool_mapping() — Prevents self-delegation (e.g., appointment_assistant can't call ToAppointmentAssistant)
  • exclude_role_tools() — Removes inaccessible tools based on user role
  • route_tool_calls() — Extracts tool name from last message, looks up route
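
The route_tool_calls() lookup might look roughly like this sketch; the TOOL_TO_ROUTE mapping is illustrative (the real mapping lives in Router.map_tool_calls()), and the message is modeled as a plain dict:

```python
# Illustrative subset of the delegation-class -> node mapping.
TOOL_TO_ROUTE = {
    "ToAppointmentAssistant": "enter_appointment_assistant",
    "ToPropertyAssistant": "enter_property_assistant",
    "CompleteOrEscalate": "leave_skill",
}

def route_tool_calls(last_message: dict) -> str:
    tool_calls = last_message.get("tool_calls", [])
    if not tool_calls:
        return "guard_translator"        # plain text reply to the user
    name = tool_calls[0]["name"]
    # delegation tool -> enter node; anything else -> this assistant's tool node
    return TOOL_TO_ROUTE.get(name, "primary_assistant_tools")
```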

RouterTool (routing AFTER tool execution)

Always returns to the same assistant that invoked the tool. Saves AI+Tool message pair to DB.

agent_invoke() (core LLM wrapper)

def agent_invoke(
    assistant_prompt: ChatPromptTemplate,
    tools: List,
    state: State,
    config: RunnableConfig,
    name: str,
    llm_primary=None,    # Defaults to llm1 (GPT-4.1-mini)
    llm_fallback=None,    # Defaults to llm2 (Claude via Vertex)
)

What it does:

  1. Binds tools to LLM (adds CompleteOrEscalate for subgraph agents)
  2. Creates runnable: prompt | llm.with_fallbacks([fallback_llm])
  3. Cleans messages (removes orphaned tool_calls/ToolMessages, deduplicates HumanMessages)
  4. Limits to last 50 messages
  5. Invokes LLM, retries once if empty response
  6. Returns {"messages": result}

Critical: The extensive message cleaning is needed because OpenAI's API enforces strict tool_call/ToolMessage parity; every tool_call emitted by an AIMessage must be answered by a matching ToolMessage, or the request is rejected.
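
Step 3's orphan cleaning could look roughly like this, with messages modeled as simplified dicts ({"type": ..., "tool_calls": ..., "tool_call_id": ...}) instead of LangChain message objects:

```python
def remove_orphans(messages: list) -> list:
    """Drop unanswered AIMessage tool_calls and unmatched ToolMessages."""
    tool_call_ids = {
        tc["id"]
        for m in messages if m["type"] == "ai"
        for tc in m.get("tool_calls", [])
    }
    answered_ids = {m["tool_call_id"] for m in messages if m["type"] == "tool"}
    cleaned = []
    for m in messages:
        if m["type"] == "tool" and m["tool_call_id"] not in tool_call_ids:
            continue  # ToolMessage with no matching tool_call
        if m["type"] == "ai" and m.get("tool_calls") and not all(
            tc["id"] in answered_ids for tc in m["tool_calls"]
        ):
            continue  # AIMessage whose tool_calls never got replies
        cleaned.append(m)
    return cleaned
```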

Delegation Classes

Each delegation class is a Pydantic BaseModel with a query: str field:

class ToAppointmentAssistant(BaseModel):
    """When user mentions: cita, visita, agendar, ver propiedad."""
    query: str

The docstring is what the LLM sees to decide when to delegate. Write it carefully.

Registration checklist for each delegation class (all in core/assistant.py):

  1. Class definition
  2. Router.POSSIBLE_ROUTES — add "enter_{name}"
  3. Router.map_tool_calls() — add mapping
  4. Router.get_role_tool_mapping() — add reverse mapping
  5. get_agents_tools() — add to list and tool_mapping dict
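
As an illustration, registering a hypothetical ToBillingAssistant would touch each of those five points. The structures below are simplified stand-ins for the real Router attributes, and the real delegation class is a Pydantic BaseModel, not a plain class:

```python
# 1. Class definition; the docstring is the LLM-facing trigger description.
class ToBillingAssistant:
    """When the user asks about invoices, payments, or billing."""
    def __init__(self, query: str):
        self.query = query

POSSIBLE_ROUTES = ["leave_skill", "guard_translator"]  # stand-in for Router.POSSIBLE_ROUTES
TOOL_CALL_MAP = {}   # stand-in for Router.map_tool_calls()
ROLE_TOOL_MAP = {}   # stand-in for Router.get_role_tool_mapping()
AGENT_TOOLS = []     # stand-in for get_agents_tools()

POSSIBLE_ROUTES.append("enter_billing_assistant")                 # step 2
TOOL_CALL_MAP["ToBillingAssistant"] = "enter_billing_assistant"   # step 3
ROLE_TOOL_MAP["billing_assistant"] = "ToBillingAssistant"         # step 4 (reverse mapping)
AGENT_TOOLS.append(ToBillingAssistant)                            # step 5
```

Missing any one of the five registrations produces the "agent never gets called" symptom described below.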

Common Debugging Scenarios

Agent never gets called

  1. Check dialog_state in MongoDB — if stuck on another agent, messages go there
  2. Check delegation class docstring — is it descriptive enough for the LLM?
  3. Check primary_assistant_tools — is the To*Assistant class in the list?
  4. Check Router.POSSIBLE_ROUTES — is "enter_{name}" registered?

Agent gets called but doesn't respond

  1. Check agent_invoke name parameter — must match exactly
  2. Check tool definitions — are tools returning strings?
  3. Check prompt template — does it have ("placeholder", "{messages}")?
  4. Check LLM errors in logs — look for tool_call/ToolMessage parity issues

Agent can't return to primary

  1. Check that CompleteOrEscalate is NOT manually added to tools (it's auto-added)
  2. Check is_delegation_tool_call() in agent.py — must detect CompleteOrEscalate
  3. Check route_after_subgraph() in graphs/{name}.py — must route to "leave_skill"
  4. Check that leave_skill node exists in init_graph.py (it's global)

Messages are lost or duplicated

  1. agent_invoke limits to 50 messages — older messages are dropped
  2. Orphaned tool_calls are cleaned — if AIMessage has tool_calls but no ToolMessage, both are removed
  3. Consecutive HumanMessages are deduplicated — only the last one is kept
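
The deduplication rule in point 3 can be sketched as follows, with messages again modeled as simplified dicts:

```python
def dedupe_consecutive_humans(messages: list) -> list:
    """For a run of consecutive human messages, keep only the last one."""
    cleaned = []
    for m in messages:
        if cleaned and cleaned[-1]["type"] == "human" and m["type"] == "human":
            cleaned[-1] = m   # later human message replaces the earlier one
        else:
            cleaned.append(m)
    return cleaned
```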

LLM Configuration

From gu/utils/check_providers.py:

llm1 = ChatOpenAI(model="gpt-4.1-mini")         # Primary LLM
llm2 = get_anthropic_llm()                        # Fallback (Claude 3.5 Sonnet via Vertex AI)
llm1_mini = ChatVertexAI(model="gemini-2.0-flash-001")  # For lightweight tasks

# If OpenAI is unavailable:
llm1 = get_anthropic_llm()                        # Claude becomes primary
llm2 = ChatOpenAI(model="gpt-4o-2024-08-06")     # OpenAI becomes fallback

Guard Translator

The guard_translator node is the final step before sending a response to the user. It handles translation and message formatting for WhatsApp delivery. All subgraph exits that aren't leave_skill go to guard_translator.
