Goal Setting & Monitoring
Goal Setting is the process of defining what success looks like before starting work. Monitoring is the continuous loop of checking "Are we there yet?". Together, they allow an agent to maintain focus over long horizons. Instead of executing a single prompt, the agent enters a loop: Act -> Observe -> Evaluate -> Adjust.
When to Use
- Open-Ended Tasks: "Write a high-quality blog post." (Requires iterative refinement).
- Code Generation: "Write code that passes all tests." (Requires Act -> Test -> Fix loop).
- Autonomous Agents: When the agent must operate without human intervention for a period.
- Ambiguous Instructions: To force the agent to clarify what "done" means.
Use Cases
- Test-Driven Development (TDD): Goal = "All tests pass". Loop: Write code -> Run tests -> Fix errors -> Repeat.
- Research: Goal = "Find 5 sources". Loop: Search -> Count sources -> Search more if < 5.
- Content Polish: Goal = "Score > 8/10 on readability". Loop: Rewrite -> Evaluate -> Rewrite.
Implementation Pattern
def goal_loop(objective, criteria):
    # `evaluator`, `planner`, `get_initial_state`, and `execute` are
    # assumed to be provided by the surrounding agent framework.
    # Step 1: Initialize
    current_state = get_initial_state()
    iterations = 0
    max_iterations = 10
    while iterations < max_iterations:
        # Step 2: Check Goal
        status = evaluator.check(
            goal=objective,
            criteria=criteria,
            current_state=current_state,
        )
        if status.is_complete:
            return current_state
        print(f"Goal not met: {status.feedback}")
        # Step 3: Act to reduce the gap
        # The agent sees the feedback and tries to fix it.
        action = planner.decide_next_step(
            goal=objective,
            feedback=status.feedback,
        )
        current_state = execute(action)
        iterations += 1
    raise TimeoutError("Goal not reached within iteration limit.")
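The pattern above leaves `evaluator`, `planner`, and `execute` abstract. A self-contained toy version of the same loop, using the "find 5 sources" research goal from the use cases (all names here are illustrative stand-ins, not a real agent API):

```python
from dataclasses import dataclass, field

@dataclass
class Status:
    is_complete: bool
    feedback: str

@dataclass
class State:
    sources: list = field(default_factory=list)

def check(goal_count: int, state: State) -> Status:
    # Evaluator: goal is met once `goal_count` sources are collected.
    if len(state.sources) >= goal_count:
        return Status(True, "done")
    return Status(False, f"have {len(state.sources)}, need {goal_count}")

def act(state: State) -> State:
    # Stand-in for a real search-tool call: add one placeholder source.
    state.sources.append(f"source-{len(state.sources) + 1}")
    return state

def run_goal_loop(goal_count: int = 5, max_iterations: int = 10) -> State:
    state = State()
    for _ in range(max_iterations):
        status = check(goal_count, state)
        if status.is_complete:
            return state
        state = act(state)
    raise TimeoutError("Goal not reached within iteration limit.")

result = run_goal_loop()
print(len(result.sources))  # 5
```

The `max_iterations` guard is the key safety valve: without it, a goal the agent cannot actually reach would loop forever.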