# InfraNodus Reasoning Engine (`infranodus-reasoning`)
Programmatic cognitive reasoning system integrating:
- Temporal state tracking (BIASED/FOCUSED/DIVERSIFIED/DISPERSED with energy economics)
- Writing pattern analysis (grammatical signals of cognitive states)
- Critical perspective generation (state-aware questioning)
- Ontology validation (anti-hierarchy enforcement)
- Gap interpretation (contextual InfraNodus integration)
- Intelligent routing (context detection and pipeline selection)
## Architecture

**Programmatic layer** (Python modules in `scripts/`):

- `router.py` - Context detection and route selection
- `coordinator.py` - Pipeline orchestration
- `state_manager.py` - Temporal state persistence
- `pattern_detector.py` - Writing pattern analysis
- `question_engine.py` - Critical question generation
- `gap_analyzer.py` - Gap interpretation with state context
- `ontology_validator.py` - Anti-hierarchy validation
- `infranodus_bridge.py` - MCP tool integration interface
- `utils.py` - Shared data structures and utilities

**Interpretive layer** (Claude):

- `SKILL.md` (this file) - Orchestration and natural language synthesis
- `components/guidance.md` - Philosophical context for interpretation
- `components/*.md` - Reference component skills
## Usage Workflow

### Step 1: Route Detection

Invoke the router to analyze user intent and select the appropriate pipeline:

```bash
python3 scripts/router.py "user message here"
```

Router output (JSON):

```json
{
  "route": "text_analysis",
  "confidence": 0.85,
  "reason": "Substantial text provided for comprehensive analysis",
  "components": ["pattern_detector", "gap_analyzer", "infranodus_bridge"],
  "options": null
}
```
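The routing decision can also be consumed programmatically. A minimal sketch of parsing the router output; the 0.5 confidence cutoff is an assumed value for illustration, not something taken from `router.py`:

```python
import json

# Assumed cutoff; the real router may apply a different threshold.
CONFIDENCE_THRESHOLD = 0.5

def select_route(router_json: str) -> str:
    """Return the route to execute, falling back to "clarify" on low confidence."""
    decision = json.loads(router_json)
    if decision.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        return "clarify"
    return decision["route"]

sample = '{"route": "text_analysis", "confidence": 0.85, "options": null}'
print(select_route(sample))  # → text_analysis
```

Falling back to `clarify` on low confidence mirrors the error-handling policy described later in this document.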
### Step 2: Pipeline Execution

Invoke the coordinator with the selected route:

```bash
python3 scripts/coordinator.py <route> "user message" "text to analyze"
```

Coordinator output (JSON):

```json
{
  "route": "text_analysis",
  "state_before": {...},
  "state_after": {...},
  "patterns": {...},
  "questions": [...],
  "gaps": [...],
  "recommendations": [...],
  "requires_mcp": true,
  "mcp_requests": [...],
  "errors": []
}
```
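A sketch of consuming this output: check for errors, then gate MCP invocation on the `requires_mcp` flag. The `parse_gap_response` parser name in the sample data is an assumption for illustration:

```python
import json

def plan_mcp_calls(coordinator_json: str) -> list:
    """Collect the MCP tool invocations the coordinator requested (empty if none)."""
    result = json.loads(coordinator_json)
    if result.get("errors"):
        raise RuntimeError(f"coordinator reported errors: {result['errors']}")
    if not result.get("requires_mcp"):
        return []
    return [(req["tool"], req["parameters"]) for req in result.get("mcp_requests", [])]

sample = json.dumps({
    "route": "text_analysis",
    "requires_mcp": True,
    "mcp_requests": [{"tool": "generate_content_gaps",
                      "parameters": {"text": "..."},
                      "parser": "parse_gap_response"}],
    "errors": [],
})
print(plan_mcp_calls(sample))  # → [('generate_content_gaps', {'text': '...'})]
```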
### Step 3: MCP Tool Invocation (if required)

If `requires_mcp` is `true`, invoke InfraNodus MCP tools using the `mcp_requests` data. Each request contains:

```json
{
  "tool": "generate_knowledge_graph",
  "parameters": {...},
  "parser": "parse_graph_response"
}
```

Invoke the tool, then parse the response using the specified parser from `infranodus_bridge.py`.
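One way to map the `parser` field of a request to a function is a simple registry. The parser name below mirrors the example above, but its body is an illustrative placeholder, not the actual `infranodus_bridge.py` implementation:

```python
# Placeholder parser: the real implementations live in infranodus_bridge.py.
def parse_graph_response(raw: dict) -> dict:
    """Keep only the fields the interpretive layer needs."""
    return {"nodes": raw.get("nodes", []), "edges": raw.get("edges", [])}

# Registry mapping the "parser" field of an mcp_request to a function.
PARSERS = {"parse_graph_response": parse_graph_response}

def parse_mcp_response(parser_name: str, raw: dict) -> dict:
    if parser_name not in PARSERS:
        raise ValueError(f"unknown parser: {parser_name}")
    return PARSERS[parser_name](raw)

print(parse_mcp_response("parse_graph_response", {"nodes": ["ML"], "edges": []}))
```

A registry keeps the dispatch data-driven: the coordinator names the parser, and the bridge resolves it without hard-coded branching.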
### Step 4: Result Interpretation

Combine programmatic output with MCP data and interpret using the context in `components/guidance.md`:

- Pattern → state correlation: reference the cognitive states in guidance.md
- Questions → priority: use state-aware question interpretation
- Gaps → strategies: apply state-dependent gap interpretation
- Recommendations → natural language: synthesize into user-facing guidance
## Routes and Pipelines

### pattern_detection_only

- **When:** Text provided without a specific request
- **Components:** `[pattern_detector]`
- **Output:** Patterns, state detection
- **MCP:** No

Usage:

```bash
python3 scripts/coordinator.py pattern_detection_only "analyze" "text here"
```

Interpret: report patterns detected and any cognitive state shifts.
### text_analysis

- **When:** Grammar fixes, text analysis, "analyze" keyword + text
- **Components:** `[pattern_detector, gap_analyzer, infranodus_bridge]`
- **Output:** Patterns, gap analysis request
- **MCP:** Yes (`generate_content_gaps`)

Usage:

```bash
python3 scripts/coordinator.py text_analysis "fix grammar" "text here"
```

Interpret:

- Report pattern findings
- Invoke the InfraNodus MCP tool with `mcp_requests`
- Present grammar-corrected text with pattern-based insights
- Suggest gap development if relevant
### cognitive_diagnosis

- **When:** "stuck", "cognitive", "state", "thinking" keywords
- **Components:** `[state_manager, pattern_detector, question_engine]`
- **Output:** State analysis, diagnostic questions
- **MCP:** No

Usage:

```bash
python3 scripts/coordinator.py cognitive_diagnosis "I feel stuck" "user text"
```

Interpret:

- Report current cognitive state, dwelling time, and energy level
- Present diagnostic questions generated by `question_engine`
- Explain state dynamics using guidance.md
- Recommend a state transition if needed
### critical_intervention

- **When:** Energy < 0.2, dwelling exceeded, "challenge" keyword
- **Components:** `[question_engine, gap_analyzer]`
- **Output:** Maximum-challenge questions, state recommendations
- **MCP:** No

Usage:

```bash
python3 scripts/coordinator.py critical_intervention "challenge assumptions" "user text"
```

Interpret:

- Present challenging questions (8+ questions)
- Explain the reason for intervention (energy/dwelling)
- Recommend a state transition
- Provide blind spot analysis
### ontology_generation

- **When:** "ontology", "knowledge graph" keywords
- **Components:** `[ontology_validator, infranodus_bridge]`
- **Output:** Validation results, graph creation request
- **MCP:** Yes (`create_knowledge_graph`) if valid

Usage:

```bash
python3 scripts/coordinator.py ontology_generation "create ontology" "ontology text"
```

Interpret:

- Report validation results (errors, warnings, metrics)
- If invalid: explain anti-hierarchy or relation code violations
- If valid: invoke the `create_knowledge_graph` MCP tool
- Provide improvement recommendations
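To make the anti-hierarchy idea concrete, a dominance violation can be detected by counting how often each entity appears as a statement's subject. This is an illustrative sketch with an assumed 50% threshold, not the `ontology_validator.py` implementation:

```python
import re
from collections import Counter

DOMINANCE_THRESHOLD = 0.5  # assumed cutoff; ontology_validator.py may differ

def check_dominance(ontology_text: str) -> list:
    """Flag entities that appear as the subject of too large a share of statements."""
    subjects = re.findall(r"^\[\[([^\]]+)\]\]", ontology_text, flags=re.MULTILINE)
    counts = Counter(subjects)
    total = sum(counts.values())
    return [entity for entity, n in counts.items()
            if total and n / total > DOMINANCE_THRESHOLD]

text = ("[[ML]] uses [[data]] [relatedTo]\n"
        "[[ML]] has [[accuracy]] [hasAttribute]\n"
        "[[data]] feeds [[ML]] [relatedTo]")
print(check_dominance(text))  # ML is the subject of 2 of 3 statements → ['ML']
```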
### full_pipeline

- **When:** Substantial text (>200 words) + "develop"/"strategic" keywords
- **Components:** `[pattern_detector, gap_analyzer, infranodus_bridge, question_engine]`
- **Output:** Comprehensive analysis
- **MCP:** Yes (`develop_text_tool`, `generate_content_gaps`)

Usage:

```bash
python3 scripts/coordinator.py full_pipeline "develop this" "long text"
```

Interpret:

- Report pattern analysis
- Invoke multiple InfraNodus MCP tools (`develop_text_tool`, `generate_content_gaps`)
- Parse and contextualize gap data with `gap_analyzer`
- Present research questions
- Provide development strategy recommendations
- Generate follow-up questions
### clarify

- **When:** Ambiguous or very short messages
- **Components:** `[]`
- **Output:** Clarification request
- **MCP:** No

Interpret: ask the user to specify their intent (grammar? analysis? ontology? diagnosis?).
## State-Aware Interpretation

Always check the current conversation state before interpreting results:

```bash
python3 -c "from scripts.state_manager import load_state; import json; print(json.dumps(load_state(), indent=2))"
```

Key state factors:

- `current_state`: BIASED / FOCUSED / DIVERSIFIED / DISPERSED
- `dwelling_time`: exchanges spent in the current state
- `energy_level`: 0.0 to 1.0
- `state_history`: transition record

State affects:

- Question intensity and type
- Gap interpretation strategy
- Intervention priority
- Recommendation tone

Reference `components/guidance.md` for state-specific interpretation guidelines.
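These state factors feed a simple intervention decision. A sketch using the thresholds mentioned elsewhere in this document (energy floor 0.2 from the critical_intervention trigger, dwelling threshold 3 from the BIASED example); the actual logic in `state_manager.py` may differ:

```python
DWELLING_THRESHOLD = 3  # exchanges; per the BIASED example in this document
ENERGY_FLOOR = 0.2      # per the critical_intervention trigger

def needs_intervention(state: dict) -> bool:
    """True when dwelling time or energy level crosses an intervention threshold."""
    return (state.get("energy_level", 1.0) < ENERGY_FLOOR
            or state.get("dwelling_time", 0) > DWELLING_THRESHOLD)

state = {"current_state": "BIASED", "dwelling_time": 4, "energy_level": 0.65}
print(needs_intervention(state))  # → True (dwelling 4 exceeds threshold 3)
```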
## Examples

### Example 1: Grammar Correction with Pattern Analysis

User: "Fix grammar: Machine learning help us understand patterns. Its about connections not just data itself."

Workflow:

```bash
# Route detection
python3 scripts/router.py "Fix grammar: Machine learning..."
# Output: route="text_analysis", confidence=0.85

# Execute pipeline
python3 scripts/coordinator.py text_analysis "Fix grammar" "Machine learning help us..."
# Output: patterns detected, gap analysis request
```

Interpret:

- Correct grammar: "Machine learning helps us understand patterns. It's about connections, not just the data itself."
- Report patterns: repetitive_structures=false, punctuation_rhythm=mixed
- No significant cognitive state concerns
- Skip MCP gap analysis (text too short)
### Example 2: Cognitive Diagnosis

User: "I keep thinking about the same problem over and over. Can't move forward."

Workflow:

```bash
# Route
python3 scripts/router.py "I keep thinking..."
# Output: route="cognitive_diagnosis"

# Execute
python3 scripts/coordinator.py cognitive_diagnosis "I keep thinking..." "same problem over and over"
# Output: state=BIASED, dwelling=4, energy=0.65, questions=[8 challenging questions]
```

Interpret:

- Current state: BIASED (dwelling 4 exchanges, threshold 3)
- Energy level: 65% (sustainable but declining)
- Present diagnostic questions from `question_engine`
- Recommend a transition to the FOCUSED state
- Explain BIASED state dynamics from guidance.md
### Example 3: Ontology Validation

User: "Validate this ontology: [[ML]] uses [[data]] [relatedTo]\n[[ML]] has [[accuracy]] [hasAttribute]..."

Workflow:

```bash
# Route
python3 scripts/router.py "Validate this ontology..."
# Output: route="ontology_generation"

# Execute
python3 scripts/coordinator.py ontology_generation "validate" "[[ML]] uses [[data]]..."
# Output: validation results with errors/warnings
```

Interpret:

- Report validation status
- If errors: explain anti-hierarchy violations ("ML dominates with 80% of statements")
- Provide a correction strategy: "Distribute relationships across multiple entity pairs"
- If warnings: note relation code imbalance
- If valid: offer to save to InfraNodus via `create_knowledge_graph`
### Example 4: Full Strategic Development

User: "Help me develop this 800-word article about heart rate variability for SEO."

Workflow:

```bash
# Route
python3 scripts/router.py "Help me develop..."
# Output: route="full_pipeline"

# Execute
python3 scripts/coordinator.py full_pipeline "develop article" "[800-word HRV article]"
# Output: patterns, mcp_requests=[develop_text_tool, generate_content_gaps]

# Invoke MCP tools:
# 1. develop_text_tool → research questions, latent topics
# 2. generate_content_gaps → structural gaps
# Re-run the coordinator with MCP data for gap interpretation
```

Interpret:

- Present pattern analysis
- Invoke the InfraNodus MCP tools
- Interpret gaps contextually (current state: FOCUSED → "productive expansion opportunities")
- Present research questions
- Recommend specific topic development
- Provide SEO alignment suggestions (if `generate_seo_report` is used)
## Error Handling

- If the router errors: default to the `clarify` route
- If the coordinator errors: check the `errors` array in the output and report to the user
- If MCP tools are unavailable: skip MCP-dependent routes and fall back to pattern-only analysis
- If the state file is corrupt: the state manager auto-initializes a fresh state
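The router fallback can be wrapped around the subprocess call shown earlier. A sketch under the assumption that `router.py` prints its JSON decision to stdout:

```python
import json
import subprocess

def route_with_fallback(message: str) -> dict:
    """Run the router, defaulting to the clarify route if anything goes wrong."""
    clarify = {"route": "clarify", "confidence": 0.0, "reason": "router unavailable"}
    try:
        proc = subprocess.run(
            ["python3", "scripts/router.py", message],
            capture_output=True, text=True, timeout=10, check=True,
        )
        return json.loads(proc.stdout)
    except (subprocess.SubprocessError, OSError, json.JSONDecodeError):
        return clarify

# With scripts/router.py present this returns the router's decision;
# without it, the subprocess fails and the clarify default is returned.
print(route_with_fallback("hello")["route"])
```

Catching `SubprocessError` covers both non-zero exit codes (via `check=True`) and timeouts; `OSError` covers a missing `python3` binary.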
## Component Skill Reference

When additional context is needed beyond the programmatic output:

- Writing philosophy: `components/writing-assistant.md`
- Ontology syntax: `components/ontology-creator.md`
- Question templates: `components/critical-perspective.md`
- State dynamics: `components/cognitive-variability.md`
- Interpretive guidance: `components/guidance.md`
## Security and State Management

- State persistence: `conversation_state.json` in the skill directory
- State reset: delete `conversation_state.json` to start fresh
- Module safety: all modules validate inputs before processing
- MCP validation: `infranodus_bridge` validates all parameters before tool invocation
## Performance Notes

Programmatic advantages:

- ~10x faster pattern detection vs. manual analysis
- Deterministic state tracking across sessions
- Consistent validation (no human variability in ontology checking)
- Precise energy/dwelling calculations

Claude advantages:

- Natural language synthesis and explanation
- Contextual recommendation tailoring
- Creative examples and analogies
- Emotional intelligence in delivery
- MCP tool invocation and integration
## When NOT to Use This Skill

Skip if:

- Simple question answering (no reasoning/analysis needed)
- No text analysis, pattern detection, ontology, or cognitive diagnosis requested
- User explicitly requests a different skill or approach

Prefer this skill if:

- User provides text for analysis/correction
- Cognitive state concerns ("stuck", "obsessing", "scattered")
- Ontology/knowledge graph generation requested
- Strategic content development needed
- InfraNodus integration relevant
## Quick Reference

```bash
# Route detection
python3 scripts/router.py "message"

# Pipeline execution
python3 scripts/coordinator.py <route> "message" "text"

# Check current state
python3 -c "from scripts.state_manager import load_state; print(load_state()['current_state'])"

# Test pattern detection
python3 scripts/pattern_detector.py

# Test ontology validation
python3 scripts/ontology_validator.py

# View module documentation
cat components/guidance.md
```
**Remember:** You (Claude) are the interpretive layer. The Python modules provide algorithmic precision; you provide contextual wisdom, natural language synthesis, and user-facing intelligence. Use `components/guidance.md` to ground your interpretations in the philosophical framework.