LearningAgent Module Reference¶
Overview¶
LearningAgent is the shared goal-seeking engine responsible for:
- learning facts from content
- retrieving knowledge from memory
- reasoning about temporal changes
- synthesizing final answers with an LLM
The refactor preserves the existing public API while moving implementation details into focused modules.
Compatibility imports¶
| Import | Status | Notes |
|---|---|---|
| `from amplihack.agents.goal_seeking.learning_agent import LearningAgent` | Primary compatibility path | Stable import path for direct callers |
| `from amplihack.agents.goal_seeking import LearningAgent` | Supported for backward compatibility | Import works even though `LearningAgent` is not added to `__all__` |
| `from amplihack.agents.goal_seeking import GoalSeekingAgent` | Preferred public entry point | `GoalSeekingAgent` delegates learning and answering work to `LearningAgent` |
Constructor¶
```python
LearningAgent(
    agent_name: str = "learning_agent",
    model: str | None = None,
    storage_path: Path | None = None,
    use_hierarchical: bool = False,
    prompt_variant: int | None = None,
    **kwargs: Any,
)
```
Constructor parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `agent_name` | `str` | `"learning_agent"` | Memory namespace and runtime identity |
| `model` | `str \| None` | `None` | Explicit LLM model override |
| `storage_path` | `Path \| None` | `None` | Custom memory storage location |
| `use_hierarchical` | `bool` | `False` | Use cognitive or hierarchical memory adapters instead of the flat retriever |
| `prompt_variant` | `int \| None` | `None` | Load a synthesis prompt variant instead of the default prompt |
| `**kwargs` | `Any` | - | Reserved compatibility surface for callers and wrappers |
Public methods¶
learn_from_content¶
Extracts facts from content, attaches source and temporal metadata, stores the resulting facts, and optionally stores a summary concept map.
Returns
| Key | Meaning |
|---|---|
| `facts_extracted` | Number of facts extracted from the content |
| `facts_stored` | Number of facts written to memory |
| `content_summary` | Short summary of the learned content |
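The result shape can be sketched as follows. The key names come from the table above; the counting logic and helper name are illustrative stand-ins, not the real ingestion pipeline.

```python
# Hypothetical sketch of the dict learn_from_content returns.
# Only the key names are taken from the documented contract.
def summarize_ingestion(
    extracted_facts: list[str],
    stored_facts: list[str],
    summary: str,
) -> dict:
    return {
        "facts_extracted": len(extracted_facts),  # facts pulled from content
        "facts_stored": len(stored_facts),        # facts actually persisted
        "content_summary": summary,               # short content summary
    }

result = summarize_ingestion(
    extracted_facts=["Paris is the capital of France", "France is in Europe"],
    stored_facts=["Paris is the capital of France"],
    summary="Basic facts about France",
)
print(result["facts_extracted"])  # 2
```

Note that `facts_stored` can be lower than `facts_extracted` when deduplication or validation drops facts before storage.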
answer_question¶
```python
async def answer_question(
    self,
    question: str,
    question_level: str = "L1",
    return_trace: bool = False,
    _skip_qanda_store: bool = False,
    _force_simple: bool = False,
) -> str | tuple[str, ReasoningTrace | None]
```
Runs the standard answer pipeline:
- detect intent
- select retrieval strategy
- perform math or temporal preprocessing when needed
- synthesize the final answer
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `question` | `str` | required | Question to answer |
| `question_level` | `str` | `"L1"` | Difficulty hint for prompt selection and synthesis style |
| `return_trace` | `bool` | `False` | Return a `ReasoningTrace` alongside the answer |
| `_skip_qanda_store` | `bool` | `False` | Internal flag used to skip writing Q and A history facts |
| `_force_simple` | `bool` | `False` | Internal flag used to force exhaustive simple retrieval in fallback cases |
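The pipeline stages can be sketched end to end. Every stage function below is an illustrative stub; the real intent detection, retrieval selection, and synthesis live in their owning modules.

```python
import asyncio

# Illustrative walk-through of the answer_question stages: detect intent,
# select a retrieval strategy, preprocess if needed, synthesize. The
# heuristics here are stand-ins, not the real classifiers.
async def answer_question_sketch(
    question: str,
    question_level: str = "L1",
    return_trace: bool = False,
):
    trace: list[str] = []

    # Stage 1: detect intent (stub keyword heuristic).
    intent = "aggregation" if "how many" in question.lower() else "simple"
    trace.append(f"intent={intent}")

    # Stage 2: select a retrieval strategy from the intent.
    strategy = "aggregation_retrieval" if intent == "aggregation" else "simple_retrieval"
    trace.append(f"strategy={strategy}")

    # Stage 3: math or temporal preprocessing would run here when needed.
    # Stage 4: synthesize the final answer.
    answer = f"[{question_level}] answered via {strategy}"
    trace.append("synthesized")

    return (answer, trace) if return_trace else answer

answer, trace = asyncio.run(
    answer_question_sketch("How many cities?", return_trace=True)
)
print(answer)
print(trace)
```

When `return_trace` is `False`, only the answer string comes back, matching the `str | tuple[...]` return annotation above.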
answer_question_agentic¶
```python
async def answer_question_agentic(
    self,
    question: str,
    max_iterations: int = 3,
    return_trace: bool = False,
) -> str | tuple[str, ReasoningTrace | None]
```
Runs the single-shot answer pipeline first, checks completeness, searches for missing facts, and re-synthesizes only when that adds information.
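The refinement loop described above can be sketched with stubbed-out completeness checking and fact search. All data and helper logic here are illustrative assumptions; only the loop shape (answer, check, search, re-synthesize, bounded by `max_iterations`) reflects the documented behavior.

```python
import asyncio

# Sketch of the agentic loop: answer once, look for gaps, search for
# missing facts, and re-synthesize only when the search adds information.
async def answer_question_agentic_sketch(question: str, max_iterations: int = 3) -> str:
    known_facts = {"capital": "Paris"}          # facts already in memory (stub)
    missing = ["population"]                    # gaps the completeness check flags
    searchable = {"population": "about 67 million"}  # what a search could find

    answer = f"{question}: {known_facts}"       # single-shot answer first
    for _ in range(max_iterations):
        if not missing:                         # answer judged complete
            break
        gap = missing.pop(0)
        found = searchable.get(gap)
        if found is None:                       # search added nothing; stop
            break
        known_facts[gap] = found                # re-synthesize with the new fact
        answer = f"{question}: {known_facts}"
    return answer

print(asyncio.run(answer_question_agentic_sketch("Tell me about France")))
```

The early `break` branches are the important part: the loop never burns iterations once the answer is complete or the search stops yielding new facts.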
get_memory_stats¶
Returns memory statistics from the active backend.
flush_memory¶
Flushes memory caches without discarding stored knowledge.
close¶
Closes the underlying memory backend and releases agent resources.
Module ownership map¶
The eight primary files are the stable ownership boundaries for the refactor.
| File | Owns |
|---|---|
| `learning_agent.py` | constructor, shared state, lifecycle, retry helpers, public delegation |
| `learning_ingestion.py` | ingestion pipeline, fact batch preparation, source labels, summary storage |
| `answer_synthesizer.py` | answer synthesis, completeness evaluation, prompt assembly |
| `retrieval_strategies.py` | retrieval selection, simple/entity/concept/aggregation strategies |
| `intent_detector.py` | intent classification and routing metadata |
| `temporal_reasoning.py` | transition chains, temporal lookups, temporal parsing |
| `code_synthesis.py` | generated Python for hard temporal reasoning cases |
| `knowledge_utils.py` | arithmetic validation, knowledge explanation, fact verification helpers |
Expected method placement¶
The refactor keeps these methods with their owning modules.
learning_ingestion.py¶
- `learn_from_content`
- `_truncate_learning_content`
- `_extract_source_label`
- `_build_store_fact_kwargs`
- `_build_summary_store_kwargs`
- `prepare_fact_batch`
- `store_fact_batch`
- `_store_summary_concept_map`
- `_detect_temporal_metadata`
- `_fast_detect_temporal_metadata`
- `_extract_facts_with_llm`
intent_detector.py¶
- `_detect_intent`
- `SIMPLE_INTENTS`
- `AGGREGATION_INTENTS`
temporal_reasoning.py¶
- `_transition_chain_from_facts`
- `retrieve_transition_chain`
- `_extract_temporal_state_values`
- `_collapse_change_count_transitions`
- `_collapse_temporal_lookup_transitions`
- `_format_temporal_lookup_answer`
- `_parse_temporal_index`
- `_heuristic_temporal_entity_field`
- `_should_short_circuit_temporal_answer`
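To give a feel for what collapsing a transition chain involves, here is a small, self-contained illustration of a change-count collapse. The function name and chain representation are hypothetical; the real temporal helpers are more involved.

```python
# Hypothetical illustration of collapsing a (timestamp, value) transition
# chain for a "how many times did X change?" question.
def collapse_change_count(chain: list[tuple[str, str]]) -> int:
    """Count how many times the tracked value actually changed."""
    changes = 0
    # Compare each state with its successor; unchanged pairs don't count.
    for (_, prev), (_, curr) in zip(chain, chain[1:]):
        if prev != curr:
            changes += 1
    return changes

chain = [("2021", "red"), ("2022", "red"), ("2023", "blue"), ("2024", "green")]
print(collapse_change_count(chain))  # 2
```

Deduplicating unchanged adjacent states is the key step: facts often restate the same value across periods, and a naive count of facts would overstate the number of changes.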
code_synthesis.py¶
- `temporal_code_synthesis`
- `_code_generation_tool`
knowledge_utils.py¶
- `_validate_arithmetic`
- `_compute_math_result`
- `_explain_knowledge`
- `_find_knowledge_gaps`
- `_verify_fact`
- `get_memory_stats`
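A deterministic arithmetic helper in the spirit of `_validate_arithmetic` and `_compute_math_result` can be sketched with the standard-library `ast` module; the real helpers may differ in scope and error handling.

```python
import ast
import operator

# Sketch of eval()-free arithmetic: parse the expression into an AST and
# walk it, permitting only numeric constants and the four basic operators.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def compute_math_result(expression: str) -> float:
    """Evaluate a simple arithmetic expression deterministically."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

print(compute_math_result("3 * (4 + 5)"))  # 27
```

Restricting the walker to a whitelist of node types is what makes this safe to run on LLM-produced expressions, unlike a raw `eval()`.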
retrieval_strategies.py¶
- `_simple_retrieval`
- `_keyword_expanded_retrieval`
- `_entity_retrieval`
- `_concept_retrieval`
- `_aggregation_retrieval`
- `_get_summary_nodes`
- and the rest of the retrieval-adjacent helpers that support those strategies
answer_synthesizer.py¶
- `answer_question`
- `answer_question_agentic`
- `_evaluate_answer_completeness`
- `_synthesize_with_llm`
Registered actions¶
The facade still registers the same high-level actions on ActionExecutor.
| Action | Backing behavior |
|---|---|
| `read_content` | content ingestion input helper |
| `search_memory` | memory lookup through the active backend |
| `synthesize_answer` | LLM synthesis via `answer_synthesizer.py` |
| `calculate` | deterministic arithmetic helper |
| `code_generation` | temporal code generation via `code_synthesis.py` |
| `explain_knowledge` | topic explanation helper |
| `find_knowledge_gaps` | knowledge gap analysis |
| `verify_fact` | fact validation against stored knowledge |
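The registration pattern can be sketched with a minimal dict-based executor. The `ActionExecutorSketch` class and its `register`/`execute` methods are illustrative stand-ins; the actual `ActionExecutor` API in amplihack may look different.

```python
# Minimal sketch of name-to-handler action registration; illustrative only.
class ActionExecutorSketch:
    def __init__(self) -> None:
        self._actions: dict[str, object] = {}

    def register(self, name: str, handler) -> None:
        # Each high-level action maps to one backing callable.
        self._actions[name] = handler

    def execute(self, name: str, **kwargs):
        return self._actions[name](**kwargs)

executor = ActionExecutorSketch()
# Stub handlers standing in for the real backing behaviors.
executor.register("calculate", lambda a, b: a + b)
executor.register("search_memory", lambda query: [f"fact matching {query!r}"])

print(executor.execute("calculate", a=2, b=2))            # 4
print(executor.execute("search_memory", query="capital"))
```

Keeping the actions as thin names over module-owned callables is what lets the facade re-register the same action surface after the refactor without changing callers.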
Configuration surface¶
The refactor does not add new runtime configuration. It preserves the existing knobs.
Constructor-driven configuration¶
| Knob | Scope | Effect |
|---|---|---|
| `model` | extraction and synthesis | Chooses the LLM used for prompts and completions |
| `storage_path` | memory backend | Changes where memory data is stored |
| `use_hierarchical` | memory backend | Selects `CognitiveAdapter` or `FlatRetrieverAdapter` when available |
| `prompt_variant` | synthesis | Loads a variant prompt for eval or experimentation |
Environment variables¶
| Variable | Used when | Effect |
|---|---|---|
| `EVAL_MODEL` | `model` is omitted | Provides the default LLM model name |
No new environment variables are introduced by the module split.
Internal helper modules¶
Private support files may exist to keep the primary modules small. Two common patterns are expected:
- retrieval support helpers such as `_retrieval_core.py`, `_retrieval_entity.py`, and `_retrieval_meta.py`
- synthesis support helpers such as `_answer_prompting.py`
These files remain private implementation details. Callers should continue to target LearningAgent, not individual support modules.
Test layout¶
| Test file | Scope |
|---|---|
| `test_learning_agent_core.py` | constructor, retry, lifecycle |
| `test_learning_agent_ingestion.py` | ingestion and storage |
| `test_learning_agent_retrieval.py` | retrieval strategies and fallbacks |
| `test_learning_agent_temporal.py` | temporal reasoning and code synthesis |
| `test_math_intent.py` | math and intent behavior |
| `test_agentic_answer_mode.py` | answer refinement behavior |
| `test_goal_seeking_agent.py` | higher-level agent compatibility |
Validation commands¶
The refactored LearningAgent is validated with focused checks:
```shell
uv run ruff check src/amplihack/agents/goal_seeking tests/agents/goal_seeking
uv run pyright src/amplihack/agents/goal_seeking tests/agents/goal_seeking
uv run python -m pytest tests/agents/goal_seeking/
```