# Quick Start
Build, install, and query a Knowledge Pack in under 5 minutes.
## Prerequisites

- Python 3.12+
- uv (Python package manager): `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Anthropic API key: set `ANTHROPIC_API_KEY` in your environment
## 1. Install Dependencies

```bash
git clone https://github.com/rysweet/agent-kgpacks.git
cd agent-kgpacks
uv sync

# To build packs from web content, also install build extras:
# uv sync --extra build
```
## 2. Set Your API Key

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```
## 3. Build a Pack

Build the Go expert pack in test mode (fetches a small subset of URLs for speed):

```bash
echo "y" | uv run python scripts/build_go_pack.py --test-mode
```
This will:

- Read URLs from `data/packs/go-expert/urls.txt`
- Fetch each page and extract text content
- Run LLM extraction to identify entities and relationships
- Generate BGE embeddings for all sections
- Store everything in a LadybugDB graph database at `data/packs/go-expert/pack.db`
- Write `manifest.json` with pack metadata
> **Build time:** Test mode builds in 5-10 minutes. Full builds fetch all URLs and take 3-5 hours depending on pack size.
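The build stages above can be sketched end to end. This is an illustrative Python sketch, not the actual build script: the function names, the manifest fields, and the assumption that `urls.txt` holds one URL per line with `#` comments are all hypothetical.

```python
def read_urls(text: str) -> list[str]:
    """Parse a urls.txt file: one URL per line, '#' starts a comment (assumed format)."""
    urls = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            urls.append(line)
    return urls


def build_pack(urls_txt: str, test_mode: bool = False, test_limit: int = 2) -> dict:
    """Stand-in for the fetch -> extract -> embed -> store pipeline."""
    urls = read_urls(urls_txt)
    if test_mode:
        urls = urls[:test_limit]  # test mode fetches only a small subset for speed
    # A real build would fetch each URL, run LLM extraction, embed sections,
    # and write them to the pack database; here we only report what would run.
    return {"pack": "go-expert", "pages_fetched": len(urls), "test_mode": test_mode}


sample = """# Go documentation sources
https://go.dev/doc/effective_go
https://go.dev/ref/spec
https://go.dev/blog/goroutines
"""
print(build_pack(sample, test_mode=True))
```

Running without `test_mode` would report all three URLs; the test-mode cap is what keeps the quick-start build short.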
## 4. Query the Pack

Ask a question from the command line:

```bash
uv run wikigr query "What is goroutine scheduling?" --pack go-expert
```
Or use the Python API:

```python
from wikigr.agent.kg_agent import KnowledgeGraphAgent

agent = KnowledgeGraphAgent(
    db_path="data/packs/go-expert/pack.db",
    use_enhancements=True,
)
result = agent.query("What is goroutine scheduling?")
print(result["answer"])
print(f"Sources: {result['sources']}")
```
Or use the context manager form:

```python
from wikigr.agent.kg_agent import KnowledgeGraphAgent

with KnowledgeGraphAgent(
    db_path="data/packs/go-expert/pack.db",
    use_enhancements=True,
) as agent:
    result = agent.query("What is goroutine scheduling?")
    print(result["answer"])
    print(f"Sources: {result['sources']}")
```
The agent will:
- Embed your question using the same model used during ingestion
- Search the vector index for relevant sections
- Check confidence -- if similarity is too low, Claude answers from its own knowledge
- If confidence is sufficient, retrieve multiple documents, rerank by graph authority, and synthesize an answer with source citations
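The confidence check in that flow can be sketched as a simple gate. The threshold value and the hit structure below are assumptions for illustration, not the agent's actual internals:

```python
CONFIDENCE_THRESHOLD = 0.55  # hypothetical cosine-similarity cutoff


def choose_answer_path(hits: list[dict], threshold: float = CONFIDENCE_THRESHOLD) -> str:
    """Decide which path the agent would take for a set of vector-search hits."""
    if not hits or max(h["similarity"] for h in hits) < threshold:
        return "model-knowledge"  # low confidence: Claude answers from its own knowledge
    return "pack-retrieval"       # high confidence: retrieve, rerank, synthesize with citations


print(choose_answer_path([{"similarity": 0.81}, {"similarity": 0.42}]))  # pack-retrieval
print(choose_answer_path([{"similarity": 0.30}]))                        # model-knowledge
```

The gate is what keeps the agent from forcing weakly related pack content into an answer when the pack simply does not cover the question.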
## 5. Run an Evaluation

Evaluate the pack against the training baseline:

```bash
uv run python scripts/eval_single_pack.py go-expert --sample 5
```
This runs 5 questions from the pack's `eval/questions.jsonl` in two conditions:
- Training: Claude answers without any pack context
- Pack: Claude answers with full KG Agent retrieval pipeline
Output looks like:

```text
Pack: go-expert
Questions: 5

Condition   Avg Score   Accuracy
─────────   ─────────   ────────
Training    8.7/10      90%
Pack        9.6/10      100%

Delta: +10pp
```
> **Sample size:** Use `--sample 5` for a quick check (~$0.15). For reliable results, use `--sample 25` or omit the flag to run all questions.
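The two-condition comparison boils down to averaging judge scores per condition and taking the difference. A minimal sketch, with made-up judge scores chosen to mirror the sample output above (the real script also reports accuracy, whose delta is the +10pp figure):

```python
def summarize(scores: dict[str, list[float]]) -> dict[str, float]:
    """Average 0-10 judge scores per condition and compute the pack's score delta."""
    avg = {cond: sum(vals) / len(vals) for cond, vals in scores.items()}
    avg["delta"] = avg["pack"] - avg["training"]
    return avg


judged = {
    "training": [9.0, 8.0, 9.0, 9.0, 8.5],   # Claude without pack context
    "pack":     [10.0, 9.5, 9.5, 10.0, 9.0],  # Claude with KG Agent retrieval
}
print(summarize(judged))
```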
## 6. Install as a Claude Code Skill

Each pack can be installed as a Claude Code skill that auto-activates when you ask about that domain:

```bash
# Install skills for all packs
uv run python scripts/install_pack_skills.py

# Or use the /kg-pack command (if the skill is already installed)
# /kg-pack install go-expert
```

This creates `.claude/skills/go-expert/SKILL.md`, which tells Claude how to query the Go knowledge graph whenever you ask Go questions.
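What the installer writes can be sketched roughly as follows. The frontmatter fields and the file body are assumptions modeled on common Claude Code skill conventions, not the actual installer output:

```python
import tempfile
from pathlib import Path


def write_skill(root: Path, pack: str, description: str) -> Path:
    """Write a minimal SKILL.md for a pack under .claude/skills/<pack>/ (sketch)."""
    skill_dir = root / ".claude" / "skills" / pack
    skill_dir.mkdir(parents=True, exist_ok=True)
    path = skill_dir / "SKILL.md"
    path.write_text(
        "---\n"
        f"name: {pack}\n"
        f"description: {description}\n"
        "---\n\n"
        f"Query the {pack} pack with:\n\n"
        f'    uv run wikigr query "<question>" --pack {pack}\n'
    )
    return path


with tempfile.TemporaryDirectory() as tmp:
    p = write_skill(Path(tmp), "go-expert", "Answer Go questions from the go-expert pack.")
    print(p.name)  # SKILL.md
```

The key idea is only that the skill file names the pack, describes when it applies, and tells Claude how to run a query against it.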
## 7. Use the /kg-pack Skill Manager

Install the /kg-pack skill in any Claude Code project:

```bash
mkdir -p /your/project/.claude/skills/kg-pack
cp skills/kg-pack/SKILL.md /your/project/.claude/skills/kg-pack/
```

Then in Claude Code:

```text
/kg-pack list                            # See all 49 available packs
/kg-pack install rust-expert             # Install Rust expertise
/kg-pack build "WebAssembly components"  # Build a new pack from scratch
/kg-pack query go-expert "how do goroutines work?"
```
## What Just Happened?

- **Build:** The build script fetched Go documentation pages, extracted structured knowledge (entities, relationships, facts), generated vector embeddings, and stored everything in a LadybugDB graph database.
- **Query:** The KG Agent embedded your question, searched the graph for relevant content, applied enhancement modules (reranking, multi-doc synthesis), and used Claude to synthesize a grounded answer.
- **Eval:** The evaluation script asked the same questions to Claude with and without pack context, then used a judge model to score both answers against ground truth.
- **Skill install:** The installer generated a SKILL.md file that tells Claude how and when to query the pack. Skills auto-activate in future sessions when you mention the domain.
## Next Steps
- Pack Catalog -- Browse all 49 available packs with stats, eval scores, and install commands
- Tutorial -- Full lifecycle walkthrough including domain selection, URL curation, and result interpretation
- Build a Pack -- Step-by-step guide for building packs from scratch
- Run Evaluations -- Understanding the evaluation framework