hermes-usage-insights (0.1.0b1)
Hermes Usage Insights
Hermes Usage Insights is a small Python project for tracking, searching, reporting on, and visualizing Hermes token usage over time.
It is designed to be simple to hook into hermes-agent without depending on any specific internal database layout. The intended integration point is structured JSONL event emission.
Features
- SQLite-backed storage
- JSONL ingestion API for Hermes-side hooks/wrappers
- Token breakdowns by tool, skill, session, model, and day
- Free-text search over notes, metadata, tool names, skills, models, and session ids
- PNG graph generation for daily usage totals
- CSV export for external analysis
- Typed Python package with unit tests
- No-source-change Hermes importer and watch mode for always-on gateway collection
Quick start
cd /opt/hermes-runtime/projects/hermes-usage-insights
/opt/hermes-runtime/tools/mise/use-mise.sh uv sync
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db demo-data
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report summary
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report breakdown --by tool
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db plot daily-tokens --output artifacts/daily.png
Install from the Forgejo PyPI registry
The package is published to the Forgejo PyPI-compatible registry on code.mehalter.com when a release is published.
Recommended installation pattern:
pip install \
--index-url https://<username>:<token>@code.mehalter.com/api/packages/clawlter/pypi/simple \
--no-deps \
hermes-usage-insights==0.1.0b1
Notes:
- Use a Forgejo personal access token in place of your password when appropriate.
- --index-url is preferred over --extra-index-url to avoid dependency-confusion risk.
- 0.1.0b1 is the first beta release and should be treated as pre-1.0 software.
See CHANGELOG.md for release history.
Event schema
Each JSONL line should be an object with this shape:
{
"timestamp": "2026-04-08T16:30:00Z",
"session_id": "session-1",
"conversation_id": "conv-1",
"provider": "openai",
"model": "gpt-5.4",
"role": "assistant",
"tool_name": "browser_navigate",
"skill_name": "mise-tool-management",
"source": "tool_call",
"prompt_tokens": 120,
"completion_tokens": 30,
"total_tokens": 150,
"cost_usd": 0.0125,
"notes": "initial navigation",
"metadata": {
"path": "/chat"
}
}
Required fields: timestamp, session_id
Recommended fields: provider, model, tool_name, skill_name, source, prompt_tokens, completion_tokens, total_tokens, cost_usd, notes, metadata
If total_tokens is omitted, it is computed from prompt_tokens + completion_tokens.
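As a sketch of producing conforming events, the helper below builds one event dict and appends it as a JSONL line. The make_event helper is hypothetical (not part of this package); it mirrors the documented fallback of deriving a missing total_tokens from prompt_tokens + completion_tokens.

```python
import json
from datetime import datetime, timezone

def make_event(session_id, prompt_tokens=0, completion_tokens=0,
               total_tokens=None, **extra):
    # Hypothetical helper: builds one JSONL-ready event dict.
    # Mirrors the documented fallback: a missing total_tokens is
    # derived from prompt_tokens + completion_tokens.
    event = {
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "session_id": session_id,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": (total_tokens if total_tokens is not None
                         else prompt_tokens + completion_tokens),
    }
    event.update(extra)
    return event

# Append one event per line, ready for `hui ingest-jsonl`.
line = json.dumps(make_event("session-1", prompt_tokens=120,
                             completion_tokens=30,
                             model="gpt-5.4", source="model_response"))
with open("events.jsonl", "a", encoding="utf-8") as fh:
    fh.write(line + "\n")
```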
CLI
Initialize database:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db init
Ingest JSONL:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db ingest-jsonl path/to/events.jsonl
Summary report:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report summary
Breakdown report:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report breakdown --by tool
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report breakdown --by skill
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report breakdown --by session
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report breakdown --by model
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db report breakdown --by day
Search:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db search "failing test"
Plot daily totals:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db plot daily-tokens --output artifacts/daily.png
Export CSV:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui --db usage.db export csv --output artifacts/usage.csv
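Once exported, the CSV can be analyzed with plain stdlib tooling. The column names below ("model", "total_tokens") are assumptions for illustration; check the header row of your actual export before relying on them.

```python
import csv
from collections import defaultdict

def tokens_by_model(path):
    # Sum total tokens per model from an exported CSV.
    # Column names are assumed, not confirmed against the exporter.
    totals = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            totals[row.get("model") or "unknown"] += int(row.get("total_tokens") or 0)
    return dict(totals)

# Tiny sample file to demonstrate the expected shape (not real export data).
with open("sample_usage.csv", "w", newline="", encoding="utf-8") as fh:
    csv.writer(fh).writerows([["model", "total_tokens"],
                              ["gpt-5.4", "150"],
                              ["gpt-5.4", "40"]])

totals = tokens_by_model("sample_usage.csv")
```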
Import from Hermes session artifacts without modifying Hermes source. In current Hermes deployments, exact session totals come from HERMES_HOME/state.db; HERMES_HOME/sessions/ still provides the session-key mapping and per-session transcripts:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui \
--db artifacts/hermes-usage.db \
import-hermes \
--hermes-home /media/data/volumes/hermes_agent/data
Run an always-on collector against Hermes Gateway session files:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui \
--db artifacts/hermes-usage.db \
watch-hermes \
--hermes-home /media/data/volumes/hermes_agent/data \
--interval-seconds 60
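If the collector should survive reboots, a systemd service is one option. This is an illustrative sketch, not a unit file shipped with the project; adjust the paths, database location, and service user for your host.

```ini
[Unit]
Description=hermes-usage-insights gateway collector
After=network.target

[Service]
WorkingDirectory=/opt/hermes-runtime/projects/hermes-usage-insights
ExecStart=/opt/hermes-runtime/tools/mise/use-mise.sh uv run hui \
    --db artifacts/hermes-usage.db \
    watch-hermes \
    --hermes-home /media/data/volumes/hermes_agent/data \
    --interval-seconds 60
Restart=on-failure

[Install]
WantedBy=multi-user.target
```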
Hooking into Hermes
There are now two supported integration paths:
- No-source-change collector mode: import Hermes' persisted session artifacts. On current Hermes builds this means exact cumulative session totals from HERMES_HOME/state.db, plus session-key mapping and tool-call transcripts from HERMES_HOME/sessions/.
- Higher-fidelity emitter mode: have Hermes emit JSONL usage events and ingest those.
For a running Hermes Gateway instance where you want the most efficient always-on setup without touching Hermes source code, use collector mode with watch-hermes.
A fuller integration guide is available in docs/integration.md and src/hermes_usage_insights/hooks.py.
Suggested emission points inside Hermes for the higher-fidelity JSONL path:
- after each model response with token counts
- after each tool call with tool name and associated token usage
- after each skill load/usage event if skill attribution is available
- after session completion for session-level rollups
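The emission points above can be sketched as a small JSONL emitter. The UsageEmitter class and its call sites are illustrative only, not an API shipped by Hermes or this package:

```python
import json

class UsageEmitter:
    # Illustrative sketch: a minimal JSONL emitter to call from the
    # suggested emission points (model response, tool call, skill use,
    # session rollup). Not a class provided by hermes-usage-insights.
    def __init__(self, path):
        self.path = path

    def emit(self, **fields):
        # One event object per line, matching the schema above.
        with open(self.path, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(fields) + "\n")

emitter = UsageEmitter("hermes-events.jsonl")

# After a model response, with token counts:
emitter.emit(timestamp="2026-04-08T16:30:00Z", session_id="session-1",
             source="model_response", model="gpt-5.4",
             prompt_tokens=120, completion_tokens=30, total_tokens=150)

# After a tool call, with tool attribution:
emitter.emit(timestamp="2026-04-08T16:30:05Z", session_id="session-1",
             source="tool_call", tool_name="browser_navigate", total_tokens=40)
```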
Recommended metadata to include if available:
- provider
- model
- tool_name
- skill_name
- message role
- session/conversation ids
- notes describing the event
- arbitrary metadata such as command names, file paths, or tool categories
Development
Install development dependencies:
/opt/hermes-runtime/tools/mise/use-mise.sh uv sync
Run tests:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run pytest -q
Format:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run black src tests scripts
/opt/hermes-runtime/tools/mise/use-mise.sh uv run isort src tests scripts
Check formatting without rewriting files:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run black --check src tests scripts
/opt/hermes-runtime/tools/mise/use-mise.sh uv run isort --check-only src tests scripts
Type-check:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run pyright
Build distributions:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run python -m build
/opt/hermes-runtime/tools/mise/use-mise.sh uv run twine check dist/*
Build the documentation website:
/opt/hermes-runtime/tools/mise/use-mise.sh uv run python scripts/build_docs_site.py
This generates a static site in site-build/ for publishing to mehalter.page.
Validate the docs UI, generated HTML, and anti-pattern checks:
/opt/hermes-runtime/tools/mise/use-mise.sh npm ci
/opt/hermes-runtime/tools/mise/use-mise.sh npm run docs:test
Forgejo Actions runs the Python validation pipeline via .forgejo/workflows/python-ci.yml, the docs-site validation + deployment pipeline via .forgejo/workflows/docs-site.yml, and the release publication pipeline via .forgejo/workflows/release-package.yml.
On pushes to main, the docs workflow rebuilds the site and publishes it directly to https://clawlter.mehalter.page/hermes-usage-insights/ using the Forgejo git-pages action and the built-in forge token. This removes the need to manage a repository pages branch or deployment webhook for the docs site.
Public documentation site:
https://clawlter.mehalter.page/hermes-usage-insights/
Project layout
- src/hermes_usage_insights/ — package code
- tests/ — unit tests
- docs/integration.md — Hermes integration guidance
- docs/schema.md — event schema reference
- docs/plan.md — implementation plan