Quickstart
From zero to ingesting events in under 2 minutes.
1. Install
Build from source:
$ git clone https://github.com/bravo1goingdark/keplor.git
$ cd keplor
$ cargo build --release
$ cp target/release/keplor /usr/local/bin/

2. Start the server
$ keplor run

Binds to 0.0.0.0:8080 with a local keplor.db SQLite database. No config needed.
Verify:
$ curl http://localhost:8080/health
{"status":"ok","version":"0.1.0","db":"connected"} 3. Send your first event
$ curl -X POST http://localhost:8080/v1/events \
    -H "Content-Type: application/json" \
    -d '{
      "model": "gpt-4o",
      "provider": "openai",
      "usage": {"input_tokens": 500, "output_tokens": 200},
      "latency": {"total_ms": 1200, "ttft_ms": 180},
      "user_id": "alice",
      "source": "my-app"
    }'

Response:
{
  "id": "01J5XQKR...",
  "cost_nanodollars": 6250000,
  "model": "gpt-4o",
  "provider": "openai"
}

Cost is auto-computed: 6,250,000 nanodollars = $0.00625.
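Storing costs as integer nanodollars avoids floating-point drift when summing many small per-request amounts; converting back to dollars is a single division by 10^9. A minimal sketch (the helper name is ours, not part of keplor):

```python
def nanodollars_to_dollars(nanodollars: int) -> float:
    """Convert an integer nanodollar amount to dollars (1 dollar = 1e9 nanodollars)."""
    return nanodollars / 1_000_000_000

# The cost_nanodollars value from the response above:
print(nanodollars_to_dollars(6_250_000))  # 0.00625
```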
4. Query it back
$ curl "http://localhost:8080/v1/events?user_id=alice&limit=5" Or from the CLI:
$ keplor query --user-id alice
ID            PROVIDER   MODEL    TOKENS   COST ($)
---------------------------------------------------------
01J5XQKR...   openai     gpt-4o   700      0.00625000

1 event(s)

5. Check stats
$ keplor stats
=== Keplor Storage Statistics ===
Database: keplor.db
Total events: 1
Database size: 0.1 MB

Next steps
Integration Guide — Python, Node.js, LiteLLM, S3/R2 setup, tiered retention.
API Reference — full endpoint docs.
Configuration — auth, archival, retention tiers, tuning.
CLI Reference — all commands.
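Putting it together: the ingest call from step 3 can be wrapped in a small client. A minimal sketch using only the Python standard library; the function names are ours, and only the POST /v1/events endpoint and payload schema shown above are assumed:

```python
import json
import urllib.request

KEPLOR_URL = "http://localhost:8080"  # default bind address from step 2

def build_event(model: str, provider: str, input_tokens: int,
                output_tokens: int, total_ms: int, ttft_ms: int,
                user_id: str, source: str) -> dict:
    """Assemble an event payload matching the schema from step 3."""
    return {
        "model": model,
        "provider": provider,
        "usage": {"input_tokens": input_tokens, "output_tokens": output_tokens},
        "latency": {"total_ms": total_ms, "ttft_ms": ttft_ms},
        "user_id": user_id,
        "source": source,
    }

def post_event(event: dict) -> dict:
    """POST the event to /v1/events and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{KEPLOR_URL}/v1/events",
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    event = build_event("gpt-4o", "openai", 500, 200, 1200, 180, "alice", "my-app")
    print(post_event(event)["cost_nanodollars"])
```

Requires a running `keplor run` instance; the server computes the cost, so the client never needs a pricing table.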