AI-powered log analysis
from your terminal

Stop staring at walls of text at 2am. Get instant answers.

Get early access

New features, tips, and occasional discounts. No spam.

Currently in beta

Pricing structure coming soon.

It's this easy...

What you get

⚡

Instant analysis

Single binary, zero setup, works in moments

💬

Plain English

Not just "error detected" — explains why and what to do

🔒

Works offline

Local LLMs via Ollama, your logs never leave your machine

☁️

Cloud LLM support

Bring your own OpenAI or Anthropic API key

📁

Multiple formats

Nginx, Apache, Docker, Kubernetes, syslog, JSON logs

🔗

Pipe anything

kubectl logs pod | logtorque analyze -
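Because the tool reads from stdin, the `analyze -` form shown above composes with any command that emits text. A few hypothetical invocations (only `logtorque analyze -` appears on this page; the pod name, file path, and service name are placeholders):

```shell
# Stream Kubernetes pod logs straight into the analyzer
kubectl logs my-pod | logtorque analyze -

# Any text stream works the same way, e.g. the tail of an nginx error log
tail -n 500 /var/log/nginx/error.log | logtorque analyze -

# Or recent journald output for a service
journalctl -u nginx --since "1 hour ago" | logtorque analyze -
```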

FAQ

What log formats are supported?
Nginx, Apache, Docker, Kubernetes, syslog, and any text-based log file. JSON logs are also supported.
Do I need an API key?
No. You can use Ollama for free, local analysis. Cloud LLMs require an API key.
Can I use it offline?
Yes! With Ollama, everything runs locally. Your logs never leave your machine.
What's the difference between tiers?
Demo is free — basic analysis with Ollama, up to 5 findings. Lite adds cloud LLM backends (OpenAI, Anthropic) and deeper analysis. Pro adds unlimited findings, JSON/MD output, filters, temporal analysis, and custom prompts. Team adds watch mode, alerting, and multiple seats.

© 2026 SkinnyBinder

Made with “FortiGrit --coarse”™