# Documentation
## Installation

```shell
# Download from your purchase email
# Extract and move to PATH

# macOS/Linux
chmod +x logtorque
sudo mv logtorque /usr/local/bin/

# Windows (PowerShell as Admin)
Move-Item logtorque.exe C:\Windows\System32\

# Verify installation
logtorque --version
```
## Ollama Setup (Demo)
The Demo tier uses Ollama for local AI processing. Your logs never leave your machine. Paid tiers also support cloud LLMs.
### Install / Update Ollama

Always use the latest version of Ollama; older releases may fail with model errors.

```shell
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Windows
# Download from https://ollama.com/download
```
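After installing (on any platform), confirm the binary is on your PATH before moving on:

```shell
# Print the installed Ollama version
ollama --version
```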
### Recommended Models
| Model | Size | RAM Required | Best For |
|---|---|---|---|
| tinyllama | 637 MB | 4 GB | Quick test (low quality) |
| llama3.2:3b | 2 GB | 8 GB | Fast, lightweight analysis |
| llama3.2 | 4.7 GB | 16 GB | Balanced speed/quality |
| llama3.1:70b | 40 GB | 64 GB | Best quality (slow) |
Note: tinyllama is useful for quickly testing that the app works, but output quality is poor. Use llama3.2 or larger for real analysis.
### Pull a Model

```shell
# Recommended for most users
ollama pull llama3.2

# Start Ollama server (runs in background)
ollama serve
```
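Once the server is running, you can confirm it is reachable and see which models are installed via Ollama's local HTTP API (it listens on port 11434 by default):

```shell
# Check that the Ollama server is up and list installed models
curl -s http://localhost:11434/api/tags
```

If this returns a JSON object with a `models` array, logtorque should be able to talk to Ollama.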
### Hardware Requirements
- Minimum: 8 GB RAM, any modern CPU
- Recommended: 16 GB RAM, Apple Silicon or NVIDIA GPU
- GPU acceleration: NVIDIA (CUDA), Apple Silicon (Metal), AMD (ROCm on Linux)
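If you're unsure how much RAM your machine has, a quick check (the command differs by OS):

```shell
# Linux: total and available memory
free -h

# macOS: total memory in bytes
sysctl -n hw.memsize
```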
## Quick Start

```shell
# Analyze a log file
logtorque analyze /var/log/nginx/error.log

# Pipe from another command
docker logs my-container | logtorque analyze -

# From kubectl
kubectl logs pod-name | logtorque analyze -
```
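The trailing `-` tells logtorque to read from stdin, so any command that writes logs to stdout can feed it. For example, on a systemd host (assuming a service named `nginx` and `journalctl` available):

```shell
# Analyze the last hour of logs from a systemd service
journalctl -u nginx --since "1 hour ago" | logtorque analyze -
```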
## Configuration

```shell
# Set default LLM provider
logtorque config set provider ollama

# Set API key (for cloud providers)
logtorque config set openai-key sk-xxx

# View current config
logtorque config list
```
## LLM Providers
| Provider | Setup | Cost |
|---|---|---|
| Ollama | `ollama pull llama3.2` | Free (local) |
| OpenAI | Set API key | Pay per token |
| Anthropic | Set API key | Pay per token |
## Output Formats

```shell
# Plain text (default)
logtorque analyze error.log

# JSON
logtorque analyze error.log --format json

# Markdown
logtorque analyze error.log --format markdown
```
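JSON output is handy for scripting. For instance, you can pipe it through `jq` to pretty-print it or extract fields (the exact field names depend on logtorque's JSON schema, which isn't shown here):

```shell
# Pretty-print the JSON analysis
logtorque analyze error.log --format json | jq .
```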
## Watch Mode
Available in Pro and Team tiers.
```shell
# Monitor a log file in real-time
logtorque watch /var/log/app.log

# With alerts
logtorque watch /var/log/app.log --alert-level error
```
## Troubleshooting

### "Ollama not found"
- Install Ollama: https://ollama.com
- Pull a model: `ollama pull llama3.2`
### "API key not set"

- Run: `logtorque config set openai-key YOUR_KEY`
### "Permission denied"
- Check file permissions on the log file
- Try with sudo if needed