aicommit
A CLI tool that generates concise and descriptive git commit messages using LLMs (Large Language Models).
Features
Implemented Features
- ✅ Uses LLMs to generate meaningful commit messages from your changes
- ✅ Supports multiple LLM providers (OpenRouter, Ollama)
- ✅ Custom API keys for third-party services via the OpenRouter integrations page (e.g. Google AI Studio): go to https://openrouter.ai/settings/integrations and paste a key from any of: AI21, Amazon Bedrock, Anthropic, AnyScale, Avian.io, Cloudflare, Cohere, DeepInfra, DeepSeek, Fireworks, Google AI Studio, Google Vertex, Hyperbolic, Infermatic, Inflection, Lambda, Lepton, Mancer, Mistral, NovitaAI, OpenAI, Perplexity, Recursal, SambaNova, SF Compute, Together, xAI
- ✅ Fast and efficient - works directly from your terminal
- ✅ Easy configuration and customization
- ✅ Transparent token usage and cost tracking
- ✅ Version management with automatic incrementation
- ✅ Version synchronization with Cargo.toml
- ✅ Provider management (add, list, set active)
- ✅ Interactive configuration setup
- ✅ Configuration file editing
- ✅ Auto push functionality (aicommit --push)
- ✅ Auto pull functionality (aicommit --pull)
- ✅ Interactive commit message generation (aicommit --dry-run)
- ✅ Basic .gitignore checks and management (creates ~/.default_gitignore and uses it as a template when the current directory has no .gitignore)
- ✅ Watch mode (aicommit --watch 1m)
- ✅ Watch with edit delay (aicommit --watch 1m --wait-for-edit 30s)
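The .gitignore bootstrap described above can be sketched in plain shell. This is a simplified illustration; the file names here are local stand-ins for the real ~/.default_gitignore, and the tool's actual logic may differ:

```shell
# Create a scratch directory to illustrate the bootstrap step.
mkdir -p gitignore-demo && cd gitignore-demo

# Stand-in for ~/.default_gitignore (the template the tool maintains).
printf 'target/\n*.log\n' > default_gitignore

# If the directory has no .gitignore, seed it from the template.
[ -f .gitignore ] || cp default_gitignore .gitignore

cat .gitignore
```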
Planned Features
- 🚧 Tests for each feature to prevent breaking changes
- 🚧 Split commits by file (aicommit --by-file)
- 🚧 Split commits by feature (aicommit --by-feature)
- 🚧 Version management for multiple languages (package.json, requirements.txt, etc.)
- 🚧 Branch safety checks for push operations
Legend:
- ✅ Implemented
- 🚧 Planned
- 🧪 Has tests
Installation
Install via cargo:
cargo install aicommit
Or build from source:
git clone https://github.com/suenot/aicommit
cd aicommit
cargo install --path .
Quick Start
- Add a provider:
aicommit --add
- Make some changes to your code
- Create a commit:
aicommit
Provider Management
List all configured providers:
aicommit --list
Set active provider:
aicommit --set <provider-id>
Version Management
Automatically increment version in a file before commit:
aicommit --version-file "./version" --version-iterate
Synchronize version with Cargo.toml:
aicommit --version-file "./version" --version-cargo
Both operations can be combined:
aicommit --version-file "./version" --version-cargo --version-iterate
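The --version-iterate step amounts to bumping the last numeric component of the version file before committing. A rough shell equivalent, assuming a MAJOR.MINOR.PATCH version file (the real tool does this for you):

```shell
# Start from a known version (illustrative value).
echo "0.1.15" > ./version

# Split off the patch component and increment it.
ver=$(cat ./version)
patch=${ver##*.}          # "15"
base=${ver%.*}            # "0.1"
echo "${base}.$((patch + 1))" > ./version

cat ./version             # now reads 0.1.16
```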
Configuration
The configuration file is stored at ~/.aicommit.json. You can edit it directly with:
aicommit --config
Provider Configuration
Each provider can be configured with the following settings:
- max_tokens: Maximum number of tokens in the response (default: 50)
- temperature: Controls randomness in the response (0.0-1.0, default: 0.3)
For OpenRouter, token costs are automatically fetched from their API. For Ollama, you can specify your own costs if you want to track usage.
Supported LLM Providers
OpenRouter
{
"providers": [{
"id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "openrouter",
"api_key": "sk-or-v1-...",
"model": "mistralai/mistral-tiny",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.25,
"output_cost_per_1k_tokens": 0.25
}],
"active_provider": "550e8400-e29b-41d4-a716-446655440000"
}
Ollama
{
"providers": [{
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"provider": "ollama",
"url": "http://localhost:11434",
"model": "llama2",
"max_tokens": 50,
"temperature": 0.3,
"input_cost_per_1k_tokens": 0.0,
"output_cost_per_1k_tokens": 0.0
}],
"active_provider": "67e55044-10b1-426f-9247-bb680e5fe0c8"
}
Recommended Providers through OpenRouter
- Google AI Studio - 1,000,000 tokens for free - "google/gemini-2.0-flash-exp:free"
- DeepSeek - "deepseek/deepseek-chat"
Usage Information
When generating a commit message, the tool will display:
- Number of tokens used (input and output)
- Total API cost (calculated separately for input and output tokens)
Example output:
Generated commit message: Add support for multiple LLM providers
Tokens: 8↑ 32↓
API Cost: $0.0100
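The cost figure is straightforward arithmetic over the per-1k-token prices from the provider config. A quick sketch of the calculation, using the token counts and prices from the examples above:

```shell
in_tokens=8; out_tokens=32
in_price=0.25; out_price=0.25   # $ per 1k tokens, from the provider config

# cost = input_tokens/1000 * input_price + output_tokens/1000 * output_price
awk -v i="$in_tokens" -v o="$out_tokens" -v ip="$in_price" -v op="$out_price" \
  'BEGIN { printf "API Cost: $%.4f\n", i/1000*ip + o/1000*op }'
```

With these values the input side costs $0.002 and the output side $0.008, giving the $0.0100 shown above.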
You can have multiple providers configured and switch between them by changing the active_provider field to match the desired provider's id.
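Switching is just an edit to that one field; aicommit --set <provider-id> does it for you, but the equivalent by hand looks like this (using a local file with hypothetical provider ids for illustration; the real config lives at ~/.aicommit.json):

```shell
# Minimal stand-in for the real config file.
cfg=./aicommit.json
printf '%s' '{"providers":[{"id":"provider-a"},{"id":"provider-b"}],"active_provider":"provider-a"}' > "$cfg"

# Rewrite active_provider to point at the other provider's id.
python3 - "$cfg" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg["active_provider"] = "provider-b"
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF

grep '"active_provider": "provider-b"' "$cfg"
```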
License
This project is licensed under the MIT License - see the LICENSE file for details.
Watch Mode
The watch mode allows you to automatically commit changes at specified intervals. This is useful for:
- Automatic backups of your work
- Maintaining a detailed history of changes
- Not forgetting to commit your changes
Basic Watch Mode
aicommit --watch 1m # Check and commit changes every minute
aicommit --watch 30s # Check every 30 seconds
aicommit --watch 2h # Check every 2 hours
Watch with Edit Delay
You can add a delay after the last edit before committing. This helps avoid creating commits while you're still actively editing files:
aicommit --watch 1m --wait-for-edit 30s # Check every minute, but wait 30s after last edit
Time Units
- s: seconds
- m: minutes
- h: hours
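A hypothetical helper showing how these suffixes map to seconds (the parsing inside aicommit may differ):

```shell
# Convert an interval like "30s", "1m", or "2h" to seconds.
to_seconds() {
  n=${1%?}                 # numeric part: strip the final unit character
  case $1 in
    *s) echo "$n" ;;
    *m) echo $((n * 60)) ;;
    *h) echo $((n * 3600)) ;;
    *)  echo "unsupported unit" >&2; return 1 ;;
  esac
}

to_seconds 30s   # 30
to_seconds 1m    # 60
to_seconds 2h    # 7200
```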
Additional Options
You can combine watch mode with other flags:
# Watch with auto-push
aicommit --watch 1m --push
# Watch with version increment
aicommit --watch 1m --version-file version --version-iterate
# Interactive mode with watch
aicommit --watch 1m --dry-run
Tips
- Use shorter intervals (30s-1m) for active development sessions
- Use longer intervals (5m-15m) for longer coding sessions
- Add --wait-for-edit when you want to avoid partial commits
- Use Ctrl+C to stop watching