Shelf: AI-based command-line tools for developers

Shelf is a command-line tool for managing dotconf files and generating git commit messages using AI. It provides a simple interface to track dotfiles across your system and integrates with multiple AI providers to automatically generate meaningful commit messages through git hooks. With support for local and cloud-based AI models, Shelf makes both dotfile management and git commits effortless.

Features

  • Track dotconf files recursively from anywhere in your file system
  • List all tracked dotfiles
  • Recursively remove dotconf files from the database
  • AI-powered git commit message generation with multiple providers:
    • Groq
    • Google Gemini
    • Anthropic Claude
    • OpenAI
    • XAI Grok
    • Ollama (local)
  • Git hooks integration for automatic commit message generation

Installation

To install Shelf, you need to have Rust and Cargo installed on your system. If you don't have them, you can install them from rustup.rs.

Once you have Rust and Cargo installed, you can build and install Shelf by running the following command from the root of the project:

cargo install --path .

Usage

Shelf provides commands for both dotfile management and git integration:

Dotfile Management

# Add a new dotfile to track
slf dotconf cp ~/.bashrc

# List all tracked dotfiles
slf dotconf ls

# Remove a dotfile from tracking
slf dotconf rm ~/.bashrc

# Interactive selection of dotfiles to track
slf dotconf suggest -i

# Show help
slf --help

Each command can be run with -h or --help for more information.

Git AI Integration

The gitai subcommand provides AI-powered git commit message generation:

# Generate commit message for staged changes
slf gitai commit

# Install git hook for automatic message generation
slf gitai commit --install-hook

# Remove git hook
slf gitai commit --uninstall-hook

# Configure AI provider
slf gitai config set provider openai
slf gitai config set openai_api_key "your-api-key"

# Use specific provider for one commit
slf gitai commit -p openai

# List current configuration
slf gitai config list

The GitAI features support multiple AI providers:

  • Groq (default): GroqCloud-based models
  • Google Gemini: Cloud-based using Gemini models
  • OpenAI: Cloud-based using GPT models
  • Anthropic Claude: Cloud-based using Claude models
  • XAI Grok: Cloud-based using Grok models
  • Ollama: Local, privacy-friendly AI using models like Qwen

The git hook integrates seamlessly with your normal git workflow:

# Hook will automatically generate message if none provided
git commit

# Your message takes precedence
git commit -m "feat: your message"

# AI helps with amending
git commit --amend
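The exact script installed by --install-hook is not shown in this README, but a prepare-commit-msg hook with the behavior described above might look roughly like the following sketch (the hook body is an assumption; .git/hooks is git's standard hook location):

```shell
# Hypothetical sketch of the hook that `slf gitai commit --install-hook`
# might write; the real script's contents may differ.
mkdir -p .git/hooks
cat > .git/hooks/prepare-commit-msg <<'EOF'
#!/bin/sh
# $1 = path to the commit message file
# $2 = message source; empty unless the user passed -m, a template,
#      or git is preparing a merge/squash message
if [ -z "$2" ]; then
  # No user-supplied message: let the AI draft one from staged changes.
  slf gitai commit > "$1"
fi
EOF
chmod +x .git/hooks/prepare-commit-msg
```

Because the hook only acts when the message source is empty, a message passed with git commit -m is left untouched, matching the precedence described above.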

Shell Completion

Shelf supports generating shell completion scripts for various shells. You can generate these scripts using the completion subcommand:

# Generate completion script for Bash
slf completion bash > slf.bash

# Generate completion script for Zsh
slf completion zsh > _slf

# Generate completion script for Fish
slf completion fish > slf.fish

To use the completion scripts:

  • For Bash, add the following line to your ~/.bashrc:

    source /path/to/slf.bash
    
  • For Zsh, place the _slf file in a directory on your fpath (e.g. ~/.zfunc), then add fpath=(~/.zfunc $fpath) followed by autoload -Uz compinit && compinit to your ~/.zshrc.

  • For Fish, place the slf.fish file in ~/.config/fish/completions.

After setting up the completion script, restart your shell or source the respective configuration file to enable completions for the slf command.

Configuration

GitAI settings are stored in ~/.config/shelf/gitai.json (or $XDG_CONFIG_HOME/shelf/gitai.json if set). You can configure:

  • provider: AI provider to use (ollama, openai, anthropic, gemini, groq)
  • ollama_host: Ollama server URL (default: http://localhost:11434)
  • ollama_model: Ollama model to use (default: qwen2.5-coder)
  • openai_api_key: OpenAI API key for GPT models

Example configuration:

{
  "provider": "ollama",
  "ollama_host": "http://localhost:11434",
  "ollama_model": "qwen2.5-coder"
}
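Since the configuration is plain JSON, the file can also be created by hand. A minimal sketch, assuming only the paths and keys documented above:

```shell
# Write a GitAI config, honoring $XDG_CONFIG_HOME as described above
# and falling back to ~/.config when it is unset.
CONF_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/shelf"
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/gitai.json" <<'EOF'
{
  "provider": "ollama",
  "ollama_host": "http://localhost:11434",
  "ollama_model": "qwen2.5-coder"
}
EOF
# Sanity-check that the file parses as JSON before relying on it.
python3 -m json.tool "$CONF_DIR/gitai.json" > /dev/null && echo "config OK"
```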

Development

To build the project locally:

cargo build

To run tests:

cargo test

To run the project directly without installing:

cargo run --bin slf -- [SUBCOMMAND]

Replace [SUBCOMMAND] with the command you want to run, such as dotconf or gitai.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
