
ask_llm


Layer for LLM requests, generic over models and providers

Usage

Provides two simple primitives:

The oneshot and conversation functions, which follow the standard logic for LLM interactions that most providers share.

The model is then chosen automatically based on how much we care about cost, speed, and quality. Currently this is expressed by choosing Model::{Fast/Medium/Slow}, each of which maps to a concrete model hardcoded in the current implementation.
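
For illustration, a minimal usage sketch follows. Only the oneshot/conversation names and the Model::{Fast, Medium, Slow} tiers come from this README; the async runtime, exact signatures, and plain-string message format below are assumptions made for the example, not the crate's documented API, so check docs.rs for the real signatures.

```rust
// Hypothetical usage sketch: signatures, async-ness, and message format are
// assumptions for illustration; only `oneshot`, `conversation`, and
// Model::{Fast, Medium, Slow} are named in the README.
use ask_llm::{conversation, oneshot, Model};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One-off request: pick a tier (cost/speed/quality trade-off) and a prompt;
    // the crate decides which concrete model and provider serve it.
    let answer = oneshot(Model::Fast, "Summarize Rust's borrow checker in one sentence.").await?;
    println!("{answer}");

    // Multi-turn request: pass the prior turns so the provider sees the context.
    let history = vec![
        "user: What is a lifetime in Rust?".to_string(),
        "assistant: A name for how long a reference stays valid.".to_string(),
        "user: Show a one-line example.".to_string(),
    ];
    let reply = conversation(Model::Medium, &history).await?;
    println!("{reply}");

    Ok(())
}
```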

Semver

Note that, due to specifics of the implementation, minor version bumps can change effective behavior by changing which model processes the request. Only changes to the public API boundary will be marked with major versions.


This repository follows my best practices and Tiger Style, except for "proper capitalization for acronyms" (this crate uses VsrState, not VSRState) and formatting.

License

Licensed under Blue Oak 1.0.0
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.
