2 releases

| Version | Date |
|---|---|
| 0.1.2 | Mar 14, 2025 |
| 0.1.0 | Mar 14, 2025 |

Uses Rust 2024 edition
ask_llm
Layer for LLM requests, generic over models and providers
Usage
Provides two simple primitives: the `oneshot` and `conversation` functions, which follow the standard logic for LLM interactions that most providers share.
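A hypothetical sketch of what the two primitives' shape might look like; the names `oneshot` and `conversation` come from the crate, but the signatures, the `Reply` type, and the stubbed bodies here are illustrative assumptions so the example runs without a provider:

```rust
// Sketch only: the real crate would dispatch these to an LLM provider.
// `Reply` and both signatures are assumptions, not the crate's actual API.
struct Reply(String);

// A oneshot request: a single prompt in, a single reply out.
fn oneshot(prompt: &str) -> Reply {
    // Stubbed: a real implementation would call the chosen provider here.
    Reply(format!("echo: {prompt}"))
}

// A conversation carries prior (user, assistant) turns plus the new prompt.
fn conversation(history: &[(&str, &str)], prompt: &str) -> Reply {
    // Stubbed: a real implementation would send the full transcript.
    Reply(format!("{} prior turns, then: {prompt}", history.len()))
}

fn main() {
    println!("{}", oneshot("What is 2 + 2?").0);
    println!("{}", conversation(&[("hi", "hello")], "and now?").0);
}
```

The split mirrors the two call patterns most provider APIs share: stateless single-turn requests and stateful multi-turn chats.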
The model is then chosen automatically based on whether we care about cost, speed, or quality. Currently this is expressed by choosing `Model::{Fast, Medium, Slow}`, from which a concrete model is picked, as hardcoded in the current implementation.
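The selection scheme above can be sketched as follows. The `Model` enum with `Fast`/`Medium`/`Slow` variants is from the crate; the `concrete_model` helper and the placeholder model identifiers are assumptions standing in for whatever mapping the crate hardcodes:

```rust
// The caller states a preference; a concrete model id is hardcoded behind it.
#[derive(Debug, Clone, Copy)]
enum Model {
    Fast,   // prioritize cost and speed
    Medium, // balanced
    Slow,   // prioritize quality
}

// Hypothetical helper: the real crate hardcodes its own mapping, and these
// model identifiers are placeholders, not real provider ids.
fn concrete_model(m: Model) -> &'static str {
    match m {
        Model::Fast => "provider/fast-model",
        Model::Medium => "provider/medium-model",
        Model::Slow => "provider/quality-model",
    }
}

fn main() {
    for m in [Model::Fast, Model::Medium, Model::Slow] {
        println!("{m:?} -> {}", concrete_model(m));
    }
}
```

Hardcoding the mapping keeps the caller's code stable while letting the crate swap in better models per tier, which is also why minor version bumps can change behavior (see the Semver note below).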
Semver
Note that due to specifics of the implementation, minor version bumps can change effective behavior by changing which model processes a request. Only changes to the API boundary will be marked with major versions.
This repository follows my best practices and Tiger Style (except "proper capitalization for acronyms": VsrState, not VSRState; and formatting).
License
Licensed under Blue Oak 1.0.0.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.