# http-serve

Rust helpers for serving HTTP GET and HEAD responses with hyper 1.x and tokio.

This crate supplies two ways to respond to HTTP GET and HEAD requests:

- the `serve` function can be used to serve an `Entity`, a trait representing reusable, byte-rangeable HTTP entities. An `Entity` must be able to produce exactly the same data on every call, know its size in advance, and be able to produce portions of the data on demand.
- the `streaming_body` function can be used to add a body to an otherwise-complete response. If a body is needed (on `GET` rather than `HEAD` requests), it returns a `BodyWriter` (which implements `std::io::Write`). The caller should produce the complete body or call `BodyWriter::abort`, causing the HTTP stream to terminate abruptly. (A sketch follows this list.)
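
As a rough sketch of the `streaming_body` side: the exact shape of `streaming_body(&req).build()` and the `http_serve::Body` response type are assumptions here, so check the docs for the release you depend on.

```rust
use std::io::Write as _; // BodyWriter implements std::io::Write

// Sketch only: assumes streaming_body(&req).build() yields a Response plus an
// Option<BodyWriter> that is Some on GET and None on HEAD.
fn hello<B>(req: &http::Request<B>) -> Result<http::Response<http_serve::Body>, std::io::Error> {
    let (mut resp, writer) = http_serve::streaming_body(req).build();
    resp.headers_mut().insert(
        http::header::CONTENT_TYPE,
        http::header::HeaderValue::from_static("text/plain"),
    );
    if let Some(mut w) = writer {
        // Produce the complete body...
        writeln!(&mut w, "hello from streaming_body")?;
        // ...or call w.abort() to terminate the HTTP stream abruptly.
    }
    Ok(resp)
}
```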

It supplies a static file `Entity` implementation and a (currently Unix-only) helper for serving a full directory tree from the local filesystem, including automatically looking for `.gz`-suffixed files when the client advertises `Accept-Encoding: gzip`.

## Why two ways?
They have pros and cons. This table shows some of them:

|                                       | `serve` | `streaming_body` |
|---------------------------------------|---------|------------------|
| automatic byte range serving          | yes     | no [1]           |
| backpressure                          | yes     | no [2]           |
| conditional GET                       | yes     | no [3]           |
| sends first byte before length known  | no      | yes              |
| automatic gzip content encoding       | no [4]  | yes              |

[1]: `streaming_body` always sends the full body. Byte range serving wouldn't make much sense with its interface. The application will generate all the bytes every time anyway, and `http-serve`'s buffering logic would have to be complex to handle multiple ranges well.

[2]: `streaming_body` is often appended to while holding a lock or open database transaction, where backpressure is undesired. It'd be possible to add support for "wait points" where the caller explicitly wants backpressure. This would make it more suitable for large streams, even infinite streams like Server-sent events.

[3]: `streaming_body` doesn't yet support generating etags or honoring conditional GET requests. PRs welcome!

[4]: `serve` doesn't automatically apply `Content-Encoding: gzip` because the content encoding is a property of the entity you supply. The entity's etag, length, and byte range boundaries must match the encoding. You can use the `http_serve::should_gzip` helper to decide between supplying a plain or gzipped entity. `serve` could automatically apply the related `Transfer-Encoding: gzip` where the browser requests it via `TE: gzip`, but common browsers have chosen to avoid requesting or handling `Transfer-Encoding`.
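
A minimal sketch of that decision point, assuming `should_gzip` takes the request's header map; `plain` and `gzipped` are hypothetical stand-ins for two entities you have already built for the same resource:

```rust
use http::Request;

// Sketch only: pick which prebuilt variant of a resource to hand to serve().
// `plain` and `gzipped` are hypothetical, pre-constructed entities; should_gzip
// is assumed to inspect the request's Accept-Encoding header.
fn choose_variant<B, E>(req: &Request<B>, plain: E, gzipped: E) -> E {
    if http_serve::should_gzip(req.headers()) {
        // The gzipped entity's etag, length, and byte ranges describe the
        // gzipped bytes, and it adds Content-Encoding: gzip itself.
        gzipped
    } else {
        plain
    }
}
```

The chosen entity then goes to `serve` as usual.
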
See the documentation for more.

There's a built-in `Entity` implementation, `ChunkedReadFile`. It serves static files from the local filesystem, reading chunks in a separate thread pool to avoid blocking the tokio reactor thread.
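
Wiring it up looks roughly like the crate's `serve_file` example; in this sketch the `ChunkedReadFile::new` arguments and the `http_serve::Body` response type are assumptions, so check the docs for the release you're on.

```rust
use http::{header, HeaderMap, Request, Response};

type BoxedError = Box<dyn std::error::Error + Send + Sync>;

// Sketch only: serve a single static file, letting http_serve::serve answer
// HEAD vs GET, Range, If-None-Match, and If-Modified-Since.
async fn serve_one_file<B>(
    req: Request<B>,
    path: &'static str,
) -> Result<Response<http_serve::Body>, BoxedError> {
    // Open the file off the async reactor; std::fs::File::open blocks.
    let file = tokio::task::spawn_blocking(move || std::fs::File::open(path)).await??;

    // Extra response headers (e.g. Content-Type) travel with the entity.
    let mut headers = HeaderMap::new();
    headers.insert(header::CONTENT_TYPE, header::HeaderValue::from_static("text/plain"));

    let entity = http_serve::ChunkedReadFile::new(file, headers)?;
    Ok(http_serve::serve(entity, &req))
}
```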

You're not limited to the built-in entity type(s), though. You could supply your own that do anything you desire:

- bytes built into the binary via `include_bytes!`.
- bytes retrieved from another HTTP server or network filesystem.
- memcached-based caching of another entity.
- anything else for which it's cheaper to compute the etag, size, and a byte range than the entirety of the data. (See moonfire-nvr's logic for generating `.mp4` files to represent arbitrary time ranges.)

`http_serve::serve` is similar to golang's `http.ServeContent`. It was extracted from moonfire-nvr's `.mp4` file serving.

## Examples

- Serve a single file: `$ cargo run --example serve_file /usr/share/dict/words`
- Serve a directory tree: `$ cargo run --features dir --example serve_dir .`

## Authors

See the AUTHORS file for details.

## License

Your choice of MIT or Apache; see LICENSE-MIT.txt or LICENSE-APACHE, respectively.