AWS Greengrass Core Rust SDK
Provides an idiomatic Rust wrapper around the AWS Greengrass Core C SDK, making it easier to write native Greengrass lambda functions in Rust.
Features
- Publishing to MQTT topics (see the sketch after this list)
- Registering handlers and receiving messages from MQTT topics
- Logging to the Greengrass logging backend via the log crate
- Acquiring Secrets
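As a rough sketch of the publishing feature, a message can be sent with IOTDataClient (the same client used in the testing example below). The iotdata module path and the exact return type are assumptions here, and the client is assumed to be used only after the Greengrass runtime has been initialized as shown in the Quickstart:

// Module path is an assumption; IOTDataClient and publish(topic, message)
// are shown in the testing example later in this README.
use aws_greengrass_core_rust::iotdata::IOTDataClient;

fn publish_greeting() {
    // Assumes the Greengrass runtime has already been initialized
    // via Initializer (see the Quickstart below).
    let client = IOTDataClient::default();
    match client.publish("my/topic", "hello from rust") {
        Ok(resp) => log::info!("published: {:?}", resp),
        Err(e) => log::error!("publish failed: {:?}", e),
    }
}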
Examples
- hello.rs - Simple example that initializes the Greengrass runtime and sends a message on a topic.
- echo.rs - Shows how to register a Handler with the Greengrass runtime and listen for messages.
- shadow.rs - Shows how to acquire and manipulate shadow documents.
- longlived.rs - Shows how to create a long-lived Greengrass lambda that exposes an HTTP endpoint.
- invoker.rs - Shows how to invoke one lambda from another lambda. Should be used with invokee.rs.
Building examples
Examples can be built following the directions in the Quickstart. Use cargo build --example <example> to build them.
Quickstart
Prerequisites and Requirements
- Install the Greengrass C SDK (fails on Mac OS X, see note below)
- Install Rust
- Install the AWS CLI
- A device running Greengrass version 1.6 or newer
- Create and configure a Greengrass group as described in Getting Started with AWS IoT Greengrass
Note for building on Mac OS X
The C Greengrass SDK fails to build on Mac OS X. The stubs directory contains a simple stubbed version of the SDK that can be used for compiling on Mac OS X.
To install:
cd stubs
mkdir build && cd build
cmake ..
make
make install
Create a new cargo project
cargo new --bin my_gg_lambda
Add the library to your Cargo.toml. The log crate is also needed for the logging macros:
[dependencies]
aws_greengrass_core_rust = "0.1.36"
log = "^0.4"
Edit main.rs
- Initialize logging and the Greengrass runtime, and register a Handler:
use aws_greengrass_core_rust::Initializer;
use aws_greengrass_core_rust::log as gglog;
use aws_greengrass_core_rust::handler::{Handler, LambdaContext};
use log::{info, error, LevelFilter};
use aws_greengrass_core_rust::runtime::Runtime;

struct HelloHandler;

impl Handler for HelloHandler {
    // Invoked for each message delivered to this lambda.
    fn handle(&self, ctx: LambdaContext) {
        info!("Received context: {:#?}", ctx);
        let msg = String::from_utf8(ctx.message).expect("Message was not a valid utf8 string");
        info!("Received event: {}", msg);
    }
}

pub fn main() {
    gglog::init_log(LevelFilter::Info);
    // Register the handler and start the Greengrass runtime.
    let runtime = Runtime::default().with_handler(Some(Box::new(HelloHandler)));
    if let Err(e) = Initializer::default().with_runtime(runtime).init() {
        error!("Initialization failed: {}", e);
        std::process::exit(1);
    }
}
Build and package your lambda function
cargo build --release
zip -j my_gg_lambda.zip ./target/release/my_gg_lambda
Note: The binaries must be built on the operating system and architecture you are deploying to. If you are not on Linux (Mac OS/Windows) you can use the Docker build:
./dockerbuild.sh cargo build
This will only work for x86 builds.
Deploy your lambda function
Using the values from when you created your Greengrass group:
aws lambda create-function \
--region aws-region \
--function-name my_gg_lambda_x86 \
--handler executable-name \
--role role-arn \
--zip-file fileb://file-name.zip \
--runtime arn:aws:greengrass:::runtime/function/executable
aws lambda publish-version \
--function-name my_gg_lambda_x86 \
--region aws-region
aws lambda create-alias \
--function-name my_gg_lambda_x86 \
--name alias-name \
--function-version version-number \
--region aws-region
Note: We recommend adding an architecture suffix like x86 or arm to the lambda name if you are planning on deploying to multiple architectures.
Configure your lambda function in your Greengrass group
Follow the instructions found in Configure the Lambda Function for AWS IoT Greengrass
Testing in your project
When the "mock" feature is enabled during the test phase, the various clients will:
- Allow outputs to be overridden
- Record the arguments that methods have been called with
Example
#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn test_publish_str() {
        let topic = "foo";
        let message = "this is my message";
        let mocks = MockHolder::default().with_publish_raw_outputs(vec![Ok(())]);
        let client = IOTDataClient::default().with_mocks(mocks);
        let response = client.publish(topic, message).unwrap();
        println!("response: {:?}", response);
        let PublishRawInput(raw_topic, raw_bytes, raw_read) =
            &client.mocks.publish_raw_inputs.borrow()[0];
        assert_eq!(raw_topic, topic);
    }
}
Building from source
Building
cargo build
Testing Mock feature
The examples will not build when the mock feature is enabled. To run the tests you must skip the examples:
cargo test --features mock --lib
Testing with code coverage
There are some issues with coverage tools running correctly with our bindgen configuration in build.rs. Most of the tests do not actually need it, as bindings.rs contains a mock set of bindings. To work around the failure, the "coverage" feature can be enabled. This prevents the bindings from being generated and disables the couple of spots where the real bindings are needed.
Coverage with grcov
- Install gperftools
- Install Rust nightly:
rustup install nightly
- Install grcov:
cargo +nightly install grcov
- Set the following environment variables:
export CARGO_INCREMENTAL=0
export RUSTFLAGS="-Zprofile -Ccodegen-units=1 -Cinline-threshold=0 -Clink-dead-code -Coverflow-checks=off -Zno-landing-pads"
- Build with coverage information:
cargo clean
cargo +nightly build --features coverage
cargo +nightly test --features coverage
- Run grcov:
grcov ./target/debug/ -s . -t html --llvm --branch --ignore-not-existing -o ./target/debug/coverage/
Cross compilation
AWS_GREENGRASS_STUBS=yes CMAKE_TOOLCHAIN_FILE=$(pwd)/linux-gnu-x86_64.cmake cargo build --target=x86_64-unknown-linux-gnu
Coverage with JetBrains CLion
- Create a run configuration named Test
- Set the command to be:
test --features coverage
- Run with coverage