ocp-diag-core-rust
The OCP Test & Validation Initiative is a collaboration between datacenter hyperscalers with the goal of standardizing aspects of the hardware validation/diagnosis space and providing the tooling needed to let both diagnostic developers and executors leverage these interfaces.
Specifically, the ocp-diag-core-rust project makes it easy for developers to use the OCP Test & Validation specification artifacts by presenting a pure-Rust API that can be used to output spec-compliant JSON messages.
To get started, see the installation instructions and usage below.
This project is part of ocp-diag-core and exists under the same MIT License Agreement.
Installation
Stable releases of the ocp-diag-core-rust codebase are published to crates.io under the package name ocptv, and can be easily installed with cargo.
For general usage, either of the following options will get the latest stable version using cargo, Rust's package manager:
- [option 1] using cargo add:

  $ cargo add ocptv

- [option 2] specifying it in the Cargo.toml file:

  [dependencies]
  ocptv = "~0.1"
To use the bleeding edge instead of the stable version, the dependencies section should be modified like this:
[dependencies]
ocptv = { git = "https://github.com/opencomputeproject/ocp-diag-core-rust.git", branch = "dev" }
The instructions above assume a Linux-type system. However, the steps should be identical on Windows and macOS platforms.
See The Cargo Book for more details on how to use cargo.
Usage
The specification does not impose any particular level of usage. To be compliant, a diagnostic package just needs to output the correct artifact messages in the correct format. Beyond that, any such diagnostic is free to choose which aspects it uses/outputs; e.g. a simple validation test may not output any measurements, opting to just emit a final Diagnosis outcome.
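To make the "correct format" concrete: each artifact is emitted as one JSON object per line, as shown in the expected output further below. The sketch here hand-rolls one such diagnosis line with a hypothetical helper (timestamp omitted for brevity); it is illustrative only, since in practice the ocptv crate serializes these artifacts for you.

```rust
// Illustrative only: a hand-rolled sketch of the JSONL shape emitted for a
// diagnosis artifact. `diagnosis_line` is a hypothetical helper, not part of
// the ocptv API.
fn diagnosis_line(seq: u32, step_id: &str, verdict: &str, pass: bool) -> String {
    let ty = if pass { "PASS" } else { "FAIL" };
    format!(
        r#"{{"sequenceNumber":{},"testStepArtifact":{{"diagnosis":{{"type":"{}","verdict":"{}"}},"testStepId":"{}"}}}}"#,
        seq, ty, verdict, step_id
    )
}

fn main() {
    // prints one spec-shaped artifact line for step0
    println!("{}", diagnosis_line(3, "step0", "fan_ok", true));
}
```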
Full API reference is available here.
A very simple starter example, which just outputs a diagnosis:
use anyhow::Result;
use ocptv::output as tv;
use ocptv::{ocptv_diagnosis_fail, ocptv_diagnosis_pass};
use rand::Rng;
use tv::{TestResult, TestStatus};

fn get_fan_speed() -> i32 {
    let mut rng = rand::thread_rng();
    rng.gen_range(1500..1700)
}

async fn run_diagnosis_step(step: tv::ScopedTestStep) -> Result<TestStatus, tv::OcptvError> {
    let fan_speed = get_fan_speed();

    if fan_speed >= 1600 {
        step.add_diagnosis("fan_ok", tv::DiagnosisType::Pass).await?;
    } else {
        step.add_diagnosis("fan_low", tv::DiagnosisType::Fail).await?;
    }

    Ok(TestStatus::Complete)
}

async fn run_diagnosis_macros_step(step: tv::ScopedTestStep) -> Result<TestStatus, tv::OcptvError> {
    let fan_speed = get_fan_speed();

    // using the macro, the source location is filled automatically
    if fan_speed >= 1600 {
        ocptv_diagnosis_pass!(step, "fan_ok").await?;
    } else {
        ocptv_diagnosis_fail!(step, "fan_low").await?;
    }

    Ok(TestStatus::Complete)
}

#[tokio::main]
async fn main() -> Result<()> {
    let dut = tv::DutInfo::builder("dut0").build();

    tv::TestRun::builder("simple diagnosis", "1.0")
        .build()
        .scope(dut, |r| async move {
            r.add_step("step0").scope(run_diagnosis_step).await?;
            r.add_step("step1").scope(run_diagnosis_macros_step).await?;

            Ok(tv::TestRunOutcome {
                status: TestStatus::Complete,
                result: TestResult::Pass,
            })
        })
        .await?;

    Ok(())
}
Expected output (slightly reformatted for readability; note that only the macro-based step1 diagnosis carries a sourceLocation, since the macros capture it automatically):
{"sequenceNumber":0, "schemaVersion":{"major":2,"minor":0},"timestamp":"2024-10-11T11:39:07.026Z"}
{"sequenceNumber":1,"testRunArtifact":{
"testRunStart":{
"name":"simple diagnosis",
"commandLine":"","parameters":{},"version":"1.0",
"dutInfo":{"dutInfoId":"dut0"}
}},"timestamp":"2024-10-11T11:39:07.026Z"}
{"sequenceNumber":2,"testStepArtifact":{
"testStepId":"step0","testStepStart":{"name":"step0"}
},"timestamp":"2024-10-11T11:39:07.026Z"}
{"sequenceNumber":3,"testStepArtifact":{
"diagnosis":{"type":"PASS","verdict":"fan_ok"},
"testStepId":"step0"},"timestamp":"2024-10-11T11:39:07.027Z"}
{"sequenceNumber":4,"testStepArtifact":{
"testStepEnd":{"status":"COMPLETE"},"testStepId":"step0"
},"timestamp":"2024-10-11T11:39:07.027Z"}
{"sequenceNumber":5,"testStepArtifact":{
"testStepId":"step1","testStepStart":{"name":"step1"}
},"timestamp":"2024-10-11T11:39:07.027Z"}
{"sequenceNumber":6,"testStepArtifact":{
"diagnosis":{
"sourceLocation":{"file":"examples/diagnosis.rs","line":40},
"type":"FAIL","verdict":"fan_low"
},"testStepId":"step1"
},"timestamp":"2024-10-11T11:39:07.027Z"}
{"sequenceNumber":7,"testStepArtifact":{
"testStepEnd":{"status":"COMPLETE"},"testStepId":"step1"
},"timestamp":"2024-10-11T11:39:07.027Z"}
{"sequenceNumber":8,"testRunArtifact":{
"testRunEnd":{"result":"PASS","status":"COMPLETE"}
},"timestamp":"2024-10-11T11:39:07.027Z"}
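Because every artifact is a single JSON object per line, a consumer can do coarse checks on captured output without a full JSON parser. A minimal sketch, using a hypothetical helper that is not part of the ocptv API (real consumers should parse the JSON properly):

```rust
// Hypothetical consumer-side helper (not part of ocptv): scan JSONL output
// for the testRunEnd artifact and report whether the run passed. Substring
// matching is deliberately coarse; a real consumer would parse each line.
fn run_passed(output: &str) -> bool {
    output
        .lines()
        .filter(|line| line.contains("\"testRunEnd\""))
        .any(|line| line.contains("\"result\":\"PASS\""))
}

fn main() {
    let output = r#"{"sequenceNumber":8,"testRunArtifact":{"testRunEnd":{"result":"PASS","status":"COMPLETE"}},"timestamp":"2024-10-11T11:39:07.027Z"}"#;
    assert!(run_passed(output));
}
```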
Examples
The examples in the examples folder can be run using cargo:
# run diagnosis example
$ cargo run --example diagnosis
Developer notes
If you would like to contribute, please see the developer notes for instructions.
Contact
Feel free to start a new discussion, or otherwise post an issue/request.
An email contact is also available at: ocp-test-validation@OCP-All.groups.io