# Decorator tracking
Turn any Python function into a tracked run.
The `@hub_run` decorator turns any plain Python function into a tracked Hub
run. Add the decorator, accept a context as the first argument, and you
immediately get:
- A logging client — record parameters, metrics, and artifacts to the Hub.
- Devices — access to any connected hardware.
- Artifact directories — timestamped, run-scoped folders on the local machine and on each connected device, for full traceability.
If you want to hand off a result to a built-in profiling or invocation component, that works too — see Handing off to built-in components.
## Prerequisites
Before following this guide, make sure you have completed the setup guide to:
- Create an Embedl Hub account
- Install the `embedl-hub` Python library
- Configure an API key
## Setting up a context
`HubContext` is the object you pass to every hub run function. At minimum,
all you need is a project name:
```python
from embedl_hub.core import HubContext

ctx = HubContext(project_name="my-project")
```

## Wrapping a function
Add `@hub_run` above your function and declare `ctx: HubContext` as its
first parameter. Call the function from inside a `with ctx:` block:
```python
from embedl_hub.core import HubContext, hub_run

@hub_run("train")
def train_model(ctx: HubContext, data_path: str):
    ctx.client.log_param("data_path", data_path)
    # ... your logic here ...
    return ctx.run_log

ctx = HubContext(project_name="my-project")
with ctx:
    train_model(ctx, "data/train.csv")
```

The decorator handles the run lifecycle automatically — it opens a tracking
session when the function is called and closes it (or marks it failed) when
the function returns or raises. Returning `ctx.run_log` is recommended so
that downstream runs can be linked to this one (see Chaining runs).
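To make the failure path concrete, here is a minimal sketch (the `flaky` function and its error are invented for illustration). If the body raises, the decorator marks the run as failed on the Hub before the exception propagates to the caller:

```python
from embedl_hub.core import HubContext, hub_run

@hub_run("train")
def flaky(ctx: HubContext):
    # Simulate a failing run: the decorator records the failure,
    # then re-raises the exception to the caller.
    raise RuntimeError("training diverged")

ctx = HubContext(project_name="my-project")
with ctx:
    try:
        flaky(ctx)
    except RuntimeError:
        pass  # the run still appears on the Hub, marked as failed
```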
## The run type
The first argument to `@hub_run` is the run type. It tells the Hub which
category of work this run represents.
Use a built-in type from `RunType` to map onto a known Hub category:
```python
from embedl_hub.core.component import RunType

@hub_run(RunType.COMPILE)
def compile_model(ctx: HubContext, onnx_path: str):
    ...

@hub_run(RunType.PROFILE)
def profile_model(ctx: HubContext, model_path: str):
    ...
```

Use a custom string for any domain-specific step:
@hub_run("evaluate")def evaluate(ctx: HubContext, checkpoint: str): ...@hub_run("prep")def prepare_data(ctx: HubContext, raw_path: str): ...You can also pass an optional name to override the display name shown on
the Hub (the function name is used by default):
@hub_run("train", name="Fine-tune ResNet-50")def train_model(ctx: HubContext, data_path: str): ...The context block
The `with ctx:` block is where your runs execute. Entering it prepares
artifact directories for the coming runs and connects any devices you added
to the context. Exiting it disconnects them and cleans up. All decorated
function calls must happen inside this block.
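For instance, a single context block can host several runs back to back. This sketch borrows the `prepare` and `train` functions defined in the chaining example later in this guide:

```python
ctx = HubContext(project_name="my-project")

with ctx:
    # Both decorated calls execute inside the same context block.
    prep_result = prepare(ctx, "data/raw.csv")
    train_result = train(ctx, prep_result)
# On exit, connected devices are disconnected and resources cleaned up.
```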
## What the context gives you
### The client
`ctx.client` lets you log parameters, metrics, and artifacts to the Hub:
```python
from pathlib import Path

@hub_run("evaluate")
def evaluate(ctx: HubContext, checkpoint: str):
    ctx.client.log_param("checkpoint", checkpoint)
    ctx.client.log_metric("accuracy", 0.923)
    ctx.client.log_metric("val_loss", 0.081, step=10)
    ctx.client.log_artifact(Path("outputs/report.json"))
    return ctx.run_log
```

All logged data appears on your Hub run page alongside the run’s status,
duration, and lineage.
### Run logs
For every run, all logged parameters, metrics, artifacts, and device info are
recorded into a run log — along with metadata like the run type and name.
`ctx.run_log` gives you access to the log of the currently executing run.
Returning it from your function is what enables chaining and lineage tracking,
as described below.
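As a minimal sketch, the value returned by a decorated function can be inspected after the call. The `run_type` and `params` attributes are the ones used in the validation example later in this guide; the exact contents shown in the comments are illustrative:

```python
ctx = HubContext(project_name="my-project")
with ctx:
    result = train_model(ctx, "data/train.csv")

print(result.run_type)  # "train"
print(result.params)    # logged parameters, e.g. a "data_path" entry
```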
### Devices
Devices are optional — skip this if your script runs entirely on the host
machine. To connect remote hardware, create a device with
`DeviceManager.get_ssh_device` and pass it to the context:
```python
from embedl_hub.core import HubContext
from embedl_hub.core.device import DeviceManager, SSHConfig

device = DeviceManager.get_ssh_device(
    SSHConfig(host="192.168.1.42", username="pi"),
    name="pi",
)

ctx = HubContext(project_name="my-project", devices=[device])
```

The name is how you refer to the device inside your functions —
`ctx.devices["pi"]` — and how it appears in run logs on the Hub. Choose
something that identifies the hardware clearly, especially when working
with multiple devices at once.
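For example, a second board could be registered alongside the first. The host address and the `jetson-nano` name below are purely illustrative:

```python
nano = DeviceManager.get_ssh_device(
    SSHConfig(host="192.168.1.43", username="nano"),
    name="jetson-nano",
)

# Distinct names keep ctx.devices["pi"] and ctx.devices["jetson-nano"]
# unambiguous inside your run functions and in the Hub's run logs.
ctx = HubContext(project_name="my-project", devices=[device, nano])
```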
SSH devices come with a command runner (`device.runner`) that lets you
execute shell commands on the remote machine over the established
connection. This is how you trigger inference, run benchmarks, or invoke
any script that needs to run on the target hardware.
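A minimal sketch, assuming the `"pi"` device registered above (the command itself is arbitrary):

```python
@hub_run("smoke-test")
def smoke_test(ctx: HubContext):
    device = ctx.devices["pi"]
    # Execute a shell command on the target over the SSH connection.
    device.runner.run("uname -a")
    return ctx.run_log
```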
### Artifact directories
Each run gets its own timestamped directories — one on the local machine and one on each connected device. You don’t have to use them, but if you want your outputs to be traceable and reproducible across runs, writing into these directories is the way to go.
**Local:** `ctx.artifact_dir` — write model files, reports, or any other
output that lives on the host:
@hub_run("train")def train_model(ctx: HubContext, data_path: str): output_path = ctx.artifact_dir / "model.onnx" your_training_loop(data_path, output_path) ctx.client.log_artifact(output_path) return ctx.run_logPer device: ctx.devices["pi"].artifact_dir — a corresponding
directory on the device itself, useful when the device produces its own
output files:
@hub_run("benchmark")def benchmark(ctx: HubContext, model_path: str): device = ctx.devices["pi"] remote_out = device.artifact_dir / "results.json" device.runner.run("python3 bench.py --out " + str(remote_out)) return ctx.run_logAlways write outputs into the provided artifact directories rather than hard-coded paths. Directories are unique per run, so re-running your script never overwrites previous results.
## Chaining runs
Passing a `RunLog` from a previous run into another hub run function will
automatically link them in the Hub’s lineage view. The same applies to
`ComponentOutput` — the result type that pre-made components produce and
consume. For run linkage to work, the passed value must be one of these types:
```python
from embedl_hub.core import HubContext, hub_run

@hub_run("prep")
def prepare(ctx: HubContext, raw_path: str):
    out = ctx.artifact_dir / "clean.csv"
    clean_data(raw_path, out)
    ctx.client.log_param("input", raw_path)
    return ctx.run_log

@hub_run("train")
def train(ctx: HubContext, prep_result, *, epochs: int = 10):
    ctx.client.log_param("epochs", str(epochs))
    ctx.client.log_metric("accuracy", run_training(epochs))
    return ctx.run_log

ctx = HubContext(project_name="my-project")
with ctx:
    prep_result = prepare(ctx, "data/raw.csv")
    train_result = train(ctx, prep_result, epochs=20)
```

## Validating run inputs
When a function receives a `RunLog`, you can inspect its contents to verify
it is compatible before proceeding — checking the run type, name, or any
logged parameters, metrics, or artifacts:
@hub_run("train")def train(ctx: HubContext, prep_result): assert prep_result.run_type == "prep", ( "Expected a prep run, got: " + str(prep_result.run_type) ) assert "input" in prep_result.params, "Missing input param from prep" # Access artifacts from the previous run's local directory clean_data_path = prep_result.artifact_dir / "clean.csv" ... return ctx.run_logThis is useful for catching mismatched pipelines early, especially when functions are reused across different run sequences.
## Handing off to built-in components
Your script can produce output that feeds directly into a built-in profiling
or invocation component. To do this, return a typed `ComponentOutput` subtype
instead of `ctx.run_log`.
The key is to log your artifacts with the field names the output type
expects, then call `OutputType.from_run_log(ctx.run_log)` to build the
typed result:
```python
from embedl_hub.core import HubContext, hub_run
from embedl_hub.core.component import RunType
from embedl_hub.core.compile import TFLiteCompiledModel
from embedl_hub.core.profile import TFLiteProfiler

@hub_run(RunType.COMPILE)
def my_compile(ctx: HubContext, onnx_path: str) -> TFLiteCompiledModel:
    tflite_path = ctx.artifact_dir / "model.tflite"
    your_tflite_converter(onnx_path, tflite_path)
    # Log the compiled model under the name "path" — this is the field
    # name TFLiteCompiledModel expects.
    ctx.client.log_artifact(tflite_path, name="path")
    return TFLiteCompiledModel.from_run_log(ctx.run_log)
```

Once you have a `TFLiteCompiledModel`, pass it straight to the profiler:
```python
profiler = TFLiteProfiler()

ctx = HubContext(project_name="my-project", devices=[...])
with ctx:
    compiled = my_compile(ctx, "model.onnx")
    result = profiler.run(ctx, compiled)  # linked to my_compile in the Hub
```

The profiler receives a fully typed result and the two runs are linked in the
Hub’s lineage view. See the Your hardware guides for how to configure devices
for profiling.
## Complete example
The following script shows a three-step pipeline: data preparation, training,
and compilation. The last step produces a compiled model that is handed off
to a built-in profiler.
```python
from embedl_hub.core import HubContext, hub_run
from embedl_hub.core.component import RunType
from embedl_hub.core.compile import ONNXRuntimeCompiledModel
from embedl_hub.core.profile import ONNXRuntimeProfiler

@hub_run("prep")
def prepare(ctx: HubContext, raw_path: str):
    out = ctx.artifact_dir / "clean.npy"
    clean_and_save(raw_path, out)
    ctx.client.log_param("source", raw_path)
    return ctx.run_log

@hub_run("train")
def train(ctx: HubContext, prep_result):
    model_path = ctx.artifact_dir / "model.onnx"
    accuracy = train_and_export(model_path)
    ctx.client.log_metric("accuracy", accuracy)
    ctx.client.log_artifact(model_path)
    return ctx.run_log

@hub_run(RunType.COMPILE)
def compile_model(ctx: HubContext, train_result) -> ONNXRuntimeCompiledModel:
    onnx_path = train_result.artifact_dir / "model.onnx"
    compiled_path = ctx.artifact_dir / "model.onnx"
    optimise_for_runtime(onnx_path, compiled_path)
    ctx.client.log_artifact(compiled_path, name="path")
    return ONNXRuntimeCompiledModel.from_run_log(ctx.run_log)

profiler = ONNXRuntimeProfiler()
ctx = HubContext(project_name="my-project", devices=[...])
with ctx:
    prep = prepare(ctx, "data/raw.npy")
    trained = train(ctx, prep)
    compiled = compile_model(ctx, trained)
    result = profiler.run(ctx, compiled)
```