Quickstart
Choose your starting point — there are two ways to get going.
Track your own code — wrap any block or function and it shows up as a tracked run on the Hub.
Use a built-in component — pre-made profilers, compilers, and invokers for common toolchains. Plug your model in and they handle the rest.
Need to finish setup?
Complete the setup guide if you still need to install the embedl-hub Python library or configure authentication.
Choose your path
Track your own code
Direct tracking
The minimal path — a few lines and your code is tracked.
with client.start_run("train"):
    client.log_param(...)
    train_model()
Open guide →
Decorator tracking
The complete path — turn any function into a tracked run; the run context is passed to your function as an argument.
@hub_run("train")
def train_model(ctx):
    ctx.client.log_param(...)
    ...
Open guide →
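The guides linked above cover the real embedl-hub API. Purely as an illustration of the decorator pattern in the second card, here is a self-contained sketch — `hub_run`, `_StubContext`, `_StubClient`, and `log_param` below are hypothetical stand-ins modeled on the snippet, not the library's actual classes:

```python
import functools

class _StubClient:
    """Hypothetical stand-in for the tracking client; it only records params."""
    def __init__(self):
        self.params = {}

    def log_param(self, key, value):
        self.params[key] = value

class _StubContext:
    """Hypothetical run context handed to the decorated function."""
    def __init__(self, run_name):
        self.run_name = run_name
        self.client = _StubClient()

def hub_run(run_name):
    """Sketch of a @hub_run-style decorator: open a run, inject a
    context as the first argument, and finalize the run afterwards."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            ctx = _StubContext(run_name)  # a real client would start a Hub run here
            try:
                return fn(ctx, *args, **kwargs)
            finally:
                pass  # a real client would close/upload the run here
        return wrapper
    return decorator

@hub_run("train")
def train_model(ctx):
    ctx.client.log_param("epochs", 10)
    return "trained"

print(train_model())  # the wrapper supplies ctx, so no argument is needed here
```

The point of the pattern: the decorator owns the run lifecycle, so your function body only logs and trains.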
Use a built-in component
Your hardware
The full-control path — compile and profile on your own hardware.
device = "My Jetson Orin"
compiler = TensorRTCompiler(device)
compiler.run(ctx, model)
Open guide →
Device clouds
The flexible path — try any device without owning it.
device = "Samsung Galaxy S24"
profiler = TFLiteProfiler(device)
profiler.run(ctx, model)
Open guide →
After your first run
Open Exploring results to learn how to review runs, metrics, and artifacts.