Your first benchmark

Benchmark your own model with embedl-hub

This guide walks you through benchmarking your first model with the Embedl Hub Python library. Replace the placeholder values in each code block (such as file paths and device names) with values for your own setup, then copy the commands into a terminal and run them.

Prerequisites

If you haven’t already done so, follow the instructions in the setup guide to configure the Embedl Hub Python library.

Create a project and experiment

embedl-hub init \
    --project "My first project" \
    --experiment "baseline"

Compile your model from ONNX to TFLite

embedl-hub compile \
    --model /path/to/model.onnx
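
The compile step takes an ONNX model as input. If your model is currently defined in PyTorch, you can export it to ONNX first. The sketch below is only an illustration, assuming torch and torchvision are installed; the resnet18 model and the 224x224 input shape are placeholders for your own model and input size.

import torch
import torchvision

# Placeholder model; substitute your own torch.nn.Module.
model = torchvision.models.resnet18(weights=None)
model.eval()

# Dummy input matching the model's expected input shape.
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX so the file can be passed to `embedl-hub compile`.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)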

(Optional) Quantize the model

embedl-hub quantize \
    --model /path/to/model.tflite \
    --data /path/to/dataset \
    --num-samples 100
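
Quantization calibrates the model on sample data, so the dataset should be representative of your real inputs. The exact dataset layout that `embedl-hub quantize --data` expects is described in the embedl-hub documentation; the sketch below only illustrates gathering roughly 100 calibration samples into a single directory, assuming a folder of JPEG images and a 224x224 model input.

from pathlib import Path

from PIL import Image

# Hypothetical source images; replace with data representative of your use case.
src_images = sorted(Path("my_images").glob("*.jpg"))[:100]

out_dir = Path("calibration_data")
out_dir.mkdir(exist_ok=True)

for i, path in enumerate(src_images):
    # Resize each sample to the model's input resolution (224x224 as an example).
    img = Image.open(path).convert("RGB").resize((224, 224))
    img.save(out_dir / f"sample_{i:03d}.jpg")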

Benchmark the model on remote hardware

embedl-hub benchmark \
    --model /path/to/model.quantized.tflite \
    --device "Samsung Galaxy S24"