
Conversation

Contributor

@mirkoCrobu mirkoCrobu commented Nov 20, 2025

Closes #87

Motivation

The Applab shows the list of the bricks (with minimal information).
When the user clicks on a single brick, the details of the brick are shown.

Only in the brick details do we see the compatible models.

Change description

  • remove the models field from the v1/bricks list endpoint; it is not currently used, so it should be safe to remove.
 curl 127.0.0.1:8800/v1/bricks 

{
  "bricks": [
    {
      "id": "arduino:object_detection",
      "name": "Object Detection",
      "author": "Arduino",
      "description": "",
      "category": "video",
      "status": "installed",
      "models": [
        "face-detection",
        "yolox-object-detection"
      ]
    },
  • add the models to the brick details (see the struct sketch after the example below)
GET /v1/bricks/{brickID}
{
  "id": "arduino:object_detection",
  "name": "Object Detection",
  "author": "Arduino",
  "description": "",
  "category": "video",
  "status": "installed",
  ...other

  // the list of compatible models for the brick, with minimal info
  "models": [
    {
      "id": "yolox-object-detection",
      "name": "General purpose object detection - YoloX",
      "description": "",

      // TODO: what to show in the "more info" button? Maybe only the "url"?
    },
    {
      "id": "face-detection",
      "name": "Lightweight-Face-Detection",
      "description": "Face bounding box detection. This model is trained on the WIDER FACE dataset and can detect faces in images."
    }
  ]
}
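
For clarity, here is a minimal Go sketch of the two response shapes after this change. The struct names (`BrickListItem`, `BrickDetails`) and the package name are illustrative assumptions; only `AIModelLite` corresponds to a type actually introduced in this PR.

```go
// Illustrative sketch only: struct names and package name are assumptions.
package api

// Per-brick entry returned by GET /v1/bricks (no "models" field).
type BrickListItem struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Author      string `json:"author"`
	Description string `json:"description"`
	Category    string `json:"category"`
	Status      string `json:"status"`
}

// Minimal model info embedded in the brick details.
type AIModelLite struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
}

// Response of GET /v1/bricks/{brickID}: full brick info plus compatible models.
type BrickDetails struct {
	BrickListItem
	Models []AIModelLite `json:"models"`
}
```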

Additional Notes

Reviewer checklist

  • PR addresses a single concern.
  • PR title and description are properly filled.
  • Changes will be merged in main.
  • Changes are covered by tests.
  • Logging is meaningful in case of troubleshooting.

@mirkoCrobu mirkoCrobu force-pushed the issue_87_show_comaptible_models_in_brick_list_details branch from a520a42 to 9f73db4 on November 20, 2025 16:34
@mirkoCrobu
Contributor Author

Response GET http://localhost:8080/v1/bricks/arduino:image_classification

Should have "models" field

{
    "id": "arduino:image_classification",
    "name": "Image Classification",
    "author": "Arduino",
    "description": "Brick for image classification using a pre-trained model. It processes images and returns the predicted class label and confidence score.\nBrick is designed to work with pre-trained models provided by framework or with custom image classification models trained on Edge Impulse platform. \n",
    "category": "video",
    "status": "installed",
    "variables": {
        "CUSTOM_MODEL_PATH": {
            "default_value": "/home/arduino/.arduino-bricks/ei-models",
            "description": "path to the custom model directory",
            "required": false
        },
        "EI_CLASSIFICATION_MODEL": {
            "default_value": "/models/ootb/ei/mobilenet-v2-224px.eim",
            "description": "path to the model file",
            "required": false
        }
    },
    "readme": "# Image Classification Brick\n\nThis Brick lets you perform image classification using a pre-trained neural network model.\n\n## Overview\n\nThe Image Classification Brick allows you to:\n\n- Analyze images and categorize their contents using a machine learning model.\n- Use locally stored image files or camera feeds as input.\n- Easy integration with your project using simple Python APIs.\n\n## Features\n\n- Detects multiple objects in a single image\n- Returns class names and confidence scores for detected objects\n- Supports input as bytes, file paths or PIL images\n- Configurable model parameters (e.g., image type, confidence threshold)\n\n## Code example and usage\n\n```python\nimport os\nfrom arduino.app_bricks.image_classification import ImageClassification\n\nimage_classification = ImageClassification()\n\n# Image frame can be as bytes or PIL image\nframe = os.read(\"path/to/your/image.jpg\")\n\nout = image_classification.classify(frame)\n# is it possible to customize image type and confidence level\n# out = image_classification.classify(frame, image_type = \"png\", confidence = 0.35)\nif out and \"classification\" in out:\n    for i, obj_det in enumerate(out[\"classification\"]):\n        # For every object detected, get its details\n        detected_object = obj_det.get(\"class_name\", None)\n        confidence = obj_det.get(\"confidence\", None)\n```\n\n## Image Classification Working Principle\n\nImage classification models take an input image and assign one or more class labels to it, representing the most likely categories present in the image. These models analyze the image as a whole and do not localize objects within the frame. The result is a ranked list of predicted labels, each accompanied by a confidence score indicating the model's likelihood of each label being correct.",
    "api_docs_path": "/home/mirkocrobu/.config/arduino-app-cli/assets/0.5.0/api-docs/arduino/app_bricks/image_classification/API.md",
    "code_examples": [
        {
            "path": "/home/mirkocrobu/.config/arduino-app-cli/assets/0.5.0/examples/arduino/image_classification/__pycache__"
        },
        {
            "path": "/home/mirkocrobu/.config/arduino-app-cli/assets/0.5.0/examples/arduino/image_classification/image_classification_example.py"
        }
    ],
    "used_by_apps": [
        {
            "id": "ZXhhbXBsZXM6aW1hZ2UtY2xhc3NpZmljYXRpb24",
            "name": "Classify images",
            "icon": "📊"
        },
        {
            "id": "dXNlcjpleGFtcGxlcy9pbWFnZS1jbGFzc2lmaWNhdGlvbg",
            "name": "Classify images",
            "icon": "📊"
        }
    ],
    "models": [
        {
            "ID": "mobilenet-image-classification",
            "Name": "General purpose image classification",
            "ModuleDescription": "General purpose image classification model based on MobileNetV2. This model is trained on the ImageNet dataset and can classify images into 1000 categories."
        },
        {
            "ID": "person-classification",
            "Name": "Person classification",
            "ModuleDescription": "Person classification model based on WakeVision dataset. This model is trained to classify images into two categories: person and not-person."
        }
    ]
}

@mirkoCrobu
Contributor Author

Response GET http://localhost:8080/v1/bricks/arduino:dbstorage_tsstore

Should have empty "models" field

{
    "id": "arduino:dbstorage_tsstore",
    "name": "Database - Time Series",
    "author": "Arduino",
    "description": "Simplified time series database storage layer for Arduino sensor samples built on top of InfluxDB.",
    "category": "storage",
    "status": "installed",
    "variables": {
        "APP_HOME": {
            "default_value": ".",
            "required": false
        },
        "DB_PASSWORD": {
            "default_value": "Arduino15",
            "description": "Database password",
            "required": false
        },
        "DB_USERNAME": {
            "default_value": "admin",
            "description": "Edge Impulse project API key",
            "required": false
        },
        "INFLUXDB_ADMIN_TOKEN": {
            "default_value": "392edbf2-b8a2-481f-979d-3f188b2c05f0",
            "description": "InfluxDB admin token",
            "required": false
        }
    },
    "readme": "# Database - Time Series Brick\n\nThis brick helps you manage and store time series data efficiently using InfluxDB.\n\n## Overview\n\nThe Database - Time series brick allows you to:\n\n- Efficiently store and retrieve time series data\n- Use a simple API for writing and reading time series measurements\n- Handle database connections automatically\n- Integrate your projects easily with InfluxDB\n- Use methods for querying and managing stored data\n- Handle errors and manage resources robustly\n\nIt provides a refined interface for working with time series data, automatically managing InfluxDB connections and providing flexible querying capabilities with time ranges, aggregation functions, and data retention policies.\n\n## Features\n\n- Automatic data retention management with configurable retention periods\n- Flexible time range queries with relative periods (e.g., `-1d`, `-2h`) or absolute timestamps\n- Data aggregation support with functions like *mean*, *max*, *min*, and *sum*\n- Configurable measurement organization and field naming\n- Thread-safe operations for concurrent access\n- Built-in validation for time parameters and aggregation settings\n\n## Code example and usage\n\nInstantiate a new class to open a database connection:\n\n```python\nimport time\nfrom arduino.app_bricks.dbstorage_tsstore import TimeSeriesStore\n\ndb = TimeSeriesStore()\ndb.start()\n\ndb.write_sample(\"temp\", 21)\ndb.write_sample(\"hum\", 45)\ntime.sleep(1)\n\nlast_temp = db.read_last_sample(\"temp\")\nlast_hum = db.read_last_sample(\"hum\")\nprint(f\"Last temperature: {last_temp}\")\nprint(f\"Last humidity: {last_hum}\")\n\ndb.stop()\n```\n\n## Understanding Time Series Operations\n\nThe TimeSeriesStore organizes data using InfluxDB's measurement and field structure, where measurements work as containers for related metrics and fields represent individual sensor readings or data points. Each data point includes a timestamp, allowing for precise time-based queries and analysis.\n\nThe brick supports flexible time range specifications using relative periods, such as `-1d` for the last day or `-2h` for the last two hours, as well as absolute timestamps in RFC 3339 format. Data retention is automatically managed based on the configured retention period, allowing for controlled storage usage while maintaining relevant historical data.",
    "api_docs_path": "/home/mirkocrobu/.config/arduino-app-cli/assets/0.5.0/api-docs/arduino/app_bricks/dbstorage_tsstore/API.md",
    "code_examples": [
        {
            "path": "/home/mirkocrobu/.config/arduino-app-cli/assets/0.5.0/examples/arduino/dbstorage_tsstore/1_write_read.py"
        },
        {
            "path": "/home/mirkocrobu/.config/arduino-app-cli/assets/0.5.0/examples/arduino/dbstorage_tsstore/2_read_all_samples.py"
        },
        {
            "path": "/home/mirkocrobu/.config/arduino-app-cli/assets/0.5.0/examples/arduino/dbstorage_tsstore/__pycache__"
        }
    ],
    "used_by_apps": [
        {
            "id": "ZXhhbXBsZXM6aG9tZS1jbGltYXRlLW1vbml0b3JpbmctYW5kLXN0b3JhZ2U",
            "name": "Home climate monitoring and storage",
            "icon": "🎛️"
        },
        {
            "id": "ZXhhbXBsZXM6c3lzdGVtLXJlc291cmNlcy1sb2dnZXI",
            "name": "System resources logger",
            "icon": "📈"
        },
        {
            "id": "dXNlcjpleGFtcGxlcy9ob21lLWNsaW1hdGUtbW9uaXRvcmluZy1hbmQtc3RvcmFnZQ",
            "name": "Home climate monitoring and storage",
            "icon": "🎛️"
        },
        {
            "id": "dXNlcjpleGFtcGxlcy9zeXN0ZW0tcmVzb3VyY2VzLWxvZ2dlcg",
            "name": "System resources logger",
            "icon": "📈"
        }
    ],
    "models": []
}

@mirkoCrobu
Contributor Author

List response GET http://localhost:8080/v1/bricks

Should not contain "models" field

{
    "bricks": [
        {
            "id": "arduino:arduino_cloud",
            "name": "Arduino Cloud",
            "author": "Arduino",
            "description": "Connects to Arduino Cloud",
            "category": "",
            "status": "installed"
        },
        {
            "id": "arduino:image_classification",
            "name": "Image Classification",
            "author": "Arduino",
            "description": "Brick for image classification using a pre-trained model. It processes images and returns the predicted class label and confidence score.\nBrick is designed to work with pre-trained models provided by framework or with custom image classification models trained on Edge Impulse platform. \n",
            "category": "video",
            "status": "installed"
        },
        {
            "id": "arduino:streamlit_ui",
            "name": "WebUI - Streamlit",
            "author": "Arduino",
            "description": "A simplified user interface based on Streamlit and Python.",
            "category": "ui",
            "status": "installed"
        },
        {
            "id": "arduino:mood_detector",
            "name": "Mood Detection",
            "author": "Arduino",
            "description": "This brick analyzes text sentiment to detect the mood expressed.\nIt classifies text as positive, negative, or neutral.\n",
            "category": "text",
            "status": "installed"
        },
        ...
    ]
}

@mirkoCrobu mirkoCrobu marked this pull request as ready for review November 20, 2025 16:55
@mirkoCrobu mirkoCrobu requested review from a team, dido18 and giulio93 and removed request for dido18 November 20, 2025 16:56
@mirkoCrobu mirkoCrobu self-assigned this Nov 21, 2025
Contributor

@dido18 dido18 left a comment


I have a general suggestion on the code's logical structure.

I think that the orchestrator package (to be renamed) is the core layer of the data model (models and bricks).
The conversion into other data representations is then performed in the external layers (HTTP and CLI).

In this way:

  • the HTTP layer converts the AIModel[] into AIModelLite
  • the CLI layer uses the AIModel to show all the model info.

I also suggest adding this info to `arduino-app-cli brick details`.
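
To make the suggestion concrete, here is a minimal sketch of that split, assuming the core package exposes a full AIModel type and the HTTP layer narrows it to AIModelLite right before serialization. Field and function names other than AIModelLite are assumptions, not the actual code.

```go
// Sketch only: AIModel's fields, the package name, and toLite are assumptions;
// AIModelLite mirrors the type introduced in this PR.
package api

// Full model descriptor owned by the core (orchestrator) layer.
type AIModel struct {
	ID          string
	Name        string
	Description string
	Bricks      []string // IDs of compatible bricks
}

// Reduced view serialized in GET /v1/bricks/{brickID}.
type AIModelLite struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
}

// toLite is the only place where the HTTP layer narrows the core model;
// the CLI would keep using AIModel directly to print all the fields.
func toLite(models []AIModel) []AIModelLite {
	lite := make([]AIModelLite, 0, len(models))
	for _, m := range models {
		lite = append(lite, AIModelLite{ID: m.ID, Name: m.Name, Description: m.Description})
	}
	return lite
}
```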

return matches
}

func (m *ModelsIndex) GetModelsLiteInfoByBrick(brick string) []AIModelLite {

Suggested change
func (m *ModelsIndex) GetModelsLiteInfoByBrick(brick string) []AIModelLite {
func (m *ModelsIndex) GetModelsLiteInfoByBrick(brickID string) []AIModelLite {

func (m *ModelsIndex) GetModelsLiteInfoByBrick(brick string) []AIModelLite {
	var matches []AIModelLite
	for i := range m.models {
		if len(m.models[i].Bricks) > 0 && slices.Contains(m.models[i].Bricks, brick) {

Suggested change
if len(m.models[i].Bricks) > 0 && slices.Contains(m.models[i].Bricks, brick) {
if len(m.models[i].Bricks) > 0 && slices.Contains(m.models[i].Bricks, brickID) {

type AIModelLite struct {
	ID                string `json:"id"`
	Name              string `json:"name"`
	ModuleDescription string `json:"description"`

Suggested change
ModuleDescription string `json:"description"`
Description string `json:"description"`



Development

Successfully merging this pull request may close these issues.

api: show compatible models in the brick list/details
