Welcome to Dataguess

Dataguess provides a comprehensive suite of AI-powered tools for computer vision, time series prediction, and visual workflow automation. This guide will help you understand and effectively use all three products: Inspector, Project Studio, and Predictor.

Inspector

Computer vision model training and deployment platform for object detection, segmentation, and anomaly detection.

Project Studio

Visual workflow builder for creating and managing AI dataflows with drag-and-drop interface.

Predictor

Time series forecasting platform with machine learning algorithms for predictive analytics.

Getting Started

Each Dataguess product is designed to work independently or as part of an integrated AI pipeline. Choose the product that best fits your use case:

  • Inspector - Best for visual inspection, quality control, and object detection tasks
  • Project Studio - Best for creating automated AI workflows and data pipelines
  • Predictor - Best for time series forecasting and predictive maintenance

Inspector

Inspector is a comprehensive computer vision platform that enables you to train, test, and deploy AI models for object detection, object segmentation, and visual anomaly detection.

Overview

Inspector provides an end-to-end workflow for computer vision projects:

  1. Upload - Import images or videos for your dataset
  2. Label - Annotate objects in your images with bounding boxes or segmentation masks
  3. Train - Train AI models using state-of-the-art algorithms
  4. Test - Evaluate model performance on new data
  5. Deploy - Deploy models to edge devices for real-time inference

Dashboard

The Dashboard provides a quick overview of your Inspector workspace, including project counts, computer status, and system resource usage. It serves as the central hub for monitoring your AI operations.

Inspector Dashboard

Inspector Dashboard showing project count, computer count, camera count, and system resource usage (GPU, VRAM, CPU, RAM)

Dashboard Components

Component Description
Project Count Total number of projects in your workspace
Computer Count Number of connected edge AI computers
Camera Count Total cameras connected across all computers
GPU/VRAM/CPU/RAM Usage Real-time system resource monitoring
Active Training Models Models currently being trained
Deployed Models Models deployed to edge devices

Projects

Projects are the core organizational unit in Inspector. Each project contains a dataset and associated models for a specific computer vision task. The Projects page displays all your projects as cards with key information including project type, image count, and labeled image count.

Projects List

Projects page showing all available projects with their types (Object Detection, Object Segmentation, Visual Anomaly Detection) and dataset statistics

Project Types

Object Detection

Identify and locate objects in images using bounding boxes. Ideal for counting, tracking, and quality inspection tasks.

Object Segmentation

Pixel-level object identification for precise boundary detection. Best for applications requiring exact object shapes.

Visual Anomaly Detection

Detect defects and anomalies by learning from normal samples. Perfect for quality control and defect detection.

Creating a New Project

Create Project Dialog

Create Project dialog showing name field, description field, and project type selection (Object Detection, Object Segmentation, Visual Anomaly Detection)

To create a new project:

  1. Click the CREATE PROJECT button on the right side of the Projects page
  2. Enter a Name for your project
  3. Add an optional Description
  4. Select the Project Type (Object Detection, Object Segmentation, or Visual Anomaly Detection)
  5. Click Create to create the project

Upload Images/Videos

After creating a project, you will be taken to the Upload interface where you can add images and videos to your dataset. This is the first step in the project workflow (Upload > Label > Train > Test). The Upload interface provides multiple methods to add data to your project.

Upload Images/Videos Dialog

The Upload images/videos dialog showing the drag-and-drop area with folder icon and three upload options: Import dataset, Choose Files, and From Camera

Upload Interface Overview

The Upload interface consists of a large drag-and-drop area in the center and three action buttons at the bottom. You can also access this interface at any time by clicking the Upload step in the project workflow bar or by clicking the upload icon in the image list toolbar.

Element Description
Drag-and-Drop Area The large central area with a folder icon. Drag files directly from your computer and drop them here to upload. The icon animates when files are dragged over it.
Import dataset Import a pre-packaged dataset in ZIP format. Useful for loading existing labeled datasets or transferring projects.
Choose Files Open a file browser to select individual images or videos from your computer. Supports multiple file selection.
From Camera Capture images directly from connected edge AI cameras. Redirects to the Data Collection interface.

Supported File Formats

Inspector supports a variety of image and video formats for maximum flexibility:

Category Supported Formats Notes
Images PNG, JPG, JPEG, BMP Standard image formats. PNG recommended for lossless quality.
Videos MP4, MOV, AVI, MKV Videos are automatically extracted into individual frames.
Datasets ZIP Compressed archives containing images and optional annotation files.

Upload Method 1: Drag and Drop

The fastest way to upload files is by dragging them directly onto the upload area:

  1. Open your file explorer and navigate to the folder containing your images or videos
  2. Select the files you want to upload (use Ctrl+Click or Shift+Click for multiple selection)
  3. Drag the selected files over the upload area - the folder icon will animate to indicate it's ready to receive files
  4. Drop the files to start the upload process
  5. Wait for the upload to complete - a progress indicator will show the status

Upload Method 2: Import Dataset

Use this option to import a complete dataset packaged as a ZIP file. This is useful for:

  • Importing datasets exported from other Inspector projects
  • Loading pre-labeled datasets in standard formats (COCO, YOLO, etc.)
  • Transferring projects between Inspector instances
  • Bulk importing large numbers of images efficiently

To import a dataset:

  1. Click the Import dataset button
  2. Select a ZIP file from your computer
  3. Wait for the import process to complete - Inspector will extract and process all files
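
As a reminder of what a standard-format dataset looks like, YOLO-style annotations pair each image with a plain-text file containing one normalized box per line. The snippet below only illustrates that convention (the file name and class IDs are hypothetical); the exact ZIP layout Inspector expects is not detailed here.

```python
# YOLO convention: one line per object, "class_id x_center y_center width height",
# with all coordinates normalized to the 0-1 range relative to the image size.
with open("image_0001.txt", "w") as f:
    f.write("0 0.512 0.430 0.210 0.180\n")   # one object of class 0, roughly centered
    f.write("1 0.120 0.780 0.080 0.095\n")   # a smaller object of class 1, lower left
```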

Upload Method 3: Choose Files

Use this option to manually select files from your computer:

  1. Click the Choose Files button
  2. A file browser dialog will open
  3. Navigate to the folder containing your images or videos
  4. Select one or more files (hold Ctrl or Shift for multiple selection)
  5. Click Open to start the upload

Upload Method 4: From Camera (Data Collection)

Capture images directly from connected edge AI cameras. Clicking the From Camera button redirects you to the Data Collection interface - a powerful 4-step wizard that allows you to configure automated image capture from your edge devices. This method is ideal for collecting real-world data directly from your production environment.

Data Collection Wizard Overview

The Data Collection wizard guides you through four steps to configure automated image capture from edge AI cameras. Each step must be completed before proceeding to the next. The wizard header shows your progress with checkmarks indicating completed steps.

Step Name Purpose
1 Information Enter a name and optional description for this data collection session
2 Camera Select an edge AI computer and choose a camera connected to it
3 Input Configure input/output settings and select the target project for collected images
4 Extra Settings Set image count, capture interval, resolution, and Inspector IP address

Step 1: Information

The first step requires you to provide basic information about this data collection session.

Data Collection Step 1 - Information

Data Collection Step 1: Enter a name (required, minimum 3 characters) and optional description for your data collection session

Field Required Description
Name Yes A unique identifier for this data collection session. Must be at least 3 characters. Example: "Camera_1", "Production_Line_A"
Description No Optional text to describe the purpose or context of this data collection. Useful for organizing multiple collection sessions.

After entering the name (and optionally a description), click Next to proceed to Step 2.

Step 2: Camera

In this step, you select which edge AI computer and camera will be used for data collection. All registered computers are displayed as expandable cards.

Data Collection Step 2 - Camera Selection

Data Collection Step 2: Select an edge AI computer and choose a camera from the list of connected cameras. The selected camera is highlighted with a blue border.

Element Description
Computer Card Each registered edge AI computer is shown as a card with its name (e.g., "Edge AI PC", "Jetson Edge AI PC") and IP address (e.g., 192.168.0.104)
Camera(s) Count Shows the number of cameras connected to each computer
Refresh Icon Click to reload the camera list from the edge device
Connected Camera(s) Expandable list showing camera name, type (IP Camera, RVC2, SOLO), and a live preview thumbnail
Image Preview Live thumbnail from the camera feed to help you identify the correct camera

Click on a computer card to expand it and view connected cameras. Then click on a camera row to select it (highlighted with a blue border). Click Next to proceed.

Step 3: Input

Configure input/output settings and select which project will receive the collected images. This step allows you to optionally configure external triggers for data collection.

Data Collection Step 3 - Input Settings

Data Collection Step 3: Configure Input/Output Settings and select the target project. The dropdown shows "None" for manual collection or choose MQTT, Siemens S7, or Modbus TCP for triggered collection.

Field Options Description
Input/Output Settings None, MQTT, Siemens S7, Modbus TCP Select an external trigger source for automated data collection. Choose "None" for manual/timed collection without external triggers.
Project Dropdown list of all projects Select the target project where collected images will be automatically added. All your Inspector projects are listed here.

Input/Output Settings Options

  • None - Collect images based on time interval only (no external trigger)
  • MQTT - Trigger collection when a message is received on a specified MQTT topic
  • Siemens S7 - Trigger collection based on PLC data from Siemens S7 protocol
  • Modbus TCP - Trigger collection based on Modbus TCP register values

For most use cases, select "None" to collect images at regular intervals. The industrial protocols (MQTT, Siemens S7, Modbus TCP) are useful for synchronizing data collection with production events.
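
To give a feel for how an MQTT trigger works conceptually, the sketch below subscribes to a topic and fires a capture callback whenever a message arrives. It uses the paho-mqtt package (1.x callback API); the broker address, topic name, and capture_image() function are placeholders, not part of Inspector's configuration.

```python
import paho.mqtt.client as mqtt

BROKER = "192.168.0.50"                  # hypothetical MQTT broker on the plant network
TRIGGER_TOPIC = "line_a/camera/trigger"  # hypothetical topic published by the PLC/SCADA side

def capture_image():
    print("trigger received -> capture one image")   # stand-in for the actual capture call

def on_message(client, userdata, msg):
    if msg.topic == TRIGGER_TOPIC:
        capture_image()

client = mqtt.Client()                   # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TRIGGER_TOPIC)
client.loop_forever()
```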

Select the target project from the dropdown and click Next to proceed to the final step.

Step 4: Extra Settings

Configure the final parameters for data collection including how many images to capture, the capture interval, resolution, and the Inspector server IP address.

Data Collection Step 4 - Extra Settings

Data Collection Step 4: Configure Image Count (25), Interval in milliseconds (1000ms = 1 second), Inspector IP Address, and Resolution (1280x720)

Parameter Default Description
Image Count 25 Total number of images to capture in this session. Only shown when Input/Output Settings is "None".
Interval (ms) 1000 Time between captures in milliseconds. 1000ms = 1 second. Lower values capture more frequently.
Inspector IP Address Auto-detected The IP address of the Inspector server where images will be sent. Usually auto-filled with your server's IP.
Resolution 1280x720 (720p) The resolution at which images will be captured from the camera.

Available Resolutions

  • 640x480 (VGA) - Low resolution, smallest file size, fastest transfer
  • 1280x720 (720p) - HD resolution, good balance of quality and size (recommended)
  • 1920x1080 (1080p) - Full HD, higher quality for detailed inspection
  • 2560x1440 (2K) - High resolution for detecting small defects
  • 3840x2160 (4K UHD) - Ultra-high resolution for maximum detail

Choose a resolution based on your inspection requirements. Higher resolutions provide more detail but result in larger file sizes and slower transfer times.
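
A quick back-of-the-envelope calculation helps when choosing these settings. The helper below is only a planning sketch; the per-image size is a rough assumption that depends on resolution and compression.

```python
def estimate_session(image_count=25, interval_ms=1000, mb_per_image=0.5):
    """Estimate how long a collection session runs and roughly how much it stores.
    mb_per_image is an assumption (~0.5 MB for a typical 1280x720 JPEG)."""
    duration_s = image_count * interval_ms / 1000
    storage_mb = image_count * mb_per_image
    return duration_s, storage_mb

print(estimate_session())             # (25.0, 12.5) -> about 25 seconds, ~12.5 MB
print(estimate_session(500, 200, 2))  # (100.0, 1000.0) -> 500 images in 100 s, ~1 GB at higher resolution
```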

Starting Data Collection

After configuring all settings, click the Collect button to start the data collection process. The system will:

  1. Connect to the selected edge AI computer
  2. Start capturing images from the selected camera at the specified interval
  3. Transfer each captured image to the Inspector server
  4. Automatically add images to the selected project
  5. Stop after reaching the specified image count (or continue indefinitely if using external triggers)

During collection, a loading indicator shows that the process is active. Once complete, you will be redirected to the Computers page where you can see the collection status.

Tips for Effective Data Collection

  • Test First - Start with a small image count (10-25) to verify camera positioning and lighting
  • Appropriate Interval - Use longer intervals (2000-5000ms) for slow-moving scenes, shorter intervals (100-500ms) for fast processes
  • Network Stability - Ensure a stable network connection between the edge device and the Inspector server
  • Storage Space - Verify that both the edge device and the server have sufficient storage before large collections
  • Consistent Conditions - Maintain consistent lighting and camera position during collection for better training results

Video Upload and Frame Extraction

When you upload video files (MP4, MOV, AVI, MKV), Inspector automatically extracts individual frames for labeling. A Frame Skip dialog will appear allowing you to control how frames are extracted:

Frame Skip Parameter

The frame skip value determines how many frames to skip between each extracted frame:

  • 0 - Extract every frame (highest detail, largest dataset)
  • 1 - Extract every 2nd frame (skip 1 frame between extractions)
  • 5 - Extract every 6th frame (good balance for most videos)
  • 10+ - Extract fewer frames (useful for long videos or slow-moving scenes)

Choose a higher frame skip value for videos with little motion between frames to avoid redundant images in your dataset.
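
For intuition, the behavior of the frame skip parameter is roughly equivalent to the OpenCV sketch below (an illustration only, not Inspector's internal implementation).

```python
import cv2

def extract_frames(video_path, frame_skip=5):
    """Keep one frame out of every (frame_skip + 1); e.g. frame_skip=5 keeps every 6th frame."""
    cap = cv2.VideoCapture(video_path)
    kept, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % (frame_skip + 1) == 0:
            kept.append(frame)
        index += 1
    cap.release()
    return kept

frames = extract_frames("inspection_run.mp4", frame_skip=5)   # hypothetical video file
print(f"extracted {len(frames)} frames")
```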

Upload Limits and Batch Processing

Inspector handles large uploads efficiently through batch processing:

  • Files are uploaded in batches of up to 900 files at a time
  • Large uploads are automatically split into multiple batches
  • Progress is tracked across all batches
  • You can continue working while uploads process in the background
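
Conceptually, the batching behavior is just splitting the selected files into chunks, as in this sketch (the 900-file limit comes from the list above; everything else is illustrative).

```python
def split_into_batches(files, batch_size=900):
    """Split a list of file paths into upload batches of at most batch_size items."""
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]

batches = split_into_batches([f"img_{i:05d}.jpg" for i in range(2500)])
print([len(b) for b in batches])   # [900, 900, 700]
```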

Image List (Dataset Gallery)

After uploading images or when opening an existing project, you will see the Image List interface. This is the central hub for managing your dataset - viewing uploaded images, selecting images for operations, filtering by label status, and accessing trained models. The Image List provides a visual gallery of all images in your project with powerful tools for dataset management.

Image List Overview

The Image List interface showing the project workflow bar (Upload, Label, Train, Test), image count with pagination, toolbar icons, image grid with selection checkboxes, and the MODELS sidebar tab on the right

Image List Interface Layout

The Image List interface consists of several key components designed to help you efficiently manage your dataset:

Component Location Description
Project Name Top Left Displays the current project name with a back arrow to return to the Projects list. Hover to reveal an edit icon for renaming the project.
Workflow Bar Top Center Shows the project workflow steps (Upload, Label, Train, Test) with checkmarks indicating completed steps. Click any step to navigate directly to that stage.
Image Count Below Project Name Shows the current page range and total image count (e.g., "1-20 of 50 images"). Click the number dropdown to change images per page (20, 50, 100, 500, 1000).
Toolbar Top Right Contains action buttons for Export, Drop Duplicates, Upload, AI Model Selection, and Filter.
Image Grid Center Displays image thumbnails in a responsive grid. Each image has a selection checkbox and optional status indicators.
Pagination Bottom Center Navigate between pages of images using numbered buttons or previous/next arrows.
MODELS Sidebar Right Edge Collapsible panel showing trained models for this project with their performance metrics.

Toolbar Actions

The toolbar at the top right of the Image List provides quick access to important dataset management functions:

Icon Action Description
Export (Arrow Up) Export Dataset Export your dataset as a ZIP file. You can choose the export format (COCO, YOLO, etc.) and include annotations. Useful for backup or transferring to other systems.
Search (Magnifying Glass) Drop Duplicates Automatically detect and remove duplicate images from your dataset. This helps clean up datasets that may have accidentally received the same images multiple times.
Cloud Upload Upload Images/Videos Opens the Upload dialog to add more images or videos to your project. Same interface as the initial upload after project creation.
Robot (SmartToy) AI Model Selection Select an AI model for assisted labeling. Opens a menu with Zero-Shot Models, Pre-trained Models, and Trained Models. The selected model will be used when you press 'L' in the labeling interface.
Filter (Funnel) Filter Images Filter which images are displayed based on their labeling status or class assignments. A badge shows the number of active filters.
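
Drop Duplicates rests on the idea of identifying identical images in a dataset. Purely as an illustration (not Inspector's actual implementation, which may also catch near-duplicates), a simple hash-based check looks like this:

```python
import hashlib
from pathlib import Path

def find_exact_duplicates(folder):
    """Group image files by content hash; groups with more than one path are byte-identical duplicates."""
    groups = {}
    for path in sorted(Path(folder).glob("*.jpg")):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        groups.setdefault(digest, []).append(path)
    return [paths for paths in groups.values() if len(paths) > 1]

for dup_group in find_exact_duplicates("dataset_images"):   # hypothetical local folder
    print("duplicates:", [p.name for p in dup_group])
```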

Filtering Images

The Filter feature allows you to focus on specific subsets of your dataset. Click the Filter button to open the filter menu:

Filter Menu

The Filter menu showing options to filter by Labeled, Unlabeled, Test set status, and by specific classes (e.g., "Blue" class)

Filter Option Description
Labeled Show only images that have at least one annotation (bounding box or polygon)
Unlabeled Show only images without any annotations - useful for finding images that still need labeling
Test set Show only images marked as part of the test set (used for model evaluation, not training)
Class Names Filter by specific classes - show only images containing annotations of the selected class(es)

You can combine multiple filters - for example, select both "Labeled" and a specific class to see only labeled images containing that class. The filter badge on the toolbar shows how many filters are currently active.

Image Selection and Actions

You can select images for bulk operations using several methods:

  • Click Checkbox: Click the checkbox icon in the top-left corner of any image thumbnail to select/deselect it
  • Click Image (when selected): When images are already selected, clicking another image toggles its selection
  • Drag Selection: Click and drag on the background to create a selection rectangle - all images within the rectangle will be selected
  • Select All: When images are selected, a "Select All" button appears to select all images on the current page
  • Deselect All: Click "Deselect All" to clear your selection

When images are selected, additional options appear in the toolbar:

  • Selection Count: Shows how many images are currently selected (e.g., "5 image(s) selected")
  • Delete: Delete the selected images from your dataset (requires confirmation)

Image Status Indicators

Each image thumbnail displays visual indicators to help you understand its status:

Indicator Location Meaning
Checkbox (Circle with Check) Top Left Blue when selected, dark gray when not selected. Click to toggle selection.
Test Set Icon (T) Top Right Appears on hover or when image is in test set. Blue when in test set, gray otherwise. Click to toggle test set status.
"Unlabeled" Badge Bottom Right Displayed on images that have no annotations yet. Helps identify images that need labeling.

Opening Images for Labeling

To open an image in the labeling interface, simply click on the image thumbnail (when no images are selected). This will open the full labeling interface where you can:

  • Draw bounding boxes (Object Detection) or polygons (Object Segmentation)
  • Use AI Assist to automatically generate annotations
  • Navigate between images using the carousel or keyboard shortcuts
  • Manage classes and view annotation details

Note for Visual Anomaly Detection Projects

For Visual Anomaly Detection projects, clicking on images does not open the labeling interface since these projects don't require manual annotation. Instead, the model learns from all uploaded images as "normal" samples.

MODELS Sidebar

The MODELS tab on the right side of the screen provides quick access to all trained models for the current project. Click the tab to expand the sidebar:

MODELS Sidebar

The MODELS sidebar showing trained models with their performance metrics (F1 Score, mAP, Precision, Recall) and configuration details (Model Type, Epochs, Batch Size, Image Size, Algorithm)

Each model card in the sidebar displays:

  • Model Name: The name you assigned during training
  • Performance Metrics: F1 Score, mAP 0.5, mAP 0.9, Precision, Recall
  • Configuration: Model Type (Tiny/Small/Medium/Large), Epochs, Batch Size, Image Size, Algorithm
  • Actions Menu: Three-dot menu for additional options like viewing details or deleting the model

Click on a model card to navigate to the Training interface and view detailed results, or to use the model for testing.

Pagination and Images Per Page

For large datasets, images are displayed across multiple pages. You can control pagination using:

  • Page Numbers: Click numbered buttons at the bottom to jump to a specific page
  • Previous/Next Arrows: Navigate one page at a time
  • Images Per Page Dropdown: Click the number next to the image count (default: 20) to change how many images are displayed per page. Options: 20, 50, 100, 500, 1000

Tips for Managing Large Datasets

  • Use Filters: Filter by "Unlabeled" to quickly find images that need annotation
  • Increase Page Size: Set images per page to 100 or 500 for faster navigation through large datasets
  • Use Drag Selection: Quickly select multiple images by dragging a selection rectangle
  • Check for Duplicates: Use "Drop Duplicates" to clean up accidentally duplicated images
  • Export Regularly: Export your dataset periodically as a backup

After Upload: Next Steps

Once your images are uploaded, the project workflow guides you through the next steps:

  1. For Object Detection/Segmentation projects: Proceed to the Label step to annotate objects in your images
  2. For Visual Anomaly Detection projects: Proceed directly to the Train step (no labeling required - the model learns from "normal" images)

Best Practices for Uploading Data

  • Image Quality: Use high-resolution images for better model accuracy. Avoid blurry or poorly lit images.
  • Variety: Include images with different lighting conditions, angles, and backgrounds to improve model generalization.
  • Consistency: Ensure all images are relevant to your detection task and contain the objects you want to detect.
  • Dataset Size: Start with at least 50-100 images per class for Object Detection. Visual Anomaly Detection requires at least 15 "normal" images.
  • File Naming: Use descriptive file names to help organize your dataset.

Labeling

The Labeling interface is where you annotate objects in your images to train AI models. This is a crucial step that directly impacts model accuracy. Inspector provides a powerful, intuitive labeling interface with support for bounding boxes (Object Detection), polygons (Object Segmentation), and AI-assisted annotation tools. This section covers everything you need to know to efficiently label your dataset.

Getting Started with Labeling

To access the labeling interface, navigate to your project and click the Label tab in the project workflow bar. The labeling interface will open with your first image displayed on the canvas.

Labeling Interface Overview

The labeling interface for Object Detection projects showing the main canvas, image carousel at the bottom, and right sidebar with Objects/Classes panels

Interface Layout

The labeling interface is divided into several key areas, each designed to help you work efficiently:

Component Location Description
Image Canvas Center The main workspace where you view images and draw annotations. Supports zoom (mouse wheel) and pan (drag in Pan mode).
Toolbar Top Contains mode buttons (Labeling, Pan, AI Assist), navigation controls, undo/redo, filter, and other tools.
Image Carousel Bottom Thumbnail strip showing all images in your dataset. Click to navigate between images.
Right Sidebar Right Contains three expandable panels: Hotkeys (H), Objects, and Classes. Also shows class quick-select buttons.
Project Name Top Left Displays the current project name as a watermark overlay.

Cursor Modes

The labeling interface has four cursor modes that determine how your mouse interacts with the canvas:

Labeling Mode (W)

The default mode for creating annotations. Click and drag to draw bounding boxes (Object Detection) or click to add polygon points (Object Segmentation). Press W to activate.

Pan Mode (S)

Navigate around the image by clicking and dragging. Useful when zoomed in on large images. Press S to activate.

AI Assist Mode (L)

Use AI models to automatically detect and annotate objects. Requires selecting an AI model first. Press L to activate.

Polygon AI Assist (P)

Available only in Object Segmentation projects. Uses SAM (Segment Anything Model) to automatically generate polygon masks based on click points. Press P to activate.

Object Detection Labeling (Bounding Boxes)

For Object Detection projects, you annotate objects by drawing rectangular bounding boxes around them. Here's how to create and manage bounding box annotations:

Creating a Bounding Box

  1. Ensure you are in Labeling Mode (press W or click the crop icon in the toolbar)
  2. Select the appropriate class from the right sidebar (use number keys 1-9 for quick selection)
  3. Click and hold at one corner of the object you want to annotate
  4. Drag to the opposite corner while holding the mouse button
  5. Release the mouse button to complete the bounding box

Bounding Box Annotations

Object Detection labeling showing bounding box annotations with the Objects panel displaying annotation details (coordinates, dimensions, and class)

Editing a Bounding Box

To modify an existing bounding box:

  • Select - Click on a bounding box or select it from the Objects panel
  • Resize - Drag the corner or edge handles to adjust size
  • Move - Drag the center of the selected box to reposition it
  • Change Class - With the box selected, click a different class in the sidebar or press 1-9
  • Delete - Press Delete to remove the selected annotation

Object Segmentation Labeling (Polygons)

For Object Segmentation projects, you create pixel-precise polygon masks around objects. This provides more accurate boundaries than bounding boxes.

Polygon Annotations

Object Segmentation labeling showing polygon annotations with different colors representing different classes (welding defect types)

Creating a Polygon

  1. Ensure you are in Labeling Mode (press W)
  2. Select the appropriate class from the right sidebar
  3. Click to place the first point of your polygon
  4. Continue clicking to add more points around the object boundary
  5. To close the polygon, click near the starting point (within 3 pixels)

Polygon Objects Panel

Objects panel for polygon annotations showing multiple objects with their class assignments and coordinates

AI-Assisted Polygon Labeling (SAM)

Inspector integrates with SAM (Segment Anything Model) to help you create polygon masks faster:

  1. Press P to enter Polygon AI Assist mode
  2. Left-click on points inside the object you want to segment (positive points)
  3. Right-click on points outside the object to exclude areas (negative points)
  4. The AI will automatically generate a polygon mask based on your clicks
  5. Press N to reset and start a new polygon

Objects Panel

The Objects panel displays all annotations on the current image. Click the Objects tab on the right sidebar to expand it.

Feature Description
Object List Shows all annotations with their class, coordinates, and dimensions
Selection Click an object to select it on the canvas
Visibility Toggle Eye icon to show/hide individual annotations
Delete Trash icon to delete individual annotations
Clear All Button to remove all annotations from the current image
Track/Untrack Mark annotations for tracking across video frames

Classes Panel

The Classes panel allows you to manage object classes for your project. Click the Classes tab on the right sidebar to expand it.

Classes Panel

Classes panel showing the class list with color indicators and hotkey numbers for quick selection

Polygon Classes Panel

Classes panel for a segmentation project showing multiple classes (WM300-WM307) with their assigned colors and hotkeys

Class Management

  • Create New Class - Click the "Create New Class" button to add a new class
  • Select Class - Click a class or press 1-9 to select it for annotation
  • Class Colors - Each class has a unique color for visual distinction
  • Visibility - Toggle class visibility to show/hide all annotations of that class
  • Hot Keys - Classes 1-9 can be quickly selected using number keys

Hotkeys Panel

The Hotkeys panel shows all available keyboard shortcuts and allows you to customize them. Click the H button on the right sidebar to expand it.

Hotkeys Panel

Hotkey Editor showing all available keyboard shortcuts with their current key bindings and edit buttons

Complete Keyboard Shortcuts Reference

Master these keyboard shortcuts to significantly speed up your labeling workflow:

Action Primary Shortcut Alternative Description
Labeling Mode W - Switch to annotation drawing mode
Pan Mode S - Switch to canvas panning mode
AI Assist L - Trigger AI-assisted annotation
Polygon AI Assist P - Enter SAM polygon assist mode (Segmentation only)
Reset Circle Points N - Clear SAM click points and start new polygon
Undo Ctrl+Z Q Undo the last action
Redo Ctrl+Shift+Z E Redo the previously undone action
Previous Image Arrow Left A Navigate to the previous image
Next Image Arrow Right D Navigate to the next image
Save Image Ctrl+S - Save current annotations
Unselect ROI I - Deselect the currently selected annotation
Delete ROI Delete - Delete the selected annotation
Duplicate ROI Ctrl+D - Create a copy of the selected annotation
Copy ROI Ctrl+C - Copy the selected annotation to clipboard
Paste ROI Ctrl+V - Paste the copied annotation
Track/Untrack T - Toggle tracking status for the selected annotation
Select Class 1-9 1 - 9 - Quickly select a class by its number

Image Filtering

Use the Filter button in the toolbar to filter which images are displayed in the carousel:

  • Labeled - Show only images that have annotations
  • Unlabeled - Show only images without annotations
  • Test Set - Show only images marked for testing
  • By Class - Filter images containing specific classes

Canvas Controls

Navigate and zoom the canvas to work with images of any size:

  • Zoom In/Out - Use the mouse scroll wheel to zoom
  • Pan - In Pan mode (S), click and drag to move around the image
  • Fit to Screen - The image automatically fits to the canvas when loaded
  • Fullscreen - Click the fullscreen button in the toolbar to maximize the labeling interface

Best Practices for Labeling

Follow these tips to create high-quality annotations that will improve your model's accuracy:

  1. Be Consistent - Apply the same labeling standards across all images
  2. Tight Bounding Boxes - Draw boxes that closely fit the object without excessive padding
  3. Label All Instances - Make sure to annotate every visible instance of each class
  4. Handle Occlusion - For partially visible objects, annotate only the visible portion
  5. Use Keyboard Shortcuts - Learn the hotkeys to significantly speed up your workflow
  6. Save Frequently - Press Ctrl+S regularly to save your work
  7. Review Your Work - Use the Objects panel to verify all annotations before moving to the next image

Toolbar Reference

The toolbar at the top of the labeling interface provides quick access to all labeling functions:

Button Function Hotkey
Crop Icon Labeling Mode - Draw annotations W
Hand Icon Pan Mode - Navigate the canvas S
Robot Icon AI Assist - Auto-annotate with AI L
Circle Check Icon Polygon AI Assist (Segmentation only) P
Filter Icon Filter images by status or class -
Left Arrow Previous image A or Arrow Left
Right Arrow Next image D or Arrow Right
Undo Icon Undo last action Ctrl+Z or Q
Redo Icon Redo last undone action Ctrl+Shift+Z or E
Trash Icon Delete current image -
Fullscreen Icon Toggle fullscreen mode -
Exit Icon Exit labeling and return to project -

AI Assist Mode - Comprehensive Guide

AI Assist Mode is one of Inspector's most powerful features, enabling you to dramatically speed up the labeling process by using AI models to automatically detect and annotate objects in your images. Instead of manually drawing every bounding box or polygon, you can leverage pre-trained models, zero-shot models, or your own trained models to generate annotations automatically. This section provides a complete guide to understanding and using AI Assist Mode effectively.

What is AI Assist Mode?

AI Assist Mode uses machine learning models to automatically detect objects in your images and create annotations for them. This can reduce labeling time by 80-90% compared to manual annotation, especially for datasets with many similar objects. The AI generates bounding boxes (for Object Detection) or polygon masks (for Object Segmentation) that you can then review, adjust, or accept.

Types of AI Models for Assistance

Inspector supports three types of AI models for assisted labeling, each with different use cases:

Model Type Description Best For
Zero-Shot Models (YOLOE) Pre-built models that can detect objects based on text descriptions without any training. Uses YOLOE (You Only Look Once - Everything) technology. Quick labeling of common objects, starting new projects, when you don't have a trained model yet
Pre-trained Models Models uploaded to Inspector that were trained on external datasets. These models have learned to detect specific object classes. Specialized domains where you have access to pre-trained weights, transfer learning scenarios
Trained Models Models you have trained within Inspector on your own labeled data. These are the most accurate for your specific use case. Production labeling, iterative improvement, when you have already trained a model on similar data

Step 1: Selecting an AI Model for Assistance

Before you can use AI Assist Mode, you must first select which AI model will be used for automatic annotation. This is done from the project's image list page (Upload tab):

  1. Navigate to your project and go to the Upload tab where your images are displayed
  2. Look for the Robot Icon (SmartToy icon) in the toolbar at the top right of the image list
  3. Click the Robot Icon to open the AI Model Selection Menu
  4. The menu displays three categories of models:
    • Zero-Shot Models - YOLOE and other models that work without training
    • Pre-trained Models - Models you've uploaded to Inspector
    • Trained Models - Models you've trained within this project
  5. Click on a model to select it
  6. After selection, the Label Mapping Dialog will appear (see next step)

AI Model Selection Menu

AI Model Selection Menu showing Zero-Shot Models (YOLOE), Pre-trained Models, and Trained Models categories. Click on a model to select it for AI-assisted annotation.

Once selected, the model name will be displayed below the toolbar showing "Selected AI Model For Assist: [Model Name]".

Step 2: Configuring Label Mapping

After selecting an AI model, you need to configure how the model's output labels map to your project's classes. This is crucial because the AI model may use different class names than your project.

Why Label Mapping Matters

For example, if your project has a class called "WeldPorosity" but the AI model outputs "porosity", you need to map "porosity" to "WeldPorosity" so the annotations are assigned to the correct class. You can also choose to "Ignore" certain model outputs if they don't apply to your project.

The Label Mapping Dialog shows:

  • Project Labels (left column) - The classes defined in your project
  • Model Labels (right column) - Dropdown menus to select which model output maps to each project class
  • Ignore Option - Select "Ignore" to exclude a project class from AI-assisted annotations

Configure the mapping carefully, then click Save to store your mapping. The mapping is saved per model, so you only need to configure it once for each model you use.
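
Conceptually, the mapping is just a lookup from model output labels to project classes, with an explicit ignore option, as in this sketch (class names are taken from the example above; the dictionary itself is purely illustrative):

```python
# Hypothetical mapping from model output labels to project classes.
LABEL_MAP = {
    "porosity": "WeldPorosity",
    "incomplete_weld": "IncompleteWeld",
    "dent": "WeldDent",
    "spatter": None,               # None plays the role of "Ignore"
}

def map_label(model_label):
    """Return the project class for a model label, or None if it should be ignored."""
    return LABEL_MAP.get(model_label)

print(map_label("porosity"))       # WeldPorosity
print(map_label("spatter"))        # None -> annotation is skipped
```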

Label Mapping Dialog

Link Project Labels to Model Labels dialog. Map each project label (IncompleteWeld, WeldPorosity, WeldDent) to the corresponding model label. You can also choose to ignore labels you don't want to use.

Step 3: Using AI Assist in the Labeling Interface

Once you have selected an AI model and configured label mapping, you can use AI Assist Mode in the labeling interface:

  1. Open the labeling interface by clicking on any image in your project
  2. Click the Robot Icon (SmartToy) in the toolbar, or press L
  3. The AI Assist Threshold Dialog will appear
  4. Enter a Threshold value between 0 and 1 (see threshold explanation below)
  5. Click OK to run AI-assisted annotation
  6. The AI model will analyze the current image and automatically create annotations
  7. Review the generated annotations in the Objects panel on the right
  8. Adjust, delete, or add annotations as needed
  9. Press Ctrl+S to save your work

Understanding the Threshold Parameter

The threshold is a confidence value between 0 and 1 that controls how selective the AI is when creating annotations:

Threshold Value Behavior When to Use
Low (0.1 - 0.3) More detections, including uncertain ones. May include false positives. When you want to catch all possible objects and manually filter out incorrect ones. Good for recall-focused labeling.
Medium (0.4 - 0.6) Balanced detection. Reasonable confidence with moderate false positives. General-purpose labeling. Good starting point for most projects.
High (0.7 - 0.9) Fewer detections, but higher confidence. May miss some objects. When you want only high-confidence detections. Good for precision-focused labeling.
Very High (0.9 - 1.0) Only the most confident detections. Will miss many objects. When you need very precise annotations and prefer to manually add missed objects.

Threshold Best Practice

Start with a threshold of 0.5 and adjust based on results. If you see too many false positives (incorrect detections), increase the threshold. If the AI is missing objects you want to detect, decrease the threshold. The optimal threshold depends on your model's accuracy and your tolerance for manual corrections.
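
In effect, the threshold acts as a confidence cut-off on the model's raw detections, along the lines of this sketch (the detection structure shown is illustrative, not Inspector's internal format):

```python
def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence meets or exceeds the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

raw = [
    {"class": "WeldPorosity", "confidence": 0.83, "box": [120, 40, 35, 28]},
    {"class": "WeldPorosity", "confidence": 0.27, "box": [300, 90, 22, 19]},
]
print(len(filter_detections(raw, 0.5)))   # 1 -> the low-confidence detection is dropped
print(len(filter_detections(raw, 0.2)))   # 2 -> lowering the threshold keeps both
```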

AI Assist for Object Detection (Bounding Boxes)

When using AI Assist Mode in an Object Detection project, the AI model will automatically create bounding box annotations around detected objects:

  • Each detected object gets a rectangular bounding box
  • The box is assigned to the appropriate class based on your label mapping
  • Annotations appear in the Objects panel with coordinates (x, y, width, height)
  • You can resize, move, or delete any annotation as needed
  • Duplicate detections are automatically filtered out

AI Assist Detection Results

AI Assist automatically detected 5 WeldPorosity defects in this welding image. The Objects panel on the right shows each detected object with its coordinates and class assignment.

Multiple Images Labeled with AI Assist

After running AI Assist, you can navigate through images using the carousel at the bottom. The progress bar shows how many images have been labeled. Each image shows the AI-generated bounding boxes that you can review and adjust.

AI Assist for Object Segmentation (Polygons)

When using AI Assist Mode in an Object Segmentation project, the AI model creates polygon masks that precisely outline detected objects:

  • Each detected object gets a polygon mask following its contours
  • Polygons provide pixel-level precision for object boundaries
  • The polygon is assigned to the appropriate class based on your label mapping
  • You can edit polygon points to refine the mask if needed
  • Works with both trained models and zero-shot models (YOLOE)

Polygon AI Assist Mode (SAM)

In addition to the standard AI Assist Mode, Object Segmentation projects have access to Polygon AI Assist Mode, which uses SAM (Segment Anything Model) for interactive segmentation:

What is SAM?

SAM (Segment Anything Model) is a foundation model developed by Meta AI that can segment any object in an image based on user-provided points. Unlike standard AI Assist which detects all objects automatically, SAM allows you to interactively select specific objects by clicking on them.
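
For background, this is roughly how point-prompted segmentation looks with Meta's open-source segment-anything package; Inspector drives the equivalent behind its UI, so you never write this yourself. The checkpoint path, image file, and click coordinates are placeholders.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")    # placeholder checkpoint path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("part.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image, RGB uint8
predictor.set_image(image)

point_coords = np.array([[410, 305], [455, 330], [60, 80]])      # pixel (x, y) click positions
point_labels = np.array([1, 1, 0])                               # 1 = inside the object, 0 = excluded area

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=False,
)
object_mask = masks[0]   # boolean HxW mask that a polygon can be traced from
```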

To use Polygon AI Assist Mode:

  1. Press P or click the Circle Check Icon in the toolbar to enter Polygon AI Assist mode
  2. Left-click on points inside the object you want to segment (positive points - shown in green)
  3. Right-click on points outside the object to exclude areas (negative points - shown in red)
  4. SAM will automatically generate a polygon mask based on your clicks
  5. Continue adding points to refine the segmentation
  6. Press N to reset and start a new polygon
  7. Select the appropriate class for the generated polygon

Segmentation Project with SAM Mode

Object Segmentation project showing the labeling interface. The Classes panel on the right shows available classes. You can use Polygon AI Assist (SAM) to quickly segment objects by clicking on them.

SAM Polygon Generation Result

SAM-generated polygon masks for objects. The white dots around the objects show the polygon points that SAM automatically generated based on user clicks. You can see the precise boundary detection around the metallic objects.

Workflow: Efficient Labeling with AI Assist

Here's a recommended workflow for using AI Assist Mode to label your dataset efficiently:

  1. Initial Setup
    • Create your project and define all classes
    • Upload your images
    • Select an AI model (start with Zero-Shot if you don't have a trained model)
    • Configure label mapping
  2. First Pass - AI Annotation
    • Open the labeling interface
    • Press L to trigger AI Assist with threshold 0.5
    • Review the generated annotations
    • Delete false positives, adjust bounding boxes/polygons as needed
    • Manually add any missed objects
    • Save and move to the next image
  3. Train Your Own Model
    • After labeling 50-100 images, train a model on your data
    • Select your trained model as the AI Assist model
    • Continue labeling with improved accuracy
  4. Iterative Improvement
    • As you label more images, periodically retrain your model
    • Each iteration improves AI Assist accuracy
    • Eventually, AI Assist will require minimal corrections

AI Assist Keyboard Shortcuts

Shortcut Action Description
L AI Assist Opens the threshold dialog and triggers AI-assisted annotation on the current image
P Polygon AI Assist Enters SAM mode for interactive polygon segmentation (Segmentation projects only)
N Reset Circle Points Clears SAM click points and starts a new polygon
W Labeling Mode Returns to manual labeling mode

Troubleshooting AI Assist

Issue Possible Cause Solution
"You must select an AI model firstly" error No AI model has been selected for assistance Go to the Upload tab, click the Robot icon, and select a model
No annotations generated Threshold too high or model doesn't recognize objects Lower the threshold value or try a different model
Too many false positives Threshold too low Increase the threshold value (try 0.7 or higher)
Wrong class assignments Label mapping not configured correctly Re-select the model and update the label mapping
AI Assist is slow Large image or complex model Wait for processing to complete; consider using a smaller model
SAM not generating polygons Not enough click points or points placed incorrectly Add more positive points inside the object and negative points outside

Best Practices for AI Assist

  1. Start with Zero-Shot - If you don't have a trained model, YOLOE zero-shot models can provide a good starting point for common objects
  2. Configure Label Mapping Carefully - Incorrect mapping will result in annotations assigned to wrong classes
  3. Use Appropriate Threshold - Start at 0.5 and adjust based on your model's performance
  4. Always Review AI Annotations - AI Assist is meant to speed up labeling, not replace human review
  5. Train Your Own Model - For best results, train a model on your specific data and use it for AI Assist
  6. Iterate and Improve - Periodically retrain your model as you label more data
  7. Use SAM for Complex Shapes - For objects with irregular boundaries, Polygon AI Assist (SAM) often produces better results than automatic detection
  8. Save Frequently - Press Ctrl+S after reviewing and correcting AI-generated annotations

Training

Training is the process of teaching an AI model to recognize patterns in your labeled data. Inspector provides a powerful, flexible training interface that supports multiple algorithms, training modes, and advanced hyperparameters. This section covers everything you need to know to train high-quality AI models, from basic Express Mode training to advanced Standard Mode with custom hyperparameters.

Getting Started with Training

To access the training interface, navigate to your project and click the Train tab in the project workflow bar. The training interface is divided into three main sections:

Section Description
Project Displays project information including name, type, and dataset statistics (image count and labeled image count)
Model Information Shows a summary of current training configuration: Model Type, Image Size, Epochs, Algorithm, Batch Size, and Training Mode
Model Settings The main configuration panel where you set all training parameters

Training Interface Express Mode

Training interface for Object Detection showing Express Mode selected, with Model Information card on the left and Model Settings panel on the right

Training Modes

Inspector offers two training modes to accommodate different user needs and experience levels:

Express Mode

Simplified training with optimized default settings. In Express Mode, you only need to configure basic parameters (Algorithm, Model Size, Resolution), while Epochs and Batch Size are automatically set to optimal values. Best for beginners, quick experiments, and getting started with model training.

Standard Mode

Full control over all training parameters including Epochs, Batch Size, and advanced hyperparameters. Standard Mode unlocks additional configuration options for learning rate, momentum, augmentation, and more. Recommended for production models, fine-tuning, and experienced users who want maximum control.

Training Interface Standard Mode

Training interface with Standard Mode selected, showing editable Epochs and Batch Size fields

Fine-tuning Models

Inspector supports fine-tuning, which allows you to train a new model starting from a previously trained model's weights. This is useful when you want to improve an existing model with additional data or adapt it to slightly different conditions.

To enable fine-tuning:

  1. Check the Fine-tune Model checkbox in the Model Settings panel
  2. Select a previously trained model from the dropdown (only models trained on the same project are available)
  3. Configure your training parameters as usual
  4. Click TRAIN to start fine-tuning

Tip: When to Use Fine-tuning

Fine-tuning is most effective when you have a well-performing base model and want to improve it with additional labeled data. It typically requires fewer epochs than training from scratch and can achieve better results faster.

Algorithms for Object Detection and Segmentation

For Object Detection and Object Segmentation projects, Inspector supports the YOLO family of algorithms and RT-DETR (Real-Time Detection Transformer). Each algorithm has different characteristics in terms of speed, accuracy, memory usage, and stability.

Algorithm Comparison Guide

Algorithm Comparison Guide showing YOLO Family (YOLOv7, YOLOv8, YOLOv11, YOLOv12) and Transformer-based (RT-DETR) algorithms with their speed, memory, accuracy, and stability ratings

YOLO Family Algorithms

Algorithm Description Speed Memory Accuracy Stability
YOLOv7 Proven and stable YOLO version. Best for production environments where stability is critical. Good Medium Good Excellent
YOLOv8 (Recommended) Enhanced architecture and training. Offers the best balance of speed, accuracy, and stability. Better Medium Better Good
YOLOv11 Latest optimizations and improvements. Excellent for high-performance applications. Excellent Medium Excellent Good
YOLOv12 Cutting-edge YOLO with best accuracy. Use when maximum accuracy is required. Excellent Medium Maximum Fair

Transformer-based Algorithm

Algorithm Description Speed Memory Accuracy Stability
RT-DETR Real-Time Detection Transformer. Uses attention mechanisms for excellent accuracy, especially on complex scenes. Good High Excellent Good

Algorithm Selector Dropdown

Algorithm selector dropdown showing all available algorithms: YOLOv7, YOLOv8, YOLOv11, YOLOv12, and RT-DETR

Model Size

Model size determines the complexity and capacity of the neural network. Larger models can learn more complex patterns but require more memory and compute time.

Size Parameters Speed Memory Best For
Nano/Tiny ~3-6M Very Fast Very Low Edge devices, real-time applications, resource-constrained environments
Small ~11M Fast Medium Balanced performance, general-purpose applications
Medium ~26M Medium High Higher accuracy requirements, server deployments
Large ~44M Slower Very High Maximum accuracy, offline processing, powerful hardware

Note: Model Size Names

YOLOv8, YOLOv11, YOLOv12, and RT-DETR use "Nano" instead of "Tiny" for the smallest model size. YOLOv7 uses "Tiny". The functionality is equivalent.

Resolution (Image Size)

Resolution determines the input image size for the model. Higher resolutions provide better accuracy for detecting small objects but require more memory and training time.

Training Settings Full View

Full training settings panel showing Model Size options (Tiny, Small, Medium, Large), Resolution options (320x320 to 1024x1024), Epochs, Batch Size, and TRAIN button

Resolution Training Speed Memory Usage Best For
320x320 Fastest Lowest Quick experiments, large objects, limited GPU memory
480x480 Fast Low Medium-sized objects, balanced performance
640x640 (Recommended) Medium Medium General-purpose, good balance of accuracy and speed
736x736 Slower High Smaller objects, higher accuracy requirements
1024x1024 Slowest Highest Very small objects, maximum accuracy, powerful GPUs

Basic Training Parameters

Parameter Description Default Recommendation
Model Name A unique name for your trained model - Use descriptive names like "product-detection-v1"
Epochs Number of complete passes through the training dataset 100 Start with 100, increase if model is underfitting
Batch Size Number of images processed in each training iteration 16 Reduce if you get out-of-memory errors, increase for faster training
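
Inspector runs training for you, so no code is required; purely for orientation, these are the same hyperparameters as exposed by the open-source Ultralytics trainer. The dataset YAML below is a placeholder.

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")        # Small model size; "yolov8n.pt" would be Nano
model.train(
    data="dataset.yaml",          # placeholder dataset definition (image paths + class names)
    epochs=100,                   # complete passes through the training set
    batch=16,                     # images per training iteration; lower this on out-of-memory errors
    imgsz=640,                    # input resolution, matching the 640x640 recommendation above
)
```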

Algorithms for Visual Anomaly Detection

Visual Anomaly Detection projects use specialized algorithms designed to learn from normal samples and detect anomalies. These algorithms do not require labeled defect images - they learn what "normal" looks like and flag anything different.

Anomaly Detection Training Interface

Training interface for Visual Anomaly Detection projects showing the simplified workflow (Upload, Train, Test) and algorithm selection

Anomaly Detection Algorithm Selector

Algorithm selector for Visual Anomaly Detection showing PaDiM, EfficientAD, and DINO/DINOv3 options

Anomaly Detection Algorithms

Algorithm Description Speed Memory Accuracy Complexity
PaDiM Patch Distribution Modeling. Statistical approach that models the distribution of image patches. Good interpretability with anomaly heatmaps. Fast Low Good Low
EfficientAD Efficient Anomaly Detection. Optimized for speed and resource usage. Ideal for real-time applications and edge deployment. Very Fast Very Low Good Low
DINO/DINOv3 (Recommended) Self-supervised Vision Transformer. Uses powerful pre-trained features for excellent anomaly detection accuracy. Medium High Excellent Medium

Anomaly Detection Algorithm Guide

Anomaly Detection Algorithm Comparison Guide showing Statistical Methods (PaDiM), Efficient Methods (EfficientAD), and Transformer-based (DINO/DINOv3) with their characteristics

Choosing an Anomaly Detection Algorithm

DINO/DINOv3 provides the best accuracy and is recommended for most use cases. EfficientAD is the fastest and best for real-time edge deployment. PaDiM offers good interpretability with clear anomaly heatmaps.

Starting Training

Once you have configured all your training parameters, click the TRAIN button to start training. The training process will begin and you can monitor progress in real-time.

Training Requirements

  • Object Detection/Segmentation: Requires labeled images with bounding boxes or polygons
  • Visual Anomaly Detection: Requires at least 15 images of normal samples (no manual annotation needed)

Training Results and Statistics

After training completes, Inspector displays comprehensive training results including performance metrics, training curves, and sample predictions.

Object Detection/Segmentation Results

Object Detection Training Results

Training results for Object Detection showing F1 Score, mAP 0.5, mAP 0.9, Precision, Recall, training progress, Quick Inspection images, and Model Statistics curves (F1, P, R, PR curves)

Metric Description Good Value
F1 Score Harmonic mean of Precision and Recall. Balanced measure of model performance. > 0.90
mAP 0.5 Mean Average Precision at 50% IoU threshold. Standard detection metric. > 0.90
mAP 0.9 Mean Average Precision at 90% IoU threshold. Measures precise localization. > 0.70
Precision Ratio of correct positive predictions to total positive predictions. > 0.90
Recall Ratio of correct positive predictions to total actual positives. > 0.90
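
The F1 Score is derived directly from Precision and Recall; a quick check of the relationship:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.94, 0.91), 3))   # 0.925 -> a balanced model
print(round(f1_score(0.99, 0.50), 3))   # 0.664 -> high precision alone does not give a high F1
```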

Model Statistics Visualizations (Object Detection/Segmentation)

  • F1_curve.png - F1 score vs confidence threshold curve
  • P_curve.png - Precision vs confidence threshold curve
  • R_curve.png - Recall vs confidence threshold curve
  • PR_curve.png - Precision-Recall curve showing the trade-off
  • confusion_matrix.png - Matrix showing true vs predicted classes
  • results.png - Training metrics over epochs (loss, mAP, etc.)

Visual Anomaly Detection Results

Anomaly Detection Training Results

Training results for Visual Anomaly Detection showing Throughput, GPU Memory, Val Mean, Val Std, Val P95, Val P99, training progress, and Model Statistics

Metric Description
Throughput (img/s) Number of images the model can process per second. Higher is better for real-time applications.
GPU Memory (MB) Amount of GPU memory used during inference. Important for edge deployment.
Val Mean Mean anomaly score on validation set. Lower values indicate better separation.
Val Std Standard deviation of anomaly scores. Lower values indicate more consistent predictions.
Val P95 95th percentile of anomaly scores. Useful for setting detection thresholds.
Val P99 99th percentile of anomaly scores. Useful for setting conservative thresholds.
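
Val P95 and Val P99 are natural starting points for a detection threshold: anything scoring above the chosen percentile of the normal validation images is flagged. A minimal sketch of that idea (the score file is hypothetical):

```python
import numpy as np

val_scores = np.load("val_anomaly_scores.npy")   # hypothetical per-image scores on normal validation data
threshold = np.percentile(val_scores, 99)        # conservative: roughly the reported Val P99

def is_anomalous(score):
    return score > threshold
```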

Model Statistics Visualizations (Anomaly Detection)

  • score_distribution.png - Distribution of anomaly scores across the validation set
  • score_boxplot.png - Box plot showing score statistics
  • model_summary.png - Summary of model architecture and parameters
  • training_loss.png - Training loss over epochs (EfficientAD only)
  • sample_anomaly_maps.png - Sample heatmaps showing detected anomaly regions (PaDiM and EfficientAD)

Managing Trained Models

After training, your models are saved and can be accessed from the MODELS panel on the right side of the training interface.

Trained Models List

Trained Models panel showing model cards with performance metrics, configuration details, and action buttons

Model Actions

  • View Details - Click on a model card to view detailed training results and statistics
  • Test Model - Click "Let's Test Trained AI Model!" to evaluate the model on new images
  • Delete Model - Click "DELETE MODEL" to remove a trained model (this action cannot be undone)
  • Fine-tune - Use the model as a starting point for training a new model

Training Best Practices

Tips for Successful Training

  • Quality over Quantity - Well-labeled images are more important than having many poorly-labeled images
  • Diverse Dataset - Include images with different lighting, angles, and backgrounds
  • Start Small - Begin with Express Mode and a smaller model to validate your approach
  • Monitor Metrics - Watch for overfitting (high training accuracy but low validation accuracy)
  • Iterate - Training is often an iterative process. Analyze results and improve your dataset
  • Use Fine-tuning - When adding new data, fine-tune from your best model rather than training from scratch

Testing

Evaluate your trained models by testing them on new images or video streams. The testing interface allows you to adjust detection thresholds and visualize model predictions.

Testing Interface

Testing interface for Visual Anomaly Detection showing threshold slider, heatmap visualization, and anomaly detection results with confidence scores

Testing Options

  • Upload Images/Videos - Test with local files (up to 10 images or 1 video)
  • Edge Device Camera - Test with live camera feed from connected edge devices

Performance Metrics

Metric Description Good Value
mAP@0.5 Mean Average Precision at 50% IoU threshold > 90%
Precision Ratio of correct positive predictions > 90%
Recall Ratio of actual positives correctly identified > 90%

Tip: Confidence Threshold

Adjust the confidence threshold slider to control the sensitivity of detections. Lower values detect more objects but may include false positives. Higher values are more selective but may miss some objects.
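
Conceptually, the threshold simply filters the model's detections by their confidence score, as in the short sketch below (an illustration, not the testing interface's code).

```python
# Keep only detections whose confidence meets or exceeds the chosen threshold.
detections = [
    {"label": "scratch", "confidence": 0.92},
    {"label": "scratch", "confidence": 0.55},
    {"label": "dent",    "confidence": 0.31},
]

def filter_by_confidence(dets, threshold):
    return [d for d in dets if d["confidence"] >= threshold]

print(len(filter_by_confidence(detections, 0.25)))  # 3 -> more detections, more false positives
print(len(filter_by_confidence(detections, 0.80)))  # 1 -> fewer detections, possible misses
```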

Computers

The Computers page is your central hub for managing edge AI devices and their connected cameras. Edge AI computers are physical devices (such as NVIDIA Jetson, industrial PCs, or other AI-capable hardware) that run your trained models for real-time inference. This section provides a comprehensive guide to managing computers, cameras, and dataflows in Inspector.

Computers Page Overview

Computers page showing registered edge AI devices with their names, IP addresses, and Cameras/Dataflows tabs. The right sidebar contains ADD COMPUTER and ADD IP CAMERA buttons.

Understanding the Computers Page Layout

The Computers page consists of two main areas: the computer cards on the left and the action panel on the right.

Computer Cards

Each registered edge AI computer is displayed as a card containing:

Element Description
Computer Name A user-defined identifier for the edge device (e.g., "Edge AI PC", "Jetson Edge AI PC")
IP Address The network address of the device (e.g., 192.168.0.104)
Cameras Button Click to expand and view all cameras connected to this computer
Dataflows Button Click to expand and view all AI dataflows deployed on this computer
Reload Icon Appears when expanded - click to refresh cameras and dataflows from the device
Three-dot Menu Access Edit and Delete options for the computer

Right Sidebar Actions

The right sidebar contains two vertical tabs for adding new devices:

  • ADD COMPUTER - Opens the panel to discover and register new edge AI computers on your network
  • ADD IP CAMERA - Opens the panel to add a new IP camera to an existing computer

Adding a New Computer

To add a new edge AI computer to Inspector, click the ADD COMPUTER tab on the right sidebar. This opens the "Find a Computer" panel.

Add Computer Panel

The Add Computer panel showing the IP Range search functionality. Enter the starting IP address and ending range to scan your network for edge AI devices.

Network Discovery Process

  1. Set IP Range - Enter the starting IP address (e.g., 192.168.0.1) and the ending octet (e.g., 255) to define the search range
  2. Click Search - Click the search icon to scan the network for edge AI devices
  3. Select Device - From the discovered devices, click "Select" on the device you want to add
  4. Configure Name - Enter a descriptive name for the computer (minimum 3 characters)
  5. Add Description - Optionally add a description to help identify the device's purpose
  6. Click Add Computer - Save the computer to your Inspector workspace

Tip: Network Requirements

Edge AI computers must be on the same network as the Inspector server and have the required services running. The discovery process looks for devices with the Inspector edge agent installed.
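
For intuition, the discovery step behaves like a simple range scan. The sketch below is only a conceptual stand-in: the actual protocol and port used by the Inspector edge agent are not documented here, so the TCP port (8000) is an assumption for illustration.

```python
# Conceptual IP-range scan: try a TCP connection to each address in the range.
# The port is a placeholder assumption, not the edge agent's real port.
import socket

def scan_range(base="192.168.0", start=1, end=255, port=8000):
    """Return addresses in base.start..base.end that accept a TCP connection on `port`."""
    found = []
    for host in range(start, end + 1):
        ip = f"{base}.{host}"
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)                    # keep the scan fast
            if s.connect_ex((ip, port)) == 0:    # 0 means the port accepted the connection
                found.append(ip)
    return found

# print(scan_range())  # e.g. ['192.168.0.104']
```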

Adding an IP Camera

To add a new IP camera to an existing computer, click the ADD IP CAMERA tab on the right sidebar. This opens the "Add an IP Camera" panel.

Add IP Camera Panel

The Add IP Camera panel showing camera type selection, computer dropdown, and camera configuration fields. The background shows an expanded Cameras view with a connected IP camera.

Camera Types

Inspector supports two types of cameras:

Camera Type Description URL Format
IP Camera Network cameras that stream video over RTSP protocol. Most common type for industrial and surveillance applications. rtsp://username:password@ip:port/path
Integrated Camera (RVC2/SOLO) Cameras directly connected to edge AI devices like OAK-D or other DepthAI hardware. IP address of the camera module

Adding an IP Camera - Step by Step

  1. Select Camera Type - Choose "IP Camera" or "Integrated Camera" based on your hardware
  2. Select Computer - Choose the edge AI computer that will receive the camera feed from the dropdown
  3. Enter Camera Name - Provide a descriptive name (minimum 3 characters)
  4. Add Description - Optionally describe the camera's location or purpose
  5. Enter URL/IP - For IP cameras, enter the full RTSP URL; for integrated cameras, enter the camera's IP address
  6. Click Add IP Camera - Save the camera configuration

Tip: RTSP URL Format

A typical RTSP URL looks like: rtsp://admin:password@192.168.0.22:554/cam/realmonitor?channel=1&subtype=0. Check your camera's documentation for the exact URL format, as path structures vary by manufacturer.
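
If you want to verify the URL before adding the camera, a quick standalone check with OpenCV (outside Inspector) is often enough; the URL below is the placeholder from the tip above.

```python
# Sanity-check an RTSP URL by opening the stream and grabbing one frame.
import cv2

url = "rtsp://admin:password@192.168.0.22:554/cam/realmonitor?channel=1&subtype=0"
cap = cv2.VideoCapture(url)

if not cap.isOpened():
    print("Could not open the RTSP stream - check the URL, credentials, and network.")
else:
    ok, frame = cap.read()
    print("Stream OK, frame size:", frame.shape if ok else "no frame received")
    cap.release()
```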

Viewing Connected Cameras

Click the Cameras button on any computer card to expand and view all connected cameras. The expanded view shows a table with detailed information about each camera.

Cameras Expanded View

Expanded Cameras view showing the connected camera list with columns for Camera Name, Type, URL, and a live preview Image. The "ofis" camera is shown as an IP Camera with its RTSP URL and thumbnail.

Camera List Columns

Column Description
Connected Camera(s) The camera name and optional description
Type Camera type: "Ip Camera", "RVC2" (integrated), or "SOLO"
URL The RTSP URL or IP address used to connect to the camera
Image A live thumbnail preview from the camera feed. Hover over integrated cameras to access the "Edit" button for camera settings.

Camera Settings (Integrated Cameras Only)

For integrated cameras (RVC2 and SOLO types), Inspector provides a comprehensive Camera Settings dialog that allows you to fine-tune image capture parameters in real-time. This is essential for optimizing image quality for your specific AI detection tasks. To access Camera Settings, hover over the camera's image thumbnail in the Cameras list and click the "Edit" button that appears.

Camera Settings Dialog with Live Preview

The Camera Settings dialog showing a live camera preview on the left and all adjustable parameters on the right. The dialog title shows the camera name (OAK-RVC4). Each parameter displays its current value in parentheses.

Camera Settings Dialog Overview

The Camera Settings dialog consists of three main areas:

  • Live Preview (Left) - Shows a real-time video stream from the camera so you can immediately see the effect of your adjustments. The preview updates automatically as you change settings.
  • Settings Panel (Right) - Contains sliders for all adjustable parameters. Each slider shows the parameter name and current value in parentheses (e.g., "Exposure (5000)").
  • Action Buttons (Bottom) - Three buttons for managing settings: Reload stream, Load default values, and Update values.

Camera Settings Parameters

The following table describes all available camera settings parameters with their valid ranges and recommended use cases:

Parameter Range Default Description
Exposure 1 - 33000 5000 Controls how long the camera sensor is exposed to light (in microseconds). Lower values result in darker images but reduce motion blur. Higher values brighten the image but may cause blur on moving objects. For industrial inspection with good lighting, values between 3000-8000 work well.
ISO 100 - 1600 800 Adjusts the sensor's light sensitivity. Lower ISO values (100-400) produce cleaner images with less noise but require more light. Higher ISO values (800-1600) work better in low light but introduce more image noise. For AI detection, lower noise is generally preferred.
Focus 0 - 255 130 Sets the lens focus position for cameras with adjustable focus. Lower values focus on closer objects, higher values focus on distant objects. The optimal value depends on the distance between the camera and the objects being inspected. Adjust until objects appear sharp in the preview.
White Balance 1000 - 12000 4000 Adjusts color temperature in Kelvin. Lower values (1000-3000) produce warmer (yellowish) tones, while higher values (6000-12000) produce cooler (bluish) tones. Set to match your lighting conditions: ~2700K for incandescent, ~4000K for fluorescent, ~5500K for daylight, ~6500K for cloudy conditions.
Brightness -10 to +10 0 Adjusts the overall brightness of the image. Negative values darken the image, positive values brighten it. Use this for fine-tuning after setting Exposure and ISO. Keep close to 0 for most accurate color representation.
Saturation -10 to +10 0 Controls color intensity. Negative values reduce color saturation (toward grayscale), positive values increase color vibrancy. For defect detection where color is important, slight positive values (1-3) can help. For texture-based detection, 0 or slightly negative may work better.
Contrast -10 to +10 0 Adjusts the difference between light and dark areas. Higher contrast makes edges more defined but may lose detail in shadows and highlights. For AI detection, moderate contrast (0-3) usually provides the best balance between edge definition and detail preservation.
Sharpness 0 - 4 1 Enhances edge definition in the image. Higher values make edges appear sharper but can introduce artifacts. For AI detection, values of 1-2 are recommended. Avoid maximum sharpness as it may create false edges that confuse detection models.
Luma Denoise 0 - 4 1 Reduces luminance (brightness) noise in the image. Higher values reduce more noise but may blur fine details. Use higher values (2-3) when shooting in low light with high ISO. Keep at 1 for well-lit environments to preserve detail.
Chroma Denoise 0 - 4 1 Reduces color noise (random colored pixels) in the image. Higher values reduce color noise more aggressively. Color noise is more common in low-light conditions. Values of 1-2 work well for most scenarios without affecting color accuracy.
Rotation 0°, 90°, 180°, 270° Rotates the camera image by the specified angle. Use this when the camera is physically mounted at an angle. Select the rotation that makes the image appear correctly oriented for your use case. This is applied before any AI processing.
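
As a way to keep the ranges and defaults from the table in one place, the sketch below models them as a small Python dataclass with validation. This is only a mental model, not Inspector's API; the default rotation of 0° is an assumption, since the table does not state one.

```python
# Mental-model sketch of the camera settings and their documented ranges.
from dataclasses import dataclass

RANGES = {
    "exposure": (1, 33000), "iso": (100, 1600), "focus": (0, 255),
    "white_balance": (1000, 12000), "brightness": (-10, 10),
    "saturation": (-10, 10), "contrast": (-10, 10), "sharpness": (0, 4),
    "luma_denoise": (0, 4), "chroma_denoise": (0, 4),
}

@dataclass
class CameraSettings:
    exposure: int = 5000
    iso: int = 800
    focus: int = 130
    white_balance: int = 4000
    brightness: int = 0
    saturation: int = 0
    contrast: int = 0
    sharpness: int = 1
    luma_denoise: int = 1
    chroma_denoise: int = 1
    rotation: int = 0  # assumed default; valid values are 0, 90, 180, 270

    def validate(self) -> None:
        for name, (lo, hi) in RANGES.items():
            value = getattr(self, name)
            if not lo <= value <= hi:
                raise ValueError(f"{name}={value} is outside the valid range {lo}..{hi}")
        if self.rotation not in (0, 90, 180, 270):
            raise ValueError("rotation must be 0, 90, 180, or 270")

# Example adjusted values (Exposure 20608, ISO 124, Focus 190) all pass validation
CameraSettings(exposure=20608, iso=124, focus=190).validate()
```
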
Camera Settings with Default Values

Camera Settings dialog showing default parameter values. The live preview area shows a loading indicator while the camera stream initializes. Default values provide a good starting point for most lighting conditions.

Action Buttons

The Camera Settings dialog provides three action buttons at the bottom:

Button Description
Reload stream Refreshes the live camera preview. Use this if the preview becomes unresponsive or shows a stale image. The stream will reconnect and display the current camera view.
Load default values Resets all camera settings to their factory default values. This is useful if you've made many changes and want to start fresh, or if the current settings are producing poor image quality.
Update values Applies the current slider settings to the camera. Changes are sent to the camera hardware and take effect immediately. The live preview will update to show the new settings. Always click this button after making adjustments to save your changes.

Camera Settings Update Values

Camera Settings dialog with adjusted parameters (Exposure: 20608, ISO: 124, Focus: 190) and the cursor hovering over the "Update values" button. The live preview shows the effect of the current settings on the captured image.

How to Adjust Camera Settings

Follow these steps to optimize camera settings for your AI detection task:

  1. Open Camera Settings - Navigate to Computers, expand the Cameras section for your device, hover over the camera thumbnail, and click "Edit".
  2. Wait for Preview - The live preview will load showing the current camera view. A loading indicator appears while connecting to the camera stream.
  3. Adjust Exposure and ISO First - Start by setting the overall brightness. Increase Exposure for brighter images, or increase ISO if you need to keep Exposure low to reduce motion blur.
  4. Set Focus - Adjust the Focus slider until objects at your target distance appear sharp in the preview.
  5. Fine-tune White Balance - Adjust White Balance to correct any color cast from your lighting. Objects that should appear white or gray should look neutral.
  6. Adjust Image Quality Settings - Fine-tune Brightness, Saturation, Contrast, and Sharpness based on your specific needs.
  7. Apply Noise Reduction - If you see noise (graininess) in the image, increase Luma Denoise and Chroma Denoise values.
  8. Set Rotation if Needed - If the camera is mounted at an angle, select the appropriate rotation to orient the image correctly.
  9. Click Update values - Apply your settings to the camera. The preview will update to show the final result.

Tips for Optimal Camera Settings

  • Consistent Lighting - Camera settings work best with consistent lighting. If your lighting varies, you may need to adjust settings periodically.
  • Test with AI Models - After adjusting settings, test your AI model's detection accuracy. Sometimes settings that look good visually may not be optimal for AI detection.
  • Document Your Settings - Note down settings that work well for your use case so you can replicate them on other cameras.
  • Avoid Extreme Values - Settings at extreme ends of their ranges often produce poor results. Start with defaults and make incremental adjustments.
  • Consider Motion - If detecting moving objects, prioritize lower Exposure values to reduce motion blur, even if it means increasing ISO.

Troubleshooting Camera Settings

Issue Possible Cause Solution
Preview not loading Camera stream connection issue Click "Reload stream" button. If still not working, check camera connectivity and ensure the camera is powered on.
Image too dark Low Exposure or ISO Increase Exposure first (up to 15000-20000), then increase ISO if needed. Also check Brightness setting.
Image too bright/washed out High Exposure or ISO Decrease Exposure and/or ISO. Reduce Brightness if it's above 0.
Image appears blurry Incorrect Focus or motion blur Adjust Focus slider until objects are sharp. If objects are moving, reduce Exposure to minimize motion blur.
Colors look wrong Incorrect White Balance Adjust White Balance to match your lighting type. Use lower values for warm lighting, higher for cool/daylight.
Grainy/noisy image High ISO or low light Increase Luma Denoise and Chroma Denoise. If possible, add more lighting and reduce ISO.
Settings not saving Forgot to click Update values Always click "Update values" after making changes. Settings are not applied until this button is clicked.

Managing Dataflows

Click the Dataflows button on any computer card to view and manage AI dataflows deployed on that device. Dataflows are the running AI pipelines that process camera feeds and generate inference results.

Dataflows Expanded View

Expanded Dataflows view showing the dataflow count (1/5), list of deployed dataflows with Input/Output/Status columns, and the "What is Dataflow?" information panel.

Understanding Dataflows

A dataflow defines how data is processed from the moment it is received from a source (such as a camera) to the moment the results are published to external systems. Each dataflow runs as a separate operating system service, so dataflows do not affect each other's performance or scalability.

Dataflow List Columns

Column Description
Dataflows on This Device Name and description of the dataflow
Input Input connector type (MQTT, Siemens S7, Modbus TCP) or "-" if none
Output Output connector type (MQTT, Siemens S7, Modbus TCP) or "-" if none
Status Green circle = running, Gray circle = stopped
Dashboard Icon Click to open the dataflow's live dashboard in a new tab
Three-dot Menu Access Edit, Start/Stop, and Delete options

Dataflow Capacity

Each edge AI computer has a maximum number of dataflows it can run simultaneously, shown as "X / Y" (e.g., "1 / 5" means 1 dataflow is deployed out of a maximum of 5). This limit depends on the device's hardware capabilities.

Dataflow Actions

  • Deploy an AI Model - Click the button at the bottom to create a new dataflow using the Deploy wizard
  • Edit - Modify the dataflow configuration (available from three-dot menu)
  • Start/Stop - Toggle the dataflow's running state
  • Delete - Remove the dataflow (only available when stopped)
  • View Dashboard - Open the live inference dashboard to see real-time results

Dataflow Details Panel

When you click on a dataflow in the list, the right panel shows detailed information about the AI model used in that dataflow, including the model name, description, and class configurations with their detection thresholds and tracking settings.

Editing and Deleting Computers

To edit or delete a computer, click the three-dot menu icon on the computer card:

  • Edit - Opens a dialog to change the computer's name and description
  • Delete - Removes the computer from Inspector (requires confirmation)

Warning: Deleting Computers

Deleting a computer will remove all associated camera configurations and dataflows. Make sure to stop all running dataflows before deleting a computer. This action cannot be undone.

Best Practices for Computer Management

  1. Use Descriptive Names - Name computers based on their location or purpose (e.g., "Assembly Line 1", "Warehouse Entry")
  2. Document Camera Locations - Use the description field to note camera positions and coverage areas
  3. Monitor Dataflow Status - Regularly check that critical dataflows are running
  4. Balance Dataflow Load - Distribute dataflows across multiple computers to avoid overloading a single device
  5. Test Camera Feeds - Verify camera connectivity before deploying AI models
  6. Keep IP Addresses Static - Configure static IP addresses for edge devices to prevent connection issues

Deploy

Deploy trained models to edge devices for real-time inference. The Deploy wizard guides you through a 5-step process to configure and launch your AI pipeline on edge devices.

Deploy Step 1 - Camera

Deploy wizard Step 1 (Camera): Configure the camera source, name, description, and resolution settings for your deployment

Deployment Steps

  1. Camera - Select the camera source and configure resolution
  2. AI Model - Choose the trained model to deploy
  3. Trigger - Configure when inference should run
  4. Output - Define what happens with detection results
  5. Track & Count - Enable object tracking and counting features

Deployment Configuration

Setting Description
Name Identifier for the deployment
Description Optional description of the deployment purpose
Resolution Camera resolution (e.g., 1280x720)

Model Registry

View and manage all trained models in your workspace. The Model Registry provides a centralized view of all your trained and pre-trained models with their performance metrics and configurations.

Model Registry

Model Registry showing trained models with performance metrics (F1 Score, mAP 0.5, mAP 0.9, Precision, Recall), model configuration (Model Type, Epochs, Batch Size, Image Size), and algorithm used

Model Information

Each model card displays:

  • Model Name - Identifier for the model
  • Performance Metrics - F1 Score, mAP, Precision, Recall
  • Training Configuration - Model Type, Epochs, Batch Size, Image Size, Algorithm

Pre-trained Models

You can also add pre-trained models to your registry using the Add Pre-trained Model button. This allows you to use models trained elsewhere or download community models.

Project Studio

Project Studio is a visual workflow builder that enables you to create and manage AI dataflows using a drag-and-drop interface.

Overview

Project Studio allows you to create complex AI pipelines by connecting different processing nodes. This visual approach makes it easy to design, test, and deploy sophisticated data processing workflows.

Key Features

  • Visual Flowchart Editor - Drag-and-drop interface for building workflows
  • Workstation Management - Organize and manage multiple edge devices
  • Dataflow Templates - Pre-built templates for common use cases
  • Real-time Monitoring - Monitor dataflow execution in real-time

Workstations

Workstations represent the edge devices where your dataflows will run. Each workstation can have multiple cameras and dataflows configured.

Workstation Features

  • Device Registration - Add and configure edge AI computers
  • Camera Management - Connect and configure IP cameras
  • Status Monitoring - View device health and connectivity

Flowchart Editor

The Flowchart Editor is the core interface for building AI dataflows. It provides a visual canvas where you can add, connect, and configure processing nodes.

Node Types

Input Nodes

Camera feeds, video files, or image sources that provide data to the pipeline.

Processing Nodes

AI models, filters, and transformations that process the input data.

Output Nodes

Actions triggered by processing results, such as alerts, logging, or API calls.

Building a Dataflow

  1. Drag nodes from the toolbar onto the canvas
  2. Connect nodes by drawing links between ports
  3. Configure each node by clicking on it
  4. Save and deploy the dataflow
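
A dataflow built this way can be thought of as a small graph of typed nodes connected by links, as in the sketch below; this mirrors the editor's concepts only and is not Project Studio's actual file format.

```python
# Conceptual representation of a dataflow: input -> processing -> output nodes.
dataflow = {
    "nodes": [
        {"id": "cam1",   "type": "input",      "config": {"source": "rtsp://..."}},
        {"id": "detect", "type": "processing", "config": {"model": "defect-detector"}},
        {"id": "alert",  "type": "output",     "config": {"action": "mqtt_publish"}},
    ],
    "links": [
        {"from": "cam1",   "to": "detect"},  # camera feed -> AI model
        {"from": "detect", "to": "alert"},   # detections -> output action
    ],
}

# Follow the links to see the processing order
order = [node["id"] for node in dataflow["nodes"]]
print(" -> ".join(order))  # cam1 -> detect -> alert
```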

Dataflows

Dataflows are the executable pipelines created in the Flowchart Editor. They define how data flows through your AI processing pipeline.

Dataflow Management

  • Create - Build new dataflows using the Flowchart Editor
  • Edit - Modify existing dataflows
  • Start/Stop - Control dataflow execution
  • Monitor - View real-time execution status and logs

Predictor

Predictor is a time series forecasting platform that uses machine learning algorithms to predict future values based on historical data.

Overview

Predictor enables you to build and deploy predictive models for various use cases including demand forecasting, predictive maintenance, and anomaly detection in time series data.

Key Features

  • Multiple Algorithms - Support for LightGBM, Prophet, and other ML algorithms
  • Environment Management - Organize predictions by environment/location
  • Feature Importance - Understand which features drive predictions
  • Model Deployment - Deploy models for real-time predictions

Environments

Environments represent the physical or logical locations where predictions are made. Each environment can have multiple items to predict.

Environment Configuration

  • Environment Name - Identifier for the prediction context
  • Items - Individual entities to predict (e.g., machines, sensors)
  • Data Sources - Connected data feeds for training and inference

Datasets

Datasets contain the historical time series data used to train prediction models.

Dataset Requirements

  • Timestamp Column - Date/time for each data point
  • Target Variable - The value you want to predict
  • Feature Columns - Additional variables that may influence predictions
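
A minimal dataset meeting these requirements might look like the hypothetical pandas DataFrame below (the column names are examples, not required names).

```python
# Hypothetical example: timestamp column, target variable, and feature columns.
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=5, freq="h"),  # consistent intervals
    "energy_kwh": [12.3, 13.1, 12.8, 14.0, 13.5],                   # target variable
    "outdoor_temp": [4.1, 4.0, 3.8, 3.5, 3.2],                      # feature column
    "is_holiday": [0, 0, 0, 0, 0],                                  # feature column
})
print(df)
```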

Data Preparation Tips

Best Practices

  • Ensure consistent time intervals between data points
  • Handle missing values before training
  • Include relevant external features (weather, holidays, etc.)
  • Use at least 2-3 cycles of seasonal data for best results
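
A common way to apply the first two tips before uploading data is sketched below with pandas (an assumed preprocessing workflow performed outside Predictor; the file and column names are hypothetical).

```python
# Enforce a regular time index and fill gaps in the historical data.
import pandas as pd

df = pd.read_csv("history.csv", parse_dates=["timestamp"])  # hypothetical file
df = df.set_index("timestamp").sort_index()

# Resample to a consistent hourly interval; missing periods become NaN rows
df = df.resample("h").mean()

# Handle missing values: interpolate the target, forward-fill slow-moving features
df["energy_kwh"] = df["energy_kwh"].interpolate()
df["outdoor_temp"] = df["outdoor_temp"].ffill()

df = df.reset_index()
```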

Training

Train prediction models using your historical data and selected algorithms.

Supported Algorithms

LightGBM

Gradient boosting framework that uses tree-based learning. Fast and accurate for tabular data.

Prophet

Facebook's forecasting tool designed for business time series with strong seasonal patterns.

Training Configuration

Parameter Description
Forecast Horizon How far into the future to predict
Training Window Amount of historical data to use
Seasonality Periodic patterns in the data (daily, weekly, yearly)
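
To see how these parameters map to code, the sketch below uses the open-source Prophet library (Predictor configures the same ideas through its UI; its internal implementation is not shown here). The file and column names are hypothetical.

```python
# Forecast horizon and seasonality expressed with the open-source Prophet library.
import pandas as pd
from prophet import Prophet

history = pd.read_csv("history.csv")  # training window: the historical data you load
df = history.rename(columns={"timestamp": "ds", "energy_kwh": "y"})  # Prophet's expected columns

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)  # seasonality
model.fit(df)

future = model.make_future_dataframe(periods=24, freq="h")  # forecast horizon: next 24 hours
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```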

Testing

Evaluate model performance using holdout data and various metrics.

Evaluation Metrics

Metric Description
MAE Mean Absolute Error - average prediction error
RMSE Root Mean Square Error - penalizes large errors
MAPE Mean Absolute Percentage Error - relative error
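
The three metrics can be computed directly from actual vs. predicted values, as in this short sketch.

```python
# MAE, RMSE, and MAPE from actual vs. predicted values.
import numpy as np

actual = np.array([100.0, 110.0, 95.0, 120.0])
predicted = np.array([102.0, 108.0, 99.0, 115.0])

mae = np.mean(np.abs(actual - predicted))                    # average absolute error
rmse = np.sqrt(np.mean((actual - predicted) ** 2))           # penalizes large errors
mape = np.mean(np.abs((actual - predicted) / actual)) * 100  # relative error in percent

print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```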

Feature Importance

After training, you can view feature importance to understand which variables have the most impact on predictions. This helps in:

  • Identifying key drivers of the target variable
  • Removing irrelevant features to improve model performance
  • Gaining business insights from the data
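
As an illustration of the underlying idea, the sketch below trains a model with the open-source LightGBM library and reads its feature importances; Predictor presents the same concept in its UI, and the file and column names here are hypothetical.

```python
# Feature importance with the open-source LightGBM library.
import lightgbm as lgb
import pandas as pd

df = pd.read_csv("history.csv")  # hypothetical dataset
features = ["outdoor_temp", "is_holiday", "hour_of_day"]
X, y = df[features], df["energy_kwh"]

model = lgb.LGBMRegressor(n_estimators=200)
model.fit(X, y)

importance = pd.Series(model.feature_importances_, index=features).sort_values(ascending=False)
print(importance)  # features with the most impact on the target appear first
```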