Segmentation

What it does: Identifies and delineates regions in an image by assigning each pixel to a category.

Key characteristics:

  • Pixel-level precision – exact boundaries
  • Single blob per class – multiple regions of the same class are merged together
  • No instance distinction – cannot separate overlapping objects of the same class

Input

  • Whole images (e.g., photos of products, surfaces, or scenes)
  • Corresponding pixel-level masks for training (each pixel labeled with a class)

Labels

  • Each pixel must be assigned to a class
  • Use consistent class IDs or colors across the dataset
  • Avoid overlapping labels unless using Instance Segmentation
  • Ensure masks cover all relevant regions without gaps

Labeling tips

  • Use precise boundaries when labeling regions
  • Double-check masks to avoid missing pixels
  • Avoid ambiguous or inconsistent labels
  • Maintain consistent labeling conventions across all images
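The labeling rules above can be sketched as a quick automated mask check. The class-ID table and the use of 255 as an "unlabeled" sentinel are assumptions for illustration, not platform constants:

```python
import numpy as np

# Hypothetical class-ID convention for illustration; your dataset's IDs differ.
CLASS_IDS = {0: "background", 1: "defect", 2: "coating"}
UNLABELED = 255  # common sentinel for "no label"; an assumption here

def validate_mask(mask: np.ndarray) -> list:
    """Return a list of problems found in one single-channel label mask."""
    problems = []
    # Flag class IDs that are not part of the agreed convention
    unknown = set(np.unique(mask)) - set(CLASS_IDS) - {UNLABELED}
    if unknown:
        problems.append(f"unknown class IDs: {sorted(int(i) for i in unknown)}")
    # Flag gaps: pixels left at the unlabeled sentinel value
    if (mask == UNLABELED).any():
        problems.append("mask has unlabeled pixels (gaps)")
    return problems

mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # a 2x2 "defect" region
print(validate_mask(mask))  # []
```

Running a check like this over the whole dataset before training catches inconsistent IDs and missing pixels early, when they are cheap to fix.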

Output

A mask showing which pixels belong to which category

How it looks in the platform

  • The model outputs a mask overlaid on the original image
  • Each pixel is assigned a class label
  • Per-pixel confidence can optionally be displayed
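A minimal sketch of how a label mask and a per-pixel confidence map can be derived from raw model scores. The `(num_classes, H, W)` logits layout is an assumption; your model's output format may differ:

```python
import numpy as np

def decode(logits: np.ndarray):
    """logits: per-pixel class scores, assumed shape (num_classes, H, W)."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable softmax
    probs = e / e.sum(axis=0, keepdims=True)
    labels = probs.argmax(axis=0)    # class label per pixel
    confidence = probs.max(axis=0)   # probability of the chosen class
    return labels, confidence

logits = np.zeros((3, 2, 2))
logits[1, 0, 0] = 5.0               # pixel (0, 0) strongly favours class 1
labels, conf = decode(logits)
print(labels[0, 0], labels[1, 1])   # 1 0
```

The confidence map is what a platform can render as an optional overlay alongside the class mask.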

When to use segmentation

Use segmentation when you need to:

  • Find a single object – get the exact contours of an object
  • Extract regions – identify specific areas or zones
  • Obtain precise boundaries – when pixel-level accuracy matters
  • Cover multiple regions of the same class – several areas of the same type, with no need to separate individual instances

Example use cases

  Application           What it segments
  Defect detection      Outline of a damaged area on a surface
  Background removal    The product, separated from the background
  Zone identification   Different material regions
  Surface analysis      Coating defects or contamination areas
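The background-removal use case reduces to indexing the image with the mask. A sketch, assuming class ID 1 means "product" (an arbitrary convention, not a platform constant):

```python
import numpy as np

def remove_background(image: np.ndarray, mask: np.ndarray, product_id: int = 1):
    """Zero out every pixel whose mask label is not the product class."""
    out = image.copy()
    out[mask != product_id] = 0  # non-product pixels become black
    return out

image = np.full((2, 2, 3), 200, dtype=np.uint8)  # tiny 2x2 RGB image
mask = np.array([[1, 0],
                 [0, 1]])                        # product on the diagonal
cut = remove_background(image, mask)
print(cut[0, 1])  # [0 0 0]
```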

Limitations of segmentation

  • ❌ Cannot distinguish overlapping or touching instances of the same class
  • ❌ Single blob per class – multiple objects merged
  • ❌ Not suitable for counting individual items
  • ❌ May produce coarse masks on very small objects or thin boundaries

Better alternatives: Use Instance segmentation if you need:

  • ✅ To separate multiple instances of the same class
  • ✅ To count objects individually
  • ✅ Pixel-precise boundaries for each instance
  • ✅ Applications with overlapping objects or complex scenes

When to choose segmentation over instance segmentation

Use regular segmentation if:

  • You only need to know "where is this type of object/defect"
  • You don't need to count individual instances
  • Objects do not overlap or touch
  • Slightly faster inference is preferred

Considerations for training

  • Input size: Typically 256x256 or 512x512 px for good boundary accuracy.
  • Data augmentation: Rotations, flips, cropping, color jitter.
  • Class balance: Ensure enough labeled regions per class.
  • Loss function: Pixel-wise cross-entropy or dice loss.
  • Batch size and learning rate: Adjust according to dataset size and GPU memory.
  • Regularization: Dropout or weight decay may help reduce overfitting.
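As a concrete example of the loss-function choice above, here is a soft Dice loss for binary masks, one common alternative (or complement) to pixel-wise cross-entropy:

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """pred: predicted probabilities in [0, 1]; target: binary ground truth.

    Returns 1 - Dice coefficient: 0 for a perfect overlap, ~1 for none.
    The eps term keeps the ratio defined when both masks are empty.
    """
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

perfect = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
print(round(dice_loss(perfect, perfect), 6))  # 0.0
```

Dice loss directly optimizes region overlap, which can help when foreground classes cover only a small fraction of the image and pixel-wise cross-entropy is dominated by the background.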

Additional configuration

  • Label consistency: Make sure class IDs/colors are consistent across the dataset.
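A dataset-wide version of the consistency check above might look like this. File loading is omitted; `masks` is assumed to be any iterable of single-channel label arrays:

```python
import numpy as np

def check_consistency(masks, allowed_ids):
    """Return (index, unexpected_ids) for every mask using IDs outside the set."""
    bad = []
    for i, m in enumerate(masks):
        extra = set(np.unique(m)) - set(allowed_ids)
        if extra:
            bad.append((i, sorted(int(x) for x in extra)))
    return bad

masks = [np.array([[0, 1]]),   # ok
         np.array([[0, 7]])]   # uses an undeclared class ID
print(check_consistency(masks, {0, 1, 2}))  # [(1, [7])]
```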