Detect Any Object at Scale with a Text Prompt

Meta’s Segment Anything Model 3 (SAM3) brings powerful zero-shot segmentation to geospatial imagery. Describe what you’re looking for in a simple text prompt (buildings, vehicles, solar panels, crop fields) and SAM3 will find and outline every instance across your imagery. With Wherobots RasterFlow, you can run SAM3 across massive datasets with no training required and no infrastructure to manage.

In this “Getting Started” session, we will demonstrate how to go from raw imagery to precise, queryable object detections using prompt-based inference at planetary scale.

What you will learn:

  • Prompt-Based Object Detection: How to use SAM3’s zero-shot segmentation to detect any object in satellite and aerial imagery with a single prompt. No custom model training needed.
  • Planetary-Scale Inference: Leveraging RasterFlow’s distributed engine to run SAM3 across massive Areas of Interest without managing or optimizing infrastructure.
  • Agentic Post-Processing: Using agentic workflows to automatically transform model predictions into queryable vector data with WherobotsDB for immediate analysis and business intelligence.
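The post-processing step in the last bullet comes down to turning per-instance model masks into vector features that a spatial database can query. As a minimal, self-contained sketch (independent of the actual RasterFlow and WherobotsDB APIs, which differ in practice), here is how a binary instance mask plus a simple north-up geotransform can become a GeoJSON feature:

```python
def mask_to_feature(mask, transform, prompt):
    """Convert one binary instance mask (list of rows of 0/1) into a
    GeoJSON Feature whose geometry is the mask's bounding box in
    geographic coordinates. Illustrative only: real pipelines trace
    the full mask outline, not just the bounding box."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for row in mask for j, v in enumerate(row) if v]
    r0, r1 = min(rows), max(rows) + 1   # half-open pixel row range
    c0, c1 = min(cols), max(cols) + 1   # half-open pixel col range
    ox, oy, px, py = transform          # origin x/y, pixel width/height
    # Map pixel corners to geographic coordinates (north-up raster,
    # so py is negative and y decreases downward).
    x0, x1 = ox + c0 * px, ox + c1 * px
    y0, y1 = oy + r0 * py, oy + r1 * py
    ring = [[x0, y0], [x1, y0], [x1, y1], [x0, y1], [x0, y0]]
    return {
        "type": "Feature",
        "geometry": {"type": "Polygon", "coordinates": [ring]},
        "properties": {"prompt": prompt},
    }

# One 2x2 detection in a 3x3 mask, 1-meter pixels anchored at (100, 50)
feature = mask_to_feature(
    [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
    (100.0, 50.0, 1.0, -1.0),
    "building",
)
```

Once features like this are loaded into WherobotsDB, they can be filtered, joined, and aggregated with ordinary spatial SQL.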

Live Demo:

We will walk through a complete end-to-end workflow, including raw imagery ingestion and preparation, large-scale SAM3 inference, and automated result processing with agentic pipelines in WherobotsDB.
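Conceptually, running inference over a large Area of Interest means decomposing it into tiles that can be processed in parallel and then merging the results. RasterFlow handles this distribution for you; purely as an illustration of the idea (this is not the Wherobots API), a tile grid over a bounding box can be generated like so:

```python
def tile_extent(xmin, ymin, xmax, ymax, tile_size):
    """Split a bounding box into a grid of tiles at most tile_size on
    a side, so each tile can be run through the model independently.
    Edge tiles are clipped to the extent rather than padded."""
    tiles = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            tiles.append((x, y, min(x + tile_size, xmax), min(y + tile_size, ymax)))
            x += tile_size
        y += tile_size
    return tiles

# A 25 x 10 unit extent with 10-unit tiles yields one row of three tiles
grid = tile_extent(0, 0, 25, 10, 10)
```

In a real distributed pipeline, each tile would be read, segmented, and vectorized on a worker, and tiles typically overlap slightly so objects straddling tile boundaries are not split; that overlap handling is omitted here for brevity.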

Why It Matters:

RasterFlow eliminates the roughly 80% of project time typically spent on raster and imagery preparation and processing. Combined with SAM3’s zero-shot capabilities and agentic post-processing, you get a production-ready pipeline that goes from raw imagery to actionable insights in minutes.