Reality Analysis API
The Reality Analysis API provides the ability to run Artificial Intelligence / Machine Learning (AI/ML) algorithms on photos, maps, meshes, or point clouds. It can detect objects or features in 2D and 3D for defect analysis, image anonymization, image indexing, asset management, mobile mapping, aerial surveying, and more.
Reality Analysis is articulated around Reality Data and Jobs. Reality Data are the data to be analyzed (photos, maps, meshes, or point clouds). To describe which data the analysis should run on, along with optional extra metadata (e.g., photo positions), we introduce a new type of Reality Data called a ContextScene. The analysis usually requires one or more Machine Learning models (e.g., a deep learning neural network), which we call ContextDetectors. Given ContextScenes and ContextDetectors, a Reality Analysis job produces different kinds of annotations (e.g., detected objects), which are in turn stored in a new ContextScene.
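As an illustration of how these pieces fit together, the sketch below submits a job that runs a ContextDetector over a ContextScene. The base URL, job type, and payload field names are assumptions made for illustration; consult the API Reference for the exact routes and schema.

```ts
// Hedged sketch: submit a Reality Analysis job over REST (Node 18+, global fetch).
// The base URL, job type, and field names below are illustrative assumptions.
const BASE_URL = "https://api.bentley.com/realityanalysis"; // assumed base URL
const accessToken = process.env.ITWIN_ACCESS_TOKEN ?? "";   // a valid iTwin Platform token

async function submitAnalysisJob(): Promise<string> {
  const response = await fetch(`${BASE_URL}/jobs`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      type: "objects2D",                // hypothetical job type: 2D object detection
      name: "Detect objects in photos",
      iTwinId: "<your-itwin-id>",
      inputs: [
        // The ContextScene describing the photos to analyze...
        { type: "photos", id: "<context-scene-reality-data-id>" },
        // ...and the ContextDetector (ML model) to run on them.
        { type: "photoObjectDetector", id: "<context-detector-reality-data-id>" },
      ],
    }),
  });
  if (!response.ok) throw new Error(`Job submission failed: ${response.status}`);
  const { job } = await response.json(); // assumed response envelope
  return job.id;                         // results land in a new ContextScene
}
```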
Once a Reality Analysis job is completed, it emits a realityAnalysis.jobCompleted.v1 iTwin event. Using the Webhooks API, you can create a webhook that is triggered by this event; when a job completes, the endpoint specified in the webhook will be called.
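A minimal receiver for that event might look like the sketch below. Only the event type string comes from this page; the payload field names (`type`, `jobId`) are assumptions, and a real handler should also validate incoming requests as described in the Webhooks API documentation.

```ts
import { createServer } from "node:http";

// Hedged sketch of a webhook endpoint reacting to realityAnalysis.jobCompleted.v1.
// Payload field names are assumptions; validate requests per the Webhooks API docs.
createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body);
    // "type" as the discriminator and "jobId" are hypothetical field names.
    if (event.type === "realityAnalysis.jobCompleted.v1") {
      console.log(`Reality Analysis job ${event.jobId} completed`);
      // From here, fetch the job's output ContextScene and process the annotations.
    }
    res.writeHead(200);
    res.end();
  });
}).listen(3000);
```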
All of the reality data mentioned above are stored via the Reality Management Service. The rest of this documentation assumes that you are familiar with the Reality Management API.
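For example, the ContextScenes and ContextDetectors referenced above are themselves reality data entries whose metadata can be looked up through the Reality Management API, roughly as sketched below. The route and response envelope are assumptions; see the Reality Management API reference for the authoritative shape.

```ts
// Hedged sketch: fetch a reality data entry's metadata (e.g. a ContextScene)
// through the Reality Management API. Route and envelope are assumptions.
async function getRealityData(iTwinId: string, realityDataId: string) {
  const url = `https://api.bentley.com/reality-management/reality-data/${realityDataId}?iTwinId=${iTwinId}`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.ITWIN_ACCESS_TOKEN}` },
  });
  if (!response.ok) throw new Error(`Lookup failed: ${response.status}`);
  return (await response.json()).realityData; // metadata: type, name, size, ...
}
```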
The tutorials and samples are good places to start using Reality Analysis, as are the API Reference pages. Before diving in, we recommend reading:
- The Context Scene description.
- What a Context Detector is.
- An overview of the different Reality Analysis job types.
- Python and TypeScript SDKs and samples for using iTwin Capture services.