After loading a model, you can add information to the workflow. To define where the model will later be displayed in the Spatial Workplace app, a spatial reference must be added to the workflow.
There are three types of references:
- Markers
- Object trackers
- Model placement
Note: Mixing different spatial reference types in a workflow is currently not supported.
Markers
A marker is used to position the information that is to be displayed within a workflow at the desired spot over the real-life component. For this, at least one virtual marker needs to be added in the editor at the position where the real-life marker will be placed on the real-life component. Different devices use different types of markers. The virtual model is then loaded in Spatial Workplace according to the scanned position of the marker.
The following types of markers exist:
- ArUco marker: This type of marker is meant to be used with mobile devices (iOS and Android). The marker size can be adjusted by the user to anywhere between 1 and 99 cm. As a rule of thumb, markers with a size of 10 cm (12 cm with borders) or 15 cm (18 cm with borders) should be used, but the user can select the size that works best with the respective component.
- QR code marker: This type of marker is meant to be used with a HoloLens 2. Again, the size can be adjusted by the user. The proposed default is 15 cm (17 cm with borders).
Note: For correct tracking, it is necessary to print the marker at the same size as it was added in Spatial Editor.
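The usual way to obtain printable markers is the Marker PDF download described below. Purely as an illustration of the size arithmetic, the following sketch generates an ArUco marker image sized for a chosen physical edge length using OpenCV. The dictionary, marker ID, and 300 DPI print resolution are assumptions for the example, not values prescribed by Spatial Editor.

```python
# Illustrative only: generate an ArUco marker image sized for printing.
# The dictionary, marker ID, and DPI below are assumptions, not values
# prescribed by Spatial Editor -- match them to your actual marker settings.
# Requires opencv-python >= 4.7 (earlier versions use cv2.aruco.drawMarker).
import cv2

MARKER_ID = 0            # must match the marker ID set in Spatial Editor
EDGE_CM = 10.0           # physical edge length chosen in the editor (1-99 cm)
DPI = 300                # print resolution assumed for this example

# Convert the physical edge length to pixels at the chosen print resolution.
edge_px = round(EDGE_CM / 2.54 * DPI)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
marker = cv2.aruco.generateImageMarker(dictionary, MARKER_ID, edge_px)

# When printing, make sure the image is output at DPI dots per inch (i.e.,
# EDGE_CM wide on paper) so the physical size matches the editor setting.
cv2.imwrite(f"aruco_{MARKER_ID}_{int(EDGE_CM)}cm.png", marker)
```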
To add a marker to your model:
1. Click on Add at the top.
2. Choose either ArUco or QR code marker, depending on your requirements.
3. Click on the surface of the model where you want to place the marker. The other options/buttons of the editor are disabled until you place the marker.
4. To change the position of the marker on the model's surface, select it and click on Object > Snap in the top menu or press S on the keyboard.
5. Change the position and rotation of markers independently of the model's surface by using the transformation gizmos or the Transform menu on the right.
6. On the right side, you can edit the marker's reference (ID and size).
Note: The virtual marker used in the editor must be the same as the real-life marker that is put on the real-life component when using Spatial Workplace, so ensure that the marker ID matches. It is important to print the correct marker and place it in the same position both virtually in the editor and on the real-life component.
All markers can be downloaded by clicking on Marker PDF in the top menu. For large models, it is recommended to add more than one marker to facilitate tracking with HoloLens 2. A model counts as large if, in order to see all pins, the user needs to move their point of view more than 90 degrees relative to the position of the original marker. If this is the case, add an additional marker to each section (i.e., side) of your real-life component. Each marker helps the device recalibrate the position of the pins, ensuring their correct placement.
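As a rough way to make the 90-degree rule concrete, the sketch below computes the widest angle between pin directions as seen from the original marker; if it exceeds 90 degrees, extra markers are advisable. The pin coordinates are hypothetical example values, and the check itself is only an approximation of the rule, not a feature of Spatial Editor.

```python
# Illustrative sketch of the "large model" rule: if the pins span more than
# 90 degrees of viewing angle around the original marker, add more markers.
# Coordinates are hypothetical example values, not data from Spatial Editor.
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

marker_pos = (0.0, 0.0, 0.0)                               # original marker
pins = [(2.0, 0.0, 1.0), (-1.5, 0.0, 2.0), (0.5, 1.0, -2.5)]

# Directions from the marker to each pin.
dirs = [tuple(p - m for p, m in zip(pin, marker_pos)) for pin in pins]

# Widest angle between any two pin directions, as seen from the marker.
max_span = max(angle_between(a, b) for a in dirs for b in dirs)

if max_span > 90:
    print(f"Pin span is {max_span:.0f} degrees: consider adding extra markers.")
else:
    print(f"Pin span is {max_span:.0f} degrees: one marker may be enough.")
```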
Object trackers
When using object trackers, the real-life object is used to calculate the position of the information that is to be displayed within a workflow at the desired spot. Object trackers can be used in workflows that will be viewed on HoloLens 2, iOS, and Android devices.
To add an object tracker to your model:
1. Click on Add > Object Tracker in the menu at the top of the 3D scene.
2. A red hologram of smartglasses appears (you may need to zoom out using the scroll wheel on your mouse). This hologram shows how the object will be perceived through the smartglasses.
3. The position of the object tracker in relation to the model in the scene represents the position and distance at which the user will have to hold their device to scan the real object while playing the workflow in Spatial Workplace.
4. Add the object tracker. It is positioned automatically where the 3D scene camera is (i.e., the perspective from which the user is currently looking at the model in the 3D scene).
Note: Using the mouse, the user can rotate the scene to see it better from different perspectives.
5. Use the gizmo over the object tracker to refine its position, or move the camera.
6. Optional: Click on Set Transform From View in the menu on the right to move it again to your viewing perspective.
Note: It is important that the object tracker is at a reasonable distance from the model and that the line coming out of it is pointing at the model.
⇒ After uploading your workflow, test the scanning perspective and distance on a viewing device and fine-tune them in the editor. This ensures a better scanning experience for the end user.
Note: The red color of the smartglasses hologram means that there is no .obj file attached. The .obj file is used by VisionLib's object tracking to track the real-life component.
7. To create a .obj file from the scene, select the red hologram.
8. Go to the settings on the right.
9. Click on Assign > Generate new from scene under Tracked Object.
10. Optional: The user can also save the .obj file on their computer by clicking on Export and saving the file.
Note: Independently of the model format imported into Spatial Editor, a .obj file needs to be generated from the scene or provided from disk.
11. Now, the hologram of the Object Tracker in the 3D scene should change its color to green.
12. Optional: If parts are hidden or moved from the model in Spatial, the .obj file needs to be regenerated to include these changes in your workflow. To be able to adjust the position and rotation of the initial tracking when using the Workplace app, enable the Dynamic Initial Pose option.
Note: For object tracking on HoloLens 2, the scale of the .obj is required to be in meters. When generating the .obj from the scene, Spatial will ensure this automatically. However, if the user imports an existing .obj from disk with a VisionLib license, it is the user's responsibility to ensure that the scale is in meters (see the sketch after this procedure). Other devices do not have this limitation.
13. Change the position and rotation of the object tracker using the menu on the right.
14. Click on Set Transform From View. The object tracker is automatically moved to the position and point of view of the 3D scene.
15. Finally, you can change the values of the tracking parameters (explained below) to improve tracking for a specific object.
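As referenced in the note on .obj scale above, the following is a minimal sketch of one way to inspect an externally supplied .obj and rescale its vertices to meters before importing it. It parses the .obj text format directly; the file names and the assumption that the source file was authored in millimeters are hypothetical.

```python
# Minimal sketch: check the bounding-box extent of an .obj file and, assuming
# it was authored in millimeters, rescale its vertices to meters.
# File names and the millimeter assumption are hypothetical examples.

def read_vertices(path):
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):           # geometric vertex lines only
                verts.append([float(x) for x in line.split()[1:4]])
    return verts

def rescale_obj(src, dst, factor):
    """Write a copy of the .obj with all vertex coordinates multiplied by factor."""
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            if line.startswith("v "):
                parts = line.split()
                coords = [float(x) * factor for x in parts[1:4]]
                fout.write("v " + " ".join(f"{c:.6f}" for c in coords) + "\n")
            else:
                fout.write(line)

verts = read_vertices("tracked_object.obj")
extent = [max(v[i] for v in verts) - min(v[i] for v in verts) for i in range(3)]
print("Bounding box extent:", extent)

# Heuristic for this example: a tabletop part with an extent of several
# hundred units is plausibly in millimeters; rescale by 0.001 so that
# HoloLens 2 object tracking sees meters.
if max(extent) > 100:
    rescale_obj("tracked_object.obj", "tracked_object_m.obj", 0.001)
```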
Note: One of these parameters is the Static Scene, which the user can disable if the scene they are working with is dynamic. This feature is currently available on mobile devices only.
Note: The default values are general parameters chosen to work well with most objects.
Here's a list of all available tracking parameters (a configuration sketch follows the list):
- Dynamic Initial Pose: When enabled, the user can dynamically set the initial tracking viewpoint during runtime.
- Continuous Tracking (Mobile Only): If enabled (default), the object is tracked continuously on mobile devices. This is more suitable for objects that may be moved or rotated during the task but keep their form. Non-continuous tracking only tracks the object at the start of the task and then continues the tracking using SLAM. It is more suitable for objects that are not moved or rotated during the task but change their form (e.g., parts are added or removed).
- Extendible Tracking: If enabled (default), the model-based tracking will be extended with SLAM-based tracking. This allows tracking to be continued even if the model isn't visible in the camera image anymore. The user needs to perform a SLAM dance, which means translating and rotating the camera so that there is enough baseline for the feature reconstruction.
- Min. Inlier Ratio Init: Threshold for validating tracking during initialization. The value ranges from 0.5 to 0.9, with 0.6 being the default. Higher values are recommended if the line model matches the real-life object perfectly with no occlusion. However, the two usually do not match perfectly, which is why a lower value works better.
- Laplace Threshold: Threshold for creating the line model (mm). The value ranges from 0.0001 to 100000, with 5 being the default. It specifies the minimum depth between two neighbouring pixels to be recognized as an edge.
- Normal Threshold: Threshold for generating the line model. The value ranges from 0.0001 to 1000, with 1000 being the default. It specifies the minimum normal difference between two neighbouring pixels necessary to be recognized as an edge. Usually, it is set to a high value because normal-based lines can't be recognized very reliably. However, for certain models, it might make sense to use a lower value.
- Line Gradient Threshold: Threshold for edge candidates in the image. The value ranges from 0 to 256, with 40 being the default. High values will only consider pixels with high contrast as candidates, while low values will also consider other pixels. This is a trade-off: if there are too many candidates, the algorithm might choose the wrong pixels; if there are not enough candidates, the line model might not stick to the object in the image.
- Keyframe Distance: Minimum distance between keyframes (mm). The value ranges from 0.001 to 100000, with 100 being the default. The line model is only generated for certain keyframes. Higher values improve performance but come with lower precision (and vice versa).
- Line Search Length Init Relative: Length of the orthogonal search lines (in percent) relative to the minimum resolution during initialization and tracking. The value ranges from 0.00625 to 1, with 0.03125 being the default. The model-based tracker projects the 3D line model into the camera image and searches for edge pixels orthogonal to the projected lines.
- Use Color: This is disabled by default. If enabled, colored edges are distinguished better while tracking. It is only useful for objects with colored edges. It can increase the tracking quality but requires more processing power.
- Field of View (HoloLens 2 Only): A larger field of view makes the object appear smaller during image capturing. It is recommended to use 'wide' for large objects and 'narrow' for small ones.
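To summarize the list above, here is a small configuration sketch that collects the documented defaults and value ranges and checks them. The Python names and the validation code are illustrative only; they are not Spatial Editor's or VisionLib's actual API, and the defaults marked as assumed are not stated in this documentation.

```python
# Illustrative configuration object mirroring the documented tracking
# parameters, their defaults, and value ranges. This is not Spatial Editor's
# or VisionLib's actual API; it only restates the values listed above.
from dataclasses import dataclass

@dataclass
class TrackingParameters:
    dynamic_initial_pose: bool = False        # assumed; default not stated above
    continuous_tracking: bool = True          # mobile only
    extendible_tracking: bool = True
    static_scene: bool = True                 # mobile only; disable for dynamic scenes
    min_inlier_ratio_init: float = 0.6        # 0.5 .. 0.9
    laplace_threshold: float = 5.0            # 0.0001 .. 100000 (mm)
    normal_threshold: float = 1000.0          # 0.0001 .. 1000
    line_gradient_threshold: int = 40         # 0 .. 256
    keyframe_distance: float = 100.0          # 0.001 .. 100000 (mm)
    line_search_length_init_relative: float = 0.03125  # 0.00625 .. 1
    use_color: bool = False
    field_of_view: str = "wide"               # HoloLens 2 only; "wide" assumed as default

    def validate(self):
        """Raise if a numeric value falls outside its documented range."""
        ranges = {
            "min_inlier_ratio_init": (0.5, 0.9),
            "laplace_threshold": (0.0001, 100000),
            "normal_threshold": (0.0001, 1000),
            "line_gradient_threshold": (0, 256),
            "keyframe_distance": (0.001, 100000),
            "line_search_length_init_relative": (0.00625, 1),
        }
        for name, (lo, hi) in ranges.items():
            value = getattr(self, name)
            if not lo <= value <= hi:
                raise ValueError(f"{name}={value} is outside [{lo}, {hi}]")

# Example: start from the defaults and lower the inlier ratio for an object
# that is partly occluded in the camera image.
params = TrackingParameters(min_inlier_ratio_init=0.55)
params.validate()
```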
Model placement
Model placement uses the user's position when Spatial Workplace was started to position all models and pins connected to the spatial reference.
It can be used in workflows that will be viewed on HoloLens 2, iOS, and Android devices.
To add a model placement spatial reference:
1. Click on Add > Model Placement at the top of the 3D scene. The gizmos allow movement only along the green and red axes and rotation around the blue axis. This restriction is meant to keep the model placement reference on the same plane.
2. The green arrow symbolizes the view direction of the user. In the menu on the right, the user can choose which models are positioned according to this reference. When starting the workflow in Spatial Workplace, the selected models and connected pins will be positioned relative to the user's viewing direction at the moment they started the Spatial Workplace app.
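To illustrate what positioning relative to the user's viewing direction means in practice, the sketch below applies a plain 2D rigid transform: an offset authored relative to the model placement reference (forward/right on the floor plane plus a rotation around the vertical axis) is converted into world coordinates from the user's starting position and heading. The coordinate conventions and numbers are assumptions for illustration, not Spatial Workplace's internal implementation.

```python
# Illustrative 2D rigid transform: place a model relative to the user's
# starting position and viewing direction, mirroring how a model placement
# reference constrains movement to the floor plane and rotation to the
# vertical axis. This is not Spatial Workplace's internal implementation.
import math

def place_model(user_pos, user_heading_deg, offset_forward, offset_right, model_yaw_deg):
    """Return (x, z, yaw) of the model in world coordinates.

    user_pos:             (x, z) of the user when the Workplace app was started
    user_heading_deg:     direction the user was facing (0 = +z axis, assumed convention)
    offset_forward/right: offsets authored relative to the placement reference
    model_yaw_deg:        rotation of the model around the vertical axis
    """
    h = math.radians(user_heading_deg)
    # Forward and right directions on the floor plane.
    fwd = (math.sin(h), math.cos(h))
    right = (math.cos(h), -math.sin(h))
    x = user_pos[0] + offset_forward * fwd[0] + offset_right * right[0]
    z = user_pos[1] + offset_forward * fwd[1] + offset_right * right[1]
    return x, z, user_heading_deg + model_yaw_deg

# Example: a model authored 2 m in front of and 0.5 m to the right of the
# reference appears there relative to wherever the user starts the app.
print(place_model(user_pos=(1.0, 3.0), user_heading_deg=90.0,
                  offset_forward=2.0, offset_right=0.5, model_yaw_deg=0.0))
```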
Model visibility: Unlike pins connected to a spatial reference, models are not visible by default. To make them visible while playing the workflow, you need to either:
For more information, please see Tracking Recommendations.