QR-based AR anchoring in construction
date: April 15, 2020
Problem: GPS doesn’t work indoors. Construction sites need AR overlays positioned within centimeters, not meters. Traditional AR uses feature tracking or SLAM, but these drift over time and struggle in sparse environments. We needed something deterministic that works on day one, when there’s nothing but bare concrete.
Solution: QR markers placed at measured positions enable instant, deterministic BIM anchoring. Scanning a code sets the BIM model in the correct location immediately.
Threading strategy
QR detection is expensive. Processing frames on the main thread drops framerate. The solution: a dedicated detection thread with concurrent queues.
private Thread processThread;
private readonly ConcurrentQueue<string> detectedCodes = new ConcurrentQueue<string>();
private Image cameraImage;             // Vuforia camera frame
private byte[] pixels;                 // shared grayscale buffer
private volatile bool bufferAvailable; // handoff flag between the two threads

private void Update()
{
    // Publish any result the detector thread produced since last frame.
    if (detectedCodes.TryDequeue(out string newCode))
        SetLastDetectedCode(newCode);

    // Don't overwrite the buffer while the detector is still reading it.
    if (bufferAvailable) return;

    if (cameraImage == null || cameraImage.PixelBufferPtr == IntPtr.Zero)
    {
        cameraImage = VuforiaBehaviour.Instance.CameraDevice
            .GetCameraImage(PixelFormat.GRAYSCALE);
        return;
    }

    if (pixels == null)
        InitializeBuffer(cameraImage.BufferWidth, cameraImage.BufferHeight);

    // Copy the native frame into managed memory, then signal the detector.
    Marshal.Copy(cameraImage.PixelBufferPtr, pixels, 0,
        cameraImage.BufferWidth * cameraImage.BufferHeight);
    bufferAvailable = true;
}
private void DetectorThread()
{
    while (isRunning)
    {
        if (!bufferAvailable)
        {
            Thread.Sleep(1); // yield instead of busy-spinning while waiting for a frame
            continue;
        }

        string detectResult = DetectFunction();
        if (!string.IsNullOrWhiteSpace(detectResult))
            detectedCodes.Enqueue(detectResult);

        bufferAvailable = false; // hand the buffer back to the main thread
        Thread.Sleep((int)(detectionTimeInterval * 1000));
    }
}
The main thread grabs camera frames and copies the pixels into a shared buffer. The detection thread waits for the flag, runs OpenCV detection, and queues results. This keeps AR rendering at 60 fps while scanning continuously for markers.
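The snippets above reference isRunning and detectionTimeInterval but omit the thread's lifecycle. A minimal sketch of how it might be started and stopped (OnEnable/OnDisable here are an assumption; a background thread ensures the process can still exit if the loop is wedged):

```csharp
private volatile bool isRunning;
private float detectionTimeInterval = 0.2f; // seconds between scans (illustrative value)

private void OnEnable()
{
    isRunning = true;
    processThread = new Thread(DetectorThread) { IsBackground = true };
    processThread.Start();
}

private void OnDisable()
{
    isRunning = false;        // let the loop observe the flag and exit
    processThread?.Join(500); // bounded wait so the main thread never hangs
}
```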
Marker management
Each physical marker stores position, rotation, and neighboring markers. On detection, the system disables all active markers, activates the detected one, and activates the N closest neighbors. This creates tracking redundancy. If the user moves and loses sight of the first marker, neighbors are already active and tracking continues.
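The activation policy described above can be sketched as follows. Marker, Neighbors, and OnMarkerDetected are hypothetical names standing in for the real marker API; the assumption is that each marker's neighbor list was sorted by distance once, at survey time:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Marker
{
    public List<Marker> Neighbors = new List<Marker>(); // sorted by distance at survey time
    public bool Active;
}

public class MarkerManager
{
    private readonly List<Marker> allMarkers = new List<Marker>();

    public void OnMarkerDetected(Marker detected, int neighborCount)
    {
        // Deactivate everything, then activate the detected marker and its
        // N closest neighbors so tracking survives losing sight of the first.
        foreach (var m in allMarkers) m.Active = false;
        detected.Active = true;
        foreach (var n in detected.Neighbors.Take(neighborCount))
            n.Active = true;
    }
}
```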
Position calculation
Each marker stores its position relative to the BIM model’s origin. When tracked, we compute the inverse: where should the model be, relative to this marker?
public void UpdateModelPosition()
{
    if (!ModelController.Instance) return;

    // Parent the BIM model to this marker so its transform is expressed
    // in marker-local space.
    ModelController.Instance.transform.SetParent(transform);

    // Invert the stored matrix to get the model's pose relative to the
    // marker; the (-1, 1, 1) scale flips the x axis for handedness.
    ModelController.Instance.transform.localPosition =
        Vector3.Scale(Data.Matrix.inverse.ExtractPosition(),
            new Vector3(-1, 1, 1));
    ModelController.Instance.transform.localRotation =
        Data.Matrix.ExtractRotation();
}
The marker and AR camera stay at the world origin (0,0,0), and the model moves relative to them. The alternative would place the camera at coordinates like (900, 300, 100), where Unity’s 32-bit floats lose precision and cause visible jitter. The Vector3.Scale with (-1, 1, 1) handles left/right-handed coordinate conversion.
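Unity's Matrix4x4 has no built-in ExtractPosition or ExtractRotation; the code above presumably relies on extension methods. A common implementation, assuming Data.Matrix is an ordinary TRS matrix, looks like this:

```csharp
using UnityEngine;

public static class Matrix4x4Extensions
{
    // Translation lives in the fourth column of a TRS matrix.
    public static Vector3 ExtractPosition(this Matrix4x4 m) =>
        new Vector3(m.m03, m.m13, m.m23);

    // Rebuild the rotation from the matrix's basis vectors.
    public static Quaternion ExtractRotation(this Matrix4x4 m) =>
        Quaternion.LookRotation(
            m.GetColumn(2),  // forward (z axis)
            m.GetColumn(1)); // up (y axis)
}
```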
Drift and world center switching
Construction floors can span 30 by 50 meters. Walk away from a marker and the device infers position from motion alone. Drift accumulates. We placed markers at regular intervals. The user walks until drift becomes noticeable, scans the next marker, and the model snaps back.
private void OnTrackingChanged(ObserverBehaviour observer, TargetStatus newStatus)
{
    // Ignore callbacks for other markers.
    if (observer != observerBehaviour) return;
    // Already the world center; nothing to do.
    if (VuforiaBehaviour.Instance.WorldCenter == observerBehaviour) return;
    // Only switch on solid tracking, not extended or limited tracking.
    if (newStatus.Status != Status.TRACKED) return;

    CurrentTrackedMarker = this;

    if (VuforiaBehaviour.Instance.WorldCenterMode == WorldCenterMode.SPECIFIC_TARGET)
        VuforiaBehaviour.Instance.SetWorldCenter(WorldCenterMode.SPECIFIC_TARGET,
            observerBehaviour);

    UpdateModelPosition();
}
The guards matter: only switch if this marker isn’t already the world center, and only if tracking is solid. Any drift that accumulated during extended tracking disappears.
Why not visual positioning?
We tested Visual Positioning Systems that match camera frames against pre-built 3D maps. Accurate in stable environments, but construction sites change constantly. A wall goes up. Scaffolding moves. VPS positioning became unreliable, sometimes meters off. Markers have predictable failure modes: covered means walk to the next one, removed means print a new one. For stable environments, VPS is better. For construction, markers won.
Why this works
QR codes are deterministic. They don’t drift. They don’t need a richly textured environment. Bare concrete, empty floors: the system doesn’t care. The workflow starts during site survey. Place markers, measure positions relative to BIM origin, upload. From that point, any device can scan any marker and know exactly where it is.
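The survey workflow implies a marker database shared across devices. A hypothetical record shape (all field names here are assumptions, not the production schema):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

[Serializable]
public class MarkerRecord
{
    public string QrPayload;         // the string encoded in the QR code
    public Vector3 Position;         // measured offset from the BIM origin
    public Quaternion Rotation;      // marker orientation at install time
    public List<string> NeighborIds; // nearest markers, precomputed at survey
}
```

Because every device resolves poses against the same records, two users scanning different markers still see one consistently aligned model.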
Positioning is persistent. Come back the next day, scan a marker, same accuracy. Different users see the same aligned model because they reference the same marker database.