digital twins for remote training

A global pharmaceutical company needed to onboard new employees at their UK production facility. COVID lockdowns made that impossible. New hires couldn't walk the production floor, couldn't learn where equipment was, couldn't build the spatial familiarity that makes people effective in a complex manufacturing environment. The company needed a digital twin of their facility that multiple people could explore together in VR.

This became Ravel's founding use case. We later built digital twins for a Dutch water authority's headquarters and an urban innovation hub. Each project refined the same core pipeline: physical building in, navigable multiplayer VR space out.

The pipeline

Five stages connect a real facility to a VR experience. Source geometry arrives as 3D scans or CAD exports. Unity converts that geometry into optimized scenes with interactive elements. The scenes get packaged as asset bundles, uploaded to S3, and served through CloudFront. A backend manages which environments are available and handles multiplayer session orchestration.

Each stage is independently replaceable. Different source formats feed the same Unity pipeline. Different CDN configurations serve the same bundles. The boundary between stages is always a file or an API call, never shared state.

Source geometry

Real buildings arrive as messy data. Point clouds from 3D scanners have millions of unstructured vertices. CAD exports carry too much detail for real-time rendering.

A digital twin prioritizes spatial recognition over geometric accuracy. The person walking through the VR version needs to think "yes, this is the lab I work in." That means textures and lighting matter more than millimeter precision on pipe diameters. We combined photogrammetry for visual fidelity with simplified CAD geometry for interactive elements. The building looks photorealistic, but the equipment you interact with is clean geometry with metadata attached.

Unity scene construction

Each facility becomes a Unity scene organized by zone. A pharmaceutical production floor has distinct areas: gowning rooms, clean rooms, equipment bays, corridors. Each zone loads as a separate section within the scene, which keeps memory manageable on standalone VR headsets.

Interactive elements sit on top of the base environment. Equipment gets annotated with specs, operational parameters, safety information. Tap on a bioreactor and see its capacity, operating temperature range, cleaning protocol. These annotations live in a JSON structure stored as dynamicContent on the environment entity. The Unity client reads this metadata at load time and attaches interaction handlers to the corresponding objects.
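As a minimal sketch of that lookup, the annotation metadata might be modeled like this. The record shape and field names are assumptions for illustration; the actual schema lived in the environment entity's dynamicContent.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical shape of one annotation entry inside dynamicContent.
record EquipmentAnnotation(String objectId, String label, Map<String, String> specs) {}

// Index built at load time so the client can resolve a tapped object to its metadata.
class AnnotationIndex {
    private final Map<String, EquipmentAnnotation> byObjectId = new HashMap<>();

    AnnotationIndex(List<EquipmentAnnotation> annotations) {
        for (EquipmentAnnotation a : annotations) {
            byObjectId.put(a.objectId(), a);
        }
    }

    // The Unity client performs the equivalent lookup when the user taps an object.
    EquipmentAnnotation lookup(String objectId) {
        return byObjectId.get(objectId);
    }
}
```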

The scene gets exported as a Unity asset bundle. One bundle per target platform: Quest, PC VR, WebGL. Same source scene, different compression and quality settings per platform. The build pipeline produces all variants from a single Unity project.

Environment delivery

Asset bundles need to reach clients fast. A 150MB environment bundle loading over a slow connection kills the experience before it starts. We built a dedicated environment service that handles upload, review, and distribution.

Creators upload bundles through the environment service API. The bundle goes to S3 with owner metadata attached. The environment entity tracks the assetBundleUrl, a CloudFront distribution URL that ensures low-latency delivery regardless of the user's location. The service enforces a size cap of 180MB per bundle to keep load times reasonable.

// Environment entity stores the CDN URL for the asset bundle
@URL
@Column(name = "asset_bundle_url", length = 500)
private String assetBundleUrl;
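The 180MB cap is enforced at upload time. A minimal sketch of that check, assuming a simple pre-upload validation step (the constant and exception type are illustrative, not the actual service code):

```java
// Illustrative upload-time validation for the 180MB bundle cap.
class BundleSizeValidator {
    static final long MAX_BUNDLE_BYTES = 180L * 1024 * 1024;

    // Reject oversized bundles before they ever reach S3.
    static void validate(long bundleSizeBytes) {
        if (bundleSizeBytes > MAX_BUNDLE_BYTES) {
            throw new IllegalArgumentException(
                "Bundle exceeds 180MB cap: " + bundleSizeBytes + " bytes");
        }
    }
}
```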

New environments go through a QA review before publication. A creator submits, QA validates that the bundle loads correctly and meets platform requirements, then approves. Only approved environments can be assigned to spaces. This gate prevents broken or oversized bundles from reaching users.
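The review gate is effectively a small state machine. A hedged sketch, assuming these state names (the real service may name or structure them differently):

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical review states for the QA gate on new environments.
enum EnvironmentStatus {
    DRAFT, SUBMITTED, APPROVED, REJECTED;

    // Which states each status may move to.
    Set<EnvironmentStatus> allowedNext() {
        return switch (this) {
            case DRAFT -> EnumSet.of(SUBMITTED);
            case SUBMITTED -> EnumSet.of(APPROVED, REJECTED);
            case REJECTED -> EnumSet.of(SUBMITTED); // creator resubmits after fixes
            case APPROVED -> EnumSet.noneOf(EnvironmentStatus.class);
        };
    }

    // Only approved environments can be assigned to spaces.
    boolean assignableToSpace() {
        return this == APPROVED;
    }
}
```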

When a bundle updates, the service invalidates the CloudFront cache so clients pull the fresh version. The invalidation targets the specific distribution path rather than flushing the entire cache.
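Targeting the specific path means deriving the invalidation path from the bundle's CDN URL rather than passing "/*". A sketch of that derivation (pure string logic; the actual SDK call to create the invalidation is omitted here):

```java
import java.net.URI;

// Derive the CloudFront invalidation path from a bundle's CDN URL,
// so only that object's cache entry is flushed.
class InvalidationPaths {
    static String pathFor(String assetBundleUrl) {
        // e.g. https://d123.cloudfront.net/environments/lab.bundle -> /environments/lab.bundle
        return URI.create(assetBundleUrl).getPath();
    }
}
```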

Spaces and sessions

An environment is a place. A space is an instance of that place with its own configuration, access controls, and persistent state. One pharmaceutical facility environment might back three spaces: a training space, an onboarding space, and a maintenance reference space. Each has different user permissions and different interactive content enabled.
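The environment/space split can be sketched as two small records: one shared place, several configured instances of it. These shapes are illustrative, not the actual entities.

```java
import java.util.Set;

// Per-space configuration: who can enter, what interactive content is on.
record SpaceConfig(Set<String> allowedRoles, boolean interactiveContentEnabled) {}

// A space references the environment it instantiates by UUID.
record Space(String environmentUuid, String name, SpaceConfig config) {}
```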

The session layer handles multiplayer. When a user enters a space, the backend provides connection details for Photon (networking) and Agora (voice). The SpaceSessionController returns everything the client needs in a single call: room ID, voice channel token, environment bundle URL.

@GetMapping(value = "/sessions/{userUuid}/{spaceProCode}")
public ResponseEntity<Object> getSessionDetails(
    @PathVariable UUID userUuid,
    @PathVariable String spaceProCode
) {
    return ResponseEntity.ok()
        .body(spaceProService.getSessionDetails(userUuid, spaceProCode));
}
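The payload behind that single call might be shaped like the record below. Field names are assumptions; the source only specifies that room ID, voice channel token, and bundle URL come back together.

```java
// Illustrative shape of the session-details response.
record SessionDetails(
    String photonRoomId,      // Photon room to join for state sync
    String agoraChannelToken, // token for the Agora voice channel
    String assetBundleUrl     // CDN URL of the environment bundle
) {}
```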

Photon handles state synchronization. When a room is created, the backend stores an empty state object. As users interact with the environment, state updates flow through Photon and persist to the database on room close. When users rejoin, the room picks up where it left off. The persistence service tracks room lifecycle through webhooks: create, join, leave, close.
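The lifecycle above can be sketched as a small handler, with in-memory maps standing in for the database. Method names and the JSON-blob representation are assumptions; the point is the shape: empty state on first create, single-blob persist on close, saved state restored on re-open.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of room-state persistence driven by Photon's lifecycle webhooks.
class RoomLifecycleHandler {
    private final Map<String, String> liveState = new HashMap<>();  // roomId -> current state
    private final Map<String, String> savedState = new HashMap<>(); // roomId -> persisted state

    void onCreate(String roomId) {
        // Start from the previously saved blob if the room is re-opened, else empty.
        liveState.put(roomId, savedState.getOrDefault(roomId, "{}"));
    }

    void onStateUpdate(String roomId, String stateJson) {
        liveState.put(roomId, stateJson);
    }

    void onClose(String roomId) {
        // Persist the final blob; intermediate state is lost if the session crashes.
        savedState.put(roomId, liveState.remove(roomId));
    }

    String currentState(String roomId) {
        return liveState.getOrDefault(roomId, "{}");
    }
}
```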

What made it work for training

The pharmaceutical company's trainers could stand next to a new hire in VR, point at actual equipment in context, and explain procedures. The spatial relationship between equipment mattered. Understanding that the gowning room is here, the clean room airlock is there, and the bioreactors are arranged in this specific layout is knowledge that slides and videos cannot convey.

The water authority project confirmed the pattern. Their building had complex mechanical systems spread across multiple floors. New employees needed to understand which systems served which zones. A digital twin gave them the spatial mental model before they ever entered the building.

Trade-offs

Asset bundles are opaque binaries. You cannot patch a single texture without rebuilding the entire bundle. This made iteration slow during early development. We accepted this trade-off because bundles load atomically, which means no partial states, no missing textures, no version mismatches between assets.

The 180MB cap forced discipline on scene complexity. Some facilities wanted every cable tray and junction box modeled. The cap meant we had to decide what mattered for the training use case and leave the rest out. This was the right constraint. Completeness is the enemy of usability in VR where frame rate directly affects comfort.

Photon's state persistence model worked but was coarse. Room state saved as a single JSON blob on close. If a session crashed, intermediate state was lost. For training scenarios this was acceptable. For collaborative design work it would not have been.

My previous BIM/AR projects dealt with the same fundamental problem: getting building data into a real-time 3D context. A digital twin replaces reality entirely, which raises the bar on visual quality but removes the constraint of camera-feed integration.

I was Technical Director and co-founder at Ravel from 2021 to October 2022.