
from unity build to live environment: the creator pipeline

Ravel was a VR collaboration platform where users met in 3D environments on the web. We shipped a set of default environments, but the real unlock was letting third-party creators build and submit their own. Faculty at an arts university used this to create virtual campuses for exhibitions and teaching. The problem was not building environments. It was getting them safely from a creator's Unity project into production.

Two gates, one pipeline

The pipeline had two approval stages. First, a creator applied for an account. An admin reviewed the request (creator name, bio, intent) and approved or declined it. Only after approval could that creator upload environments. Second, each environment submission went through its own review: preview image, asset bundle URL, short and long description. An admin could enter the environment in a preview mode before approving it for public use. Both gates tracked approved and denied as independent booleans, not a single enum. This meant a submission could be explicitly declined and the creator notified, rather than left in an ambiguous pending state.
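The two flags encode three effective states. A minimal sketch of that mapping (class and method names here are hypothetical, not the actual entity):

```java
// Sketch of the two-boolean review state. PENDING is the implicit
// third state: neither flag set means the item is still in the queue.
class ReviewGate {
    enum State { PENDING, APPROVED, DECLINED }

    private boolean approved;
    private boolean denied;

    State state() {
        if (approved) return State.APPROVED;
        if (denied) return State.DECLINED;
        return State.PENDING;
    }

    void approve() { approved = true; denied = false; }
    void decline() { denied = true; approved = false; }
}
```

Declining sets a flag rather than deleting the record, which is what lets the creator be notified instead of left guessing.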

Service architecture

The platform ran a main backend monolith handling users, spaces, organizations, and authentication. Alongside it, two microservices handled the creator pipeline.

The environments service managed creator accounts, environment metadata, preview images, asset bundle URLs, and the submission review workflow. Creators uploaded their Unity WebGL builds as asset bundles to S3 through this service. It stored owner metadata on the S3 object itself and validated file size limits before accepting uploads.
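The size check ran before any bytes reached S3. A sketch of that validation; the class name and the 200 MB limit are assumptions, not the platform's actual configuration:

```java
// Hypothetical pre-upload check. The limit is an assumed value chosen
// for illustration, not Ravel's real configuration.
final class UploadValidator {
    private static final long MAX_BUNDLE_BYTES = 200L * 1024 * 1024;

    static void validate(long contentLength) {
        if (contentLength <= 0) {
            throw new IllegalArgumentException("empty or unknown content length");
        }
        if (contentLength > MAX_BUNDLE_BYTES) {
            throw new IllegalArgumentException(
                "asset bundle exceeds " + MAX_BUNDLE_BYTES + " bytes");
        }
    }
}
```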

The devtools service handled build orchestration. It received builds from two sources: manual zip uploads through the admin panel, and automated builds from Unity Cloud Build via webhook. Both paths produced the same artifact: a build record with a hash ID, S3 key, and a CDN base URL pointing to cdn.ravel.world.

Build ingestion

Unity Cloud Build posted a webhook payload to the devtools service whenever a build completed. The webhook handler extracted the artifact download URL and filename from the payload, created a new build record, and handed off to an async process. This was important: the webhook needed to return 200 immediately, because Unity Cloud Build had short timeout windows. The actual download and processing happened on a background thread using Spring's @Async.

@Async // runs on a background thread so the webhook handler can return 200 immediately
public void unityCloudBuildWebHook(UnityPost unityPost) {
    try {
        buildService.unityCloudBuildWebHook(unityPost);
    } catch (IOException e) {
        // failure is only logged; there is no retry (see Trade-offs below)
        e.printStackTrace();
    }
}

The build service pulled the artifact URL from the Unity webhook payload, streamed the zip to the server, then delegated to the upload pipeline. Manual uploads followed the same path, just with a multipart file instead of a remote URL.
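Streaming the remote artifact to a local temp file before unzipping might look like this; the class and method names are illustrative, not the service's real API:

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: download the Unity Cloud Build artifact to disk, then hand
// the file to the same unzip-and-upload path that manual uploads use.
final class ArtifactFetcher {
    static Path fetch(String artifactUrl, String filename) throws Exception {
        Path target = Files.createTempDirectory("build").resolve(filename);
        try (InputStream in = new URL(artifactUrl).openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        return target;
    }
}
```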

Unzip and deploy to S3

Each build got a hash ID derived from its database sequence. The S3 key structure was webgl/b/{hashId}, with the archive stored under /archive/ and the extracted files placed directly under the build path. The async processor unzipped the archive on the server, walked the file tree, and uploaded each file individually to S3.

Files.walkFileTree(Paths.get(dir), new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        // strip the local extraction dir to get the S3-relative key
        // (plain replace, not replaceAll: dir is a literal path, not a regex)
        String newName = file.normalize().toString().replace(dir, "");
        s3Service.uploadToS3WithInputStream(s3BaseUrl + newName, ...);
        Files.delete(file.normalize()); // free disk space as each file lands on S3
        return FileVisitResult.CONTINUE;
    }
});

After uploading, the service listed all objects under that S3 key and stored the details as a JSONB column on the build record. The build's base URL resolved to https://cdn.ravel.world/webgl/b/{hashId}/Build/{projectName}, which was the entry point the WebGL loader needed. CloudFront sat in front of S3, so a wildcard invalidation (/*) was triggered after each deploy to ensure fresh content.
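Once the hash ID exists, resolving the entry-point URL is pure string formatting. A sketch of that step (the hash derivation from the database sequence is elided here, and the class name is hypothetical):

```java
// Formats the public CDN entry point for a deployed build, following
// the key structure described above: webgl/b/{hashId}/Build/{projectName}.
final class BuildUrls {
    private static final String CDN = "https://cdn.ravel.world";

    static String baseUrl(String hashId, String projectName) {
        return CDN + "/webgl/b/" + hashId + "/Build/" + projectName;
    }
}
```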

Promotion

A build existing on S3 did not make it live. Builds and environments were separate entities. An environment had a many-to-one relationship to a build: multiple environments could reference the same build, and promoting a new build to an environment was an explicit action. The promoteBuildEnvironment method on the build service looked up both entities, updated the timestamp, and assigned the build.

This separation mattered. A creator could have multiple builds in flight. An admin could review a build, test it in preview, and only then promote it to the environment that players actually joined. Rolling back meant promoting the previous build again.
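A plain-Java sketch of the promotion semantics, with persistence omitted and field names assumed:

```java
import java.time.Instant;

class Build {
    final String hashId;
    Build(String hashId) { this.hashId = hashId; }
}

class Environment {
    Build activeBuild;   // many environments may reference one build
    Instant promotedAt;

    // Explicit action: a build existing on S3 does not make it live.
    // Rolling back is just promoting the previous build again.
    void promote(Build build) {
        this.activeBuild = build;
        this.promotedAt = Instant.now();
    }
}
```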

The admin panel

The admin panel had two queue views: creator account requests and environment submissions. The submissions queue filtered to show only items where both approved and denied were false, giving reviewers a clean list of pending work. A reviewer could enter the environment in preview mode, then approve or decline. No workflow engine, no state machine library. Just two booleans and a REST endpoint.
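The pending filter reduces to a single predicate over the two flags. A sketch, with a hypothetical record shape standing in for the submission entity:

```java
import java.util.List;
import java.util.stream.Collectors;

record Submission(String name, boolean approved, boolean denied) {}

class SubmissionQueue {
    // pending = neither explicitly approved nor explicitly declined
    static List<Submission> pending(List<Submission> all) {
        return all.stream()
                  .filter(s -> !s.approved() && !s.denied())
                  .collect(Collectors.toList());
    }
}
```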

What an arts university built with it

One of our education clients used this pipeline to create virtual exhibition spaces for their students. Faculty members applied for creator accounts, built environments using our Unity template, uploaded them through the creator portal, and went through the review process. Their students then used these environments for 3D art exhibitions and virtual teaching sessions. The environments ranged from gallery spaces to experimental architectural forms.

The pipeline's value was not technical sophistication. It was trust. We could let external creators contribute to a platform where hundreds of users would enter these environments, knowing that every build had been reviewed, tested in preview, and explicitly promoted.

Trade-offs

The server-side unzip was a bottleneck. Large Unity builds could take minutes to process, and the async thread had no retry mechanism. If the server restarted mid-upload, the build was lost and needed to be resubmitted. A more robust approach would have been an SQS queue with dead-letter handling, or client-side unzip with direct S3 multipart upload. We accepted the limitation because build volume was low (a few per week) and the admin could simply re-trigger a build.

The two-service split also introduced coordination overhead. The devtools service owned builds and CDN deployment. The environments service owned creator accounts and submission review. The monolith owned the spaces where environments were used. Promoting a build required calls across service boundaries. For a team of three engineers this was manageable. For a larger team, a single service owning the full pipeline end-to-end would have been cleaner.

I was Technical Director and co-founder at Ravel from 2021 to October 2022.