News of the lockdown easing brings hope of a return to the studio, but we are all comfortable with our dispersed, remote-working approach now. Productivity is up, perhaps thanks to fewer distractions, but work is being completed on time and new projects are arriving. This was our week.

Toby – Leading From Home

With the government encouraging us all back to work we’ve decided that we’d rather stay in our homes for now thanks very much. We’re alright as we are, although I have to say my tomato plant seems to be suffering and I haven’t yet diagnosed the issue.

In other news we have a lot on our plates and it’s great. We’re just beginning to get a solution off the ground that will be the first to leverage the full power of our Innovation Centre; more on that in coming weeks.

Soon we’re going to have our first game ready to play, which is fun, and we’re about to start work on Project Holodeck. We’ve also had early conversations with a new client this week, which is amazing given how hard it currently is to find “New New” business from our homes.

Alongside that we’re taking a deeper look at the world of Events. When it comes to client discussions I often emphasise that Immersive Technology should be considered as a (valuable) part of the toolset to be appropriately applied by experts.

By that I mean that a training course, a Health & Safety briefing, a simulation, an academic study, a reconstruction, or whatever else, should be designed and led by experts.

At MXTreality, we consider ourselves experts in Immersive Technology and its associated tools, but we’d never pretend to be experts in someone else’s specialism.

Similarly, this week I had a really useful (for me anyway) conversation with someone we’ve known for a little while, who is a real expert in the design, planning, and execution of events, some of which are huge.

Together, I’m hoping we can construct a compelling Events and Experiences proposition that goes beyond the production of content alone, with thorough, expert conceptualisation and event management at its core.

Josh – Programme Manager

The need to provide content for this team blog always comes at the wrong time of the week for me. We’re in that (difficult, for me anyway) transition period between completing a lot of project components and planning new ones, which is a lot of plates to keep spinning. Next time, I promise.

Cat – Lead Programmer

This week, I brought the traffic to life by giving vehicles a semblance of vision and a mote of intelligence. They now have a concept of what constitutes a ‘safe’ and ‘comfortable’ distance between each other, which allows them to make decisions based on the vehicles around them.

For instance, they’ll try to maintain a comfortable distance behind the vehicle in front, slowing down to increase that distance, or speeding up if there’s nothing ahead. If they recognise that the car ahead is dangerously close, within the ‘safe’ distance, they’ll brake hard to avoid a collision.
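That decision rule can be sketched in a few lines. This is an illustrative, language-agnostic sketch, not our actual traffic code, and the distances and accelerations are invented for the example:

```python
SAFE_DISTANCE = 10.0     # metres; inside this gap, brake hard
COMFORT_DISTANCE = 25.0  # metres; inside this gap, ease off

def choose_acceleration(gap_to_car_ahead, current_speed, speed_limit):
    """Pick an acceleration from the gap to the vehicle in front.

    gap_to_car_ahead is None when the lane ahead is clear.
    """
    if gap_to_car_ahead is None:
        # Nothing ahead: speed up towards the limit.
        return 2.0 if current_speed < speed_limit else 0.0
    if gap_to_car_ahead < SAFE_DISTANCE:
        return -8.0  # dangerously close: brake hard to avoid a collision
    if gap_to_car_ahead < COMFORT_DISTANCE:
        return -2.0  # a little close: slow gently to open the gap
    # Comfortable gap: behave as if the road were clear.
    return 2.0 if current_speed < speed_limit else 0.0
```

The nice property of a rule like this is that each vehicle only needs local information – the gap ahead and its own speed – which keeps the system easy to test.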

One design challenge we faced was defining the vehicles’ behaviour to be realistic – in real life, everyone drives a little bit differently. On top of this, every new rule would add cost and complexity to the system, making it harder and harder to test.

Lastly, the order in which these rules are defined might be important. What minimal set of rules could we define to get realistic behaviour, and in what order?

One feature of the game that the traffic system forms part of is variable speed limits, as seen on Smart motorways.

This was a surprisingly difficult feature to add, as it meant synchronising the road – the same one developed at the start of the year for our hazard-spotting solution – and the traffic system, so the speed limit changes come into effect as cars pass under gantries.

Both the traffic system and the road use tricks to give the illusion of movement (discussed in a previous blog explaining floating point numbers), but they’re each independent of the other.

In particular, because of the grid abstraction the traffic system uses to represent its environment, it doesn’t have any real concept of distance at all.

So I had to find a way to represent the speed limit information in a way that would make sense to the AI controlling the vehicles, which meant developing an array-based solution, similar to the main environment representation.
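Conceptually, that amounts to expanding the gantry positions into an array parallel to the traffic grid, so each cell carries the limit set by the last gantry at or before it. A toy sketch under that assumption (the cell counts and limits are made up, and the real system is grid-based Unity DOTS code, not this Python):

```python
def build_speed_limit_grid(num_cells, default_limit, gantries):
    """Expand gantry positions into a per-cell speed limit array.

    gantries: (cell_index, new_limit) pairs.
    Each cell inherits the limit of the last gantry at or before it.
    """
    gantry_map = dict(gantries)
    limits = []
    current = default_limit
    for cell in range(num_cells):
        if cell in gantry_map:
            current = gantry_map[cell]  # a gantry sits in this cell
        limits.append(current)
    return limits

# A gantry at cell 3 drops the limit to 50; one at cell 7 restores 70.
grid = build_speed_limit_grid(10, 70, [(3, 50), (7, 70)])
```

With the limits in the same cell indexing as the environment representation, the AI can look up the current limit the same way it looks up everything else, without needing any real concept of distance.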

Lastly, as part of my work on the traffic this week, I implemented the player. One of the biggest advantages of our DOTS-based approach to traffic is that the player can be treated as just another vehicle in the system, making it very easy to ensure that the player can’t hit anything.

An example of this is that when you start the game, you don’t know what vehicle you’re going to get. The traffic is all generated first, and then the player code picks a random car near the middle of the system and designates it the player.

Then the AI is told not to generate input for that vehicle, handing that responsibility to the player instead. Hey presto, it all just works – after weeks of development!
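The idea boils down to a per-vehicle input source: the AI step skips any vehicle flagged as player-controlled. A minimal sketch with hypothetical names (the real implementation is Unity DOTS components, not these classes):

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    speed: float
    player_controlled: bool = False  # flag set when this car is 'promoted'

def input_step(vehicles, ai_input, player_input):
    """Apply input to every vehicle; the AI never touches the player's car."""
    for v in vehicles:
        if v.player_controlled:
            v.speed += player_input    # input comes from the controller
        else:
            v.speed += ai_input(v)     # input comes from the traffic AI

cars = [Vehicle(speed=20.0), Vehicle(speed=20.0)]
cars[1].player_controlled = True       # designate one car as the player
input_step(cars, lambda v: 1.0, 2.5)
```

Everything downstream of the input step – collision avoidance, movement – processes all vehicles identically, which is why the player automatically benefits from the traffic’s anti-collision behaviour.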

The traffic’s anti-collision and movement systems have no idea that one of the cars they’re operating on is actually the player, which means adding functionality for both the player and the AI vehicles is just a matter of adding it to the traffic system.

A great week for me playing in traffic, which is ironic when you consider that, of the eight in the team, only two actually drive and I’m not one of them!

Sergio – Programmer

Baking Lights – I continued my web development research into the stack we’ll require and the design language we are considering: the IBM Carbon Design System.

I also spent time investigating methods to build global illumination into dynamically instantiated objects to improve performance and increase quality for Highways England Events.

Because dynamic global illumination is very expensive to simulate in real time, the classic technique in 3D world-building is to save all the light information that hits and bounces from surfaces into a texture map (a lightmap) and apply it on top of the geometry.

This saves performance by faking the effect, but the process of saving that data, called ‘baking’, can take a great amount of time, is only done offline (in production), and only works on objects that cannot move in the scene.

This creates a problem for us, as we dynamically instantiate objects at runtime and cannot bake them all together at the same time. One technique is to bake each object individually in a separate scene and save that data remotely for later use.

Unfortunately, this is tricky to do, as the Unity Engine does not provide the tools and custom logic needs to be created – but we love a challenge.
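The shape of the workaround can be pictured as a simple bake-once cache: run the expensive offline bake per prefab, store the result, and reapply it cheaply whenever the object is spawned. This is only a conceptual sketch – the bake itself, the storage format, and the names here are all stand-ins, not Unity API:

```python
# Offline step: each prefab is baked alone and its lightmap data cached,
# keyed by prefab name (a stand-in for however the data is persisted).
baked_lightmaps = {}

def bake_prefab(name, bake_fn):
    """Run the expensive offline bake once per prefab and cache the result."""
    if name not in baked_lightmaps:
        baked_lightmaps[name] = bake_fn(name)
    return baked_lightmaps[name]

# Runtime step: instantiating an object is just a lookup, no runtime bake.
def instantiate(name):
    return {"prefab": name, "lightmap": baked_lightmaps[name]}
```

The trade-off is that each object is lit as if it were alone, so bounced light between dynamically placed objects is lost – the price of moving the cost offline.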

Slava – Lead 3D Artist

This week I have been creating an emergency refuge area (ERA) on the motorway for multiple projects we are currently working on. Any driver will recognise the special parking bay for broken-down cars, with its specific layout and markings, typically used on Smart motorways without a continuous hard shoulder.

The ERA bay also includes additional signs and an SOS phone box to recreate an exact copy of what you will experience if you ever break down on a Smart motorway; getting there required several smaller subtasks.

When I had finished and tested the ERA in Unity, I updated all the terrain sections we use in the driving simulator to integrate it seamlessly.

Stefano – 3D Artist

This week I again spent time increasing the realism of the animations, really focussing in on the detail, which is becoming the MXTreality trademark I guess.

As everybody knows, every movement of the head and of each facial muscle, from the most obvious to the barely noticeable, contributes to the different expressions we use to communicate specific messages.

In a VR experience where the purpose is the interaction between people, it is vital that users can pick up on non-verbal signs. To be able to achieve this, we have to let the animated character know where the user’s eyes are during the simulation.

In the beginning, we constrained the head to look constantly towards the user, but the result was not really good enough, because the effect overrode the original animation, losing all of the head movements important for non-verbal cues.

Then we programmed the head to look at the user only within a limited angular range, so Unity can understand when, in the original animation, the character was meant to look at the user and when not.

This allows the program to decide procedurally when to free the head from the limited range and let it express itself as described in the original animation. We have now tweaked the behaviour to accurately reflect that of a person being evasive, trying to avoid eye contact.
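The angular gate can be sketched as a blend weight: the look-at constraint gets full weight only when the animated head is already pointing near the user, fading to zero at the threshold so the authored animation wins whenever the character ‘means’ to look away. The threshold and function here are illustrative, not our actual rig values:

```python
MAX_LOOK_ANGLE = 40.0  # degrees; beyond this, keep the authored animation

def look_at_weight(angle_to_user):
    """Blend weight for the look-at constraint.

    1.0 when the user is dead ahead of the animated head direction,
    fading linearly to 0.0 as the angle reaches the limit, so the
    authored head movement takes over outside the range.
    """
    if angle_to_user >= MAX_LOOK_ANGLE:
        return 0.0
    return 1.0 - angle_to_user / MAX_LOOK_ANGLE
```

Driving the constraint with a weight like this, rather than switching it hard on and off, avoids the head snapping between the animation and the look-at pose.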

The same has been done for the eyes, adding some random fast movements to better express anxiety; the results of this attention to detail can be appreciated in these animated gifs:



Kyung-Min – 3D Generalist

This week my work continues on virtual event projects, as interest grows from the many sectors stifled by the pandemic and unable to hold exhibitions, events and more!

On a personal note, issues beyond my control affected my output this week, but the team here at MXTreality showed the true nature of the business – I was given time to restructure, rest and rebuild without even having to explain myself.

It means a lot. So many businesses express how much they care for their people, but in my decade of experience, few if any follow through like MXTreality and demonstrate with actions not words just how much I’m valued as a member of the team.

It’s almost like a family, and how I believe a creative studio should be if it is to deliver exceptional work, the sort that brings clients back time after time. I also passed my probation this month and am now proud to call myself a permanent member of the MXTreality team – thanks Toby.

My work on the Events Projects made me reflect on what we do at MXTreality as a leading creator of Immersive Experiences. So much of what we do is to innovate and push the boundaries of what is possible.

But it’s a fine line between showing customers what they didn’t know was possible, which can be good, and things so ahead of their time that they are rejected.

Balancing the familiar, yet extraordinary with reality, is a tricky challenge for our online exhibition spaces, but it’s these challenges that bring us to life in the studio – well at home for now!

Jordi Caballol – Programmer

This week we have been finishing the game. With the core systems ready, it was time to build everything around them, the most important of which is the actual game!

Like any game we needed rules, and in this case they are simple: you have two minutes to drive the furthest distance possible, but if you do anything dangerous, like going over the speed limit or crashing into another car, you are penalised and have less time to play.

Finally, when you run out of time, you get a puncture and have to stop at an ERA, where you answer some questions on how you should respond when using one.
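Those rules boil down to a countdown that penalties eat into. A minimal sketch, with invented penalty sizes purely for illustration:

```python
GAME_TIME = 120.0  # two minutes, in seconds

PENALTIES = {      # seconds deducted per offence (illustrative values)
    "speeding": 5.0,
    "collision": 10.0,
}

def apply_penalty(time_remaining, offence):
    """Deduct the penalty for an offence, never dropping below zero."""
    return max(0.0, time_remaining - PENALTIES[offence])

t = GAME_TIME
t = apply_penalty(t, "speeding")   # dangerous driving costs you time
t = apply_penalty(t, "collision")  # a crash costs even more
```

Charging penalties against the clock rather than a separate score keeps the goal simple: every mistake directly shortens how far you can drive.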

Aside from these rules, there’s another important aspect of the game that was completed this week: all the menus and the scoreboard.

The menus are really simple: a main menu with options to start playing, choose a name, check the controls and exit the game; and after the game, a simple table shows how well we did compared to the other players – it’s not GTA, but wait until you experience the realism.