Creating a scenario that can reduce biases and teach inclusive behaviour is a challenging task. In general, virtual reality technology allows us to enter new and unfamiliar spaces simply by putting on a headset. In some instances, such as in gaming, you can immerse yourself in incredible experiences even though you might only see your hands. Floating hands are the most common representation in virtual reality applications, as these are focused on interacting with the “other” world. From the back-end, it is also a much simpler problem, as arms, body and legs do not have to be calibrated or rendered.
When you enter into a Virtual Bodyworks scenario, however, while the space may be unfamiliar, you feel that the real you is actually there. Behind the scenes, we use a full body avatar whose movements and dimensions are coherent with those of your own body. This technique, pioneered by our founders, creates the illusion in the brain that your virtual body is actually your own (this illusion is referred to as embodiment). And, interestingly, our research has shown that embodiment is effective in influencing behavioural change in the real world too.
So, how do we actually create avatars? How similar is it to designing characters in The Sims, the popular video game?
Making full body avatars is a complex and technically challenging process, especially when we take speed, the ability to use the avatar in different applications, portability, performance and quality into consideration. In this blog, we walk you through our avatar creation pipeline; our criteria for success; how we deal with improvements and performance; and, ultimately, what we see happening in the future for this technology.
Creating a virtual full body avatar
Given software constraints and our commitment to creating quality, full body avatars, our team constantly improves the process of creation.
- Step 1: Selfie time
Before we can get started, we need our clients to take pictures of themselves, much like they would for a driver’s license or government ID document.
- Step 2: Character Creation
Pictures are used as the basis of each avatar, which we create using a software application called Character Creator. Automatic uploads do not necessarily maintain the quality we expect if we want our avatars to look as much like our clients as possible. For this reason, we rely on 3D artists who can adjust the face shape, skin color, wrinkles, clothing, etc.
- Step 3: Export
Once the avatar is acceptable, we export it into our proprietary software, which optimizes the avatar for performance and then transforms it into an asset bundle with customisable parameters within it (skin color, definition, length of arms, facial dimensions, etc.).
This last step sounds a bit technical, but it is important. The asset bundle allows us to change an avatar easily within a scenario, from adjusting its dimensions to updating it to match new software versions.
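To make the idea of customisable parameters concrete, here is a minimal sketch in Python. The parameter names, ranges and the clamping behaviour are purely illustrative assumptions, not the actual format of our asset bundles:

```python
from dataclasses import dataclass

# Hypothetical sketch of the kind of customisable parameters an avatar
# asset bundle might expose; names and ranges are illustrative only.
@dataclass
class AvatarParams:
    skin_tone: float = 0.5       # 0.0 (lightest) .. 1.0 (darkest)
    arm_length_m: float = 0.60   # shoulder-to-wrist length in metres
    face_width_scale: float = 1.0

    def clamped(self) -> "AvatarParams":
        """Return a copy with every parameter clamped to a safe range."""
        clamp = lambda v, lo, hi: max(lo, min(hi, v))
        return AvatarParams(
            skin_tone=clamp(self.skin_tone, 0.0, 1.0),
            arm_length_m=clamp(self.arm_length_m, 0.4, 0.9),
            face_width_scale=clamp(self.face_width_scale, 0.8, 1.2),
        )

# An out-of-range request is pulled back into a plausible human range.
params = AvatarParams(arm_length_m=1.5).clamped()
print(params.arm_length_m)  # 0.9
```

Exposing parameters this way is what lets one bundle serve many scenarios: the scenario only tweaks values, never the underlying mesh.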
Motion capture studio? I’ll take the living room
In order to create a powerful scenario, it is not sufficient to have a full body avatar. It is also necessary to map your proportions onto those of the avatar: your virtual hands, legs, hips, head, wrists, etc. should move as if they were your own. This way, you can feel a virtual body as your own, even if it is a completely different size or shape to your real body.
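The proportion-matching step can be sketched very simply: measure each body segment on the user, compare it with the avatar's, and derive a per-segment scale factor. The segment names and lengths below are made-up examples, not our calibration data:

```python
# Illustrative sketch of proportion matching: derive a scale factor per
# tracked body segment so the avatar's limbs line up with the user's
# real ones. All lengths (in metres) are hypothetical examples.
user_segments = {"upper_arm": 0.31, "forearm": 0.27, "thigh": 0.45}
avatar_segments = {"upper_arm": 0.28, "forearm": 0.25, "thigh": 0.50}

scale = {
    name: avatar_segments[name] / user_segments[name]
    for name in user_segments
}

# Tracked joint positions are then remapped through these per-segment
# scales so the virtual hand lands where the real hand is felt.
print(round(scale["thigh"], 3))  # 1.111: the avatar's thigh is longer
```

Getting these ratios right is what allows embodiment to hold even when the virtual body differs markedly in size from the real one.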
Creating full body avatars usually requires motion capture studios, which are expensive and equipment-heavy. They are also not very practical, as they require actors and a lot of space, and things cannot be easily modified after production. However, studio recordings do offer developers the ability to track movements with precision and make very realistic avatars and scenarios. Because of its high price tag, the same technology is most often used in film animation.
For all these reasons, we have developed technology that allows us to record movements using just a headset and its controllers. Using inverse kinematics, we can record a scene with an Oculus Quest in any place with sufficient space. This has greatly simplified our process.
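The core idea of inverse kinematics is to infer unseen joint angles from the few points the headset actually tracks (head and hands). As a toy illustration, here is a 2D two-bone solver that recovers the elbow bend from nothing but the bone lengths and the tracked hand's distance from the shoulder. Real headset-based systems use far more elaborate full-body solvers; all numbers here are illustrative:

```python
import math

def two_bone_ik(upper: float, lower: float, target_dist: float) -> float:
    """Return the elbow bend angle (radians) that reaches a target at
    `target_dist` metres from the shoulder, given upper-arm and forearm
    lengths. A minimal 2D sketch of the inverse-kinematics idea."""
    # Clamp to the reachable range so the arm fully extends instead of
    # producing an invalid triangle when the target is too far away.
    d = min(target_dist, upper + lower)
    # Law of cosines: d^2 = upper^2 + lower^2 - 2*upper*lower*cos(elbow)
    cos_elbow = (upper**2 + lower**2 - d**2) / (2 * upper * lower)
    return math.acos(max(-1.0, min(1.0, cos_elbow)))

# Hand at full reach (0.3 m + 0.25 m): the elbow is straight (pi rad).
print(round(two_bone_ik(0.3, 0.25, 0.55), 3))  # 3.142
```

This is why three tracked points (headset plus two controllers) can drive a plausible full body: the intermediate joints are solved, not measured.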
Improvement and performance
Facebook’s Oculus and Pico Interactive are the most widespread headsets and are the ones we use as the baseline to run our applications. We made this decision because we believe accessibility will be key if virtual reality is to become as ubiquitous as television or cell phones in the future. However, the Quest does present challenges from a development standpoint.
As we have already established, we use full body avatars instead of floating hands. This means that the avatar’s polygon count (in effect, its level of detail) is much higher than if we were just using floating hands. Therefore, we need to make very specific, detailed decisions when creating the application to ensure smooth performance.
We balance four main elements in the application:
- Number of polygons/resolution
- Shaders, or the code behind the way materials appear
- Light and shadows
- Time complexity of the code, basically the efficiency of our program
We are focused on balancing the complexities between these main elements in order to prepare ourselves to be at the forefront of portable devices. It also means that we are making applications as optimized as possible, which will only make for higher quality scenarios.
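The balancing act above can be thought of as fitting every element into a fixed per-frame time budget. The 72 Hz refresh rate matches the original Oculus Quest, but the per-element costs below are made-up numbers, purely to illustrate the trade-off:

```python
# Toy frame-budget check for a standalone headset. 72 Hz matches the
# original Oculus Quest; the per-element costs are illustrative only.
TARGET_HZ = 72
FRAME_BUDGET_MS = 1000 / TARGET_HZ  # ~13.9 ms available per frame

costs_ms = {
    "geometry (polygons)": 4.0,
    "shaders/materials": 3.5,
    "lights and shadows": 3.0,
    "game logic (CPU)": 2.5,
}

total = sum(costs_ms.values())
print(f"{total:.1f} ms of {FRAME_BUDGET_MS:.1f} ms budget")
print("within budget" if total <= FRAME_BUDGET_MS else "over budget")
```

Spending more on one element (say, richer shadows) means reclaiming milliseconds from another, which is exactly the trade-off described above.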
The future of virtual reality
There are 3 key technological improvements that we believe will advance the possibilities of VR applications in the near future.
- The automation of avatar creation. Our 3D art team is working on the use of AI to automate the creation of avatars. Powered by cloud computing, developments in AI and our own expert algorithms, it should be possible to have an automated and highly scalable avatar creation pipeline.
- The improvement of graphics cards and Moore’s Law. More of an observed trend than a law in scientific terms, Moore’s Law has come to describe the expected pace of improvement in computer hardware: capabilities roughly double every couple of years, while cost decreases. This has indeed held for VR hardware, and we expect headsets to improve greatly in the next couple of years.
- 5G, Cloud Computing and Render Farms. Already present in developments like Google Stadia, low-latency, high-bandwidth connections and data centers dedicated specifically to rendering mean that we can have ever-smaller devices paired with incredible computing power. This change is already underway but, as it depends on improvements to the digital infrastructure of different nations, it may take longer to arrive.
Virtual reality is a technology at an inflexion point. We see the space moving at incredible speed, so at Virtual Bodyworks we continue to innovate in order to maintain the pace of the industry. It feels, however, that technology is no longer the barrier that it used to be. New developments have made virtual reality more accessible than ever before. Now, adoption is the challenge we face as an industry.
If you want to keep up to date with our latest developments in virtual reality, follow us on LinkedIn.