Monday, November 11, 2013

Do you make your own [...]?

Quite often, I get asked whether my company makes specific components - motion trackers, electronics, optics, etc. - that we use inside our virtual reality goggles.

As one would expect, we look at these 'make vs. buy' decisions individually, and ask several questions:

  • Can we add value for our customers if we 'make'? Can we create a product that is significantly better, lower-cost, or that offers some other unique benefit relative to the 'buy' alternative?
  • How many of these do we expect to make? We are much more likely to buy when only small quantities are required, and more inclined to make at higher volumes.
  • Can we afford it?
  • Can we build it on time?
  • Can we create value for our shareholders by generating valuable patent filings or know-how?
  • Is this a discipline that we need to understand very well for our future business?
  • Does a 'buy' option exist?
Historically, these have been our answers:
  • Orientation trackers: we typically buy, but we then try to improve what we buy. We have worked with many of the leading orientation tracking vendors - Intersense, Inertial Labs, Hillcrest, YEI - and have decided against developing our own. However, we have often worked with these manufacturers to introduce new features in their products or to optimize them for HMD use. We have added some of our own features, such as predictive tracking, when those did not exist. Lastly, we prefer to encapsulate the vendor-specific API with a standard Sensics interface, because this lets our customers preserve their software investment when we change the motion tracker vendor inside our products (see the sketch after this list).
  • Optics. To date, we have always designed and made optics ourselves. We have made optics for small and large displays, using glass, plastic and fiber, and using a variety of manufacturing technologies. We believe that our portfolio of optical designs is an advantage, and that optics are a critical part of the goggle experience.
  • Electronics. We often design our own electronics. Sometimes we need special high-speed processing; in other instances, we need something beyond simply driving a display, such as unique video processing, distortion correction or packaging that supports a particularly compact design.
  • Displays. We buy. We have neither the know-how nor the capital to make our own displays, and in a world of changing display technologies, we're glad not to be locked into a specific one. Having said that, we worked with eMagin in prior years to modify the size of one of their OLED driver boards to make a system more compact and achieve a better optical design. It was a financial investment, but we felt we added value for our customers.
  • Mechanical design. We rarely design accessories such as helmet mounts, but we do love to design goggle enclosures, whether to give them our unique 'look' or to include innovative features such as hand tracking sensors.
  • Software. We write our own (or pay to have it written). Our software is so deeply tied to the unique functionality of our designs that it is not available for off-the-shelf purchase.
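To illustrate the tracker encapsulation and predictive tracking mentioned above, here is a minimal C++ sketch. Every name in it (ITracker, VendorXTracker, predict) is invented for this illustration; it is not the actual Sensics API, and the prediction shown is a single first-order extrapolation step rather than any particular vendor's algorithm.

```cpp
// Hypothetical sketch: wrapping vendor-specific trackers behind one interface,
// plus simple predictive tracking. Names are invented; this is not the Sensics API.
#include <cmath>

struct Quat { float w, x, y, z; };       // orientation as a unit quaternion
struct AngularRate { float x, y, z; };   // angular velocity in rad/s

// The application codes against this interface only.
class ITracker {
public:
    virtual ~ITracker() = default;
    virtual Quat orientation() const = 0;
    virtual AngularRate angularRate() const = 0;
};

// One adapter per vendor; changing vendors never touches application code.
class VendorXTracker : public ITracker {
public:
    Quat orientation() const override { return {1, 0, 0, 0}; }     // call vendor X's native API here
    AngularRate angularRate() const override { return {0, 0, 0}; } // likewise
};

// First-order predictive tracking: extrapolate the orientation dt seconds
// ahead using the current angular rate (dq/dt = 0.5 * (0, w) * q).
Quat predict(const ITracker& t, float dt) {
    Quat q = t.orientation();
    AngularRate w = t.angularRate();
    Quat dq {
        0.5f * (-w.x * q.x - w.y * q.y - w.z * q.z),
        0.5f * ( w.x * q.w + w.y * q.z - w.z * q.y),
        0.5f * ( w.y * q.w + w.z * q.x - w.x * q.z),
        0.5f * ( w.z * q.w + w.x * q.y - w.y * q.x)
    };
    Quat p { q.w + dq.w * dt, q.x + dq.x * dt, q.y + dq.y * dt, q.z + dq.z * dt };
    float n = std::sqrt(p.w * p.w + p.x * p.x + p.y * p.y + p.z * p.z);
    return { p.w / n, p.x / n, p.y / n, p.z / n };   // renormalize
}
```

Because the application depends only on the interface, swapping the vendor adapter behind it never ripples into application code.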
If you are a manufacturer and would like to see how we can use some of our technologies to help you bring new, innovative products to market in short order, drop me a line.


Sunday, November 3, 2013

An Interview with Sebastien Kuntz, CEO of "I'm in VR"

Following my blog post "Where are the VR abstraction layers?" I had an opportunity to speak with Sebastien Kuntz, CEO of "I'm in VR", a Paris-based company that is attempting to create such layers. I've known Sebastien for several years, since he was part of Virtools (now Dassault Systemes), and it was good to catch up and get his up-to-date perspective.
Sebastien Kuntz, CEO of "I'm in VR"

Sebastien, thank you for speaking with me. For those who do not know you, can you tell us who you are and what you do?
My name is Sebastien Kuntz and I am CEO of "i’m in VR". Our goal is to accelerate the democratization of virtual reality, and towards that goal we created the MiddleVR middleware product. I have about 12 years of experience in virtual reality, starting at the French national railway company working on immersive training, and continuing as the lead VR engineer at Virtools, which made 3D software. After Virtools was acquired by Dassault Systemes, I decided to start my own company, and that is how "i’m in VR" was born.
What is the problem that you are trying to solve?
Creating VR applications is complex because you have to take care of a lot of things – tracking devices, stereoscopy, synchronization of multiple computers (if you are working with a CAVE), and interactions with the virtual environment (VE).
This is even more complex when you want to deploy the same application on multiple VR systems - different HMDs, VR walls, CAVEs... We want developers to focus on making great applications instead of working on low-level issues that are already solved.
MiddleVR helps you in two ways:
  • It simplifies the creation of your VR applications with our integration in Unity. It manages the devices and cameras for you, and offers high-level interactions such as navigation, selection and manipulation of objects. Soon we will add easy-to-use immersive menus, more interactions and haptic feedback.
  • It simplifies the deployment of your VR applications on multiple VR systems: the MiddleVR configuration tool helps you easily create a description of any VR system, from low-cost to high-end. You can then run your application, and it will be dynamically reconfigured to work with your VR system without modification.
MiddleVR is an easy-to-use, modern, commercial equivalent of CAVELib and VR Juggler.
How do you provide this portability of a VR application to different systems?
MiddleVR provides several layers of abstraction.
  • Device drivers: developers don't have direct access to native drivers; instead, they have access to what we call "virtual devices", or proxy devices. The native drivers write tracker data, such as position and orientation, directly into such a "virtual device". This means that we can change the native drivers at runtime while the application still references the same "virtual device" (see the sketch after this list).
  • Display: all the cameras and viewports are created at runtime depending on the current configuration. This means your application is not dependent on a particular VR display.
  • 3D nodes: most of the time the developer does not care about the raw information from a tracker; he is more interested in, for example, the position of the user's head or hand. MiddleVR provides a configurable representation of the user, whose parts can be manipulated by tracking devices. For example, the Oculus Rift orientation tracker can rotate the 3D node representing the user's head, while a Razer Hydra can move the user's hands. Then, in your application, you can simply ask "Where is the user's head? Is his hand close to this object?", which does not rely on any particular device. This also has the big advantage of putting the user back at the center of application development!
  • Interactions: at an even higher level, the choice of an interaction technique is highly dependent on the characteristics of the hardware. If you have a treadmill, you will not navigate a VE in the same way as if you only have a mouse, or a joystick, or if you want to use gestures... The choice of a navigation technique should be made at runtime based on the available hardware. In the same way, selecting and manipulating an object in a VE can be made very efficient if you use the right interaction techniques for your particular hardware. This is work in progress, but we would like to provide this kind of interaction abstraction. We are also working on offering immersive menus and GUIs based on HTML5.
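To make the device and node layers above concrete, here is a short hypothetical C++ sketch. The names (VirtualDevice, UserModel, and so on) are invented for illustration and are not MiddleVR's actual API; they only mirror the ideas described: native drivers write into stable proxy devices, and the application asks questions about the user rather than about hardware.

```cpp
// Hypothetical sketch of the "virtual device" + user-model ideas; all names
// are invented for illustration and are not the MiddleVR API.
#include <array>
#include <string>
#include <unordered_map>

struct Pose {
    std::array<float, 3> position{};                 // meters
    std::array<float, 4> orientation{1, 0, 0, 0};    // unit quaternion (w, x, y, z)
};

// Proxy device: native drivers write into it; the application only reads it.
// The native driver behind a given virtual device can be swapped at runtime.
class VirtualDevice {
public:
    void write(const Pose& p) { pose_ = p; }   // called by whichever native driver is active
    Pose read() const { return pose_; }        // called by the application
private:
    Pose pose_;
};

// Configurable user model: body parts are bound to virtual devices by name,
// so the application asks about the user, not about the hardware.
class UserModel {
public:
    void bind(const std::string& part, VirtualDevice* dev) { parts_[part] = dev; }
    Pose poseOf(const std::string& part) const { return parts_.at(part)->read(); }
private:
    std::unordered_map<std::string, VirtualDevice*> parts_;
};

// Usage: an Oculus Rift driver could feed "head" and a Razer Hydra driver
// "right_hand"; the application lines below never change if the drivers do.
//   Pose head = user.poseOf("head");
//   Pose hand = user.poseOf("right_hand");
```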
Will this interaction layer also allow you to define gestures?
Yes, the interaction layer will certainly allow you to define and analyze gestures. Though this particular functionality is not yet implemented in the product, you will be able to integrate your own gestural interactions.
Do you extend Unity's capabilities specifically for VR systems?
Yes, we provide active stereoscopy, which Unity cannot do.
We also provide application synchronization across multiple computers, which is required for VR systems such as CAVEs. We synchronize the state of all input devices, Unity physics nodes, the display of new images (swap-buffer locking), and the display of left/right eye images in active stereo (genlock). As mentioned, we will also offer our own way of creating immersive menus and GUIs, because Unity's GUI system has a hard time dealing with stereoscopy and multiple viewports.
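As a rough illustration of what swap-buffer locking enforces, consider the sketch below. In a real CAVE cluster the rendezvous happens over the network between separate machines; here, as a simplifying assumption, C++20's std::barrier and threads stand in for that network barrier.

```cpp
// Sketch of swap-buffer locking: every render node must finish drawing a
// frame before any node swaps, so all walls show the same frame at once.
// In a real cluster this rendezvous is a network barrier between machines;
// std::barrier across threads stands in for it in this illustration.
#include <barrier>
#include <thread>
#include <vector>

constexpr int kNodes = 4;           // e.g., one render node per CAVE wall
std::barrier swapBarrier(kNodes);   // all nodes meet here once per frame

void renderNode(int /*wall*/) {
    for (int frame = 0; frame < 100; ++frame) {
        // 1. render this wall's view of the shared, synchronized scene state
        // 2. wait until every node has finished rendering this frame
        swapBarrier.arrive_and_wait();
        // 3. all nodes now swap buffers together, keeping the walls coherent
    }
}

int main() {
    std::vector<std::thread> nodes;
    for (int i = 0; i < kNodes; ++i) nodes.emplace_back(renderNode, i);
    for (auto& t : nodes) t.join();
}
```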
Do you support engines other than Unity?
MiddleVR was created to be generic, so technically it was designed to be integrated into multiple 3D engines, but we have not done so yet. It’s the usual balance of time and resources. We have made several promising prototypes, though.
How far are we from true ‘plug and play’ with regards to displays, trackers and other peripherals?
We are not there yet. You need a lot more information to completely describe a VR system than most people think.
First, there is no standard way to detect exactly which goggle, TV, projector or tracker is plugged in. [Editor's note: EDID does provide some of that information]
Then, for a display (HMD, 3D monitor, CAVE), it is not enough to describe resolution and field of view. You also need to understand the field of regard. With an HMD, you can look in all directions. With a 3D monitor or most CAVEs, you are not going to see an image if you look towards the back. The VR middleware needs to be aware of this and allow interaction methods that adapt to the field of regard. Moreover, you have to know the physical size of the screens to compute the correct perspective.
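To see why physical screen size matters, here is a minimal sketch of an off-axis projection for a fixed screen (a standard technique, not any particular product's code): the frustum is derived from the physical screen edges and the tracked head position, so without the physical dimensions the perspective cannot be computed correctly.

```cpp
// Minimal sketch: an off-axis frustum computed from the physical screen and
// the tracked head position. All values are in meters; the screen is assumed
// centered at the origin in the z=0 plane, with the head at z = headZ > 0.
struct Frustum { float left, right, bottom, top, zNear, zFar; };

Frustum frustumForHead(float screenW, float screenH,
                       float headX, float headY, float headZ,
                       float zNear = 0.1f, float zFar = 100.0f) {
    // Scale the screen-edge offsets (relative to the head) down to the near plane.
    float s = zNear / headZ;
    return {
        (-screenW / 2 - headX) * s,   // left
        ( screenW / 2 - headX) * s,   // right
        (-screenH / 2 - headY) * s,   // bottom
        ( screenH / 2 - headY) * s,   // top
        zNear, zFar
    };
}
// The six values map directly to glFrustum(). As the head moves, the frustum
// becomes asymmetric and the image on the fixed screen stays perspective-correct.
```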
I believe we should describe a VR system not based on its technical characteristics, such as display resolution or the number of cameras for optical tracking, but rather in terms of what those characteristics mean for the user in terms of immersion and interaction! For example (a sketch of such a descriptor follows this list):
  • What is the end-to-end latency of the VR system for each application? This will directly influence the overall perception of the VE.
  • What is the tracking volume and resolution in terms of position and orientation? This will directly influence how the user interacts with the VE: we will not interact the same way with a Leap Motion, which has a small tracking volume, as with the Oculus Rift tracker, which can only report orientation, or with 20 Vicon cameras able to track a whole room with both positions and orientations.
  • What is the angular resolution of the display? If you can't read text at a given distance, you have to be able to bring the text closer to you. If you can read the text because your VR system has a better angular resolution, you don't necessarily need this particular interaction.
  • What is the field of regard? As discussed above, this also influences your possible actions.
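As a thought experiment, such a user-centric descriptor might look like the sketch below. Since no such standard exists, every field, name and unit here is an assumption for illustration only.

```cpp
// Hypothetical, user-centric VR system descriptor; no such standard exists,
// so all fields, names and units here are assumptions.
struct VRSystemDescriptor {
    float endToEndLatencyMs;        // motion-to-photon latency of the whole system
    // Tracking: which interactions are physically possible?
    bool  tracksPosition;           // Oculus Rift tracker: false; Vicon room: true
    bool  tracksOrientation;
    float trackingVolumeM3;         // ~0 for orientation-only trackers
    float positionResolutionMm;
    float orientationResolutionDeg;
    // Display: what can the user actually perceive?
    float angularResolutionPixPerDeg;  // horizontal pixels / horizontal FOV
    float fieldOfViewDeg;              // instantaneous field of view
    float fieldOfRegardDeg;            // 360 for an HMD; much less for a 3D monitor
};

// Example: a hypothetical HMD with 1280 horizontal pixels and a 90-degree
// horizontal field of view has 1280 / 90 ≈ 14.2 pixels per degree, so small
// text would need to be brought closer to the viewer to be legible.
```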
The user's experience is based on his perceptions and actions, so we should only be talking about what influences those parameters. This requires a bit more work, because these parameters are highly dependent on the final integration of the VR system.
We are not aware of any standards work to create these ‘descriptors’, but we would certainly support such an effort, as it would benefit the industry and our customers.
Are most of your customers today building what we would call ‘professional applications’, or are you seeing game companies interested in this as well?

Gaming in VR is certainly gaining momentum and we are very interested in working with game developers on creating this multi-device capability. We are already working with some early adopters.
We are working hard to follow this path. For instance, we are about to release a free edition of MiddleVR aimed at low-end VR devices, and would like to provide a new commercial license for this kind of development. This is in our DNA; this is why we were born! We want to help the current democratization of VR.
When you think about porting a game to VR, there are two steps (as you have mentioned in your blog): the first one is to adapt the application to 3D, to motion tracking, etc. This is something you need to do regardless of the device you want to use.
The second is to adapt it to a particular device or set of devices. We can help with both, and especially with the second step. There will be many more goggles coming to market in the next few months. Why write for just one particular VR system when you can write a game that will support all of them?
What’s a good way to learn more about what MiddleVR can do for application developers? Any white paper or video that you recommend?
Our website has a lot of information. You can find a 5-minute tutorial, as well as another video demonstrating the capabilities of the Free edition.
Sebastien, thank you very much for speaking with me. I look forward to seeing more of I'm in VR in action.
Thank you, Yuval