
A New Look at the Camera

  • Writer: Nitsan Baider
  • Mar 13, 2019
  • 3 min read

Updated: Jul 31, 2019


So we’re building this technology that makes data come to life in a virtual 3D world, and there are several components required to make it happen. One such component is the camera. In virtual 3D space there’s a virtual object, typically referred to as the camera, which represents the point in space from which the world is viewed, as well as the direction being looked at. (Virtual world, virtual space... I’ll drop the ‘virtual’s from here on, for brevity.)


Our ability to move and turn the camera in space, in order to change our point of view, is quite powerful. (And “With great power comes…”, you know.) In 3D graphics this is often referred to as the six degrees of freedom: moving our point of view along the X, Y, and Z axes, and rotating around each of those axes. These degrees of freedom serve movies and games well, but they can also be a problem. The people who are supposed to use our technology to get insights from their data are, for the most part, not gamers. Moreover, when users are free to move anywhere around a data structure, it is all too easy to get disoriented: you find yourself looking at the sky and seeing nothing, or staring at an object that is so close it blocks your view, with no idea where you are, what you’re looking at, or why you’re seeing nothing at all. You can also simply be frustrated by how long it takes to reach your destination.
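
To make “six degrees of freedom” concrete, here’s a rough sketch of a free-flying camera pose. The TypeScript below is purely illustrative (it’s not taken from our actual code), but it shows the point: three translation values plus three rotation values, and every one of them is an axis along which a user can wander off and get lost.

```typescript
// A free-flying camera pose: three translation values and three rotation
// values. Every one of these is a way for a user to end up somewhere
// unexpected. (Illustrative only; any 3D framework has an equivalent.)
interface CameraPose {
  // Position of the viewpoint in world space.
  x: number;
  y: number;
  z: number;
  // Orientation as rotations around each axis, in radians.
  pitch: number; // around X: look up / down
  yaw: number;   // around Y: look left / right
  roll: number;  // around Z: tilt the horizon
}
```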


Our goal is to make it easy for users to navigate around the data in a smooth, natural way, without falling into these disorienting states. We need to let end users rely on their everyday navigation tools: mouse, touchpad, touch screen, and keyboard. Yes, in the future we may all be wearing some cool version of the currently all-too-awkward virtual/augmented reality headsets and using minimal gestures to fly around, but we are not there yet. Our need is to see data today, and we’re not getting to that utopian future fast enough.


So, how do we address these challenges? We reduce the degrees of freedom. We distill navigation to the bare minimum of needs. We automate some aspects of camera motion so that it behaves in an expected and desirable manner. And we place boundaries that prevent end users from getting too far away or too close, so that objects never appear too small or too big on screen.
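
Here’s an illustrative sketch of what that reduction can look like: an orbit-style controller where the user only steers two angles and a zoom distance, each clamped to a safe range, and the full camera position is derived from those. The names and limits below are made up for the example; they’re not our actual implementation, just the shape of the idea.

```typescript
// Constrained, orbit-style navigation: the camera always looks at a fixed
// target, the user only controls yaw, pitch, and distance, and each of
// those is clamped so the view can never end up staring at empty sky or
// inside an object. The limits are illustrative values only.
const MIN_DISTANCE = 2;                 // never close enough to clip through the data
const MAX_DISTANCE = 100;               // never so far that the data vanishes
const MAX_PITCH = Math.PI / 2 - 0.05;   // stop just short of looking straight up/down

interface OrbitState {
  yaw: number;      // rotation around the target (radians)
  pitch: number;    // elevation angle (radians)
  distance: number; // distance from the target
}

function clamp(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

// Apply one frame of user input (deltas from mouse / touch / keyboard).
function applyInput(state: OrbitState, dYaw: number, dPitch: number, dZoom: number): OrbitState {
  return {
    yaw: state.yaw + dYaw, // spinning all the way around is harmless, so it stays free
    pitch: clamp(state.pitch + dPitch, -MAX_PITCH, MAX_PITCH),
    distance: clamp(state.distance * (1 + dZoom), MIN_DISTANCE, MAX_DISTANCE),
  };
}

// Derive the actual camera position from the constrained state.
function toCameraPosition(state: OrbitState, target: { x: number; y: number; z: number }) {
  return {
    x: target.x + state.distance * Math.cos(state.pitch) * Math.sin(state.yaw),
    y: target.y + state.distance * Math.sin(state.pitch),
    z: target.z + state.distance * Math.cos(state.pitch) * Math.cos(state.yaw),
  };
}
```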


But the camera has an even more important role. As we look at data on our screens, we all know the screen is flat, yet our brains are somehow tricked into believing there is a truly three-dimensional scene ahead of us. There are multiple ways to accomplish that, but in our case the most powerful tool is the camera. It’s the very motion of the camera, the parallax it creates, that tells our brains which objects are closer to us and which are farther away. It’s the advent of fast rendering of such 3D scenes, at 30+ frames per second, that allows the camera to float around smoothly and give us this natural perception of depth. That is where the true power of the camera lies, and that is why making camera navigation accessible to all is the key to offering… sorry… I have to say it… a deeper visual.
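
If you want to see why motion alone conveys depth, here’s a tiny illustrative sketch (again, not our renderer): under a perspective projection, when the viewpoint slides sideways, nearby points shift across the screen much faster than distant ones, and that difference in apparent speed is exactly what the brain reads as depth.

```typescript
// Motion parallax in a nutshell: under a perspective projection, a point's
// horizontal screen position is roughly focal * (point.x - camera.x) / point.z.
// When the camera slides sideways, near points (small z) move across the
// screen quickly while far points (large z) barely move.
const FOCAL_LENGTH = 1;

function screenX(pointX: number, pointZ: number, cameraX: number): number {
  return (FOCAL_LENGTH * (pointX - cameraX)) / pointZ;
}

// Slide the camera 0.1 units per frame (roughly what a smooth 30+ fps
// animation gives you) and watch how differently the two points appear to move.
const nearPoint = { x: 0, z: 2 };
const farPoint = { x: 0, z: 50 };

for (let frame = 0; frame <= 3; frame++) {
  const cameraX = frame * 0.1;
  console.log(
    `frame ${frame}: near point at ${screenX(nearPoint.x, nearPoint.z, cameraX).toFixed(3)}, ` +
    `far point at ${screenX(farPoint.x, farPoint.z, cameraX).toFixed(3)}`
  );
}
```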



