02.17.2016


Editing between shots in VR is challenging. Mostly this is because you can’t be sure where the viewer will be facing when it’s time to cut. It’s dangerous to put important events at the beginning of a shot because a viewer might miss them before she has time to decide where to look.

There are ways to shoot and edit VR films that minimize this problem, however. I stole the diagrams here from Jessica Brillhart, though I set two of them together and added the arrows to illustrate my concept. In Jessica’s originals:

  • We’re viewing things top down.
  • Each ring moving out from the center is a new shot.
  • Since we’re addressing a 4D problem in 2D, we’re simplifying things quite a bit. We’re only caring about one of the three degrees of rotational freedom: arguably the most critical one, pan. Neither tilt nor Dutch tilt (roll) is in play.
  • The black dot is the direction you’re meant to face at the start of a shot.
  • The white dot is the direction you’re expected to be facing by the end of a shot.

So when editing, you just rotate the rings to line up the dots.
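
In code terms, lining up the dots amounts to computing a single yaw offset to apply to the incoming shot. A minimal sketch in Python; the function name and the degrees convention are my own, not from any particular VR toolkit:

```python
def yaw_offset_for_cut(end_yaw_a_deg, start_yaw_b_deg):
    """Rotate shot B so its black dot (the direction the viewer is
    meant to face at its start) lands where shot A's white dot (the
    direction they are expected to face at its end) left them.
    Returns the yaw offset to add to shot B, normalized to
    [-180, 180)."""
    offset = end_yaw_a_deg - start_yaw_b_deg
    return (offset + 180.0) % 360.0 - 180.0

# Shot A leaves the viewer facing 150 degrees; shot B was authored
# with its point of interest at -90 degrees.
print(yaw_offset_for_cut(150.0, -90.0))  # -> -120.0
```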

Now of course it’s not that simple:

  1. You have to create shots that can have dots that line up.
  2. And before you can do that you first have to figure out how to create shots with dots at all (places where the viewer’s attention lands reliably).
  3. And unless you live in a world where no HMDs have cords or all your viewers have swivel chairs, you have to worry about not going in circles all in one direction or the other.

These are all noble artistic challenges to pursue, and these constraints breed marvelous creative results. However, they are stories for another time. My area of interest here is what to call it when you skirt this issue. Sometimes you don’t want orientation to be continuous, relative to the previous shot. You just want to say, “No matter what, I want the viewer to start this shot facing this way.”

In terms of Jessica’s diagrams, you’d end your current circle, jumping off its final outermost ring, and move over to start growing a new circle of rings.

I’d call a move like this an “ope”, standing for Opening Perspective Enforcement. As a verb, too, one could say, “The shot from the surface of the moon opes on the Earth.” It sounds a bit like “the shot opens on,” which I consider a feature of the word, and I was pleasantly surprised to discover that “to ope” is in fact an archaic form of the verb “to open”.

You can force any combination of the three degrees of rotation while preserving what the viewer is doing in any of the others. For example, you could “ope the tilt” onto a plane in the sky (as long as you oped the tilt back to normal for the rest of the sequence, most likely).
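
As a sketch of how an ope could be applied selectively, assuming a simple per-axis Euler representation in degrees (a real engine would more likely compose quaternions):

```python
def ope(current, target, axes=("yaw",)):
    """Force the chosen rotational axes to a target opening
    orientation while leaving the viewer's other axes untouched.
    `current` and `target` are dicts of degrees keyed by 'yaw',
    'pitch', and 'roll'. Returns the rotation to apply to the
    world at the cut, per axis."""
    correction = {}
    for axis in ("yaw", "pitch", "roll"):
        if axis in axes:
            # Counter-rotate the world so the viewer opens the shot
            # facing the target, whatever they happen to be doing.
            correction[axis] = target[axis] - current[axis]
        else:
            correction[axis] = 0.0  # preserve what the viewer is doing
    return correction

# "Ope the tilt" onto a plane in the sky: only pitch is enforced.
print(ope({"yaw": 37.0, "pitch": -5.0, "roll": 0.0},
          {"yaw": 0.0, "pitch": 60.0, "roll": 0.0},
          axes=("pitch",)))
# -> {'yaw': 0.0, 'pitch': 65.0, 'roll': 0.0}
```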

Another subtlety: if it is the case that humans keep track of their real world orientation, then by oping you would be accepting that viewers will experience varying offsets between that orientation and their orientation in your VR experience.

For more VR Vocab, check out this video:



02.17.2016


Epikinetic. This one means, more or less, “as a side effect of motion.” Literally it is something more like “upon”, “attached to”, or “beside” motion. Let me illustrate by example:

  1. Superhot is a game where time only moves when you move. That’s epikinetic time.
  2. TotalCinema360 has a demo where the soundscape you’re surrounded by changes depending on where you turn. The visual space is divided into three. When you face the rock concert, all you hear is rock concert all around you, even though when you turn around you’ll see a nature vista, and nature will then be the only thing you’ll hear in every direction. That’s epikinetic sound. (This was the inspiration for my dynamically spatial sound sculpture.)
  3. And in Sightline: The Chair, anything can change whenever you’re not looking. At one point you’re trapped in a room with no doors or windows, and the more frantically you look around, the more rapidly the walls close in on you, without you ever seeing them move. That’s epikinetic action.

So epikinesis is everything that happens as part of movement, in excess of the motion itself.
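
A minimal sketch of epikinetic time, Superhot-style, where the world clock advances in proportion to how much the player moved each frame (the class name, sensitivity, and floor values are arbitrary placeholders of mine):

```python
class EpikineticClock:
    """Time advances only in proportion to how much the player moved
    this frame. A small floor keeps the world from freezing entirely
    when the player stands still."""
    def __init__(self, sensitivity=1.0, floor=0.05):
        self.sensitivity = sensitivity
        self.floor = floor
        self.world_time = 0.0

    def tick(self, real_dt, movement_speed):
        # Scale the real elapsed time by movement, clamped to [floor, 1].
        scale = max(self.floor, min(1.0, movement_speed * self.sensitivity))
        self.world_time += real_dt * scale
        return self.world_time

clock = EpikineticClock()
clock.tick(1.0, 0.0)   # standing still: the world crawls at the floor rate
clock.tick(1.0, 2.0)   # moving fast: the world runs at full speed
print(clock.world_time)  # -> 1.05
```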

Epikinesis is not native to VR. Epikinetic time has been done at least as early as 2008’s Braid, there in a 2D platformer.

But epikinesis is amazing in VR. Particularly in first-person POV style VR with body-tracking, it can be mind-blowing.

Think about it. Of course your body’s movement gets replicated exactly, translated into movement of your virtual body in the virtual environment; that much is just what’s necessary to simulate spatiality and physics, to make you think you’re in a reality that works like the one you grew up in.

But when your body’s movement also causes effects beyond this, you feel viscerally integrated into limitless magic. I don’t really know how to put it in a way that captures just how powerful this effect can be.




01.23.2016


A couple weeks ago I finally got a VR development machine up and running again. One of the first experiences I tried was the popular From Ashes. While overall I found it tedious, annoying, and ugly, one moment stood out to me as imaginative. The narrator of the program, TV Head, has an old-timey television set for a head (and is a floating torso with arms, otherwise), and toward the end of the experience at one point he presents to you a separate television at your ever-present virtual desk. The catch is: the image it relays to you is not simply displayed on the surface of the TV; instead, the area inside this also-old-timey (read: not flat-screen) set is more like a hologram presenter. You don’t see things *on* this TV so much as *in* it. When you bob your head around, you get different angles on the three-dimensional objects presented.

This led me to imagine an extension of this idea into its own aesthetic/discipline/medium. Suppose one wasn’t limited just to the area inside the TV set. Suppose the TV screen instead functioned as a rectangular portal into another world, as limitless in space as your own, which you could see through perfectly.

  • Things you saw through it could be quite distant, so bobbing your head around could achieve some significant parallax.
  • And things you saw through it could be quite close to the portal on the other side, just off the edges of the screen, and you could find them by getting right up to the screen on one side and looking through it nearly crossways.
  • Furthermore, you could stitch together sequences of such portals just like a traditional film. One second the portal is right in a character’s face, a close-up, the next it’s a medium shot of their hands picking up a wand, the next it’s several hours later, a wide shot from across the street of them casting a spell on someone.

I propose the name “fenestragraph” (fə ˈnɛ strə ˌgræf, sounds like “finesse” and “paragraph”) for this new medium, because it is essentially writing in windows. 

The value of this medium is that it carries some of the three-dimensionality of immersive cinema, while preserving the ability to edit shots together in a fairly strongly directed way. I am just referring to the well-known struggle we’re going through at this point in history with storytelling and attention-controlling in VR movies. This medium is maybe even more like 3D movies than VR, the difference being that in 3D movies the 3D stuff merely seems to be 3D and comes out of the screen at you, whereas in fenestragraphy (ˌfɛ nə ˈstrɑ grə fi like “photography”, or ˌfɛ nə ˈstreɪ grə fi like “fragrance”) the 3D stuff is truly 3D and exists inside the screen.

I am imagining this screen existing in VR, but you could even achieve this with a normal screen, as long as it was equipped with some head-tracking sensors around it to tell where you were relative to it so it’d know what to display. The only difference in this case would be that since the screen is physical instead of virtual, you wouldn’t be able to stick your head *into* the other world. (Whether or not authors of these types of experiences for VR would allow you to poke your head through the portal [or once through, whether you’d be able to look back through or at things in the other world on the side 180 degrees from the direction it faces] would remain to be seen).
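
The rendering trick behind such a portal is a standard one: an off-axis (asymmetric) perspective frustum whose apex is the tracked head and whose base is the screen rectangle. A simplified sketch, assuming the screen is centered at the origin in the z = 0 plane and the head is tracked in meters in front of it:

```python
def portal_frustum(eye, screen_w, screen_h, near):
    """Minimal off-axis frustum for a fenestragraph. `eye` is the
    tracked head position (x, y, z) with z > 0 in front of the
    screen. Returns (left, right, bottom, top) at the near plane,
    i.e. the kind of values you would hand to something like
    glFrustum."""
    ex, ey, ez = eye
    scale = near / ez  # project the screen edges onto the near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Head centered, 1 m back from a 2 m x 1 m portal: a symmetric frustum.
# Bob the head sideways and the frustum skews, producing the parallax.
print(portal_frustum((0.0, 0.0, 1.0), 2.0, 1.0, 0.1))
# -> (-0.1, 0.1, -0.05, 0.05)
```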

As for sound, I imagine that it would be completely unrelated to the window. You’d hear everything in the world you were looking into, just as in immersive cinema. Only here, things to the sides and behind, while they’d sound like they were coming from where they were, would be unseen to you.




01.23.2016


I propose the term “beheld” to refer to the particular type of video which was originally shown in a head mounted display to immerse a person in virtual reality.

In a sense this type of video could be said to be live, as it was live for at least one person: the person in the HMD. It was generated by a computer system which was tracking that person’s movements in real time, transposing them into a virtual environment it knows everything it needs to know about visually, then translating them instantaneously (thousands of times faster than cable news would make it to your TV set!) into moving picture on the screen inside the VR helmet.

But we can experience such video in other contexts after the original viewing by that person who is responsible for producing it. A viewer of an immersive film might wish to share the way she chooses to experience it, the particular path she leads her eyes along through the duration of its envelopment. Or maybe she simply wants to provide some way for folks without HMDs to experience the virtual world, in the form of a traditional 2D moving picture.

I wrote previously about this topic, suggesting the term watchline for an instance of such a recording. What I was seeking beyond this term “watchline” when I began this post was another term which could capture its essence, something adjectival, something that could be the analog for the existing term “handheld”.

Initially, actually, I joked that watchlines were “headheld” footage, since HMDs are in a sense held by people’s heads. But something felt off about this. Something more is going on in subjective recording than what goes on in handheld camera work, something truer and more intimate. “Headheld” would really be a perfect term for the type of video shot by GoPros strapped to people’s heads, merely approximating what they saw themselves out of their own eyes.

But the difference between subjective and headheld recording is more than that superficial few vertical inches of disconnect. Even though that disconnect can get pretty critical when one moves into face-to-face social interaction – the same sad impossibility of eye contact one experiences when teleconferencing and having to look back and forth between the webcam and at the reproduction of the other person’s eyes on the screen below – what I am striving to capture here is more essential even than that.

What I am striving to capture is the difference between the manner in which a person moves her head when her intention is to record material with the camera strapped to it and the manner in which she moves her head when she knows that what she is recording is exactly what she is experiencing, when it was literally “beheld” by that person. Admittedly the aesthetic is rather similar, but I am here precisely to split these types of hairs!

So there’s a technique in cinematography called the point of view shot, but my understanding is that technically “POV” does not necessarily imply first person POV. That is, it can sometimes refer to the general angle taken by a group of people, or even just a less literal more liberal interpretation of an individual’s point of view such as the over the shoulder shot. In other words, the perfectly good word “subjective” should be reserved for actual first person perspective, while POV can best be used to describe shots which are as close to subjective as possible while still technically remaining objective.

Well alright, “subjective” would be the technical term for beheld, then. As in “the director of the film then throws in a subjective shot”. Yes, this would be in the context of a traditional cinematic experience meant to be watched on a screen outside an HMD (or I suppose if you want to complicate things it could be a screen inside a virtual reality), and its energy signature would be distinct from handheld footage and even from headheld footage (even when not dealing with eye contact). That’s just what I’m saying: there’s something special about this type of recording, something special enough that a director would choose to have a cinematographer shoot it by strapping on an HMD and just looking around as a person such as themselves would if in the world of the film.

To be perfectly clear: this aesthetic is relevant even to films which are not themselves immersive. Just as the creators of Surf’s Up had their camera people enter a virtual reality to get truly handheld effects on characters and events that only existed inside computer graphics, creators with a taste for subjective recording would have their camera people enter virtual reality to get truly human subjective perspectives on the action. If the film was to appear to be live action, it would still need to be based in a computer, that is, photo-realistically rendered, assembled piecemeal out of individually captured real life components, as we don’t yet have the technology for 360 degree camera rigs which record reality intact in three dimensions such that you could walk around inside it. Live action immersive recordings are still “nodal”, one could say: limited to being experienced from the position where the camera was placed.

(In fact, applying subjective recording to an immersive environment could only lead to pain. What I’d imagine this would look like is rather than framing down the direction the producer looked, simply recording the vector of her center of view and dynamically reorienting the experience for viewers of her viewing. But this could only lead to sickness as that viewer felt like they were looking around inside another person’s looking around. It would have to be smoothed out to not be the worst imaginable experience from the standpoint of motion sickness. That said, I’m giving it a shot as we speak.)


stort: subject to object rotation transference

01.16.2016


This is a nifty virtual reality effect I described briefly in my post opture. I have only seen this effect used in a VR experience once: in the Exhibit part of the Google Cardboard app. Sadly, I published my original post in early July 2014, and it appears that Google’s app came out in late June, so I cannot prove that I came up with it independently.

Anyway, here was my original description:

Rather than translate rotational input as different angles out from a fixed point — as is standard for a sense of embodiment — a shot could translate it as different angles in toward a fixed point from a fixed distance. Thus, rotating your head would cause your vantage to swivel around, in inverted motion, as if on the inside of an invisible sphere surrounding an object of interest. This technique should be useful when an optie director wants to provide you in a particular moment with the freedom not to look around a space, but rather the freedom to examine a specific thing however you choose. Experimentation should be done to determine whether this sort of experience feels natural or comfortable.

Another way I’ve realized to describe this which is simpler (and in Google’s demo, given the blank black background, is ambiguous) is as a transfer of rotation from the subject to the object. That is, when I the subject turn my head, instead of my vantage changing, the rotation of my head is transferred onto the object (and inverted) so that I can naturally-magically look at it from different angles without moving.

I’ll call this “stort”ing for short.
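
As a toy illustration of the transfer, treating yaw and pitch independently in degrees (a real implementation would compose and invert quaternions to avoid gimbal problems; the function below is my own sketch):

```python
def stort(head_yaw_deg, head_pitch_deg):
    """Subject-to-object rotation transference: rather than swinging
    the vantage around, transfer the head's rotation onto the object,
    inverted, so that turning your head shows you the object from a
    different angle while the vantage itself stays fixed."""
    return -head_yaw_deg, -head_pitch_deg

# Turn your head 30 degrees left and tilt up 10: the object swivels
# the opposite way, presenting its other side without you moving.
print(stort(30.0, 10.0))  # -> (-30.0, -10.0)
```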


Virtual Reality Editing Technique: Circle Wipe to Point

01.16.2016

One concern of cutting from one vantage to another in immersive cinema is the risk that the viewer is not looking in the ideal direction when the new shot begins.

This could be ameliorated by deploying a classic editing technique: the circle wipe.

Circle wipe into the exact point where you wish the viewer’s attention to land in the shot being cut to.

The circle can begin as a point at the polar opposite of the point it’s headed toward (behind the viewer if they’re already looking where you want).

It will grow until at maximum it is an orthodrome (a great circle) orthogonal to the line connecting these antipodes.

As the circle constricts toward the ultimate point, no matter where the viewer is looking, the shape of the circle’s contour, and especially its motion, will guide their attention toward the intended destination direction, leaving them right where you want them.
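
Concretely, the wipe can be driven by a single angular threshold: a direction on the viewing sphere shows the new shot once its angle from the target direction exceeds the shrinking radius of the old shot’s cap. A sketch with directions as unit 3-vectors (my own formulation, not from any editing package):

```python
import math

def shows_new_shot(view_dir, target_dir, wipe_progress):
    """Circle wipe to point. At progress 0 the new shot is a point at
    the antipode of `target_dir`; at 0.5 the boundary is the great
    circle orthogonal to the antipode-target axis; at 1 the old shot
    has constricted onto `target_dir` and vanished."""
    dot = sum(v * t for v, t in zip(view_dir, target_dir))
    angle_from_target = math.acos(max(-1.0, min(1.0, dot)))
    old_cap_radius = math.pi * (1.0 - wipe_progress)
    return angle_from_target > old_cap_radius

# Halfway through, the hemisphere around the antipode is new shot.
print(shows_new_shot((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 0.5))  # -> True
```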

Artaud VR

12.05.2015


Antonin Artaud was a radical man of the theater who lived in France from 1896 to 1948. He fought for theater as a medium apart from any other, most pressingly that of literature. For Artaud, theater was not merely the live enactment of written stories; theater was its own reality, an art of structuring of time and space, of gestures and signs and sounds (potentially semantic), a fundamentally physical, contemporary, present, shared experience. “There is nothing I loathe and detest more than the idea of spectacle, of performance, i.e. of virtuality, of non-reality,” he wrote, “Theater is an act of space and it’s by pressing on the four points of space that it has a chance of affecting life. It is in the space haunted by theater that things find their shape, and beneath the shapes is the clamor of life.” The other reality that theater created was not elsewhere or elsewhen, not a simulation, not a substrate to suspend your disbelief over, but exactly what it was in the place you went through it in, nothing more or less, whether you were actor or audience.

We are not trying to find an equivalent of the written language in the visual language, which is simply a bad interpretation of it. We are trying to bring out the very essence of the language and transport the action to a level where every interpretation would become useless and where this action would act almost intuitively on the brain. …We cannot continue to prostitute the idea of theater whose only value lies in its agonizing, magic relationship to reality and danger. …Theater will never recover its own specific powers of action until it has also recovered its own language. …That is, instead of harking back to texts regarded as sacred and definitive, we must first break theater’s subjection to the text and rediscover the idea of a kind of unique language somewhere between gesture and thought.

While Artaud met with little success as a creator of theater during his brief lifetime, his thought has had a lasting influence on the theater and indeed all of the art world.


Artaud was no stranger to the burgeoning cinematic arts of his time. In fact, he appears in one of the most famous films of all time, Dreyer’s The Passion of Joan of Arc. 

I am the first person for a very long time to have attempted to give speech not just to men but to beings, beings each of whom is the incarnation of great forces, while still retaining just enough human quality to make them plausible from a psychological point of view. As though in a dream, we witness these beings roaring, spinning around, flaunting their instincts or their vices, passing like great storms in which a sort of majestic fate vibrates. I have tried to produce this visual cinema where psychology itself is devoured by the action… Not that the cinema must be devoid of all human psychology, on the contrary; but it should give this psychology a far more live and active form, free from the connections which try to make our motives appear idiotic instead of flaunting them in their original and profound barbarity.

I was introduced to Artaud through the work of the film director Alejandro Jodorowsky, who was first a theater director, and some of whose work seems a tribute to this single thought alone.

But Artaud left film early. Partly this was to focus more closely on theater, his truer calling. And partly it was due to the advent of sync sound. Like many others, Artaud saw the ‘Talkie’ as a threat to the purity, beauty, and depth of the moving picture montage. Soon, he worried, this technology would enable the cinema to reduce itself to the same sorry state that theater had reduced itself to, that is, subjugation to the tyranny of the sequence of words.

Of course, as the rate of technological advancement has sped up such that the average human experiences multiple watershed moments in her lifetime, the average human tends to be less reactionary about technology with respect to art forms. The average human understands that movies didn’t replace the stage and television didn’t replace the radio. Time has proven that much art exists in the realm of movie dialog, and that this art is meaningfully independent from its Venn parents of title cards and live recital. The art is distributed away from a single actor in the latter case to a composite art of acting with photography and editing (mise en scene present in either case). And time has also proven that the experimental side of moving pictures in and of their medium specificity still exists for those who wish to find it.

Pure moving picture art is much like an extension of what Artaud’s vision for the theater was: a language of space. The cinematic apparatus takes the gestures of theater and expands their grammar with new powers:

  1. The action can be perceived from any angle or distance, not just from a seat in an auditorium.
  2. The vantage which is chosen can move. The camera can rotate, dolly, zoom, tilt, anything.
  3. The vantage which is chosen can suddenly change, across a film join, from footage from camera A to footage from camera B.
  4. A film join can transport the viewer to a completely different place.
  5. A film join can transport the viewer to a completely different time.

The legendary filmmaker Andrei Tarkovsky writes of “sculpting in time”, spatio-temporal poetry, gestures sutured out of shots. As compared to theater, the critical element of co-present physicality is lost: one no longer shares the same three dimensional space with the action, and no longer feels the energy of the action playing out in the moment, unique and novel. But as far as being a tool for composing representations of inner emotional state, film has further advanced powers. Artaud wrote:

The problem is to turn theater into a function in the proper sense of the word, something as exactly localized as the circulation of our blood through our veins, or the apparently chaotic evolution of dream images in the mind, by an effective mix, truly enslaving our attention. Theater will never be itself again, that is to say will never be able to form truly illusive means, unless it provides the audience with truthful distillations of dreams where its taste for crime, its erotic obsessions, its savageness, its fantasies, its utopian sense of life and objects, even its cannibalism, do not gush out on an illusory make-believe, but on an inner level.

One could imagine that had he lived long enough to see the film arts develop further, he might have embraced its expressive powers to accomplish his goals.


For a time, Artaud became obsessed with the idea of inverting theater in the round: surrounding the audience with action.

…with the audience seated below, in the middle, on swiveling chairs allowing them to follow the show taking place around them. In effect, the lack of a stage in the normal sense of the word will permit the action to extend itself to the four corners of the auditorium. …the action can extend in all directions at all perspective levels of height and depth. …And in order to affect every facet of the spectator’s sensibility, we advocate a revolving show, which instead of making stage and auditorium into two closed worlds without any possible communication between them, will extend its visual and oral outbursts over the whole mass of spectators.

Artaud occasionally refers to theater as creating a “virtual reality”, though I believe this to be only superficially related to the modern technology. I also believe that this fascination with surrounding action would not have been the draw for him to the new art of immersive storytelling.

Virtual reality can take the expanded grammar of moving picture montage and restore to it the potency of the sense of shared physical space. In true light field recordings, at least, the space of the original action is faithfully captured and translated to the distributed and reproduced experience. One can also imagine that live VR theater would be much more powerful than live film recording of theater would be.

For Artaud, VR would let everyone and anyone be inside the action, dancing with the actors, up in their faces, inside their faces, nowhere and right there. To drive home the power of the language of the human body and any conceivable objects and sounds it gathers.

Let’s make some art for the immersive media that Artaud would be proud of.

The Countergaze: Your Watch Being Watched

12.05.2015


With the traditional glowing rectangle mediums, you were comfortable knowing that no one knew where on the screen you were looking. Sure, They might know which frame was being displayed when, but eye-tracking never became popular on laptops in time for the average person to worry about web ads knowing if one had made eye contact or not.

Eye-tracking won’t be standard on first edition consumer versions of virtual reality head mounted displays, but head-tracking is by definition part of the equation with such a screen. Thus in order to experience virtual reality media, one must turn over information about at least generally what one is attending to within the content at any given time. Even on a disconnected device, that information may be recorded and leveraged locally, or transmitted later.

If online social media summoned an age of increasing self-image-consciousness, virtual reality (especially social virtual reality) will summon an age of increasing consciousness-consciousness. We will invite each other to participate in a system whereby we share with each other, at least indirectly, nothing less than the terrifying intimacy of our own gaze, in all its continuous, jittering, embarrassing, incriminating completeness. We will slip deeper into a perpetual policing state where we are less ourselves and more stories desperately trying to weave themselves into a complicit fabric of society going through sanctioned motions.



10.24.2015


A new medium is born in virtual reality: mulsure.

The word mulsure comes from the Latin mulceo, to stroke: to stroke as one would with a pencil while sketching, but also to stroke as one would across the surfaces of a non-flat form while sculpting.

Mulsure is only possible in VR, but it is different from simply sketching in VR or sculpting in VR.

In a sense it is a cross between the two forms. It is volumetric like sculpture, but consisting of loose, open, discontinuous lines like a sketch.

While to sketch is to impressionistically lay down a rough evocation of forms on a 2D surface, to mulse is to do so in three dimensions.

While to sculpt is to mold up or chisel down material into a solid 3D form, to mulse is to form hollowly by tracing the surfaces alone, scrubbing floating invisible objects into being in the air.

The product of mulsion can easily be exported to the physical world using 3D printing. Nothing can be created through mulsing that couldn’t have been created directly in the physical world, as the results could be recreated with twisted wire meshes, tattered paper, charcoal suspended in glass, etc. However, saying this is akin to the statement that monkeys on typewriters would eventually pound out Shakespeare: being able to sweep one’s arm in majestic arcs through space, to scribble and hash and shade in space, to produce forms with no limitations but the speed of thought and freedom of movement, will enable a never before seen style of creativity, generating forms no sculptor limited by physical rules could come up with in a finite period of time.

Oculus Medium is sculpture in VR. TiltBrush is mulsure. Watch Glen Keane mulse.

(One could also use the silly word sckeulptch by smooshing the words sketch and sculpt together.)


Diegetic Frames Per Second

10.18.2015



In this post I will describe three different types of frame rates: video, head tracking, and diegetic. The last of these three types I do not believe has been given a name before. Finally, I will express my excitement over what it will be like to experience all three types together in live action virtual reality.


Aesthetics of Video Frame Rate

In 2012, The Hobbit: An Unexpected Journey became the first feature film with a “high frame rate” to enjoy wide release. Historically, up to that time (and still today), traditional film was viewed at a standard 24 frames per second, but some screenings of The Hobbit boasted a frame rate double that, at 48. To understand why the team behind The Hobbit movie offered this option, we first have to understand why 24 was standard in the first place.

Persistence of vision is the name of the illusion which is fundamental to moving pictures. It refers to the effect whereby the presentation of a succession of related still images can appear to come to life, to animate, to resemble real continuous motion. If one were to view a succession of such images at a rate of only one per second, that is, at 1 fps, the succession would still be too slow to produce the illusion; the mind would register them as individual images. It’s not until one increases the frame rate to 10 fps that the threshold of the illusion is met.

10 fps is fast enough to elicit persistence of vision, sure, but just barely. The motion still looks a tad jerky at that level. As one continues to increase the frame rate, more convincing illusions can be achieved. Generally, the higher the frame rate goes, the more realistic motion appears. This is because the simulation of motion is approaching the limit of our ocular system’s ability to distinguish it from real motion. For most people, this power of discernment maxes out around 100 fps, so recording or replaying faster than 100 is wasted energy.

24, then, is a point on the continuum from 10 to 100 that strikes a lovely balance between realism and magic. It’s not so slow that we get distracted noticing the artifacts of the machinery that connects us with the story world. But neither are we distracted by an insistence on the reality of the real world which the story world was built out of. 24 is a skirting of both apparatus and profilmic, for those theory nerds out there. It makes the magic realistic, while keeping the real magical.

The Hobbit team decided to experiment with this balance. Some audience members liked it, feeling they’d brought Middle Earth even further to life. More audience members, it seemed (or perhaps they were just the more vocal group), disliked it. Perhaps it just screwed with their expectations about what a movie should look like. Perhaps they consciously recognized that it made them think more of the (however talented!) actors in their (however convincing!) makeup and costume being on (however amazingly produced!) sets than it did about the substance of the film experience itself.

In any case, this example clearly illustrates the aesthetic effects of different video frame rate choices.


The Race for Head Tracking FPS

Meanwhile, in virtual reality, frame rate was taking its own journey.

To enter a virtual environment one must wear a head mounted display, or HMD. This display, like a traditional movie theater screen, also uses persistence of vision to simulate motion, and the same 10-100 fps constraints apply to that illusion here. However, there is a second consideration which exists for HMDs – head tracking – and its range of appropriate frame rates is completely different.

HMDs are designed to keep track of the position and orientation of your head. This is so they can update the display to render the virtual world from the proper vantage. This repeated updating needs to occur fast enough that you do not notice it. Unlike the fps of a film, slower fps's here do not result in aesthetically pleasing effects. In fact, when the fps of head tracking is too slow, it generally results in sickness for the user of the HMD. The human mind is simply not used to experiencing a delay between inputs from the senses. If an HMD's head tracking were set to 24 fps, then the muscles in the user's neck would twist while her eyes kept seeing the world from the angle of 1/24th of a second ago – and with that mismatch repeating 24 times a second, things would get a little vomit-inducing.
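The arithmetic behind that 1/24th of a second can be sketched in a few lines. This is just an illustration of the worst-case lag implied by a given tracking rate (the function name is mine, not from any VR SDK):

```python
# Rough sketch: the worst-case delay implied by a given head-tracking rate.
# At 24 fps the view can lag the head by up to ~42 ms; at 100 fps, only 10 ms.
def max_tracking_lag_ms(fps: float) -> float:
    """Worst-case delay (ms) between a head movement and the display update."""
    return 1000.0 / fps

for fps in (24, 60, 75, 100):
    print(f"{fps:>3} fps -> up to {max_tracking_lag_ms(fps):.1f} ms of lag")
```

Real motion-to-photon latency also includes render and display time, so these figures are a floor, not the whole story.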

I doubt anyone ever thought 24 fps was a great setting for an HMD. I can't imagine there was ever much contention about the ideal head tracking fps being "as fast as possible, or at least beyond the threshold of human perception". In reality, though, we simply didn't have the technology to head track and render at 100 fps at first. This is one of the key reasons virtual reality was impractical for the masses for so long. When consumer headsets began to appear, head tracking fps offerings were still only around 60 or 75. It is safe to say that in the near future, however, head tracking fps will be a non-issue, and it will be simply assumed that any HMD is sufficient in that respect.


Poly FPS

Head tracking fps exists even for completely motionless virtual environments. Whether these environments are computer generated or captured from real life, if the content of the display is still, the tracked motion of your head still creates differences in the display's contents from render frame to render frame.

Virtual environments are often in motion, though, and when they are, the potential exists for multiple different frame rates: one for the head tracking, and one for the environment. For simplicity’s sake we will think of changes in the environment as scripted, so we can consider this frame rate analogous to video.  Let’s consider an immersive cinematic experience, shot with a 360 degree camera rig.

Maybe this camera used the same fps as the HMD, that is, it shot at 100 fps. This is well beyond what is used for normal film and video (24 or 30), and well beyond even what they used on The Hobbit. This might have been done in order to make what it recorded feel as real as possible. The feeling of presence in a virtual environment, or the loss of awareness of mediation, is, after all, a common goal of virtual content creators.

But maybe the camera did shoot at a traditional speed like 24. Why not? A lower fps on the video should not induce sickness. Suppose the HMD is at 96 fps while the video plays at 24, and you're moving your head to look at different sections of the action all around you: you'll be seeing four slightly different croppings for each frame of video. The important aspect of the fundamental phenomenological continuity is still there; your brain is just tricked into feeling surrounded by objects whose motions occur at a stylistically lower fps.
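That "four croppings per frame" figure falls out of simple division, which a tiny sketch can make explicit (the function name is mine, invented for illustration):

```python
# Sketch of the mixed-rate situation: an HMD re-rendering the view at 96 fps
# while the underlying spherical video only advances at 24 fps.
def croppings_per_video_frame(hmd_fps: int, video_fps: int) -> int:
    """How many freshly head-tracked views you see of each single video frame."""
    # Assumes hmd_fps is an integer multiple of video_fps, as in the 96/24 case.
    assert hmd_fps % video_fps == 0
    return hmd_fps // video_fps

print(croppings_per_video_frame(96, 24))  # -> 4
```

Headset makers tend to pick refresh rates with exactly this kind of clean multiple in mind, so that each video frame is held for a whole number of display refreshes.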


Immersion in Film

Whether shot at 100 or 24, most immersive cinema isn't quite immersive yet. In fact, many experts do not even consider it true virtual reality. The reason for this distinction is that it is not yet possible to simply switch on a 360 camera and record three-dimensional action over time in a real life environment. It is possible to simulate the effect using a number of different tools and a ton of computer time, but it is not practical. For the next couple years, at least, most immersive video is more akin to watching a movie displayed on a spherical shell all around you, but the actors are all still basically flat, glued to this virtual surface.

Some productions will at least use more advanced technology to include stereoscopic effects: localized sweet spots where the images displayed to each eye differ, simulating depth. But parallax – the ability to see around objects, and to have them move past each other at different rates depending on their distance from you as you bob your head from side to side – is not yet possible in photorealistic moving experiences, unless they are cobbled together from separate still captures of environments and studio-captured performances of actors, with huge numbers of cameras looking in on them one at a time rather than out at everything at once.

And of course there is straight CG animation, where 3d immersion presents no technical obstacles at all. It would be no problem to generate an animation experienced at 100 head tracking fps but playing at 24 fps, so that one could know what it is like to feel perfectly present in a world which itself operates at a stylistically lower frame rate, with 3d frames of existence.

But I lament the lack of real-life-ness here. For that we have stop motion. Coming out of USC-ICT: 3d-captured stop-motion animation with a stylistically low frame rate. Real-life objects you can look around at from any angle as if they were really there, yet seeming to move with otherworldly quantization. The motion carries that distinct signature of stop motion, though – not the true smooth motion that things with muscles or gears would have made.

So for me, I am super excited about the near future when photorealistic immersive recordings are commonplace, where I can become immersed not merely inside a recording of another world, but inside a film – with all the magic of its irreality. Transposing the magical quality that low frame rate lends to film into the third dimension. Perhaps as accoutrements we could have film grain applied atmospherically in space, or spherical cigarette burns before cuts, or other such artifacts.


Diegetic FPS

Experiencing a realistic world in another fps would be enough, but a realistic world with multiple fps’s would be even more amazing. Here’s where we finally get to the new term: diegetic fps.

Watching anime, one may from time to time notice that different stretches of animation seem to play at different frame rates. Objects in the foreground may move 12 times a second, while those in the background only 8 or even 6. Thus, two types of fps exist just within this traditional 2d medium: the video fps, and that of individual objects inside the world it portrays. Though clearly the reasons for the choice of frame rate are budgetary first and only secondarily intentional (in terms of their expressing the story world), I would suggest calling the latter type of fps "diegetic".

For a three-dimensional example, consider the playable character Mr. Game & Watch of Super Smash Bros. Unlike the other characters he is modeled after more primitive video game technology, so he animates at a stylistically lower frame rate (below the threshold of persistence of vision). He can still move at a normal overall rate, and he doesn't lag in updating when the camera moves around the screen's action – it is just he himself who marches to the beat of a slower drummer.
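The Mr. Game & Watch trick can be sketched in a few lines. This is a minimal illustration (the names and rates here are my assumptions, not from any real engine): the renderer runs at full rate, but one object only advances its pose on its own slower frame grid.

```python
# A sketch of "diegetic fps": render at full rate, but let one object
# hold each of its poses for several render frames, Game & Watch-style.
RENDER_FPS = 96   # assumed full head-tracked render rate
OBJECT_FPS = 12   # the object's stylistic, diegetic rate

def object_frame(render_frame: int) -> int:
    """Which of the object's own frames is showing at a given render frame."""
    # Integer math: the object holds each pose for RENDER_FPS // OBJECT_FPS
    # render frames (8 here), then snaps to the next one.
    return render_frame // (RENDER_FPS // OBJECT_FPS)

for rf in range(0, 17, 4):
    print(f"render frame {rf:2d} -> object pose {object_frame(rf)}")
```

The camera (and every other character) updates 96 times a second; only this one object's pose is quantized, which is exactly why he reads as retro rather than laggy.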

To me, the biggest trip would be experiencing three different frame rates simultaneously, in virtual reality:

  1. Maxed-out head tracking fps,
  2. 24 fps video,
  3. and diegetic fps's of co-existing actors performing at various rates up to 24 (anything faster would be pointless; hard cutoff at 24).

The objects are real, the motion is real, presence is real, but the quantization of the motion is stylistically slow and variable. This is the dream.
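The three-layer scheme above can be sketched as a per-actor quantization capped at the video rate. This is purely illustrative; the function and the sample rates are mine, not from any existing system:

```python
# Sketch of layered frame rates: head tracking runs flat out, video runs at
# 24 fps, and each actor is quantized to their own diegetic rate, hard-capped
# at the video's 24 fps (anything faster would be pointless).
VIDEO_FPS = 24

def actor_pose_time(t: float, actor_fps: float) -> float:
    """The time at which an actor's currently visible pose was sampled."""
    fps = min(actor_fps, VIDEO_FPS)   # cap at the video frame rate
    return int(t * fps) / fps         # snap t down to the actor's frame grid

# Two actors seen at t = 0.41 s: one at a choppy 6 fps, one "fast" actor
# whose nominal 60 fps gets capped to 24.
print(actor_pose_time(0.41, 6), actor_pose_time(0.41, 60))
```

Head tracking sits outside this function entirely: it would call `actor_pose_time` fresh on every render frame, so presence stays continuous while each actor's motion stays quantized.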

For more VR Vocab, check out this video:
