The Countergaze: Your Watch Being Watched

12.05.2015


With the traditional glowing rectangle mediums, you were comfortable knowing that no one knew where on the screen you were looking. Sure, They might know which frame was being displayed when, but eye-tracking never became popular on laptops in time for the average person to worry about web ads knowing if one had made eye contact or not.

Eye-tracking won't be standard on first-generation consumer virtual reality head mounted displays, but head-tracking is by definition part of the equation with such a screen. Thus in order to experience virtual reality media, one must turn over information about at least generally what one is attending to within the content at any given time. Even on a disconnected device, this information may be recorded and leveraged locally, or transmitted later.

If online social media summoned an age of increasing self-image-consciousness, (especially social) virtual reality will summon an age of increasing consciousness-consciousness. We will invite each other to participate in a system whereby we share with each other, at least indirectly, nothing less than the terrifying intimacy of our own gaze, in all its continuous, jittering, embarrassing, incriminating completeness. We will slip deeper into a perpetual policing state where we are less ourselves and more stories desperately trying to weave themselves into a complicit fabric of society going through sanctioned motions.

For more VR Vocab, check out this video:


mulsure

10.24.2015


A new medium is born in virtual reality: mulsure.

The word mulsure comes from the Latin mulceo, to stroke: to stroke as one would with a pencil while sketching, but also to stroke as one would across the surfaces of a non-flat form while sculpting.

Mulsure is only possible in VR, but it is different from simply sketching in VR or sculpting in VR.

In a sense it is a cross between the two forms. It is volumetric like sculpture, but consisting of loose, open, discontinuous lines like a sketch.

While to sketch is to impressionistically lay down a rough evocation of forms on a 2D surface, to mulse is to do the same in three dimensions.

While to sculpt is to mold up or chisel down material into a solid 3D form, to mulse is to form hollowly by tracing the surfaces alone, scrubbing floating invisible objects into being in the air.

The product of mulsion can easily be exported to the physical world using 3D printing. Nothing can be created through mulsing that couldn't have been created directly in the physical world, as the results could be recreated with twisted wire meshes, tattered paper, charcoal suspended in glass, etc. However, saying this is akin to saying that monkeys on typewriters would eventually pound out Shakespeare: being able to sweep one's arm in majestic arcs through space, to scribble and hash and shade in space, to produce forms with no limitations on the speed of thought or freedom of movement, will enable a never-before-seen style of creativity, generating forms no sculptor limited by physical rules could come up with in a finite period of time.

Oculus Medium is sculpture in VR. TiltBrush is mulsure. Watch Glen Keane mulse.

(One could also use the silly word sckeulptch by smooshing the words sketch and sculpt together.)

For more VR Vocab, check out this video:

Diegetic Frames Per Second

10.18.2015


 

In this post I will describe three different types of frame rates: video, head tracking, and diegetic. The last of these three types has not, I believe, been given a name before. Finally, I will express my excitement over what it will be like to experience all three types together in a live action virtual reality experience.

 

Aesthetics of Video Frame Rate

In 2012, The Hobbit: An Unexpected Journey became the first feature film with a "high frame rate" to enjoy wide release. Historically up to that time (and still today) traditional film was viewed at a standard 24 frames per second, but some screenings of The Hobbit boasted a frame rate double that, at 48. To understand why the team behind The Hobbit movie offered this option, we first have to understand why 24 was standard in the first place.

Persistence of vision is the name of the illusion which is fundamental to moving pictures. It refers to the effect whereby the presentation of a succession of related still images can appear to come to life, to animate, to resemble real continuous motion. If one were to view a succession of such images at a rate of only one per second, that is, at 1 fps, the succession would still be too slow to produce the illusion; the mind would register them as individual images. It's not until one increases the frame rate to 10 fps that the threshold of the illusion is met.

10 fps is fast enough to elicit persistence of vision, sure, but just barely. The motion still looks a tad jerky at that level. As one continues to increase the frame rate, more convincing illusions can be achieved. Generally, the higher the frame rate goes, the more realistic motion appears. This is because the simulation of motion is approaching the limit of our ocular system’s ability to distinguish it from real motion. For most people, this power of discernment maxes out around 100 fps, so recording or replaying faster than 100 is wasted energy.

24, then, is a point on the continuum from 10 to 100 that strikes a lovely balance between realism and magic. It’s not so slow that we get distracted noticing the artifacts of the machinery that connects us with the story world. But neither are we distracted by an insistence on the reality of the real world which the story world was built out of. 24 is a skirting of both apparatus and profilmic, for those theory nerds out there. It makes the magic realistic, while keeping the real magical.

The Hobbit team decided to experiment with this balance. Some audience members liked it, feeling it brought Middle Earth even further to life. More audience members, it seemed (or perhaps they were just the more vocal group), disliked it. Perhaps it just screwed with their expectations about what a movie should look like. Perhaps they consciously recognized that it made them think more of the (however talented!) actors in their (however convincing!) makeup and costume on (however amazingly produced!) sets than about the substance of the film experience itself.

In any case, this example clearly illustrates the aesthetic effects of different video frame rate choices.

 

The Race for Head Tracking FPS

Meanwhile, in virtual reality, frame rate was taking its own journey.

To enter a virtual environment one must wear a head mounted display, or HMD. This display, like a traditional movie theater screen, also uses persistence of vision to simulate motion, and the same 10-100 fps constraints apply to that illusion here. However, there is a second consideration which exists for HMDs – head tracking – and its range of appropriate frame rates is completely different.

HMDs are designed to keep track of the position and orientation of your head. This is so they can update the display to render the virtual world from the proper vantage. This repeated updating needs to occur fast enough that you do not notice it. Unlike the fps of a film, a slower head tracking fps does not produce aesthetically interesting effects. In fact, when the fps of head tracking is too slow, it generally makes the user of the HMD sick. The human mind is simply not used to experiencing a delay between inputs from the senses. If we were to set an HMD's head tracking to 24 fps, then the muscles in the user's neck might have twisted her head 1/24th of a second ago while her eyes were still seeing from the old angle; with that mismatch recurring 24 times a second, things would get a little vomit-inducing.

I doubt anyone ever thought 24 fps was a great setting for an HMD. I can't imagine there was ever much contention about the ideal head tracking fps: as fast as possible, or at least anything beyond the threshold of human perception. In reality, though, we didn't at first have the technology to head track and render at 100 fps. This is one of the key reasons virtual reality was impractical for the masses for so long. When consumer headsets began to appear, head tracking fps offerings were still only around 60 or 75. It is safe to say that in the near future, however, head tracking fps will be a non-issue, and it will simply be assumed that any HMD is sufficient in that respect.

 

Poly FPS

Head tracking fps exists even for completely motionless virtual environments. Whether these environments are computer generated or captured from real life, if the content of the display is still, your motion as the viewer being tracked still creates differences in the display’s contents from render frame to render frame.

Virtual environments are often in motion, though, and when they are, the potential exists for multiple different frame rates: one for the head tracking, and one for the environment. For simplicity’s sake we will think of changes in the environment as scripted, so we can consider this frame rate analogous to video.  Let’s consider an immersive cinematic experience, shot with a 360 degree camera rig.

Maybe this camera used the same fps as the HMD, that is, it shot at 100 fps. This is well beyond what is used for normal film and video (24 and 30, respectively), and well beyond even what they used on The Hobbit. This might have been done in order to make what it recorded feel as real as possible. The feeling of presence in a virtual environment, or the loss of awareness of mediation, is, after all, a common goal among virtual content creators.

But maybe the camera did shoot at a traditional speed like 24. Why not? Lower fps on the video should not induce sickness. Suppose the HMD is at 96 while the video plays at 24, and you're moving your head to look at different sections of the action all around you: you'll be seeing four slightly different croppings of each frame of video. The important fundamental phenomenological continuity is still there; your brain is just tricked into feeling surrounded by objects whose motions occur at a stylistically lower fps.

 

Immersion in Film

Whether shot at 100 or 24, most immersive cinema isn't quite immersive yet. In fact, many experts do not even consider it true virtual reality. The reason for this distinction is that it is not yet possible to simply switch on a 360 camera and record three-dimensional action over time in a real life environment. It is possible to simulate the effect using a number of different tools and a ton of computer time, but it is not practical. For the next couple years, at least, most immersive video will be more akin to watching a movie displayed on a spherical shell all around you, with the actors all still basically flat, glued to this virtual surface.

Some will at least use more advanced technology to include stereoscopic effects: localized sweet spots where the images displayed to each eye are different, simulating depth. But parallax – the ability to see around objects, and to have them move past each other at different rates depending on their distance from you when you bob your head side to side – is not yet possible with photorealistic moving experiences, unless they are cobbled together from separate still captures of environments and studio-captured performances of actors, with huge numbers of cameras looking in on them one at a time rather than out at everything at once.

And of course there is straight CG animation, where 3d immersion presents no technical obstacles at all. It would be no problem to generate an animation experienced at 100 head tracking fps but playing at 24 fps, so that one could know what it is like to feel perfectly present in a world which itself operates at a stylistically lower frame rate, with 3d frames of existence.

But I lament the lack of real-life-ness here. For that we have stop motion. Coming out of USC-ICT: 3d-captured stop-motion animation with a stylistically low frame rate. Real life objects you can look around at from any angle as if they were really there, yet seeming to move with otherworldly quantization. However, the motion has that distinct signature of stop motion – it is not the true smooth motion that things with muscles or gears would make.

So for me, I am super excited about the near future when photorealistic immersive recordings are commonplace, when I can become immersed not merely inside a recording of another world, but inside a film – with all the magic of its irreality. Transposing into the third dimension the magical quality that low frame rate lends to film. Perhaps as accoutrements to this we could have film grain applied atmospherically in space, or spherical cigarette burns before cuts, and other such artifacts.

 

Diegetic FPS

Experiencing a realistic world in another fps would be enough, but a realistic world with multiple fps’s would be even more amazing. Here’s where we finally get to the new term: diegetic fps.

Watching anime, one may from time to time notice that different stretches of animation seem to play at different frame rates. Objects in the foreground may move 12 times a second, while those in the background only 8, or even 6. Thus, two types of fps exist just within this traditional 2d medium: the video fps, and that of individual objects inside the world it portrays. Though clearly the reasons for the choice of frame rate are budgetary first and only secondarily intentional (in terms of expressing the story world), I would suggest calling the latter type of fps "diegetic".

For a three-dimensional example, consider the playable character Mr. Game & Watch from Super Smash Bros. Unlike the other characters he is modeled after more primitive video game technology, so he animates at a stylistically lower frame rate (below the persistence of vision threshold). He can still move at a normal overall rate, and he doesn't lag in updating when the screen's camera moves – it is just he himself who marches to the beat of a slower drummer.
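For the shader-minded, here is a minimal Unity ShaderLab sketch of this principle (the shader name and the bobbing motion are hypothetical, purely illustrative): quantize the clock driving an object's own animation down to a diegetic fps, while the camera keeps re-rendering it at the full head tracking rate.

    Shader "Custom/DiegeticFpsBob"
    {
        Properties
        {
            _DiegeticFps ("Diegetic FPS", Float) = 12
        }
        SubShader
        {
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                float _DiegeticFps;

                struct v2f { float4 pos : SV_POSITION; };

                v2f vert (float4 vertex : POSITION)
                {
                    // Quantize the clock that drives this object's own motion:
                    // it advances only _DiegeticFps times per second, no matter
                    // how fast the HMD renders.
                    float t = floor(_Time.y * _DiegeticFps) / _DiegeticFps;
                    vertex.y += 0.25 * sin(6.2831853 * t); // placeholder bobbing motion

                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, vertex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    return fixed4(1, 1, 1, 1); // flat white; shading isn't the point here
                }
                ENDCG
            }
        }
    }

Head tracking stays smooth because the projection (the mul by UNITY_MATRIX_MVP) still updates every render frame; only the object's own motion marches at the slower, diegetic rate.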

To me, the biggest trip would be experiencing three different frame rates simultaneously, in virtual reality:

  1. Maxed-out head tracking fps,
  2. 24 fps video,
  3. diegetic fps's of co-existing actors performing at various rates up to 24 (anything faster would be pointless, hard cut off at the video's 24)

The objects are real, the motion is real, presence is real, but the quantization of the motion is stylistically slow and variable. This is the dream.

For more VR Vocab, check out this video:

Enter VR interview: Douglas Blumeyer on immersive cinema, etc.

10.11.2015

I was recently interviewed by Cris Miranda for his Enter VR podcast to get my perspective on immersive cinema. You can listen to the full thing (and find more examples of his incomparable breed of interviewing) here:

http://www.stitcher.com/podcast/cris-miranda/entervr/e/rethinking-vr-cinema-ai-consciousness-clones-brain-to-brain-40313488

Since I find my own voice tedious and annoying to listen to, and this is a feature-length film's worth of mostly my voice, I went ahead and typed up the more interesting parts. Hopefully this alternative form of consumption broadens its accessibility. Also, I distilled down and liberally refined the material, and added links as appropriate.

We started off just talking about The Matrix — not immersive cinema, but cinema about VR & AI, and a big part of both of our childhoods. But we end up covering a range of topics beyond the more on-topic material like immersion and interactivity — from ultimate mediums and psychedelics, to individuality vs. collectivism and generative art. Without further ado!

 

CM:

Virtual worlds will be perpetually attached to artificial intelligence for the rest of time. To me it seems like the manifestation of artificial intelligence in our time will first come out through virtual worlds. I don’t think we’ll see androids or robots walking around among us that will be sentient and self aware, you know? I think it will happen later, but first we will see them in VR.

 

DB:

I agree. Virtual Reality is built on technologies that are far ahead of what robotics can achieve in the physical world.

And here's another thought coming at that point from the other direction. I expect some listeners may have heard of the Uncanny Valley before, which is that little gap between Realistic — you believe this is a real person I'm engaging with — and Looks-Kind-Of-Like-A-Person. In between, where it's Very-Close-But-No-Cigar, we're biologically wired to resist (I think this evolved for a reason; an animal such as a human wouldn't want to breed with something that was not correct, that had mutated in a way that made it off enough that the brain registered it as foreign or risky). Now love is infinitely complicated, but feelings like love are pretty easy to elicit and replicate, to get from people, and I have no doubt that people are going to be falling in love with artificial intelligences in the near future. It's probably already happening to some extent. But I think that we'd be sooner to fall in love with virtual reality artificial intelligences than physical robot artificial intelligences. My reason is that when we're in virtual reality our standards are lower, even when we're socializing with virtual entities which we know are backed by a real person. If you've ever done something like Altspace's social experience, you know that even when you throw in things like eye tracking, so you get a very intimate level of eye contact, or hand tracking, because hand gestures are very important to bodily communication — even with all of that in the mix, there's still something off about it. So if the playing field is leveled such that even humans are kind of off, then I think it'd be easier for an AI to catch up and be indistinguishable. And as the role of VR increases in our lives and we spend more and more of our time in virtual environments, our relationships will on average include AI more. It won't seem as strange.

 

CM:

Is falling in love with an AI considered cheating?

 

DB:

I don't think it makes sense to make objective judgments. It varies from relationship to relationship. It's going to add a new line couples may have to draw. With virtual reality porn, couples will have to decide where the line is — what's a real sexual experience and what's not.

 

CM:

That line is so blurry, and also that line is moving as technology progresses. What if the AI … has a virtual representation of my girlfriend. Is it considered cheating if ‘The AI looks like you, honey!’

 

DB:

Simulating doing things with a sexual partner they wouldn’t normally do, is that what you’re suggesting? That’s a whole other line. Well, probably a really similar line when you think about it. But this isn’t just a sexual thing… maybe you’d want to simulate having other types of experiences with people you know that you otherwise wouldn’t have.

 

CM:

And what if in virtual reality everything you say and do gets tracked, logged, stored away for algorithms to learn from you and put out an AI representation or mirror reflection of you in VR. This thing would have autonomy in the metaverse. ‘I’m you in VR.’

 

DB:

There’s going to be intellectual property issues about the self. What does it mean to be me and can you abuse it.

 

CM:

So… what’s in your mind these days?

 

DB:

My particular interest is less about AI and more about storytelling. AI implies a level of interactivity and unpredictability. The intersection I'm most interested in has as little to do with interactivity and game-like structures as possible. My background is in film, and while immersive cinema inherently has some interactive nature (the person experiencing an immersive film can choose to look anywhere they want at any given time), beyond that necessary interactivity I'm more interested in figuring out how we can still tell linear, set-in-stone stories or story-like experiences. What are those going to be like?

 

CM:

What are you finding out the more you think about these things?

 

DB: 

Immersive storytelling is a brand new thing. It's hard to say exactly what direction it's going to go. It's probably going to go in many different directions. It's informed not just by traditional film but also by theater — if only because, unlike in film, you're actually occupying the same three dimensional space as whatever performers and actions are being dramatized — and also by games — if only because of those at-least-subtly interactive elements.

I was looking for a good definition of cinema and there's one on Wikipedia which says "the art of simulating experience," which is really interesting actually; it's not the first thing I'd think of but I think it's astute. I've since noticed people referring to literature this way, too. I think you can find — if you're looking for it — a continuum from text-based storytelling to film-based storytelling toward immersive storytelling, in that they're all striving to simulate human experience. And what that continuum describes is an increase in the comprehensiveness of the phenomenal stuff, as technology enables us to more completely capture sensory channels and convey those directly to people.

I feel like every time we take a technological step forward there's a period of resistance to it and a fear that it's compromising the potential for artistry, you know, that it can't help but be less subtle and speak less purely to the soul, that it can speak only to the body and then just maybe the mind. I don't doubt that this is true to some extent for some experiences… when you have access to convey sensory information more directly it is easier to make interesting but kind-of-dumb, not-very-spiritual experiences. But I think it's also possible to direct that power in a way that can seize the soul and convey even more awesome spiritual truths.

 

CM:

Where is this continuum leading us to, in terms of these mediums of self-expression? An ultimate or final medium?

 

DB:

I think ultimately we all want to feel more connected. I’ve heard the fractured human experience described this way: we are already one consciousness, we’re just not very organized yet. You can glimpse moments of our interconnectedness, our collective consciousness, but it’s overwhelmingly an experience of being an individual, an experience of isolation and fear and pain. I believe that things could be better if I could be inside everyone I knew more intimately, more deeply. That’s something I aspire toward and that’s something I think that a lot of artists share as a mission: to try to turn themselves inside out in one way or another and plumb the depths of what makes them themselves, indirectly in an effort to find that which is generalizable, and package these explorations and share them in a way that other people can dive into and find out things about themselves, and in turn modeled after their experience in the first artist’s work create their own experiences of this sort. I do believe we’re all artists and I think if you don’t think you’re an artist then that’s fair but you’re probably using a more specific or superficial definition of artist than I am.

So anyway, what would be next after virtual reality would be 'mainlining' (borrowing a term I learned from Alex Voto). To introduce mainlining: when I first started hearing all the modern-day excitement about virtual reality my first thought was 'Oh my god, I want to play Portal in VR.' I want to feel the boundary passing over my body from head to toe — the boundary between two differently-oriented volumes connected by a portal — and I want to feel this immaterial boundary through no sensation other than sensing the disparity in the direction of gravity between that of the volume I'm coming from and that of the volume I'm going into, to feel the two gravities pulling me in separate directions while I'm still partway in both. Well, we're not going to have that for a while. That's going to require wiring into our brains and tricking our brains at a level past our eyes, ears, and skin. That's what's meant by mainlining. And that technology is completely unrelated to the hardware we're working on nowadays to perfect this tier, the tricking-eyes-ears-and-skin one.

But mainlining is coming eventually, and I think a lot of lessons we take from virtual reality — from immersive cinema, games, and any sort of virtual experience — can be applied to mainlined media.

Mainlining will open up even more stuff. It's called the ultimate or final medium for a reason. This is where we get to the things I really dream about: building virtual drugs. Approaching demiurge level, where you're essentially just a creator god. You can send electrical signals, you can write a script that when applied to a human mind will give them a type of psychological, spiritual experience beyond mere sensory information. They will react to it and populate it with their own memories and preconceptions in a certain way. And it's just like tripping with friends insofar as everyone will come out of it having had a similar experience, one they can consider to have shared, but each person's experience will be different from the next in a way fundamentally different from the difference we think of when you and I watch a movie together and walk away with comparable but different experiences.

 

CM:

What are your predictions for the future of immersive media?

 

DB:

If you force me to predict, I’d say genres will coalesce around sweet spots of interactiveness, such as I describe in my proposed format of opture. (I cover a lot of this same material in my post about opture). And not just on a simple continuum from interactive to non-interactive; there are different kinds of interactivity:

  • Can you actually affect the outcome of the story?
  • Or can you simply affect the way in which events unfold, the montage?
  • Do you experience events chronologically?
  • From which spatial angles do you see things?
  • Are you aware of the capacity in which you are interacting with any of the above, or is it maybe the author’s intention for you specifically to not understand how you’re affecting some of these?

Maybe an example would help. Suppose you’ve got a standard conversation between two characters sitting at a diner.

In a traditional film you'd get a master establishing shot and then start going back and forth between over-the-shoulder shots, maybe with some close-ups mixed in. In an immersive film, though, since every shot or vantage is in a sense already a first person shot — because you are that person looking around — it might be more natural to experience things from the points of view of the two characters. Point of view shots are not unheard of in film and TV; we see them pretty often and aren't surprised when we see them. But they're still striking: we notice them and wonder about their importance. Maybe in VR they would be not as big a deal. They might turn out to be the more natural way to experience the scene.

Now imagine one way you could experience the scene would be always from the perspective of whichever character is talking: you are made to feel like you are the active character. Alternatively the author might have you do the opposite, always experiencing it from the perspective of the character being spoken to. Both of these are fundamentally passive viewing experiences, but each could give you a different level of identification than the kind you get when watching traditional film. You might be made more sensitive to the words spoken. I'm way more critical of the words that come out of my own mouth, so a line from a character in some movie that you might react to like, 'Eh, some people are just like that', in immersive media you might instead react to more like, 'Shit, so this is how people get this way.'

So to finally get to the actual bit about interactivity: in this example, whose perspective you experience it from could depend on your choices. The experience could be programmed so that as long as you held eye contact you'd stay as one person, but if you ever broke eye contact, it would switch you. Or you make eye contact with the waitress and the other character reacts to that.

This all said, I think game-like experiences are going to become increasingly popular and non-interactive cinematic type experiences less and less popular, and I’m okay with that.

 

CM:

What have you been working on yourself?

 

DB:

I’ve had issues with my DK2 recently, but I was able to build a dynamically spatial sound sculpture. I also worked on a mobile game for a bit and an animated film.

 

CM:

Can you tell us a bit about what you learned working on an immersive animated film? It sounds really hard. I have no clue what that would look like.

 

DB:

Storytelling is hard in immersive cinema. Editing is particularly hard. We barely made it past just technically getting control of the camera and making it do what we asked.

When it’s time for an edit, a consideration is reacting appropriately to the direction the user happens to be looking. The moment of the cut needs some flexibility where in traditional film there is none whatsoever.

Continuity editing (a film theory term referring to things most people take for granted: match on glance, match on action — one shot shows a character opening a door, the next is from inside the room they're going into — the way you piece together the spatial reality of the film) — that's just not going to be as practical or as important.

 

CM:

What’s the solution to preserving continuity? What techniques need to be implemented to produce a constant narrative?

 

DB:

We take continuity editing for granted, especially if we’re accustomed to watching modern — and especially Hollywood style — movies. That’s become the accepted way to edit film together for the mainstream, because it’s really easy to watch and understand. But this style of editing had to be researched, developed, and refined over the years, and it was by no means obvious or considered necessary. If you watch earlier films, or more experimental films throughout the years, you can find a vast world of ways to piece shots together that convey connections between segments of moving picture that are more symbolic or thematic or sometimes narrative but in a more abstract way — not as bound to visual, physical spatial continuity.

VR cinema has inherently greater replay value than traditional cinema, because you can only look in one direction at a time. Immersive cinema authors can — if they choose to — create experiences that can't be taken in in their entirety the first time through. Even if you make something that, unlike a game, is always 10 minutes long, always the same 10 minutes of material each time, where you can't even affect the outcome… just take the 360 spherical degrees and divide by your field of view: that's an objective minimum number of viewings just to literally see all of the footage.

There's another endlessly fascinating dimension a potential form of media could go down. The amount of feedback the creators of television shows already get these days between episodes and seasons… the influence is strong enough over episodic content that the audience can steer the direction of the show. This is already happening. But it could even be: it is the same episode over and over, but over time the people who watch it are able to affect it, make it into something new. How is this different from, say, opening a bar? I might have had some idea of what vibe I wanted the place to have, but it's defined over time by who chooses to come to it. Maybe I don't want to predetermine what it means; maybe I just wanted to make a template that people could take collectively and make something out of. There could be a version history of the episode. 'My favorite version of this episode is 3.2'. This is not too dissimilar from what's going on with reboots. The tools are just going to get easier and easier for the populace to use; people don't bat an eye at meme generators. Don't underestimate the power of fans: just look at Bartkira.

For me, the biggest hurdle for immersive cinema is live action capture technology. Until someone gets light field capture down for real, I'm not really interested in going for it. Look at Google Jump. Amazing, but I don't think even they've quite got it. There are still some artifacts, at least from what we can tell from the footage they shared at I/O. I think their technology is perfectly suited to sharing experiences from friend to friend in casual use, but it's not at a cinematic level yet. We've seen perfect-o still light field capture with Otoy's crazy thingamajigger spinning the camera all around, and maybe you could suture that together with whatever their holographic stages are called, piece together an actor's performance, copy-paste them into a captured still environment to simulate live action, such that you can be there and get that parallax. It's just not worth it with only the stereoscopic depth cues, though Felix and Paul's Cirque du Soleil experience is nonetheless impressive. Until light field works, we've got animation.

 

CM:

Will light field technology replace cameras?

 

DB:

You mean will Lytro replace DSLRs? People still listen to the radio. Old forms remain. Niches will be into it forever.

 

CM:

What do you think about big studios? What do you think about their approach to immersive cinema storytelling?

 

DB:

I feel gratitude and suspicion. I’m grateful that some really creative and thoughtful people have got the money to already be exploring the real nuances and be pioneering this stuff. All power to them. It’s great. I’m sure, though, that given their context they’ll be striving toward achieving something which ultimately I’m not that into personally, as more of an experimental wacko type guy. I mean they’re probably going to be going straight for the heart strings, and the horror, and the mindfuck type stuff. I don’t know. I can’t wait until this technology is truly democratized in the way we’re starting to see traditional film be.

 

CM:

I’m more excited about what other humans out there have to share than Disney. Somewhere out there there’s an Alejandro Jodorowsky of VR waiting for the power to unleash their vision.

 

DB:

If the ultimate goal is to bring all the people, all the consciousnesses on the planet closer together, we don’t need more white dudes in capes with hammers jumping around. We need more of this handing out laptops and cameras to people in the third world and seeing what they make. Once they have 360 cameras, too, then literally anyone is anywhere anytime. And that’s the dream.

 

CM:

It’s a dream, certainly.

 

DB:

Yeah, what am I saying. No one has “the” dream.

 

CM:

Are there downsides to this level of connectivity, though? How will the individual, the self, remain comfortable with being itself while at the same time having this flood of input from the masses? How will that balance as technology ramps?

 

DB:

Well, keep in mind the opposite effect, where technology helps us reinforce our beliefs about ourselves more easily. Take something like the Facebook News Feed, where I feel more connected than ever before to everyone around me and to things happening all over the planet. But when you think about it, you're not just getting a flat feed of anything and everything. A corporation is using the most advanced techniques they can to figure out and give you exactly what you want to hear. And that's only going to get stronger and stronger. If that is what you want — to massage the patterns that are already forming in your consciousness — that path will be available to you. We'll have a world of superficially connected but even more separate entities. I'm not saying that's a nightmare either; maybe that's cool and fine. But it's definitely a thing to think about.

 

CM:

Would you want to live a slave to pleasure like that?

 

DB:

Hard to say. I think there’s a lot more possible than that. There’s nothing wrong with euphoria… maybe we should question whether peace is the ultimate goal, or whether preserving strife and risk would be better. You can’t create something without destroying something, if only the absence of the thing you created. Growth is violent.

 

CM:

Can you have peace for a species like ours without the self? Do we have to remove aspects of the self like ambition, greed, and whatever comes out from there? Is that still human, even if we have peace? Should we remove these things?

 

DB:

I would like to think we could have both. A transcendence while preserving what we arose from. I think that greed and conflict and suffering even are interesting things. I don’t know why this word is coming to mind, but ‘zoo’. Almost as if humanity could be put in a zoo of sorts where we’re observing what we used to be for our continued pleasure and inspiration but exist apart from it. Putting it as a “zoo” sounds awful. Zoos are immoral institutions in my opinion… there’s got to be a better word to describe what I’m going for here… but if humans could be allowed to keep reinforcing their own individualities but those individualities were interesting and worthwhile to the peaceful being that they had all combined into, then that sounds like the best of both worlds to me.

I guess what I was going for here was more like a preservation, as when we preserve a species rather than force it to extinction. We can't halt certain partially out of control processes of overwhelming change we kick into motion, but we can do little things to curtail the completeness and permanence of the violent fallout of the result. No one may ever write poetry again in a language saved just short of extinction, but as long as some people still speak it… or as long as events are recorded in history books, at least they weren't forgotten. That's maybe the best we can hope for humanity itself in the face of transcendence.

 

CM:

How could we possibly get there, considering the state of the internet now?

 

DB:

If we can make it past all the major existential crises (such as irrevocably ruining our environment) and solve our critical social issues. We might be too stupid, warlike, and genocidal to make these things happen, sure. There's no need for a scarcity economy anymore, where the richest 1% have 90% of the wealth and everyone else has nothing; we should be able to have robots harvesting all the food we need to keep our human bodies alive so we can all focus on whatever we're most interested in becoming, exploring our own unique snowflake blinding rainbowy crystals of light, and join in some crazy shared VR where we can exchange those things…

 

CM:

It will get to a point where robots do all that. What are humans left to do? We as individuals could figure out what it is that gives us meaning and purpose. Maybe the journey to figure that out is the thing. It's individual; it's up to you. But what we could do as a collective mass is party, party to the ends of the universe, because what else is there to do really once we've achieved all that? This is the future of humanity once we cross the singularity. It'll be us, VR, and AI navigating, exploring, our selves, collectively partying, holding giant Burning Mans all over the world.

 

DB:

A lot of people agree with this.

 

CM:

There’s so much that could go wrong though! I’m holding onto this happy vision of the future.

 

DB:

I probably shouldn't tell you to read Eliezer Yudkowsky's Fun Theory. Is fun in the universe exhaustible? The way he defines fun is novel experiences to be had, problems to be solved, in a more mathematical, logical sense. Is it possible we could figure everything out and get bored? The party could end; the booze could run out.

 

CM:

Goddamnit. We partied from every possible angle we could have.

 

DB:

It’s theoretically possible.

 

CM:

We can’t discount that possibility.

 

DB:

Well, we’ll cross that bridge when we get there.

 

CM:

What’s the best case scenario for you?

 

DB:

That AI makes some really cool art. That it helps us find ourselves.*

Some of my favorite experiences with AI have been neural networks trying to simulate human behavior and writing. Like What Would I Say. You link your social media accounts, it analyzes everything you've ever said, and it generates novel things that you might say, using a Markov chain or something similar, I'd think: considering each word individually, then statistically which words you're most likely to say after each one, and building up phrases from there. I'm drastically oversimplifying, but it's the basic principle behind things like this, and it's hilarious. It's uncanny: sometimes for its Engrishy humor, other times because of how accurate it is. These things are getting better and better, and you can train them, like Google's Deep Dream. People have been working on generative artwork for decades. Generating novel works "by Mozart" that confuse the heck out of experts. That's child's play, though. What if we could teach them the principles that inform great works of art, at a high enough level that they could create things that would inspire and provoke us? I feel like this is full circle to your fears of an AI clone of you, though, maybe.

 

CM:

Let me play the devil's advocate here, because you're saying that art can be formulaic. Isn't there something deeper, something innately human, that comes out of nowhere? But with AI, can't we just tell where it's coming from if we dig deeper into the code?

 

DB:

Ultimately humans are machines too, just really poorly understood ones. I'm not saying we'll understand ourselves completely one day so much as the opposite: we'll raise machines to the level of complexity where their creative impetus will be encrypted beyond our ken as well. Are machines going to have souls? Pretty much the same concern. It's going to be easier for machines to make some sort of inspiring and valuable art than it will be for them to pass the Turing test in conversation. Art's just more abstract and open to interpretation, while the rules governing conversation are so much more specific.

 

CM:

Are you banking on AI and VR?

 

DB:

Subconsciously at least, I think I make decisions that way. I broke my wrist recently, and I haven't been taking as good care of it as I probably would if I believed on some level that I was really going to need it when I'm 80, in the way my great grandparents need theirs. Much to the chagrin of those closest to me; sorry about this.

Perhaps as an artist I'm realizing more and more that I'm not as interested in creating artists. What I mean is: I think there are a lot of technical people out there interested not in creating films and music and games directly, but instead in focusing on the parameters and constraints, the rules that govern an experience, and setting a machine out to come up with different presentations of those inputs. That's creating an artist as art. Or turning it over to a combination of humans and AIs to create those experiences. You could say this is already happening with video games: developers came up with the rules of StarCraft, wrote NPCs that obey those rules, and put them out in the world with a bunch of humans, and together they compose a metagame. And that metagame is something fascinating and beautiful which the developers didn't create but knew they were going to form the substrate for. This is a really similar example to what I said earlier about creating a bar. As AIs come closer and closer to the abilities humans have, these two interests are going to become more and more similar. So to differentiate myself from that approach to introducing art into the world, I'm preparing to focus more on looking inwardly, just at myself. I don't mean in a crazy egocentric way. I'm just more interested in this: when I sit down with a set of intentions and constraints, what do I come up with? Even when it's virtual psychedelics, where the actual individual audience reaction is divergent, the aura of the work is quite solid. And I want to believe that that's going to remain interesting to other people (whether they be human or artificially intelligent themselves) — that non-collective works of art will continue to hold the attention.

 

CM:

What do you have in the works for the future? From your melting pot of ideas? How do you intend on bringing forth your self-expression?

 

DB:

I recently had a Tarot reading, and the dilemma that I posed to the reader was this: I want to have the greatest possible positive impact on the world, and I worry that I'm deluding myself when I say that the best way for me to do that is to focus on cultivating my self-understanding and my powers of articulating my personal experience, and sharing those with the world. Maybe, if what I really want is to help others, that's bullshit: I'm not worth my weight in art, and I should just get out there and end wars and hunger and genocide. Just directly help the people that need it the most, in obvious ways. Because I don't do nearly enough of that.

I read this interesting article coming out of the Singularity Institute about a concept called "sins of the future". This is the idea: we look back very poorly on those who were actively or passively part of the slavery industry in United States history. We condemn those people. Yet for a lot of them, it wasn't a big issue in their lives; it was just the time and place they were born into. It's not like they were necessarily hateful people. Meanwhile, we live in one of the wealthiest countries in the world, and the economy that supports our lifestyle is built upon global slavery. Mass incarceration. Maybe people will look back on us for just being okay with having comfortable lives while millions are in abject misery. They'll think we're awful people.

I feel awful about this, we all feel awful about this. We want to do what we can and are challenging ourselves to find the best ways to do it. But what is the best way? I don’t think the answer is black and white. We all should do more to give directly, but also think about how individually we best can make the wider and more long term and profound impact.

 

CM:

Is this where AI is going to step in and be a benevolent adviser?

 

DB:

I’ve been able to impose accountability on myself using technology, able to externalize pressure to meet personal goals. It’d be a dream if we could do a similar thing with AI on a societal level, externalize accountability as a species.

 

CM:

Or psychedelics! It's not in the realm of impossibility that we'll finally figure out that this is medicine. It could be used for therapy. If we had the research, knowledge, and mentality, we could make amazing breakthroughs. People need to breathe more; the world would be much better off.

 

DB:

What you're tapping into here is a really important continuum, similar to the one we were talking about earlier with the new lines people are going to have to draw in romantic relationships — what's another person, what's infidelity: thought policing. Once we move beyond psychoactive drugs that merely affect us on a physical, chemical level, to mainlining-type technologies like the artistic virtual drugs I was dreaming of creating one day, I hope that our power structures will be able to accommodate the great amounts of freedom these will require in the types of conscious experience you'll be able to have.

 

*I hadn't read Dave Eggers's The Circle yet when I did this interview, but this exact phrase figures disconcertingly in it.

watchline

04.11.2015


The trailer for the VR game Eve: Valkyrie is genius. Someone on staff recorded a few minutes of their experience playing through the game's opening sequence. Watching this trailer, you can feel the distinct energy signature of this person's experience, reacting to the aleatoric elements of this primarily scripted event. It's similar to but different from the feeling you get watching handheld footage, sensing the presence and intent of the cameraperson behind the camera (the un-simulate-able subtleties of which explain the technique of placing real-life surfing documentarians into virtual reality to "film" the surf scenes of Surf's Up). Sometimes first-person POV shots are simulated, or sometimes captured with GoPros strapped to actual people's heads. The critical difference with E:V, and with VR screencasts in general, is that the world is a definitive, limited, and infinitely repeatable experience, but the particular experiences possible to have in it are unlimited: you can't watch the same VR film twice.

And how best to debut a work of immersive cinema to a group of people in a traditional simultaneous collective experience? It seems to me the best method is for the director, or perhaps a separate, specifically talented author, to record the "definitive" or a "suggested" path through the story. This circumvents all of the novel storytelling challenges VR cinema presents (how to get viewers to look where they're "supposed" to, and when): you just do it more-or-less perfectly for them. This is a great method for trailers / previews, of course, but could also become an art form in itself, is what I mean to say. People may still wish to gather in groups without HMDs between each other's bodies, hear each other's real-life laughter unmediated, and experience a story in the traditional giant-2D-rectangle-on-the-wall style. In the multiple-screens, multi-media, intermedia art and entertainment world we're moving into otherwise, this makes even more sense. Such 2D versions may come packaged with the 3D version, complementary to it.

This isn’t like a director’s cut, since it’s not a re-organizing in time of existing elements, or addition or deletion of timed elements. It’s always the same total length of time, the same elements, just a differently chosen path of sight through them. And it’s not like a “take” either, since the action inside the world may not even have any variance.

In addition to the authoritative take on a VR cinema experience, sharing of individual fan experiences may become a thing. Either:

  1. broadcasting widely (one may become famous on VR YouTube for having a really good eye for “shooting” VR movies, for honing them down to the most artful or interesting 2D version of themselves),
  2. just sharing with your friends,
  3. or keeping them for your own personal records (think about it: in traditional movies, you can never relive the first time you watched a movie, but in immersive cinema you can, in some way! While you can't know your past thoughts, you can infer them from what you chose as most important to watch at each moment throughout it.) You might keep track of various watching experiences over time, too. You may have watched the same piece of VR cinema 5 times, and still only beheld something like 42% of the total visuals there to take in.

I propose a word to refer to this 2D projection of a 3D immersive film: the watchline. I considered many other options:

  • playline (meh, too vague)
  • viewline (it’s really more of a viewing line, so this sounds more like a synonym for eyeline which is a mere snapshot in time)
  • experienceline (too vague / related to consciousness, while it should be more about the material recording)
  • screenline (inaccurate)
  • observationline (too long)
  • vantageline (etymology is more about good positioning)
  • frameline (unfortunately this term is already used in film as the boundary between two frames on a film strip)

I chose watchline in part for its resemblance to the physics term "worldline", which describes an object's path through four dimensions, with time as the fourth dimension. This is a very similar idea, except instead of an object lathed out, snaking through time from conception to destruction, we have a framing on the 360 degrees of action lathed out, snaking through the material from its beginning to its ending.

For more VR Vocab, check out this video:

Sphere-out transition

04.11.2015

I'm having a heck of a hard time achieving a special effect in the Unity3D game engine. As far as I understand it, my struggles generalize to most graphics environments; this is simply a non-trivial problem. But I think this graphics effect would be amazing to experience, especially in virtual reality!

I want to transition from one scene to the next using a growing spherical portal — sort of the 3d equivalent of a circle-out wipe transition such as you might see in a movie (here's an example from Star Wars, where a circular window onto Endor opens from the center of the screen outwards, replacing the previous shot of space:

[gif: circle-out wipe from Star Wars]

). But in this case, parts of the next scene inside a sphere, not a circle, should replace the previous scene.

My first impulse was to use the stencil buffer, in its most basic implementation. That is, the portal sphere has ColorMask 0 and ZWrite off, but always passes the stencil test and replaces the buffer with 1 (an arbitrary number, chosen for simplicity); then all the objects in the next scene use materials which are culled unless they pass the stencil test equal to 1.
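In ShaderLab terms, that first attempt looks roughly like this (a sketch of the setup just described, not my verbatim project files; the shader names are made up):

    // The invisible portal sphere: draws no color, occludes nothing,
    // but marks every pixel it covers with stencil value 1.
    Shader "Custom/PortalSphereMask"
    {
        SubShader
        {
            Tags { "Queue" = "Geometry-1" } // render before the next scene's objects
            Pass
            {
                ColorMask 0
                ZWrite Off
                Stencil
                {
                    Ref 1
                    Comp Always
                    Pass Replace
                }
            }
        }
    }

    // Material for objects in the next scene: culled unless the
    // portal sphere marked this pixel of the screen.
    Shader "Custom/NextSceneObject"
    {
        SubShader
        {
            Pass
            {
                Stencil
                {
                    Ref 1
                    Comp Equal
                }
                // ... the object's usual vertex/fragment shading goes here ...
            }
        }
    }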

This basically works except for one problem: some objects in the next scene which are outside the portal sphere are visible anyway – namely, those objects which are between the sphere and the camera! Again, only objects from the next scene that are inside the sphere are supposed to be revealed. Here’s a screenshot:

[screenshot: the house seen through the portal sphere, its closest corner incorrectly visible]

As you can probably tell, the closest corner of the house is outside the sphere, and therefore it shouldn’t be visible yet. As the sphere grows outward from a point, the house should be revealed in a series of spherical slices. I want to see crazy curved cross-sections of the walls and chairs and stairs, etc. as the sphere grows. The way this is working now, it’s ultimately no different than the 2D circle-out effect from the movies: the sphere just cuts a circular hole in the screen through which the next scene is seen.

I believe the fundamental problem is that the stencil buffer functions not with respect to depth in 3d space but with respect to the ultimate pixels on the 2d screen. In other words, the renderer considers the screen, and any pixel where the sphere exists, regardless of depth or occlusion, becomes "stencil 1". Then when it comes time to render the actual physical objects in the scene, they have no idea whether they are in front of, inside of, or behind the portal sphere; if they're stencil-1 objects they just get rendered irrespective of that important factor.

My understanding is that in order to get the stencil to apply in space, I need to turn ZWrite back on in my shader. Otherwise the stencil information the sphere writes has no depth info associated with it. The problem is that when ZWrite is on, objects behind the sphere get occluded. That won't work, because I need to see the objects both inside the sphere and behind it. Even when I set the ColorMask to 0, making the sphere invisible, it still occludes things, essentially cutting a hole in the world (aka a DepthMask, like so).
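To be concrete, that hole-cutting DepthMask behavior is the same mask shader as above with a single line changed (again, just a sketch):

    // Same invisible sphere, but now writing depth: the stencil marks
    // gain depth information, yet the sphere also occludes everything
    // behind it, cutting a sphere-shaped hole in the current scene.
    Shader "Custom/PortalSphereDepthMask"
    {
        SubShader
        {
            Tags { "Queue" = "Geometry-1" }
            Pass
            {
                ColorMask 0
                ZWrite On // the one-line change from the mask above
                Stencil
                {
                    Ref 1
                    Comp Always
                    Pass Replace
                }
            }
        }
    }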

So really what I need to figure out is how to attribute depth information to my stencil buffer entries without actually writing to the depth buffer, or else to go the standard route and write to the depth buffer but somehow override the default behavior of occluding more distant objects. I'm not sure which is better, but either way would probably require some scripting.

For example, if I could use a variable as the Ref of the stencil buffer I could convert the stencil buffer into a sort of custom depth buffer serving my needs (This guy CaptainScience seems to have found custom depth buffer functionality in Unreal, at least). Or I might be able to script the sorting of the objects behind the sphere, using a technique similar to here with texcoord and ComputeScreenPos.

I learned a lot about stencil buffers from exploring the demo project posted here. Unfortunately, as this project's pdf readme admits, this solution is limited and only works in this special case. The stencil mask is drawn before geometry. As soon as you pull an object in front of the glass the effect is ruined, because the part of it outside is still seen; it becomes clear that the portal simply draws on the surface of the screen and defines where on the screen the object can be seen, not where in the game world the object can be seen. (Here's another example of what I mean with "in front" being a problem: in the very last comment, the shot of the boat protruding through the wall.) As expected, the project writer ultimately suggests scripting solutions for sorting in depth if I need to move the portal into rendering simultaneously with the geometry, which I do need.

Depth + Stencil

Now I have done my homework enough to know that the depth buffer and stencil buffer can be worked with together. For example, there’s “Carmack’s reverse” trick for volumetric shadows, where the stencil gets inverted/toggled each time the view crosses the border of a shadow volume. My solution would need up to three states instead of two while you are still outside the growing sphere: the current scene between you and the sphere, the next scene inside the sphere, and the current scene on the far side of the sphere wherever no objects inside the sphere occlude it. (Once you’re inside the sphere, only two states remain: next scene and current scene. Although at that point it may make more sense to start referring to the “next” scene as the current one and the “current” one as the previous; I’m not sure what better time there’d be for making that terminological distinction.) So if I could figure out how to realize my sphere as the same type of object these volumetric shadows get realized as, that might work.
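
To sketch the shadow-volume flavor of this (depth-fail stencil counting, which I believe is the essence of Carmack’s reverse; this is my own reconstruction, not code from any of the linked projects):

    Shader "Custom/VolumeStencilCount" {
        SubShader {
            Tags { "Queue" = "Geometry+10" }   // run after the scene's depth is already in place
            ColorMask 0   // the volume itself is never drawn
            ZWrite Off
            Pass {
                Cull Front   // back faces of the volume
                Stencil { Comp Always ZFail IncrWrap }   // count crossings hidden behind scene depth
            }
            Pass {
                Cull Back    // front faces of the volume
                Stencil { Comp Always ZFail DecrWrap }   // cancel out where the whole volume is behind
            }
        }
    }

Pixels left with a nonzero stencil count as “inside” the volume with respect to the scene geometry, and a later pass could test against that. Whether this counting generalizes to my three-state sphere situation, I don’t know.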

This might have helped. I like the idea of flipping the stencil buffer on when I go in, and back off again when I go out of the sphere, but in practice I can’t get this to work.

And this might have helped if the discussion went anywhere.

The section in the Unity documentation on “Transparent shader with depth writes” seems like it could be useful if I could parse it. It certainly seems possible to see through objects that write to the depth buffer, which is really all I need! It’s just that the docs seem really sparse and vague on all of this (or assume a much greater familiarity with the basics than I have).
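
Here is my best guess at what that section is describing, a depth-only pass followed by a blended color pass (the flat semi-transparent tint is just for the sketch):

    Shader "Custom/TransparentWithDepthWrites" {
        SubShader {
            Tags { "Queue" = "Transparent" }
            Pass {
                ZWrite On
                ColorMask 0   // first pass: write depth only, no color
            }
            Pass {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha
                Color (1, 1, 1, 0.5)   // second pass: draw semi-transparently over our own depth
            }
        }
    }

You can indeed see through the object even though it wrote depth; the catch for my case is that anything farther away and drawn afterward still fails the z-test against it.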

CSG

People have spoken of stencil buffers being used to create CSG in graphics in general, so I’m sure it’s somehow possible in Unity. Erik Rufelt’s post here seems promising, for example, and this guy got something started.

Then I looked into Realtime Constructive Solid Geometry (runtime CSG). It has to be runtime; Boolean Ops won’t work. BooleanRT, GameDraw, MegaFiers… Also, this guy tried a couple more I didn’t find, but said they were all buggy and broke down when attempting more complex geometry (which my scene definitely has).

Fiddling with Z-Testing

A friend suggested inverting the depth test, so that objects behind the rim of the sphere wouldn’t get occluded. The problem then is that there are multiple layers of them receding into the infinite distance, and now they’re ALL inverted in depth and appearing on top of each other in funky ways.
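
The inversion itself is a one-line change in ShaderLab. A sketch of what I take the suggestion to be (the flat red is just to make the effect visible):

    Shader "Custom/InvertedZTest" {
        SubShader {
            Pass {
                ZTest Greater   // draw only where something NEARER has already written depth
                ZWrite Off      // don't pollute the depth buffer with these inverted results
                Color (1, 0, 0, 1)
            }
        }
    }

With a single object behind the sphere this reads as “revealed through the sphere”; with several, each layer shows through the one in front of it, hence the funkiness.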

I attempted a solution where the FIRST object in the next world seen through the sphere ALWAYS passes the z-test and then flips the stencil up to 2, and objects can then pass the stencil test at 2 with default z-behavior. Unfortunately, every object in the next world would need BOTH of these materials for the effect to work (in some situations a given object is the very first one past the sphere’s lip, in others not), and Unity doesn’t support multiple shaders on a single material.
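
A sketch of the two materials (the stencil values follow the description above; the flat colors are placeholders for the objects’ real materials):

    // Material A: the first object past the sphere's lip
    Shader "Custom/NextWorldFirst" {
        SubShader {
            Pass {
                ZTest Always   // always visible where the sphere marked the screen...
                Stencil {
                    Ref 1
                    Comp Equal     // ...i.e. only where stencil is 1
                    Pass IncrSat   // and bump the stencil from 1 up to 2
                }
                Color (0, 1, 0, 1)   // placeholder color
            }
        }
    }

    // Material B: every subsequent object, with default z-behavior
    Shader "Custom/NextWorldRest" {
        SubShader {
            Pass {
                Stencil {
                    Ref 2
                    Comp Equal   // only where material A already bumped the stencil to 2
                }
                Color (0, 0, 1, 1)   // placeholder color
            }
        }
    }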

I played around a bit with the ZFail param of the Stencil buffer. I thought, “if it doesn’t fail the z-test, then it can’t be inside the sphere yet!” But if there were ever more than one object from the next world in front of the sphere, this would only cull the first one.

Other Failed Rendering Strategies

Perhaps it would be possible to achieve this using a single shader with multiple passes and/or replacement shaders. (My understanding is that passes within a SubShader simply run in the order they’re declared, and that the “Queue” tag orders whole objects rather than passes.) But the documentation doesn’t provide any examples of multiple passes that I can find, nor can I find any examples online, so that possibility remains a mystery.

I looked into Layer Masks, but it doesn’t matter how many cameras you use – it still doesn’t solve the problem of cropping out the stuff from the next scene that lies between the portal sphere and the camera.

I looked into Alpha blending but don’t think it’s right for this job.

And render textures seem to be more for projecting views of parts of the world onto flat surfaces, so they wouldn’t really apply here.

I understand that transparent objects use the classical “painter’s algorithm” style of rendering, back to front, which also might come in handy. Even though none of the objects in my scene are actually transparent, I could try labelling them as such and put up with whatever weird artifacts from intersecting or equidistant objects may occur…
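
Labelling them as such would just mean moving them to the transparent render queue (a sketch; the full-alpha tint is my own hack to keep them looking opaque):

    Shader "Custom/OpaqueSortedAsTransparent" {
        SubShader {
            Tags { "Queue" = "Transparent" }   // sorted back-to-front, per object
            Pass {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha
                Color (1, 1, 1, 1)   // alpha of 1: looks opaque, but sorts like glass
            }
        }
    }

Note that the sorting is per object, by distance, not per pixel, which is presumably where the intersection/equal-distance artifacts would come from.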

There may be some way to adapt the work done here.

This guy’s three-part post seemed promising, since the effect that inspired him, from the game Quantum Conundrum, turned out to be exactly what I’m going for!

quantum_cunundrum_sphere_out

Unfortunately, he never actually gets there in his three-part tutorial; it just ends with a depth texture. I guess I’m not surprised that a game designed by one of the folks behind Portal would feature this effect, and knowing that does make it easier to bear how surprisingly daunting it is to achieve.

I just started learning about graphics and rendering this weekend, so I apologize for my n00bness. Any guidance would be deeply appreciated.

Virtual Reality Dynamic Positional Audio Experimental Sound Sculpture

11.30.2014 § 3 Comments

This is a little something I whipped up in one day. You will need the Oculus Rift DK2 to experience it.

Windows | Mac

You will find yourself floating invisibly inside an icosahedron, listening to some strange music. Look at different triangular faces to change the tuning of the instruments. Each of the twenty faces is associated with a different hexany: one for each combination of three of the first six odd numbers greater than one, with one itself always included as the fourth number ([1,3,5,7], [1,3,5,9], [1,3,5,11], [1,3,5,13], [1,3,7,9], … [1,9,11,13]). Every hexany consists of six pitches, and these are mapped to the six instrument timbres, which in turn are mapped to the six cardinal directions (above, below, left, right, before, behind).
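
(If hexanies are new to you, here is the arithmetic, as I understand it: the six pitches of a hexany are the pairwise products of its four numbers, each reduced into a single octave. For [1,3,5,7], that’s 1·3, 1·5, 1·7, 3·5, 3·7, and 5·7, which octave-reduce to 3/2, 5/4, 7/4, 15/8, 21/16, and 35/32.)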

This project was inspired in part by a demo Total Cinema 360 gave at the first VR Cinema meetup. In this demo, you found yourself inside three virtual realities at once (arranged as trienspheres): look ahead to be flying over a volcano, look behind and to the right to be in a living room hanging out with some folks, and look behind and to the left to be on stage at a rock concert. The kicker is that when you’re looking into any one of these three worlds, the sound surrounding you is the sound of that world. In other words, look into the volcano and you can hear the volcano behind you, even though you know that if you look back there you will see a room or a concert and the sound of the volcano will disappear. Thus an element of sight has been lent to sound: you can only hear things along a line extending from the front of your head, rather than the normal state of being able to hear everything around you no matter where you look. This is what I refer to as Dynamic Positional Audio.

tc360demo

There’s a lot left to improve here, and hopefully I’ll be able to get to it soon: more distinct tunings, maybe a continuum between them, maybe sound sources in motion (even in response to your motion), and just making it more beautiful to look at and hear. But hey, I’m a beginner. Ship.

I used Different Methods’s SpatialAudio Unity plugin, after much hunting and some attempts to do it myself using a MaxMSP+Unity integration.
