Enter VR interview: Douglas Blumeyer on immersive cinema, etc.

10.11.2015

I was recently interviewed by Cris Miranda for his Enter VR podcast to get my perspective on immersive cinema. You can listen to the full thing (and find more examples of his incomparable breed of interviewing) here:

http://www.stitcher.com/podcast/cris-miranda/entervr/e/rethinking-vr-cinema-ai-consciousness-clones-brain-to-brain-40313488

Since I find my own voice tedious and annoying to listen to, and this is a feature-length film's worth of mostly my voice, I went ahead and typed up the more interesting parts. Hopefully this alternative form of consumption broadens its accessibility. I also distilled and liberally refined the material, and added links where appropriate.

We started off just talking about The Matrix — not immersive cinema, but cinema about VR & AI, and a big part of both of our childhoods. But we end up covering a range of topics beyond the more on-topic ones like immersion and interactivity — from ultimate mediums and psychedelics to individuality vs. collectivism and generative art. Without further ado!

 

CM:

Virtual worlds will be perpetually attached to artificial intelligence for the rest of time. To me it seems like the manifestation of artificial intelligence in our time will first come out through virtual worlds. I don't think we'll see androids or robots walking around among us that are sentient and self-aware, you know? I think that will happen later, but first we will see them in VR.

 

DB:

I agree. Virtual Reality is built on technologies that are far ahead of what robotics can achieve in the physical world.

And here’s another thought coming at that point from the other direction. I expect some listeners may have heard of the Uncanny Valley before, which is that little gap between Realistic — you believe this is a real person I’m engaging with — and Looks-Kind-Of-Like-A-Person. In between that, where it’s like Very-Close-But-No-Cigar, we’re biologically wired to resist (I think this evolved for a reason; an animal such as a human wouldn’t want to breed with something that was not correct, that had mutated in a way that made it off enough that the brain registered it as foreign or risky). Now love is infinitely complicated, but things like love are pretty easy to elicit and replicate, to get from people, and I have no doubt that people are going to be falling in love with artificial intelligences in the near future. It’s probably already happening to some extent. But I think that we’d be sooner to fall in love with virtual reality artificial intelligences than physical robot artificial intelligences. And my reason for that is that when we’re in virtual reality our standards are lower, even when we’re socializing with virtual entities which we know are backed by a real person. If you’ve ever done something like Altspace‘s social experience, even when you throw in things like eye tracking so you get this very intimate level of eye contact, or if you get hand tracking because hand gestures are very important to bodily communication, even when you throw all of that into the mix there’s still something off about it, and so if the playing field is leveled such that even humans are kind of off then I think it’d be easier for an AI to catch up and be indistinguishable. And as the role of VR increases in our lives and we spend more and more of our time in virtual environments our relationships will on average include AI more. It won’t seem as strange.

 

CM:

Is falling in love with an AI considered cheating?

 

DB:

I don’t think it makes sense to make objective judgments. It depends from relationship to relationship. It’s going to add a new line couples may have to draw. With virtual reality porn, couples will have to decide what’s the line — what’s a real sexual experience and what’s not.

 

CM:

That line is so blurry, and it's moving as technology progresses. What if the AI … has a virtual representation of my girlfriend? Is it considered cheating if "the AI looks like you, honey!"?

 

DB:

Simulating doing things with a sexual partner they wouldn’t normally do, is that what you’re suggesting? That’s a whole other line. Well, probably a really similar line when you think about it. But this isn’t just a sexual thing… maybe you’d want to simulate having other types of experiences with people you know that you otherwise wouldn’t have.

 

CM:

And what if, in virtual reality, everything you say and do gets tracked, logged, and stored away for algorithms to learn from you and put out an AI representation or mirror reflection of you in VR? This thing would have autonomy in the metaverse. "I'm you in VR."

 

DB:

There’s going to be intellectual property issues about the self. What does it mean to be me and can you abuse it.

 

CM:

So… what’s in your mind these days?

 

DB:

My particular interest is less about AI and more about storytelling. AI implies a level of interactivity and unpredictability. The intersection I'm most interested in has as little to do with interactivity and game-like structures as possible. My background is in film, and while immersive cinema inherently has some interactive nature, because the person experiencing an immersive film can choose to look anywhere they want at any given time, beyond that necessary interactivity I'm more interested in figuring out how we can still tell linear, set-in-stone stories or story-like experiences. What are those going to be like?

 

CM:

What are you finding out the more you think about these things?

 

DB: 

Immersive storytelling is a brand new thing. It's hard to say exactly what direction it's going to go; it's probably going to go in many different directions. It's informed not just by traditional film but also by theater — if only because, unlike in film, you actually occupy the same three-dimensional space as whatever performers and actions are being dramatized — and also by games — if only because of those at-least-subtly interactive elements.

I was looking for a good definition of cinema and there's one on Wikipedia which says "the art of simulating experience," which is really interesting actually; it's not the first thing I'd think of, but I think it's astute. I've since noticed people referring to literature this way, too. I think you can find — if you're looking for it — a continuum from text-based storytelling to film-based storytelling toward immersive storytelling, in that they're all striving to simulate human experience. And what that continuum describes is an increase in the comprehensiveness of the phenomenal stuff, as technology enables us to more completely capture sensory channels and convey them directly to people.

I feel like every time technology takes a step forward there's a period of resistance to it and a fear that it's compromising the potential for artistry, you know, that it can't help but be less subtle and speak less purely to the soul, that it can speak only to the body and then just maybe the mind. I don't doubt that this is true to some extent for some experiences… when you have access to convey sensory information more directly, it is easier to make interesting but kind-of-dumb, not-very-spiritual experiences. But I do think it's also possible to direct that power in a way that seizes the soul and conveys even more awesome spiritual truths.

 

CM:

Where is this continuum leading us, in terms of these mediums of self-expression? To an ultimate or final medium?

 

DB:

I think ultimately we all want to feel more connected. I've heard the fractured human experience described this way: we are already one consciousness, we're just not very organized yet. You can glimpse moments of our interconnectedness, our collective consciousness, but it's overwhelmingly an experience of being an individual, an experience of isolation and fear and pain. I believe that things could be better if I could be inside everyone I knew more intimately, more deeply. That's something I aspire toward, and it's something I think a lot of artists share as a mission: to turn themselves inside out in one way or another and plumb the depths of what makes them themselves, indirectly in an effort to find that which is generalizable, and to package these explorations and share them in a way that other people can dive into and find out things about themselves, and in turn, modeled after their experience of the first artist's work, create their own experiences of this sort. I do believe we're all artists, and if you don't think you're an artist, that's fair, but you're probably using a more specific or superficial definition of artist than I am.

So anyway, what would be next after virtual reality would be "mainlining" (borrowing a term I learned from Alex Voto). To introduce mainlining: when I first started hearing all the modern-day excitement about virtual reality, my first thought was "Oh my god, I want to play Portal in VR." I want to feel the boundary passing over my body from head to toe — the boundary between two differently-oriented volumes connected by a portal — and I want to feel this immaterial boundary through no sensation other than sensing the disparity between the direction of gravity in the volume I'm coming from and that in the volume I'm going into, feeling the two gravities pulling me in separate directions while I'm still partway in both. Well, we're not going to have that for a while. That's going to require wiring into our brains and tricking them at a level past our eyes, ears, and skin. That's what's meant by mainlining. And that technology is completely unrelated to the hardware we're working on nowadays to perfect this tier, the eyes-ears-and-skin-tricking one.

But mainlining is coming eventually, and I think a lot of lessons we take from virtual reality — from immersive cinema, games, and any sort of virtual experience — can be applied to mainlined media.

Mainlining will open up even more stuff. It's called the ultimate or final medium for a reason. This is where we get to the things I really dream about: building virtual drugs. Approaching the demiurge level, where you're essentially a creator god. You can send electrical signals; you can write a script that, when applied to a human mind, will give them a type of psychological, spiritual experience beyond mere sensory information. They will react to it and populate it with their own memories and preconceptions in a certain way. And it's just like tripping with friends insofar as everyone will come out of it having had a similar experience, one they can consider shared, but each person's experience will differ from the next in a way fundamentally different from the difference we think of when you and I watch a movie together and walk away with comparable but distinct experiences.

 

CM:

What are your predictions for the future of immersive media?

 

DB:

If you force me to predict, I'd say genres will coalesce around sweet spots of interactiveness, such as I describe in my post proposing the format of opture. And not just along a simple continuum from interactive to non-interactive; there are different kinds of interactivity:

  • Can you actually affect the outcome of the story?
  • Or can you only affect the way in which events unfold, the montage?
  • Can you affect whether you experience it chronologically?
  • Can you choose from which spatial angles you see things?
  • Are you aware of the capacity in which you are interacting with any of the above, or is it perhaps the author's intention that you specifically not understand how you're affecting some of these?

Maybe an example would help. Suppose you’ve got a standard conversation between two characters sitting at a diner.

In a traditional film you’d get a master establishing shot and then start going back and forth between over the shoulder shots, maybe some close ups mixed in. In an immersive film, though, since every shot or vantage is in a sense already a first person shot — because you are that person looking around — it might be more natural to experience things from the points of view of the two characters. While I feel like point of view shots are not unheard of in film and TV, we see them pretty often and aren’t surprised when we see them, they’re still striking, we notice them and wonder about their importance. Maybe in VR they would be not as big a deal. They might turn out to be the more natural way to experience the scene.

Now imagine that one way you could experience the scene would be always from the perspective of whichever character is talking: you are made to feel like you are the active character. Alternatively, the author might have you do the opposite and always experience it from the perspective of the character being spoken to. Both of these are fundamentally passive viewing experiences, but each could give you a different level of identification than the kind you get when watching traditional film. You might be made more sensitive to the words spoken. I'm way more critical of the words that come out of my own mouth, so a line from a character in some movie that you might react to with "Eh, some people are just like that," in immersive media you might instead react to with "Shit, so this is how people get this way."

So, to finally get to the actual bit about interactivity: in this example, whose perspective you experience the scene from could depend on your choices. The experience could be programmed so that as long as you held eye contact you'd stay as one person, but if you ever broke eye contact, it would switch you. Or you make eye contact with the waitress and the other character reacts to that.

This all said, I think game-like experiences are going to become increasingly popular and non-interactive cinematic type experiences less and less popular, and I’m okay with that.

 

CM:

What have you been working on yourself?

 

DB:

I’ve had issues with my DK2 recently, but I was able to build a dynamically spatial sound sculpture. I also worked on a mobile game for a bit and an animated film.

 

CM:

Can you tell us a bit about what you learned working on an immersive animated film? It sounds really hard. I have no clue what that would look like.

 

DB:

Storytelling is hard in immersive cinema. Editing is particularly hard. We barely made it past just technically getting control of the camera and making it do what we asked.

When it’s time for an edit, a consideration is reacting appropriately to the direction the user happens to be looking. The moment of the cut needs some flexibility where in traditional film there is none whatsoever.

Continuity editing (a film theory term referring to things most people take for granted: match on glance, match on action — one shot shows a character opening a door, the next is from inside the room they're entering — the way you piece together the spatial reality of the film) is just not going to be as practical, or as important, in immersive cinema.

 

CM:

What’s the solution to preserving continuity? What techniques need to be implemented to produce a constant narrative?

 

DB:

We take continuity editing for granted, especially if we’re accustomed to watching modern — and especially Hollywood style — movies. That’s become the accepted way to edit film together for the mainstream, because it’s really easy to watch and understand. But this style of editing had to be researched, developed, and refined over the years, and it was by no means obvious or considered necessary. If you watch earlier films, or more experimental films throughout the years, you can find a vast world of ways to piece shots together that convey connections between segments of moving picture that are more symbolic or thematic or sometimes narrative but in a more abstract way — not as bound to visual, physical spatial continuity.

VR cinema has inherently greater replay value than traditional cinema, because you can only look in one direction at a time. Immersive cinema authors can — if they choose to — create experiences that can't be taken in in their entirety the first time through. Even if you make something that, unlike a game, is always 10 minutes long (the same 10 minutes of material each time, with no way to affect the outcome), just take 360 spherical degrees and divide by your field of view: that's an objective minimum number of viewings required just to literally see all of the footage. With a 90-degree field of view, for instance, that's at least four viewings, and that's ignoring the vertical dimension.

There’s another endlessly fascinating dimension a potential form of media could go down. The amount of feedback the creators of television shows already get these days between episodes and seasons… the influence is strong enough over episodic content that the audience can steer the direction of the show. This is already happening. But it could even be: it is the same episode over and over, but over time the people who watch it are able to affect it, make it into something new. How is this different from, say, opening a bar? I might have had some idea of what vibe I wanted the place to be, but it’s defined over time by who chooses to come to it. Maybe I don’t want to predetermine what it means, maybe I just wanted to make a template that people could take collectively and make something out of it. There could be a version history of the episode. ‘My favorite version of this episode is 3.2’. This is not too dissimilar with what’s going on with reboots. The tools are just going get easier and easier for the populace to use; people don’t bat an eye at meme generators. Don’t underestimate the power of fans: just look at Bartkira.

For me, the biggest hurdle for immersive cinema is live action capture technology. Until someone gets light field capture down for real, I'm not really interested in going for it. Look at Google Jump. Amazing, but I don't think even they've quite got it; there are still some artifacts, at least from what we can tell from the footage they shared at I/O. I think their technology is perfectly suited to sharing experiences from friend to friend in casual use, but it's not at a cinematic level yet. We've seen perfect still light field capture with Otoy's crazy thingamajig spinning the camera all around, and maybe you could suture that together with whatever their holographic stages are called: piece together an actor's performance, copy-paste them into a captured still environment to simulate live action, such that you can be there and get that parallax. It's just not worth it with only stereoscopic depth cues, though Felix and Paul's Cirque du Soleil experience is nonetheless impressive. Until light field works, we've got animation.

 

CM:

Will light field technology replace cameras?

 

DB:

You mean will Lytro replace DSLRs? People still listen to the radio. Old forms remain. Niches will be into it forever.

 

CM:

What do you think about big studios? What do you think about their approach to immersive cinema storytelling?

 

DB:

I feel gratitude and suspicion. I'm grateful that some really creative and thoughtful people have the money to already be exploring the real nuances and pioneering this stuff. All power to them. It's great. I'm sure, though, that given their context they'll be striving toward achieving things which ultimately I'm not that into personally, as more of an experimental wacko type. I mean, they're probably going to go straight for the heartstrings, and the horror, and the mindfuck-type stuff. I don't know. I can't wait until this technology is truly democratized in the way we're starting to see with traditional film.

 

CM:

I’m more excited about what other humans out there have to share than Disney. Somewhere out there there’s an Alejandro Jodorowsky of VR waiting for the power to unleash their vision.

 

DB:

If the ultimate goal is to bring all the people, all the consciousnesses on the planet closer together, we don’t need more white dudes in capes with hammers jumping around. We need more of this handing out laptops and cameras to people in the third world and seeing what they make. Once they have 360 cameras, too, then literally anyone is anywhere anytime. And that’s the dream.

 

CM:

It’s a dream, certainly.

 

DB:

Yeah, what am I saying. No one has “the” dream.

 

CM:

Are there downsides to this level of connectivity, though? How will the individual, the self, remain comfortable with being itself while at the same time having this flood of input from the masses? How will that balance as technology ramps?

 

DB:

Well, keep in mind the opposite effect, where technology helps us reinforce our beliefs about ourselves more easily. Take something like the Facebook News Feed, where I feel more connected to everyone around me than I ever was before, and to things happening all over the planet. But when you think about it, you're not just getting a flat feed of anything and everything. A corporation is using the most advanced techniques it can to figure out and give you exactly what you want to hear. And that's only going to get stronger and stronger. If that is what you want — to massage the patterns that are already forming in your consciousness — that path will be available to you. We'd have a world of superficially connected but even more separate entities. I'm not saying that's a nightmare either; maybe that's cool and fine. But it's definitely a thing to think about.

 

CM:

Would you want to live a slave to pleasure like that?

 

DB:

Hard to say. I think there’s a lot more possible than that. There’s nothing wrong with euphoria… maybe we should question whether peace is the ultimate goal, or whether preserving strife and risk would be better. You can’t create something without destroying something, if only the absence of the thing you created. Growth is violent.

 

CM:

Can you have peace for a species like ours without the self? Do we have to remove aspects of the self like ambition and greed, and whatever comes out of them? Is that still human, even if we have peace? Should we remove these things?

 

DB:

I would like to think we could have both: a transcendence while preserving what we arose from. I think that greed and conflict, and even suffering, are interesting things. I don't know why this word is coming to mind, but: "zoo." Almost as if humanity could be put in a zoo of sorts, where we observe what we used to be, for our continued pleasure and inspiration, while existing apart from it. Putting it as a "zoo" sounds awful; zoos are immoral institutions in my opinion, and there's got to be a better word for what I'm going for here. But if humans could be allowed to keep reinforcing their own individualities, and those individualities were interesting and worthwhile to the peaceful being they had all combined into, then that sounds like the best of both worlds to me.

I guess what I was going for was more like preservation: the way we preserve a species rather than force it to extinction. We can't halt certain partially-out-of-control processes of overwhelming change that we kick into motion, but we can do little things to curtail the completeness and permanence of their violent fallout. No one may ever write poetry again in a language saved just short of extinction, but as long as some people still speak it, or as long as events are recorded in history books, at least they weren't forgotten. That's maybe the best we can hope for humanity itself in the face of transcendence.

 

CM:

How could we possibly get there, considering the state of the internet now?

 

DB:

If we can make it past all the major existential crises (such as irrevocably ruining our environment) and solve our critical social issues. We might be too stupid, warlike, and genocidal to make these things happen, sure. But there's no need for a scarcity economy anymore, where the richest 1% have 90% of the wealth and everyone else has nothing; we should be able to have robots harvesting all the food we need to keep our human bodies alive, so we can all focus on whatever we're most interested in becoming, exploring our own unique snowflake blinding rainbowy crystals of light, and join in some crazy shared VR where we can exchange those things…

 

CM:

It will get to a point where robots do all that. What are humans left to do? We as individuals could figure out what it is that gives us meaning and purpose. Maybe the journey to figure that out is the thing. It's individual; it's up to you. But what we could do as a collective mass is party, party to the ends of the universe, because what else is there to do, really, once we've achieved all that? This is the future of humanity once we cross the singularity. It'll be us, VR, and AI navigating and exploring our selves, collectively partying, holding giant Burning Mans all over the world.

 

DB:

A lot of people agree with this.

 

CM:

There’s so much that could go wrong though! I’m holding onto this happy vision of the future.

 

DB:

I probably shouldn’t tell you to read Eliezer Yudkowsky’s fun theory. Is fun in the universe exhaustible? The way he defines fun is novel experiences to be had, problems to be solved in a more mathematical logical sense. Is it possible we could figure everything out and get bored? The party could end, the booze could run out.

 

CM:

Goddamnit. We partied from every possible angle we could have.

 

DB:

It’s theoretically possible.

 

CM:

We can’t discount that possibility.

 

DB:

Well, we’ll cross that bridge when we get there.

 

CM:

What’s the best case scenario for you?

 

DB:

That AI makes some really cool art. That it helps us find ourselves.*

Some of my favorite experiences with AI have been neural networks trying to simulate human behavior and writing. Like What Would I Say: you link your social media accounts, it analyzes everything you've ever said, and it generates novel things that you might say, using a Markov chain or something similar, I'd think: considering each word individually, then statistically which words you're most likely to say after it, and building up phrases from there. I'm drastically oversimplifying, but that's the basic principle behind things like this, and it's hilarious. It's uncanny: sometimes for its Engrishy humor, other times for how accurate it is. These things are getting better and better, and you can train them, like Google's Deep Dream. People have been working on generative artwork for decades, generating novel works in the style of Mozart that confuse the heck out of experts. That's child's play, though. What if we could teach them the principles that inform great works of art, at a high enough level that they could create things that would inspire and provoke us? I feel like this brings us full circle to your fears of an AI clone of you, though, maybe.
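
(An aside for this write-up: here's a minimal sketch, in C#, of that word-level idea. It's my own illustration, not What Would I Say's actual code, and the toy corpus is made up.)

    using System;
    using System.Collections.Generic;

    class MarkovBabbler
    {
        static void Main()
        {
            // Toy corpus standing in for "everything you've ever said"
            string[] words = "i love virtual reality and i love immersive cinema".Split(' ');

            // For each word, record every word observed to follow it;
            // duplicates in the list encode the transition probabilities.
            var followers = new Dictionary<string, List<string>>();
            for (int i = 0; i < words.Length - 1; i++)
            {
                if (!followers.ContainsKey(words[i]))
                    followers[words[i]] = new List<string>();
                followers[words[i]].Add(words[i + 1]);
            }

            // Walk the chain: start anywhere, then repeatedly sample a likely next word.
            var rng = new Random();
            string current = words[rng.Next(words.Length)];
            var phrase = new List<string> { current };
            for (int n = 0; n < 10 && followers.ContainsKey(current); n++)
            {
                var options = followers[current];
                current = options[rng.Next(options.Count)];
                phrase.Add(current);
            }
            Console.WriteLine(string.Join(" ", phrase));
        }
    }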

 

CM:

Let me play the devil’s advocate here because you’re saying that art can be formulaic. Isn’t there something deeper, something that is innately human that comes out of nowhere. But with AI can’t we just tell where it’s coming from if we dig deeper into the code?

 

DB:

Ultimately humans are machines too, just really poorly understood ones. I'm not saying we'll understand ourselves completely one day so much as the opposite: we'll raise machines to the level of complexity where their creative impetus will be encrypted beyond our ken as well. Are machines going to have souls? It's pretty much the same concern. It's going to be easier for machines to make some sort of inspiring and valuable art than it will be for them to pass the Turing test in conversation. Art is just more open to interpretation, more abstract, while the rules governing conversation are so much more specific.

 

CM:

Are you banking on AI and VR?

 

DB:

Subconsciously, at least, I think I make decisions that way. I broke my wrist recently, and I haven't been taking as good care of it as I probably would if I didn't believe, on some level, that I won't really need it when I'm 80 in the way my great-grandparents need theirs. Much to the chagrin of those closest to me; sorry about this.

Perhaps as an artist I’m realizing more and more that I’m not as interested in creating artists. What I mean is: I think there are a lot of technical people out there interested not creating films and music and games directly but instead focusing on the parameters and constraints, the rules that govern an experience, and setting a machine out to come up with different presentations of those inputs. That’s creating an artist as art. Or turning it over to a combination of humans and AIs to create those experiences. You could say this is already happening with video games: developers came up with the rules of StarCraft and they wrote NPCs that obey those rules and put them out in the world with a bunch of humans and they compose a metagame together. And that metagame is something fascinating and beautiful which they didn’t create but they knew they were going to form the substrate for. This is really similar example to what I said earlier about creating a bar. As AIs become closer and closer to the abilities humans have, these two interests are going to become more and more similar. So to differentiate myself from that approach to introducing art into the world, I’m preparing to focus more on looking inwardly just on myself. I don’t mean in a crazy egocentric way. I’m just more interested in this: when I sit down with a set of intentions and constraints, what do I come up with? Even when it’s virtual psychedelics where the actual individual audience reaction is divergent, the aura of the work is quite solid. And I want to believe that that’s going to remain interesting to other people (whether they be human or artificially intelligent themselves) — that non-collective works of art will continue to hold the attention.

 

CM:

What do you have in the works for the future? From your melting pot of ideas? How do you intend on bringing forth your self-expression?

 

DB:

I recently had a Tarot reading, and the dilemma I posed to the reader was this: I feel like I want to have the greatest possible positive impact on the world, and I'm worried that I'm deluding myself when I say that the best way for me to do that is to focus on cultivating my self-understanding and my powers of articulating my personal experience, and sharing those with the world. Maybe, if what I really want is to help others, that's bullshit: I'm not worth my weight in art, and I should just get out there and end wars and hunger and genocide, directly helping the people who need it most in obvious ways. Because I don't do nearly enough of that.

I read this interesting article coming out of the Singularity Institute about a concept called "sins of the future." The idea is this: we look back very poorly on those who actively or passively took part in the slavery industry in United States history. We condemn those people. But for a lot of them, it wasn't a big issue in their lives; it was just the time and place they were born into. It's not like they were necessarily hateful people. So: we live in one of the wealthiest countries in the world, and the economy that supports our lifestyle is built upon global slavery. Mass incarceration. Maybe people will look back on us for just being okay with having comfortable lives while millions are in abject misery. They'll think we're awful people.

I feel awful about this; we all feel awful about this. We want to do what we can, and we're challenging ourselves to find the best ways to do it. But what is the best way? I don't think the answer is black and white. We all should do more to give directly, but also think about how each of us individually can make the widest, most long-term, most profound impact.

 

CM:

Is this where AI is going to step in and be a benevolent adviser?

 

DB:

I’ve been able to impose accountability on myself using technology, able to externalize pressure to meet personal goals. It’d be a dream if we could do a similar thing with AI on a societal level, externalize accountability as a species.

 

CM:

Or psychedelics! It's not outside the realm of possibility that we'll finally figure out that this is medicine. It could be used for therapy. If we had the research, the knowledge, and the mentality, we could make amazing breakthroughs. People need to breathe more; the world would be much better off.

 

DB:

What you’re tapping into here is a really important continuum, similar to the one we were talking about earlier with the new lines people are going to have to draw in romantic relationships — what’s another person, what’s infidelity: thought policing. Once we have things beyond psychoactive drugs that merely affect us on a physical chemical level, but we have mainlining type technologies like the artistic virtual drugs I was dreaming of creating one day, I hope that our power structures will be able to accommodate the prerequisite great amounts of freedom in the types of conscious experience you’ll be able to have.

 

*I hadn’t read Dave Egger’s The Circle yet when I did this interview, but this exact phrase figures disconcertingly in it.

watchline

04.11.2015

The trailer for the VR game Eve: Valkyrie is genius. Someone on staff recorded a few minutes of their experience playing through the opening sequence of the game. Watching this trailer, you can feel the distinct energy signature of this person's experience, reacting to the aleatoric elements of a primarily scripted event. It's similar to, but different from, the feeling you get watching handheld footage, sensing the presence and intent of the cameraperson behind the camera (the un-simulate-able subtleties of which explain the technique of placing real-life surfing documentarians into virtual reality to "film" the surf scenes of Surf's Up). Sometimes first-person POV shots are simulated, and sometimes they're captured with GoPros strapped to actual people's heads. The critical difference with E:V, and VR screencasts in general, is that the world is a definitive, limited, and infinitely repeatable experience, while the particular experiences possible to have in it are unlimited: you can't watch the same VR film twice.

And how best to debut a work of immersive cinema to a group of people in a traditional simultaneous collective experience? It seems to me the best method is for the director, or perhaps a separately, specifically talented author, to record the "definitive" or a "suggested" path through the story, circumventing all of the novel storytelling challenges VR cinema presents (how to get viewers to look where they're "supposed" to, and when) by just doing it more-or-less perfectly. This is a great method for trailers and previews, of course, but what I mean to say is that it could also become an art form in itself. People may still wish to gather in groups without HMDs between each other's bodies, hear each other's real-life laughter unmediated, and experience a story in the traditional giant-2D-rectangle-on-the-wall style. In the multiple-screen, multi-media, intermedia art and entertainment world we're moving into otherwise, this makes even more sense. Such 2D versions may come packaged with the 3D versions, complementary to them.

This isn’t like a director’s cut, since it’s not a re-organizing in time of existing elements, or addition or deletion of timed elements. It’s always the same total length of time, the same elements, just a differently chosen path of sight through them. And it’s not like a “take” either, since the action inside the world may not even have any variance.

In addition to the authoritative take on a VR cinema experience, sharing of individual fan experiences may become a thing. Either:

  1. broadcasting widely (one may become famous on VR YouTube for having a really good eye for "shooting" VR movies, for honing them down into the most artful or interesting 2D versions of themselves),
  2. just sharing with your friends,
  3. or keeping them for your own personal records. (Think about it: with traditional movies, you can never relive the first time you watched one, but in immersive cinema, in some way, you can! While you can't know your past thoughts, you can infer them from what you chose as most important to watch at each moment.) You might keep track of various watching experiences over time, too. You may have watched the same piece of VR cinema 5 times and still only beheld something like 42% of the total visuals there are to take in.

I propose a word to refer to this 2D projection of a 3D immersive film: the watchline. I considered many other options:

  • playline (meh, too vague)
  • viewline (it’s really more of a viewing line, so this sounds more like a synonym for eyeline which is a mere snapshot in time)
  • experienceline (too vague / related to consciousness, while it should be more about the material recording)
  • screenline (inaccurate)
  • observationline (too long)
  • vantageline (etymology is more about good positioning)
  • frameline (unfortunately this term is already used in film as the boundary between two frames on a film strip)

I chose watchline in part for its resemblance to the physics term "worldline," which describes an object's path through four dimensions, with time as the fourth. It's a very similar idea, except that instead of an object lathed out, snaking through time from its conception to its destruction, we have a framing of the 360 degrees of action lathed out, snaking through the material from its beginning to its ending.
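
To make the idea concrete, here is a hypothetical sketch, in Unity C#, of what a watchline might be as data (the names are my own inventions): nothing more than one head orientation per frame, recorded during a viewing and replayable later to render the flat 2D version.

    using System.Collections.Generic;
    using UnityEngine;

    public class WatchlineRecorder : MonoBehaviour
    {
        public Transform head; // the viewer's tracked HMD camera

        // One framing of the 360 degrees of action per frame: the watchline itself
        private readonly List<Quaternion> samples = new List<Quaternion>();

        void LateUpdate()
        {
            if (head != null)
                samples.Add(head.rotation);
        }

        // To produce the 2D version, drive a virtual camera through these
        // orientations at the same frame rate and render its view out to video.
        public Quaternion SampleAt(int frame)
        {
            return samples[Mathf.Clamp(frame, 0, samples.Count - 1)];
        }
    }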

For more VR Vocab, check out this video:

Sphere-out transition

04.11.2015

I’m having a heck of a hard time achieving a special effect in my Unity3D game engine. As far as I understand it, my struggles generalize to most graphics environments; this is simply a non-trivial problem. But I think this graphics effect would be amazing to experience, especially in virtual reality!

I want to transition from one scene to the next using a growing spherical portal — sort of the 3D equivalent of a circle-out wipe transition such as you might see in a movie (here's an example from Star Wars, where a circular window into Endor opens from the center of the screen outwards, replacing the previous shot of space:

[gif: sw_circle_fade]

). But in this case, parts of the next scene inside a sphere, not a circle, should replace the previous scene.

My first impulse was to use the stencil buffer in its most basic implementation. That is, the portal sphere has ColorMask 0 and ZWrite Off, but always passes the stencil test and replaces the buffer value with 1 (an arbitrary number, chosen for simplicity); then all the objects in the next scene use materials which are culled unless they pass a stencil test for equality with 1.
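
Here is roughly what that first-impulse setup looks like in ShaderLab (a simplified sketch; the shader names are mine, and the next-scene material just draws a flat color where real shading would go):

    // PortalSphereMask.shader: invisible, but marks every covered pixel with stencil 1
    Shader "Custom/PortalSphereMask"
    {
        SubShader
        {
            Tags { "Queue" = "Geometry-1" } // render before the next-scene objects
            Pass
            {
                ColorMask 0 // draw no color
                ZWrite Off  // leave the depth buffer alone
                Stencil
                {
                    Ref 1
                    Comp Always  // always pass the stencil test...
                    Pass Replace // ...and write 1 wherever the sphere covers the screen
                }
            }
        }
    }

    // NextSceneObject.shader: culled unless the stencil buffer holds 1 at that pixel
    Shader "Custom/NextSceneObject"
    {
        SubShader
        {
            Pass
            {
                Stencil
                {
                    Ref 1
                    Comp Equal // only render where the portal sphere marked the screen
                }
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                float4 vert (float4 v : POSITION) : SV_POSITION
                {
                    return mul(UNITY_MATRIX_MVP, v);
                }
                fixed4 frag () : SV_Target
                {
                    return fixed4(0.5, 0.5, 0.5, 1); // flat grey stand-in for real shading
                }
                ENDCG
            }
        }
    }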

This basically works except for one problem: some objects in the next scene which are outside the portal sphere are visible anyway – namely, those objects which are between the sphere and the camera! Again, only objects from the next scene that are inside the sphere are supposed to be revealed. Here’s a screenshot:

[screenshot: fail_sphere]

As you can probably tell, the closest corner of the house is outside the sphere, and therefore it shouldn't be visible yet. As the sphere grows outward from a point, the house should be revealed in a series of spherical slices. I want to see crazy curved cross-sections of the walls and chairs and stairs, etc. as the sphere grows. The way this is working now, it's ultimately no different from the 2D circle-out effect from the movies: the sphere just cuts a circular hole in the screen through which the next scene is seen.

I believe the fundamental problem is that the stencil buffer functions with respect to final pixels on the 2D screen, not with respect to depth in 3D space. In other words, the renderer considers the screen, and any pixel where the sphere appears, regardless of depth or occlusion, becomes "stencil 1." Then, when it comes time to render the actual physical objects in the scene, they have no idea whether they are in front of, inside of, or behind the portal sphere, and if they're stencil-1 objects they just get rendered irrespective of that important factor.

My understanding is that in order to get the stencil to apply in space, I need to turn ZWrite back on in my shader; otherwise the stencil information the sphere writes has no depth info associated with it. The problem with that is that when ZWrite is on, objects behind the sphere get occluded. That won't work, because I need to see the objects inside the sphere and behind it. Even when I set the ColorMask to 0, making the sphere invisible, it still occludes things, essentially cutting a hole in the world (a.k.a. a DepthMask, like so).
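
(For reference, the DepthMask idea is only a few lines of ShaderLab; this sketch shows why it hides everything behind it:)

    // Writes depth but no color: anything rendered later and farther away fails
    // the z-test behind it, so it cuts an invisible "hole in the world."
    Shader "Custom/DepthMask"
    {
        SubShader
        {
            Tags { "Queue" = "Geometry-1" } // draw before the objects it should hide
            Pass
            {
                ColorMask 0
                ZWrite On
            }
        }
    }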

So really what I need to figure out how to do is attribute depth information to my stencil buffer entries without actually writing to the depth buffer, or go the standard route and write to the depth buffer but somehow override the default behavior of occluding more distant objects. I’m not sure which is better, but probably either way would require some scripting.

For example, if I could use a variable as the Ref of the stencil buffer, I could convert the stencil buffer into a sort of custom depth buffer serving my needs (this CaptainScience person seems to have found custom depth buffer functionality in Unreal, at least). Or I might be able to script the sorting of the objects behind the sphere, using a technique similar to the one here with texcoord and ComputeScreenPos.
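
(The variable-Ref part is at least supported: ShaderLab can read the stencil Ref from a material property, which a script can set per object. A sketch, with names of my own choosing:)

    Shader "Custom/VariableStencilRef"
    {
        Properties
        {
            _StencilRef ("Stencil Ref", Int) = 1
        }
        SubShader
        {
            Pass
            {
                Stencil
                {
                    Ref [_StencilRef] // supplied by the material, not hard-coded
                    Comp Equal
                }
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                float4 vert (float4 v : POSITION) : SV_POSITION
                {
                    return mul(UNITY_MATRIX_MVP, v);
                }
                fixed4 frag () : SV_Target
                {
                    return fixed4(1, 1, 1, 1);
                }
                ENDCG
            }
        }
    }

    // From a script: GetComponent<Renderer>().material.SetInt("_StencilRef", 2);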

I learned a lot about stencil buffers from exploring the demo project posted here. Unfortunately, as the project's pdf readme admits, this solution is limited and only works in this special case: the stencil mask is drawn before geometry. As soon as you pull an object in front of the glass, the effect is ruined, because the part of it outside is still seen; it becomes clear that the portal simply draws on the surface of the screen and defines where on the screen the object can be seen, not where in the game world the object can be seen. (Here's another example of what I mean by "in front" being a problem: in the very last comment, the shot of the boat protruding through the wall.) As expected, the project's writer ultimately suggests scripting solutions for sorting in depth if I need the portal to render simultaneously with the geometry, as I do.

Depth + Stencil

Now, I have done my homework enough to know that the depth buffer and stencil buffer can be worked with together; for example, "Carmack's reverse" trick for volumetric shadows, where the buffer gets inverted/toggled each time it crosses the border of a shadow. My solution would need up to three states instead of two while you are still outside the growing sphere: the current scene between you and the sphere, the next scene inside the sphere, and the current scene on the far side of the sphere wherever no objects inside the sphere occlude it. (Once you're inside the sphere there are only two: next scene and current scene. Although at that point it may make more sense to begin referring to the "next" scene as the current one and the "current" one as the previous; I'm not sure what better time there'd be for making that terminological distinction.) So if I could figure out how to realize my sphere as the same type of object these volumetric shadows get realized as, that might work.

This might have helped. I like the idea of flipping the stencil buffer on when I go in, and back off again when I go out of the sphere, but in practice I can’t get this to work.

And this might have helped if the discussion went anywhere.

The section in the Unity documentation on “Transparent shader with depth writes” seems like it could be useful if I could parse it. It certainly seems possible to see through objects that write to the depth buffer, which is really all I need! It’s just that the docs seem really sparse and vague on all of this (or assume a much greater familiarity with the basic material)

CSG

People have spoken of Stencil Buffers used to create CSG in graphics in general, so I’m sure it’s somehow possible in Unity. Erik Rufelt’s post here seems promising, for example, and this guy got something started.

Then I looked into realtime constructive solid geometry (runtime CSG); it has to be runtime, Boolean Ops won't work. BooleanRT, GameDraw, MegaFiers… Also, this guy tried a couple more that I didn't find, but said they were all buggy and broke down when attempting more complex geometry (which my scene definitely has).

Fiddling with Z-Testing

A friend suggested inverting the depth test, so that objects behind the rim of the sphere wouldn't get occluded. The problem then is that there are multiple layers of them stretching into the infinite distance, and now they're ALL inverted in depth and appearing on top of each other in funky ways.

I attempted a solution where the FIRST object in the next world seen through the sphere ALWAYS passes the z-test and then flips the stencil up to 2, and objects can pass the stencil test at 2 with default z-behavior; unfortunately, every object in the next world would need BOTH of these materials for the effect to work (because in some situations it could be the very first object past the sphere's lip, and in others not), and a single Unity material doesn't support multiple shaders.

I played around a bit with the ZFail param of the Stencil buffer. I thought, “if it doesn’t fail the z-test, then it can’t be inside the sphere yet!” But if there were ever more than one object from the next world in front of the sphere, this would only cull the first one.

Other Failed Rendering Strategies

Perhaps it would be possible to achieve this using a single shader with multiple passes and/or replacement shaders (it seems like the "queue" tag can uniquely enable ordering of passes?), but the documentation doesn't provide any examples of multiple passes that I can find, nor can I find any examples online, so that possibility remains a mystery.
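
(Declaring multiple passes is at least straightforward; within a SubShader they render in the order written. A minimal sketch of my own:)

    Shader "Custom/TwoPassSketch"
    {
        SubShader
        {
            // Pass 1: mark the stencil, drawing nothing visible
            Pass
            {
                ColorMask 0
                Stencil { Ref 2 Comp Always Pass Replace }
            }
            // Pass 2: draw color only where pass 1 marked the buffer
            Pass
            {
                Stencil { Ref 2 Comp Equal }
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                float4 vert (float4 v : POSITION) : SV_POSITION
                {
                    return mul(UNITY_MATRIX_MVP, v);
                }
                fixed4 frag () : SV_Target
                {
                    return fixed4(1, 0, 0, 1);
                }
                ENDCG
            }
        }
    }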

I looked into Layer Masks, but it doesn’t matter how many cameras you use – it still doesn’t solve the problem of cropping out the stuff from the next scene that is between the portal sphere and a camera.

I looked into Alpha blending but don’t think it’s right for this job.

And render textures seem more for projecting views into parts of the world onto flat surfaces, wouldn’t really apply here.

I understand that transparent objects use the classical "painter's algorithm" style of rendering, back to front, which might also come in handy. Even though none of the objects in my scene are actually transparent, I could try labelling them as such and put up with whatever weird artifacts from intersection or equal distance might occur…

There may be some way to adapt the work done here.

This guy’s three part post seemed promising, since the effect from the game Quantum Conundrum he was inspired by turned out to be exactly what I’m going for!

[gif: quantum_cunundrum_sphere_out]

Unfortunately, he never actually gets there in his three-part tutorial; it just ends with a depth texture. I guess I'm not surprised that a project designed by one of the folks behind Portal would feature this effect, but it does make the surprising dauntingness of achieving it easier to bear.

I just started learning about graphics and rendering this weekend, so I apologize for my n00bness. Any guidance would be deeply appreciated.

Virtual Reality Dynamic Positional Audio Experimental Sound Sculpture

11.30.2014

This is a little something I whipped up in one day. You will need the Oculus Rift DK2 to experience it.

Windows | Mac

You will find yourself floating invisibly inside an icosahedron, listening to some strange music. Look at different triangular faces to change the tuning of the instruments. Each of the twenty faces is associated with a different hexany, one for each combination of three of the first six odd numbers past one, with one included as the fourth number ([1,3,5,7], [1,3,5,9], [1,3,5,11], [1,3,5,13], [1,3,7,9], … [1,9,11,13]). Every hexany consists of six pitches, and these are mapped to the six instrument timbres, which in turn are mapped to the six cardinal directions (above, below, left, right, ahead, behind).
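
(If it helps to see the combinatorics, here is a sketch, not the project's actual code, that enumerates the twenty seed tetrads in C#:)

    using System;
    using System.Collections.Generic;

    class HexanySeeds
    {
        static void Main()
        {
            // Every 3-element combination of {3,5,7,9,11,13}, with 1 as the fourth number
            int[] odds = { 3, 5, 7, 9, 11, 13 };
            var seeds = new List<int[]>();
            for (int i = 0; i < odds.Length; i++)
                for (int j = i + 1; j < odds.Length; j++)
                    for (int k = j + 1; k < odds.Length; k++)
                        seeds.Add(new[] { 1, odds[i], odds[j], odds[k] });

            Console.WriteLine(seeds.Count); // C(6,3) = 20, one per icosahedron face

            // Each hexany's six pitches come from the pairwise products of its four
            // seed numbers: C(4,2) = 6 of them, one per instrument timbre.
        }
    }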

This project was inspired in part by a demo Total Cinema 360 gave at the first VR Cinema meetup. In this demo, you found yourself inside three virtual realities at once (arranged as trienspheres): look ahead and you're flying over a volcano; look behind and to the right and you're in a living room hanging out with some folks; look behind and to the left and you're on stage at a rock concert. The kicker is that when you're looking into any one of these three worlds, the sound surrounding you is that world's. In other words, look into the volcano and you can hear the volcano behind you, even though you know that if you look back there you will see a room or a concert and the sound of the volcano will disappear. Thus an element of sight has been lent to sound: you can only hear things along a line extending from the front of your head, rather than the normal state of hearing everything around you no matter where you look. This is what I refer to as dynamic positional audio.

There’s a lot left to improve on this, and hopefully I will be able to get to it soon. I would use more distinct tunings, maybe have a continuum between them, maybe have the sound sources in motion (in response to your motion even maybe), and just make it more beautiful to look at and hear. But hey, I’m a beginner. Ship.

I used Different Methods’s SpatialAudio Unity plugin after much hunting and attempts to do it myself using a MaxMSP+Unity integration.

NED Talk 21.0: Virtual Reality Cinema

11.17.2014

Check out this reproduction of a talk I gave recently about Virtual Reality cinema:

Opture

07.08.2014

Opture is the term I propose for a new artistic medium which has recently been made possible by the advent of consumer virtual reality technology: immersive story experience and world exploration controlled exclusively by head and gaze tracking.

Opture is distinct from both VR cinema and VR games. It is not VR cinema because it includes not only immersion in the story world but also the power to interact with it. Due to the constraints on this interaction, however, the experience of playing an optie is much different from that of playing a game; without any button-based input, voice commands, or body tracking, an optie player should feel in a more purely cognitive state.

Opture is much more than a cross between film and game, though. Because inside an optie the player knows that their watching is itself being watched, every split second, opties have the potential to be unprecedentedly active, intimate, and engaging experiences. The human gaze is closely intertwined with our train of thought and our willpower, so in sophisticated works of opture, players may feel nothing less than their imagination coming to life. This dreamlike agency is what will draw people to opture again and again.

Now, at one extreme, opture could be as game-like as letting the player control a cursor with the direction of their gaze and blink to click. At the other extreme, it could be as VR-film-like as correlating head motion only and always with a change in vantage, and keeping hidden from the player the nature of any effect their attention may be having on their experience. The most original and exemplary opties, though, should explore the possibilities in between these extremes, dynamically.

To my knowledge, no true works of opture have been created yet. Auteurs and theorists alike, rejoice: as an entirely new medium of storytelling and world sharing, opture calls for entirely new styles, methods, and grammars. Certainly there will be those who use this new virtual reality technology to make striking yet gimmicky statements, as the nickelodeon films did a century ago. But like the pioneers who innovated continuity editing, montage, composition in motion, the 180-degree rule, etc. — creating the film language from the ground up and paving the way for others to express themselves in it — a huge opportunity exists for us today to spread understanding and appreciation of the unique powers of this new artistic medium. Enough territory remains open for each of immersion and gaze tracking alone, but their intersection is where the gold likely lies. This is a tremendously exciting time!

 

Etymology

I coined the word “opture” by hiding the Proto-Indo-European root ‘op’, for choice or preference (as in “option” or “opinion”), inside the Greek root for eye or sight, ‘opt’ (as in “optics”); thus, in opture, our choices play out invisibly through our eyesight.

  • Like the word “film”, “opture” refers both to an individual work (the suffix ‘-ure’ as in [moving] “picture”) and to the art itself (the suffix ‘-ure’ as in “culture”).
  • “Optie” for short comes from the ‘-ie’ ending of “movie”.
  • The verb “to play” an optie is suggested because of its ties both to “playing” games and “pressing play” on a video file.
  • In more formal settings, “optigate” may be the more appropriate verb for playing optures, using the Latin ‘agi-’/’act-’/’-ig-’ root for drive/act/do/go/move, as in “navigate.”
  • I suggest “optural” as the adjectival form, since the alternative ‘-ive’ and ‘-orial’ suffixes seem more exceptional.
  • An “optographer” would be a maker of opties.

 

Eye-Tracking Technology

Leading consumer HMDs do not presently feature gaze tracking technology. However, there are several good reasons to expect it to be included in the near future:

  1. Gaze tracking can be used for dynamic inter-pupillary distance calibration. Distance between pupils varies from person to person, and even varies for each given person depending on the distance they are focusing on. Knowing the exact inter-pupillary distance at all times is critical for fine-tuning stereoscopic presentation; the tiniest inaccuracy can still contribute to motion sickness, a central challenge that VR engineers are striving to overcome.
  2. Gaze tracking is also necessary to achieve foveal rendering, a holy grail of VR engineering. Reducing visual processing requirements by an order of magnitude would be a huge boon to the industry.
  3. Finally, gaze tracking could also enable gaze-dependent depth of field. In cinematographic imagery, viewers are limited to the focal length chosen for them, which gives rise to a certain type of beautiful artistic experience. However, allowing viewers to change focal length just as they could if they were there, seeing with their own eyes, opens up the option to elevate this art beyond the cinematographic and into the realistic. Computer animation is already capable of providing such images with dynamic focus capability, and plenoptic cinematography may become available as well.

Even if gaze tracking technology does not see wide adoption in the virtual reality community, the existing rotational head tracking functionality alone still offers a huge range of possibilities for optural expression. And on the other side of things, since control interfaces based on gaze tracking, independent of virtual reality, are simultaneously becoming popular, it would be possible to create opties whose immersion aspect is approximated with parallax.

 

Industry Considerations

In terms of development, marketing, and distribution, it may be helpful to think of optures less like films and more like video games. The key factor here is the interactiveness.

Optie creators wield freedom over players' experiences more on the order of game designers than film directors. A screenplay is plainly insufficient as a blueprint for an optie; a great many further considerations must be made for each scene: how it may be controlled, how it ends, and where it goes. While some opties may have a set runtime like a film, others may allow players to affect the pacing or even the events portrayed. Works of opture should therefore exhibit a variability of scope more on par with the gaming industry than the film industry.

Also, opties will not be playable as media files on ordinary VR cinema players; each opture will have its own unique manner of reacting to players' attention. While standard optural modes will undoubtedly emerge, and certain formats may thus give rise to specialized media players for convenience, I expect most opties will be too customized to come packaged as anything other than stand-alone programs.

 

Classification

One of the most important ways to classify optures will probably be by the extent of control afforded to the player:

  1. Vantage only. This is just a shade beyond VR cinema; the single difference is that the relationship between head rotation and vantage within each shot may be more complicated than the standard direct correlation, thus opening the door to possibilities beyond mere embodiment. Affecting the position of the vantage is also possible, but montage, duration, and story are all still set.
  2. Montage. One may further influence an opture’s editing by directing attention in certain ways or to certain targets. This could include cutting between different perspectives on a scene, or choosing unique slices of action during periods of parallel editing.
  3. Time. The duration of the optie becomes a variable. Applications include mass-media interests such as tailoring pacing based on analytics of a player's engagement with individual scenes.
  4. Diegesis. Everything is up for grabs. Players affect not just the storytelling but also the story itself, its characters, and its world.


Social aspects exist on a separate axis:

  1. Asynchronous. Record your particular optie experience and share it with someone else, as a sort of personalized immersive film.
  2. Synchronous. Play together simultaneously. Be able to see where your playing partners are looking, perhaps as an unobtrusive crosshair, perhaps color-coded if they are ahead or behind you in time, etc. Your experiences are otherwise completely independent.
  3. Cooperative. Play together, but now with the power to influence or share each other's experiences. Keeping vantage bound to your own head rotation is probably still important, but other factors like montage, duration, and story could be affected in new ways through the combination of inputs from multiple players.

Finally, I expect opture will be categorized by the level of transparency of the players’ control. Easier opties may hold the players’ eyes, so to speak, as they lead them comfortably through the experience. Experimental or puzzle opties, on the other hand, may intentionally confuse or deceive players about the effects their attention is having on their experience (pseudoludonarrative). Horror opties may specifically disobey the intentions of their players.

 

Initial Theory and Ideas

To help get your mind going imagining optural experiences, here is an unordered collection of ideas I've had so far:

  1. Look to relocate. See something you're apparently meant to examine more closely, or some spot you're probably meant to see from next? Look at it. Hold your gaze a moment, and the optie will accept this as your decision to move there (a dwell-timer sketch follows this list). A game like Myst would fare particularly well as an early optie: a limited set of vantage points traveled between in punctuated cuts.
  2. Character embodiment. First person perspective is rarely used in traditional film and television storytelling. However, because in immersive media every shot is (in a sense) first person, I predict that it will be a more palatable mode in both VR cinema and opture, i.e. that first person points of view will be mixed in with the third person and not feel gimmicky or odd. Experiencing a character's actions and hearing a character's words as if they are coming from one's own hands and mouth may bolster emotional/psychological involvement with them.
  3. Distributed character embodiment. In a typical film, a conversation between two characters is often shot in alternating over-the-shoulder shots. In a typical optie, perhaps, such a conversation would tend to be told instead in alternating first person perspective shots. Imagine being asked to choose which character's perspective to experience a conversation from, or whether or not to make eye contact during it. Normal film spectators are already capable of sustaining emotional/psychological identification with multiple characters in a scene at once — feeling e.g. both the anger of one and the shame of another. I predict that players will be able to sustain a distributed sense of physical embodiment as well, in at least two characters at once. This distributed embodiment should not compromise the aforementioned effect of each individual embodiment bolstering emotional/psychological identification with its respective character.
  4. Look down to cut. Cues may be given to train the player in montage controls such as this.
  5. Squint to zoom.
  6. Examine. Rather than translate rotational input as different angles out from a fixed point — as is standard for a sense of embodiment — a shot could translate it as different angles in toward a fixed point from a fixed distance. Thus, rotating your head would cause your vantage to swivel around, in inverted motion, as if on the inside of an invisible sphere surrounding an object of interest (a sketch of this mapping follows the list). This technique should be useful when an optie director wants to provide you in a particular moment not with the freedom to look around a space, but rather the freedom to examine a specific thing however you choose. Experimentation should be done to determine whether this sort of experience feels natural and comfortable.
  7. Coaxing. Sometimes, as an optie designer, one may wish to artfully restrict players' options, to guide them to experience a shot in a certain way or a limited set of ways. For example, some shots may be best experienced motionlessly, while others may be best with only the ability to pan side to side. However, breaking the correlation between head rotation and vantage completely can induce sickness, and ultimately there is nothing one can do to restrain players' necks, so the best one can do is coax players to watch in the desired way. The basic mechanism is to promptly but smoothly correct against certain rotations, as if rotating the entire world under or around the player's feet; after this happens, the player will intuitively undo their ineffectual motion, returning to a more at-rest position (a sketch follows the list). This effect may sound drastic, but once players recognize that their current optie features it, they will adjust quickly, initiating their looks carefully at first, and the friction between the players' wills and that of the designer will become quite subtle. Using coaxing, optie designers can bestow upon players the feeling of having the skill of a professional cinematographer, rather than just an average dude erratically looking all around wherever and however they feel like.
  8. Endurance challenge. Next-gen art-house long takes test the player's ability to concentrate. The shot zooms out, and new images enter your periphery — but if your curiosity gets the better of you and you look away from center, you fail. Perhaps you must restart the shot, or perhaps you miss out on a visual reward. It's like the candle shot from Andrei Tarkovsky's Nostalghia, crossed with the self-restraint demanded by the Zone at the end of his film Stalker.
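
A minimal sketch of the dwell timer behind idea 1, in plain Python. The cone angle and dwell duration are guesses to be tuned in playtesting, and the direction vectors are assumed normalized:

```python
import math

DWELL_SECONDS = 1.2   # how long a held gaze counts as a decision
CONE_DEGREES = 5.0    # how tightly the gaze must stay on target

class DwellSelector:
    """Accumulates gaze time on a hotspot; fires once the dwell completes."""

    def __init__(self):
        self.progress = 0.0

    def update(self, gaze_dir, hotspot_dir, dt):
        # Compare the angle between (normalized) directions to the cone.
        cos_angle = sum(g * h for g, h in zip(gaze_dir, hotspot_dir))
        on_target = cos_angle > math.cos(math.radians(CONE_DEGREES))
        self.progress = self.progress + dt if on_target else 0.0
        if self.progress >= DWELL_SECONDS:
            self.progress = 0.0
            return True   # time to cut to the hotspot's vantage point
        return False

# 120 frames at 90 fps of staring straight at a hotspot dead ahead:
selector = DwellSelector()
assert any(selector.update((0, 0, 1), (0, 0, 1), 1 / 90) for _ in range(120))
```

Resetting the progress on any stray glance keeps accidental relocations rare; a gentler variant might decay the progress instead of zeroing it.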
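
Idea 6's inverted mapping can be stated precisely. Here is a minimal sketch (hypothetical names, angles in radians) that turns head yaw and pitch into an orbiting vantage that always faces the object:

```python
import math

def examine_camera(center, radius, head_yaw, head_pitch):
    """Map head rotation to a vantage orbiting `center` at `radius`,
    always looking inward. Negating yaw and pitch produces the inverted
    swivel, as if the player were inside a sphere around the object."""
    yaw, pitch = -head_yaw, -head_pitch
    x = center[0] + radius * math.cos(pitch) * math.sin(yaw)
    y = center[1] + radius * math.sin(pitch)
    z = center[2] + radius * math.cos(pitch) * math.cos(yaw)
    position = (x, y, z)
    # The offset from the center has length `radius`, so this normalizes.
    look_dir = tuple((c - p) / radius for c, p in zip(center, position))
    return position, look_dir

# Example: a 30 degree head yaw with level pitch, orbiting at 1.5 m.
pos, look = examine_camera((0.0, 1.0, 2.0), 1.5, math.radians(30.0), 0.0)
```

Whether negating the rotation (versus matching it) feels right is exactly the comfort question the item above says needs experimentation.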
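
And here is one way the coaxing correction in idea 7 could work, reduced to a single yaw axis for clarity. Everything here is an assumption: the band half-angle, the stiffness, and the choice to implement the world rotation as a simple yaw offset.

```python
import math

def coax(world_yaw, head_yaw, band_half_angle, dt, stiffness=4.0):
    """Smoothly rotate the world back under the player whenever their
    effective viewing direction drifts outside the designer's band.
    All angles are in radians."""
    effective = head_yaw + world_yaw              # where the vantage points
    overshoot = abs(effective) - band_half_angle
    if overshoot > 0.0:
        correction = math.copysign(overshoot, effective)
        alpha = 1.0 - math.exp(-stiffness * dt)   # smooth, never a snap
        world_yaw -= correction * alpha           # counter-rotate the world
    return world_yaw

# A player holds a 40 degree look against a 15 degree band for one second;
# the world gradually rotates to absorb the excess.
offset = 0.0
for _ in range(90):
    offset = coax(offset, math.radians(40.0), math.radians(15.0), 1 / 90)
print(math.degrees(offset))  # approaches -25
```

The exponential smoothing is what makes the correction feel like the room drifting rather than the view being yanked, which matters for the sickness concern raised above.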

 

Conclusion

The pursuit of full-body virtual reality presence is a beautiful thing, but it is not my thing. Please wake me up once someone's built a simulator for feeling like one is inside the game Portal — experiencing a shift in the direction of gravity passing over one's body like a wave. I expect that this won't come until we're tapping directly into neurons, at which point "ear tracking" will become as possible as gaze tracking and I'll probably switch over to become a "dynamic musician" or whatever they'll call it.

Until then, there is plenty of beauty to be explored by extending the tried-and-true seated experience, as the team at Oculus has been preaching. Consider Lucky's Tale (or as I like to call it, after Super Mario 64's groundbreaking cameraman character, "Lakitu") as a genius and subtle example of tapping the freedom of vantage into a working genre. Let's see this in an RTS soon, too!

Regarding newfangled technology allowing one to control games directly with one's thoughts: if it becomes precise enough to trump eye tracking for these purposes, then all bets are off.

A danger of increasingly comprehensive mediums is that emphasis risks falling on the sensory material itself rather than on the thought or emotion it has newfound power to express. And of course there will always be artless excess in any medium. This argument came up with film, and ridiculously enough it is still ongoing with video games: whether or not they are art. Opture is approaching the ultimate medium, and it will certainly face this criticism too. Without great content, opture will be nothing.

Fortunately I have a couple of ambitious plans for feature-length optures ready now — optures with deep emotional and intellectual scope. Furthermore, they both play with the opture form reflexively.

For more VR Vocab, check out this video:

 
