The claim is that within the next decade, video game engines will be used in the filmmaking process to eliminate post-production altogether. Lucasfilm is making this statement in light of rapidly progressing video game technology. CG effects companies and game developers essentially use the same techniques to produce what they do, especially when it comes to motion capture. So why not combine already existing methodologies and see what happens? The idea is similar to how the movie Rango was produced, but much more advanced. If you've seen the behind-the-scenes videos from that film, some of this tech will be old news to you -- but how it is being used on the fly isn't, and that's the really cool stuff.
Virtual or partially virtual sets are the norm these days, but it wasn't until a few years ago that a director could take a device and physically walk around the created world. Now this has been combined with game engines to allow the director to interact with and control the virtual elements of that world. Actors would be able to see themselves as the virtual characters they are portraying in an instant. Digital costume changes would be nothing more than the click of a button. All of this is truly incredible when you consider that traditional rendering times for one frame of a feature film start in hours, not seconds. Rendering on systems like what Lucasfilm is developing clocks in at 24 frames per second. I'm no math whiz, but that seems just a tiny bit faster.
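To put that "tiny bit faster" in rough numbers, here's a back-of-the-envelope comparison. The two-hour-per-frame figure is an illustrative assumption, not a number from Lucasfilm:

```python
# Illustrative comparison: a traditional offline render at an assumed
# 2 hours per frame versus a real-time engine running at 24 fps.

offline_seconds_per_frame = 2 * 60 * 60   # assumed: 2 hours per frame
realtime_seconds_per_frame = 1 / 24       # 24 frames per second

speedup = offline_seconds_per_frame / realtime_seconds_per_frame
print(f"Real-time rendering is roughly {speedup:,.0f}x faster per frame")
```

Even if the offline figure is off by an order of magnitude either way, the gap is still staggering.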
The word is that Lucasfilm would like to take this to the point where people could actually interact with the movies they watch at home. I'm not totally on board with that one, but it doesn't take away from the brilliance of this tech demo.
Who knows what's next.
Via TheInquirer
I was under the impression that this was how AVATAR and Spielberg's Tintin were made too. I specifically remember reading that similar technology Cameron and co. developed during AVATAR, which did just this, was what sold Spielberg on doing Tintin in the first place. Cool nonetheless.
They had mocap, but it was not in real time like this tech demo. This raises the question: what's going to happen to all the artists who do the post work for movies?
I'm sure they will still need to develop the CG prior to production. Plus, the roles of editors and sound designers will remain the same.
To answer that question: post-production fine-tuning and tweaks will still be needed, from 2D effects work and compositing to 3D lighting and surface adjustments. There will probably be less need for big-iron work, but it's still way too early to call the time of death for our jobs.
That's not true. http://screenrant.com/crazy-3d-technology-james-cameron-avatar-kofi-3367/
It reads like a precursor to this very tech. ILM was one of the companies that worked on AVATAR; I'm sure they had a hand in this back then, or at least saw some of it.
Really, what I mean is they didn't have fully rendered mocap.
That is simply a-m-a-z-i-n-g.
These likely aren't final production-quality renders. Just like in video games, rendering quality is lowered based on the graphics hardware's capabilities in order to raise the frame rate: bitmap texture detail/quality, 3D mapping, dynamic range, viewing distance, virtual lighting, particle count, etc. are all reduced to some degree -- barely noticeable sometimes, but a massive relief to the graphics hardware.
Final production-level renders are almost indistinguishable from real life these days -- and that still takes days of rendering, even when outsourced to render farms.
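The tradeoff described above can be sketched as a pair of quality presets. The preset names, settings, and thresholds here are purely illustrative assumptions, not taken from any real engine:

```python
# Toy sketch of the quality-vs-frame-rate tradeoff: two hypothetical
# renderer presets. All names and values are made up for illustration.

PRESETS = {
    "realtime_preview": {
        "texture_resolution": 1024,   # lower-res bitmap textures
        "particle_count": 5_000,      # far fewer particles
        "lighting": "baked",          # cheap precomputed lighting
        "view_distance_m": 200,       # short draw distance
    },
    "final_render": {
        "texture_resolution": 8192,   # full-quality textures
        "particle_count": 2_000_000,  # dense particle effects
        "lighting": "path_traced",    # expensive offline lighting
        "view_distance_m": 10_000,    # full scene visible
    },
}

def pick_preset(target_fps: float) -> str:
    """Crude heuristic: interactive frame rates force the cheap preset."""
    return "realtime_preview" if target_fps >= 24 else "final_render"

print(pick_preset(24))      # interactive playback -> realtime_preview
print(pick_preset(0.001))   # overnight render -> final_render
```

The point of the sketch: the on-set imagery and the theatrical frames can come from the same scene data, with only the preset differing.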
All things considered, this was bound to happen. It's been a long time coming.
This is awesome. Imagine 3D streaming glasses over your eyes with this kind of CG rendering -- you could happily live in a virtual world, and make precise movements in all your games.
Actors might be able to "work from home" :)
You could run on a treadmill and see yourself running through a forest :)
You can sleep with... oops!! :P