2015.08.07
Live Animation. Part II: Bridging Faceshift and Live2D.
Since the advent of toon shading in the CG industry, real-time renderers have been getting closer and closer to the touch’n’feel that only 2D animation studios used to be able to provide. The balance of 3DCG shortcuts and hacks used to achieve this anime-style narration is heavily challenged by the full range of nuance that facial expression requires, even in its minimal 2D style. The result still falls under the subjectivity of artistic judgement: some will appreciate the vintage-videogame feel of the limitations, others will see a dead end. Today we will look at a few techniques that open the door to a new kind of anime broadcast: Live Animation.
Live animation is closer to puppet-mastering than to state-of-the-art animation. Even though it uses the same technologies as videogames, it differs from in-engine cinematics in several ways. First, it is a live broadcast. Driven by real-time motion capture of both body and facial expression, characters perform as if on a live talk show, in a “studio” that is a virtual space built in a game engine. Nothing in this production pipeline is new; the point is that it can now be done on budget, even by indies or for a one-time promotional project. So let’s start with the motion capture process.
The cost of a MOCAP studio is beyond the scope of live animation. Not only the raw cost but, above all, the time-consuming setup is a bottleneck in a live animation scenario, and collateral bottlenecks are everywhere with a MOCAP studio. They also rely on markers, while live animation relies, from a marketing point of view, on talents lending their voice and charisma to the animated character they embody. This is fine for a big-picture production, but it becomes another story for a live show. Markers are cumbersome, especially if facial expression capture is involved. So to summarize: for a large-scale project like a movie or a triple-A game, the MOCAP studio pipeline is the way it should be done; otherwise it should be avoided. So what is left for Live Animation? For body motion capture there are two options. You can go with a library of baked MOCAP clips deployed in real time using a gamepad or keyboard, much as you would control a third-person videogame (a minimal sketch follows below). While this option is quite limited from a quality point of view, it is by far the safest in the real-time realm. The other is a markerless MOCAP solution. OrganicMotion developed OpenStage, a markerless package with a capture volume large enough for live animation. The technology needs no training or setup, which means anyone entering the volume is captured on the fly. Just keep in mind that the whole show has to stay inside the capture volume, which is on average a 5 x 5 x 3 m box.
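To make the first option concrete, here is a minimal sketch of such a clip-trigger layer: baked clips are bound to gamepad buttons, cross-faded in, and fall back to an idle loop once played out. The clip names, durations and button bindings are hypothetical; in production this logic would sit inside the game engine’s animation system.

```python
# Hypothetical baked MOCAP clips: name -> duration in seconds.
CLIPS = {"idle": 4.0, "wave": 1.5, "bow": 2.0, "laugh": 2.5}

# Hypothetical gamepad bindings for the live operator.
BUTTON_MAP = {"A": "wave", "B": "bow", "X": "laugh"}

class ClipPlayer:
    """Cross-fades from the current baked clip to a triggered one,
    then falls back to the idle loop once the clip has played out."""

    def __init__(self, fade=0.3):
        self.fade = fade          # cross-fade duration in seconds
        self.current = "idle"
        self.previous = None
        self.fade_left = 0.0
        self.clip_time = 0.0

    def trigger(self, button):
        clip = BUTTON_MAP.get(button)
        if clip and clip != self.current:
            self.previous, self.current = self.current, clip
            self.fade_left = self.fade
            self.clip_time = 0.0

    def update(self, dt):
        self.clip_time += dt
        self.fade_left = max(0.0, self.fade_left - dt)
        if self.current != "idle" and self.clip_time >= CLIPS[self.current]:
            # Clip finished: fade back to the idle loop.
            self.previous, self.current = self.current, "idle"
            self.fade_left = self.fade
            self.clip_time = 0.0
        w = 1.0 - self.fade_left / self.fade if self.fade else 1.0
        weights = {self.current: w}
        if self.previous and self.fade_left > 0.0:
            weights[self.previous] = 1.0 - w
        return weights  # clip -> blend weight, fed to the engine's animator

player = ClipPlayer()
player.trigger("A")                # operator presses a button
print(player.update(1 / 60))       # e.g. {'wave': 0.05..., 'idle': 0.94...}
```

The hard limit is obvious: the character can only ever do what was baked, which is exactly why this option is safe but limited.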
For facial expression capture you can also rely on a pre-defined set of expressions, each one triggered with a gamepad or keyboard. The implementation should suppress noise and smooth the transitions using interpolation and tweening techniques (sketched below). This is the way of most live animation systems in game production, but it has the demerit of feeling unnatural and clumsy, and it adds facial control workload on top of the pre-defined body animation system. The problems with facial expression capture are, once again, the markers and the training they require. For a live animation session, a markerless solution brings the double merit of freeing the talent from an ugly geek-tech mask and letting the avatar carry the full nuance of the actor’s performance. Keeping a low-budget perspective in mind, Faceshift provides such a solution: its Kinect-based facial expression tracking is reliable enough to bring the live stream into production.
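The smoothing mentioned above can be as simple as a frame-rate-independent exponential tween toward the selected preset. A minimal sketch, with hypothetical preset names and blendshape weights:

```python
import math

# Hypothetical expression presets: blendshape name -> target weight (0..1).
PRESETS = {
    "neutral":  {"smile": 0.0, "brow_up": 0.0, "mouth_open": 0.0},
    "happy":    {"smile": 0.9, "brow_up": 0.3, "mouth_open": 0.2},
    "surprise": {"smile": 0.1, "brow_up": 1.0, "mouth_open": 0.7},
}

class ExpressionTween:
    """Eases the current blendshape weights toward the selected preset.
    The exponential easing both tweens the transition and low-pass
    filters any jitter in the incoming targets."""

    def __init__(self, half_life=0.15):
        self.half_life = half_life       # seconds to close half the gap
        self.state = dict(PRESETS["neutral"])
        self.target = dict(self.state)

    def set_preset(self, name):          # bound to a gamepad button
        self.target = dict(PRESETS[name])

    def update(self, dt):
        # Frame-rate-independent lerp factor derived from the half-life.
        k = 1.0 - math.exp(-math.log(2.0) * dt / self.half_life)
        for key, goal in self.target.items():
            self.state[key] += (goal - self.state[key]) * k
        return self.state                # feed to the face rig every frame

face = ExpressionTween()
face.set_preset("happy")
for _ in range(10):                      # ten frames at 60 fps
    weights = face.update(1 / 60)
print(weights)
```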
Current solutions are mainly focused on facial animation of a simple 2D illustration. Some handle a flat 2D character, such as MotionPortrait.
© Motion Portrait
Others are more animation-pipeline oriented and require a layered illustration file. Both kinds of solution use a sort of 2.5D mesh-morphing animation to fake a small range of 3D movement. Combined with z-indexed layers and parallax techniques, the final effect can be quite realistic. We found that the Live2D solution allows a high-fidelity expression set when correctly set up and kept within 30 degrees on both the X and Y axes (a toy parallax sketch follows the figure below). Live2D Cubism gets a very close touch’n’feel compared to traditional CG animation techniques. Eye rotation and blinks have their reflections correctly adjusted to the movement, and interpolation is noise-free. This quality does require heavy preparation work at the layer and mesh level, but it is definitely worth it.
© Live2D
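To give a feel for that 2.5D trick, here is a toy sketch of z-indexed parallax: each layer is shifted in proportion to its depth as the head yaws and pitches, clamped to the 30-degree range mentioned above. The layer names, depths and pixel scale are made up for illustration; Live2D itself authors this as artist-tuned mesh deformation keyed on angle parameters, not a formula.

```python
import math

# Hypothetical z-indexed layers: name -> depth relative to the face
# plane (negative = behind it, positive = toward the camera).
LAYERS = {"back_hair": -0.6, "face": 0.0, "nose": 0.25, "front_hair": 0.5}

MAX_ANGLE = 30.0        # degrees; past this the flat layers stop reading as 3D
PIXELS_PER_UNIT = 40.0  # parallax strength, tuned per illustration

def parallax_offsets(yaw_deg, pitch_deg):
    """Returns a per-layer (dx, dy) pixel offset for a given head pose."""
    yaw = max(-MAX_ANGLE, min(MAX_ANGLE, yaw_deg))
    pitch = max(-MAX_ANGLE, min(MAX_ANGLE, pitch_deg))
    offsets = {}
    for name, depth in LAYERS.items():
        # Layers in front of the face plane move with the head, layers
        # behind it move against it -- the classic parallax depth cue.
        dx = math.sin(math.radians(yaw)) * depth * PIXELS_PER_UNIT
        dy = math.sin(math.radians(pitch)) * depth * PIXELS_PER_UNIT
        offsets[name] = (dx, dy)
    return offsets

print(parallax_offsets(20.0, -5.0))
```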
Still in an alpha version with restricted access, Live2D Euclid offers a mixed set of technologies to bring this 2D face animation into a 3D game engine. The 3D part remains state-of-the-art 3D game development, but the face animation implements the 2D Cubism technique inside a local 3D box with a full 360-degree extension. The result is quite convincing. This tech can be combined with markerless MOCAP to create a complete 3D live anime (a bridging sketch follows the figure below). For now the storytelling has to stay within a live sitcom or talk-show format, even if we see no problem in attempting a live action story, which sounds quite promising.
© Live2D
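This is where the bridge in this article’s title lives. Faceshift streams a head pose plus a set of tracked blendshape coefficients every frame, and those have to be remapped onto the Live2D parameter ranges. The sketch below assumes each network frame has already been decoded into a plain dict; the coefficient keys are invented, and the PARAM_* IDs follow the usual Cubism naming but should be checked against the actual stream and model.

```python
def remap(value, in_lo, in_hi, out_lo, out_hi):
    """Clamped linear remap, the workhorse of any rig bridge."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def bridge_frame(fs):
    """Maps one decoded Faceshift frame onto Live2D parameter values.
    `fs` holds head rotation in degrees and blendshape weights in 0..1;
    all key names here are assumptions to verify against the stream."""
    return {
        # Head pose drives the 2.5D angle parameters, clamped to the
        # +/-30 degree range where the flat model holds up.
        "PARAM_ANGLE_X": remap(fs["head_yaw"], -30, 30, -30, 30),
        "PARAM_ANGLE_Y": remap(fs["head_pitch"], -30, 30, -30, 30),
        # Blink coefficients are inverted: the tracker reports how
        # closed the eye is, Live2D wants how open it is.
        "PARAM_EYE_L_OPEN": 1.0 - fs["eye_blink_l"],
        "PARAM_EYE_R_OPEN": 1.0 - fs["eye_blink_r"],
        "PARAM_MOUTH_OPEN_Y": fs["jaw_open"],
        "PARAM_BROW_L_Y": remap(fs["brow_up_l"], 0, 1, -1, 1),
    }

frame = {"head_yaw": 12.0, "head_pitch": -4.0, "eye_blink_l": 0.1,
         "eye_blink_r": 0.15, "jaw_open": 0.4, "brow_up_l": 0.2}
print(bridge_frame(frame))
```

In practice the tracked values would also pass through a smoothing stage, like the tween shown earlier, before hitting the model.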
While broadcast live animation has merit by itself, its potential use in digital signage promotion lifts the technology to a higher level. Traditional animation cannot be shown outside its production ecosystem, whereas live animation can. So think about a live interview broadcast directly on giant public TV screens during a sports event or at a strategic city meeting point.
While body MOCAP can be a challenge, markerless facial tracking lets the talent move freely on site. Without going as far as a full live animation broadcast solution, the same technology can be tweaked to add visual effects on top of a live interview (a last sketch follows below). Somehow the possibilities are still unexplored and fully open to digital experiences not yet seen.
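As a closing sketch of that tweak, the same tracked coefficients can simply gate screen effects instead of driving a full avatar; the thresholds, coefficient keys and effect names below are invented for the example.

```python
# Hypothetical triggers: coefficient key -> (threshold, overlay effect).
TRIGGERS = {"smile": (0.8, "sparkles"), "brow_up": (0.9, "exclamation")}

class EffectGate:
    """Fires an overlay effect when a tracked coefficient crosses its
    threshold, with a cooldown so one grin doesn't spam the screen."""

    def __init__(self, cooldown=2.0):
        self.cooldown = cooldown
        self.timers = {}              # effect -> seconds left on cooldown

    def update(self, coeffs, dt):
        for effect in list(self.timers):
            self.timers[effect] -= dt
            if self.timers[effect] <= 0.0:
                del self.timers[effect]
        fired = []
        for key, (threshold, effect) in TRIGGERS.items():
            if coeffs.get(key, 0.0) >= threshold and effect not in self.timers:
                self.timers[effect] = self.cooldown
                fired.append(effect)
        return fired                  # overlays to spawn this frame

gate = EffectGate()
print(gate.update({"smile": 0.95}, 1 / 60))   # -> ['sparkles']
```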