Abstract
Current techniques for
generating animated scenes involve either videos (whose resolution is limited)
or a single image (which requires a significant amount of user interaction). In
this project, we describe a system that allows the user to quickly and easily
produce a compelling-looking animation from a small collection of high
resolution stills. Our system has two unique features. First, it applies an
automatic partial temporal order recovery algorithm to the stills in order to
approximate the original scene dynamics. The output sequence is subsequently
extracted using a second-order Markov Chain model. Second, a region with large
motion variation can be automatically decomposed into semi-independent regions such that their
temporal orderings are softly constrained. This is to ensure motion smoothness
throughout the original region. The final animation is obtained by frame
interpolation and feathering. Our system also provides a simple-to-use interface
to help the user fine-tune the motion of the animated scene. Using our
system, an animated scene can be generated in minutes. We show results for a
variety of scenes.
Implementation
A single picture conveys a
lot of information about the scene, but it rarely conveys the scene’s true
dynamic nature. A video effectively does both but is limited in resolution.
Off-the-shelf camcorders can capture videos with a resolution of 720 × 480 at
30 fps, but this resolution pales in comparison to those for consumer digital
cameras, whose resolution can be as high as 16 MPixels. What if we wish to
produce a high resolution animated scene that reasonably reflects the true
dynamic nature of the scene? Video textures would be the perfect solution for producing arbitrarily long video sequences, if only very high resolution camcorders existed. Chuang et al.'s
system is capable of generating
compelling-looking animated scenes, but there is a major drawback: Their system
requires a considerable amount of manual input. Furthermore, since the
animation is specified completely manually, it might not reflect the true scene
dynamics. We take a different tack that bridges video textures and Chuang et
al.’s system:
We use as input a small collection of high
resolution stills that (under-)samples the dynamic scene. This collection has
both the benefit of high resolution and some indication of the dynamic
nature of the scene (assuming that the scene has some degree of regularity in
motion). We are also motivated by a need for a more practical solution that
allows the user to easily generate the animated scene.
In this paper, we
describe a scene animation system that can easily generate a video or video
texture from a small collection of stills (typically, 10 to 20 stills are
captured within 1 to 2 minutes, depending on the complexity of the scene
motion). Our system first builds a graph that links similar images. It then
recovers partial temporal orders among the input images and uses a second-order
Markov Chain model to generate an image sequence of the video or video texture
(Fig. 1). Our system is designed to allow the user to easily fine-tune the
animation. For example, the user has the
option to manually specify regions where animation occurs independently (which we term independent animated regions, or IARs) so that different time instances of each
IAR can be used independently. An IAR with large motion variation can further
be automatically decomposed into semi-independent animated
regions (SIARs) in order to make the motion appear more
natural. The user also has the option to modify the dynamics (e.g., speed up or
slow down the motion, or choose different motion parameters) through a simple
interface. Finally, all regions are frame interpolated and feathered at their
boundaries to produce the final animation.
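To make the sequencing step concrete, the sketch below (in Python, chosen here for brevity) samples a frame order with a second-order Markov chain over a pairwise dissimilarity matrix of the stills. The specific transition cost, the temperature and back_penalty parameters, and the name sample_sequence are illustrative assumptions, not the paper's actual formulation.

import numpy as np

def sample_sequence(dissim, length, temperature=0.5, back_penalty=1.0, rng=None):
    """Sample a frame-index sequence with a simple second-order Markov chain.

    dissim[a, b] is a pairwise dissimilarity between stills a and b
    (e.g., sum of squared pixel differences), symmetric with a zero
    diagonal.  This is only a sketch: the next frame k is drawn with
    probability decreasing in dissim[j, k], and candidates that are too
    similar to the previous frame i are penalized so the chain does not
    oscillate between two stills.
    """
    rng = rng or np.random.default_rng()
    n = dissim.shape[0]
    scale = np.median(dissim[dissim > 0])           # normalize dissimilarities
    i = int(rng.integers(n))
    j = int(np.argsort(dissim[i])[1])               # nearest neighbor of i
    seq = [i, j]
    for _ in range(length - 2):
        cost = dissim[j] / scale                                   # stay near the current frame
        cost = cost + back_penalty * np.exp(-dissim[i] / scale)    # discourage jumping back toward i
        cost[j] = np.inf                                           # never repeat the same still
        prob = np.exp(-cost / temperature)
        prob /= prob.sum()
        k = int(rng.choice(n, p=prob))
        seq.append(k)
        i, j = j, k
    return seq

The resulting index sequence is what then gets frame interpolated and feathered into the final animation.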
The user needs only a
few minutes of interaction to finish the whole process. In our work, we limit
our scope to quasi-periodic motion, i.e., dynamic textures. There are two key
features of our system. One is the automatic partial temporal order recovery.
This recovery algorithm is critical because the original capture order
typically does not reflect the true dynamics due to temporal undersampling. As
a result, the input images would typically have to be sorted. The recovery algorithm
automatically suggests orders for subsets of stills. These recovered partial
orders provide reference dynamics to the animation. The other feature is the ability to automatically decompose an IAR into SIARs at the user's request and to handle the interdependence among the SIARs. IAR decomposition can greatly reduce
the dependence among the temporal
orderings of local samples if the IAR has significant motion variation that
results in unsatisfactory animation. Our system then finds the optimal
processing order among the SIARs and imposes soft constraints to maintain
motion smoothness across them.
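The text above does not spell out the recovery algorithm, so the following is only a plausible sketch, assuming a greedy nearest-neighbor chaining over the same dissimilarity matrix: each still is linked to its most similar unvisited neighbor, and a new partial order is started whenever the best remaining neighbor is too dissimilar. The gap_threshold parameter and the chaining heuristic are assumptions, not the authors' method.

def recover_partial_orders(dissim, gap_threshold):
    """Greedily chain stills into ordered subsets (partial orders).

    dissim is the same pairwise dissimilarity matrix as above.  A chain
    is closed whenever the nearest unvisited still is farther than
    gap_threshold, which is meant to approximate a temporal gap in the
    undersampled capture.
    """
    n = dissim.shape[0]
    unvisited = set(range(n))
    orders = []
    while unvisited:
        cur = min(unvisited)                    # arbitrary seed for a new chain
        unvisited.remove(cur)
        chain = [cur]
        while unvisited:
            nxt = min(unvisited, key=lambda k: dissim[cur, k])
            if dissim[cur, nxt] > gap_threshold:
                break                           # likely temporal gap: close this partial order
            unvisited.remove(nxt)
            chain.append(nxt)
            cur = nxt
        orders.append(chain)
    return orders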
Preferred Technologies
The solution was created using Microsoft Visual Studio .NET 2005.
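For illustration only, here is a minimal sketch of the final compositing step mentioned above, written in Python rather than the original implementation language: a cross-dissolve between two stills of a region stands in for true frame interpolation, and a distance-transform mask feathers the region into the background. The use of scipy's distance transform, the width parameter, and the function names are assumptions rather than the paper's implementation.

import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_mask(region_mask, width=15):
    """Soft alpha mask: 1 inside the boolean region, falling to 0 over
    roughly `width` pixels outside it.  A Euclidean distance transform
    is one simple way to get the falloff; the paper does not specify
    the exact feathering kernel, so this is an assumption."""
    dist_outside = distance_transform_edt(~region_mask)
    return np.clip(1.0 - dist_outside / float(width), 0.0, 1.0)

def composite_frame(background, still_a, still_b, t, alpha):
    """Cross-dissolve two stills of a region (a stand-in for frame
    interpolation) and feather the result onto the background.
    Images are float arrays of shape (H, W, 3); alpha is (H, W)."""
    blended = (1.0 - t) * still_a + t * still_b
    a = alpha[..., None]                        # broadcast over color channels
    return a * blended + (1.0 - a) * background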