User:Jg1204

From Wikipedia, the free encyclopedia

Machine Learning for Computer Animation and Machine Learning for Planning

Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning is closely related to (and often overlaps with) computational statistics, a discipline that also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory, and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible.

Computer animation, or CGI animation, is the process used for generating animated images. The more general term computer-generated imagery encompasses both static scenes and dynamic images, while computer animation refers only to moving images. Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low-bandwidth, and faster real-time renderings. Sometimes the target of the animation is the computer itself; sometimes it is another medium, such as film.

Introduction

Machine learning has experienced explosive growth in the last few decades. It has achieved sufficient maturity to provide efficient techniques for a number of research and engineering fields, including machine perception, computer vision, natural language processing, syntactic pattern recognition, and search engines. Machine learning provides a firm theoretical basis upon which to propose new techniques that leverage existing data to extract interesting information or to synthesize more data. It has been widely used in computer animation and related fields, e.g. rendering, modeling, geometry processing, and coloring. Based on these techniques, users can efficiently utilize graphical materials such as models, images, and motion capture data to generate animations in practice, including virtual reality, video games, animated films, and sports simulations.

Background

Unfortunately, the computer animation community has not utilized machine learning as widely as the computer vision community has. Nevertheless, we can expect that integrating machine learning techniques into computer animation will yield more effective methods. Suppose that users wish to simulate life on the streets of Beijing in ancient China, or in a mysterious alien society. They would have to generate all the 3D models and textures for the world, along with the behaviors and animations of the characters. Although tools exist for all of these tasks, even the most prosaic world can require months or years of labor at this scale. An alternative approach is to create these models from existing data, either designed by artists or captured from the world. In this section, we introduce the idea that fitting models from data can be very useful for computer graphics, and that machine learning can provide powerful tools for doing so.

Consider the problem of generating motions for a character in a movie. The motions can be created procedurally, i.e. by designing algorithms that synthesize motion, or they can be created "by hand" or captured from an actor in a studio. These "pure data" approaches give the highest-quality motions, but at substantial cost in the time and effort of artists or actors. They also offer little flexibility: if it is discovered that the right motions were not captured in the studio, more must be captured. The situation is worse for a video game, where every motion that might conceivably be needed must be captured. Machine learning techniques promise the best of both worlds: starting from a corpus of captured data, we can procedurally synthesize more data in the style of the original. Moreover, we can constrain the synthetic data, for example according to the requirements of an artist. For such problems, machine learning offers an attractive set of tools for modeling the patterns in data, and these data-driven techniques have gained a steadily increasing presence in graphics research.
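The idea of synthesizing new motion in the style of captured data can be sketched with a toy example. The snippet below is illustrative only, with made-up "frames" of joint angles: it builds a motion-graph-style transition table that allows a jump from one frame to the successor of any visually similar frame, then random-walks it to produce a new sequence.

```python
import random

# Toy "motion capture": each frame is a tuple of joint angles (degrees).
frames = [(0, 10), (5, 12), (10, 15), (5, 11), (0, 9), (4, 12)]

def distance(a, b):
    """Euclidean distance between two frames."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_transitions(frames, threshold=3.0):
    """For each frame i, list the frames that may follow it."""
    nxt = {i: [i + 1] for i in range(len(frames) - 1)}  # original order
    for i in range(len(frames) - 1):
        for j in range(len(frames) - 1):
            if i != j and distance(frames[i], frames[j]) < threshold:
                nxt[i].append(j + 1)  # smooth jump: i -> successor of j
    return nxt

def synthesize(frames, length, seed=0):
    """Random-walk the transition graph to produce a new motion."""
    random.seed(seed)
    nxt = build_transitions(frames)
    i, out = 0, [frames[0]]
    for _ in range(length - 1):
        if i not in nxt:       # dead end: restart at the beginning
            i = 0
        i = random.choice(nxt[i])
        out.append(frames[i])
    return out

new_motion = synthesize(frames, length=10)
```

Every frame of the synthesized motion comes from the original data, but the ordering is new, which is the essence of the data-driven approach described above.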

Basic Theory

  • Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
  • Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
  • Reinforcement learning: A computer program interacts with a dynamic environment in which it must achieve a certain goal (such as driving a vehicle), without a teacher explicitly telling it whether it has come close to the goal. Another example is learning to play a game by playing against an opponent.
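As a concrete illustration of the supervised setting, the sketch below fits a simple rule y = a·x + b to toy labelled examples by closed-form least squares and applies it to a novel input. The data and model are illustrative, not taken from any particular animation system.

```python
# Labelled training examples: (input, desired output) pairs from a "teacher".
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# Ordinary least squares for y = a*x + b, in closed form.
n = len(examples)
sx = sum(x for x, _ in examples)
sy = sum(y for _, y in examples)
sxx = sum(x * x for x, _ in examples)
sxy = sum(x * y for x, y in examples)

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
b = (sy - a * sx) / n                          # intercept

# The learned general rule, applicable to novel inputs.
predict = lambda x: a * x + b
```

Here the "teacher" supplied outputs following y = 2x + 1, and the learned rule recovers that mapping exactly.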

Closely Related Topics and Subtopics

1. Semi-supervised Learning

In computer animation, interaction between the computer and the artist has already been demonstrated to be an efficient workflow. Some researchers have explored the close relationship between semi-supervised learning (SSL) and computer animation; for example, in character animation, Ikemoto observed what the artist would like for given inputs. Using these observations as training data, input-output mapping functions can be fitted and then generalized from the training data to novel inputs. The artist can provide feedback by editing the output; the system uses this feedback to refine its mapping function, and this iterative process continues until the artist is satisfied. This framework has been applied to address important character animation problems.
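The iterative artist-in-the-loop scheme described above can be sketched as follows. The code is a hypothetical toy: the "artist" is simulated by a callback, the mapping is a 1-nearest-neighbour lookup, and corrections are folded back into the training set until no edits remain.

```python
def fit(pairs):
    """1-nearest-neighbour mapping; later pairs win ties (newest feedback)."""
    def mapping(x):
        return min(reversed(pairs), key=lambda p: abs(p[0] - x))[1]
    return mapping

def refine(pairs, novel_inputs, artist_edit, max_rounds=10):
    """Predict, let the artist edit, retrain; stop when no edits remain."""
    pairs = list(pairs)
    for _ in range(max_rounds):
        mapping = fit(pairs)
        edits = [(x, artist_edit(x, mapping(x)))
                 for x in novel_inputs
                 if artist_edit(x, mapping(x)) != mapping(x)]
        if not edits:                # the artist is satisfied
            return mapping
        pairs.extend(edits)          # fold feedback into the training data
    return fit(pairs)

# Simulated artist who always wants output = 2 * input.
artist = lambda x, y: 2 * x
mapping = refine([(1, 2), (2, 5)], novel_inputs=[2, 3], artist_edit=artist)
```

After one round of feedback the refined mapping agrees with the artist on the novel inputs while preserving the original training pair (1, 2).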

2. Q-learning

Q-learning is a model-free reinforcement learning technique. Specifically, Q-learning can be used to find an optimal action-selection policy for any given (finite) Markov decision process (MDP). It works by learning an action-value function that gives the expected utility of taking a given action in a given state and following the optimal policy thereafter. A policy is a rule that the agent follows in selecting actions, given the state it is in. When such an action-value function is learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. Additionally, Q-learning can handle problems with stochastic transitions and rewards without requiring any adaptations. It has been proven that for any finite MDP, Q-learning eventually finds an optimal policy, in the sense that the expected value of the total reward over all successive steps, starting from the current state, is the maximum achievable.
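A minimal tabular Q-learning example on a toy corridor world illustrates the update rule described above; the states, rewards, and hyperparameters are illustrative, not tuned.

```python
import random

# A toy corridor: states 0..4, actions -1 (left) and +1 (right),
# reward 1.0 only for reaching the goal state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic environment dynamics with clamped walls."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for _ in range(500):                 # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: no model of the environment needed.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state.
        target = r + (0.0 if done else GAMMA * max(Q[(s2, act)] for act in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Read off the greedy policy: the highest-valued action in each state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
```

After training, the greedy policy walks right toward the goal from every non-terminal state, exactly the "select the action with the highest value" construction described above.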

Real-World Applications

Correspondence construction for automatic cartoon generation

Correspondence construction between objects in keyframes is a precondition for inbetweening and coloring in cartoon animation production. Since each frame of an animation consists of multiple layers, objects are complex in shape and structure; existing shape-matching algorithms, designed for simple structures such as a single closed contour, therefore cannot perform well on objects constructed from multiple open contours. Yu et al. proposed a semi-supervised learning method for complex object correspondence construction. In particular, the method constructs local patches for each point on an object and aligns these patches in a new feature space, in which correspondences between objects can be detected by subsequent clustering. For local patch construction, pairwise constraints that indicate corresponding points ("must-link") or non-corresponding points ("cannot-link") are introduced by users to improve the performance of the correspondence construction. This kind of input is conveniently available in animation production. Based on the above analysis, semi-supervised learning (SSL) is an appropriate technique for designing novel tools for computer animation.
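The must-link / cannot-link supervision described above can be illustrated with a toy constrained-clustering sketch. This is a simplified, COP-k-means-style assignment on 1-D points, not the actual method of Yu et al.: must-linked points are merged via union-find, and each group is then assigned to the nearest cluster center that violates no cannot-link (the sketch assumes the constraints are satisfiable).

```python
def cluster(points, k, must_link=(), cannot_link=()):
    n = len(points)

    # Union-find: merge must-linked points into groups.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in must_link:
        parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)

    # Seed k centers at spread-out positions of the sorted points.
    srt = sorted(points)
    centers = [srt[i * (n - 1) // (k - 1)] for i in range(k)] if k > 1 else [srt[0]]

    # Assign each group to the nearest center that breaks no cannot-link.
    label = {}
    for root, members in groups.items():
        mean = sum(points[i] for i in members) / len(members)
        for c in sorted(range(k), key=lambda c: abs(centers[c] - mean)):
            partners = [b if find(a) == root else a
                        for a, b in cannot_link if root in (find(a), find(b))]
            if any(label.get(find(p)) == c for p in partners):
                continue             # a cannot-link partner already sits in c
            label[root] = c
            break
    return [label[find(i)] for i in range(n)]

points = [0.0, 0.2, 0.9, 1.0, 0.5]
labels = cluster(points, 2, must_link=[(0, 1)], cannot_link=[(3, 4)])
```

The two constrained points end up together (must-link) or apart (cannot-link) even when raw distances alone would suggest otherwise, which is the value the user's input adds.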

Q-learning in video-based rendering

Video sprites are a special type of video texture. Instead of storing whole images, the object of interest is separated from the background and the video samples are stored as a sequence of alpha-matted sprites with associated velocity information. They can be rendered anywhere on the screen to create a novel animation of the object. Schödl and Essa present methods to create such animations by finding a sequence of sprite samples that is both visually smooth and follows a desired path. To estimate visual smoothness, they train a linear classifier to estimate visual similarity between video samples. If the motion path is known in advance, they use beam search to find a good sample sequence; the motion can also be specified interactively by precomputing the sequence cost function using Q-learning.

Before the advent of 3D graphics, the idea of creating animations by sequencing 2D sprites showing different poses and actions was widely used in computer games; almost all characters in fighting and jump-and-run games were animated in this fashion, and game artists had to generate all these animations manually. Video textures reorder the original video samples into a new sequence. If the samples are not in their original order, we have to ensure that transitions between out-of-order samples are visually smooth.
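Finding a visually smooth sprite sequence, as described above, can be sketched as a beam search over a transition-cost matrix; the costs below are toy stand-ins for the output of the learned similarity classifier.

```python
def beam_search(cost, length, beam_width=3):
    """Return a low-cost sample sequence of the given length."""
    n = len(cost)
    beam = [(0.0, [s]) for s in range(n)]           # (total cost, sequence)
    for _ in range(length - 1):
        candidates = [
            (c + cost[seq[-1]][s], seq + [s])       # extend by one sample
            for c, seq in beam
            for s in range(n)
        ]
        candidates.sort(key=lambda item: item[0])
        beam = candidates[:beam_width]              # keep only the best few
    return min(beam, key=lambda item: item[0])[1]

# Toy transition costs between 3 sprite samples (0.0 = perfectly smooth).
cost = [
    [9.0, 0.0, 5.0],   # from sample 0 to samples 0, 1, 2
    [9.0, 9.0, 0.0],   # from sample 1
    [0.5, 9.0, 9.0],   # from sample 2
]
sequence = beam_search(cost, length=6)
```

With these toy costs the search cycles through the samples in their smoothest order, 0 → 1 → 2 → 0 → 1 → 2, paying the small loop-back cost only once.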

See also

  • A* search algorithm
  • Bayes' theorem
  • Machine learning

References

  1. Hertzmann, A. (2003, October). Machine learning for computer graphics: A manifesto and tutorial. In Pacific Conference on Computer Graphics and Applications (pp. 22-26).
  2. Chuang, E. S., Deshpande, F., & Bregler, C. (2002). Facial expression space learning. In Proceedings of the 10th Pacific Conference on Computer Graphics and Applications (pp. 68-76). IEEE.
  3. Hughes, J. F., Van Dam, A., Foley, J. D., & Feiner, S. K. (2014). Computer graphics: Principles and practice. Pearson Education.
  4. Michels, J., Saxena, A., & Ng, A. Y. (2005, August). High speed obstacle avoidance using monocular vision and reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning (pp. 593-600). ACM.
  5. Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.). (2013). Machine learning: An artificial intelligence approach. Springer Science & Business Media.
  6. Minton, S. (Ed.). (2014). Machine learning methods for planning. Morgan Kaufmann.
  7. Szarowicz, A. Reinforcement learning techniques for action generation using inverse kinematics.
  8. Schödl, A., & Essa, I. A. (2000). Machine learning for video-based rendering.
  9. Yu, J., & Tao, D. Modern machine learning techniques. In Modern Machine Learning Techniques and Their Applications in Cartoon Animation Research (pp. 63-104).