

In the first week, I reviewed the brief, met students from different majors, and presented my previous work in an effort to find a few collaborators who could each contribute in their own field. Fortunately, I met a student from the VFX course who had a strong command of UE4, and we decided to collaborate.
Our primary concept is to combine Maya, the traditional 3D animation software, for modelling, rigging, and animation with UE4 for rendering the final image. UE4 includes a very powerful real-time rendering system that can produce a wide range of rendering effects, including realistic, stylized, and 2D rendering styles.
Furthermore, I met three students from the same course (3D Computer Animation), and we all agreed to collaborate and produce an animated short film as our final project for this unit.
The primary goal of my four years of university education is to continue working in the animation and game industries after graduation. I will focus on character and creature rigging: how to construct the skeleton and muscle systems for a creature character, and how to distribute skin weights. I am passionate about bringing characters to life, and rigging is the foundation for doing so. It is the aspect of my animation experience that I found most interesting during my university years. A good rig gives the animator more room to manoeuvre and produces more engaging movement.
I must be proficient with film and animation software such as Maya, and familiar with the number and placement of bones in the skeletons of various creatures as well as the fundamental movement patterns of characters. I must also be comfortable with basic scripting and expressions in order to build additional animation controls that make the animator’s work easier.
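As an illustration of the kind of simple scripting involved in weight distribution, here is a minimal sketch in plain Python (not Maya-specific; the function name and example values are hypothetical) of normalizing a vertex’s skin weights so its joint influences sum to one:

```python
def normalize_weights(weights):
    """Normalize one vertex's joint-influence weights so they sum to 1.

    In skinning, each vertex is influenced by several joints; renormalizing
    after painting keeps the blended deformation from drifting off the mesh.
    """
    total = sum(weights)
    if total == 0:
        # No influence painted yet: fall back to an even distribution.
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]

# Raw painted weights for one vertex across three joints (sum is 0.9):
print(normalize_weights([0.5, 0.3, 0.1]))  # each entry divided by 0.9
```

In Maya itself this kind of renormalization is typically done through the skinning tools rather than by hand, but the underlying arithmetic is the same.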
My final FMP project will most likely be a three-minute-long, cartoon-style animated short film.
Yes, it’s important. My current thinking is that my thesis will investigate animation rendering, including real-time versus offline rendering and GPU versus CPU rendering. Rendering plays a significant role in my FMP animated short, so I believe it is crucial: I must choose a renderer and rendering method that will significantly reduce my working time while improving the quality of my work.
Currently, rendering and animation are the areas I wish to investigate. I will seek out additional data and resources to determine what I want to do.
This semester’s studies and research have provided me with a new perspective on the interplay between motion capture and traditional keyframe animation. I first thought that by studying motion capture, I could save a great deal of time in animation, which is why I decided to study the topic in the first place.
After reading some literature and understanding the practicalities, I realized that although motion capture can save a great deal of time in animation production, several additional production steps are required to ensure that the character’s movements are accurate and of high quality; simply exporting the motion data is not enough.
This semester has also provided me with a general understanding of writing an academic report, which was a significant challenge for me because I had rarely completed this type of academic assignment in the past.
This semester, I mainly worked on three projects. The first was a weekly project in 3D Computer Animation Fundamentals that included modeling, rigging (blendshapes), animating, and rendering. The most challenging parts were blendshape rigging and character animation: creating the blendshapes required us to consider facial and muscular anatomy, which I had rarely touched before, and when animating characters we had to consider timing, squash, and stretch, which appear simple but actually test an animator’s ability and experience.
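The blendshape mechanism behind that rigging work is, at its core, simple arithmetic: each target shape stores per-vertex offsets from the base mesh, and a weight blends them in. A minimal sketch in plain Python (the vertex data and names are illustrative, not from the actual project):

```python
def apply_blendshapes(base, targets, weights):
    """Blend a base mesh toward one or more target shapes.

    base:    list of (x, y, z) vertex positions for the neutral face
    targets: list of target meshes (same vertex count and order as base)
    weights: one weight per target, typically in [0, 1]

    Each vertex becomes: base + sum_i weight_i * (target_i - base)
    """
    result = []
    for v, vertex in enumerate(base):
        blended = list(vertex)
        for target, w in zip(targets, weights):
            for axis in range(3):
                blended[axis] += w * (target[v][axis] - vertex[axis])
        result.append(tuple(blended))
    return result

# A one-vertex "face": weight 0.5 moves it halfway toward the smile target.
base = [(0.0, 0.0, 0.0)]
smile = [(0.0, 2.0, 0.0)]
print(apply_blendshapes(base, [smile], [0.5]))  # [(0.0, 1.0, 0.0)]
```

Maya’s blendShape node performs this same weighted-offset blend per vertex; the anatomical difficulty lies in sculpting believable targets, not in the math.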
As an animator with three years of undergraduate animation experience and one year of work experience, I also created a personal project that included modeling, texturing, animating, visual effects, lighting, and rendering. The challenge of this project was creating a variety of plant model assets, distributing them strategically, and building a comprehensive landscape. Time management was also an essential issue for me: because of the time constraint, some sections, such as the character’s actions, textures, and lighting, were not flawless, which is an aspect I need to reflect on in the future.
The third project was a group project in which four people created an animated short film. My contributions were to model and texture a little-girl character, rig the two characters in the animation, and animate one of them.
Motion Capture
Motion Capture vs. Animation from Observation:
Motion capture is different from creating animation by observing people move. For one thing, motion capture is used for purposes other than animation, such as biomedical analysis, surveillance, sports performance analysis, and human–computer interaction. Each of these tasks is at once different from and similar to the problems that arise when creating animation. The first step in each is to make the observations from which the motion will be reconstructed, e.g., to record the movements. Many of the techniques used in animation come from the fields of biomechanics or medicine.
Motion Capture vs. Animation:
Online (real-time) motion capture is unique in that it is the only technique that can drive a character live. For off-line production, however, motion capture is just one of several ways to generate animated motion. Knowing the alternatives helps to show where motion capture is most useful and what it must be able to do to serve as a method for creating animated motion. Taxonomies of motion creation, such as those used by game designers, usually divide the techniques into three groups: manual specification, procedural methods and simulation, and motion capture.
Motion Capture for Animation:
Motion Editing and Motion Capture:
Motion capture should, in principle, produce excellent motion, so why would it need to be changed? If everything worked correctly, motion capture data would faithfully represent the desired performance. Yet a large part of the discussion around motion capture concerns how to edit movements once we have them.
Computer Vision and Motion Capture:
A growing number of researchers in the computer vision community are working on analysing images of people moving. This technology has many applications, such as surveillance, user interface input, and biomechanical analysis. Traditional motion capture techniques from these other fields have been adopted in animation, and video analysis is an interesting tool for creating animated motion.
History of Computer Character Animation Motion Capture:
1980-1983: Simon Fraser University — Goniometers
Around the same time, biomechanics labs began to employ computers to examine human motion, and their techniques and apparatus began to influence the computer graphics community. Tom Calvert, a professor of kinesiology and computer science at Simon Fraser University, attached potentiometers to a body in the early 1980s and utilised the output to drive computer-animated figures for choreographic research and clinical assessment of movement abnormalities. To monitor knee flexion, for instance, they strapped a type of exoskeleton to each leg and positioned a potentiometer next to each knee so that it would bend in tandem with the knee. The analogue output was then converted to digital form and fed to the computer animation system. Their animation method employed the motion capture apparatus in conjunction with Labanotation and kinematic specifications to precisely define character motion.
1988: deGraf/Wahrman — Mike the Talking Head
In 1988, deGraf/Wahrman created “Mike the Talking Head” for Silicon Graphics to demonstrate the real-time capabilities of the 4D machines. Mike was operated with a custom-built controller that allowed a single puppeteer to manipulate the lips, eyes, expression, and head position of the character’s face. The Silicon Graphics hardware provided real-time interpolation between the performer-controlled facial expressions and head geometry. Mike was performed live at that year’s SIGGRAPH film and video show. The live performance demonstrated conclusively that the technology was ready for use in production contexts.
1989: Kleiser-Walczak — Dozo
Kleiser-Walczak created Dozo, a non-real-time computer animation of a woman singing and dancing in front of a microphone for a music video, in 1989. They opted to employ motion capture techniques to achieve realistic human movement. On the basis of Kleiser’s motion capture experiments at Digital Productions and Omnibus (two now-defunct computer animation production companies), they selected an optically based system from Motion Analysis that used multiple cameras to triangulate the images of small pieces of reflective tape placed on the body. The output is the three-dimensional trajectory of each reflector in space. As discussed previously, one of the issues with this type of system is the difficulty of tracking markers that are occluded from the cameras. This required a very time-consuming post-process for Dozo. Fortunately, some modern systems are beginning to handle this in software, greatly accelerating motion capture.
1992: Brad deGraf — Alive!
After the success of deGraf/Wahrman’s Mike the Talking Head, Brad deGraf continued the work on his own, creating a real-time animation system now known as Alive! For one Alive! character, deGraf created a customised hand device with five plungers operated by the puppeteer’s fingers. The device was used to control the facial expressions of a computer-generated, friendly talking spacecraft, which advertised its “parent” company at trade shows in a manner similar to Mike.
Today: Many players using commercial systems
Ascension, Polhemus, SuperFluo, and others have launched commercial motion tracking systems for computer animation during the past few years. In addition, animation software companies such as SoftImage have integrated these systems into their products, offering “off-the-shelf” performance animation systems. Although numerous issues in human motion capture remain to be resolved, the technique is now firmly established as a viable option for computer animation production. Undoubtedly, as the technology advances, motion capture will become one of the animator’s fundamental tools.