Thursday, April 29, 2010

Future Optimizations

Things I Obviously Don't Have the Time to Do Now, but Would Make Version 2.0 Much Nicer:

  • Re-write code to use clusters, rather than lattices. Clusters are less visually obstructive.
  • Rather than connect attributes of the FigureDataNode to the Lattice attributes, it'd be better to have the FigureDataNode possess just an attribute which links to the mesh it is relevant to. That way, the Lattice (although, now Cluster) attributes aren't locked. After all, the FigureDataNode is just storage. It holds the sketch information after the sketch is disposed of by the Qt application closing.
  • If each FigureDataNode possesses a mesh, we can pass the DAGPath to a mesh as an argument into our other commands. This would let us operate on multiple meshes in one file.
How I'd Add Skeletons:

  • Make a Lattice around each Cluster. (This could be a problem, given lattices are axis-aligned. This could be solved by keeping the current Lattice structures but making them invisible.)
  • Find its translation matrix. (I actually did that yesterday for adding wires, and then realized I didn't need it.)
  • Create joints at (0, scaleY, 0) and (0, -scaleY, 0).
  • Apply the lattice's transformation matrix to the joints.
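
The transform step above can be sketched in plain Python, independent of Maya. This is a minimal illustration under the assumption that the lattice's transformation is a row-major 4x4 matrix applied to points with w = 1; the function names are mine, not Maya API calls.

```python
def apply_transform(matrix, point):
    """Apply a row-major 4x4 transformation matrix to a 3D point (w = 1)."""
    x, y, z = point
    rows = [r[0] * x + r[1] * y + r[2] * z + r[3] for r in matrix]
    w = rows[3]  # usually 1.0 for affine lattice transforms
    return (rows[0] / w, rows[1] / w, rows[2] / w)

def joint_positions(lattice_matrix, scale_y):
    """World-space joints at the top and bottom of the lattice."""
    top = apply_transform(lattice_matrix, (0.0, scale_y, 0.0))
    bottom = apply_transform(lattice_matrix, (0.0, -scale_y, 0.0))
    return top, bottom
```
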
So much I've learned, not enough time to make what I want to. And there's a lot of trash accumulating on my desk...

Wire Deformers

Hey, guess what doesn't work! Wire deformers!

Wednesday, April 28, 2010

Tasks, Everywhere! Tasks!

Caffeinated. Very caffeinated.

I'm working on implementing wires, but for now, cleaning up my to-do lists...

Major things:
  • Add a Maya command to attach wire deformers, based on projections from the lattice boxes.
  • Add a Maya command to rig and bind a skeleton to the mesh, based on the center points of the top and bottom faces of each lattice.
  • Write the light behavior equations for the tissue layer shader. (I have to do this anyway.)
  • Attach tissue layer shader to model and write a MEL script GUI to allow editing of pigments and layers.
Minor things:
  • Find out how to make the FigureDataNode-Lattice connection two-way. This may not be possible, in which case the connection will simply be broken.
  • Divide figure into more parts to accommodate the Thalmann measurements: neck, waist, lower arms, lower legs, hands, and feet. However, I am not sure if they should be specifically tagged or not.
  • Re-implement properly scaled translation and rotation of lattices, around the correct pivot point.
  • Re-implement translation and rotation based on the slope of the drawn curve at certain points.
  • Drawing interface should erase curves. At the moment, it just paints them over white.
  • Drawing interface should use an ink drying metaphor, where multiple nearby strokes are averaged into one.
  • Drawing interface should not assume strokes are drawn in a particular order or direction.
  • Accommodate multiple figures and selection.
  • Re-implement things with clusters, not lattices.
  • Clean up C++ code and node attributes, so it's not calling MEL commands.

Tuesday, April 27, 2010

Scaling Figure Sections.

Been working on physics-based animation this weekend, but here is current progress with the sketch modeling. I turned off affecting translation, and scaling is now relative to the original size of the imported tagged mesh. This way, the size of the sketch dimensions does not affect the model, so it will still be appropriate for other assets created in the same world space.

I think I need to re-tag the skull area so it doesn't contain the neck, but I like how things are looking, especially with the hips. However, I am concerned with the rotation, especially of the limbs. It should be at the pivot point of the shoulder, not the middle of the arm.

Unless I can think of a way to make the relationship two-way, I think - despite having gone through the major trouble of connecting lattices to the FigureDataNode - I should break the connections. This way, the model can be edited manually by lattice.


Because I almost lost the paper I wrote these ideas down on, I am now going to make two impossibly long to-do lists, sorted from most to least feasible. First, major things, which add to the overall concept of the project:
  • Add a Maya command to attach wire deformers, based on projections from the lattice boxes.
  • Add a Maya command to rig and bind a skeleton to the mesh, based on the center points of the top and bottom faces of each lattice.
  • Write the light behavior equations for the tissue layer shader. (I have to do this anyway.)
  • Attach tissue layer shader to model and write a MEL script GUI to allow editing of pigments and layers.
Now minor things, which aren't necessary to the project concept, but help refine current ideas it poses:

  • Find out how to make the FigureDataNode-Lattice connection two-way. This may not be possible, in which case the connection will simply be broken.
  • Divide figure into more parts to accommodate the Thalmann measurements: neck, waist, lower arms, lower legs, hands, and feet. However, I am not sure if they should be specifically tagged or not.
  • Re-implement properly scaled translation and rotation of lattices, around the correct pivot point.
  • Re-implement translation and rotation based on the slope of the drawn curve at certain points.
  • Drawing interface should erase curves. At the moment, it just paints them over white.
  • Drawing interface should use an ink drying metaphor, where multiple nearby strokes are averaged into one.
  • Drawing interface should not assume strokes are drawn in a particular order or direction.
  • Show and hide lattices and wires on different layers.
  • Accommodate multiple figures and selection.
  • Clean up C++ code and node attributes, so it's not calling MEL commands.
However, given my time limit and need for proof of concept, I'm solely going to focus on the wire system for now.

Wednesday, April 21, 2010

Tagged Mesh

I was feeling a little under the weather tonight, so I just properly tagged the Nancy model. Now if I need a model, I don't need to tag it again, I can just load this directly. It also only took me a half hour to do the tagging, proving how efficient this system can be despite its simplicity. My GUI, however, is still ugly.

As intelligently requested, here's the tagged model and the sketch which resulted in the figure. Minus the rotation pivot which is causing awkward translation of the limbs... this is right. I really like what it did on the ribcage-pelvis area.


Before anything else, though, I feel I need to scale the figure down to match the sketch proportions before sketch-stylization, since right now it's rather large (say, 40 world units while the sketch canvas is 10).
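
The fix amounts to one uniform scale factor; the numbers below come from this post (a roughly 40-unit figure against a 10-unit canvas) and the function name is just illustrative:

```python
def figure_to_canvas_scale(figure_height, canvas_height):
    """Uniform scale factor that fits the figure to the sketch canvas."""
    return canvas_height / figure_height

# e.g. a 40-world-unit figure against a 10-unit canvas shrinks by 0.25
```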

I then don't know if I should focus on wires (more proof of concept) or making the sketch system more robust (handling erasure, multiple strokes) or making the mesh distortions smoother.

Tuesday, April 20, 2010

I should make zombies.

The GUI is in this picture, and well, it seems to be working. This isn't properly tagged (so some vertices weren't recognized in the groups). I now need to fix the translation on the limb boxes.

Oh, god, it looks so gross.


I feel posts are a good way to tell myself what to do next, so once I get this to be less horribly distorting: adding wires and adding bones. (And that, you know, shader and that SIMBICON implementation for my other classes.)

... so gross.

Scaling Issues

Joe says I blog in spurts. This is true. Anyhow, I made a GUI (not pictured here). It loads a model (a default model if unspecified), lets the user tag different areas (if they are not tagged already, or re-tag them), and manipulates the model based on the sketch. The scale between the figure and the sketch is off...

Also, need to fix how it's putting each new lattice inside the previous one...

Monday, April 19, 2010

Multiple Set Selection

You can now define multiple sets on a single mesh and connect the FigureNode parameters to each set and distort each set. I have yet to focus on soft selection, though.


So far, I've been defining the sets by name manually. I plan to write a small GUI tonight which allows the user to load in an OBJ model and define the sets. I need to do this so that the set naming convention I've been using remains constant (or else the plug-in won't work, since Maya does selection by name). I hope to add a load/save aspect to the GUI as well, which lets you load an .ma file if you've already defined the sets. (Sets can also be defined through Maya's existing interface).

Problems I foresee in the future: if a loaded mesh is edited in such a way as to add vertices, the sets would have to be re-defined to include these new vertices.

Sunday, April 18, 2010

Set Selection

Now, a user loads in a model (any model). I need to write a GUI which allows the user to label regions of vertices with a "Set" that follows a certain naming convention ("skullSet" "ribcageSet"... etc). Then, a lattice deformer is applied to that region. The node with the figure data connects to that lattice deformer, translating, rotating, and scaling it appropriately.

I spent all day trying to do it in C++ to no avail (because Maya's default lattice deformers are actually a small network of nodes - I was able to do it to an entire object, not a labeled set region :( ), so I hooked it together mostly in MEL.

Thursday, April 15, 2010

Invisible Progress

It doesn't look any different, so no screenshots for today, but here's a list of what I did:

Node structure now much much more manageable. Grouped floats into vectors.
Nodes link automatically, so no more need for MEL.
Rotations now take the signed angle into account.
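
One way to compute such a signed angle for 2D sketch vectors (a sketch of the idea, not the actual plug-in code): atan2 of the cross product against the dot product yields an angle in (-pi, pi], signed by winding direction.

```python
import math

def signed_angle(a, b):
    """Signed angle in radians from 2D vector a to b, counter-clockwise positive."""
    cross = a[0] * b[1] - a[1] * b[0]  # z-component of the 2D cross product
    dot = a[0] * b[0] + a[1] * b[1]
    return math.atan2(cross, dot)
```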

Overall, I have a little better understanding of MDagPath and MDGModifier for node work in C++. I still need to, however, make it so that translations occur with the shoulder as the pivot point. After that, my tasks are now:

Load and segment the body mesh.
Attach wire deformers.
Generate a skeleton.

I also got 9 hours of sleep. How unheard of. But it means I can think again.

Tuesday, April 13, 2010

Now, with Vectors!

More of a reminder to myself as to what I need to do tomorrow, but I re-did everything with vectors, and thus need to re-implement the data connections to the plugs of my cube objects.

I also need to figure out how to co-ordinate the rotation and translation of the limbs. That was too much math for me just now.


During the review, Norm brought up that the cubes themselves could be treated as deforming controls for the artist. No duh, that's ingenious.

A Full Person...?

And we almost have a box-man. Something is off with my ability to calculate translation for the limbs. This is because the pivot point is automatically set to be the center of the object. I should be able to figure that out, but I need more sleep.


Also, thanks to some information from Aline, I can re-organize my FigureDataNode to use vectors in place of floats, so I have fewer arguments to deal with.

Monday, April 12, 2010

Saving Strokes and the Assumptions I Have Made To Do So

In order to create the template for this, I had to make a number of unfounded assumptions about how artists draw the figure. Most are drawn from personal experience. They won't work in all cases, but a patient enough artist can adapt to them.

In future versions, I would like to make it so that I do not have to make all of these assumptions, specifically those relating to drawing order, erasure, and multiple strokes.

I am also tossing around the idea of characters with fewer than four limbs, but that seems like a rare use case to be addressed later. So for now, symmetry is automatic, not optional.

Essential Assumptions (based on our definition of an "animateable figure": bilateral, bipedal, four-limbed):

1. Following the "line of movement" drawing principle, all characters must have spines. This defines the bilateral quality of our figure. However, thanks to our color-type system, artists can start with either the head or the spine. And limbs can be added at any point after the spine.

2. A character's legs will be closer to the root of its spine. Its arms will be closer to the head. Therefore, the upper set of limbs will be considered the arms. We are not considering creatures with tails, so we assume that the spine line will always end close to where the leg lines begin.

3. The artist will draw the character in a reference pose. While we use simple angle math to determine some rotations, we do not take into account severe limb bending or limb crossing. As we set out to make animateable figures, any posing should be added later, using the character's skeleton.

Potentially Detrimental Presumptions (in place for now, but which should be worked out later for an actually useful product):

1. Artists never erase. The program assumes that the first line drawn for a body part is an accurate representation. This is the one presumption most detrimental to usability. Actually, it's so bad it might make the tool useless.

2. Artists always draw limbs radiating from the body. This way, the pen is put down where the limb connects to the trunk and then is lifted off at the extremity. This may not necessarily be true, especially if the limbs are bent or lifted.

3. Artists always draw flat bones starting from the skull and moving downwards, continuing with the pelvis and ribcage. This is not always the case, but most figure drawing classes teach it this way, so I think this is the least worrisome.

4. The spine is straight. Curving spines would be nice and add a lot of character, but we are also working with a 2D projection, and more personality comes from the lateral view. At the moment, we need a straight axis spine because of our calculation methods.

Given the aforementioned assumptions and presumptions, each stroke is saved appropriately, into its correct location. Hooray.

The next step is to analyze these strokes to acquire measurements. Salient measurements are chosen based on those listed in the paper by Mustafa Kasap and Nadia Magnenat-Thalmann, "Modeling individual animated virtual humans for crowds" and Eliot Goldfinger's "Human Anatomy for Artists: The Elements of Form." I want to combine both a graphics programming and artistic source - because if there's anything either side lacks, it's the opinion of the other.

The measurements are then put into the FigureDataNode, which is used to translate, scale, and rotate boxes. Later, we will load a figure, but for now, we are using boxes as substitutes for body parts. Also, because I am on the clock for tonight, I am going to ignore joints and consider the limb as one block. Therefore, the FigureDataNode should contain the following information as plugs.

For each flat shape, we have a box, whose transformations are defined as follows:

FlatBoneScaleX - flat bone width, least and greatest X values from sketch
FlatBoneScaleY - flat bone height, least and greatest Y values from sketch
FlatBoneScaleZ - flat bone depth, maintain ratio with FlatBoneScaleX and FlatBoneScaleY.
FlatBoneRotateX - 0.0, none.
FlatBoneRotateY - 0.0, none.
FlatBoneRotateZ - 0.0, none. (could we derive this later?)
FlatBoneTranslateX - center of flat bone, based on X values from sketch.
FlatBoneTranslateY - center of flat bone, based on Y values from sketch.
FlatBoneTranslateZ - 0.0, none.

At the moment, we are approximating flat bones with axis-aligned bounding boxes. Also, because we assume the spine is straight, we don't have to derive certain parameters such as RotateZ.
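
The axis-aligned approximation boils down to a bounding-box computation over the stroke's samples. Here is a minimal sketch in plain Python; the depth_ratio constant is an illustrative assumption, since the post only says depth maintains a ratio with width and height.

```python
def flat_bone_params(points, depth_ratio=0.5):
    """points: list of (x, y) sketch samples for one flat-bone stroke."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    scale_x = max(xs) - min(xs)                        # FlatBoneScaleX
    scale_y = max(ys) - min(ys)                        # FlatBoneScaleY
    scale_z = depth_ratio * (scale_x + scale_y) / 2.0  # FlatBoneScaleZ (assumed ratio)
    translate_x = (max(xs) + min(xs)) / 2.0            # FlatBoneTranslateX (center)
    translate_y = (max(ys) + min(ys)) / 2.0            # FlatBoneTranslateY (center)
    return {
        "scale": (scale_x, scale_y, scale_z),
        "translate": (translate_x, translate_y, 0.0),
        "rotate": (0.0, 0.0, 0.0),                     # spine assumed straight
    }
```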

However, we have to take more care with long bones. For all these, we have moved the pivot point to one end of the bone:

LongBoneScaleX - 1.0, default.
LongBoneScaleY - long bone length, distance between curve endpoints from sketch
LongBoneScaleZ - 1.0, default.
LongBoneRotateX - 0.0, none.
LongBoneRotateY - 0.0, none.
LongBoneRotateZ - take vector from arm, find angle between arm vector and spine axis.
LongBoneTranslateX - beginning curve endpoint X, from sketch.
LongBoneTranslateY - beginning curve endpoint Y, from sketch.
LongBoneTranslateZ - 0.0, none.

WaistGirth - .7*(PelvisScaleX) gives the ideal waist-to-hip ratio, a good starting point. This can be adjusted later with curves.
ScyeLength - distance between arm-trunk connection and base of skull.
ShoulderLength - distance between arm-trunk connections.

Overall height and width can be calculated by adding the heights and widths of the various body parts above. More accurate measurements will be done later; this will allow us to have greater variation. For example, we can scale the top four vertices on the ribcage box to approximate the underbust. Shoulder width will be the distance between the arm-trunk connections. The scye length will be the difference between the arm-trunk connections and the base of the skull. Waist girth is trickier. For now, we will use .9*(PelvisScaleX), an ideal masculine waist-to-hip ratio. The waist is mostly fat and soft tissue, so it should be adjusted during the StyleCurve stage.
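
These derived measurements are simple arithmetic over the sketch points. A sketch in plain Python, with the waist-to-hip ratio left as a parameter since the post considers both .7 and .9; the point arguments are assumed to be 2D sketch coordinates, and the function names are mine.

```python
import math

def waist_girth(pelvis_scale_x, ratio=0.7):
    """WaistGirth as a fraction of pelvis width (0.7 ideal, 0.9 masculine)."""
    return ratio * pelvis_scale_x

def shoulder_length(left_arm_root, right_arm_root):
    """Distance between the two arm-trunk connection points."""
    return math.hypot(left_arm_root[0] - right_arm_root[0],
                      left_arm_root[1] - right_arm_root[1])

def scye_length(arm_root, skull_base):
    """Distance from an arm-trunk connection to the base of the skull."""
    return math.hypot(arm_root[0] - skull_base[0],
                      arm_root[1] - skull_base[1])
```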

In the far future, we can make a skeleton by taking the rotation information and then calculating bone lengths based on the translation and scale.

Sunday, April 11, 2010

Look ma, limbs!

Sort of. I still need to scale, move, and rotate the boxes appropriately, but hey, it's parsing limb length fine and differentiates between each of the four limbs.

As this is for bipedal, bilateral humanoid characters, we assume there are left and right arms, left and right legs, and no more. Let's save the six-legged, winged horses from Avatar for version 2.0.


Rather than take a sketch and parse it, we actually record as the artist draws. Matching the familiar "QWER" format of Maya itself, artists can switch between drawing the spine, flat bones (not yet implemented), and limbs. Each is differentiated by a different color. I went for the less eye-burning ones.

The spine is always stored from the pelvis to the head, even if not originally drawn that way. This will help in the future for skeleton generation, where the root is commonly placed in the pelvis. We determine if reversal is needed through a simple test of the endpoints' Y co-ordinates.
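
The endpoint test can be sketched as follows, assuming strokes are lists of (x, y) samples with Y increasing upward (the function name is illustrative):

```python
def normalize_spine(points):
    """Return a pelvis-to-head ordered copy of the spine stroke.

    If the first sample is higher (greater Y) than the last, the stroke
    was drawn head-first, so we reverse it.
    """
    if points[0][1] > points[-1][1]:
        return list(reversed(points))
    return list(points)
```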

We determine which curve is which very roughly. No limbs can be drawn until the spine is. Most artists won't even consider drawing limbs before the spine, but it's a good precaution. We assume that artists always draw limbs going away from the trunk. At the moment, there is nothing coded in to enforce this, and hopefully I can tend to this later.

First, we determine if a limb is on the left or right of the body (again - we assume bilateral figure). The spine is simplified into a straight line on the X axis. Anything entirely or mostly to its left is a left limb. Anything entirely or mostly to its right is a right limb.

Then we figure out if it's an arm or a leg. Because we assumed it was drawn away from the trunk, we take its first point and compare the distance of this point to the endpoints of the spine. If it's closer to the pelvis, we assume it's a leg. If it's closer to the head, we assume it's an arm.

Once we know what we're dealing with, we check if we need to mirror it. If an artist doesn't draw a limb on the left side but does on the right, we mirror the right limb to the left, and vice-versa. We do this by mirroring the simplified limb over the Y axis formed by the spine.
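
The left/right test, arm/leg test, and mirroring can be sketched in plain Python. Assumptions (mine, for illustration): the spine has already been normalized pelvis-to-head, its simplified axis is the vertical line x = spine_x, and each limb stroke starts where it meets the trunk.

```python
def limb_side(limb, spine_x):
    """'left' if most samples fall left of the spine axis, else 'right'."""
    left_count = sum(1 for x, _ in limb if x < spine_x)
    return "left" if left_count > len(limb) / 2 else "right"

def limb_kind(limb, pelvis, head):
    """'leg' if the limb's first point is closer to the pelvis than the head."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    start = limb[0]  # assumes limbs are drawn away from the trunk
    return "leg" if dist2(start, pelvis) <= dist2(start, head) else "arm"

def mirror_limb(limb, spine_x):
    """Reflect a limb across the vertical spine axis to fill a missing side."""
    return [(2 * spine_x - x, y) for x, y in limb]
```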

Each curve is simplified (we only take every 10th drawn point, going from say 287 to 28, which may still be too much) and saved into a vector. The endpoints of each vector are used to determine the length. It causes a lot of curve information to be lost, but I feel it is okay since we do not want to distort our future import mesh too much. Also, I know we will need straight lines for making future skeletons. It's all put into our handy-dandy FigureDataNode from last time.
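
The decimation step might look like the following; whether the final sample is kept changes the exact count, so this is one plausible variant, not the exact implementation:

```python
def simplify_stroke(points, step=10):
    """Keep every step-th sample, plus the final point so the
    endpoint-based length measurement still works."""
    kept = points[::step]
    if points and kept[-1] != points[-1]:
        kept.append(points[-1])
    return kept
```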

Now, if only I could erase. :(

Saturday, April 10, 2010

Proof It Worked


I was able to extract a height from a line I drew on the canvas. Look, the node which contains the figure data has a height! (The default is 1.)

Now, to connect the value I extracted to the cube I made.


Well, automatically. I did this with MEL which is... well, cheating.

Wednesday, April 7, 2010

Scaling Cubes and Defining Human Template

Not feeling well today, but there's been moderate progress. Hopefully, it makes sense given that what's been on this blog previously is handwritten chickenscratch.

When the user draws a sketch, the curve information is saved into a FigureSketch object. It's mostly just arrays of points (one for the spine, one for each long bone, etc.), but it also has the functionality of calculating, from these curves, the starting values for the FigureDataNode. Unlike the FigureSketch object, the FigureDataNode is a Maya node which persists after the Qt tablet application is closed.

Upon closing the tablet window, the plug-in generates a mesh (a cube for now - it will be .ma loading for figures). I need to link the outputs of the FigureDataNode to the inputs of the mesh (say, FigureDataNode.outHeight -> Mesh.scaleY). C++ is being whiny, but I can do it in MEL easy as pie, which is where this picture comes from. Worst comes to worst, I hack the connection together with MGlobal::executeCommand():


I have also done some work figuring out what "rules" and assumptions we can make to derive parameters from the sketch. Most importantly, it assumes we are generating humanoid figures - bipedal, bilateral, head-abdomen-thorax things. These calculations and assumptions are made in the FigureSketch class before it puts these values into the FigureDataNode.


The next challenge is defining body parts for section stylization. The user can easily transition between colors and sketch modes (spine, long, flat) in the Qt window. However, I am trying to think of how to have these correspond to areas on a single mesh in Maya (areas on one mesh being very different and entirely more difficult than just the entire mesh). My initial reaction was to use Quick Select Sets, but that may not be possible in the API. However, I think it's possible in MEL. The alternative would be creating a C++ structure which runs sort of in parallel to the Maya Quick Select Set - taking in each vertex location, applying the transformations, and then changing the vertex locations again.

Rage.

Tuesday, April 6, 2010

More Shader Progress

I fixed up the loop errors I was having, so now the final shader node combines layers to achieve its final color.

In this example, for simplicity of network, I'm only using Layer nodes with base colors (no pigment fraction).


By following the Lambert shading example included in the Maya development kit, I have thus been able to forgo writing a custom renderer. My shader now works with Maya Software - the default rendering package - which gives me sweet stuff like shadows. This is nice. Knocks a full task off my list and makes my shader more functional!


I now need to write the more complex equations (which take into consideration layer thickness and ray scattering) as the light goes through the layers.

Better Than Nothing... Shader Progress

I've been working on the shader since there is a beta review tomorrow. Here's what it looks like at the moment. I'm trying to get it to render as a lambert before I start doing too many fancy layer things.

I rendered this using ordinary Maya Software. The light is red, explaining the ground plane and all. The right cube uses the default lambert1 shader and exists for comparison.

Not enough time. Not enough time. Not enough time!

Thursday, March 18, 2010

Old Planning Notes

I can't believe I forgot to post these. No wonder it seems like I didn't do any work for a really long time. Here are the finalized design plans for the plug-in, detailing the pipeline from start to finish.






Tuesday, March 16, 2010

Shader I

I missed putting up about seven pages' worth of planning, so I've been behind on updating this. That doesn't mean I haven't been working, though! I've started coding. This is for the "660" part of my project. It takes the algorithms from this paper and translates them into a Maya shader:

KRISHNASWAMY, A., AND BARANOSKI, G. 2004. A biophysically-based spectral model of light interaction with human skin. Computer Graphics Forum 23, 3 (Sept.), 331–340.

This is the shading network for the tissue texture. Conceptually, the paper outlines how light travels through each layer of tissue, being absorbed and scattered at each one. Therefore, I am imitating this structure with Maya shader nodes. Because the shader needs to be generalized to both plants and humans, the number of pigments, colors, and physical properties are all user-defined.
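
To make the layered idea concrete, here is a deliberately simplified stand-in (my own toy model, NOT the Krishnaswamy-Baranoski equations): each layer claims a fraction of the remaining light and tints it with its base color, front to back.

```python
def composite_layers(layers, light=(1.0, 1.0, 1.0)):
    """layers: list of (base_color, absorption) with RGB tuples in [0, 1].

    Front-to-back compositing: each layer takes `absorption` of whatever
    light remains, contributing that fraction tinted by its base color.
    """
    r, g, b = light
    out = [0.0, 0.0, 0.0]
    remaining = 1.0
    for (cr, cg, cb), absorb in layers:
        taken = remaining * absorb
        out[0] += taken * cr * r
        out[1] += taken * cg * g
        out[2] += taken * cb * b
        remaining -= taken
    return tuple(out)
```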

I've decided to make pigments and layers utility nodes, so artists who want greater customization can link them via the Hypershade (as I have been doing) as well as use the plug-in provided GUI (to be developed later).



Thursday, February 4, 2010

Where to Go Next

We are now going to develop a "sketch-based interface for polygonal modeling of stylized characters using an armature pipeline". Oh man, now doesn't that sound important?

This is the closest paper to my project:

Sketch-based Virtual Human Modelling and Animation
Chen Mao, Sheng Feng Qin, and David Wright

However, I have decided to use an armature-based approach akin to real sculpting technique.

The overall contour of the human body is defined by two types of bones: long bones (e.g. limb bones and the spine: the femur, the radius, etc.) and flat bones (e.g. protective bones: the skull, the pelvis, etc.). Using a 2D sketching interface, the artist draws an armature, using lines ending in circles for long bones and circles and cubes for flat bones.

The long bones are then converted into a Maya skeleton. The flat bones are converted into I-forgot-what-they-are-called-but-they-are-like-deformers-but-rather-keep-things-from-deforming. The bones are then used to automatically generate a mesh - for example, the circle defining the ribcage will result in the chest mesh being roughly the same shape.

Based on the resulting skeleton, the plug-in adds curve pairs (tetraCurves from the previous blog entries) which give artists recursive control over figure obesity and fitness. Artists can alter the entire figure or selected body parts.

The plug-in will also automatically pelt map (or Ptex) the model for easy texturing. It comes with a skin shader, too.

The good part about this is that it allows the artist to create non-standard figures. For another project, I was even thinking of animating them procedurally.

These are papers I will use to help me. This list is not filled in yet:

Sketching Interfaces:

Amit Shesh and Baoquan Chen
SMARTPAPER: An Interactive and User Friendly
Sketching System


Automatic Meshing:

Rigging:

Jianhui Zhao, Ling Li, and Kwoh Chee Keong
3D Posture Reconstruction and Human Animation from 2D Feature Points

Andrei Sharf, Thomas Lewiner, Ariel Shamir, and Leif Kobbelt
On-the-fly Curve-skeleton Computation for 3D Shapes

Curve Manipulation:

Pelt Mapping:

Texturing and Shading:


This is related to stuff I probably won't implement but would be cool additions.

Clothing:

Philippe Decaudin, Dan Julius, Jamie Wither, Laurence Boissieux, Alla Sheffer, Marie-Paule Cani
Virtual Garments: A Fully Geometric Approach for Clothing Design

Ambition will get the better of me.

Wednesday, February 3, 2010

More Literature

The lab printer is broken, so I need to list a couple more papers on here. I've had a change in topic. I'm going to try to do modeling from sketching armatures.

Anyhow, here are papers based on pipeline.


---

User Studies:

---

Andrew Forsberg, Bob Zeleznik, Joseph La Viola, Sashi Raghupathy, Andrew Bragdon
An Empirical Study in Pen-Centric User Interfaces: Diagramming



---

Modeling from Sketches:

---

Chen Mao, Sheng Feng Qin, and David Wright
Sketch-Based Virtual Human Modelling and Animation

Igarashi, T., Matsuoka, S., and Tanaka, H. 2007.
Teddy: a sketching interface for 3D freeform design.
In ACM SIGGRAPH 2007 Courses (San Diego, California, August 05 - 09, 2007). SIGGRAPH '07. ACM, New York, NY, 21.

Seok-Hyung Bae, Ravin Balakrishnan, Karan Singh
ILoveSketch: As-Natural-As-Possible Sketching System for Creating 3D Curve Models
ACM Symposium on User Interface Software and Technology 2008 (Monterey, CA, USA, October 19-22, 2008)

Sezgin, T. M., Stahovich, T., and Davis, R. 2006.
Sketch based interfaces: early processing for sketch understanding.
In ACM SIGGRAPH 2006 Courses (Boston, Massachusetts, July 30 - August 03, 2006). SIGGRAPH '06. ACM, New York, NY, 22. DOI= http://doi.acm.org/10.1145/1185657.1185783

Added 2/4/10:

Adrien Bernhardt, Adeline Pihuit, Marie-Paule Cani, Loic Barthe
Matisse: Painting 2D regions for Modeling Free-Form Shapes

Masamichi Sugihara, Erwin de Groot, Brian Wyvill, Ryan Schmidt
A Sketch-Based Method to Control Deformation in a Skeletal Implicit Surface Modeler

Jeehyung Lee, Thomas Funkhouser
Sketch-Based Search and Composition of 3D Models

Orn Gunnarsson, Steve Maddock
Sketching Faces

Fabricio Anastacio, Przemyslaw Prusinkiewicz, Mario Costa Sousa
Sketch-based Parameterization of L-systems using Illustration-inspired Construction Lines

---

Automatic Skinning:

---

Baran, I. and Popović, J. 2007.
Automatic rigging and animation of 3D characters.
In ACM SIGGRAPH 2007 Papers (San Diego, California, August 05 - 09, 2007). SIGGRAPH '07. ACM, New York, NY, 72. DOI= http://doi.acm.org/10.1145/1275808.1276467

---

Curve-based Modeling:

---

Real-time Individualized Virtual Humans
Nadia Magnenat-Thalmann, Daniel Thalmann

Modeling Individual Animated Virtual Humans for Crowds
Mustafa Kasap, Nadia Magnenat-Thalmann

---

Pelt Mapping:

---

Brent Burley and Dylan Lacewell
Ptex: Per-face Texture Mapping for Production Rendering

---

HSV Textures:

---

Real-time Crowds: Architecture, Variety, and Motion-Planning
Jonathan Maïm, Barbara Yersin, Daniel Thalmann



These papers have nothing to do with my senior project but may be interesting for my other courses:

Simulation of Tearing Cloth with Frayed Edges
Napaporn Metaaphanon, Yosuke Bando, Bing-Yu Chen, Tomoyuki Nishita

Procedural Generation of Rock Piles using Aperiodic Tiling
A. Peytavie, E. Galin, J. Grosjean, S. Merillou

Peek-in-the-Pic: Flying Through Architectural Scenes From a Single Image
Amit Shesh and Baoquan Chen

Finally, these are for Kristen, since she wants to make plants.

Tuesday, January 26, 2010

Planning Notes II

I realized my blogs are very much not "weekly." I just kind of post whenever I've finished something.

Here are some more moderately intelligible notes. They describe in detail the class structure of the plug-in - like what node possesses what and which methods call which inside each other.


I have to work on the soft selection and mesh manipulation algorithm, but after that, I'll be ready to put it all to computer (given that I have already put it all to paper).

Here are the tasks I am giving myself, about one or two a week:

  1. Do the plug-in writing exercise from my Maya plug-in writing class.
  2. Create a functional CurvePair.
  3. Make a TetraCurve where muscle curves affect fat curves.
  4. Make the TetraCurve manipulate the mesh.
  5. Model the figure. Going to start off with just the human male for now, since it's from the Goldfinger book.
  6. Set TetraCurves up on the figure.
  7. Create GUI.
  8. Allow for importing and exporting of skeletons and meshes.
These are things I would like to add if I have the time.
  1. Save and load from text files.
  2. Gender toggles.
  3. Viewer toggles (such as "hide curves").
Well, then. To work.

Sunday, January 24, 2010

Research Update

I've been doing more research for another class I'm in, and it's turned up more interesting papers. These - especially those on modeling humans - have been much more relevant.

I'm not listing the papers I mentioned in the other post, but I'm still sorting the papers by topic.

---

Modeling Humans

---

(From Range Scans)
Exploring the space of human body shapes: data-driven synthesis under anthropometric control
Brett Allen, Brian Curless, Zoran Popovic
University of Washington

(From Range Scans)
Allen, B., Curless, B., and Popovic, Z. 2003.
The space of human body shapes: reconstruction and parameterization from range scans.
ACM Trans. Graph. 22, 3 (Jul. 2003), 587-594. DOI= http://doi.acm.org/10.1145/882262.882311

(Volumetric Techniques)
Analysis of Human Shape Variation Using Volumetric Techniques
ZB Azouz, M Rioux, C Shu, R Lepage

(Head Modeling with Landmarks)
Head shop: Generating animated head models with anatomical structure
K. Kähler, J. Haber, H. Yamauchi, H.-P. Seidel, 2002

(From 3D Scan Data with Templates)
Animatable Human Body Model Reconstruction
from 3D Scan Data using Templates

(From sizing parameters)
Seo, H. and Magnenat-Thalmann, N. 2003.
An automatic modeling of human bodies from sizing parameters. In Proceedings of the 2003 Symposium on interactive 3D Graphics (Monterey, California, April 27 - 30, 2003).
I3D '03. ACM, New York, NY, 19-26. DOI= http://doi.acm.org/10.1145/641480.641487

(Sweep-based)
Dae-Eun Hyun, Seung-Hyun Yoon, Jung-Woo Chang, Joon-Kyung Seong, Myung-Soo Kim and Bert Jüttler
Sweep-based human deformation

(Anatomy-based)
Scheepers, F., Parent, R. E., Carlson, W. E., and May, S. F. 1997. Anatomy-based modeling of the human musculature. In Proceedings of the 24th Annual Conference on Computer Graphics and interactive Techniques International Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, 163-172.

(Morphable Faces)
Blanz, V. and Vetter, T. 1999.
A morphable model for the synthesis of 3D faces.
In Proceedings of the 26th Annual Conference on Computer Graphics and interactive Techniques International Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, 187-194.

---

Differentiating Crowds

---

(Saliency-based Variation)
Eye-catching Crowds: Saliency based Selective Variation
Rachel McDonnell, Michéal Larkin, Benjamín Hernández, Isaac Rudomin, Carol O'Sullivan

---

Animating Figures in Crowds

---

(Perception of Motion)
Perception of Human Motion with Different Geometric Models
Jessica K. Hodgins, James F. O'Brien, Jack Tumblin

(Sex Perception in Motion)
McDonnell, R., Jörg, S., Hodgins, J. K., Newell, F., and O'Sullivan, C. 2007.
Virtual shapers & movers: form and motion affect sex perception.
In Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (Tubingen, Germany, July 25 - 27, 2007). APGV '07, vol. 253. ACM, New York, NY, 7-10. DOI= http://doi.acm.org/10.1145/1272582.1272584

(Automatic Rigging)
Automatic Rigging and Animation of 3D Characters
Ilya Baran, Jovan Popovic

(Physics-Based Walking)
SIMBICON: Simple Biped Locomotion Control
Kangkang Yin, Kevin Loken, Michiel van de Panne

(Skin and Muscle Deformation)
Data-driven Modeling of Skin and Muscle Deformation
Sang Il Park, Jessica Hodgins

(Joint-Aware Deformation)
Joint-aware Manipulation of Deformable Models
Weiwei Xu, Jun Wang, KangKang Yin, Kun Zhou, Michiel van de Panne, Falai Chen, Baining Guo

---

Rendering Crowds

---

(LOD Comparative Study)
LOD Human Representations: A Comparative Study
Rachel McDonnell, Simon Dobbyn, Carol O’Sullivan

(Vein Textures)
Runions, A., Fuhrer, M., Lane, B., Federl, P., Rolland-Lagan, A., and Prusinkiewicz, P. 2005. Modeling and visualization of leaf venation patterns. ACM Trans. Graph. 24, 3 (Jul. 2005), 702-711.

(Skin Reflectance)
Analysis of Human Faces using a Measurement-Based Skin Reflectance Model
Tim Weyrich, Wojciech Matusik, Hanspeter Pfister, Bernd Bickel, Craig Donner, Chien Tu, Janet McAndless, Jinho Lee, Addy Ngan, Henrik Wann Jensen, Markus Gross

(Subsurface Scattering)
An Empirical BSSRDF Model
Craig Donner, Jason Lawrence, Ravi Ramamoorthi, Toshiya Hachisuka, Henrik Wann Jensen, Shree Nayar

(Rendering)
Drawing a Crowd
David R. Gosselin, Pedro V. Sander, and Jason L. Mitchell

(Face Cloning)
Hyneman, W., Itokazu, H., Williams, L., and Zhao, X. 2005. Human face project. In ACM SIGGRAPH 2005 Courses (Los Angeles, California, July 31 - August 04, 2005). J. Fujii, Ed. SIGGRAPH '05. ACM, New York, NY, 5.

(Skin Texture)
Multispectral Skin Color Modeling
Elli Angelopoulou, Rana Molana, Kostas Daniilidis

(Skin Texture)
Skin Texture Modeling
Oana G. Cula and Kristin J. Dana

(Skin Texture)
The secret of velvety skin
Jan Koenderink, Sylvia Pont

(Skin Texture)
KRISHNASWAMY, A., AND BARANOSKI, G. 2004. A
biophysically-based spectral model of light interaction with human
skin. Computer Graphics Forum 23, 3 (Sept.), 331–340.

(Face Cloning)
Realistic Human Face Rendering for “The Matrix Reloaded”
George Borshukov and J.P.Lewis

---

Props and Clothing

---

(Clothes)
Frederic Cordier, Hyewon Seo, Nadia Magnenat-Thalmann,
"Made-to-Measure Technologies for an Online Clothing Store,"
IEEE Computer Graphics and Applications, vol. 23, no. 1, pp. 38-48, Jan./Feb. 2003, doi:10.1109/MCG.2003.1159612

(Clothes)
Clothing the Masses: Real-Time Clothed Crowds With Variation
S. Dobbyn, R. McDonnell, L. Kavan, S. Collins and C. O’Sullivan
Interaction, Simulation and Graphics Lab, Trinity College Dublin, Ireland

(LOD Evaluation for Clothes Perception)
McDonnell, R., Dobbyn, S., Collins, S., and O'Sullivan, C. 2006.
Perceptual evaluation of LOD clothing for virtual humans.
In Proceedings of the 2006 ACM Siggraph/Eurographics Symposium on Computer Animation (Vienna, Austria, September 02 - 04, 2006).
Symposium on Computer Animation. Eurographics Association, Aire-la-Ville, Switzerland, 117-126.

Thursday, January 21, 2010

Planning Notes

I've read a lot more than I've needed to, I think, but it's given me a better grasp on the currently available technology for human modeling: volumetric methods, template methods. Hyewon Seo, et al. even did "An Automatic Modeling of Human Bodies from Sizing Parameters" for a clothing store.

Most use CAESAR data, so I've decided to take my project down a slightly different path. The thing is, CAESAR scan data produces ridiculously high-resolution models. More importantly, the resulting figures cannot be easily stylized (pushed beyond the available data set). My editable-curves method aims to let artists create stylized models, fit them to provided skeletons, and accommodate lower resolutions.

These are my planning notes. They are relatively intelligible planning notes. The next step is to get them off the paper and into the proposal.

Sorry, 3DS Max users, but I've settled on writing a Maya plug-in. The GUI (and as much as possible) will be done with Maya's Python commands. However, the data structures (such as bodyPart and triCurve) will be written in C++.

I have also broken down the figure into seventeen overlapping regions and demarcated important landmarks - specifically using terms from figure drawing classes.




Monday, January 4, 2010

Research Update

My research so far hasn't turned up what I'm looking for. I need to find more (or, perhaps, it doesn't exist).

Currently, the paper on the ontology of humans has offered the best in terms of modeling, but most of it has been done using body scans and interpolating between that data, which doesn't allow for stylization. As for rigging, landmarks are found in a double pass, and a skeleton is built from those landmarks. They use H-ANIM skeletons. They also reference accessories, which I am interested in (if not necessarily implementing). I think this is a good paper to consider when I design what my artistic models need.

O'Sullivan's work gives a good reason why my project would be needed: people distinguish figures by appearance. But it doesn't offer insight into what has been done in similar veins. Her motion research is a little outside my scope, though. The only thing that concerns me is that her work proved pretty solidly that color is the distinguishing factor in clone recognition, so I don't know how much I should do with that. Luckily, she references a paper she used to create a procedural texture for her human models. I need to find it. While this isn't within the scope of my senior design project, it could become the project for another class and be merged into my senior design.

I'm going to keep looking at papers, but I need to concentrate now on building the framework for my program. So, top priority is going to be writing a simple, "Hello World" Maya plug-in to familiarize myself with the API. Then, I'm going to post a thorough design for my plug-in.