Thursday, April 29, 2010

Future Optimizations

Things I Obviously Don't Have the Time to Do Now, but Would Make Version 2.0 Much Nicer:

  • Re-write code to use clusters, rather than lattices. Clusters are less visually obstructive.
  • Rather than connecting attributes of the FigureDataNode to the Lattice attributes, it would be better for the FigureDataNode to hold just one attribute linking to the mesh it is relevant to. That way, the Lattice (or, by then, Cluster) attributes aren't locked. After all, the FigureDataNode is just storage: it holds the sketch information after the QT Application closes and the sketch is disposed of.
  • If each FigureDataNode possesses a mesh, we can pass the DAGPath to a mesh as an argument into our other commands. This would let us operate on multiple meshes in one file.
How I'd Add Skeletons:

  • Make a Lattice around each Cluster. (This could be a problem, given lattices are axis-aligned. This could be solved by keeping the current Lattice structures but making them invisible.)
  • Find its translation matrix. (I actually did that yesterday for adding wires, and then realized I didn't need it.)
  • Create a joint at (0, scaleY, 0) and (0, -scaleY, 0).
  • Apply the lattice's transformation matrix to the joint. (A rough sketch of these steps follows below.)
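A minimal C++ sketch of steps 2 through 4, assuming the lattice's transform is handed in as an MDagPath (the function name and the world-parented joints are my own simplifications, nothing final):

    // Rough sketch: place joints at the lattice's top and bottom centers,
    // then move them by the lattice's world transformation matrix.
    #include <maya/MDagPath.h>
    #include <maya/MFnIkJoint.h>
    #include <maya/MMatrix.h>
    #include <maya/MPoint.h>
    #include <maya/MVector.h>

    void makeJointsForLattice(const MDagPath& latticeXform, double scaleY)
    {
        // Step 2: the lattice's accumulated world transformation matrix.
        MMatrix worldMat = latticeXform.inclusiveMatrix();

        // Step 3: joint positions in the lattice's local space.
        MPoint top(0.0, scaleY, 0.0);
        MPoint bottom(0.0, -scaleY, 0.0);

        // Step 4: apply the lattice's transformation to both positions.
        top *= worldMat;
        bottom *= worldMat;

        // Create the joints (world-parented here for simplicity).
        MFnIkJoint topFn;
        topFn.create();
        topFn.setTranslation(MVector(top), MSpace::kTransform);

        MFnIkJoint bottomFn;
        bottomFn.create();
        bottomFn.setTranslation(MVector(bottom), MSpace::kTransform);
    }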
So much I've learned, not enough time to make what I want to. And there's a lot of trash accumulating on my desk...

Wire Deformers

Hey, guess what doesn't work! Wire deformers!

Wednesday, April 28, 2010

Tasks, Everywhere! Tasks!

Caffeinated. Very caffeinated.

I'm working on implementing wires, but for now, cleaning up my to-do lists...

Major things:
  • Add a Maya command to attach wire deformers, based on projections from the lattice boxes.
  • Add a Maya command to rig and bind a skeleton to the mesh, based on the center points of the top and bottom faces of each lattice.
  • Write the light behavior equations for the tissue layer shader. (I have to do this anyway.)
  • Attach tissue layer shader to model and write a MEL script GUI to allow editing of pigments and layers.
Minor things:
  • Find out how to make the FigureDataNode-Lattice connection two-way. This may not be possible, in which case the connection will simply be broken.
  • Divide figure into more parts to accommodate the Thalmann measurements: neck, waist, lower arms, lower legs, hands, and feet. However, I am not sure if they should be specifically tagged or not.
  • Re-implement properly scaled translation and rotation of lattices, around the correct pivot point.
  • Re-implement translation and rotation based on the slope of the drawn curve at certain points.
  • Drawing interface should erase curves. At the moment, it just paints over them in white.
  • Drawing interface should use an ink drying metaphor, where multiple nearby strokes are averaged into one.
  • Drawing interface should not assume strokes are drawn in a particular order or direction.
  • Accommodate multiple figures and selection.
  • Re-implement things with clusters, not lattices.
  • Clean up C++ code and node attributes, so it's not calling MEL commands.

Tuesday, April 27, 2010

Scaling Figure Sections

Been working on physics-based animation this weekend, but here is the current progress on the sketch modeling. I turned off the translation effect, and scaling is now relative to the original size of the imported tagged mesh. This way, the dimensions of the sketch do not affect the model's overall size, so it will still be appropriate for other assets created in the same world space.

I think I need to re-tag the skull area so it doesn't contain the neck, but I like how things are looking, especially with the hips. However, I am concerned about the rotation, especially of the limbs. It should happen at the shoulder's pivot point, not the middle of the arm.

Unless I can think of a way to make the relationship two-way, I think - despite having gone to major trouble to connect the lattices to the FigureDataNode - I should break the connections. This way, the model can be edited manually through the lattices.


Because I almost lost the paper I wrote these ideas down on, I am now going to make two impossibly long to-do lists, sorted from most to least feasible. First, major things which add to the overall concept of the project:
  • Add a Maya command to attach wire deformers, based on projections from the lattice boxes.
  • Add a Maya command to rig and bind a skeleton to the mesh, based on the center points of the top and bottom faces of each lattice.
  • Write the light behavior equations for the tissue layer shader. (I have to do this anyway.)
  • Attach tissue layer shader to model and write a MEL script GUI to allow editing of pigments and layers.
Now, minor things, which aren't necessary to the project concept but help refine the ideas it currently poses:

  • Find out how to make the FigureDataNode-Lattice connection two-way. This may not be possible, in which case the connection will simply be broken.
  • Divide figure into more parts to accommodate the Thalmann measurements: neck, waist, lower arms, lower legs, hands, and feet. However, I am not sure if they should be specifically tagged or not.
  • Re-implement properly scaled translation and rotation of lattices, around the correct pivot point.
  • Re-implement translation and rotation based on the slope of the drawn curve at certain points.
  • Drawing interface should erase curves. At the moment, it just paints over them in white.
  • Drawing interface should use an ink drying metaphor, where multiple nearby strokes are averaged into one.
  • Drawing interface should not assume strokes are drawn in a particular order or direction.
  • Show and hide lattices and wires on different layers.
  • Accommodate multiple figures and selection.
  • Clean up C++ code and node attributes, so it's not calling MEL commands.
However, given my time limit and need for proof of concept, I'm solely going to focus on the wire system for now.

Wednesday, April 21, 2010

Tagged Mesh

I was feeling a little under the weather tonight, so I just properly tagged the Nancy model. Now if I need a model, I don't need to tag it again; I can just load this one directly. The tagging also took me only half an hour, which shows how efficient this system can be despite its simplicity. My GUI, however, is still ugly.

As intelligently requested, here's the tagged model and the sketch which resulted in the figure. Minus the rotation pivot which is causing awkward translation of the limbs... this is right. I really like what it did on the ribcage-pelvis area.


Before anything else, though, I feel I need to scale the figure down to match the sketch proportions prior to sketch-stylization, since right now it's rather large (say, 40 world units, while the sketch canvas is 10).

I don't know whether I should then focus on wires (more proof of concept), on making the sketch system more robust (handling erasure, multiple strokes), or on making the mesh distortions smoother.

Tuesday, April 20, 2010

I should make zombies.

The GUI is in this picture, and well, it seems to be working. This isn't properly tagged (so some vertices weren't recognized in the groups). I now need to fix the translation on the limb boxes.

Oh, god, it looks so gross.


I feel posts are a good way to tell myself what to do next, so once I get this to be less horribly distorting: adding wires and adding bones. (And, you know, that shader and that SIMBICON implementation for my other classes.)

... so gross.

Scaling Issues

Joe says I blog in spurts. This is true. Anyhow, I made a GUI (not pictured here): it loads a model (a default model if unspecified), lets the user tag different areas (if they are not tagged already; areas can also be re-tagged), and manipulates the model based on the sketch. The scale between the figure and the sketch is off...

Also, need to fix how it's putting each new lattice inside the previous one...

Monday, April 19, 2010

Multiple Set Selection

You can now define multiple sets on a single mesh, connect the FigureNode parameters to each set, and distort each set. I have yet to tackle soft selection, though.


So far, I've been defining the sets by name manually. I plan to write a small GUI tonight which allows the user to load in an OBJ model and define the sets. I need this so that the set naming convention I've been using remains constant (or else the plug-in won't work, since Maya does selection by name). I hope to add a load/save aspect to the GUI as well, which lets you load an .ma file if you've already defined the sets. (Sets can also be defined through Maya's existing interface.)

One problem I foresee in the future: if a loaded mesh is edited in a way that adds vertices, the sets would have to be re-defined to include the new vertices.

Sunday, April 18, 2010

Set Selection

Now, a user loads in a model (any model). I need to write a GUI which allows the user to label regions of vertices with a "Set" that follows a certain naming convention ("skullSet", "ribcageSet", etc.). Then, a lattice deformer is applied to each region. The node with the figure data connects to that lattice deformer, translating, rotating, and scaling it appropriately.

I spent all day trying to do it in C++ to no avail (Maya's default lattice deformers are actually a small network of nodes, and I could only apply one to an entire object, not a labeled set region :( ), so I hooked it together mostly in MEL.
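For posterity, the MEL half of the hookup boils down to two commands fired from C++ (the set name is just an example of the convention above, and error checking is trimmed):

    // Sketch: select a named vertex set, then fit a lattice to the
    // selection. "select" and "lattice" are stock MEL commands.
    #include <maya/MGlobal.h>
    #include <maya/MString.h>

    MStatus latticeForSet(const MString& setName)  // e.g. "skullSet"
    {
        MStatus status =
            MGlobal::executeCommand(MString("select -replace ") + setName);
        if (status != MS::kSuccess) return status;
        // A 2x2x2 lattice fitted around the selected region.
        return MGlobal::executeCommand(
            "lattice -divisions 2 2 2 -objectCentered true");
    }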

Thursday, April 15, 2010

Invisible Progress

It doesn't look any different, so no screenshots for today, but here's a list of what I did:

  • Node structure is now much, much more manageable. Grouped floats into vectors.
  • Nodes link automatically, so there's no more need for MEL.
  • Rotations now take the signed angle into account.

Overall, I now have a somewhat better understanding of MDagPath and MDGModifier for node work in C++. However, I still need to make translations occur with the shoulder as the pivot point. After that, my tasks are:

  • Load and segment the body mesh.
  • Attach wire deformers.
  • Generate a skeleton.

I also got 9 hours of sleep. How unheard of. But it means I can think again.

Tuesday, April 13, 2010

Now, with Vectors!

More of a reminder to myself as to what I need to do tomorrow: I re-did everything with vectors, and thus need to re-implement the data connections to the plugs of my cube objects.

I also need to figure out how to co-ordinate the rotation and translation of the limbs. That was too much math for me just now.


During the review, Norm brought up that the cubes themselves could be treated as deforming controls for the artist. No duh, that's ingenious.

A Full Person...?

And we almost have a box-man. Something is off with my ability to calculate translation for the limbs. This is because the pivot point is automatically set to the center of the object. I should be able to figure that out, but I need more sleep.


Also, thanks to some information from Aline, I can re-organize my FigureDataNode to use vectors in place of floats, so I have fewer arguments to deal with.

Monday, April 12, 2010

Saving Strokes and the Assumptions I Have Made To Do So

In order to create the template for this, I had to make a number of unfounded assumptions about how artists draw the figure. Most are drawn from personal experience. They won't work in all cases, but an artist can adapt to them with enough patience.

In future versions, I would like to make it so that I do not have to make all of these assumptions, specifically those relating to drawing order, erasure, and multiple strokes.

I am also tossing around the idea of characters with fewer than four limbs, but that seems like a rare use case, to be addressed later. So for now, symmetry is automatic, not optional.

Essential Assumptions (based on our definition of an "animateable figure": bilateral, bipedal, four-limbed):

1. Following the "line of movement" drawing principle, all characters must have spines. This defines the bilateral quality of our figure. However, thanks to our color-type system, artists can start with either the head or the spine. And limbs can be added at any point after the spine.

2. A character's legs will be closer to the root of its spine. Its arms will be closer to the head. Therefore, the upper set of limbs will be considered the arms. We are not considering creatures with tails, so we assume that the spine line will always end close to where the leg lines begin.

3. The artist will draw the character in a reference pose. While we use simple angle math to determine some rotations, we do not take into account severe limb bending or limb crossing. As we set out to make animateable figures, any posing should be added later, using the character's skeleton.

Potentially Detrimental Presumptions (in place for now, but to be worked out later for an actually useful product):

1. Artists never erase. The program assumes that the first line drawn for a body part is an accurate representation. This is the one presumption most detrimental to usability. Actually, it's so bad it might make the tool useless.

2. Artists always draw limbs radiating from the body. This way, the pen is put down where the limb connects to the trunk and then is lifted off at the extremity. This may not necessarily be true, especially if the limbs are bent or lifted.

3. Artists always draw flat bones starting from the skull and moving downwards, continuing with the pelvis and ribcage. This is not always the case, but most figure drawing classes teach it this way, so I think this is the least worrisome.

4. The spine is straight. Curving spines would be nice and add a lot of character, but we are also working with a 2D projection, and more personality comes from the lateral view. At the moment, we need a straight axis spine because of our calculation methods.

Given the aforementioned assumptions and presumptions, each stroke is saved appropriately, into its correct location. Hooray.

The next step is to analyze these strokes to acquire measurements. Salient measurements are chosen based on those listed in the paper by Mustafa Kasap and Nadia Magnenat-Thalmann, "Modeling individual animated virtual humans for crowds" and Eliot Goldfinger's "Human Anatomy for Artists: The Elements of Form." I want to combine both a graphics programming and artistic source - because if there's anything either side lacks, it's the opinion of the other.

The measurements are then put into the FigureDataNode, which is used to translate, scale, and rotate boxes. Later, we will load a figure, but for now, we are using boxes as substitutes for body parts. Also, because I am on the clock tonight, I am going to ignore joints and treat each limb as one block. Therefore, the FigureDataNode should contain the following information as plugs.

For each flat shape, we have a box, whose transformations are defined as follows:

FlatBoneScaleX - flat bone width, from the least and greatest X values in the sketch.
FlatBoneScaleY - flat bone height, from the least and greatest Y values in the sketch.
FlatBoneScaleZ - flat bone depth, maintaining the ratio with FlatBoneScaleX and FlatBoneScaleY.
FlatBoneRotateX - 0.0, none.
FlatBoneRotateY - 0.0, none.
FlatBoneRotateZ - 0.0, none. (Could we derive this later?)
FlatBoneTranslateX - center of flat bone, based on X and Y values from the sketch.
FlatBoneTranslateY - center of flat bone, based on X and Y values from the sketch.
FlatBoneTranslateZ - 0.0, none.

At the moment, we are approximating flat bones with axis-aligned bounding boxes. Also, because we assume the spine is straight, we don't have to derive certain parameters such as RotateZ.
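As a sketch of how the flat-bone plugs might be declared on the node (the attribute grouping and names here are illustrative only; each triple is folded into one point attribute, and the rotations would more properly be unit attributes):

    // Hypothetical declaration of the flat-bone plugs on a custom node.
    #include <maya/MPxNode.h>
    #include <maya/MFnNumericAttribute.h>

    class FigureDataNode : public MPxNode
    {
    public:
        static MObject aFlatBoneScale;      // (ScaleX, ScaleY, ScaleZ)
        static MObject aFlatBoneRotate;     // all zeros while the spine is straight
        static MObject aFlatBoneTranslate;  // (centerX, centerY, 0)

        static MStatus initialize()
        {
            MFnNumericAttribute nAttr;
            aFlatBoneScale = nAttr.createPoint("flatBoneScale", "fbs");
            nAttr.setStorable(true);
            aFlatBoneRotate = nAttr.createPoint("flatBoneRotate", "fbr");
            nAttr.setStorable(true);
            aFlatBoneTranslate = nAttr.createPoint("flatBoneTranslate", "fbt");
            nAttr.setStorable(true);

            addAttribute(aFlatBoneScale);
            addAttribute(aFlatBoneRotate);
            addAttribute(aFlatBoneTranslate);
            return MS::kSuccess;
        }
    };

    MObject FigureDataNode::aFlatBoneScale;
    MObject FigureDataNode::aFlatBoneRotate;
    MObject FigureDataNode::aFlatBoneTranslate;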

However, we have to take more care with long bones. For all of these, we have moved the pivot point to one end of the bone:

LongBoneScaleX - 1.0, default.
LongBoneScaleY - long bone length, the distance between the curve's endpoints in the sketch.
LongBoneScaleZ - 1.0, default.
LongBoneRotateX - 0.0, none.
LongBoneRotateY - 0.0, none.
LongBoneRotateZ - take the arm's vector and find the angle between it and the spine axis (sketched after the measurement notes below).
LongBoneTranslateX - X of the curve's starting endpoint, from the sketch.
LongBoneTranslateY - Y of the curve's starting endpoint, from the sketch.
LongBoneTranslateZ - 0.0, none.

WaistGirth - 0.7 * PelvisScaleX gives the ideal waist-to-hip ratio, a good starting point. This can be adjusted later with curves.
ScyeLength - distance between the arm-trunk connection and the base of the skull.
ShoulderLength - distance between the two arm-trunk connections.

Overall height and width can be calculated by adding the heights and widths of the various body parts above. More accurate measurements will come later; these will allow for greater variation. For example, we can scale the top four vertices of the ribcage box to approximate the underbust. Shoulder width will be the distance between the arm-trunk connections. The scye length will be the distance between the arm-trunk connections and the base of the skull. Waist girth is trickier; for now, we will use 0.9 * PelvisScaleX, an ideal masculine waist-to-hip ratio. The waist is mostly fat and soft tissue, so it should be adjusted during the StyleCurve stage.
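Of the rotations above, LongBoneRotateZ is the only non-trivial one. A sketch of that angle math, with my own 2D point type standing in for the sketch points (the signed form matters, so left and right limbs rotate in opposite directions):

    // Hypothetical helper: signed angle (radians) between a limb's vector
    // and the straight spine axis (+Y), for LongBoneRotateZ.
    #include <cmath>

    struct Point2 { double x, y; };

    double longBoneRotateZ(const Point2& limbStart, const Point2& limbEnd)
    {
        double vx = limbEnd.x - limbStart.x;
        double vy = limbEnd.y - limbStart.y;
        // atan2(cross, dot) against the spine axis (0, 1) gives a signed
        // angle; here cross = vx * 1 - vy * 0 and dot = vx * 0 + vy * 1.
        return std::atan2(vx, vy);
    }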

In the far future, we can make a skeleton by taking the rotation information and then calculating bone lengths based on the translation and scale.

Sunday, April 11, 2010

Look ma, limbs!

Sort of. I still need to scale, move, and rotate the boxes appropriately, but hey, it's parsing limb length fine and differentiating between each of the four limbs.

As this is for bipedal, bilateral humanoid characters, we assume there are left and right arms, left and right legs, and no more. Let's save the six-legged, winged horses from Avatar for version 2.0.


Rather than take a sketch and parse it, we actually record as the artist draws. Matching the familiar "QWER" format of Maya itself, artists can switch between drawing the spine, flat bones (not yet implemented), and limbs. Each is differentiated by a different color. I went for the less eye-burning ones.

The spine is always stored from the pelvis to the head, even if it was not originally drawn that way. This will help in the future with skeleton generation, where the root is commonly placed in the pelvis. We determine whether reversal is needed through a simple test of the endpoints' Y co-ordinates.
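That test is as simple as it sounds; as a sketch, with a hypothetical point type:

    // Sketch: if the stroke's first point is higher than its last, the
    // spine was drawn head-first, so reverse it to pelvis-to-head order.
    #include <algorithm>
    #include <vector>

    struct Point2 { double x, y; };

    void orientSpine(std::vector<Point2>& spine)
    {
        if (spine.size() >= 2 && spine.front().y > spine.back().y)
            std::reverse(spine.begin(), spine.end());
    }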

We determine which curve is which very roughly. No limbs can be drawn until the spine is. Most artists won't even consider drawing limbs before the spine, but it's a good precaution. We assume that artists always draw limbs going away from the trunk. At the moment, there is nothing coded in to enforce this, and hopefully I can tend to this later.

First, we determine if a limb is on the left or right of the body (again - we assume a bilateral figure). The spine is simplified into a straight vertical line at a single X position. Anything entirely or mostly to its left is a left limb. Anything entirely or mostly to its right is a right limb.

Then we figure out whether it's an arm or a leg. Because we assumed it was drawn away from the trunk, we take its first point and compare its distance to each endpoint of the spine. If it's closer to the pelvis, we assume it's a leg. If it's closer to the head, we assume it's an arm.
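Putting those two tests together, the classification sketches out roughly like this (the helper names and types are mine, not final code):

    // Sketch: classify a limb by side (majority of points left or right of
    // the simplified spine axis) and by type (first point nearer the
    // pelvis means leg, nearer the head means arm).
    #include <cmath>
    #include <vector>

    struct Point2 { double x, y; };
    enum Limb { LeftArm, RightArm, LeftLeg, RightLeg };

    static double dist(const Point2& a, const Point2& b)
    {
        return std::hypot(a.x - b.x, a.y - b.y);
    }

    Limb classifyLimb(const std::vector<Point2>& limb, double spineX,
                      const Point2& pelvis, const Point2& head)
    {
        int leftCount = 0;
        for (size_t i = 0; i < limb.size(); ++i)
            if (limb[i].x < spineX) ++leftCount;
        bool isLeft = 2 * leftCount > static_cast<int>(limb.size());

        // The limb is drawn away from the trunk, so its first point is
        // where it attaches to the body.
        bool isLeg = dist(limb.front(), pelvis) < dist(limb.front(), head);

        if (isLeg) return isLeft ? LeftLeg : RightLeg;
        return isLeft ? LeftArm : RightArm;
    }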

Once we know what we're dealing with, we check if we need to mirror it. If an artist doesn't draw a limb on the left side but does on the right, we mirror the right limb to the left, and vice-versa. We do this by mirroring the simplified limb over the vertical axis formed by the spine.
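The mirroring is just one reflection per point over the spine's X position (same hypothetical types as above):

    // Sketch: reflect a simplified limb across the vertical spine axis to
    // synthesize the missing opposite limb.
    #include <vector>

    struct Point2 { double x, y; };

    std::vector<Point2> mirrorLimb(const std::vector<Point2>& limb, double spineX)
    {
        std::vector<Point2> mirrored(limb);
        for (size_t i = 0; i < mirrored.size(); ++i)
            mirrored[i].x = 2.0 * spineX - mirrored[i].x;
        return mirrored;
    }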

Each curve is simplified (we keep only every 10th drawn point, going from, say, 287 points to 28, which may still be too many) and saved into a vector. The endpoints of each vector are used to determine length. A lot of curve information is lost, but I feel that's okay, since we don't want to distort our future imported mesh too much. Also, I know we will need straight lines for making future skeletons. It's all put into our handy-dandy FigureDataNode from last time.
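The simplification itself is nothing fancy; a sketch (I keep the final point explicitly, since the endpoints determine length):

    // Sketch: keep every 10th raw point, plus the last one.
    #include <vector>

    struct Point2 { double x, y; };

    std::vector<Point2> simplifyStroke(const std::vector<Point2>& raw)
    {
        std::vector<Point2> simplified;
        for (size_t i = 0; i < raw.size(); i += 10)
            simplified.push_back(raw[i]);
        if (!raw.empty() && (raw.size() - 1) % 10 != 0)
            simplified.push_back(raw.back());  // always keep the endpoint
        return simplified;
    }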

Now, if only I could erase. :(

Saturday, April 10, 2010

Proof It Worked


I was able to extract a height from a line I drew on the canvas. Look, the node which contains the figure data has a height! (The default is 1.)

Now, to connect the value I extracted to the cube I made.


Well, automatically. I did this with MEL, which is... well, cheating.

Wednesday, April 7, 2010

Scaling Cubes and Defining Human Template

Not feeling well today, but there's been moderate progress. Hopefully it makes sense, given that everything previously behind this blog has been handwritten chicken scratch.

When the user draws a sketch, the curve information is saved into a FigureSketch object. It's mostly just arrays of points (one for the spine, one for each long bone, etc.), but it also has the functionality to calculate, from these curves, the starting values for the FigureDataNode. Unlike the FigureSketch object, the FigureDataNode is a Maya node which persists after the QT Tablet Application is closed.

Upon closing the tablet window, the plug-in generates a mesh (a cube for now; later it will load .ma files for figures). I need to link the outputs of the FigureDataNode to the inputs of the mesh (say, FigureDataNode.outHeight -> Mesh.scaleY). C++ is being whiny, but I can do it in MEL easy as pie, which is where this picture comes from. Worst comes to worst, I hack the connection together with MGlobal::executeCommand():
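For reference, the C++ attempt looks roughly like this, with the MEL hack as the last-resort fallback (the node names in the MEL string are made up for illustration):

    // Sketch: connect FigureDataNode.outHeight to the mesh transform's
    // scaleY with MDGModifier; if that stays whiny, fall back to MEL.
    #include <maya/MDGModifier.h>
    #include <maya/MFnDependencyNode.h>
    #include <maya/MGlobal.h>
    #include <maya/MPlug.h>

    MStatus connectHeight(const MObject& figureDataNode, const MObject& meshTransform)
    {
        MStatus status;
        MFnDependencyNode srcFn(figureDataNode);
        MFnDependencyNode dstFn(meshTransform);

        MPlug src = srcFn.findPlug("outHeight", &status);
        MPlug dst = dstFn.findPlug("scaleY", &status);

        MDGModifier dgMod;
        status = dgMod.connect(src, dst);
        if (status == MS::kSuccess)
            status = dgMod.doIt();

        if (status != MS::kSuccess)  // worst comes to worst: the MEL hack
            status = MGlobal::executeCommand(
                "connectAttr figureDataNode1.outHeight myCube.scaleY");
        return status;
    }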


I have also done some work figuring out what "rules" and assumptions we can make to derive parameters from the sketch. Most importantly, it assumes we are generating humanoid figures - bipedal, bilateral, head-abdomen-thorax things. These calculations and assumptions are made in the FigureSketch class before it puts these values into the FigureDataNode.


The next challenge is defining body parts for section stylization. The user can easily transition between colors and sketch modes (spine, long, flat) in the QT Window. However, I am trying to figure out how to make these correspond to areas on a single mesh in Maya (areas on one mesh being very different from, and entirely more difficult than, the whole mesh). My initial reaction was to use Quick Select Sets, but that may not be possible in the API. However, I think it's possible in MEL. The alternative would be creating a C++ structure which runs sort of in parallel to the Maya Quick Select Set - taking in each vertex location, applying the transformations, and then changing the vertex locations again.
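If the MEL route pans out, defining a region would be a one-liner per set, fired through the same executeCommand escape hatch (the mesh name and vertex range here are made up):

    // Sketch: create a named vertex set with the stock MEL "sets" command.
    #include <maya/MGlobal.h>

    MStatus makeRegionSet()
    {
        return MGlobal::executeCommand(
            "sets -name skullSet nancyMesh.vtx[0:255]");
    }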

Rage.

Tuesday, April 6, 2010

More Shader Progress

I fixed up the loop errors I was having, so now the final shader node combines layers to achieve its final color.

In this example, for simplicity of the network, I'm only using Layer nodes with base colors (no pigment fraction).
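For the curious, the base-color-only combine amounts to blending each layer over the accumulated color beneath it. A toy sketch with stand-in types (the per-layer weight is my own placeholder; thickness and scattering come later):

    // Toy sketch: blend layers bottom-to-top; "coverage" is a stand-in
    // blend weight, not the eventual pigment fraction.
    struct Color { float r, g, b; };
    struct Layer { Color baseColor; float coverage; };  // coverage in [0, 1]

    Color combineLayers(const Layer* layers, int count)
    {
        Color out = { 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < count; ++i) {  // index 0 is the bottom layer
            float a = layers[i].coverage;
            out.r = layers[i].baseColor.r * a + out.r * (1.0f - a);
            out.g = layers[i].baseColor.g * a + out.g * (1.0f - a);
            out.b = layers[i].baseColor.b * a + out.b * (1.0f - a);
        }
        return out;
    }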


By following the Lambert shading example included in the Maya development kit, I have been able to forgo writing a custom renderer. My shader now works with Maya Software - the default rendering package - which gives me sweet stuff like shadows. This is nice. It knocks a full task off my list and makes my shader more functional!


I now need to write the more complex equations, which take layer thickness and ray scattering into consideration as the light goes through the layers.

Better Than Nothing... Shader Progress

I've been working on the shader, since there is a beta review tomorrow. Here's what it looks like at the moment. I'm trying to get it to render as a Lambert before I start doing too many fancy layer things.

I rendered this using ordinary Maya Software. The light is red, which explains the ground plane and all. The right cube uses the default lambert1 shader and exists for comparison.

Not enough time. Not enough time. Not enough time!