Tuesday, November 08, 2011

Creating non-renderable "implicit" spheres, cones, cubes

Not sure how I didn't know about these until recently. The names sound so passive-aggressive. . .

createNode implicitSphere -n "name";
createNode implicitBox -n "name";
createNode implicitCone -n "name";


Each of these commands basically creates a non-renderable object (sphere, box, cone) that has a very basic shape node (no components) and a transform node. You can still go into the shape node and change the color using the drawing overrides and all. Doubt I'll be using them all the time or anything, but cool to know they're there if ever you should need them.
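For instance, here's a quick sketch of creating one and coloring it via the drawing overrides on the shape node (the name and the color index here are just placeholders, obviously):

{
// create the implicit sphere (a transform with a simple, non-renderable shape)
createNode implicitSphere -n "myImplicitShape";
// turn on the drawing overrides on the shape and pick an index color (17 should be yellow)
setAttr "myImplicitShape.overrideEnabled" 1;
setAttr "myImplicitShape.overrideColor" 17;
}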

Note on "distanceBetween" nodes . . .

A kind fellow named Maarten sent me an email asking about the distanceBetween node when using objects that have frozen transformations. More specifically, that it didn't work correctly after freezing the transforms. Absolutely correct, and I should have mentioned it in the video.
Basically once you freeze transforms on an object you sort of "reset" the matrix that the object lives in. As far as Maya is concerned internally, these objects now all live at 0, 0, 0, despite the fact that their pivots might be any old place.

Since I don't know much (read: anything) about matrix math, I'm not sure exactly what the technical description of what is happening is, but you're probably familiar with the results. If you're not, try this: create a cube, then rotate, scale and translate it. Then freeze transforms on it. Now look at the manipulator handles for the rotates (in local mode) and the translates (in object mode). Not only have the values in the channel box been reset to 0's and 1's, but the object no longer has any idea of its own orientation relative to the world. So any values relative to anything else in your scene are gone, and thus things like distanceBetween (and loads of other stuff) don't really work any more. There are a bunch of little tricks you can use (sometimes) to pull out some information about the object, but things like world space position and orientation are kind of messed up. You won't run into this with things like joints very much, as you're probably not freezing them, but depending on how you're doing things elsewhere, this could be something to be aware of.
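For what it's worth, one of those little tricks: even after freezing, you can usually still pull the world space pivot out with xform, since the rotate pivot generally survives the freeze (a quick sketch, assuming a frozen cube named "pCube1"). The orientation, though, is still a lost cause:

{
// query the world space rotate pivot, which sticks around after a freeze
float $pivot[] = `xform -q -ws -rotatePivot "pCube1"`;
print $pivot;
}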

Some notes about this:
a) This is yet ANOTHER reason why it's smarter to "group freeze" or "group orient" your controls or anything else whose position/orientation you care about. By grouping an object at the origin on creation and then moving the group, not the object, you avoid the need to freeze transforms on anything.
b) When you're modeling for rigs, you DO want to freeze the transforms (usually), but it's probably better to wait until you're pretty well finished with the model, THEN clean it up (freeze, delete history, etc). Most of these objects will end up being controlled by other things (joints, etc) anyways, but it's nice to have older versions of the file that have the orientations, etc (even better if you use groups here too, but let's not get too fussy)
c) in the ordinary course of events it's, of course, totally fine to use the standard measure distance tool, with its UI component and locators and all. When you actually want to see the values, it's actually easier to use this method because of the visual feedback it gives. I'm guessing that this freezing business is why these tools don't measure the objects directly, but instead use the locators.
d) if you have frozen objects (whose world space matrix has been messed up at some point) and you still want to use the cleaner method of the "distanceBetween" nodes, you can just create some null groups (one for each object) and point constrain them to the objects. Then you can either delete the constraints and parent the nulls under the objects, or leave the constraints and stash the nulls away somewhere in the outliner. Then you just hook up the nulls to the distanceBetween, just like you would any other objects. A bit cleaner visually than using locators, but you could use those too if you want.
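A rough sketch of that last trick, assuming two frozen objects named "objA" and "objB" (change the names to whatever you're actually using):

{
// the frozen objects we want to measure between (placeholder names)
string $objs[] = {"objA", "objB"};
string $dis = `shadingNode -asUtility distanceBetween -n "frozen_DIS"`;
for ($i = 0; $i < 2; $i++){
    // null group snapped to the object with a point constraint
    string $null = `group -em -n ($objs[$i] + "_measure_GRP")`;
    pointConstraint $objs[$i] $null;
    // then hook the null up to the distance node like any other object
    connectAttr ($null + ".worldMatrix") ($dis + ".inMatrix" + ($i + 1));
    connectAttr ($null + ".rotatePivotTranslate") ($dis + ".point" + ($i + 1));
    }
}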
Anyways, thanks to Maarten for pointing out the omission!

Friday, October 07, 2011

Dealing with smoothing in your character rigs

Was literally thinking about whether to include this topic in another vid and someone emailed me and asked me about this specifically. Weird!
This video sort of quickly goes through some ways to include various smoothing levels in your rig, from lo-poly proxy geo to highly smoothed geo for rendering.
Here's the code I use in the video:
1) for grabbing ALL polySmoothFace nodes in the scene and connecting them to your master control in the rig scene (note you should change any appropriate names, obviously, to ones you're using):
{
// grab every polySmoothFace node in the scene
string $smooths[] = `ls -type "polySmoothFace"`;
// and wire the master control attribute into each node's division level
for ($each in $smooths){
    connectAttr master_CTRL.hiPolyLevel ($each + ".divisions");
    }
}

2) for changing the smooth state of any selected master control objects (probably best to use this as a mel button on your shelf, and of course change any names/values to what you want):
{
string $sel[] = `ls -sl`;
// set the smooth state attributes on each selected master control
for ($each in $sel){
    setAttr ($each + ".loPoly") 1;
    setAttr ($each + ".hiPolyLevel") 0;
    }
}

Wednesday, October 05, 2011

Creating stretchy joint chains

Been meaning to do this tutorial for a while, but never got around to it. Some fella named Jeremy Parrish did a tutorial on this subject that I saw recently (http://vimeo.com/29466144), and he does a few things differently than I would.
1) I now use the "distanceBetween" node in rigs. Easier, cleaner.
2) WHAT we're measuring makes a difference. So a bent leg, for example, needs to be measured differently than a straight leg. Subtle, but important distinction that needs to be looked at depending on your model.
3) by measuring the leg joints that are being stretched (or any static, constant measurement), you have issues with scaling. These can be solved (as Jeremy shows), but that may, in fact, create some weaknesses in your rig that can pretty easily cause problems.
So here's a vid where I walk through a stretchy leg the way Jeremy shows and then with a few modifications.

creating a nice clean distance node using "distanceBetween"

Here's another neat little node, called the "distanceBetween". As you might guess, it measures the distance between two objects. What's nice about this is that you don't get the extra gunk that comes with the measure tool, like locators and UI stuff, which one typically doesn't need in a rig.
The only tricky part here is that in a DAG hierarchy you can't really use the translate attributes as the measure inputs, because those don't reflect world space positions. So you'll need to hook up one of the pivots for each object and then feed the "worldMatrix" attribute of each object into the respective input matrix of the distance node. Huh? Just watch the video and I'll walk through it.
The basic code for the node itself is:
shadingNode -asUtility distanceBetween -n "name";

BTW, here's the code I used in the video for creating this distance node . . . You can just copy it into your script editor and make a quick mel button for it. I've added just a quick dummy check to make sure there are 2 transforms selected.
{
// grab the selected transforms and make sure there are exactly 2
string $sel[] = `ls -sl -tr`;
if (`size($sel)` != 2){
    error "please select 2 transformable objects to measure";
    }
string $name = "distance";
// create the node as a utility so it shows up in the hypershade
string $dis = `shadingNode -asUtility distanceBetween -n $name`;
// worldMatrix plus rotatePivotTranslate gives a world space point for each object
connectAttr ($sel[0] + ".worldMatrix") ($dis + ".inMatrix1");
connectAttr ($sel[1] + ".worldMatrix") ($dis + ".inMatrix2");
connectAttr ($sel[0] + ".rotatePivotTranslate") ($dis + ".point1");
connectAttr ($sel[1] + ".rotatePivotTranslate") ($dis + ".point2");
}

Sunday, September 25, 2011

Dealing with Double Transforms

Double transforms are probably the single most frustrating thing to deal with when you're first learning rigging. What can be even more frustrating is when these issues pop up after it seems like you've got a perfectly working rig and you just want to clean it up in the outliner!
Fortunately, the issues are almost always in one of a couple of relatively simple categories and you get accustomed to dealing with them fairly quickly. This is a video talking about some of those basic scenarios and some ways to address them. Hopefully enough to help get you started in solving some of the more complex issues you may run into down the line.

Maya/Rigging: Dealing with Double Transforms from zeth willie on Vimeo.

Using the Point on Curve Info node

I always think this kind of stuff is pretty cool. . . a) this node is semi-secret/hidden and you'll feel pretty cool showing it to someone who's not aware of it (though don't be too smug. No one likes that asshole) b) you get to create some stuff using the command line, which again is pretty sweet and nerdy and c) the pointOnCurveInfo node is exactly the kind of node that riggers like. It just does what it's sposed to do with no extra fussing around or additional junk.
Anyways here's a video talking about how to access this node and some basic ways to use it (namely for sticking stuff onto a user-defined point on a curve).
A quick tip, which is in the video and which is a reiteration of info I've posted before. . .
Rather than using:
createNode pointOnCurveInfo -n "name";
which will give you the node, but leave it hidden from view in the hypershade, I prefer:
shadingNode -asUtility pointOnCurveInfo -n "name";
which does basically the same thing, but flags the node as a "utility" and thus puts it under the Utility tab in the hypershade. Just thought I'd point that out again.
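And here's the basic hookup from the video, sketched out (assuming a curve named "curve1" and a locator named "locator1" to stick onto it; change the names to yours):

{
string $poc = `shadingNode -asUtility pointOnCurveInfo -n "curve1_POC"`;
// feed the curve's world space shape output into the node
connectAttr "curve1Shape.worldSpace[0]" ($poc + ".inputCurve");
// pick the (user-defined) point along the curve's parameter range
setAttr ($poc + ".parameter") 0.5;
// and drive the locator's position from the node's output
connectAttr ($poc + ".position") "locator1.translate";
}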

Maya/Rigging: Attaching objects to a curve (using pointOnCurveInfo node) from zeth willie on Vimeo.

Friday, September 02, 2011

Why so many groups?

Been asked by a few people (in my rigging class and elsewhere) about why there are so many groups in my rigs. So thought I'd post a quick video talking about some ways to use groups (kind of like nulls in After Effects) as part of the functionality of my rigs.

Maya - Using groups in your rigs from zeth willie on Vimeo.

Swapping out rotation orders under an animation

Not an everyday thing to use, but useful when something like this crops up. . . 
Rotation orders are always kind of important, but on certain animations it can be pretty necessary to manage them correctly (I'll try to do a separate post on rotation orders in general later).
I was just on a commercial recently where I was tasked with animating a couple of leaves fluttering into position. Somehow I got something sideways in one of the leaves and already had a fairly decent animation base down before I realized it. One leaf was correctly oriented in the rig to point in X and the other got perpendicular in Z. I built the entire rig for both as though they were pointing in X with the default rotate order (XYZ) and thus had two slightly different behaviors. I couldn't get a clean rotation down the axis of the leaf that was oriented in Z because the rotation order was incorrect (should have been ZYX, if I remember correctly). The general rule of thumb is that you usually want any channel that you'd like clean to be FIRST in the rotate order (though technically, that means it rotates last). Anyways, that's a fairly easy fix in the rig, just had to go back to my working version and change the rotate orders of the control structure elements, publish it back to the master version and done.
BUT, since I'd already animated a bunch of stuff, swapping the rotation orders under the published version of the rig would mess all the rotations up. Didn't want to have to match each keyframe by eye, but there seemed to be no numerical way to match stuff up. I was banging my head against the wall when I asked Brad Friedman, who's a really smart pipeline TD at Method and he told me unequivocally, "nope. Can't be done mathematically".  My next thought was to try to go to a key, orient constrain the object, delete the constraint, key it and repeat. The evil trick, though, is that this isn't even doable by hand (except to go through and key by eye), because even if you were to try and constrain then delete the constraint, then key the value, it would blow away any keys you previously had. So you can do that once, but not multiple times over the length of the animation. Ugh.

So I came up with a quick little script (with a python assist from Brendon, who's another smart pipeline TD there).
Basically the idea is this: you have a scene with your incorrect rotate orders already animated. Import your new rig with the correct rotation orders. Select the animated original, then the new rig, then run the script. It will find any keyframes for rotation and translation. For each keyframe, it will point and orient constrain the new rig to the old rig and store the translate and rotate values of the NEW rig (which are the numbers, at least for rotation, that aren't derivable from the orig data) and then, once it's stored them all, go back and key those stored values at the correct frames for your new object. Finally, it will run an Euler filter on the lot of the new curves (I noticed some rotations can get a bit weird without it).
Not perfect but serviceable and fast. Which is good cuz it was my mess up in building the rig that started the whole problem!

So here's the script:
zbw_swapRotateOrder.py.zip (rt-click,save as)

To use this, drop it in your Maya version's scripts folder (or wherever else you stash your python scripts) and then in Maya, select the original object, then the new rotation-order rig and type:
import zbw_swapRotateOrder
zbw_swapRotateOrder.swapOrder()

Once you've imported the script, you just need the second line to run the function thereafter, I believe. (I can write the python scripts fairly simply, but using them in Maya still seems like a pain to me)

Thursday, August 11, 2011

Follow up some Linear Workflow stuff for Maya 2011 +

Been meaning to do this for a while, but never got around to it . . . And I got scooped a little bit! Derek Flood did a couple of short videos about this and lit a fire. Thanks Derek!
Anyways a quick overview (as quick as I can do it, everyone else seems to have a better grasp of the brevity thing than I do) of how linear stuff embedded in Maya 2011/2 relates to what you may have been doing before . . . not expansive, just pointing a few things out.
*EDIT* Derek was kind enough to point out some of the specific bugs/weirdnesses he's found in the comments. Check it out if you're interested!

Maya/Mental Ray: Linear Workflow in Maya 2011 from zeth willie on Vimeo.


HERE'S (rt click, save as) the updated zbw_gamma script that works by shader selection and sticks gammas into the color channels. You can also adjust any/all gamma nodes in the scene or delete them globally.

Wednesday, August 10, 2011

Creating a reflection "sheen" pass for your scene

Every semester I end up explaining this in a really rushed and half-hearted way to a bunch of poor students who are in my animation class. Rather than do that again, I finally wised up and made a video about it. . .
Somehow my videos seem to get longer and longer, so I split this up after the fact and made two videos (it's like my inner self is telling me to do less videos and more text-y type tutorials).
The first video is the basic stuff for setting up a render pass that will give a nice shiny reflection pass for your scene (stuff like a cell phone or TV in a commercial) and the second chunk of it gives a few ways to customize this concept and make it more reusable, using things like RGB passes for various different reflections, etc.
This is something that is really useful to know how to do in production, especially for stuff like commercials, so hopefully I haven't made it tooooo loooong and boring . . .

Maya/Mental Ray: Rendering a reflection "sheen" pass - part 1 from zeth willie on Vimeo.


Maya/Mental Ray: Rendering a reflection "sheen" pass - part 2 from zeth willie on Vimeo.



BTW, HERE'S the script that I use at the end for pulling stuff out to a render layer with custom color overrides . . . (rt click, save as)

Tuesday, August 09, 2011

A little script for those big scenes . . .

A little script from a job with a HUGE environment (basically a city made to scale in cm units. Good grief). Just to help get things moving more quickly I threw this together. The camera layouts were set up and we had to throw the characters into the scene, but when we referenced them, they came in at the origin, which might be tens of thousands of units away from the camera position.
So to ballpark things I made this script. . . Really simple actually: select your camera, enter a distance and hit "create Locator", which will create a locator at the specified distance in front of the camera. (Or you could make your own, though that kind of defeats the purpose.) You can then move the locator around if you're not happy with the distance, or undo and try again with a new distance.
Then go to the outliner and select the locator you want, then the group or object you want to place at the locator, and hit snap. The script will just snap the position of the object to the locator in front of the camera. . .
Not a big deal, just something to ease the pain of trying to pull things around a scene that's way too big for its own good :)
HERE is the script if you want to have a go.  (rt click and save as)

Creating an auto-swimming fish rig using expressions

A while back I posted a video about using joint chains in rigs, and one of the things I demoed was a fish rig I had created for Method Studios. I got a bunch of questions about how I set that up, so here are a couple of videos about using expressions to control the auto-swim part of that rig (which IMO is the tricky part).
If you're an expert at expressions and such, probably nothing new here but if you're not this might be a good example of using expressions in rigs (for things other than auto-stretch, which seems to be the most common thing) . . .

Maya/Rigging: Creating an auto-swim fish rig using expressions, Part 1 from zeth willie on Vimeo.


Maya/Rigging: Creating an auto-swim fish rig using expressions, Part 2 from zeth willie on Vimeo.

Monday, August 08, 2011

Follow up on corrective shapes for Maya 2011 . . .

Sorry for the lack of posts, been pretty busy (which one can never complain about these days, though I try). Seems like this will be the M.O. generally speaking. . . Nothing for a while, then a flurry of stuff.
Well, get ready to be flurried on, cause I've got a bunch of stuff to throw up here in the next few days while I have moment :) Let's start with some follow up material:

Did a post on pose space deformation, or corrective shapes, a little while ago. Here's an update of some scripts, etc that may work for you (mileage may vary) in later versions (post 2010). I haven't had much time to play with these unfortunately, but if I come up with anything else, I'll let you know . . .

extract deltas from brave rabbit playground
This works nicely . . . but be careful about the scaling and transforms of your orig mesh. Seems like the script has trouble recognizing scale values in the bound mesh (so you'll get a tiny or giant blendShape mesh as your result if there are scale values other than 1 on the orig mesh. Which there shouldn't be, but still . . .)

http://www.chadvernon.com/blog/resources/cvshapeinverter/ - cvshape inverter from Chad Vernon
Watch some of this guy's videos. Seriously off-the-hook skills. This plug-in is giving me some trouble though. Seems from the comments like some people have gotten it working. Not me. Might be my stupidity re: how python stuff works, might be something to do with trying it on my Mac, but while I'm seeing the plug-in, I'm not getting the script version to work; it fails on "import". (I tried this on a PC at my last gig but didn't have access to the plug-ins folder to install everything. But it seemed closer to working there than on my Mac. Hmmm)

http://www.djx.com.au/blog/downloads/ - look for "poseDeformer" towards the bottom
This seems pretty interesting and certainly has lots of options . . . check out a walkthrough of features here. Seems like it was written a while back and is just being updated for newer versions of Maya. But also seems like overkill in most instances (at least for me) and because of all the bells and whistles, seems like it may require ALL the artists on the team to have it installed (at least those that need to see the deformation at the rig level). Can't even be fussed to install this :p 

Tuesday, June 14, 2011

Pipeline Basics

Pipeline stuff is a topic that TDs and 3D people (in particular) talk about a lot in the real world. The reason is that whatever you're working on, using whatever techniques, at whatever scale, you always need to be going through a pipeline (even if it's an almost nonexistent one). Since the state of the pipeline will ALWAYS be affecting the quality of your work and your productivity, just a little bit of effort on this front will affect every job you do, usually positively. And I'm not even talking about the nuts and bolts of networking architecture or any heavily technical stuff, which most successful studios also spend a lot of effort and money on.
The main issue I've come across in studio work in NYC (and in almost all student work) is that, because any work you put into thinking about and developing a pipeline will pay benefits in a kind of diffuse way, it seems that lots of people would rather spend that time (and $) doing production work instead because it seems to pay more immediate dividends. The result is that, at some studios I've worked at, there's almost no pipeline or institutionalized workflow at all. Unsurprisingly, the result is that the same issues keep cropping up on every production.
Some things that I see regularly:
1) No naming conventions at all for either files or even folders. This can lead to mass confusion on a big job, or even a smaller job when a new artist has to step in. Where are the files? Which is the latest version? What does this abbreviation stand for? Which files should I reference?
2) No standard for passing files from one dept to the next. Without some kind of publishing system to create master files, the chances for hiccups (or worse) when using references, etc grow exponentially. What is the plan when you're animating and someone realized a modeling change needs to happen? How do you propagate a change from the front of the pipeline through to the end of the pipeline late in the game?
3) Since there's no standardization in naming or methodology, it's very difficult to develop any tools to help facilitate some tedious tasks. Even little chunks of custom code across a studio can be a huge boon to artists and actually prevent a lot of time/effort-draining errors.
4) It sounds obvious, but I spend a TON of time hunting for files, references, and project spec documents just to figure out what I'm supposed to be doing. When you multiply all that time by all the artists on all the jobs, you realize that some studios spend thousands, sometimes tens of thousands of dollars per year on time where artists aren't actually doing anything productive. Studios that deal well with these issues often have things like production calendars for all the artists to see their deadlines (and their interconnectedness to the pipeline as a whole), areas (on a computer or IRL) to look at the most recent/relevant artwork and ref material, an easy-to-find collection of tools and shaders, regular team meetings (where input is heard and addressed) during production, etc.
I know it sounds like I'm slagging the studios, but I don't mean it that way. There are lots of things involved here. Yes, in some cases studio heads are just ignorant of the business they are in (and remember I work in NYC, so most of the studios I deal with are relatively small. I doubt there are many big 3D studios that don't take pipeline issues very seriously, certainly none I've worked at). Mostly I see this ignorance with design and 2D studios that are ramping up to try to get 3D work. I worked with a studio a few years back that supposedly lost over $200,000 doing their first BIG 3D job. They hit almost every point I made above in terms of issues with pipelines. They're not around anymore.
But most of the time it just boils down to not making a concerted effort in developing the infrastructure that's required to do 3D work efficiently. McDonald's and Coke and Walmart and all other successful companies (whatever your opinion of them) are successful largely because they understand their business model really well and make huge efforts to stamp out inefficiencies. McDonald's may not make the best burger (I'm pretty sure they don't), but they probably make the most efficient burger, and that's part of why they are so successful. On the flip side (no pun intended), many of the producers I've worked with over the years know NOTHING about 3D production specifically. How can they possibly be expected to allot the time and people-power to developing a pipeline when they don't know what that even is? Many studios hire producers on a freelance basis, so where would the motivation be to develop in-house tools and other "invisible" things when you're expected to bring in a specific project on a tight budget? Many 3D leads are overworked and don't have the bandwidth in their day to spend a lot of time working on this either. Coding specific tools would probably require hiring a specialist, which can cost a bunch of money, and often no persuasive argument is made for doing this. And of course, money and time are always tight, so things like pipeline infrastructure are often the last things to be dealt with. But in the end it really is like running a burger joint without knowing where you source your meat or potatoes: it doesn't make sense to ignore your pipeline when THAT'S THE BUSINESS YOU ARE IN!
Lastly (and, in all honesty, mostly the reason I care), most of the "lifestyle" issues in this business that adversely affect us all (by which I mean crazy late hours, lazy and inefficient scheduling, etc) are directly correlated with all that I've mentioned above. In my experience, studios that have really tight pipelines have, almost by definition, a better understanding of the business. They know how long x, y and z take, they know where their man-power is going and steer it in productive ways, and maybe because of that, tend to have more realistic and "human" schedules. I've been on long jobs where, within 20 minutes of sitting down, I can spot a month of all-nighters coming from a mile away. It almost always has to do with the thought and effort put in long before I sit down. Obviously, stuff happens and late nights are required sometimes, but respect for the craft and the people who do it can be manifested in a real and tangible way. When a studio spends the time and energy to figure out how to best do the work that they're paid to do, my experience has been that they're also more likely to respect the people who do that work and to treat them that way (I can't speak for McDonald's in this regard).
A long rant. Whew. So you know, this is also something that I try to talk about with my master's program students, trying to instill some rigor in the process by which they go about their work, whether on their thesis projects, on personal projects, and certainly when they get into the working world. Process isn't everything, but it sure helps.
Here's a video talking about some of the basics of this (organization, naming, publishing). Nothing very technical or high level, just the basics to get started.

Maya: Pipeline Basics from zeth willie on Vimeo.

Pose Space Deformation

Here's a video talking about the basics of pose space deformation. This is definitely one of those things that one should be aware of as a rigger. The basic idea of pose space deformation is to create modeling fixes for certain areas based on the poses that that area will hit (a lot of times you can use the term "corrective blend shapes" for this type of thing as well). The math/programming behind creating pose space deformations can get tricky, but fortunately there are a few resources out there that will handle it (I use BSpiritCorrectiveShape in the video). The gist is that you pose your already-rigged character and then make changes based on the deformations, etc., IN THAT POSE. The tool you use should remove the influence of the joints, etc. to calculate what the actual transformations of the vertices (or CVs) are in local space, and give you a corrected model that can be worked in as a blend shape or what have you. Really a great way to do two things: a) fix a model that's not deforming correctly based on weighting alone and/or b) add some movement-specific deformation to enhance the rigging. Anyhoo, here's the vid:

Maya/Rigging: Pose Space Deformation from zeth willie on Vimeo.

Saturday, April 02, 2011

a basic intro to IK spine type setups (IK spline/Ribbon)

Here are a couple of videos that start to outline a couple of different approaches to IK "spine" systems, how they work and what are the pros/cons of each. The two main things are the IK spline setup (video 1) and the ribbon spine (video 2). Again, no great detail (they're both a bit tedious to build), but enough to get you started in looking for more info.
BTW, the tutorials I mention in the videos are 1) the "divine spine" in vid 1 can be found at http://www.peachpit.com/articles/article.aspx?p=102262&seqNum=4. In the second vid I mention 2) Aaron Holly's videos (http://www.fahrenheitdigital.com/dvds/rigging/feature-animation-rigging-dvd.php) which may not even be available any more (I've never actually seen this specific topic covered by him, though I've seen other stuff and he's awesome) and 3) Jose Antonio Martin Martin's tutorial on Creative Crash (formerly HighEnd3D) (http://www.creativecrash.com/maya/tutorials/character/c/ribbon-spine-rig). I never noticed the comments before now where everyone's referring to Aaron's videos. Funny (though if he's directly ripping Aaron's techniques, he should at least credit him, IMHO. Just saying).


Maya/Rigging: intro to IK-type spine setups part 1 from zeth willie on Vimeo.


Maya/Rigging: intro to IK-type spine setups part 2 from zeth willie on Vimeo.

scripting your "bind" joints into a Quick Select Set

As I mentioned in the videos in the previous post, here is the basic idea of how you'd create a Quick Select Set from your "_BIND_" joints. (thought I'd break it out into a separate post to make it easier to read) . . .
{
// grab all the joints with "_BIND_" in their names
string $bind[] = `ls -type joint "*_BIND_*"`;

// create the Quick Select Set
sets -text "gCharacterSet" -name BindJoints;

// then add each bind joint to the set
for ($obj in $bind) {
    sets -edit -forceElement BindJoints $obj;
    }
}

(it's MEL, btw)
The weird bit of that is the syntax for creating a QSS (sets -text "gCharacterSet" etc). That -text "gCharacterSet" annotation seems to be what Maya uses internally to flag a set as a Quick Select Set.
You'll have to change stuff for your own setup (the name you're looking for), but otherwise it's pretty straightforward. It looks for joints, then within that selection, looks for the name you've got (in my case "_BIND_") and adds those to a QSS called BindJoints. I find the QSS a bit easier to use for this purpose than a regular set.
Now if I want a selection that isn't easily grab-able (like in the ribbon spine) I can just grab the QSS from the "Edit" > "Quick Select Sets" menu and then grab my geo and off I go to binding-world. I'll usually end up adding some code to remove any joint that I don't end up wanting in the bind afterwards (though I try to name things with this in mind, so the end joints on toes, for example, WON'T get the "_BIND_" added to their names, as I don't need them bound). I'll just add this code to the tail end of any rig I'm scripting (or as a stand-alone chunk of code for a custom-built rig) and the QSS will be there waiting for me every time I build the rig or want to bind/unbind. Definitely makes it faster to test things out in your rigs. Hope this helps!
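As a sketch of that clean-up code (the joint names here are totally made up, change them to whatever stray joints you actually want out):

{
// pull any joints you don't actually want bound back out of the QSS
string $noBind[] = {"someExtra_BIND_JNT", "anotherExtra_BIND_JNT"};
for ($obj in $noBind){
    sets -remove BindJoints $obj;
    }
}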

Cleanly transferring UV's to a skinned model

Why does the transfer attributes node sometimes resist getting deleted on a rigged model? Doesn't it know resistance is futile?
I've had it happen many times that a UV set has to get transferred onto a skinned model via a transfer attributes operation. And then refuses to go away. Which sucks, cuz cleaner history is better. This is a quick vid about why that happens and how to kill that history. Thanks to Christina Sidoti for pointing it out to me!

Edit: Oh yeah . . . Here's a script to help you do it faster! Download it HERE (rt-click, save). Put it in your scripts directory, start Maya (or type "rehash" in the MEL command line if it's already open). Select the obj with the new UV's, then the rigged object with the old UV's, and type "zbw_cleanTransferUV" (catchy name, right?). It seems to work fine for anything I've tried it with. If you've already got some UV-changing history (polyTweakUV, transfer UVs, etc.) delete it first; since your new UVs are coming straight from the source, any previous tweaks will get weird.
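If you'd rather do it by hand, the gist of what the vid covers looks something like this in MEL (the object names are placeholders, and the flag values are just one reasonable setup, not necessarily what my script does internally):

// transfer UVs from the clean source mesh onto the bound mesh
// ("sourceMesh" and "boundMesh" are placeholder names)
select -r sourceMesh boundMesh;
transferAttributes -transferUVs 2 -sampleSpace 4; // all UV sets, component space

// then kill the transferAttributes node WITHOUT nuking the skinCluster --
// this is the same as Edit > Delete by Type > Non-Deformer History
bakePartialHistory -prePostDeformers boundMesh;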

Carry on!


Maya/Rigging: Cleanly Transferring UVs to a Bound Rig from zeth willie on Vimeo.

Sunday, March 27, 2011

Creating an inky bleed effect in Nuke with Time Offset

Here's a quicker tip/trick related to Nuke. A really cool time offset effect that's actually way more complicated than it looks. This time it was for a job I was actually on, though we didn't actually end up using it for the final, as far as I know.
The gist of the technique is to use a TimeOffset node to take the frame that was written just an instant before, add some effects and merge it back into the frame that's about to be written. So you get a cool trailing effect that iterates on itself: anything in the past keeps growing while the "leading edge" stays tight. Not too crazy once you've seen it, but pretty hard to explain.
Anyways, Andy Jones at Method NY showed me this (he actually just did it and then had the unenviable task of explaining it to me once I took over the comp). Hopefully you understand it better than I did the first 30 minutes or so.
Cheers.


Nuke/Compositing: Inky Time Offset Effect from zeth willie on Vimeo.

Creating custom UV pass in MR and using it in Nuke

That's a mouthful.
On a job recently I got some secondhand Nuke goodness when a lighting TD showed me how they were using UV passes from Maya to "auto-track" some footage into their shots (I wasn't on that job). The basic issue was that a lot of footage had to go onto screens in the spot and some of that footage wasn't ready or the client was changing their mind about what the footage should be. So, in short, any manual trick to get the footage locked onto the CG screens required redoing some additional work in post, in terms of tracking, etc. Additionally, sometimes late notes were coming in about animation changes to the screens themselves, which meant that there was no option to render "in-camera" in Maya; it had to be done in post.
So the simple solution was to render the screens with properly adjusted UV's as UV passes. This automatically locked the footage in place and wasn't dependent on any additional work once the comp was built, even if the footage had to be completely replaced or the animation of the screen itself changed. Pretty sweet. While I'm sure some people think this is obvious, I didn't (and neither did a lot of the other people I was working with. So I'm not the only ignorant one.)
In any event, I also realized that I never really followed up on any render pass stuff in Maya 2010/11, so the first vid below is basically what we're trying to do and how to get custom color passes out of Mental Ray (as opposed to the "regular" passes that are pre-set, like reflection, diffuse, etc). I use this method to get out RGB matte passes and a UV pass in EXR format.
The second video is how to actually use the UV pass in Nuke to get the basic effect we're looking for. Nothing crazy, just basic use of the UV pass.
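If you just want the quick-and-dirty shader side of it, one common way to build a UV pass is a surface shader driven by a samplerInfo node (red = U, green = V). A rough sketch, with made-up node names (the vids cover the full custom-pass setup):

// build a simple UV-pass shader: red = U, green = V
// (node names here are just placeholders)
string $shader = `shadingNode -asShader surfaceShader -n "uvPass_mat"`;
string $info = `shadingNode -asUtility samplerInfo -n "uvPass_info"`;
connectAttr ($info + ".uCoord") ($shader + ".outColorR");
connectAttr ($info + ".vCoord") ($shader + ".outColorG");
setAttr ($shader + ".outColorB") 0;
// assign this to the screen geo and render to a float format (like EXR)
// so the UV values don't get clamped or quantized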



Maya/Mental Ray: Custom Color Passes (2010) Part 1 - UV Pass from zeth willie on Vimeo.


Maya/Mental Ray: Custom Color Passes (2010) Part 2 - UV Pass from zeth willie on Vimeo.

Thursday, March 24, 2011

A quick little Gamma script

I said I would post some stuff and 10 minutes later, here I am (pats self on back) . . .
Working on a job recently, we were using linear workflow, which is sweeeeet, but not so shocking since this was a pretty big studio that does mostly photo-real work. But they actually didn't have any automated way to deal with the gamma correct nodes in the scene (that I or anyone I talked to knew about). I got to one scene that I was animating and went to do a rough mock up of the texturing and lighting and realized with all of the objects and references and such that there were 40 or 50 shaders that needed a gamma correct node. So I decided to just write a quick script to do it. I know there are versions of this kind of stuff out there already, but if it's something manageable, I prefer to do it myself. I feel like I learn more that way.

Turns out, it's actually kind of hard to do scripting for this stuff. Basically, I just wanted 3 basic things: add a gamma correct node right before the color slot of any shader I select (or all shaders), be able to adjust those nodes (turn the gamma from .455 to 1 and vice versa) and be able to remove all the gamma nodes if I want to. What turns out to be really tricky is navigating upstream/downstream from the selected materials. If I wanted to put a gamma correct node somewhere upstream from the selected shaders, I would have to traverse back up through the connections recursively and run the risk of encountering loops and such. Basically, navigating through shader trees is trickier than objects, because without the DAG hierarchy (parent/child) there's not such a clear and definitive relationship between nodes. I'm sure it's doable, but it started getting deeper than I was interested in going, so I simplified things.

I wrote a script that will look at the "color" channel of a shader (I did blinn, phong, anisotropic, lambert, surface shader, mia, mia_x, mia_x_passes, testing for their respective color channels) and look for a gamma correct node connected there. If there's not one, it will add it with the values you input. You can choose materials to apply this to, or do it to all of them. If there's nothing going into the color channel, no gamma is created. You can then adjust or delete the gammas (again, only the gammas connected to the color channels) from the other tabs in the UI.

Basically, as long as you're only worried about color information connected to the material and you're not adding any gamma correct nodes elsewhere in the tree, this script should cover you. Just run the script (zbw_gamma), select the materials you need gammas for (i.e. blinn, mia_material_x_passes, etc) and use the buttons/tabs to add, adjust or remove them.
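For what it's worth, the core of the "add" part boils down to something like this. This is a simplified sketch of the idea, not the actual zbw_gamma script, and it only handles the .color plug on whatever materials you have selected:

// wedge a gammaCorrect node between whatever feeds .color and the material
string $mats[] = `ls -sl -mat`;
for ($mat in $mats) {
    string $conns[] = `listConnections -plugs 1 -source 1 -destination 0 ($mat + ".color")`;
    // skip materials with nothing plugged into .color
    if (size($conns) > 0) {
        string $srcNode = `match "[^.]*" $conns[0]`;
        // don't double up if there's already a gammaCorrect in there
        if (`nodeType $srcNode` != "gammaCorrect") {
            string $gamma = `shadingNode -asUtility gammaCorrect`;
            setAttr ($gamma + ".gammaX") 0.455;
            setAttr ($gamma + ".gammaY") 0.455;
            setAttr ($gamma + ".gammaZ") 0.455;
            connectAttr -f $conns[0] ($gamma + ".value");
            connectAttr -f ($gamma + ".outValue") ($mat + ".color");
        }
    }
}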

As usual, I make no guarantees about how it will work for you, so don't plan on funding your retirement by suing me. (Like I've mentioned before, part of my reasoning for posting these scripts is just so that I have access to them if the studio I'm at has locked USB ports or something. Easier than emailing them to myself every job :)
Oh, BTW, I know there is some pretty useful gamma stuff in 2011 now. I've only been at 1 (!) studio that uses 2011 in production. And that was pretty hairy. I'm sure more than that do, but 2010 is still the most common release for use in production in NYC as far as I can tell. Any studio that has any significant pipeline tools, render farms, etc, would have to change/update all the code to incorporate 2011 into the pipeline and I'm not sure it's worth it at this point (actually, it's obviously not worth it yet, the proof is in the pudding, as they say)
Anyways, happy gamma-ing!

 You can download the script HERE.



Been a while . . .

Wow. It's been a loooooong time since I've posted anything. Been really busy, moved apartments here in NYC over the Christmas break (on rather short notice, I might add, but for the better), did the holiday/insane blizzard thing and have basically been working crap-loads ever since the new year. But I have a day or two off and plan to get some more material up here in the next couple of days. Maybe some videos on some things I've been using for work and some new stuff that I've been learning about. 

BTW, thanks to Stuart Christensen for pimping the blog-izzle. He's got a YouTube channel where he cranks out tons of great tutorials on Maya-related stuff. Check it out at http://www.youtube.com/user/deepfriedectoplasm. Kind of bananas how many tutorials he does. . .

Anyhooo. I'll be back soon with some stuff to show.