

Character Creator 3: Head Separation with Morph Preservation in 3DS Max

For my current work, I use Reallusion’s Character Creator 3 for my humanoid characters. They offer a lot of customization, are rigged and skinned, and come with a variety of morphs for facial expressions and lip syncing. One issue, however, is that because I am using these characters in a game engine — in this case Unity3D — the morphs are a bit problematic.

Morphs, or blendshapes, are sets of per-vertex offsets describing how vertices move from the base mesh toward different target shapes. You can then interpolate between these shapes to get a variety of small changes in the model.
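To make the "vertex data" idea concrete, here is a minimal sketch of how weighted morph targets combine. This is plain Python purely for illustration (not CC3, Max, or Unity code), and the target names and numbers are made up:

```python
# A morph target stores a per-vertex offset from the base mesh; the final
# vertex position is the base plus each target's offset scaled by its weight.
def apply_morphs(base_verts, targets, weights):
    """base_verts: list of (x, y, z); targets: {name: list of (dx, dy, dz)}."""
    result = [list(v) for v in base_verts]
    for name, deltas in targets.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # most targets leave most of the mesh untouched
        for i, (dx, dy, dz) in enumerate(deltas):
            result[i][0] += w * dx
            result[i][1] += w * dy
            result[i][2] += w * dz
    return [tuple(v) for v in result]

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = {"smile": [(0.0, 0.5, 0.0), (0.0, 0.0, 0.0)]}  # only vertex 0 moves
print(apply_morphs(base, smile, {"smile": 0.5}))  # vertex 0 lifts by 0.25
```

Note that the second vertex carries a zero delta even though it never moves; that wasted storage is exactly the problem with full-body CC3 morphs.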

The entire body mesh uses the morph modifier, but most targets only affect the face.

For Character Creator 3 models though, because the head and body are part of the same mesh, the morph data has a lot of empty space for all of the vertices from the neck down that do not move. This post goes over the process I use to

  • Separate the head mesh from the body mesh
  • Reapply morph targets to the head mesh
  • Reskin the separated head and body meshes

Note, these methods use 3DS Max; however, similar results can probably be achieved in Blender or Maya with their equivalent tools.

Separating the Head and Body

The first part of this process is separating the head and the body. By default, a CC3 character’s head and body are set up as different submeshes; unfortunately, you can’t simply detach the head submesh as-is, because some morphs affect vertices in the torso’s submesh.

The first thing I do is copy the original mesh. These processes can cause some issues, so always make sure to have a version of the original mesh just in case something goes awry and you have to start over.

Selecting the Right “Loop”

In the duplicated mesh, I add an Edit Poly modifier; I want the original skinning and morph modifiers to remain (I’ll explain why later). Then, I try to select an edge loop that I’m sure is not affected by any of the morphs. In fact, if your character is clothed, selecting an edge loop that is hidden or obscured by clothing is probably a good idea.

Some morphs affect the neck slightly, so separating the head from the body at the base of the jaw could cause issues.

The goal of this is to eliminate as many unused vertices as possible, not all of them.

This edge loop is hidden by most of the shirt, which is hidden for demoing purposes.

Once the edge loop is selected, press “Split” in the Edit Poly panel. This will make the torso and the head separate elements. I then select the head elements as well as the eyelashes (they are separate elements but are also affected by the head’s morphs) and “Detach” the selection from the body as a new mesh.

The head and body (red and blue wireframes) separated.

The head and body have now been separated. In fact, the morphs on the removed head still work; however, the skin modifier data is no longer valid. This is because the number of vertices has been altered.

Hair-raising problems

Preserving the Morphs

Despite the morphs still working, they still contain the old morph data, including all the unused vertices we are trying to eliminate.

I wrote a maxscript to preserve this data. It can be downloaded here. To use the script, select the head mesh and then run the maxscript.

What this script does is essentially recreate every morph target, but only for the head. Once this script finishes executing, there will be a new, duplicated head mesh with only the morph modifier on it.
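The core idea of the script can be sketched like this. The sketch below is plain Python for illustration only (the real tool is a maxscript working on Max's Morpher modifier), and the target name and indices are invented:

```python
# For each morph target defined on the full mesh, keep only the deltas for
# the vertices that ended up in the detached head mesh.
def rebuild_head_morphs(full_targets, head_vertex_indices):
    """full_targets: {name: per-vertex deltas for the whole body mesh};
    head_vertex_indices: indices (into the full mesh) of the head's vertices,
    in the order they appear in the new head mesh."""
    head_targets = {}
    for name, deltas in full_targets.items():
        head_targets[name] = [deltas[i] for i in head_vertex_indices]
    return head_targets

full = {"blink": [(0, 0, 0), (0, -1, 0), (0, 0, 0), (0, -2, 0)]}
head_indices = [1, 3]  # only these vertices belong to the head
print(rebuild_head_morphs(full, head_indices))
# {'blink': [(0, -1, 0), (0, -2, 0)]} -- the body's empty deltas are gone
```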

The new head mesh with no skinning

Reapplying Skinning Data with Skin Wrap

So now that the body and the head mesh with new morphs have been created, we need to reapply the skinning data. For the first step, I right-click the body mesh and convert it to an Editable Poly. So, before starting the next step, we should have two meshes: the head mesh with just the morph modifier, and the body with no additional modifiers.

Anyway, select the body mesh and add a Skin Wrap modifier. This modifier essentially uses vertex positioning to recreate skinning from one mesh to another. In this case we are essentially copying the data from the original mesh to the new mesh. The following are the settings I use to accomplish this:

Skin Wrap Setup

Once the settings are defined, select the original CC3 mesh to copy over its skinning data to this new mesh. Once copied over, you can create a new skin modifier by pressing “Convert to Skin”. This will disable the Skin Wrap modifier and automatically add a skin modifier.
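Conceptually, Skin Wrap is doing something like the following sketch. The real modifier is more sophisticated (it blends several nearby face or vertex influences with falloff), but a nearest-vertex version, in illustrative Python rather than Max code, captures the idea:

```python
# For each vertex of the new mesh, find the closest vertex of the original
# skinned mesh and copy its bone weights.
def copy_weights(new_verts, orig_verts, orig_weights):
    """orig_weights[i] is the bone-weight dict for original vertex i."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    new_weights = []
    for v in new_verts:
        nearest = min(range(len(orig_verts)), key=lambda i: dist2(v, orig_verts[i]))
        new_weights.append(dict(orig_weights[nearest]))
    return new_weights

orig = [(0.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
weights = [{"spine": 1.0}, {"neck": 0.6, "head": 0.4}]
print(copy_weights([(0.0, 1.9, 0.1)], orig, weights))  # nearest is vertex 1
```

Because the split didn’t move any vertices, the nearest original vertex is effectively the same vertex, which is why the copied skinning matches the original so closely.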

Repeat this process for the head, making sure that the morph modifier is beneath the Skin Wrap modifier.

Once done, the head and body should now be separated, the morphs only applied to the head, and both skinned properly and identically to the original CC3 mesh.

Conclusion

In conclusion, these steps should help separate CC3 character heads and bodies while preserving morph targets and skinning data. This is a rather short process, but I hope one day CC3’s export options include a way to separate meshes on export so this process is already taken care of. In the meantime, hopefully this will be useful for someone working with CC3 and importing their characters into a game engine. Again, here is the link for the Morph Preserve maxscript used during this process.


Maxscript: Constrain to Biped 2.0

Two years ago, I wrote a post about a maxscript I had written that constrains a humanoid rig to 3D Studio Max’s biped. Recently, I’ve been working on a fighting game prototype. I’m using animations from an asset package for this, and though the animations are very nice, there are sometimes things missing, or tweaks I wish I could make. I said to myself, “I wish there was a way to record these animations so I could edit them more easily.”

I know you can import a .fbx file, the format of the aforementioned animations, into 3DS Max, but every frame is keyed and making edits is rather difficult. I could try and use animation layers, but if I want to apply the animation to a different character, this can’t really be done either.

So, remembering the script I wrote awhile ago, I figured I would try to make a new version so I could record animations. At the same time, one issue with the previous script was that it forced the original rig to rotate to fit the biped. This would cause a strange “bulging” in various areas that some users, including myself, didn’t care for.

Before rigging [left] / After rigging [right]

Most of this is due to the fact that not all rigs are perfectly aligned like the biped, so when going from a rig’s T-pose to the biped’s, the rotation done to conform the rig to the biped results in rotations the original rig otherwise wouldn’t use.

The New Script

Version 2.0

This new version has a few changes compared to the original:

  • The bone selection area has been separated into two columns for easier organization
  • The addition of a lot of new features and buttons
    • Quick Midpoint – creates a new midpoint between selected objects
    • Quick Connector – creates a new bone that connects two selected objects
    • Foot Angle Adjustment in Degrees: An angle, measured in degrees, used to more correctly size the created biped’s foot
    • Turn Figure Mode Off: A toggle button that turns figure mode on and off
    • Alignment Tools and Animation Recording, both of which will be explained later

How to Use

Preparing the Rig

So, like the original version, you start off by preparing the rig. You have to make sure that all bones (besides the infamous bone #7) are assigned properly. This can be done using tools such as Quick Child.

Determining Foot Angle

One new value that should be assigned is Foot Angle Adjustment in Degrees. This value is used to determine how big to make the biped’s foot and when aligning the biped’s foot to the original rig’s, how much to rotate it back so it matches the original rig’s foot angle.

One way to determine this value is to go into rotation mode and the view coordinate system and select the original rig’s foot bone.

Here, my rotation values are -12.979, -0.169, and 172.337. The biped’s foot will always be rotated positively on its z axis, so for this rig, I would use 12.979 for this value. This can be a little trial and error unfortunately, but as long as this value isn’t changed after building the biped, the toes should stay aligned properly.

Building the Biped

Once all of the bones are assigned and the rig is validated, the biped can be built. You’ll notice that when doing so, a “FAUX_RIG” is created along with a bunch of dummy objects. These dummy objects are used to align the biped to your rig.

New biped and “faux rig”
Small spheres are also added to the top of the biped’s fingers to help indicate the “top” of the fingers better.

Aligning the Faux Rig

This, unfortunately, is probably the longest part of this new process. Using the Biped Alignment section, you set the index of the bone you want to edit, then click one of the rotate buttons. Each time a button is clicked, it’ll realign the associated bone with the newly aligned faux dummy.

How a misaligned biped MAY appear depending on the rig.

Fortunately, every time you do a rotation, it is recorded so you can save it out and reload it at a later time or for new rigs that are similarly oriented.

You can also check the alignment by clicking Align Bone or Align All. Also, thighs, calves, upper arms, and forearms do not need to be aligned, since aligning the biped’s hands and feet will automatically align these better.

Another note is that you should stay in figure mode when aligning the first spine bone, the clavicles, neck, head, toes, and fingers. This is because, while in figure mode, these items are all oriented AND positioned. Once out of figure mode, they will not be moveable.

Additional Alignment Notes

If you are doing this from scratch, note that the clavicles are rather difficult to rotate while in figure mode. They translate to the proper position but will not align properly; once out of figure mode, they will. Because of this, I suggest putting a slight bend in both of the original rig’s elbows if possible. Even if the clavicles are off a bit, if the hands can reach the original rig’s, they and the fingers will line up properly. This is also useful to do at the knees, so that after positioning the hands and feet, the rig’s knees and elbows can be positioned correctly. If they are too straight, these sometimes rotate incorrectly.

Aligned rig with slightly bent knees and elbows

Finishing the Rig

Once the rig is aligned properly and figure mode is exited, you can either create constraints, which will add orientation constraints and positions constraints to the original rig so they follow the biped OR record the character’s animation to the biped.

The Key Frame button will do just that, recording the pose of the original rig at the given frame. However, you can also record the entire animation by setting an interval. An interval of 1 means it will record every frame, an interval of 2 every other frame, 3 every third, and so on.

This process, unfortunately, is rather slow. A 100 frame animation can take almost 10 minutes if every frame is captured, but once finished, the biped’s new animation can be saved to a .bip file and applied or edited.
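The interval logic amounts to a simple capture schedule, sketched here in Python for illustration (the actual script keys the biped at each captured frame; the always-key-the-last-frame detail is my assumption, not confirmed behavior):

```python
# Build the list of frames to capture for a given interval; interval=1
# captures every frame, 2 every other frame, and so on.
def frames_to_capture(start, end, interval):
    frames = list(range(start, end + 1, interval))
    if frames[-1] != end:
        frames.append(end)  # make sure the final pose is always keyed
    return frames

print(frames_to_capture(0, 10, 2))  # [0, 2, 4, 6, 8, 10]
print(frames_to_capture(0, 10, 3))  # [0, 3, 6, 9, 10]
```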

Final Notes and Areas of Improvement

This script, though usable, could probably use some improvements.

  • Sometimes the script will crash — for example, if you try to rotate a faux transform without building it first — requiring the user to close the window and rerun the script. More error-catching would probably be useful.
  • I think there is a memory leak somewhere; after using the script many times or opening and closing it several times, 3ds Max may sometimes crash when starting a new project.
  • The alignment process takes awhile in general; I wish there was an easier way to automate this. Fortunately, I’ve created a file for Character Creator 3 rigs that should align the rig properly and quickly after being loaded.
  • Recording animation can be slow.
  • Adding rig automation would be nice so the nubs don’t need to be added manually
  • Foot sizing and placement can still be rather troublesome

Updates

March 31, 2019 – Version 2.0.1

  • Added new button to quickly create nubs for the head, fingers, and feet, since these are usually missing.

Download

You can download the script here for free. If you use my script, credit would be nice but not necessary. Additionally, I would love to see what people do with it. Enjoy!


ProtoFighter Dev Blog 01

So for the past couple of months I’ve been working on a new fighting game prototype.  After discovering TrueSync by Exit Games, I’ve been trying very hard to create a new fighting game with it.  One of my biggest regrets with Battle High is that I was never able to implement multiplayer before its release, and I definitely feel that TrueSync could help me achieve that!  Anyway, I decided to write a little bit about the game and what I’m trying to do with it.

ProtoFighter

I chose this name because what I made was a prototype, and I wanted to make this clear.  I decided to use only assets from the Unity3D Asset Store, which includes TrueSync itself.  This includes my characters, audio, and more!  Here is a short list of some of the assets I am using:

Goals

I had several goals while making this prototype.

Learn TrueSync With a Focus on a Fighting Game

My first goal was to learn TrueSync and make a game using it.  I think I accomplished this.  In fact, it’s not my first TrueSync experiment.  Diamonds Not Donuts, a small game I released on itch.io for free, is!  That being said, for ProtoFighter, I wanted to focus more on fighting games and various issues concerning them.  ProtoFighter has a lot of gameplay functionality that most fighters do — blocking, jumping, attacking, special moves, supers, rounds, etc.  Obviously it’s missing a lot to be a complete fighting game package — single player modes, balance is a MESS, more characters, etc.  Again, for a pre-pre-pre alpha, I think I achieved my goal, but of course, when it comes to TrueSync, there are still a ton of questions I have and hope to continue to answer them as I expand upon this prototype.

Make a Fighter That Is Slightly More Accessible Than Most

Though not TrueSync related, I’ve always wanted to try and make a fighting game that was a bit more accessible to the average player.  Maybe not as extreme as Fantasy Strike, but something that I could still explain relatively easily.

In ProtoFighter, though I sadly haven’t released a tutorial yet, I tried to do this.  Essentially, instead of performing quarter-circle motions, I simplify special moves to forward or back plus an attack.  Now, a lot of people would immediately say this oversimplification could cause issues such as instant dragon punches or anti-airs, so to solve this I did two things.  Firstly, all initial moves such as forward+punch have rather long start-up and are reserved for moves like overheads or projectiles.  Then, every special move has a “secondary” special that branches from it.  For example, forward+punch may begin an overhead, but pressing up before the attack activates performs a secondary attack instead, probably an anti-air.  The hope is that performing the initial move and then the secondary move requires just enough time and frames that the anti-air won’t be so instantaneous.  Maybe this won’t help, but the idea is that it’s simple to actually perform an attack, while canceling one move into another properly requires some dexterity and memorization.
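The branching idea can be sketched as a tiny state machine. This is an illustrative Python sketch, not ProtoFighter code; the move names and the 12-frame startup window are invented for the example:

```python
# An initial command (forward+punch) starts a move with long startup; a
# follow-up input pressed during that startup cancels into the branch move.
STARTUP_FRAMES = 12  # hypothetical startup window

def resolve_move(inputs):
    """inputs: list of (frame, button) pairs, sorted by frame."""
    if not inputs or inputs[0][1] != "forward+punch":
        return None
    start_frame = inputs[0][0]
    for frame, button in inputs[1:]:
        if button == "up" and frame - start_frame < STARTUP_FRAMES:
            return "anti_air"  # branch before the overhead comes out
    return "overhead"

print(resolve_move([(0, "forward+punch"), (6, "up")]))   # anti_air
print(resolve_move([(0, "forward+punch"), (20, "up")]))  # overhead
```

The built-in delay comes from the fact that the branch input can only land inside the initial move’s startup, so the anti-air can never come out on frame one.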

A secondary idea I then had is to still allow players to perform attacks using quarter-circles; however, these players are rewarded with a slight meter bonus.  You don’t have to perform motion inputs to compete or play, but players who can are rewarded slightly for taking the time and effort to perform more complex inputs.  I can’t really tell if this input system will be good or not until someone tests it, which is why I released the prototype.
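Detecting the quarter-circle for the bonus is a matter of scanning a recent input buffer for the motion’s directions in order. A hedged sketch in Python (the 10-frame window and direction names are assumptions, not ProtoFighter’s actual values):

```python
# Quarter-circle-forward is down, down-forward, forward in order; the
# directions don't need to be on consecutive frames, just within the window.
QCF = ["down", "down-forward", "forward"]

def did_quarter_circle(buffer, window=10):
    """buffer: one direction per frame, newest last."""
    recent = buffer[-window:]
    i = 0
    for direction in recent:
        if direction == QCF[i]:
            i += 1
            if i == len(QCF):
                return True
    return False

print(did_quarter_circle(["neutral", "down", "down", "down-forward", "forward"]))  # True
print(did_quarter_circle(["down", "forward"]))  # False -- skipped down-forward
```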

Create a Framework

My third goal was to begin creating a framework so that I can create future titles, TrueSync or not, more quickly.  A lot of games I work on are fighting game influenced, so I wanted to construct a framework so that creating future titles, whether 2D or 3D, would be easier.  Though it’s not perfect, I definitely tried to abstract more of my classes and functionality, and I believe I could go from this 2.5D fighting game to a 3D game rather quickly with few changes.

TrueSync Tips

So, for this fighting game, I learned a good amount about TrueSync.  TrueSync attempts to be deterministic: a local player’s inputs are respected immediately and passed over the network, and if inconsistencies with the remote game state are found, the game state is rolled back and resimulated.
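As a toy illustration of the rollback idea (this is not TrueSync’s actual API; TrueSync handles all of this internally), imagine game logic that just accumulates input values, snapshots state each frame, and resimulates when a late remote input arrives:

```python
# Stand-in for one frame of deterministic game logic.
def simulate(state, inputs):
    return state + sum(inputs)

def run_with_rollback(frames, late_input_frame, late_input):
    snapshots = {0: 0}
    state = 0
    # Predict forward using only the local input (1 per frame).
    for f in range(frames):
        state = simulate(state, [1])
        snapshots[f + 1] = state
    # A remote input for a past frame arrives: restore that frame's
    # snapshot and resimulate up to the present with the corrected inputs.
    state = snapshots[late_input_frame]
    for f in range(late_input_frame, frames):
        inputs = [1, late_input] if f == late_input_frame else [1]
        state = simulate(state, inputs)
        snapshots[f + 1] = state
    return state

print(run_with_rollback(5, 2, 3))  # 5 local inputs + late remote input = 8
```

The whole scheme only works if `simulate` gives bit-identical results on every machine, which is why the next section matters.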

The issue, though, is that Unity3D wasn’t built to be deterministic.  Its use of floats and its random system, for example, can cause various issues.  Its animation system also isn’t deterministic, so trying to perfectly simulate results across two machines can be rather problematic.  Anyway, here are some tips I found helpful for completing my prototype.

Note, these tips were written for Unity3D version 2017.1.1f1 and TrueSync version 1.1.0B.

Don’t “Press” Inputs

TrueSync uses a unique method to capture and send input called OnSyncedInput.  Here’s an example of how it works.

public class MyTSClass : TrueSyncBehaviour
{
    public override void OnSyncedInput()
    {
        TrueSyncInput.SetBool(0, Input.GetKeyDown(KeyCode.Space));
    }
}

So in the above, TrueSyncInput is used to pass inputs over the network.  The first argument is a byte, used as a key.  I’m just using 0 for now, but if you use multiple keys, you should probably assign them to constants.  Then, I’m using Input.GetKeyDown to send a bool for whether space was pressed.  One issue with this method is that OnSyncedInput is performed similarly to OnFixedUpdate, so calls such as Input.GetKeyDown don’t work consistently; by the time OnSyncedInput is called, the Input.GetKeyDown frame is sometimes missed.  To resolve this for button inputs, here’s what I did:

public class MyTSClass : TrueSyncBehaviour
{
    bool hasPressed = false;

    public override void OnSyncedInput()
    {
        bool singlePress = false;
        if (Input.GetKey(KeyCode.Space))
        {
            if (!hasPressed)
            {
                hasPressed = true;
                singlePress = true;
            }
        }
        else if (hasPressed)
        {
            hasPressed = false;
        }

        TrueSyncInput.SetBool(0, singlePress);
    }
}

This change uses a bool that is set to true when OnSyncedInput executes while the space bar is currently down. The toggle is then reset once the space bar is no longer being held down.  The bool that is actually passed in TrueSyncInput.SetBool is only set if the key is down AND hasPressed was false before being set to true.  This way, the first entry of TrueSyncInput will be true for only one execution of OnSyncedInput.  This should prevent OnSyncedInput from missing an input, as the average button press usually lasts a few frames.  I don’t use this method exactly in ProtoFighter, but the idea is similar.  Instead of separate booleans for each input type (up, down, left, right, etc.), I use an integer and bitmasking to change it during OnSyncedInput.
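The bitmasking approach looks roughly like this. A Python sketch for illustration (in the game this would be a C# int sent through TrueSyncInput.SetInt; the flag names and values here are my own):

```python
# Each input gets one bit; the whole input state packs into a single integer,
# which is cheaper to send than one bool per input.
UP, DOWN, LEFT, RIGHT, PUNCH = 1, 2, 4, 8, 16

def pack(*flags):
    mask = 0
    for f in flags:
        mask |= f
    return mask

def is_set(mask, flag):
    return (mask & flag) != 0

mask = pack(DOWN, RIGHT, PUNCH)
print(mask)                 # 26
print(is_set(mask, DOWN))   # True
print(is_set(mask, UP))     # False
```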

Treat TrueSync Like A Separate Engine

This sounds silly, as Unity3D is a game engine; however, to make TrueSync’s determinism work properly, you have to use a lot of unique structs and classes that it introduces.  There’s FP, or FixedPoint, for float values, and TSVector for Vector3s.  Also, TrueSync has its own transform class (TSTransform) that does not have all the functionality (at least for now) that Unity3D’s Transform class has.  You can’t use children the same way, and certain methods, such as those that convert transform information from world to local space, are missing.  Overall, you can’t just take a finished game and integrate TrueSync into it quickly.

One trick I had to do, for example, was figure out a way to align character hit spheres to certain joints.  In a normal setting, I could just use the following:

Animator anim = GetComponent<Animator>();
Transform t = anim.GetBoneTransform(HumanBodyBones.Chest);
Vector3 chestPos = t.position;

However, one problem is that this creates a Vector3 and even though I can convert the position to TrueSync’s Vector3 equivalent, a TSVector, they may be different values between the multiple players due to floating point precision errors.

To resolve this, I built a tool to cycle through my animations and store important point information as a TSVector in a ScriptableObject.  I don’t save the position itself, though; instead, I save the vector from the character’s center to this point.  So, to get where the chest would be in my animation, it would be something like the following:

TSVector localChestVector = GetChestPosition(frame);
TSVector worldChestVector = tsTransform.position + tsTransform.rotation * localChestVector;

So, in the above, I’ve gotten a local vector for my chest position and then used my player’s position and rotation to define the chest’s world position.  You’ll also notice that I’ve used a frame.  This is because a lot of fighting games interpret things in frames, and I believe interpreting your deterministic game in TrueSync is a lot easier to understand through the concept of frames than through time.  Even though my 3D animation is made up of curves, I store different bone information in these TSVectors so they can be referred to later regardless of the rotation or position of my character.  I also use a similar technique for moving a character by their root animation without actually having the Animator drive it.
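For the curious, the math behind `position + rotation * localOffset` is just a quaternion rotation plus a translation. Here it is with plain floats in Python for illustration (TrueSync would do the same with FP/TSVector/TSQuaternion types; the character pose below is made up):

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = 2 * cross(q.xyz, v); v' = v + w * t + cross(q.xyz, t)
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

# Character at (2, 0, 0), rotated 90 degrees about the Y (up) axis.
half = math.radians(90) / 2
rot = (math.cos(half), 0.0, math.sin(half), 0.0)
local_chest = (0.0, 1.2, 0.3)  # stored offset from the character's origin
world = tuple(p + d for p, d in zip((2.0, 0.0, 0.0), quat_rotate(rot, local_chest)))
print(world)  # roughly (2.3, 1.2, 0.0)
```

Because the stored offset is local, the same data works no matter where the character stands or which way they face.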

No Animators — At Least How You Think

As of right now, TrueSync doesn’t have an Animator class.  For fighting games, this can be an issue since animations and the accuracy of said animations is so important.  To handle this, I did the following:

  • Stored all of my animator data in a separate data structure, mostly just my transition parameters and conditions
  • Muted ALL of my animation transitions
  • Disabled the Animator Component
  • Use Animator.Update(float)

So, even though the Animator component is disabled, Animator.Update(float) still allows it to be updated.  Even though you do have to use a float instead of an FP, the amount I update by is determined by the frame I’m supposed to be on, so my update function looks like this:

FP syncedFrame;
FP localFrame;
Animator anim;

private void Update()
{
    anim.Update((TrueSyncManager.DeltaTime * (syncedFrame - localFrame)).AsFloat());
    localFrame = syncedFrame;
}

So, here I have syncedFrame, which is the frame of my animation that is set during OnSyncedUpdate.  I subtract localFrame from syncedFrame, multiply by the delta time, and convert the result to a float value.  I then set localFrame to syncedFrame.  I used FP instead of integers in case I want to play the game in slow motion.  This still needs some tweaking, but it gets the general idea across.

Overall, using Animator.Update(float) is great because it allows me to still get a lot of the functionality of Animators:

  • Transition blending
  • IK
  • Mirroring
  • Humanoid rigs

But with more control.  This is actually one reason all transitions in the Animator are muted: I don’t want transitions to happen automatically and switch states suddenly if there is a rollback.  Doing it manually allows me to switch states when I need to.

Just one small part of my AnimatorController; the red arrows show that my transitions are muted.

Anyway, the future of ProtoFighter is uncertain.  I will most certainly not release this as a full game, but rather as a fighting game demo.  In this current 2D version, I’d like to do the following:

  • Add rooms and lobbies instead of the “Ranked Match” system it uses now
  • Add stage select
  • Add an interactive tutorial
  • Balance and clean up the existing characters, Protolightning and Protaqua
  • Start looking into AI and single player modes

Overall, the goal with this game is to eventually get the framework to a place where I can experiment with a variety of gameplay styles and make something myself later down the road, hopefully sooner rather than later.  Maybe I can even use this to integrate TrueSync into Battle High 2 A+ — though I make zero promises.

ProtoFighter is available on itch.io & Game Jolt for free!  If you download the game and play it, I’d love to hear your feedback; make sure you try the multiplayer, as that’s the main area I’m trying to focus on.  Also, if you have any questions about TrueSync, I’d love to try and help, as I think it’s a great asset and can help bring online functionality to a lot of new indie games (fighting games and others) in the future.


Maxscript: Constrain to Biped

So in 2016, I wrote a maxscript to constrain humanoid characters to 3ds Max bipeds for easier animation.  Every once in a while, someone asks me for the script and has some questions regarding it, so I decided to write a post about it for future reference.

Here are two videos demonstrating how it works and its setup:


A quick summary:

  • What this script does:
    • Builds a biped, sizing it to fit a specified character
    • Uses Orientation Constraints to drive the original rig’s bones to the biped
  • What this script does NOT do:
    • Require new reskinning or transfer of the original rig’s skinning, which often causes issues.
    • Transfer .fbx animations — or any kinds for that matter — to the biped.  This is JUST for the rig itself.

Why?

Why write a script like this?  Well, for one, I’m old-fashioned.  I’ve always liked 3ds Max’s biped.  It’s not perfect, and a little buggy; however, I’ve felt it gets the job done and has a lot of extra features (saving poses, postures, animations, etc.) that would be rather time-consuming to write on my own.  Additionally, exporting just the biped itself can be rather problematic, as it sometimes moves bone objects, which causes issues with animation retargeting since that is focused more on rotation.  Since the original rig is preserved and only driven by Orientation Constraints from the biped, this is less of a problem.

Also, though other character tools such as Mixamo have supplied rig-to-biped scripts, those scripts never quite worked as well as I wanted, often deforming the original mesh or rig and causing unforeseen issues.

The Script

Firstly, you can download the script here.  Note, this script was written for 3ds Max 2016 but has also been tested in 2017.

Instructions

Unzip the downloaded file and run the .ms file.  You should then see the following window:

There are two columns.  The left column, Biped Bones,  is for all of the biped bones that’ll be created; the right column, Character Bones, is for the bones in the original rig.  Note, there are 2 neck bones and 3 spines in the left column; however, the 2nd neck joint and 3rd spine joint do not need to be defined and are prefaced with [IGNORE].  When this was written, it was for one specific rig that used 3 spine joints and 2 neck joints; however, since this caused issues with Unity, I decided to remove those; thus enabling it to work on more humanoid rigs.  Unity can now handle a 3rd spinal joint, but still doesn’t use a 2nd neck joint in its default, humanoid rigs.

To start populating the right column, click the row you want to define and then click the bone / node you’d like to associate with the biped bone.  It’s a bit tedious.  There are two buttons for saving and loading, Save Selection Set and Load Selection Set, respectively, that can help a bit.  If you know the names or they are named in a way that can be populated quickly through copy-and-paste, this can be done by saving a text file, updating it, and then reloading it.  In the .zip, there are two examples of these files; they are setup for use with iClone Character Creator 1 rigs.

Once the right column has been populated properly, the Validate Bones button will check to make sure the bone slots are all assigned.  This will also show a pop-up for any bones that are missing.  Warning:  This’ll generate a pop-up for every missing bone.  

If all bones have been assigned, click the Build Biped button.  This will generate a biped that’ll match the size of the original rig.  You do not need to, but it is suggested to then rotate the biped to match the original rig as closely as possible.

Then, the Build Helper Rig button will create a new rig that is identical to the original rig except its bone orientations will match the biped’s, meaning the up, forward, and right axes will match.  This is important for the next step.  Essentially, an early thought for this experiment was to:

  • Build a biped
  • Align the original rig to the biped

However, one of the big issues is that rigs and their bone rotations can come in a variety of orientations.  If you use 3ds Max’s default align tool, arms will sometimes be rotated into strange positions.  The helper rig solves this by standing as the middleman between your original rig and the biped.  It’ll be the same size as your original rig, but the bones will match the orientation of the biped.

Next, there is the Align To Biped button.  This aligns the helper rig to the biped and then the original rig to the helper.  This is why aligning the biped to the original rig helps; otherwise the changes can look rather broken.  They are easy to fix because, again, this only affects rotation, not placement, of the original rig.

The Create Constraint button is the final step.  All other steps should be completed first — including making backups in case there is an issue.  This will create Orientation Constraints between your original rig to the helper rig and from the helper rig to the biped.

Once this is done, the rig should now be driven by the biped.

Other Buttons & Tips

As you may note, there are two buttons I’ve yet to discuss, Quick Parent and Quick Child.  Quick Parent creates a parent bone between the selected bone’s parent and the bone itself.  This would be used for something like a rig with only one spine bone; it will create the second spine automatically so it can be used in the rig.  Quick Child creates a joint at the end of a joint.  The biped rig requires 5 fingers as well as finger nubs, for example, and this button will create these quickly.

Another tip is that if you create a child for something like the head nub, make sure it is aligned perfectly vertically; otherwise, the head will be tilted when aligned to the biped.  The toes have a similar problem I haven’t quite figured out, but again, aligning the created biped as closely to the original rig as possible will help resolve some misalignment issues.  Another tip is that instead of rotating the biped once it’s created, rotate the bones of the original rig to match the newly created biped as closely as possible.

Final Steps

After completing the steps, you can now animate the biped as you normally would, except you should NOT rotate the pelvis bone; doing so causes the hip and spine bones to translate slightly, causing issues upon export.  They will export fine, but your animations won’t match perfectly, and when importing to Unity, you’ll get warnings that those bones have translation data and that said data will be ignored if it’s part of a humanoid avatar.

Also, don’t export everything; use the export selection and select only the original rig’s joints and/or any meshes you’d like to export.

Quick Summary

  • Unzip this file.
  • Run the BipedRigCreator.ms script in 3ds Max
  • Define the joints in the right column, creating children or parents where needed
  • Validate the bones
  • Build the biped
  • Build the helper rig
  • Align to the biped
  • BACKUP (if not already)
  • Create constraints

Wishlist

I’m unsure if I’ll add anything to this script anytime soon, but here is a list of things I’d like to do:

  • Streamline the bone selection, or remove the left/right column layout since the columns aren’t lined up
  • Allow for multiple spine joints (create the correct number of spines based on the number of spine joints assigned)
  • Fix the head nub and foot nub alignment issues
  • Show a single list of all missing bones upon validation instead of a pop-up for each one

Anyway, if you use the script, great!  I’d love to see what people do with it.  Again, I mostly wrote this so people who would like to use it in the future have something to refer to.


Unity3D Script: Quick Texture Editor

Last year I wrote a Unity3D editor script for combining textures as well as swapping and combining their different color channels.


Someone on YouTube recently commented, asking for more details. Since I haven’t touched the script in over a year, I decided to just make the script public. It’s not perfect and some of my comments don’t make sense. I’ll probably clean it up in the future, or at least add better documentation.  I sound very professional right now.


What this script does:

  • Allows you to swap color channels
    • For example, take the red channel of a grayscale smoothness map and apply it to the alpha channel of your albedo texture
  • Allows you to combine textures onto a new, larger texture
    • For example, you have two 512×512 textures and want to combine them onto one 1024×512 texture

What this script does NOT do:

  • Resize textures
  • Rearrange meshes’ UVs
  • Paint onto textures
  • Create textures other than PNGs
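Before the full listing, it may help to see what the combine step boils down to: copying each source texture’s pixel block into the larger target at an (x, y) offset, which is what the script’s GetPixels/SetPixels calls do. Here’s a rough Unity-free sketch of that idea (the flat `uint[]` pixel array and the `Atlas`/`Blit` names are my simplification, not the script’s actual types):

```csharp
using System;

// Unity-free sketch of the "combine onto a larger texture" step.
// Pixels are stored row-major in a flat array, the same layout
// Unity's GetPixels/SetPixels use.
static class Atlas
{
    // Copies a srcW x srcH block of pixels into dst (dstW pixels wide)
    // at offset (x, y), like SetPixels(x, y, w, h, colors).
    public static void Blit(uint[] src, int srcW, int srcH,
                            uint[] dst, int dstW, int x, int y)
    {
        for (int row = 0; row < srcH; row++)
            for (int col = 0; col < srcW; col++)
                dst[(y + row) * dstW + (x + col)] = src[row * srcW + col];
    }
}
```

Blitting two 2×2 blocks side by side into a 4×2 array is the miniature version of the two-512×512-into-1024×512 example above.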
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using UnityEditor;
using System.IO;
 
namespace MattrifiedGames.Assets.TextureHelpers.Editor
{
    /// <summary>
    /// Editor window for quickly swapping, rearranging, and combining textures in Unity3D.
    /// </summary>
    public class QuickTextureEditor : EditorWindow
    {
        /// <summary>
        /// A list of the current affected textures.
        /// </summary>
        List<TextureInformation> texturePositionList;
 
        /// <summary>
        /// If true, the new texture's size will be forced to the nearest power of two.
        /// </summary>
        bool forcePowerOfTwo = false;
 
        /// <summary>
        /// Width of the new texture.
        /// </summary>
        int newTexWidth = 512;
 
        /// <summary>
        /// Height of the new texture.
        /// </summary>
        int newTexHeight = 512;
 
        /// <summary>
        /// The name of the new texture to be created.
        /// </summary>
        string newTextureName = "New Texture";
 
        /// <summary>
        /// Operations affecting different channels.
        /// </summary>
        public enum ChannelOperations
        {
            Ignore = 0,
            Set = 1,
            Add = 2,
            Subtract = 3,
            Multiply = 4,
            Divide = 5,
        }
 
        public struct ChannelBlendSetup
        {
            public ChannelOperations rCU, gCU, bCU, aCU;
        }
 
        /// <summary>
        /// Information about each texture being used to create the new texture.
        /// </summary>
        internal class TextureInformation
        {
            /// <summary>
            /// The texture being used.
            /// </summary>
            public Texture2D texture;
 
            /// <summary>
            /// The x and y position of the texture within the new texture.
            /// </summary>
            public int xPos, yPos;

            /// <summary>
            /// The width and height of the texture.
            /// </summary>
            public int width, height;
 
            /// <summary>
            /// How the blend color should be applied to the texture.
            /// </summary>
            public ChannelOperations blendColorUse = ChannelOperations.Ignore;
             
            /// <summary>
            /// The color to be blended with the texture.
            /// </summary>
            public Color blendColor;
 
            public ChannelBlendSetup rBS = new ChannelBlendSetup() { rCU = ChannelOperations.Set },
                gBS = new ChannelBlendSetup() { gCU = ChannelOperations.Set },
                bBS = new ChannelBlendSetup() { bCU = ChannelOperations.Set },
                aBS = new ChannelBlendSetup() { aCU = ChannelOperations.Set };
 
            public void OnGUI(string label, ref int refWidth, ref int refHeight)
            {
                if (texture != null)
                    label = texture.name;
                texture = (Texture2D)EditorGUILayout.ObjectField(label, texture, typeof(Texture2D), false);
 
                if (GUILayout.Button("Set as new texture size."))
                {
                    refWidth = width;
                    refHeight = height;
                }
 
                if (texture == null)
                {
                    Vector2 s = new Vector2(width, height);
                    s = EditorGUILayout.Vector2Field("Size", s);
                    width = Mathf.Max(1, Mathf.RoundToInt(s.x));
                    height = Mathf.Max(1, Mathf.RoundToInt(s.y));
                }
                else
                {
                    width = texture.width;
                    height = texture.height;
                }
 
                blendColorUse = (ChannelOperations)EditorGUILayout.EnumPopup("Blend Color Usage", blendColorUse);
                if (blendColorUse != ChannelOperations.Ignore)
                    blendColor = EditorGUILayout.ColorField(blendColor);
                else
                    blendColor = Color.white;
 
                Vector2 v = new Vector2(xPos, yPos);
                v = EditorGUILayout.Vector2Field("Pos", v);
                xPos = Mathf.RoundToInt(v.x);
                yPos = Mathf.RoundToInt(v.y);
 
                EditorGUILayout.BeginHorizontal();
 
                EditorGUILayout.BeginVertical();
                GUILayout.Label("");
                GUI.color = Color.red;
                GUILayout.Label("R");
 
                GUI.color = Color.green;
                GUILayout.Label("G");
 
                GUI.color = Color.blue;
                GUILayout.Label("B");
 
                GUI.color = Color.white;
                GUILayout.Label("A");
                EditorGUILayout.EndVertical();
 
                ChangeBlendSetup("R", ref rBS, Color.red);
                ChangeBlendSetup("G", ref gBS, Color.green);
                ChangeBlendSetup("B", ref bBS, Color.blue);
                ChangeBlendSetup("A", ref aBS, Color.white);
 
                EditorGUILayout.EndHorizontal();
            }
 
            private void ChangeBlendSetup(string p, ref ChannelBlendSetup bS, Color guiColor)
            {
                EditorGUILayout.BeginVertical();
                GUI.color = guiColor;
                GUILayout.Label(p);
                GUI.color = Color.white;
                 
                bS.rCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.rCU);
                bS.gCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.gCU);
                bS.bCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.bCU);
                bS.aCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.aCU);
                 
                EditorGUILayout.EndVertical();
            }
 
            internal void EditColor(ref Color colorOutput, ref Color colorInput)
            {
                EditChannel(ref colorOutput.r, ref colorInput, rBS);
                EditChannel(ref colorOutput.g, ref colorInput, gBS);
                EditChannel(ref colorOutput.b, ref colorInput, bBS);
                EditChannel(ref colorOutput.a, ref colorInput, aBS);
            }
 
            private void EditChannel(ref float outputValue, ref Color inputColor, ChannelBlendSetup bs)
            {
                EditChannel(ref outputValue, ref inputColor.r, bs.rCU);
                EditChannel(ref outputValue, ref inputColor.g, bs.gCU);
                EditChannel(ref outputValue, ref inputColor.b, bs.bCU);
                EditChannel(ref outputValue, ref inputColor.a, bs.aCU);
            }
 
            private void EditChannel(ref float output, ref float input, ChannelOperations channelUsage)
            {
                switch (channelUsage)
                {
                    case ChannelOperations.Set:
                        output = input;
                        break;
                    case ChannelOperations.Add:
                        output += input;
                        break;
                    case ChannelOperations.Divide:
                        output /= input;
                        break;
                    case ChannelOperations.Multiply:
                        output *= input;
                        break;
                    case ChannelOperations.Subtract:
                        output -= input;
                        break;
                    case ChannelOperations.Ignore:
                        return;
                }
            }
        }
 
         
 
        // Add menu named "My Window" to the Window menu
        [MenuItem("Tools/Quick Texture Editor")]
        static void Init()
        {
            // Get existing open window or if none, make a new one:
            QuickTextureEditor window = (QuickTextureEditor)EditorWindow.GetWindow(typeof(QuickTextureEditor));
            window.Show();
        }
 
        /// <summary>
        /// On GUI function that displays information in the editor.
        /// </summary>
        void OnGUI()
        {
            OnGUICombineTextures();
        }
 
        /// <summary>
        /// Quickly gets the importer of a specified asset
        /// </summary>
        /// <typeparam name="T">The type of importer to be used.</typeparam>
        /// <param name="asset">The asset whose importer is being referenced.</param>
        /// <returns>The importer, converted to the requested type.</returns>
        private T GetImporter<T>(UnityEngine.Object asset) where T : AssetImporter
        {
            return (T)AssetImporter.GetAtPath(AssetDatabase.GetAssetPath(asset));
        }
 
        private void SetupList<T>(ref List<T> list, int p)
        {
            if (list == null)
                list = new List<T>();
            while (list.Count <= p)
                list.Add(default(T));
        }
 
        private T GetFromList<T>(ref List<T> list, int p)
        {
            SetupList(ref list, p);
            return list[p];
        }
 
        private void DefineTexturePose(int index)
        {
            SetupList(ref texturePositionList, index);
            if (texturePositionList[index] == null)
                texturePositionList[index] = new TextureInformation();
 
            texturePositionList[index].OnGUI("Texture " + index, ref newTexWidth, ref newTexHeight);
        }
 
        private static Color DivideColor(Color c)
        {
            return new Color(1f / c.r, 1f / c.g, 1f / c.b, 1f / c.a);
        }
 
        Vector2 scroll;
        private void OnGUICombineTextures()
        {
            // Defines information about the new texture.
            newTextureName = EditorGUILayout.TextField("New Texture Name", newTextureName);
 
            forcePowerOfTwo = EditorGUILayout.Toggle("Force Power of 2", forcePowerOfTwo);
            if (forcePowerOfTwo)
            {
                newTexWidth = Mathf.ClosestPowerOfTwo(EditorGUILayout.IntField("Width", newTexWidth));
                newTexHeight = Mathf.ClosestPowerOfTwo(EditorGUILayout.IntField("Height", newTexHeight));
            }
            else
            {
                newTexWidth = EditorGUILayout.IntField("Width", newTexWidth);
                newTexHeight = EditorGUILayout.IntField("Height", newTexHeight);
            }
 
            EditorGUILayout.Separator();
 
            scroll = EditorGUILayout.BeginScrollView(scroll);
            if (texturePositionList == null)
                texturePositionList = new List<TextureInformation>();
            for (int i = 0; i < texturePositionList.Count; i++)
            {
                DefineTexturePose(i);
            }
 
 
            EditorGUILayout.BeginHorizontal();
            if (GUILayout.Button("Add Texture"))
            {
                texturePositionList.Add(new TextureInformation());
                return;
            }
            if (GUILayout.Button("Remove Texture"))
            {
                texturePositionList.RemoveAt(texturePositionList.Count - 1);
                return;
            }
            EditorGUILayout.EndHorizontal();
 
            EditorGUILayout.EndScrollView();
 
            EditorGUILayout.Separator();
 
            if (GUILayout.Button("Save Texture"))
            {
                int textureCount = texturePositionList.Count;
 
                Texture2D newTex = new Texture2D(newTexWidth, newTexHeight);
                newTex.name = string.IsNullOrEmpty(newTextureName) ? "New Texture" : newTextureName; 
                Color[] mainColors = new Color[newTex.width * newTex.height];
                newTex.SetPixels(mainColors);
 
                List<TextureInformation> pulledTextures = new List<TextureInformation>();
                for (int i = 0; i < textureCount; i++)
                {
                    TextureInformation pos = GetFromList(ref texturePositionList, i);
                    if (pos == null)
                        continue;
                    else if (pos.texture == null)
                    {
                        pos.texture = new Texture2D(pos.width, pos.height);
                        pos.texture.name = "Texture " + i;
                        Color[] c = new Color[pos.width * pos.height];
                        for (int j = 0; j < c.Length; j++)
                            c[j] = pos.blendColor;
                        pos.texture.SetPixels(c);
                        pos.texture.Apply();
                    }

                    if (pos.texture.width + pos.xPos > newTex.width ||
                        pos.texture.height + pos.yPos > newTex.height)
                    {
                        Debug.LogWarning(pos.texture.name + " will not fit into new texture.  Skipping.");
                        continue;
                    }
 
                    pulledTextures.Add(pos);
                }
 
                for (int i = 0; i < pulledTextures.Count; i++)
                {
                    EditorUtility.DisplayProgressBar("Saving Texture", "Working on Texture " + i, (i + 1f) / pulledTextures.Count);
 
                    // Generated placeholder textures aren't assets, so they have no importer.
                    TextureImporter ti = GetImporter<TextureImporter>(pulledTextures[i].texture);
                    bool wasReadable = ti == null || ti.isReadable;
                    bool wasNormal = ti != null && ti.normalmap;
 
                    if (wasReadable != true)
                    {
                        ti.isReadable = true;
                        ti.SaveAndReimport();
                    }
 
                    if (wasNormal)
                    {
                        ti.normalmap = false;
                        ti.SaveAndReimport();
                    }
 
 
                    Color[] pulledColors = pulledTextures[i].texture.GetPixels();
 
                    if (pulledTextures[i].blendColorUse != ChannelOperations.Ignore)
                    {
                        for (int c = 0; c < pulledColors.Length; c++)
                        {
                            switch (pulledTextures[i].blendColorUse)
                            {
                                case ChannelOperations.Set:
                                    pulledColors[c] = pulledTextures[i].blendColor;
                                    break;
                                case ChannelOperations.Add:
                                    pulledColors[c] += pulledTextures[i].blendColor;
                                    break;
                                case ChannelOperations.Subtract:
                                    pulledColors[c] -= pulledTextures[i].blendColor;
                                    break;
                                case ChannelOperations.Divide:
                                    pulledColors[c] *= DivideColor(pulledTextures[i].blendColor);
                                    break;
                                case ChannelOperations.Multiply:
                                    pulledColors[c] *= pulledTextures[i].blendColor;
                                    break;
                            }
                        }
                    }
 
                    Color[] colorsToModify =
                        newTex.GetPixels(pulledTextures[i].xPos, pulledTextures[i].yPos, pulledTextures[i].texture.width, pulledTextures[i].texture.height);
                     
                    // Applies these colors per pixel instead of setting them wholesale.  Slower, but allows channels to be swapped and combined.
                    for (int c = 0; c < colorsToModify.Length; c++)
                        pulledTextures[i].EditColor(ref colorsToModify[c], ref pulledColors[c]);
 
                    newTex.SetPixels(pulledTextures[i].xPos, pulledTextures[i].yPos, pulledTextures[i].texture.width, pulledTextures[i].texture.height,
                        colorsToModify);
 
                    if (ti != null && ti.isReadable != wasReadable)
                    {
                        ti.isReadable = wasReadable;
                        ti.SaveAndReimport();
                    }
 
                    if (wasNormal)
                    {
                        ti.normalmap = true;
                        ti.SaveAndReimport();
                    }
                }
 
                SaveTexture(newTex);
 
                EditorUtility.ClearProgressBar();
            }
        }
 
        void SaveTexture(Texture2D texture2D)
        {
            byte[] bytes = texture2D.EncodeToPNG();
 
            File.WriteAllBytes(Application.dataPath + "/" + texture2D.name + ".png", bytes);
 
            AssetDatabase.Refresh();
        }
    }
}
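As a recap of what the EditChannel switch in the listing implements: each output channel accumulates input channels through one of the operations. A stripped-down, Unity-free restatement (the `ChannelOp` and `Apply` names are mine, not the script’s):

```csharp
using System;

// Unity-free restatement of the script's per-channel operations.
enum ChannelOp { Ignore, Set, Add, Subtract, Multiply, Divide }

static class ChannelMath
{
    // Applies one operation from an input channel onto an output channel,
    // mirroring the switch statement in EditChannel.
    public static float Apply(float output, float input, ChannelOp op)
    {
        switch (op)
        {
            case ChannelOp.Set:      return input;
            case ChannelOp.Add:      return output + input;
            case ChannelOp.Subtract: return output - input;
            case ChannelOp.Multiply: return output * input;
            case ChannelOp.Divide:   return output / input;
            default:                 return output; // Ignore: leave the channel untouched
        }
    }
}
```

Copying a grayscale smoothness map’s red channel into an albedo’s alpha is just a Set from red to alpha applied at every pixel, with the other operations on that alpha channel left as Ignore.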

If you use the script, credit would be nice. If you have any questions, feel free to ask here or on my twitter.