Dev Log: Jan. 2020

It’s 2020!

So, it’s 2020.  Unfortunately, this year is off to a semi-rough start for me.  It started with anxiety and sickness, and now, I’m having some dental woes.  Just 2 cavities, but I’ve had cavities turn into a lot more; I’ll be fine regardless, but it’s never fun.  That being said, I wanted to write about what I plan on doing in 2020.

2019 Summary

2019 was pretty much the year of MerFight.  I really hit a stride with the game, creating every character model, implementing 5 characters, and showing the game off in public.  I still have a long way to go though.

Despite my progress, I did, however, experience some burnout in 2019.  For 2020, I made a schedule I want to try to keep up with: it should help me get decent work done on the game every week, but it also serves as a reminder to take breaks and limit how much I work, to prevent a similar setback.

Burnout is not fun…

2020 Plans

Anyway, here is a rather lofty list of things I’d like to accomplish or start in 2020.  I will NOT get all of these done; heck, I may not even touch some, but I like having a lofty list.

Continue MerFight

There are 7 characters left to finish for MerFight, plus a ton of other features and polish, so the chances of finishing it in 2020 are slim to none, even if I worked intensely on it every day.  Showing the game off at a convention like Combobreaker would be cool; however, I think trying to do so would burn me out, so I’m instead going to focus on getting it into a good state for 2021.

New Game Prototypes

I have some new games I’d like to start prototyping and get to stable enough states that I can pause them.  Most are fighting games — shocker. I started an interactive roadmap, but it began feeling cumbersome to update and share, so here is a current list:

Swapping Limb, Uh Cylinder Fighter

A fighting game that takes place on a cylinder or a round arena.  The goal is to create a 2D fighting game without corners. I also want to allow players to create unique characters by swapping limbs, inspired by a line of toys I had growing up in the ’90s called Socket Poppers.

I’m also playing around with frame rate; the game is at 60 FPS but the animations are at 24 FPS.
A very strange toy…

Fighter RPG

I feel there have been fighting games with RPG mechanics, but none quite like what I’m looking for yet. I experimented with this last year during a jam week, and this may be my fallback game if I find my rollback netcode results are abysmal.


CupKick

CupKick is a 3D fighter I’ve been wanting to do for a long time.  It has 3 unique aspects: it only has two buttons (punch and kick), it has no visible UI (clothing destruction is used to display damage), and all characters are based on different desserts.  Unfortunately, the more I’ve explored the design of this, the more I find that doing all 3 things might be difficult. I’ll continue to explore and contemplate this idea though. Maybe it’ll end up being two different games.

Battle High 2 A+ Update

I’d love to do a Battle High 2 update.  There is still one unfinished character, but I’d also love to see if I can integrate my rollback netcode into the game.  I think I need to release said netcode, or get more comfortable with it, in MerFight or another release first though.

Nadine, an unfinished character.

Dimension Swap

This is lofty, and I don’t even see myself starting it until 2021, but I wanted to do a game that starts as a 3D fighter but lets you switch to a 2D fighter.  So imagine Bloody Roar, but when in animal form, the gameplay changes entirely for both players. Could it be a disaster? Sure, but I’d like to at least prototype it.

Other Goals

Though many are game development related, there are some other goals I have for 2020.

Rollback Netcode Tutorial

I’m currently working on a tutorial for my rollback netcode approach in Unity that I’d like to share.  I may start this with just my patrons, but we’ll see.


Discord Server

I want to start a Mattrified Games Discord.  I’m hesitant because I know I won’t be able to be on it all the time, and I know there are some issues with hosting a server, but it’s something I’d like to try: a hub where people who like my work can interact with me and each other.

Rigging Script

In 2019, I did a few gigs using my biped constraint script in 3ds Max.  I’d like to continue some work with this. I’ve contemplated cleaning up and releasing the script or maybe making it a Fiverr gig.  I’m not 100% sure, but having a way to make a few extra bucks with the script from time to time would be nice.

Attend Combobreaker (or some other gaming con)

I’ve been to EVO and Magfest, and both were fun to a degree.  I felt a bit out of place at EVO, and I was showing a game off at Magfest so I didn’t really get to enjoy it entirely.  That being said, I hear good things about Combobreaker and would like to attend this year, but NOT while showing a game.  Though showing MerFight at Combobreaker would be cool — if accepted — trying to get something done by May would be a hassle and probably cause burnout.  I’d rather attend the convention first, see how it is, and then think about showing the game off there.


This is a lot of stuff.  I also have the usual resolutions like being kinder and healthier.  But from this list, I’d be happy if I could accomplish just a few items. So, though 2020 is off to a rough, uncertain start, I’m hoping the schedule I developed can help make it more productive and positive than 2019.

Tutorial: Setting up a 3D Fighting Game Camera Using Cinemachine in Unity3D

This is a simple tutorial exploring the use of a Cinemachine camera in Unity3D for 3D fighting games such as Virtua Fighter or Tekken.  This is by no means the only solution; this tutorial just explores methods that I’ve had some success with when prototyping.

Why use a Cinemachine camera? One nice advantage is that you can blend into other camera views quickly and easily.  For example, if you have a unique camera animation for a throw or super move intro, you can blend to this animation and back to the main camera simply by switching the priority of the virtual camera.
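As a quick illustration of that priority trick, here is a minimal sketch; the component name, camera references, and priority values are my own placeholders, not part of this tutorial’s actual setup:

```csharp
using Cinemachine;
using UnityEngine;

public class ThrowCamSwitcher : MonoBehaviour
{
    public CinemachineVirtualCamera mainCam;   // e.g. priority 10
    public CinemachineVirtualCamera introCam;  // starts at a lower priority

    // Raising the intro camera's priority above the main camera's
    // makes Cinemachine blend to it automatically.
    public void PlayThrowIntro() { introCam.Priority = 20; }

    // Dropping it back below the main camera blends back out.
    public void EndThrowIntro() { introCam.Priority = 0; }
}
```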

This being said, my Cinemachine camera has the following goals:

  • Track the two characters in the environment
  • Rotate as the two characters move around the scene in any direction.
  • Move in and out as the characters get closer and farther apart.

My scene initially looks like this.  I have two characters, a pink fighter and a teal fighter, which will be tracked by my Cinemachine camera setup.

Initial scene setup

Tracking the Characters

To track the characters, I first create a Cinemachine Target Group Camera (Cinemachine -> Create Target Group Camera).  This creates my Cinemachine Virtual Camera as well as a Cinemachine Target Group.

The Cinemachine Target Group allows me to track multiple transforms.  The following is the Cinemachine Target Group Component setup:

The Cinemachine Target Group Setup

I want the position mode to be Group Center and the Rotation Mode to be Manual.  A script will be applied that’ll rotate this later. I use Late Update for the Update Method.

I set both fighter transforms in the target list with the same weight.  I set the radius to about 1.7. Originally, I was experimenting with the Framing Transposer, where this value is very important; however, for now, just making sure this radius is the same for both transforms is the most important.
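If you would rather wire the targets up from code than in the inspector, the same setup can be sketched like this (assuming scene references to the target group and both fighters; the component and field names are hypothetical):

```csharp
using Cinemachine;
using UnityEngine;

public class TargetGroupSetup : MonoBehaviour
{
    public CinemachineTargetGroup targetGroup;
    public Transform pinkFighter, tealFighter;

    void Start()
    {
        // Both fighters get the same weight and the same ~1.7 radius.
        targetGroup.AddMember(pinkFighter, 1f, 1.7f);
        targetGroup.AddMember(tealFighter, 1f, 1.7f);
    }
}
```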

The Cinemachine Virtual Camera is set up as follows:

Cinemachine Virtual Camera Setup

The camera is set up to follow and look at the target group.  For the body of the virtual camera, I’m using a Transposer. Again, I originally tried a Framing Transposer; however, I found it was really jittery when rotating the camera.  I later read that the Framing Transposer is better suited for 2D camera usage than for 3D, rotated camera use, so I went back to the basic Transposer.

Anyway, the values are pretty similar to the defaults, except I lowered the damping to 0.5 per axis and set the Follow Offset to [0, 2, -4.3333333].  This makes it so the camera is 2 units up and 4.3333333 units back from the center of the target group during runtime.

For the Aim, I use “Same as Follow Target” meaning it’ll use the same rotation as the target group’s transform.

Using this initial setup, the camera should appear like this:

The camera setup just following the center of the targets.

Right now, the camera does a pretty decent job tracking the center of the two characters; however, it doesn’t move back to fit them in view when the pink fighter gets a certain distance away and the camera doesn’t rotate as the pink fighter walks around their opponent.

The next section of this tutorial will go over setting up the camera so it both rotates and moves to track the fighters better.

Rotating and Aligning the Camera

To achieve this, instead of fighting with built-in Cinemachine tools, I decided to write a script.  The MonoBehaviour, Align3DCam, is attached to the Target Group GameObject and appears as follows:

Cinemachine Target Group with Align 3D Cam

TA and TB are the two transforms that will be tracked.  In this case, our fighters.

We then reference the virtual camera itself.  Its Transposer Component will be referenced as well, but this reference will be set during Awake.

Framing Normal is the normalized direction of the camera offset.  This is set on Awake based on the follow offset of the virtual camera’s Transposer.

Distance shows the distance between the two tracked transforms; it is serialized in the inspector for debugging purposes.

Transposer Linear Slope and Transposer Linear Offset are two values that represent a simple linear equation (y = mx + b) where x is the distance between the two tracked transforms and y is the distance along the Framing Normal that the virtual camera will be offset.

The framing helpers are used to help create the slope and offset as well as set the minimum allowed distance so that the camera doesn’t move in too closely when the fighters are standing next to one another.
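To make the linear equation concrete, here is the same calculation with made-up numbers (the distances are placeholders of my choosing, not values from the actual setup):

```csharp
// x = distance between the fighters, y = desired camera distance.
// Two reference points: (minDistance, minCamDist) and (secondaryDistance, secondaryCamDistance).
float minDistance = 2f, minCamDist = 4.78f;              // fighters close together
float secondaryDistance = 6f, secondaryCamDistance = 8f; // fighters far apart

// m = (y2 - y1) / (x2 - x1)
float slope = (secondaryCamDistance - minCamDist) / (secondaryDistance - minDistance); // 0.805

// b = y - m * x
float offset = minCamDist - slope * minDistance; // 3.17

// So when the fighters are 4 units apart, the camera sits at
// 0.805 * 4 + 3.17 = 6.39 units along the framing normal.
```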

Now, the following is the script used for Align3DCam:

using Cinemachine;
using UnityEngine;

public class Align3DCam : MonoBehaviour
{
    [Tooltip("The transforms the camera attempts to align to.")]
    public Transform tA, tB;

    [Tooltip("The cinemachine camera that will be updated.")]
    public CinemachineVirtualCamera virtualCamera;

    /// <summary>
    /// The Transposer component of the cinemachine camera.
    /// </summary>
    private CinemachineTransposer transposer;

    /// <summary>
    /// Boolean that is set based on whether or not a virtual camera is supplied.
    /// </summary>
    private bool hasVirtualCamera;

    [SerializeField, Tooltip("The starting normal of the cinemachine transposer.")]
    private Vector3 framingNormal;

    [SerializeField, Tooltip("The current distance between the two tracked transforms.")]
    private float distance;

    [Tooltip("Slope Value (m) of the linear equation used to determine how far the camera should be based on the distance of the tracked transforms.")]
    public float transposerLinearSlope;

    [Tooltip("Offset Value (b) of the linear equation used to determine how far the camera should be based on the distance of the tracked transforms.")]
    public float transposerLinearOffset;

    [Header("Framing helpers")]
    [Tooltip("The minimum distance allowed between the two transforms before the camera stops moving in and out.")]
    public float minDistance;

    [Tooltip("The minimum distance the camera will be from the tracked transforms.")]
    public float minCamDist;

    [Tooltip("A secondary distance between the two transforms used for reference.")]
    public float secondaryDistance;

    [Tooltip("A secondary distance the camera should be at when the tracked transforms are at the secondary distance.")]
    public float secondaryCamDistance;

    /// <summary>
    /// Function to help determine the slope and offset of the camera-distance equation.
    /// </summary>
    [ContextMenu("Calculate Slope")]
    void CalculateSlopes()
    {
        if (virtualCamera == null)
            return;

        transposer = virtualCamera.GetCinemachineComponent<CinemachineTransposer>();
        if (transposer == null)
            return;

        // If the application is playing, we don't update the minimum values.
        if (!Application.isPlaying)
        {
            // We get the current distance between the transforms.
            minDistance = Vector3.Distance(tA.position, tB.position);
            distance = minDistance;

            // We get the magnitude of the follow offset vector.
            minCamDist = transposer.m_FollowOffset.magnitude;
        }

        // We calculate the slope ((y2-y1)/(x2-x1)).
        transposerLinearSlope = (secondaryCamDistance - minCamDist) / (secondaryDistance - minDistance);

        // We calculate the offset (b = y - mx).
        transposerLinearOffset = minCamDist - (transposerLinearSlope * minDistance);
    }

    private void Awake()
    {
        // Determines if a virtual camera is present and active.
        hasVirtualCamera = virtualCamera != null;
        if (hasVirtualCamera)
        {
            transposer = virtualCamera.GetCinemachineComponent<CinemachineTransposer>();

            if (transposer == null)
                hasVirtualCamera = false;
            else
                // Sets the framing normal from the transposer's initial offset.
                framingNormal = transposer.m_FollowOffset.normalized;
        }
    }

    // Update is called once per frame.
    void LateUpdate()
    {
        // Gets the distance between the two tracked transforms.
        Vector3 diff = tA.position - tB.position;
        distance = diff.magnitude;

        // The Y is removed and the vector is normalized.
        diff.y = 0f;
        diff.Normalize();

        // Adjusts the follow offset of the transposer based on the distance between the two tracked transforms, using a minimum value.
        if (hasVirtualCamera)
            transposer.m_FollowOffset = framingNormal * (Mathf.Max(minDistance, distance) *
                transposerLinearSlope + transposerLinearOffset);

        // If the two transforms are at the same position, we don't do any updating.
        if (Mathf.Approximately(0f, diff.sqrMagnitude))
            return;

        // We create a quaternion that looks in the initial direction and rotate it 90 degrees.
        Quaternion q = Quaternion.LookRotation(diff, Vector3.up) * Quaternion.Euler(0, 90, 0);

        // We create a second one that is rotated 180 degrees.
        Quaternion qA = q * Quaternion.Euler(0, 180, 0);

        // We determine the angle between the current rotation and the two previously created rotations.
        float angle = Quaternion.Angle(q, transform.rotation);
        float angleA = Quaternion.Angle(qA, transform.rotation);

        // The transform's rotation is set to whichever one is closer to the current rotation.
        transform.rotation = angle < angleA ? q : qA;
    }
}
The script is rather lengthy, so I’ll summarize it a bit.  It’s essentially doing two things: offsetting the camera based on the linear equation values and rotating the camera based on the vector between the two tracked transforms.

The slope values are calculated in the CalculateSlopes method, which has a ContextMenu attribute, meaning it can be accessed through the right-click menu of the Align3DCam Component.

The Calculate Slope Context Menu Method

It takes the current distance between the fighters and the magnitude of the Cinemachine camera’s follow offset to set the minimum distance and minimum camera distance.  The secondary values are then used to calculate Transposer Linear Slope and Transposer Linear Offset.

Now, to get good secondary values, you’ll have to manually adjust them until you have something you like.  If you use Calculate Slope while the game is playing, the minimum values will not be adjusted, so you can test different secondary values out, and then copy the Component and paste its values when you are no longer running.  I could have probably written a more advanced algorithm that uses a bounding box, but for now, I found this got the job done pretty quickly.

When it comes to rotation, the method works by taking the vector between the two transforms, which is found by subtracting the position of TB from TA.  The y value of this vector is then set to 0 and the vector is normalized.  

A quaternion is then created using Quaternion.LookRotation, which takes the normalized diff vector and Vector3.up to create a rotation that essentially looks in the direction of this vector.  This quaternion is then multiplied by a 90-degree rotation, creating a rotation that will look at both characters; however, this assumes that TA will always be on the left and TB on the right.  If they switch sides, such as one fighter jumping over the other, the camera will rotate and snap quickly like this:

How the camera will look if we only care about one Vector

We certainly don’t want that, so we create another quaternion, which is the first quaternion rotated 180 degrees on the Y axis, essentially the same rotation looking in the opposite direction.  We then get the angle between each of these quaternions and the camera’s current rotation. Whichever angle is lower, that is the rotation we use. So now, when jumping over the opponent, the camera will no longer pop to keep the pink player on the left side:

The camera no longer forcing the pink player to the left side.

So, once applied with proper values, the Cinemachine virtual camera and target group should work as follows:

The Final Result of the 3D Fighter Cinemachine Camera Setup

As the pink player moves around the teal player, the camera rotates.  The camera moves back as the pink player gets farther away and the camera doesn’t snap sharply when jumping over the opponent.  All of the initial goals have been achieved.

Overall, this is a very simple setup, but it’s a good place for setting up a 3D camera for a fighting game using Cinemachine, especially for early prototypes.  Future features that would probably need to be added are collision with objects in the world if scenes are more complex or using a bounding box to frame fighters in more properly, but again, this is a simple approach to get something off the ground.

If you have questions feel free to comment here or send me a tweet to @mattrified. Additionally, a sample can be found on my Patreon.

Jam Week 2019: Golem Jox 3

Golem Jox 3 (GJ3) is a prototype demo I developed during Schell Games’ Jam Week. Here’s a quick preview:

Essentially, once a year the studio “closes” and allows its employees to work on whatever they want – within reason. Usually I work on something fighting game related by myself. Last year, for example, I worked on developing something that utilized my own rollback netcode solution in Unity. This year I decided to experiment with what I was calling a “Single Player Fighter” or “Fighting Game RPG.” Someone suggested a fighting adventure game; someone else, a turn-based fighter. I’m still not 100% sure what to call it, or if it’s even that unique as apparently there are a few games that have attempted similar approaches.
The game flow is rather simple.

  1. Player explores rather simple environments
  2. Player encounters an enemy
  3. Short dialog introduction
  4. The player’s turn begins where they attack the enemy, trying to perform the most damage in an allotted amount of time
  5. The enemy takes their turn
  6. Repeat 4 to 5 until someone wins
  7. If the player wins, return to 1; otherwise, end the game
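The loop above can be sketched as a simple state machine; this is a rough reconstruction with state names of my own, not code from the actual prototype:

```csharp
public enum GameState { Explore, Dialog, PlayerTurn, EnemyTurn, GameOver }

public static class GameFlow
{
    // Returns the next state once the current one resolves.
    public static GameState Next(GameState state, bool playerWon, bool fightOver)
    {
        switch (state)
        {
            case GameState.Explore:    return GameState.Dialog;      // enemy encountered
            case GameState.Dialog:     return GameState.PlayerTurn;  // intro finished
            case GameState.PlayerTurn: return GameState.EnemyTurn;   // attack timer expired
            case GameState.EnemyTurn:
                // Turns repeat until someone wins; a player win loops back to exploring.
                if (!fightOver) return GameState.PlayerTurn;
                return playerWon ? GameState.Explore : GameState.GameOver;
            default:                   return GameState.GameOver;
        }
    }
}
```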


So many, many questions…

I find one of the primary goals behind prototypes is to answer questions. Here are some of the questions I was trying to answer with GJ3’s prototype:

How should the player explore the environment?

I decided to just have the player explore the environment like they would if they were in a 2D fighting game.  I feel if I – especially within the 4 day jam period – tried to implement a top-down RPG exploration map or 4-way movement system, I wouldn’t have gotten to answer a lot of the other questions I was trying to answer. This also allows players to practice various moves, and I can “teach” how to perform different attacks in the environment.

I know it’s not the “right” input for that attack style…

Do character move sets evolve over time? If so, how?

The Golem Jox theme sort of comes in for this. Golem Jox is a silly IP that I’ve used for Jam Weeks in the past in which players control a “golem,” or just an entity made of random things. You start off as “Juhnk,” a golem made of white cubes. As you progress, you swap and equip different “limbs.” Some of the limbs are more powerful than what you previously had, either granting new moves, having more attack power, or granting other changes such as increased max health. I sort of “force” limb switching by locking sections off unless you’re wearing different limbs. Most people during playthroughs didn’t switch back after going through a “door” once they realized the new limb or move set was better.

For this prototype the idea was:

  • Your base or body, torso and head, determined things like your walk speed, jump weight, max health, etc. Unfortunately, I didn’t get very far with these.
  • Left arm was for weak or light punch
  • Right arm was for strong or heavy punch
  • Left leg was for weak or light kick
  • Right leg was for strong or heavy kick

Players were then supposed to have a forward and/or back special move for each non-torso limb and a super attack, but this sadly didn’t happen due to time. In the prototype, they got unique limbs and some had unique special moves, but supers were never implemented.

How do you prevent players from sticking to one set?

Sadly, this question is still unanswered. What I wanted was for the player to level up not based on how many matches they win, but on how often they use each limb. So, for example, if I’m level 1 and I use my left arm 5 times in one fight, and it levels up to level 2, then I level up to level 2 as well. However, if the same limb is level 4 and maxed out, then I will no longer gain EXP for using it. As a player, I’d have to make the choice: “Do I keep using a limb I’m really good with, or do I equip a newer, maybe weaker one, so I can continue to level up overall?”
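The rule described above could be sketched like this; the class, numbers, and thresholds are entirely hypothetical, since none of this made it into the prototype:

```csharp
public class Limb
{
    public int Level = 1;
    public int Exp;
    public const int MaxLevel = 4;
    public const int ExpPerLevel = 5;

    // Returns true if using the limb granted EXP.
    // A maxed-out limb no longer contributes, nudging the player
    // to equip newer, possibly weaker limbs to keep leveling overall.
    public bool Use()
    {
        if (Level >= MaxLevel) return false;
        Exp++;
        if (Exp >= ExpPerLevel)
        {
            Exp = 0;
            Level++;
        }
        return true;
    }
}
```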

Again, unfortunately, due to time, I didn’t get this far, but it’s probably the first question I would try to answer next if I were to continue to polish this prototype.

I think the other, final question, which I’m not 100% sure is answered, is whether a player will enjoy this gameplay loop.  That’s difficult to tell without more work, but based on the playtest I had, I think that, with a lot of polish to the combat itself, they could.

Learning New Tools:  Playables

Not my playable graph, but a sample one provided by Unity.

Learning is an important part of Jam Week.  Besides learning the answers to prototyping questions, I often decide to try something new.  This Jam Week in particular, I decided to work with Unity’s Playable System. One challenge with this game is that characters would need to be able to choose from a wide variety of animations; however, having all of these animations loaded at runtime would probably not be very efficient.

To remedy this, I utilized the Playables system.  Unlike Unity’s runtime animator controllers, you can dynamically build a playable graph at runtime.  So, for example, if a character is equipped with a cubic right leg, I can utilize an animation, let’s call it, “cubic right kick.”  If I then equip a spherical right leg, I can replace it with “spherical right kick.” All I have to do is rebuild the playable graph and apply it.  There is still a lot of finesse needed, such as making the animations blend cleanly, but the Playables system’s ability to load animations dynamically definitely makes it seem like a great fit.  The system also has some strict rules, such as you MUST destroy a playable graph once you’re done with it.
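A rough sketch of the idea follows; the component, clip names, and structure are my own minimal example of rebuilding a graph, not the prototype’s actual code:

```csharp
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

public class LimbAnimator : MonoBehaviour
{
    public Animator animator;

    private PlayableGraph graph;

    // Rebuilds the graph with whatever clip the newly equipped limb provides,
    // e.g. "cubic right kick" or "spherical right kick".
    public void EquipRightLeg(AnimationClip newKick)
    {
        if (graph.IsValid())
            graph.Destroy(); // graphs MUST be destroyed when you're done with them

        graph = PlayableGraph.Create("LimbGraph");
        var output = AnimationPlayableOutput.Create(graph, "Anim", animator);
        var clipPlayable = AnimationClipPlayable.Create(graph, newKick);
        output.SetSourcePlayable(clipPlayable);
        graph.Play();
    }

    void OnDestroy()
    {
        if (graph.IsValid())
            graph.Destroy();
    }
}
```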

Next Steps

Getting something playable — no pun intended — felt nice, but there is still a lot that can be done.

Getting as far as I did felt like a minor victory.

This is just a prototype, but also something I’d like to continue at a future time in some capacity.  I think the following are things I would like to answer in the future:

  • Should there be guard functionality?  If so, what does that look like?
  • Can this work with an original IP that does NOT involve swapping limbs?  
    • Would swapping “styles” like in Final Fantasy Tactics work better?
    • How many moves does a character need to make them feel “complete?”
  • Can you have multiple characters on a team?  
    • If you have multiple enemies on a team, can you change position and try to line up a “shot”?

And these are just a few of the questions.  Overall, there is a lot that would need to be done to make this a full game; however, I think Jam Week gave me a good head start to understand the idea a lot better.  For now though, I’m most likely going to continue with MerFight and give this a break for a few weeks before returning to it with fresh eyes. I’d like to eventually release this prototype to the public to try, but I think it needs a bit more polish before that.

Unity3D Tool – HairKit

Hair.  It’s probably one of the most difficult things for me to 3D model.  Whether I’m trying to go for a more chunky, anime look or planar hair, it’s a challenge.  To try and remedy this, I created a tool a few months ago — maybe even over a year — that I called HairKit.  This post is a brief overview of the tool and where it is now.

HairKit Components

The following diagram demonstrates how the different components of HairKit come together.

Hair Kit Main is the main component that creates the Unity3D mesh.  This is made up of a set of Hair Kit Lines which require a Hair Kit Shape and a set of Hair Kit Line Points.  Finally, there is an optional component, HairKit Smoothed Line Helper, which can create a set of Hair Kit Line Points with smooth interpolation and spacing.

HairKit Main

The HairKitMain component is pretty straightforward.  When adding it to an object, it’ll automatically add a MeshFilter and a MeshRenderer component to the GameObject.  It will also create a new mesh named <GameObject’s Name> mesh.  Before lines can be rendered, a HairKitShape and a HairKitLine need to be set up.


HairKitShape

HairKitShape defines the shape that will be used when making lines.  It uses the children of its GameObject to define the shape. The Gradient is used by the gizmo system to draw the shape.  The UV Percentages, which will be used for the UV layout of the different meshes, are based on the distance between points. You can automate this by keeping Automate UV Percentages checked; if you uncheck it, the UV Percentage array will be forced to the correct length, but you can set the values as you wish.

You can rename the children of this GameObject more cleanly by pressing the Rename Children button.

You can also create an enclosed, circular shape by pressing “Create Shape.”  The resulting shape will have the number of points specified by Count, minus one, and the specified radius.  You cannot set Count lower than 3. So, to create a line, use a Count of 3; for a triangle, 4; and so on.

The previous image showcases some examples of shapes created by different HairKitShape configurations.
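As a sketch of what “Create Shape” roughly does (my reconstruction, not the tool’s actual code), the child points are simply laid out around a circle:

```csharp
using UnityEngine;

public static class ShapeBuilder
{
    // Returns count - 1 local positions evenly spaced around a circle of
    // the given radius; the shape is treated as closed, so the last point
    // loops back to the first.
    public static Vector3[] CreateShape(int count, float radius)
    {
        count = Mathf.Max(3, count); // counts lower than 3 are not allowed
        var points = new Vector3[count - 1];
        for (int i = 0; i < points.Length; i++)
        {
            float angle = (2f * Mathf.PI * i) / points.Length;
            points[i] = new Vector3(Mathf.Cos(angle), Mathf.Sin(angle), 0f) * radius;
        }
        return points;
    }
}
```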


Once you have a satisfactory shape, it’s time to start creating the line.

HairKitLine

HairKitLine is a pretty full component.  For the quickest approach, assign a HairKitShape and then press “Add Child.” This will create a HairKitLinePoint.

Each point can then add a child or a sibling, which will be added to the line itself.

Locking a point allows the parent to be moved around without disrupting the position of the point.

The following .gif demonstrates adding a set of points:

Once you have a line you are happy with, you can add it to the Hair Kit Main to see the mesh itself.

HairKitLine has the most fields to edit, but for now, this covers the basics of the HairKit system.  The HairKitLineSmoother, an optional helper, can generate a set of line points with smoother interpolation and spacing.

Saving the Mesh

Once you are happy with the mesh, you can save it out using the context menu of the Hair Kit Main component.

You can either Clone and Save the mesh or Skin the mesh.

Clone and Save will create a duplicate GameObject, except this GameObject will not have a HairKitMain component, and its Mesh Filter will now refer to a newly created mesh asset.

Skin is a bit more difficult.  The bones have to be set up in a specific way, cascading child-by-child for this to work properly.

Either way, the mesh should be saved out because the update methods used by the HairKit components are not the most efficient and should not be included in a final game.

Was It Useful?

In the end, I essentially recreated 3ds Max’s Loft tool; I realize now that this tool should probably be renamed LoftKitTool, but the original intent was for hair.  Recently, however, I discovered I can use it for creating trails in my current game rather quickly. The following are some .gifs of it being used.

Anyway, I wanted to share this, as I may release it one day as a Unity package or maybe even on the Unity Asset Store.  Anything you’d like to see in the tool, whether you would pay for said tool (and how much), or any other comments would be greatly appreciated.

ProtoFighter Dev Blog 01

So for the past couple of months I’ve been working on a new fighting game prototype.  After discovering TrueSync by Exit Games, I’ve been trying very hard to create a new fighting game with it.  Again, one of my biggest regrets with Battle High is that I was never able to implement multiplayer before its release.  I definitely feel that TrueSync could help me achieve that!  Anyway, I decided to write a little bit about the game and what I’m trying to do with it.


I chose this name because what I made was a prototype, and I wanted to make this clear.  I decided to use only assets from the Unity3D Asset Store, which TrueSync already is.  This includes my characters, audio, and more!  Here is a short list of some of the assets I am using:


I had several goals while making this prototype.

Learn TrueSync With a Focus on a Fighting Game

My first goal was to learn TrueSync and make a game using it.  I think I accomplished this.  In fact, it’s not my first TrueSync experiment.  Diamonds Not Donuts, a small game I released on itch.io for free, is!  That being said, for ProtoFighter, I wanted to focus more on fighting games and various issues concerning them.  ProtoFighter has a lot of gameplay functionality that most fighters do — blocking, jumping, attacking, special moves, supers, rounds, etc.  Obviously it’s missing a lot to be a complete fighting game package — single player modes, balance is a MESS, more characters, etc.  Again, for a pre-pre-pre alpha, I think I achieved my goal, but of course, when it comes to TrueSync, there are still a ton of questions I have and hope to continue to answer them as I expand upon this prototype.

Make a Fighter That Is Slightly More Accessible Than Most

Though not TrueSync related, I’ve always wanted to try and make a fighting game that was a bit more accessible to the average player.  Maybe not as extreme as Fantasy Strike, but something that I could still explain relatively easily.

In ProtoFighter, though I sadly haven’t released a tutorial yet, I tried to do this.  Essentially, instead of performing quarter-circle attacks, I simplify this to forward or back plus an attack.  Now, a lot of people would immediately say this oversimplification could cause issues such as instant dragon punches or anti-airs, so to solve this I did two things.  Firstly, all initial moves such as forward+punch have rather long start-up and are reserved for moves like overheads or projectiles.  Then, every special move has a “secondary” special that branches from it.  For example, forward+punch may begin an overhead, but pressing up before the attack activates would perform a secondary attack, probably an anti-air.  The hope is that performing the initial move and then the secondary move requires just enough time and frames that the anti-air won’t be so instantaneous.  Maybe this won’t help, but the idea is that it’s simple to actually perform an attack, but it requires some dexterity and memorization to cancel one move into another properly.
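The branching-input idea might look something like this; a hypothetical sketch with made-up frame counts, not ProtoFighter’s actual input code:

```csharp
public class SpecialMoveBranch
{
    public const int StartupFrames = 20; // deliberately long start-up
    public const int BranchWindow = 12;  // frames during which "up" branches the move

    private int currentFrame;
    private bool branched;

    public void StartForwardPunch() { currentFrame = 0; branched = false; }

    // Called once per frame with this frame's input;
    // returns which move the character is performing.
    public string Tick(bool upPressed)
    {
        currentFrame++;

        // Pressing up before the attack activates branches into the anti-air.
        if (!branched && upPressed && currentFrame <= BranchWindow)
            branched = true;

        if (currentFrame < StartupFrames) return "startup";
        return branched ? "anti-air" : "overhead";
    }
}
```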

A secondary idea I had is to still allow players to perform attacks using quarter-circle motions; these players are rewarded with a slight meter bonus.  You don't have to perform motions properly to compete or play, but players who do take the time and effort to perform the more complex inputs are rewarded slightly.  I can't really tell if this input system is good or not until someone tests it, which is why I released the prototype.
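To make the idea concrete, here's a rough sketch of how a quarter-circle check could sit on top of the simplified inputs.  Everything here — the numpad-style direction codes, the buffer length, and the class itself — is my own hypothetical illustration, not ProtoFighter's actual code:

```csharp
using System.Collections.Generic;

// Hypothetical helper.  Directions use numpad notation:
// 2 = down, 3 = down-forward, 6 = forward.
public class QuarterCircleChecker
{
    readonly Queue<int> recentDirections = new Queue<int>();
    const int BufferLength = 10; // how many input polls of history to keep (assumed value)

    // Call once per input poll with the current direction being held.
    public void RecordDirection(int direction)
    {
        recentDirections.Enqueue(direction);
        while (recentDirections.Count > BufferLength)
            recentDirections.Dequeue();
    }

    // True if down, down-forward, forward appear in order within the buffer.
    public bool QuarterCircleForward()
    {
        int stage = 0;
        int[] pattern = { 2, 3, 6 };
        foreach (int d in recentDirections)
        {
            if (d == pattern[stage])
                stage++;
            if (stage == pattern.Length)
                return true;
        }
        return false;
    }
}
```

When a special comes out, the game could check QuarterCircleForward(): if true, grant the small meter bonus; if false, the move still works through the simplified forward/back input, just without the bonus.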

Create a Framework

My third goal was to begin creating a framework so that I can create future titles, TrueSync or not, more quickly.  Most of the games I work on are fighting game influenced, so I wanted a foundation that would make future titles, whether 2D or 3D, easier to build.  Though it's not perfect, I definitely tried to abstract more of my classes and functionality, and I believe I could go from this 2.5D fighting game to a fully 3D one with relatively few changes.

TrueSync Tips

So, for this fighting game, I learned a good amount about TrueSync.  TrueSync attempts to be deterministic: a local player's inputs are respected immediately and passed over the network, and if the remote inputs reveal inconsistencies with the game's state, the state is rolled back and resimulated.

The issue, though, is that Unity3D wasn't built to be deterministic.  Its use of floats and its random number system, for example, can cause various issues.  Its animation system also isn't deterministic, so trying to perfectly reproduce results across two machines can be rather problematic.  Anyway, here are some tips I found helpful while completing my prototype.

Note, these tips were written for Unity3D version 2017.1.1f1 and TrueSync version 1.1.0B.

Don’t “Press” Inputs

TrueSync uses a unique method to capture and send input, called OnSyncedInput.  Here's an example of how it works.

public class MyTSClass : TrueSyncBehaviour
{
    public override void OnSyncedInput()
    {
        TrueSyncInput.SetBool(0, Input.GetKeyDown(KeyCode.Space));
    }
}

So in the above, TrueSyncInput is used to pass inputs over the network.  The first argument is a byte used as a key.  I'm just using 0 for now, but if you use multiple keys, you should probably assign them to constants.  Then I use Input.GetKeyDown to send a bool for whether space was pressed.  One issue with this is that OnSyncedInput runs on a fixed timestep, similar to FixedUpdate, so calls like Input.GetKeyDown don't work consistently: the frame on which Input.GetKeyDown is true can pass before OnSyncedInput is called, and the press is missed.  To resolve this for button inputs, here's what I did:

public class MyTSClass : TrueSyncBehaviour
{
    bool hasPressed = false;

    public override void OnSyncedInput()
    {
        bool singlePress = false;
        if (Input.GetKey(KeyCode.Space))
        {
            if (!hasPressed)
            {
                hasPressed = true;
                singlePress = true;
            }
        }
        else if (hasPressed)
        {
            hasPressed = false;
        }

        TrueSyncInput.SetBool(0, singlePress);
    }
}

This change uses a bool, hasPressed, that is set to true during OnSyncedInput if the space bar is currently down; it is reset once the space bar is no longer held.  The bool actually passed to TrueSyncInput.SetBool is only true when the key is down AND hasPressed was false before being set, so the first entry of TrueSyncInput will be true for exactly one execution of OnSyncedInput per press.  Since the average button press lasts several frames, this should prevent OnSyncedInput from missing an input.  I don't use this method exactly in ProtoFighter, but the idea is similar: instead of a separate boolean for each input type — up, down, left, right, etc. — I use an integer and bitmasking to change it during OnSyncedInput.
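As a rough sketch of that integer-and-bitmask approach (the flag names and key bindings are my own for illustration; this assumes TrueSyncInput.SetInt, the integer counterpart to SetBool):

```csharp
public class MyTSInputClass : TrueSyncBehaviour
{
    // Hypothetical bit flags -- one bit per input.
    const int INPUT_UP = 1 << 0;
    const int INPUT_DOWN = 1 << 1;
    const int INPUT_LEFT = 1 << 2;
    const int INPUT_RIGHT = 1 << 3;
    const int INPUT_PUNCH = 1 << 4;

    int heldLastPoll = 0;

    public override void OnSyncedInput()
    {
        int held = 0;
        if (Input.GetKey(KeyCode.W)) held |= INPUT_UP;
        if (Input.GetKey(KeyCode.S)) held |= INPUT_DOWN;
        if (Input.GetKey(KeyCode.A)) held |= INPUT_LEFT;
        if (Input.GetKey(KeyCode.D)) held |= INPUT_RIGHT;
        if (Input.GetKey(KeyCode.Space)) held |= INPUT_PUNCH;

        // Bits that are down now but weren't during the last poll are "just
        // pressed" -- the same idea as the hasPressed toggle above, without a
        // separate bool per button.
        int justPressed = held & ~heldLastPoll;
        heldLastPoll = held;

        TrueSyncInput.SetInt(0, justPressed);
    }
}
```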

Treat TrueSync Like A Separate Engine

This sounds silly, as Unity3D is a game engine; however, to make TrueSync's determinism work properly, you have to use a lot of the unique structs and classes it introduces.  There's FP, or FixedPoint, in place of float, and TSVector in place of Vector3.  TrueSync also has its own transform class, TSTransform, which doesn't have all the functionality (at least for now) that Unity3D's Transform class has: you can't use children the same way, and certain methods, such as those that convert transform information from world to local space, are missing.  Overall, you can't just take a finished game and quickly integrate TrueSync into it.

One trick I had to do, for example, was figure out a way to align character hit spheres to certain joints.  In a normal setting, I could just use the following:

Animator anim = GetComponent<Animator>();
Transform t = anim.GetBoneTransform(HumanBodyBones.Chest);
Vector3 chestPos = t.position;

However, one problem is that this creates a Vector3, and even though I can convert the position to TrueSync's Vector3 equivalent, a TSVector, the values may differ between the two players' machines due to floating-point precision errors.

To resolve this, I built a tool that cycles through my animations and stores important point information as TSVectors in a ScriptableObject.  I don't save the world position, though; instead, I save the vector from the character's center to the point.  So, to get where the chest is on a given frame of animation, it's something like the following:

TSVector localChestVector = GetChestPosition(frame);
TSVector worldChestVector = tsTransform.position + tsTransform.rotation * localChestVector;

So, in the above, I take the local vector for my chest position and use my player's position and rotation to compute the chest's current world position.  You'll also notice that I pass in a frame.  This is because a lot of fighting games interpret things in frames, and I believe a deterministic TrueSync game is much easier to reason about through the concept of frames than through time.  Even though my 3D animation is made up of curves, I store bone information in these TSVectors so it can be referenced later regardless of my character's rotation or position.  I use a similar technique to move a character by its root animation without actually having the Animator drive it.
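Here's the spirit of that baking step as a sketch.  BakeChestOffsets is my naming for illustration, not my actual tool, which stores more bones and writes the results into a ScriptableObject; the (FP) casts assume TrueSync's float-to-FP conversion:

```csharp
// Editor-time baking: sample each frame of a clip and store the chest's offset
// from the character's root as a deterministic TSVector.
TSVector[] BakeChestOffsets(Animator anim, AnimationClip clip, int frameCount)
{
    TSVector[] offsets = new TSVector[frameCount];
    Transform root = anim.transform;
    Transform chest = anim.GetBoneTransform(HumanBodyBones.Chest);

    for (int frame = 0; frame < frameCount; frame++)
    {
        // Pose the rig at this frame.  This isn't deterministic, which is fine
        // because it runs once offline in the editor, not during a match.
        clip.SampleAnimation(root.gameObject, frame / clip.frameRate);

        // Store the chest position relative to the root, undoing the root's
        // rotation so the offset can be re-applied at any facing at runtime.
        Vector3 local = Quaternion.Inverse(root.rotation) * (chest.position - root.position);
        offsets[frame] = new TSVector((FP)local.x, (FP)local.y, (FP)local.z);
    }
    return offsets;
}
```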

No Animators — At Least How You Think

As of right now, TrueSync doesn't have an Animator class.  For fighting games this can be an issue, since animations, and the accuracy of those animations, are so important.  To handle this, I did the following:

  • Stored all of my animator data in a separate data structure, mostly just my transition parameters and conditions
  • Muted ALL of my animation transitions
  • Disabled the Animator Component
  • Used Animator.Update(float) to drive playback manually

So, even though the Animator component is disabled, calling Animator.Update(float) still advances it.  You do have to pass a float instead of an FP, but the amount I advance is determined by the frame I'm supposed to be on, so my update function looks like this:

FP syncedFrame;
FP localFrame;
Animator anim;

private void Update()
{
    anim.Update((TrueSyncManager.DeltaTime * (syncedFrame - localFrame)).AsFloat());
    localFrame = syncedFrame;
}

So, here, syncedFrame is the frame of my animation, which is set during OnSyncedUpdate.  I subtract localFrame from syncedFrame, multiply by the delta time, and convert the result to a float value; I then set localFrame to syncedFrame.  I used FP instead of integers in case I want to play the game in slow motion.  This still needs some tweaking, but it gets the general idea across.

Overall, using Animator.Update(float) is great because it allows me to still get a lot of the functionality of Animators:

  • Transition blending
  • IK
  • Mirroring
  • Humanoid rigs

But with more control.  This is actually one reason all transitions in the Animator are muted: I don't want transitions to happen automatically and switch states unexpectedly if there is a rollback.  Handling them manually allows me to switch states exactly when I need to.

Just one small part of my AnimatorController; the red arrows show that my transitions are muted.
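As a minimal sketch of what that manual switching can look like (SetAnimationState is my naming here, not actual ProtoFighter code; anim is the Animator field from the earlier snippet):

```csharp
// Called from the synced simulation whenever the character's state changes.
// Because every transition in the controller is muted, nothing else can change
// the state, and if a rollback resimulates into a different state, this snaps
// the Animator to the right clip at the right point in that clip.
void SetAnimationState(string stateName, FP frame, FP totalFrames)
{
    float normalizedTime = (frame / totalFrames).AsFloat();
    anim.Play(stateName, 0, normalizedTime);
}
```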

Anyway, the future of ProtoFighter is uncertain.  I will most certainly not release it as a full game, but rather as a fighting game demo.  In the current 2D version, I'd like to do the following:

  • Add rooms and lobbies instead of the “Ranked Match” system it uses now
  • Add stage select
  • Add an interactive tutorial
  • Balance and clean up the existing characters, Protolightning and Protaqua
  • Start looking into AI and single player modes

Overall, the goal with this game is to eventually get the framework to a place where I can experiment with a variety of gameplay styles and build something of my own down the road, hopefully sooner rather than later.  Maybe I can even use it to integrate TrueSync into Battle High 2 A+ — though I make zero promises.

ProtoFighter is available on itch.io & Game Jolt for free!  If you download the game and play it, I'd love to hear your feedback — but make sure you try the multiplayer, as that's the main area I'm trying to focus on.  Also, if you have any questions about TrueSync, I'd love to try and help, as I think it's a great asset that can bring online functionality to a lot of new indie games — fighting games and otherwise — in the future.

Unite 2017

I recently returned from Austin, Texas, and Unite 2017, one of several conferences Unity holds annually to discuss upcoming features of the Unity3D game engine.

I usually write long posts about my experience at these conferences, but this year wasn't a letdown per se — I just didn't feel I got as much out of it as in previous years, and I didn't leave feeling inspired and invigorated.

First, there weren’t a ton of sessions like in previous years, in fact, the first day of the conference, only the expo hall was open.  It was a nice expo, but also felt lacking in ways.  Last year Unite was held in a difference convention center, so it’s possible that the larger expo floor made it feel smaller, but regardless, having no sessions the first day just felt odd and had myself and others question “What’s the point?”

Overall, none of my sessions blew me away nor did the keynote.  Most of them were great overviews.  There was a talk about different network architectures from Exit Games that I liked as well as one that went over the character building techniques of the Rick & Morty VR game.  There was also a decent discussion and demonstration about future AR features coming to Unity3D in the coming years.

I think that would be my next biggest complaint: a LOT of AR and VR, almost too much.  I understand they're exciting technologies, but I wish, like last year, there were a few more talks about design itself, or just more variety in general.  I always feel that no matter how good your engine or tools are, it won't matter if your games aren't designed well.  Maybe there were such talks and I just missed them.

Overall, I think it was worth the price of admission but am definitely on the fence if I’ll go next year or go to a different conference such as GDC instead.

Unity3D Script: Quick Texture Editor

Last year I wrote a Unity3D editor script for combining textures as well as swapping and combining their different color channels.

Someone on YouTube recently commented, asking for more details. Since I haven’t touched the script in over a year, I decided to just make the script public. It’s not perfect and some of my comments don’t make sense. I’ll probably clean it up in the future, or at least add better documentation.  I sound very professional right now.


What this script does:

  • Allows you to swap color channels
    • For example, take the red channel of a grayscale smoothness map and apply it to the alpha channel of your albedo texture
  • Allows you to combine textures onto a new, larger texture
    • For example, you have two 512×512 textures and want to combine them onto one 1024×512 texture

What this script does NOT do:

  • Resize textures
  • Rearrange meshes’ UVs
  • Paint onto textures
  • Create textures other than PNGs
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using UnityEditor;
using System.IO;

namespace MattrifiedGames.Assets.TextureHelpers.Editor
{
    /// <summary>
    /// Editor window for quickly swapping, rearranging, and other things to textures in Unity3D.
    /// </summary>
    public class QuickTextureEditor : EditorWindow
    {
        /// <summary>
        /// A list of the current affected textures.
        /// </summary>
        List<TextureInformation> texturePositionList;

        /// <summary>
        /// If true, the new texture's size will be forced to the nearest power of two.
        /// </summary>
        bool forcePowerOfTwo = false;

        /// <summary>
        /// Width of the new texture.
        /// </summary>
        int newTexWidth = 512;

        /// <summary>
        /// Height of the new texture.
        /// </summary>
        int newTexHeight = 512;

        /// <summary>
        /// The name of the new texture to be created.
        /// </summary>
        string newTextureName = "New Texture";

        /// <summary>
        /// Operations affecting different channels.
        /// </summary>
        public enum ChannelOperations
        {
            Ignore = 0,
            Set = 1,
            Add = 2,
            Subtract = 3,
            Multiply = 4,
            Divide = 5,
        }

        public struct ChannelBlendSetup
        {
            public ChannelOperations rCU, gCU, bCU, aCU;
        }

        /// <summary>
        /// Information about each texture being used to create the new texture.
        /// </summary>
        internal class TextureInformation
        {
            /// <summary>
            /// The texture being used.
            /// </summary>
            public Texture2D texture;

            /// <summary>
            /// The x and y position within the new texture.
            /// </summary>
            public int xPos, yPos;

            /// <summary>
            /// The width and height of this texture.
            /// </summary>
            public int width, height;

            /// <summary>
            /// Should a blend color be used?
            /// </summary>
            public ChannelOperations blendColorUse = ChannelOperations.Ignore;

            /// <summary>
            /// The color to be blended with the texture.
            /// </summary>
            public Color blendColor;

            public ChannelBlendSetup rBS = new ChannelBlendSetup() { rCU = ChannelOperations.Set },
                gBS = new ChannelBlendSetup() { gCU = ChannelOperations.Set },
                bBS = new ChannelBlendSetup() { bCU = ChannelOperations.Set },
                aBS = new ChannelBlendSetup() { aCU = ChannelOperations.Set };

            public void OnGUI(string label, ref int refWidth, ref int refHeight)
            {
                if (texture != null)
                    label = texture.name;

                texture = (Texture2D)EditorGUILayout.ObjectField(label, texture, typeof(Texture2D), false);

                if (GUILayout.Button("Set as new texture size."))
                {
                    refWidth = width;
                    refHeight = height;
                }

                if (texture == null)
                {
                    Vector2 s = new Vector2(width, height);
                    s = EditorGUILayout.Vector2Field("Size", s);
                    width = Mathf.Max(1, Mathf.RoundToInt(s.x));
                    height = Mathf.Max(1, Mathf.RoundToInt(s.y));
                }
                else
                {
                    width = texture.width;
                    height = texture.height;
                }

                blendColorUse = (ChannelOperations)EditorGUILayout.EnumPopup("Blend Color Usage", blendColorUse);
                if (blendColorUse != ChannelOperations.Ignore)
                    blendColor = EditorGUILayout.ColorField(blendColor);
                else
                    blendColor = Color.white;

                Vector2 v = new Vector2(xPos, yPos);
                v = EditorGUILayout.Vector2Field("Pos", v);
                xPos = Mathf.RoundToInt(v.x);
                yPos = Mathf.RoundToInt(v.y);

                ChangeBlendSetup("R", ref rBS, Color.red);
                ChangeBlendSetup("G", ref gBS, Color.green);
                ChangeBlendSetup("B", ref bBS, Color.blue);
                ChangeBlendSetup("A", ref aBS, Color.white);
            }

            private void ChangeBlendSetup(string p, ref ChannelBlendSetup bS, Color guiColor)
            {
                GUI.color = guiColor;
                EditorGUILayout.LabelField(p);
                GUI.color = Color.white;
                bS.rCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.rCU);
                bS.gCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.gCU);
                bS.bCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.bCU);
                bS.aCU = (ChannelOperations)EditorGUILayout.EnumPopup(bS.aCU);
            }

            internal void EditColor(ref Color colorOutput, ref Color colorInput)
            {
                EditChannel(ref colorOutput.r, ref colorInput, rBS);
                EditChannel(ref colorOutput.g, ref colorInput, gBS);
                EditChannel(ref colorOutput.b, ref colorInput, bBS);
                EditChannel(ref colorOutput.a, ref colorInput, aBS);
            }

            private void EditChannel(ref float outputValue, ref Color inputColor, ChannelBlendSetup bs)
            {
                EditChannel(ref outputValue, ref inputColor.r, bs.rCU);
                EditChannel(ref outputValue, ref inputColor.g, bs.gCU);
                EditChannel(ref outputValue, ref inputColor.b, bs.bCU);
                EditChannel(ref outputValue, ref inputColor.a, bs.aCU);
            }

            private void EditChannel(ref float output, ref float input, ChannelOperations channelUsage)
            {
                switch (channelUsage)
                {
                    case ChannelOperations.Set:
                        output = input;
                        break;
                    case ChannelOperations.Add:
                        output += input;
                        break;
                    case ChannelOperations.Divide:
                        output /= input;
                        break;
                    case ChannelOperations.Multiply:
                        output *= input;
                        break;
                    case ChannelOperations.Subtract:
                        output -= input;
                        break;
                    case ChannelOperations.Ignore:
                    default:
                        break;
                }
            }
        }

        // Adds "Quick Texture Editor" to the Tools menu.
        [MenuItem("Tools/Quick Texture Editor")]
        static void Init()
        {
            // Get existing open window or if none, make a new one:
            QuickTextureEditor window = (QuickTextureEditor)EditorWindow.GetWindow(typeof(QuickTextureEditor));
            window.Show();
        }

        /// <summary>
        /// On GUI function that displays information in the editor.
        /// </summary>
        void OnGUI()
        {
            OnGUICombineTextures();
        }

        /// <summary>
        /// Quickly gets the importer of a specified asset.
        /// </summary>
        /// <typeparam name="T">The type of importer to be used.</typeparam>
        /// <param name="asset">The asset whose importer is being referenced.</param>
        /// <returns>The importer, converted to the requested type.</returns>
        private T GetImporter<T>(UnityEngine.Object asset) where T : AssetImporter
        {
            return (T)AssetImporter.GetAtPath(AssetDatabase.GetAssetPath(asset));
        }

        private void SetupList<T>(ref List<T> list, int p)
        {
            if (list == null)
                list = new List<T>();
            while (list.Count <= p)
                list.Add(default(T));
        }

        private T GetFromList<T>(ref List<T> list, int p)
        {
            SetupList(ref list, p);
            return list[p];
        }

        private void DefineTexturePose(int index)
        {
            SetupList(ref texturePositionList, index);
            if (texturePositionList[index] == null)
                texturePositionList[index] = new TextureInformation();
            texturePositionList[index].OnGUI("Texture " + index, ref newTexWidth, ref newTexHeight);
        }

        private static Color DivideColor(Color c)
        {
            return new Color(1f / c.r, 1f / c.g, 1f / c.b, 1f / c.a);
        }

        Vector2 scroll;

        private void OnGUICombineTextures()
        {
            // Defines information about the new texture.
            newTextureName = EditorGUILayout.TextField("New Texture Name", newTextureName);
            forcePowerOfTwo = EditorGUILayout.Toggle("Force Power of 2", forcePowerOfTwo);
            if (forcePowerOfTwo)
            {
                newTexWidth = Mathf.ClosestPowerOfTwo(EditorGUILayout.IntField("Width", newTexWidth));
                newTexHeight = Mathf.ClosestPowerOfTwo(EditorGUILayout.IntField("Height", newTexHeight));
            }
            else
            {
                newTexWidth = EditorGUILayout.IntField("Width", newTexWidth);
                newTexHeight = EditorGUILayout.IntField("Height", newTexHeight);
            }

            scroll = EditorGUILayout.BeginScrollView(scroll);
            if (texturePositionList == null)
                texturePositionList = new List<TextureInformation>();
            for (int i = 0; i < texturePositionList.Count; i++)
                DefineTexturePose(i);
            EditorGUILayout.EndScrollView();

            if (GUILayout.Button("Add Texture"))
                texturePositionList.Add(new TextureInformation());
            if (GUILayout.Button("Remove Texture"))
                texturePositionList.RemoveAt(texturePositionList.Count - 1);
            if (GUILayout.Button("Save Texture"))
            {
                int textureCount = texturePositionList.Count;
                Texture2D newTex = new Texture2D(newTexWidth, newTexHeight);
                newTex.name = string.IsNullOrEmpty(newTextureName) ? "New Texture" : newTextureName;
                List<TextureInformation> pulledTextures = new List<TextureInformation>();

                for (int i = 0; i < textureCount; i++)
                {
                    TextureInformation pos = GetFromList(ref texturePositionList, i);
                    if (pos == null)
                        continue;

                    if (pos.texture == null)
                    {
                        // No texture assigned, so create a solid-color texture from the blend color.
                        pos.texture = new Texture2D(pos.width, pos.height);
                        pos.texture.name = "Texture " + i;
                        Color[] c = new Color[pos.width * pos.height];
                        for (int j = 0; j < c.Length; j++)
                            c[j] = pos.blendColor;
                        pos.texture.SetPixels(c);
                        pos.texture.Apply();
                    }

                    if (pos.texture.width + pos.xPos > newTex.width ||
                        pos.texture.height + pos.yPos > newTex.height)
                    {
                        Debug.LogWarning(pos.texture.name + " will not fit into new texture.  Skipping.");
                    }
                    else
                    {
                        pulledTextures.Add(pos);
                    }
                }

                for (int i = 0; i < pulledTextures.Count; i++)
                {
                    EditorUtility.DisplayProgressBar("Saving Texture", "Working on Texture " + i, (i + 1f) / pulledTextures.Count);

                    // Temporarily make the texture readable (and not a normal map) so GetPixels works.
                    // Generated textures have no importer, so ti may be null.
                    TextureImporter ti = GetImporter<TextureImporter>(pulledTextures[i].texture);
                    bool wasReadable = true;
                    bool wasNormal = false;
                    if (ti != null)
                    {
                        wasReadable = ti.isReadable;
                        wasNormal = ti.normalmap;
                        if (!wasReadable || wasNormal)
                        {
                            ti.isReadable = true;
                            ti.normalmap = false;
                            ti.SaveAndReimport();
                        }
                    }

                    Color[] pulledColors = pulledTextures[i].texture.GetPixels();
                    if (pulledTextures[i].blendColorUse != ChannelOperations.Ignore)
                    {
                        for (int c = 0; c < pulledColors.Length; c++)
                        {
                            switch (pulledTextures[i].blendColorUse)
                            {
                                case ChannelOperations.Set:
                                    pulledColors[c] = pulledTextures[i].blendColor;
                                    break;
                                case ChannelOperations.Add:
                                    pulledColors[c] += pulledTextures[i].blendColor;
                                    break;
                                case ChannelOperations.Divide:
                                    // Dividing is done by multiplying by the reciprocal color.
                                    pulledColors[c] *= DivideColor(pulledTextures[i].blendColor);
                                    break;
                                case ChannelOperations.Multiply:
                                    pulledColors[c] *= pulledTextures[i].blendColor;
                                    break;
                            }
                        }
                    }

                    Color[] colorsToModify =
                        newTex.GetPixels(pulledTextures[i].xPos, pulledTextures[i].yPos, pulledTextures[i].texture.width, pulledTextures[i].texture.height);

                    // Edits these colors instead of simply setting them.  Slower, but allows channels to be combined.
                    for (int c = 0; c < colorsToModify.Length; c++)
                        pulledTextures[i].EditColor(ref colorsToModify[c], ref pulledColors[c]);

                    newTex.SetPixels(pulledTextures[i].xPos, pulledTextures[i].yPos, pulledTextures[i].texture.width, pulledTextures[i].texture.height,
                        colorsToModify);

                    // Restore the original import settings.
                    if (ti != null && (!wasReadable || wasNormal))
                    {
                        ti.isReadable = wasReadable;
                        ti.normalmap = wasNormal;
                        ti.SaveAndReimport();
                    }
                }

                newTex.Apply();
                SaveTexture(newTex);
                EditorUtility.ClearProgressBar();
                AssetDatabase.Refresh();
            }
        }

        void SaveTexture(Texture2D texture2D)
        {
            byte[] bytes = texture2D.EncodeToPNG();
            File.WriteAllBytes(Application.dataPath + "/" + texture2D.name + ".png", bytes);
        }
    }
}

If you use the script, credit would be nice. If you have any questions, feel free to ask here or on my twitter.