Procedural Climbing in Unity

By: Brian - UpRoom Games founder

Free Technical Demo Download

uproomgames.itch.io/procedural-climbing-tech-demo

Update (11/15/2021)

You can check out our dev log about the game this system is being used in here

Introduction

This dev-blog (not really a “log” but very “dev”) is about something very near and dear to my heart, and in some ways it has been almost 9 years in the making. Back in 2012 I played my very first Assassin’s Creed game with the release of Assassin’s Creed III. While I had played many third person adventure games before, I was absolutely floored by the traversal system in that game. The movements felt so fluid, and the structures you could climb were so varied, that it almost seemed impossible. Being relatively new to programming, I obviously decided that it would be a good idea to try to create my own climbing and traversal system to rival the one in AC3. This was stupid. Unfortunately for me, I wasn’t experienced enough to know just how stupid, so I went on ahead and downloaded the free version of Unity (back in the day there were separate versions of the engine). Since I “already knew how to program,” I figured that I just needed to slap some code into a game engine and hey presto, I’m basically already done. This was not the case. However, after many, many hours of trying to learn the engine as well as animation, game physics and a host of other things, I finally had my prototype.

I tried to find this prototype for this post, but unfortunately (or fortunately, I guess) it is lost to time, so I’ll do my best to describe it. What I wound up with was a single scene with a humanoid character model and a third person camera (so far so good). There was also a cube of a very specific size, and if you ran up to the cube and pressed a button you would sort of slide up the surface while performing some sort of mantling animation from a raw mocap data set Unity used to provide. I didn’t really know how to properly retarget the animation, so only part of the character would actually animate, and maybe 40% of the time you would actually wind up on the cube; the other 60% of the time you would sort of slide back down to the ground after the animation completed. Basically a 100% success, definitely better than Assassin’s Creed.

As terrible as it was, this prototype didn’t discourage me nearly as much as it should have. I felt much more confident that I really understood the problems involved in creating a climbing system, certain my next attempt would knock it out of the park. It didn’t. In fact, the next 3 attempts had major issues, so much so that I wouldn’t call them successful in any way other than as learning experiences. I want to go over these intermediate systems, since I think the concepts involved are important and the attempts are pretty representative of the approaches a lot of people seem to take when making their own climbing systems. If you just want to get into the technical breakdown of the procedural system in the demo, feel free to jump directly to The Procedural One.

 

The Target Matching One

In Unity, there is a nifty function that allows you to modify the underlying motion of an animation at runtime so that part of a humanoid avatar will wind up at a certain position at a certain time. This was (I assumed) the secret sauce that was missing from my first climbing system. If I could target match, say, my character's right hand to be on the top of a cube when playing my “climb” animation, I could climb a cube of any height! I also probably wouldn’t slip off once I got to the top, since my character would be in the right location. It turns out I was mostly right, but there were a few key things I didn’t understand. The first was how target matching probably worked behind the scenes, which turns out to be pretty important. What I didn’t realize is that target matching isn’t magic. This probably sounds silly, but I think a lot of new game developers get to a point where certain engine functions seem like a magic bullet that will just “fix” problems without having to think about them. Modern engines are getting to a point where this can almost be true, which is a bit of a double-edged sword in terms of learning, but “luckily” for me this was much less the case in 2013. Because of this “magic bullet” belief, I basically took my existing “climbing” system and added a target match from the start of the animation to the end, telling the right hand to wind up at a specific point on the top of the cube. This looked horrible, since the climbing animation had a lot of wind-up, and with the target match running the entire time the character just started sliding up the cube even more obviously than before. Figuring this out, I modified the target match to only run from the frame when the character’s feet left the ground up until the right hand hit the ledge. This worked better, but not as well as I’d hoped. The character would jump much higher if they needed to, but the motion was really odd, and the character often still slipped off the ledge anyway.
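For reference, the function in question is Animator.MatchTarget. A minimal usage sketch looks something like this; the ledge detection and the normalized times here are placeholders of my own, not values from the original project:

// Somewhere in Update, after your own detection logic (raycasts, triggers, etc.)
// has produced a ledgePoint to climb to - that part is assumed here.
Animator animator = GetComponent<Animator>();
AnimatorStateInfo state = animator.GetCurrentAnimatorStateInfo(0);

if (state.IsName("Climb") && !animator.IsInTransition(0) && !animator.isMatchingTarget)
{
    // Pull the right hand to the ledge between 30% and 60% of the clip,
    // matching position fully and ignoring rotation.
    animator.MatchTarget(
        ledgePoint,
        Quaternion.identity,
        AvatarTarget.RightHand,
        new MatchTargetWeightMask(Vector3.one, 0f),
        0.3f,   // normalized start time
        0.6f);  // normalized end time
}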

To figure out why this happens, we need to actually dig a bit into how target matching works. First, we need to have an animation with root motion baked in, since we are modifying that root motion to generate a new trajectory for our animation. Once we have this, all we are doing is identifying a specific bone in our animation rig (in this case the right hand bone) and making sure that transform winds up at a certain world space point by a certain time in the animation. So how does that actually work? Well, we aren’t directly modifying the motion of the hand, since that would distort the animation file. Instead we are moving the character “root” to be in the correct relative position to the hand at a specified time. If we only look at the last frame of the target match, this is pretty simple: we calculate the relative transform between the hand and the root in the base animation, apply that transformation to the world space target we are matching to, and then set our animation root to be at that world space position and orientation. However, in order to make this transition smooth we need to look at more than the last frame, and this is where things get funky. Let’s take a look at the actual raw mocap data I used for this particular implementation. I’ve outlined the right hand in purple; you can see how much motion that hand goes through in a couple snapshots of the climb, and the relative position to the center of the character (what we’ll use as the root for this example) is not constant.

[Image: Sequence-2.png - snapshots of the climb mocap with the right hand outlined in purple]

This makes target matching a bit trickier, since any relative motion we apply to the root won’t always be correct based on how the target hand is moving. I’m not sure exactly how Unity handles this, but what I can say is that, intuitively, the more the relative position between your target and your root changes over the target match period, the stranger and less fluid the animation looks. This transform issue plagued me for a while before I finally understood the problem, although it paled in comparison to the second problem I encountered. To illustrate it, have a look at a few frames from the climbing animation I was using, with the wall outlined in green at the start of the climb and in red at the end.

[Image: Sequence-1.png - frames from the climbing animation, with the wall outlined in green at the start and in red at the end]

Clearly, the wall doesn’t actually move. This is one of the issues with using raw mocap data for character animation: it’s messy. In this case, the transformation of the character is way off, sliding backwards when climbing up. This sliding is what was really messing up my climb, since I was just using the root motion of the climb to transform the character, so sometimes the base collider on my character would still catch the ledge and sometimes it wouldn’t. The real lesson here is to make sure your animations are clean and doing what you want. If that isn’t an option, however, we can be a little more clever with our target matching and make sure we stick to the ledge when we climb up. If we split into two separate target match events, we can have one event that matches our hand to a point on the ledge (starting from a place where the relative transform between the hand and root stays relatively constant), and then a second, much shorter event which matches our foot to the same point (or one close by). If we pick our moments well, we can have a very small change in relative transform between target and root and just “nudge” the character up a bit higher or lower on a ledge grab, and nudge them a bit further forward (or back) when they climb up.

Putting this all together, the system worked decently well for climbing on top of cubes of varying height. The big problem, however, was that I had created a mantling system, not a climbing system. And furthermore, the world isn’t made of cubes (usually). This meant that I had to start digging a little deeper with my next iteration of the system.
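In code, the two-event approach looks roughly like the snippet below. The targets and normalized times are illustrative only, and since MatchTarget supports one active match at a time, the second event has to be issued after the first completes:

// Event 1: match the right hand to the ledge over a window where the
// hand-to-root relative transform stays fairly constant.
animator.MatchTarget(ledgeHandPoint, Quaternion.identity, AvatarTarget.RightHand,
    new MatchTargetWeightMask(Vector3.one, 0f), 0.25f, 0.45f);

// Event 2, issued once the first match finishes: a short "nudge" that matches
// the right foot near the same point so the character plants on the ledge
// instead of sliding back off.
animator.MatchTarget(ledgeFootPoint, Quaternion.identity, AvatarTarget.RightFoot,
    new MatchTargetWeightMask(Vector3.one, 0f), 0.80f, 0.90f);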

 

The Ledge Grabbing One

A few years later I decided to try to tackle this problem yet again. Armed with a bunch of excellent free animations from Mixamo (which included a number of ledge jumps and climbs) as well as a nice placeholder body mesh, I thought I finally had the tools to tackle this head on. Combining target matching with complex Animator state machines, I thought I had finally cracked the problem. This was sort of correct; however, despite the quality of the animations I was using, I ultimately needed orders of magnitude more than the 20 or so I managed to get in order to handle the range of motion I wanted. The approach I took was this: have a large animation state for a static ledge grab, which would transition to lateral or vertical movement along the ledge, with animations for jumping up, left, right and down to other holds. I would use target matching to make the hands and root move to the right spot through the animation lifecycle in order to handle gaps and distances that were not explicitly animated. This worked... alright, but the issue with simply target matching is that it can look very floaty, since no part of the animation is modified, just the root position. What I was missing (but didn’t know at the time) was Inverse Kinematics. However, even with a solid IK system there are a LOT of different transitional states and micro animations needed to cover the full range of motion appropriately with this approach. I don’t want to spend a lot of time on this approach since I do think it is a viable way to solve this problem, but I didn’t have the resources to make it work, and when you don’t, it winds up feeling extremely underwhelming. So let’s dig into the main course.

 

The Procedural One

The biggest limiting factor I faced when designing this system was that I’m not an animator. I had recently watched this excellent GDC talk about (among other things) the climbing system in Assassin’s Creed III, and while there is a lot of really good information presented there, at the end of the day there was a team of animators creating a lot of transitional animations and animation variations to handle large portions of the climbing. I had already seen that just using some freely available (and some paid) animations wasn’t really enough to create a system that felt fluid or stable, so the prospect of using many, many more animations wasn’t that appealing. Luckily, around this same time I also happened across another excellent GDC talk from Wolfire Games. This talk focused on approachable procedural animation and Inverse Kinematics for character locomotion, including ledge climbing. The use of simple spring simulations and animation curves to create so much dynamic motion made me believe that maybe it was possible to take those same concepts further and completely redesign my climbing system around them.

While it may not be the best decision for most published games, I decided that I wanted a climbing system that would handle almost any environment, rather than one that only worked within limited parameters. This is largely because at the time I wasn’t confident in my level design ability or my animation skills, which meant that crafting a careful environment was likely to be much more effort than making a dynamic climbing and traversal system to handle whatever environment I threw at it. With that in mind, here are the parameters I came up with for the climbing system:

  1. Ledge grabbing at any height

  2. Ledge traversal in all reasonable directions

  3. Ledge jumps in all reasonable directions

  4. Hanging and traversing ledges without footholds

  5. Climbing on moving surfaces

  6. Climbing on moving handholds

  7. Climbing on rigidbodies for physics simulation

  8. Surface climbing (for example, rock faces that you can just “traverse” rather than looking for specific hand holds)

With all these requirements it was pretty clear that I needed an almost entirely procedural system, since linking thousands of small animations together just isn’t an option if you want truly dynamic movement. The obvious solution when you want to modify animations to fit multiple scenarios is Inverse Kinematics. Basically, if you have a pose and you want an “effector” (hand, foot, face, whatever) to be in a specific location at a specific time in the pose timeline, you can do some trigonometry to figure out what angles the joints need to bend at to move that effector there at the right time. In more complex systems, you can take musculature into account and have effectors “pull” or “push” other effectors in a chain. This requires a bit more math to solve, since there are several valid solutions, but the results are much more dynamic and believable.
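To make the trigonometry concrete, here is the core of an analytic two-bone solve (the kind you’d use for an arm or a leg), using the law of cosines. This is a generic illustration of the technique, not any particular library’s internals:

// Given upper-arm length a, forearm length b, and a hand target, compute the
// interior elbow angle that puts the hand exactly at the target distance.
static float SolveElbowAngle(float a, float b, Vector3 shoulder, Vector3 target)
{
    // Clamp the reach so the triangle inequality always holds (no solution otherwise).
    float d = Mathf.Clamp(Vector3.Distance(shoulder, target), Mathf.Abs(a - b) + 0.001f, a + b - 0.001f);
    // Law of cosines: d^2 = a^2 + b^2 - 2ab*cos(elbow)
    float cosElbow = (a * a + b * b - d * d) / (2f * a * b);
    return Mathf.Acos(Mathf.Clamp(cosElbow, -1f, 1f)) * Mathf.Rad2Deg;
}

The shoulder is then rotated to aim the chain at the target, and the remaining degree of freedom (the swivel of the elbow around the shoulder-to-target axis) is what gives different solvers their character.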

To be frank, I had no interest in writing a full body inverse kinematics system, so for this project I did some research and purchased FinalIK from the Unity Asset Store. I have now used this system for years and I genuinely can’t recommend it enough; it is the lifeblood behind the procedural motion I accomplished for the climbing system and has worked flawlessly no matter how hard I have pushed it. With an IK system in hand, I began trying to figure out how to define climbing motions by moving various effectors. After a lot of experimentation, I came up with a 5 effector system: RIGHT_HAND, LEFT_HAND, RIGHT_FOOT, LEFT_FOOT, and ROOT. Each effector also has three position states: LAST, CURRENT and NEXT. Basically, the IK system tries to match the position of the CURRENT effector, and the CURRENT effector interpolates between the LAST and NEXT positions with some amount of smoothing. The ROOT effector defines the character position and is derived from the CURRENT positions of the hands, also with some offsets and smoothing. Let’s look at a snippet from the code that sets the effector position to see how this works:

bool reachedDestination = false;
float afterTime = ikEventTimes[effector];
afterTime += Time.deltaTime;
Vector3 newPosition;

// delay and lag are used somewhat similarly; the important idea is sometimes we want
// to speed up or slow down an effector. We sometimes also want to delay the start of
// interpolation but still reach the goal in time.
float delay = ikEffectorDelayTime[effector];
time = time - delay + ikEffectorLagTime[effector];
float adjustedAfterTime = Mathf.Clamp(afterTime - delay, 0f, time);
float adjustedAfterTimeNoDelay = Mathf.Clamp(afterTime, 0f, time + delay);

// this effector has reached the target
if (adjustedAfterTime >= time)
{
    newPosition = ikTargets[effector][NEXT].transform.position;
    reachedDestination = true;
    character.SetParentTransform(ikTargets[effector][NEXT].transform.parent);
    ikEventTimes[effector] = 0f;
    ikTargets[effector][LAST].transform.position = newPosition;
    ikTargets[effector][LAST].transform.rotation = ikTargets[effector][NEXT].transform.rotation;
    ikTargets[effector][LAST].transform.parent = ikTargets[effector][NEXT].transform.parent;
}
else
{
    // EaseLerp is a custom linear interpolation function with smoothing applied
    // at the beginning and end of the motion.
    newPosition = EaseLerp(ikTargets[effector][LAST].transform.position,
        ikTargets[effector][NEXT].transform.position, adjustedAfterTime / time);
    ikEventTimes[effector] = afterTime;
}

// All effectors (HAND, FOOT, ROOT) have a custom offset in the interpolation that is
// determined by the effector type and the current motion.
newPosition += GetOffsetVectorForEffector(effector, Mathf.Clamp((adjustedAfterTime / time) - 0.5f, -0.5f, 0.5f));
Quaternion newRotation = Quaternion.Slerp(ikTargets[effector][LAST].transform.rotation,
    ikTargets[effector][NEXT].transform.rotation, adjustedAfterTime / time);

// Rotation matching is weakest mid-motion and depends on the climb and transition state.
ikRotationWeights[effector][CURRENT] = (1f - Mathf.Sin((adjustedAfterTime / time) * Mathf.PI)) *
    ((currentTransitionState == TransitionState.NONE) ? (currentClimbType == ClimbingType.FREE ? 0.75f : 0.1f) : 1f);

// This code is just modifying the "stretch" of the legs during a jump.
if (effector.Contains("FOOT") && this.jumpingDuringClimb)
{
    float reachWeight = 1f - Mathf.Sin((adjustedAfterTimeNoDelay / (time + delay)) * Mathf.PI);
    if (effector.Contains("RIGHT"))
    {
        reachWeight *= rightLegMaxReach;
        currentRightFootReach = reachWeight;
    }
    else
    {
        reachWeight *= leftLegMaxReach;
        currentLeftFootReach = reachWeight;
    }
}

// This actually tells the IK system to set the effector target to the CURRENT transform position.
SetIKTarget(effector, CURRENT, newPosition, newRotation, ikTargets[effector][NEXT].transform.parent);
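EaseLerp itself isn’t shown above; an assumed, minimal implementation would be a smoothstep-eased lerp, along these lines:

// Linear interpolation with smoothing applied at the beginning and end of the motion.
// (An assumed implementation - any ease-in/ease-out remap of t would do.)
Vector3 EaseLerp(Vector3 from, Vector3 to, float t)
{
    t = Mathf.Clamp01(t);
    float eased = t * t * (3f - 2f * t); // smoothstep
    return Vector3.Lerp(from, to, eased);
}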

It may be surprising to hear, but in the end, after a lot of experimentation, all the traversal, jumping to hand holds, and ledge hanging are derived from two key frames of animation: one with the feet planted on a wall and one with the legs dangling freely. All the climbing motion is generated through inverse kinematics, simulated harmonic oscillation, layered offset functions and multi-part pendulum simulations. Here are the actual animation frames that make up the entirety of the animated motion when climbing (the exceptions being traditional animations for climbing up off a surface, jumping off a climb hold and running up a wall):

Braced Climbing Key Frame

Free Hanging Key Frame

So how does all this motion get generated? The effectors can interpolate, but what about the body? The sway of the character as they climb, or the impact cushioning they perform after a jump? This motion is largely accomplished through simulated damped harmonic oscillation. Basically, picture a pendulum in place of the character body. The pendulum is connected to an anchor placed at the average of the hand positions, and is also attached to several springs: one connected to the wall the character is climbing, and others attached to the sides of the pendulum mass, preventing it from swinging side to side too much.

[Images: Braced-Annotated.png, Baced-2-Annotated.png - the braced climbing pendulum and spring configuration, annotated]

You could accomplish this with an invisible mass and actual spring joints in Unity; however, I found the stability to be somewhat dubious, and I wanted more control over how the forces were applied. For this reason I used a simple damped harmonic oscillation simulation that runs in the FixedUpdate loop, with varying angular velocity and k-factor parameters. Each simulation just returns a single number, an offset, and the application of that offset is determined by a separate direction parameter. With this design, I have a system that oscillates numbers with various degrees of damping, and I can use the output both to linearly move the character root and to rotate it around the invisible pivot of the pendulum that models the character. The oscillators are simply stored in a dictionary by key, so they can be updated by name or modified as needed. The sum total of the linear and angular offsets from the oscillators is applied to the character root after the root position is derived from the RIGHT_HAND and LEFT_HAND effectors. The total position and offset code looks something like this:

// Start with a static offset from the hands based on what looks good while climbing - can really be anything.
Vector3 GetCharacterOffset()
{
    Vector3 forward = transform.forward;
    Vector3 handAvg = (ikTargets[LEFT_HAND][CURRENT].transform.position + ikTargets[RIGHT_HAND][CURRENT].transform.position) / 2f;
    float handDiff = Mathf.Abs((ikTargets[LEFT_HAND][CURRENT].transform.position - ikTargets[RIGHT_HAND][CURRENT].transform.position).y);
    if (currentClimbingType == ClimbingType.FREE)
    {
        return handAvg - (forward * 0.1f) - (Vector3.up * 2.05f);
    }
    else
    {
        return handAvg - (forward * 0.5f) - (Vector3.up * (1.3f - (handDiff / 4f)));
    }
}

// The root position starts from the static offset, then each active spring adds its oscillation.
// The demo uses 6 different climbing springs, although they aren't all active at the same time.
// Besides the ones mentioned, there are springs for impacting a wall, and for a "wind-up"
// before jumping to hard-to-reach holds.
Vector3 position = GetCharacterOffset();
foreach (string s in climbingSprings)
{
    position += animationUtils.GetOscillatorValue(s) * animationUtils.GetOscillatorDirection(s);
}
transform.position = position;

// The pivot is based on the "next" position to keep rotations a bit more stable. This also gives
// the character a bit more intention with their motion, since they are reacting around where
// they want to go, not where they are.
Vector3 pivot = (ikTargets[RIGHT_HAND][NEXT].transform.position + ikTargets[LEFT_HAND][NEXT].transform.position) / 2f;

// The demo uses two rotational pendulum springs.
foreach (string st in pendulumSprings)
{
    Vector3 axis = Vector3.Normalize(Vector3.Cross(animationUtils.GetOscillatorDirection(st), Vector3.up));
    float angle = animationUtils.GetOscillatorValue(st) * (180f / Mathf.PI);
    transform.RotateAround(pivot, axis, angle);
}
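The oscillators behind GetOscillatorValue aren’t shown here, but each one boils down to a scalar damped spring stepped in FixedUpdate. A minimal reconstruction (my own sketch, using the angular velocity and k-factor parameters mentioned above) might look like this:

// One scalar damped harmonic oscillator. GetOscillatorValue returns 'value', and a
// separate direction vector decides how that offset is applied to the character.
class Oscillator
{
    public float value;         // current offset
    public float velocity;      // current rate of change
    public float k = 40f;       // spring stiffness ("k-factor")
    public float damping = 6f;  // damping coefficient

    // Step from FixedUpdate. The rest target is 0, so offsets decay away over time;
    // impulses (landing, wind-up, wall impact) are injected by kicking the velocity.
    public void Step(float dt)
    {
        float accel = (-value * k) - (velocity * damping);
        velocity += accel * dt; // semi-implicit Euler keeps the simulation stable
        value += velocity * dt;
    }

    public void Kick(float impulse) { velocity += impulse; }
}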

As the positioning code shows, there are two distinct climbing states, “BRACED” and “FREE,” which depend on whether or not there are foot holds. That positioning code works as-is for the BRACED climb state, while the FREE state has an additional step applied in the LateUpdate loop in Unity. This additional step is what allows the character to swing back and forth believably when grabbing a ledge with no foot holds.

Pendulum configuration

This time we actually do simulate the pendulum, and not just any pendulum: a two-mass connected pendulum. I wanted to be able to simulate this the same way the oscillators were simulated, which is to say deterministically within a certain timestep. Unfortunately, simulating multi-part pendulums is actually pretty hard, so I copped out and just made an invisible pendulum fixed to where the character is, with one mass near the upper chest and one near the hips. The character is then “influenced” by a script to move along with the motion of the pendulum, without being directly attached to it. By actually attaching the pendulum to whatever geometry we are climbing on, secondary motion from the level geometry translates through the pendulum into the character, so climbing on moving objects looks more believable. Here is a look at what the pendulum controller in the demo is doing, since I feel like the description alone is lacking something:

// The amount which the character is influenced by the movement of the pendulum [0-1]
public float pendulumWeight = 0.4f;
// the root bone
public Transform root;
// upper spine bone reference
public Transform spine;
// reference to a prefab of a multi-part pendulum with some helper functions to extract the masses etc.
public CharacterPendulum pendulumRef;
// whether the pendulum-follow behavior is currently active
private bool usePendulum = false;
// timing parameters related to exiting from the pendulum follow state
private float timeToExit = 0f;
private float currentExitTime = 0f;
private bool exiting = false;

void LateUpdate()
{
    if (usePendulum)
    {
        // weight may be modified from the average pendulum weight if we are interpolating out of a FREE climb.
        float modifiedWeight = pendulumWeight;
        if (this.exiting)
        {
            modifiedWeight = pendulumWeight * Mathf.Clamp(1f - (currentExitTime / timeToExit), 0f, 1f);
            currentExitTime += Time.deltaTime;
        }
        // The root is the character hips.
        root.position = Vector3.Lerp(root.position, pendulumRef.mass1.transform.position, modifiedWeight);
        root.rotation = Quaternion.Slerp(root.rotation, pendulumRef.mass1.transform.rotation, modifiedWeight);
        // The spine is a reference to a bone in the animation rig, roughly upper-chest.
        spine.rotation = Quaternion.Slerp(spine.rotation, pendulumRef.mass2.transform.rotation, modifiedWeight / 2f);
        if (this.exiting && currentExitTime >= timeToExit)
        {
            this.StopPendulum();
        }
    }
}

*** It’s important to know that the Script Execution Order for this project needs to be modified so that the pendulum update happens after the updates to character position in the climbing system. I actually found it helpful to give almost every script a specific relative order for this project, since the order of operations is extremely important.
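This can be done under Edit > Project Settings > Script Execution Order, or directly in code with Unity’s DefaultExecutionOrder attribute. The class names here are illustrative:

// Run the climbing system before the pendulum follower so the pendulum always
// sees this frame's character position. Only the relative order of the values matters.
[DefaultExecutionOrder(-50)]
public class ClimbingSystem : MonoBehaviour { /* positions the character */ }

[DefaultExecutionOrder(50)]
public class PendulumFollower : MonoBehaviour { /* the LateUpdate shown above */ }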

We can further combine these elements to create much more exaggerated motion, such as jumping to difficult-to-reach hand holds, using the exact same principles. Rather than reaching a hand and a foot to a hold, we are reaching both hands and both feet simultaneously. This is where the ROOT effector really comes into play. Basically, we can calculate the position the character would be in using the position calculation code above and set that as our NEXT ROOT position state. We can then figure out how long the jump will take to execute based on the distance, and “reach” the ROOT effector the same way we reach any other effector, with delay and easing that simulates a jump. The biggest thing to remember when figuring out the correct delay and effector acceleration is compression and expansion: compressing before the jump (using a climb spring), stretching during the jump (keeping the hand and foot effectors far apart), and compressing again on landing (another climb spring).
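Pulling those pieces together, scheduling such a jump might look roughly like this. It reuses names from the effector snippet earlier; the helper, the speed constant, and the delay value are my own assumptions:

// Compute where the root would end up at the target holds (the positioning code above, assumed
// to be wrapped in a helper here), then reach the ROOT effector there like any other effector.
Vector3 jumpRoot = ComputeRootPositionFor(targetHolds);
float jumpTime = Vector3.Distance(transform.position, jumpRoot) / jumpSpeed; // longer jumps take longer

SetIKTarget(ROOT, NEXT, jumpRoot, targetRotation, targetParent);
ikEffectorDelayTime[ROOT] = 0.1f; // slight delay so the limbs lead the body out of the crouch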

While there are a lot of other small aspects to the climbing system, they all rely on the same principles and systems, and are just small tweaks to, for example, make the character bob up and down as they move, or extend one hand rather than the other. These small pieces add up in terms of code and can be a pain to set up, but they also add up to believable motion.

 

Placing Handholds

Now that we’ve seen how to move between climb holds, it’s time to figure out what a climb hold actually is. As might be apparent from some of the snippets above, the climb holds are actually GameObjects which exist in our scene. This was done to take advantage of the Transform hierarchy: holding a reference to a hold’s transform lets our character reach for moving hand holds without changing our code at all, since the position and orientation are updated automatically. If we parent the effector transforms, then as long as we update our climbing position every frame based on the CURRENT effector, our character will also automatically move with any surface they are on, so moving platforms are trivial to climb. Furthermore, if we process this motion in the FixedUpdate loop we can also climb on rigidbodies. It’s worth noting that the FOOT effectors are not tied to Transforms; there are no “foot holds” in the demo. Instead, since climbers often place their feet on nearly imperceptible imperfections on the surfaces they climb, the system just constructs a series of raycasts from where each foot “should be” based on the hand positions and inserts a Transform at the location of contact.
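A sketch of what one of those foot placement queries could look like; the helper and the layer mask are assumptions, and the demo’s exact cast layout isn’t shown:

// Cast from just off the surface, at the spot where the foot "should" be based on
// the hand positions, in toward the climbing surface.
Vector3 idealFootPos = ComputeIdealFootPosition(); // assumed helper derived from the hand effectors
Vector3 castOrigin = idealFootPos - (transform.forward * 0.5f);

if (Physics.Raycast(castOrigin, transform.forward, out RaycastHit hit, 1f, climbableLayers))
{
    // Insert a transform at the contact point, oriented into the surface, and use it
    // as the FOOT effector's NEXT target. Parenting it to the hit collider means it
    // follows moving geometry for free.
    Transform foothold = new GameObject("Foothold").transform;
    foothold.SetPositionAndRotation(hit.point, Quaternion.LookRotation(-hit.normal, Vector3.up));
    foothold.SetParent(hit.collider.transform, true);
}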

So how do we figure out where these hand holds actually are? The first iteration of this system used active tracking to try to dynamically find a spot to place the character’s hand. That is to say, you would press “right” and a bunch of raycasts would fire to find an object “to the right,” and then the system would try to find the correct surface normals of the hold, and whether or not it was an edge, using even more raycasts. This approach feels like it should make sense for a reactive climbing system; after all, isn’t that the point? Well, it turns out to be a really bad idea if you actually want to limit what the character can grab onto, not only from a performance standpoint but because the edge cases are a nightmare. In the end, this system uses a helper script that runs in the editor (or at runtime in a separate thread if need be) which parses the underlying triangles of a climbable mesh to find edges and corners, and then spawns empty GameObjects on these edges that we can use as effector targets. The Transforms of these GameObjects are oriented with the surface normal to make finding the correct orientation trivial, and what counts as an “edge” can be limited by tweaking a few angle limits. Here is what the mesh parsing code looks like:

private void CrawlMeshForHandholds()
{
    // trianglePairs is a list of triangles that are next to each other, determined by finding
    // shared vertices in the triangles array of the mesh. I left that code out here since it's
    // mostly just confusing to read and isn't that interesting.
    for (int i = 0; i < trianglePairs.Count; i++)
    {
        TrianglePair tp = trianglePairs[i];
        // GetNormal takes the cross product of two edge vectors on the triangle to calculate a surface normal
        Vector3 n1 = GetNormal(tp.t1);
        Vector3 n2 = GetNormal(tp.t2);
        Triangle upT = tp.t1;
        Triangle forT = tp.t2;
        // here we are comparing the surface normals to the world up vector to find out
        // which triangle is more "up facing" and which is more "out facing"
        Vector3 upNormal;
        Vector3 forwardNormal;
        float upScore1 = Vector3.Dot(Vector3.up, n1);
        float upScore2 = Vector3.Dot(Vector3.up, n2);
        if (upScore1 > upScore2)
        {
            upNormal = n1;
            forwardNormal = n2;
        }
        else
        {
            upNormal = n2;
            forwardNormal = n1;
            upT = tp.t2;
            forT = tp.t1;
        }
        if (Vector3.Angle(Vector3.up, upNormal) > UP_THRESHOLD) continue;
        if (Vector3.Angle(upNormal, forwardNormal) < MIN_DIFF || Vector3.Angle(upNormal, forwardNormal) > MAX_DIFF) continue;
        // if the "up" triangle's centroid sits below the "out" triangle's, this is an inside edge - skip it
        float upY = ((upT.v1 + upT.v2 + upT.v3) / 3f).y;
        float forY = ((forT.v1 + forT.v2 + forT.v3) / 3f).y;
        if (forY > upY) continue;
        AddHandholds(tp.e, upNormal, forwardNormal);
    }
}

private void AddHandholds(SharedEdge e, Vector3 up, Vector3 fn)
{
    Vector3 diff = e.v2 - e.v1;
    if (diff.magnitude < MIN_HH_DIST)
    {
        GameObject h1 = AddHold(e.v1, up, fn, activeGameObject.transform);
        GameObject h2 = AddHold(e.v2, up, fn, h1.transform);
        return;
    }
    else
    {
        // space holds evenly along longer edges for better coverage
        Vector3 dir = diff.normalized;
        GameObject obj = AddHold(e.v1, up, fn, activeGameObject.transform);
        Transform prev = obj.transform;
        for (float i = MIN_HH_DIST; i < diff.magnitude; i += MIN_HH_DIST)
        {
            GameObject t = AddHold(e.v1 + (dir * i), up, fn, prev);
            prev = t.transform;
        }
    }
}

Let’s break down this code a little, since it’s not necessarily intuitive. The main idea is that climb holds get placed on mesh “edges” where two triangles meet and form a somewhat sharp angle between their surface normals (this is why we don’t use the mesh normals in our calculations: the smoothing applied to normals to make surfaces not look like crap actually hinders this algorithm). Finding adjacent triangles is pretty simple: if the mesh uses vertex welding, our triangles array will index into the same place in the vertex array for shared vertices, and two triangles which share 2 vertices must share the edge defined by those vertices. If our model doesn’t use vertex welding, we can do it manually and perform a “fuzzy compare” of the mesh vertices to find those that are co-located (there’s a sketch of this adjacency pass after the figures below). Once we have a list of all adjacent triangles we can start comparing normal vectors to find out if we are “close enough” to an edge. This part is simple, but there is a bit of an ugly hack to address the “inside” vs “outside” edge problem. If you just check the angle between the surface normals of two triangles that share an edge, there is no way to tell if your corner is like Figure A below or like Figure B. To account for this, I do a quick test to see which triangle is facing more “up” and which is facing more “out.” Then, if the “up” triangle is below the “outward” triangle (has a lower y value at its centroid), I assume the edge is internal; if it’s the other way around, I assume the edge is external and is a candidate for climb hold placement. The climb holds are then instantiated in local space along the edge, placed at each vertex and then, if necessary, at even points between the vertices for better coverage.

 
Figure A

Figure B
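For completeness, here is roughly what the adjacency pass that builds trianglePairs could look like with welded vertices. This is a simplified sketch of my own; here a pair just records the two triangles’ start indices and their shared edge, rather than the demo’s TrianglePair type:

// With welded vertices, two triangles are adjacent if they share two indices.
// Key each edge by its sorted index pair; the second triangle to touch an edge forms a pair.
var edgeToTriangle = new Dictionary<(int, int), int>();
int[] tris = mesh.triangles;
for (int t = 0; t < tris.Length; t += 3)
{
    for (int e = 0; e < 3; e++)
    {
        int i0 = tris[t + e];
        int i1 = tris[t + ((e + 1) % 3)];
        var edge = i0 < i1 ? (i0, i1) : (i1, i0);
        if (edgeToTriangle.TryGetValue(edge, out int other))
            trianglePairs.Add((other, t, edge)); // found two triangles sharing this edge
        else
            edgeToTriangle[edge] = t;
    }
}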

 

This approach generally works really well, but there are a few drawbacks. First, the only validation we perform on the holds is a local test to see if we are on an edge candidate. This means that we can sometimes place holds in unreachable positions due to occlusion or other issues with the underlying mesh. Second, if we create a GameObject for every hold position, we could wind up with hundreds of thousands of them in our scenes, which slows everything down a lot.

For the first problem (validation) I decided to implement a runtime fix rather than a preprocessing fix. This decision was mostly made because some runtime validation was needed regardless: some hand holds, while valid, aren’t practical in certain circumstances, and there are likely to be many handholds available to the character at any given time, so we need some selection processing anyway. Why not throw some validation in there? At runtime, the demo finds all handholds in a given radius (based on whether we are doing a simple directional move or a jump) and then goes through the list and rejects invalid candidates. The validation currently does some sphere overlaps based on where the character would be if they used the potential hand hold, plus a couple of sphere overlaps to see if the character’s hand would be inside geometry at the target hold. This rule set isn’t perfect, and you can see some issues in the demo, but for most cases it works pretty well and is relatively quick. After validation rejection, the hand holds are sorted by a function of direction from the character, distance from the character, and relative orientation. Basically, a hand hold that is closest to the input direction, an “ideal reach distance” away, and requires minimal rotation is preferred. Once we have a sorted list of handholds, the top 10 or so are passed to an optional, more rigorous validation test, which involves line-of-sight raycasts from the LAST position. The top remaining hold after this validation process is the one that gets used.
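The sorting boils down to a scoring function over the candidates. A plausible shape for it is sketched below; the weights and the ideal reach value are made up for illustration, not the demo’s actual numbers:

// Lower score = better candidate.
float ScoreHold(Vector3 characterPos, Quaternion characterRot, Vector3 inputDir, Transform hold)
{
    Vector3 toHold = hold.position - characterPos;

    float directionPenalty = 1f - Vector3.Dot(inputDir, toHold.normalized);       // prefer holds along the input direction
    float distancePenalty = Mathf.Abs(toHold.magnitude - 1.2f);                   // prefer holds an "ideal reach" (~1.2m, assumed) away
    float rotationPenalty = Quaternion.Angle(characterRot, hold.rotation) / 180f; // prefer minimal reorientation

    return (directionPenalty * 2f) + distancePenalty + rotationPenalty;
}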


The second problem (a whole bunch of GameObjects) is a little more complicated. Ultimately I needed a better solution than storing a bunch of GameObjects, since they can become a problem even outside of play mode. The solution the demo uses is an on-demand queue of hand holds which are streamed in around the character when needed. Basically, each climbable object has a serialized list of transform data which can be parsed to place actual Transforms in the correct locations at runtime. Here is a look at the pruning process, which is run as a coroutine once the character is close enough:

public IEnumerator PruneHoldTransforms()
{
    Vector3 cp = searchCenter.position;
    // climbHolds is the list of serialized data attached to this object representing valid hand holds
    int num = climbHolds.Length;
    int count = 0;
    float r = searchRadius * searchRadius;
    Transform instance = null;

    // release holds that are too far from the character back into the queue
    for (int i = 0; i < holdTransformInstances.Length; i++)
    {
        if (holdTransformInstances[i] == null) continue;
        float dist = (holdTransformInstances[i].position - cp).sqrMagnitude;
        if (dist > r)
        {
            queue.Enqueue(holdTransformInstances[i]);
            holdTransformInstances[i].SetParent(null, true);
            holdTransformInstances[i] = null;
        }
    }

    Vector3 p;
    float d;
    while (count < num)
    {
        // since we are running in a coroutine, we only look at 5000 holds per frame for performance
        for (int i = count; i < count + 5000; i++)
        {
            if (i < num)
            {
                // Serialized hold data is stored relative to the identity transform of the asset,
                // so when running we need to apply the parent transform to each hold.
                p = climbHolds[i].parent.TransformPoint(climbHolds[i].GetPosition());
                d = (p - cp).sqrMagnitude;
                instance = holdTransformInstances[i];
                if (d <= r)
                {
                    if (instance == null)
                    {
                        // add a hold
                        holdTransformInstances[i] = queue.Dequeue();
                    }
                    // if we added a hold, we need to rotate it properly and set the transform hierarchy
                    if (holdTransformInstances[i] != null)
                    {
                        couldHaveHolds = true;
                        holdTransformInstances[i].position = p;
                        holdTransformInstances[i].localRotation = climbHolds[i].GetRotation();
                        if (holdTransformInstances[i].parent != climbHolds[i].parent)
                        {
                            holdTransformInstances[i].SetParent(climbHolds[i].parent, true);
                        }
                    }
                }
            }
            else
            {
                break;
            }
        }
        count += 5000;
        yield return null;
    }
}

This code is probably fairly self-explanatory, but it’s important to note that there is a global “anchor point queue” (I called climb holds anchor points for... reasons) from which all of these climbable objects draw. There are about 2500 holds in the queue, which caps the GameObject budget for climb holds a couple orders of magnitude lower than the naïve approach. I found that there could be some stuttering when using this system in combination with rigidbody dynamics, likely due to some interleaving between the coroutine calculating transforms and the rigidbody actually moving with the physics engine. For this reason, fully dynamic climbable objects use the naïve approach and have strictly parented climb holds, whereas static geometry utilizes the climb hold queue.

 

Conclusions

I think it’s very safe to say that this system is not perfect, although it is certainly my best attempt thus far, and I think there are a lot of things it does well. Primarily, I think the procedural ability to climb on all types of moving and dynamic surfaces is relatively well done, even if it’s not particularly efficient. I also think that layering procedural content is a great way to achieve a wide range of motion, especially when combined with physics simulations.

There are, of course, a number of things that could be done better. First, if you play the demo you’ll probably wind up either reaching through a wall or placing a foot on the wrong side of a surface at some point. These moments are usually pretty rare, but in particularly tricky scenarios (like climbing on the windmill) they can quickly rear their ugly heads. I think this is a combination of not hand-modifying the placement of climb holds and needing to reduce validation for the sake of performance; a future system could benefit from some serious optimization so there would be enough spare cycles for a few more physics checks. Another issue is that the body motion is sometimes stiffer than I would like. The procedural nature of the system means this can be adjusted, but I think the lack of secondary animated motion to layer on top of the procedural motion is ultimately a bit of a handicap nonetheless. I would love to be able to add more animation to the climbing and see how it impacts the overall look and feel.

As always, thanks for reading, and if you have any questions or comments you can find me on Twitter @UpRoomGames. Also, if you haven’t played around with the tech demo you can download it here; just remember this is highly experimental and may not run (or run properly) on all systems. The demo is for personal use only, not for re-distribution.