Saturday, July 22, 2017

[Alliance:HotS] Gear Set Bonuses

One of the things that makes Alliance: Heroes of the Spire so deep is the serious amount of gear customization you can do: six gear slots, set bonuses for matching pieces that can seriously change how you'd deploy a Hero, and gems to slot on top of all that.

Gear set bonuses come when you equip either 2 or 4 pieces of a given set. The 2 piece sets are generally raw stat bonuses, whereas the 4 piece sets are the interesting effects:
  • 2P Bone; +20% HP
  • 2P Wraithbone; +25% HP
  • 2P Furyborn; +20% Power
  • 2P Dragonfury; +25% Power
  • 2P Coldsteel; +15% Armor
  • 2P Icesteel; +20% Armor
  • 2P Sharpthorn; +10% Crit
  • 2P Elderthorn; +20% Crit
  • 2P Brightshield; +15% Block, +5% Reflect on Block
  • 2P Sunshield; +25% Block, +7% Reflect on Block
  • 2P Nightleather; +20% Aim, +5% Armor Penetration
  • 2P Voidleather; +25% Aim, +10% Armor Penetration
  • 2P Bloodstone; 15% Lifesteal, +10% Healing Received
  • 4P Ironclaw; 35% Counterattack Chance
  • 4P Swiftsteel; 40% Chance of Bonus A1, +50% Crit (Damage Only)
  • 4P Wartech; 50% of Crits will have +200% CritMult
  • 4P Witchstone; Buffs/Debuffs can Crit
  • 4P Lifesilk; +30% Healing Done, HoTs can Crit
  • 4P Titanguard; 15% Less Direct damage taken, Redirects 30% of party damage to self
That is seriously a lot of options. Today's post is going to be looking mostly at the 4 piece sets, though I may talk a bit about the 2 piece sets in relation. How much does each set help? When might you use each 4 piece set? Let's start with some of the easier to discuss sets.


Lifesilk
+30% Healing Done, HoTs can Critical Strike

Lifesilk is pretty straightforward: you want this unit to do more healing. The details: the +30% is multiplicative (so if your Pyrus heals for 30% health, with Lifesilk he'd heal for 39% health), and critical HoTs heal for 15% of maximum health instead of the basic version's 10%.

Since critical HoTs require Critical Strike chance, you may want to pair this set with things that increase your Crit%, but if the unit doesn't have any HoTs, then it'll go well with literally any 2P stat increase. On the other hand, for high-level arena, you may end up using a more defensive 2P for your healers to survive. My Anat, for example, runs Lifesilk and Sunshield so she can have the extra block.

Witchstone
Buffs/Debuffs can Critical Strike

Witchstone is possibly the most complex of the sets, because what does a buff or debuff critting even mean? You can find an exhaustive list on the Alliance website here, but here are the rules of thumb to remember:
  • If the buff/debuff has a number, crits increase it (i.e.: Armor Break goes from -50% armor to -75% armor)
  • If the buff/debuff doesn't have a number, crits make them unpurgeable (Stuns, Sleep, Silence, Debuff Immunity, etc.)
  • Bombs are the odd one out, on a crit they stun when they explode
  • Bar drains and Bar fills are affected multiplicatively
  • Witchstone cannot cause HoTs to crit
Witchstone makes a good choice if you have buffs or debuffs you want to supercharge. Unpurgeable silence or stuns can do wonders in Path of the Ancients or some Lost Dungeons where one unit constantly cleanses their team. If the unit you're bringing is less about damage and more about control or support, Witchstone may make for a good choice.

An example here is Sunslash, the Order Sabretooth. He doesn't do much damage, but making his AoE Mark a Critical Mark effectively increases your team's direct damage output by 15.4% (a 130% damage multiplier for basic Mark versus 150% for Critical Mark, and 150/130 ≈ 1.154).


Titanguard
15% Less Direct Damage taken, redirects 30% of party damage to self

Titanguard is a fun one, and in high-level arena you'll often see it appear in what seem at first glance to be weird places. The first benefit of Titanguard is a straight-up 15% damage reduction on Direct Damage. The damage transfer effect is not direct damage, nor are DoTs or damage reflect, so none of those are affected by this damage reduction.

The other benefit is redirecting 30% of direct damage taken by other units to the Titanguard unit. Again, direct damage only, so DoTs, damage reflect, or other Titanguard transfers do not count. This will often allow some of your squishier units to survive that much longer, even if you don't have a taunt up, or allow them to survive against AoEs.

This does create a weakness in Titanguard units: it's often easier to kill them indirectly by piling damage onto other units, especially if said other unit has a lot of HP but not a lot of armor--Petra comes to mind here. Titanguard units may also be susceptible to teams with a lot of high-power AoE attacks for a similar reason: 3 units' worth of damage redirecting at once can drain the Titanguard unit's HP bar very quickly.

Finally, multiple Titanguard teams work interestingly: the 30% damage redirect is calculated first, then split among all available Titanguard units. So if you have 2 Titanguard units in your team, each one will take half (15%) of the redirected damage. You also cannot redirect damage to yourself, so if a Titanguard unit is one that's hit, they're not considered an available Titanguard unit in the damage transfer calculation.
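To make those rules concrete, here's a minimal sketch of the redirect calculation as I understand it--the names are mine, and the ordering of the 15% reduction versus the redirect is my assumption, not confirmed mechanics:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Unit
{
    public string Name;
    public bool HasTitanguard;
    public double Hp;
}

public static class TitanguardRules
{
    public static void ApplyDirectDamage(Unit target, double damage, List<Unit> party)
    {
        // 15% less direct damage taken by the Titanguard wearer itself.
        if (target.HasTitanguard)
            damage *= 0.85;

        // You can't redirect damage to yourself, so only *other* living
        // Titanguard units count as available for the transfer.
        var guards = party.Where(u => u != target && u.HasTitanguard && u.Hp > 0).ToList();

        double redirected = guards.Count > 0 ? damage * 0.30 : 0.0;
        target.Hp -= damage - redirected;

        // The transfer isn't direct damage, so it can't be mitigated
        // (shields aside); each available guard eats an even share.
        foreach (var guard in guards)
            guard.Hp -= redirected / guards.Count;
    }
}
```

With two Titanguard units in the party, each takes half of the 30% carved off an ally's incoming hit--the 15% split described above.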

Often you'll want to ensure Titanguard is on a unit that has a lot of health to begin with, since the damage transfer cannot be mitigated. Shields will still absorb it, but nothing else reduces it. So units like Petra, Valorborn, or even the Mechanics are good choices for Titanguard.

Ironclaw
35% Chance of A1 Counterattack

Ironclaw and Swiftsteel are relatively similar in a mechanical sense. They both give you more (automatic) uses of your first ability. The difference largely lies in the trigger mechanism. Ironclaw requires you to be attacked in the first place. This makes Ironclaw a good fit for units that get attacked often: taunters, provokers, and guarders.

Ironclaw is effectively a DPS increase, but it can also be useful if the Hero has a debuff on their A1 you want to apply as often as possible. Gaius' A1 stuns, for example. Otto's A1 hits like a truck. Both good reasons to bring Ironclaw to the table. In the case where you want more debuffs, you'll likely want to pair it with a surplus of Aim. Especially for tanks, going mostly Aim instead of Block feels weird, but if the enemy team is mostly Stunned anyhow, it's not a big deal.

Ironclaw on farmer units that have self-healing (i.e.: Otto, Petra) is ridiculously effective since they're always getting attacked.

Note that you cannot Counter more than once per attack, so if your Hero already counters as part of their kit, Ironclaw is not likely a great choice.

It's a pretty straightforward ability, but look lower in the post for an analysis of Ironclaw vs. Swiftsteel, because that's where things start to get muddy.


Swiftsteel
40% Chance of a Bonus A1 followup, +50% Critical Strike for Damage

Swiftsteel is a bit more complex than Ironclaw. Every time you use an ability, any ability, you have a 40% chance of following up with a Free Attack, which is a usage of your A1 against the target. If your target is friendly, or the ability has no target, the Free Attack will choose a random enemy, ignoring Taunt or Provoke.

Rallies and Counters count as ability usage for Swiftsteel procs, so you can get bonus A1 attacks on those. However, a Free Attack cannot proc another Free Attack--but a Free Attack proccing a Counter from the opponent, which in turn procs a Counter from you, which then procs a Free Attack off that Counter? That can occur. It's rare, but when it happens you basically just watch the two units hit each other over and over until one of them dies or someone fails to proc a Free Attack. It behaves like a bug, but it's permissible under the rules of Counters and Free Attacks.

Swiftsteel also provides a +50% Critical Strike, but for damage only. Heals do not benefit from this. This means you'll often pair a Swiftsteel set with either a +Power or +CritMult weapon, rather than the typical +Crit weapon many go with.

The reasons for going Swiftsteel are pretty well the same as going for Ironclaw: it's a DPS increase, and if your A1 has a great effect, you may want more of those. Midorimaru (or any Samurai Cat) is a great case for Swiftsteel to spread more DoTs, for example.

Look below for an analysis of Swiftsteel vs. Ironclaw, and Swiftsteel vs. Wartech.


Wartech
Half of your Critical Strikes are Supercrits (+200% CritMult)

Supercrits. The name sounds awesome, but what is a Supercrit? It's a Critical Strike that has an extra 200% CritMult applied to it. So for example, if you normally have 50% CritMult, a Supercrit will actually do 250% extra damage instead of 50% extra.

Unless you're rocking a surplus of Crit gem slots, you'll almost always want to pair this with a +Crit weapon.

It's ridiculously straightforward, and basically, if you need burst damage, Wartech is pretty much the way to go. But you also can't rely on it. On average it's really only increasing your CritMult by 100%, since only half your Crits will have it applied. So big, bursty, swingy damage.

Often units with big AoEs will get Wartech applied. If you're not terribly enamored of your Hero's A1, Supercrit may be the way to go for a damage increase. But how does it compare versus just 2 Dragonfury sets (+50% Power)?

For the sake of simplicity, let's pretend everything else is the same: stat allocations, etc. 100% Crit, and no CritMult beyond the base 50%.

If we do 100 base damage at 100% Crit with the base 50% CritMult, our damage is 150.
With +50% Power, our base damage changes to 150, which after a crit is 225.
With Supercrits, our base damage is still 100, but a Supercrit is +250% damage, which is 350--while the floor half the time is a regular crit at 150. So an average damage of 250 instead.

So you can see that Wartech increases the average damage dealt a bit, even over +50% Power, but it's swingy. Sometimes you'll do way less, sometimes you'll do way more. If you have less than 100% Crit, the benefits of Wartech also go down. In this particular instance, you need about 67% Crit for Wartech to do the same damage on average as 2 Dragonfury sets: setting Dragonfury's 150 × (1 + 0.5c) against Wartech's 100 × (1 + 1.5c) gives c = 2/3. Now, even if the average damage is lower on Wartech, it will still have a higher maximum. The maximum damage would still be 350. You'll just see it less often.

The above only holds for units that scale 1x with Power. If they have abilities that scale better with Power, that ~67% Crit inflection rises further. If their abilities scale worse with Power, the Crit threshold goes down.
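If you want to play with the numbers, here's a quick throwaway calculator for the same comparison--my own sketch, under the same simplifying assumptions (100 base damage, base 50% CritMult, 1x Power scaling):

```csharp
using System;

public static class WartechVsDragonfury
{
    public static void Main()
    {
        const double baseDamage = 100.0;
        const double critMult = 0.5; // base 50% CritMult

        for (double crit = 0.0; crit <= 1.0001; crit += 0.25)
        {
            // 2x Dragonfury: +50% Power, normal crits only.
            double dragonfury = baseDamage * 1.5 * (1 + crit * critMult);

            // Wartech: half of all crits gain an extra +200% CritMult,
            // so the average bonus per crit is 0.5 + (0.5 * 2.0) = 1.5.
            double wartech = baseDamage * (1 + crit * (critMult + 0.5 * 2.0));

            Console.WriteLine($"Crit {crit:P0}: Dragonfury {dragonfury:F1}, Wartech {wartech:F1}");
        }
        // Setting the two equal: 150(1 + 0.5c) = 100(1 + 1.5c) => c = 2/3.
    }
}
```

At 100% Crit that prints 225 versus 250, matching the worked example above; the lines cross at roughly 67% Crit.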

Enemy teams that have healers--Magitek Bards and Unicorns especially--basically have a HP reset button every few rounds, so burst becomes very important when fighting those teams, making Wartech attractive.


Analysis
Ironclaw vs. Swiftsteel

Because they're so similar, Ironclaw and Swiftsteel may be an interesting question of which to use.

To ensure the same number of potential A1 procs a round, an Ironclaw unit would have to be attacked 1.14 times a round (0.40 / 0.35). However, Ironclaw always swings back at the attacking unit, whereas Swiftsteel hits the unit being attacked (usually), or a random unit if none is targeted.

Swiftsteel, if you get really lucky on buff procs, can wreck backline units as it ignores taunts. It's not an effect you can count on however, as you'd have to proc it at 40%, and then randomly select the backline unit (25% chance if nothing else is dead), so basically, if you use a buff with Swiftsteel, you have a 10% chance to hit a given enemy unit. It can be deadly to the opposing team, but not something you can build around.

Every unit in the game has an A1 that will end up having to target a Taunter, so if you have an Ironclaw Taunt up, eventually they'll attack your taunter, and you have a little bit better than a 1/3 chance to hit them back.

So basically, Swiftsteel is great for focus fire, and Ironclaw is great for wrecking backline units.

What makes this complicated is Swiftsteel's 50% Critical Strike for Damage bonus. It means you can effectively run a +Power or +CritMult weapon instead of a +Crit weapon: an extra +51% Power or +67% CritMult for a maxed-out 5* weapon. Depending on your ability scalars, and your stat allocation before the weapon, that could be a 50%+ damage increase or even more, but it's more likely in the range of +25%ish.

What that means is that the actual "must be attacked this often" value for Ironclaw to match Swiftsteel is:

$$\frac{0.40 \times 1.5}{0.35} \approx 1.71 \text{ attacks per round}$$

So taking the upper bound of Swiftsteel's extra damage output (the 40% proc rate times a ~1.5x damage boost from the freed weapon slot), the Ironclaw unit needs to be attacked an average of 1.71 times a round to keep up, which for most tanks is easily hit, even if they aren't tanking (given the prevalence of AoEs). So even with the Swiftsteel +Crit, Ironclaw is actually still fairly powerful in the Tank niche. However, if you have a Counter already built in (i.e.: Valorborn, or given by Diana), or Rallies, you may be better off going Swiftsteel because you'll have more than one chance to proc Swiftsteel a round, and you'll quickly outstrip Ironclaw with that.


Swiftsteel vs. Wartech

Here's where comparisons get complicated. The two act so very differently that direct comparisons don't quite work. I'll be making some assumptions/shortcuts to make them easier to compare as DPS increases, but what a Hero's A1 is, and what you're aiming for, really dictate this decision.

For DPS purposes, though, Swiftsteel is effectively 40% of an A1 and 50% bonus crit, and Wartech is effectively +100% CritMult.

Let's make some other assumptions: 100% Crit regardless of set, which frees up Swiftsteel for a +Power or +CritMult weapon. That assumption means you'd be getting 2/3rds the CritMult that Wartech gives (+67% from the weapon versus Wartech's effective +100%), which combined with the 40% extra A1 DPS means that if you're only using your A1, you're going to do more damage with Swiftsteel. On average.
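Spelled out under those assumptions (100% Crit, base 50% CritMult, A1-only damage; my arithmetic, not the developers'):

$$E_{\text{Wartech}} = 1 + (0.5 + 1.0) = 2.5$$

$$E_{\text{Swiftsteel}} = 1.4 \times \big(1 + (0.5 + 0.67)\big) \approx 3.04$$

That's roughly a 22% edge for Swiftsteel on sustained A1 damage.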

Average is a dangerous word here, however, because most arena fights are over in a couple rounds (or drag on forever). You might decide to go +Power for more consistent results instead, but similar to the calculations we did for Wartech alone, in both cases Wartech will still burst higher than Swiftsteel.

The other thing that makes "average" dangerous is that Wartech favours AoEs heavily. A single AoE can only proc Swiftsteel once; however, you get the potential benefit of Wartech on every hit of your AoE.

So once again, if you want consistent output, Swiftsteel may be better, but Wartech will give you better burst capability. And of course, if your A1 is an attack you want going off a lot, Swiftsteel is probably the way to go. If most of your damage is AoE, you probably want to go Wartech still. Anything that gives you extra potential Swiftsteel procs will favour Swiftsteel as well (Counters, Rallies).


The Future of Swiftsteel

The Swiftsteel changes effectively made Wartech niche (where before, Wartech was the "best" and Swiftsteel was niche). There's a rumour that Swiftsteel's proc rate will be reduced, which would bring it more in line with Ironclaw and Wartech so it's not quite so overwhelmingly powerful, but honestly, it's not the proc rate so much as the +50% Crit bonus that makes it such a great DPS tool. +51% Power on your weapon is potentially a massive DPS bonus; +67% CritMult is less of one unless your Hero doesn't scale well with Power, or you've already got a lot of +Power, as the two scale off each other.

As long as that Crit bonus exists, or exists at that level, Swiftsteel will likely be the go to 4P for DPS. To balance it, the Swiftsteel proc rate would have to be reduced to the point where you'd rarely see it proc, defeating the original purpose of the set. +30% Crit would've been more reasonable, as then you could have the question, do I Elderthorn for my 2P? Do I Weapon for +35% Crit? Do I do both? Can I get Jewel slots to make up the deficit of one or the other? Right now it's basically, Swiftsteel, 7 Crit Jewels, go. Or Swiftsteel, Elderthorn, 3 Crit Jewels, go. Swiftsteel makes it way too easy to hit the Crit cap.

With that in mind, I foresee a nerf to Swiftsteel's Crit bonus one day (after the designers try the proc reduction), or a buff to Ironclaw/Wartech--although in the right situation Ironclaw will crush Swiftsteel's output today, so I'm not sure about buffing it too much. Similarly, Wartech's upper bound already hits so hard that buffing it the wrong way could be dangerous to game balance--part of why buffs-only balancing doesn't really work despite people constantly suggesting it. The math breaks down eventually. Sometimes you just have to nerf.
#Theorycraft, #AllianceHotS

Wednesday, July 5, 2017

[IndieDev] Checkpoint Saves: Ugh, Why? And How, Part 2

Last week I chatted about the start of Eon Altar's save system, why it didn't work, and how we fixed it. This week I'll go in-depth about Eon Altar's Checkpoint Save system. 

Fast forward nearly a year from our new save system implementation--Aug/Sept 2015--when we finally entered Early Access. The game was probably about 80% functionally complete and 60% content complete. As I like to say, the last 20% of your game will take about 80% of your time, and Eon Altar was no different. We spent 10 months in Early Access, and initially the biggest piece of feedback we got was, "How can I save my game mid-session?" Our sessions ran about 30 minutes to 4 hours depending on the players, and in 2015, shipping an RPG without the ability to save mid-session was, well, pretty bad. So began the process of creating a checkpoint save system and retrofitting our levels to save data correctly.


Checkpoint Saves: Less Complex?

Why checkpoint saves, though? Why not save anywhere the player wanted? The answer to that is largely to reduce potential complexity. If a player can save anywhere and anytime they want, it means you have effectively an infinite number of states, and good luck testing that. A specific example of this would be Myrth's Court in Episode 1: The Prelude.

[Image: Myrth's Court]
That "moment" as a whole had the following:
  • A check to see which player characters were available.
  • A dialogue based on that to posit a vote.
  • A vote to decide which character's solution to use.
  • The actual moment where the party implements the aforementioned solution.
  • Potentially a combat as a result of the solution.
If players could save at any point in that process, that would significantly increase the testing complexity around that moment. What happens if you reload with different characters mid-moment? What happens if you have fewer characters? More characters? By only allowing saves to occur at specific points in the level, we can avoid having to test those mid-moment saves.

By using checkpoint saves, we could tie them to an existing checkpoint mechanic we had in the game already--Destiny Markers/Stones. Again, not having to worry about partial encounters is a huge complexity save, but also not worrying about how to turn on/off saving in certain locations. What if we had a bug that prevented save from being turned back on? Or a bug that allowed saving in the midst of a complex moment? Also, how do we communicate if we can save or not to players? And what would the save UI look like? By tying it to an existing checkpoint mechanic, it made it very easy to communicate and very easy for players to grok. No special rules or explanations necessary.

So while checkpoint saves aren't as convenient for players, the reduced complexity was enough to make checkpoint saves doable with our small team and budget.


How to Train Your Save System

We already had a method to quickly save data to disk, and the checkpoint save system would continue using it. The questions then became: where do we store that information at runtime so designers can access it, and how do we design it to require as little designer input/time as possible?

First we had to determine what we would have to save:
  1. Enemy spawner state: were they dead or alive?
  2. Game object state: was it enabled or disabled?
  3. "Usable" object state: was it waiting or already used?
  4. Finite State Machine (FSM) state: what state was it left in?
  5. Specialized game object state: what is the game object's transform (position, rotation)?
  6. Specialized spawner state: what is the enemy's transform (position, rotation)? What are the enemy's AI settings (aggressive, passive, or patrolling; allied to players or to enemies; patrol state)?
With those 6 items, we could literally save anything and everything in our levels.

I created specialized game components that could track those states and report them to the save subsystem as they changed, so we wouldn't have to trawl through level data to extract information--remember, we wanted to ensure the save system was fast. All the designers had to do was add them to an object they wanted to save that particular state out for, and give it a unique ID (well, my code autopopulated the ID based on a random GUID and the name of the object in the hierarchy, but the designers could override that if they chose).
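As a rough sketch of the pattern--component and subsystem names here are hypothetical, not Eon Altar's actual code--a component tracking item 2, enabled/disabled state, might look something like this:

```csharp
using UnityEngine;

// Hypothetical stand-in for the real save subsystem.
public static class SaveSubsystem
{
    public static void ReportState(string id, bool enabled) { /* record the change */ }
}

// Drop this on any object whose enabled/disabled state should persist. It
// reports changes as they happen, so saving never has to trawl the level.
public class SavedEnabledState : MonoBehaviour
{
    // Autopopulated from the object's hierarchy name plus a random GUID;
    // designers can override it with something stable and readable.
    [SerializeField] private string uniqueId;

    private void Reset()
    {
        uniqueId = gameObject.name + "_" + System.Guid.NewGuid();
    }

    private void OnEnable()  { SaveSubsystem.ReportState(uniqueId, true); }
    private void OnDisable() { SaveSubsystem.ReportState(uniqueId, false); }
}
```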

This worked extremely well. Design quickly retrofitted our existing levels. The vast majority of our save data is items 1, 2, and 3. FSM save data is rarely used unless the FSM is long-lived (our Destiny Markers are the primary users of this tech); most FSMs would trigger and finish in one go, or at least within one encounter, so we'd not have to worry about partial FSM execution by the time we hit a save point (yay checkpoint saves!). Item 5 was almost never used outside of redirecting patrol nodes for NPCs, and item 6 was generally only used on super special NPCs: ones that changed their AI based on designer scripts, or NPCs used for escort quests.

[Image: Wild Checkpoint Data draws near!]
The code took about a week to create/test/deploy for design. The lion's share of the time (and bugs) came from designers retrofitting levels. I think it was easily a full man-month of time to get the levels up to snuff, and the amount of testing required was still absolutely immense, despite the reduced complexity of checkpoints.


The Bugs
 
A pitfall of this approach--and I'm not sure there's an easy way to solve it, nor do I believe it's specific to this solution--is designers forgetting to put save components in levels, or chaining components in such a way that creates a problem on game load.

A specific example of this is a door in Episode 2, Session 1. Level design logic had the door with the following states: unopenable, locked, unlocked, open. Depending on the quests you did in the level, it could become locked, unlocked, or open. However, if you saved and quit and reloaded later, then the door would be unopenable because the door wasn't actually saving its state out, and players would become blocked.

Now, when we ran into those issues, we would add the save component to the level data, and then use code that ran on save data load to modify the data before it got applied to the level itself. Basically, based on which other quests were complete and what object states were saved, we could determine whether the door should be locked, unlocked, or open, and set that state in the upgrade code.
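A sketch of what one of those upgrade steps might look like--all type, method, and quest names here are invented for illustration:

```csharp
// Hypothetical view of the loaded save data.
public interface ISaveData
{
    bool HasObjectState(string id);
    bool IsQuestComplete(string questId);
    void SetObjectState(string id, string state);
}

public static class SaveUpgrades
{
    // Runs on the raw save data after load, before it's applied to the level:
    // if an older save never recorded the door's state, infer it from quests.
    public static void UpgradeEpisode2Door(ISaveData save)
    {
        if (save.HasObjectState("E2S1_QuestDoor"))
            return; // Newer save: the door recorded its own state.

        string state = "Unopenable";
        if (save.IsQuestComplete("E2S1_OpenGateQuest"))
            state = "Open";
        else if (save.IsQuestComplete("E2S1_UnlockDoorQuest"))
            state = "Unlocked";
        else if (save.IsQuestComplete("E2S1_MeetWardenQuest"))
            state = "Locked";

        save.SetObjectState("E2S1_QuestDoor", state);
    }
}
```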

To give you an idea of how often we've had to use this: today we have 10 such save file upgrades that potentially run on a save file, and the lion's share of them are for Episode 2 Session 1. Enough to make me glad we implemented it, but just how different E2S1 was from the rest of the levels really shows how easy it is to screw up save state if you're not thinking about it holistically.


The Future: SPARK: Resistance

SPARK won't have need of checkpoint saves, as sessions won't last more than 10-15 minutes at a maximum. Rather, any save data will be related to your "character". Unlocks, experience, statistics, etc. Thankfully, I'll be able to take our save system nearly wholesale from Eon Altar and apply it here, minus the checkpoint stuff.

[Image: A Randomly Generated Map and Associated Data]
The in-level checkpoint stuff wouldn't work in SPARK anyhow, as the level structures are fairly different to start with: the levels are procedurally generated rather than hand-crafted, and they're networked right from the start, which is very different from a local multiplayer game.


Conclusion

The current save system in Eon Altar is robust, extremely fast, legible, easy to modify, and minimalistic in its data requirements--XML is verbose as formats go, but the actual data we output is all essential. It requires as little designer input as I could possibly get away with (even most checkpoint save data is attached to prefabs and autopopulates all IDs in the scene at the click of a single button).

Yes, it took a fair amount of engineering work altogether, but I think that's because you just cannot skimp on engineering for a system like this. You get what you pay for, and if you're not willing to put the engineering time in, you're not going to get a great system on the other end. And as mentioned at the beginning, persistence is extremely important to games; a game can't afford to skimp on its persistence systems, in my personal opinion.
#IndieDev, #EonAltar

Wednesday, June 28, 2017

[IndieDev] The Nitty Gritty on Save Files, Part 1

Persistence is possibly one of the largest drivers of repeated and extended interaction a game can have. RPGs persist campaign data between sessions; puzzle games persist how far you've been in the game and how well you beat each puzzle; even ye olde arcade games persisted high scores for all to see (until someone rebooted the arcade machine, anyhow). With that in mind, creating a robust save system is one of the most important tasks you could have when developing a video game. For us developing Eon Altar and now SPARK: Resistance, this is no different.

However, even the task of gathering some data, throwing it on disk, and then loading it later comes with a bunch of potential issues, caveats, and work. I'll talk today about our initial attempts at a save system in Eon Altar, why we went that route, why it didn't work, and what the eventual solution came to be.


A Rough Start

When we first created our save system, the primary goal of it was to save character data and what session the players were on. We had a secondary goal of utilizing the same system as our controller reconnect technology, as the character data was originally mirrored on the controllers as it was on the main game: down to character model and everything. We weren't originally planning on having mid-session saves (those came later), so really we only had to worry about saving between levels.

The "easiest" way of doing this, without having to think of any special logic is to copy/paste the state from the main game into the save file, as well as the controllers over the network. In programmer terms, serialize the state, and deserialize it on the other end. Given our time/budget constraints, we thought this was a pretty good idea. Turned out in practice this had some pretty gnarly problems:

  1. The first issue was simply time. The time it took to save out or load a file was on the order of tens of seconds. Serializing character objects and transferring them across the network was measured in minutes, if it succeeded at all.
  2. The second was coupling to code. Since we were serializing objects directly, it meant that any changes to the code could break a save file. If we changed how the object hierarchy worked, or if some fields were deleted and others created, then existing save files would potentially be broken.
  3. The third issue was complexity. The resulting save file was an illegible, uneditable mess. Debugging a broken or corrupt save file was a near impossible task. Editing a broken save file was also quite difficult, if not impossible. Because of this, we couldn't (easily) write save upgrade code to mitigate issue 2. We'd have been locked into some code structures forever.
  4. The fourth was just far too much extraneous data. Because we were performing raw serialization, we were also getting data about textures, character models, what were supposed to be ephemeral objects, hierarchy maintenance objects, and so on.
While we had a save system that did what we wanted on the tin, it was untenable. Shipping it would've condemned our small engineering department to spending an immense amount of time trying to fix or work around those issues. So while this approach was "simple" and "cheap" in terms of up-front engineering cost, it was the wrong solution. We went back to the drawing board.


The Reimagining

About a year after we started development, the team shrank pretty substantially. We'd lost 1/3rd of our engineering team and my time became even more contested as I became the new Lead Programmer. I had to contend with the responsibilities that came with that title, as well as continuing to deliver features and fixes.


However, I had already been noodling on the save and reconnect systems, and had a new plan. The first step was to fix the controller reconnect, which you can read more about here.

Given reconnect was taking 8 minutes each time we had to reconnect a controller, it didn't take upper management much convincing that something needed to be done. And since reconnect and save were intimately connected at the time, making a convincing argument to fix save shortly after also wasn't a hard sell. So even though I had to disappear for 2 weeks to fix reconnect, and then another 2 weeks later to fix save files, I think everyone involved believes it was the correct decision.

To fix our 4 issues, it wasn't sufficient that we just be able to save and load character data in any which manner. It needed to be quick, it needed to be decoupled from the code, it needed to be easy to read/edit/maintain, and it needed to be deliberate about what it saved out.



The Reimplementing

For humans, text is easier to read than binary, and a semantic hierarchy is more legible than raw object data. So I knew pretty early on that my save file data was going to be plaintext XML.

Plaintext was important. We often get asked why we don't encrypt our save files, and it comes down to maintenance. Human-legible files are easier to read and easier to fix. As an indie studio with extremely limited resources, this was a higher priority for us than preventing people from cheating in their save files in a local multiplayer game. If your friend gives themselves infinite resources and you catch them, you can dump your coke in their lap.


Plaintext has saved our bacon multiple times: when a bug is blocking our playerbase, more enterprising players have been able to repair their own save files with careful instructions from us (and a lot of WARNING caveats) until we can get around to fixing it. It also means we can quickly pull information out of broken save files without having to decrypt anything first.

The benefit of using XML is that we could serialize and deserialize programmatically without any extra work on our part: tools to do so already existed. In fact, we were already using those tools for the old save files. The difference was that instead of serializing the character object instances directly, I created an intermediate set of data classes decoupled from the objects that make up the character data in-game, organized according to gameplay semantics rather than raw object hierarchy.

[Image: An example of a simple data class, and the resultant XML.]
Having actual data classes meant we could lean on the compiler to ensure data types matched up, and that we could just use existing serialization tools to spit out the save data. It did mean a fair bit of manual work to determine what goes into the save file and where, but the benefits of that work more than made up for the upfront time. Adding new fields to save data is trivial, and populating new fields via upgrade code isn't terribly difficult. Editing existing save files became super easy because the save file format was now extremely legible. Legible enough that we've had users edit their own save files easily. And good news, because the data was decoupled we could actually write save upgrade code!

Collating the data into the data classes at runtime is a super speedy process. Less than 1ms on even the slowest machines. We're only serializing the simplest of objects--data classes are generally only made up of value types, other data classes, or generic Lists of other data classes or value types. And since we weren't serializing a ton of extraneous objects that only were supposed to exist at runtime, the amount of data we'd save out was significantly reduced: 29KB for a file with 2 characters, instead of multiple MBs. We put the actual writing of the save file to disk on a background thread; once we had the data collated, there was no reason to stall the main thread any longer, and disk writes are notoriously slow.
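Here's a hedged sketch of that pipeline--the class and field names are illustrative, not the real Eon Altar data model, but the serialization calls are the stock .NET XmlSerializer:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using System.Xml.Serialization;

// A plain data class: value types, other data classes, and generic Lists only.
public class CharacterSaveData
{
    public string Name;
    public int Level;
    public List<string> AbilityIds = new List<string>();
    public List<string> InventoryItemIds = new List<string>();
}

public static class SaveWriter
{
    public static Task WriteAsync(CharacterSaveData data, string path)
    {
        // Serialize on the main thread (sub-millisecond for data this small),
        // then hand the notoriously slow disk write to a background thread.
        var serializer = new XmlSerializer(typeof(CharacterSaveData));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, data);
            string xml = writer.ToString();
            return Task.Run(() => File.WriteAllText(path, xml));
        }
    }
}
```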

The difficult part was going from the data classes back to instanced data. Previously it would get hydrated automatically, because that's what deserializing does; now that we hydrated data classes instead, I had to write a bunch of code that recreated the instanced runtime character data based on them. This required a lot of combing over how we normally generate these object instances, and basically "editing" a base character by programmatically adding abilities, inventory, etc. based on the save data. It wasn't particularly hard, but it was time consuming, and potentially where most of our bugs were going to lie. But using the same methods we call when adding these things normally at runtime let me reuse a lot of existing code.


Part 2: Checkpoint Saves
 
We had our new save system, and it was pretty awesome. The original save system was done in approximately a week, if my memory serves, maybe a little longer. The new system took a month of research, programming, and testing. Basically, you get what you invest in. Skimping on engineering time on this feature was a bad decision in 20/20 hindsight, but we fixed it, so all is well today!


Next blog post I'll discuss the next step we took for Eon Altar: checkpoint saves. Why checkpoints? What did we need to do to retrofit the game to handle them? What did the implementation look like? What pitfalls did we run into? And then, what can we reuse for SPARK: Resistance? #IndieDev, #EonAltar

Monday, May 8, 2017

[Alliance:HotS] Stats and Stat Relationships

Alliance: Heroes of the Spire--like many RPG-derived systems--has a number of statistics on each hero, and they're not really explained in-game. I've had the luck to chat with the developers (they're quite available, which is super cool), and have gotten a few formulas out of them instead of having to reverse-engineer everything on my own, which is fantastic. So here I'll talk about a couple of the major formulas, then talk about what those formulas mean for numeric relationships.

WARNING: SO MUCH MATH AHEAD


Aim vs. Block

Possibly the most asked about, and one of the most misunderstood. Aim and Block affect how often your debuffs land, or how often debuffs land on you. They're directly opposed. The formula is as follows:

$$\text{ProcChance} = \text{BaseChance} \times (1 + \text{Aim} - \text{Block})$$

Note that Aim and Block are both percentages here, so divide the values you see in the UI by 100.

So an example might be Caelia, who has a 50% base rate to proc a Heal Block debuff on her target with her A1. If she has 73% Aim, and the target has 25% Block, the result is:

$$50\% \times (1 + 0.73 - 0.25) = 50\% \times 1.48 = 74\%$$
So the two are linearly opposed, but multiplicative to the base proc rate. If the base rate is low, it'll still likely be low even with oodles of Aim; i.e.: a base 20% would only be 40% with 100 Aim against 0 Block, which is twice as high but nowhere near a guaranteed proc. That Aim will, however, prevent a high-Block enemy from dumping your proc rate into the toilet, as 100 Block against 0 Aim means multiplying your proc rate by 0.

Basically, if Aim and Block are close, it'll be about your base proc rate. The further apart they are, the greater the effect but the base proc rate is still the biggest factor.


Power vs. Armor

Okay, so let's do some damage. The thing to remember about damage rolls is that the only "roll" that occurs is the crit roll. Aim has nothing to do with your "accuracy" in the traditional sense (it only applies to debuffs), and the damage itself is a static number based on your stats; there's no random variation.

The damage formula is just a string of multipliers:

$$\text{Damage} = \text{RawDamage} \times \text{ArmorFactor} \times \text{CritFactor}$$
Each individual factor is a little more complex, but not by much.

Raw damage is simply your power multiplied by the scale factor of the ability. In cases where the ability does bonus damage, that bonus damage is generally the stat multiplied by a different scale factor. You can find scale factors on https://spirebarracks-dev.herokuapp.com/ for each hero ability, though I'm unsure how up to date it is.

For example, Akamin's A1, Magic Bolt, has a scale factor of 1, so it's simply just Power in base damage. His Spray of Flame, however, has a scale factor of 1.15, so it does more base damage.

In the case of "bonus damage" such as Otto's A1, Backhand, you have 0.2 * Power + 0.44 * Armor as the RawDamage factor.


The Armor Factor is based on your opponent's armor. All attacks are affected by this factor--unless they penetrate armor, but I'm not covering that today. The relationship ensures what is known as diminishing returns: after a certain point, each extra 1% mitigation becomes more expensive than the last. Every mitigation example below fits:

$$\text{Mitigation} = \frac{\text{Armor}}{\text{Armor} + 1040}, \qquad \text{ArmorFactor} = 1 - \text{Mitigation}$$

[Image: Armor Value vs. Percentage Mitigation]
For example, to get 20% mitigation, you need 260 armor. 60% mitigation, you need 1560 armor. 80% mitigation, 4160 armor. 90% mitigation, 9360 armor.

That sounds pretty excessive, but each interval I chose was half the damage taken of the previous interval. However, armor does start to lose a lot of luster after 4000ish unless you can easily net those armor points.

Finally, CritFactor:

$$\text{CritFactor} = \begin{cases} 1 + \text{CritMult} & \text{on a crit} \\ 1 & \text{otherwise} \end{cases}$$

Pretty simple: don't affect the calc if you don't crit; increase damage by your Crit Multiplier if you do. (Edit: this is 1 + CritMult%, not 1 + Crit%. Thanks Packo!)
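Putting the three factors together in code--note the 1040 constant is my reconstruction from the armor numbers above, not an official value:

```csharp
using System;

public static class DamageCalc
{
    // Mitigation = Armor / (Armor + 1040) fits every example above exactly.
    public static double ArmorFactor(double armor) => 1040.0 / (armor + 1040.0);

    public static double Damage(double power, double scale, double enemyArmor,
                                bool crit, double critMult)
    {
        double rawDamage = power * scale;
        double critFactor = crit ? 1.0 + critMult : 1.0;
        return rawDamage * ArmorFactor(enemyArmor) * critFactor;
    }

    public static void Main()
    {
        // A 1x-scale A1 at 1000 Power into 1560 armor (60% mitigation),
        // critting with 50% CritMult: 1000 * 0.4 * 1.5 = 600.
        Console.WriteLine(Damage(1000, 1.0, 1560, crit: true, critMult: 0.5));
    }
}
```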


Crit% vs. CritMult%

Critical Strike Rate increases your damage, as does your Critical Multiplier. However, the two are symbiotic: the more Crit% you have, the more you benefit from CritMult%, and vice versa. The good thing is that since this relationship is static, we can math out the optimal numbers for best performance.
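Concretely, with c as Crit% and m as CritMult% (both as fractions), the long-run average damage multiplier works out to:

$$E[\text{multiplier}] = (1 - c) \cdot 1 + c \cdot (1 + m) = 1 + c \cdot m$$

That c·m product term is the symbiosis: each stat's marginal value is proportional to the other.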

The graph comparing the two looks like the following:
[Image: 3D surface plot--Crit% on the bottom-left X axis, CritMult% on the right Z axis, average damage multiplier on the Y axis]
That's a little hard to read, so here's a contour graph instead:

[Image: Contour plot--X axis is Crit%, Y axis is CritMult%; contours from left to right are +0.2 damage multiplier each]
The darkest blue section represents an average damage increase of 0% - 20% over time. Then we have 20% - 40% in the mid-blue, 40% - 60% in the blue-orange, and so on.

From this graph we can easily see that Crit% has the bigger effect on our average overall damage until we start getting close to maximum Crit%. Around 60% to 80% Crit%, we may actually be better off starting in on CritMult% (assuming you're going for damage, and not, say, Witchstone, which cares naught about CritMult%).

A level 25 Weapon gives 35% Crit% or 67% CritMult%, whereas jewels are 5% each, making Crit% jewels far more powerful than CritMult% jewels up to a point. The Crit% weapon is still more powerful than the CritMult% weapon unless you're rocking enough Crit% jewels to hit ~60% Crit without it (which is 9 5% Crit% jewels--attainable, but good luck).


Crit%/CritMult% vs. Power

This is going to be the most complex relationship, and depends entirely on what scale factors your abilities have. However, if we assume a scale factor of 1x your power, life gets a little easier. Then the amount of extra damage you do depends entirely on your percent increase to Power.

If we look at the contour plot in the section above, with the base level of CritMult, we'd need 80% Crit% to maintain +40% damage for an A1 with a scale factor of 1, whereas a level 20 Power weapon will give you +40% damage by itself. This benefit gets even better for abilities with better than 1x Power scaling. Basically, if you go all-in on Power, it should be numerically comparable to going all-in on Crit% and CritMult%, if not better.

9 5* Power jewels is +99% Power, and Weapon and Gloves at 5* would be another 102%, meaning you'd do triple base damage. Versus 9 5* Crit% jewels for +45% Crit% (60% total) and +134% CritMult (for a total of 184%), which only sits around double average damage (1 + 0.60 × 1.84 ≈ 2.1).

Crit% jewels seem to be far more abundant than Power% jewels, though. I'm swimming in 5* Crit% jewels and have...0 Power% 5* jewels. Not sure they even exist. 4* Power% jewels give 8% each, which is +72% Power for nine--still better on average than the Crit% route.

But barring Supercrits or mechanics that play off Crits, Power seems to be the mathematically superior option here, especially at lower ends of gear--at least for average damage over time. For PvP, especially with Magitek Bards that can reset the health bars of their party every 3 turns, burst is the name of the game, and Crit%/CritMult% will give you far better burst than Power will.

Basically, Crit%/CritMult% makes your DPS swingier with a higher upper bound at low gear levels, whereas Power gives solid, dependable DPS that's not as swingy. But with enough Power gems, even Power can reach the upper bounds of what Crit%/CritMult% can manage.

And again, this all goes out the door if your abilities scale poorly off Power--so most tanks, or multiattack abilities like the Pistoleers' or Free Blades' A1. Or if you have things that proc off Crits, or you're using Supercrit (which I'm not going to do the math on today).


HP vs. Armor

Often people consider something called "Effective Health", which is a combination of factors that basically say: you have effectively this much health. For example, if you have 1000 HP, and 50% mitigation, your "effective health" is 2000.

Think of it this way: if an attack does 500 damage a shot and you have 50% mitigation, each shot actually only does 250 damage, and it'll take 4 shots to kill you. Or if you have 2000 health and no mitigation, it'll also take 4 shots at 500 damage each. Hence an effective health of 2000.

Armor and HP tend to be diametrically opposed on gear--you either have armor or HP--and HP generally comes in percentages (I'm ignoring raw-value jewels for this), so you can directly compare how much effective health your armor gives you versus how much your HP gear gives you. But note that Effective Health scales off both HP and armor, so it's not a strictly 1:1 relationship.

Add to that the fact that the majority of healing is done via percentage heals, and there's literally no reason to have more actual HP: Effective Health is king here. What complicates this is that armor isn't a linear value. Diminishing returns make the relationship a lot harder to determine, and the hero's base armor plays a huge role here.
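For reference, the relationship plotted below is (using my reconstructed armor constant from earlier):

$$\text{EffectiveHP} = \frac{\text{HP}}{1 - \text{Mitigation}} = \text{HP} \times \frac{\text{Armor} + 1040}{1040}$$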


[Image: 3D surface plot--+HP% on the bottom-left X axis, Mitigation% on the right Z axis, Effective Health multiplier on the Y axis]

[Image: Contour plot--+HP% on the X axis, Mitigation% on the Y axis, every contour is +1x Effective Health, starting at +2x]
As you can see in the plot, mitigation approaching 80% increases effective health significantly, and 90% even more so. I actually had to cut the graphs off at 80% or they would be barely legible. 90% mitigation is basically 10x effective health, for example, vs. 80% mitigation, which is only 5x.

Which is to say, armor has a much larger effect on effective health than raw HP does. And remember that 1560 armor total is enough for 60% mitigation. But since mitigation is just a function of armor, let's sub the Armor formula in for the Y axis of our contour graph:
[Image: Contour plot--+HP% on the X axis, Armor on the Y axis, every contour is +2x Effective Health]
There's no easy off-the-cuff answer here, unfortunately. Some combination of health and armor is likely to be the best. Here's a closeup of the bottom half of the graph with higher contour fidelity:
[Image: Closeup contour plot--+HP% on the X axis, Armor on the Y axis, every contour is +1x Effective Health, starting at +2x]
So ~2100 armor but no health is about x3 Effective Health, which is about the same as +200% HP and 0 armor. But if you could manage ~2100 Armor and +50% HP, you're looking at nearly x4.5 Effective Health.

No easy answers, but unless Rumble decides to put in attacks that do static HP in damage instead of percentages, or starts converting heals from percentages to static amounts, your actual HP doesn't matter: it's all about the Effective Health. The one exception currently is Armor Penetration, which makes stacking armor penetration potentially extremely powerful against Tanks--but I haven't run the numbers yet. That's just a hunch.

Edit: There is one other thing: Armor Break. Normal is 50% armor reduction, Witchstone is 75%. For a tank with the 4160 armor for 80% mitigation, that means 2080/1040 Armor after the debuff, which amounts to 66%/50% mitigation. So basically, 70%/150% more damage taken. So HP is a buffer in case of Armor Break.


Conclusion

I don't know how speed works precisely, so that's the one stat I'm missing, but otherwise this is a pretty comprehensive mathematical look at the stats in Alliance. Power in general seems to be undervalued by the community and Crit% overvalued. Armor vs. HP has a correct optimal answer, but depends on how much armor your character can get. Crit% vs. CritMult% also has a correct optimal answer, and Aim vs. Block is pretty straightforward.

The wrinkles that get thrown into these are basically individual ability Power scalars, "bonus damage" scalars (armor for some tanks, or Aim/HP/whatever), and abilities that proc off Crits, including armor sets such as Witchstone and Wartech. A lot still depends on the individual hero. And none of this takes buffs/debuffs into account.
#Theorycraft, #AllianceHotS

Sunday, April 23, 2017

[Alliance:HotS] Theorycrafting Critical DoTs

I've been playing a lot of Alliance: Heroes of the Spire the past couple of months. It's a nifty Summoners War clone made by a North American company, which has been pretty cool because we get to talk to the devs relatively often--as a dev myself, I'm pretty appreciative of that access.

So, you summon heroes, level them up, gear them up with up to 6 pieces of gear, and let them loose on other teams and dungeons. The thing about gear is you can wear x number of a type of gear to get a benefit. For example, Bone gear will increase your hero's Health by an extra 20% for every 2 pieces of Bone gear you have on. Some more powerful sets require 4 pieces, making them mutually exclusive with other 4 piece sets--such as Swiftsteel (25% chance of performing an extra ability 1 attack every time you attack), or Titanguard (Transfer 30% of damage done to your party to you, reduce incoming damage by 15%).

[Image: My Sunslash's Gear Screen, wearing 4-Piece Witchstone and 2-Piece Sharpthorn]
However, one of the more interesting mechanics I've found in the game is that with a specific item set--Witchstone--your buffs/debuffs can critically strike, making them either 40%~66%ish more powerful depending on the buff/debuff, or undispellable if it's not a numeric buff.

A prime example of this is Damage Over Time debuffs, aka DoTs. The regular DoT always does 5% of the target's max health in damage each round, but a critical DoT does 7% per round (an increase of 40% total damage). DoTs are great at shredding most PvE bosses, as they tend to be big targets with lots of Armor (damage reduction), and DoTs ignore Armor.

[Image: Razormane, Flameclaw, and Icefang are plentiful and, against the right targets, powerful.]
As such, lots of people like to use Razormane or Flameclaw to take out bosses. The benefit of these cats is that they're quite common--they're available from the worst pull you can make, and even drop from some dungeons, and their main attack has a 30% chance to apply a DoT (and can apply a second DoT if they were stealthed when they attacked).

Where this becomes really interesting is Swiftsteel (25% chance of an extra attack) vs. Witchstone (If your DoT crits, it'll be 7% DoT). Are the extra attacks better than the critical DoT? Let's do some math.

Making this all slightly more complex is the fact that debuffs may not always land. The to-hit of a debuff is Base Chance + (Aim - Enemy Block). Pretty close to everything has a base 15% to block (Bosses actually hit 25% at the highest floor you can fight them at).

So we have two possible stats for our DoT to scale from: Aim (Hit%), and Crit%. To make this easier, we'll ignore stealth for Flameclaw, and just assume spamming the first ability over and over.


Varying over Crit%

Let's assume we have a 100% chance to apply a DoT (125% Aim against a Floor 6 boss), and vary the critical strike percentage.

For Witchstone, we have a Crit% chance, aka x, to apply a 7% DoT, otherwise it's a 5% DoT:

$$E_{\text{Witchstone}}(x) = 7x + 5(1 - x) = 5 + 2x$$

For Swiftsteel, it never crits, so the amount of damage we can apply is based on the Swiftsteel proc rate:

$$E_{\text{Swiftsteel}} = (1 + 0.25) \times 5 = 6.25$$

(Both in % of the target's max health applied per A1 use.)

Now, there's a small flaw in this math that I'm going to glaze over, which is that these expected health percentage damage values are an average over a lot of samples. In a single fight where you might get 5 - 10 turns, the variation is going to be much higher, so take this with a grain of salt. But over a lot of fights, we can work without dealing with that flaw.

In any case, since Witchstone's damage varies over Crit% but Swiftsteel's doesn't, there should be a solid inflection point past which Witchstone generally outperforms Swiftsteel for DoT damage:

$$5 + 2x = 6.25 \implies x = 0.625$$

So, assuming we'll always apply a DoT, Witchstone will usually outperform Swiftsteel once you reach 62.5% chance to critically strike your DoT.


Varying over Hit%

Let's assume we have a 100% chance to critical strike, and vary over Hit%, aka y. I'll ignore the Block/Aim/Base Chance portion, and work with the Hit% directly to make life a little easier.

For Witchstone this is simple, since it always crits:

$$E_{\text{Witchstone}}(y) = 7y$$

For Swiftsteel, this means we scale both the regular attack and the 25% proc attack by Hit%:

$$E_{\text{Swiftsteel}}(y) = (1 + 0.25) \times 5y = 6.25y$$

Comparing the two, we actually find that Witchstone simply scales faster than Swiftsteel with Hit%. The only value of y where an inflection point can exist is y = 0:

$$7y = 6.25y \implies 0.75y = 0 \implies y = 0$$

We'll see this fact crop up again when we try to vary over both Crit% and Hit%.


Varying over both Crit% and Hit%

Remember, x is Crit%, y is Hit%

Witchstone:

$$E_{\text{Witchstone}}(x, y) = (5 + 2x)y$$

Swiftsteel:

$$E_{\text{Swiftsteel}}(y) = 6.25y$$

Equating the two:

$$(5 + 2x)y = 6.25y$$

Almost immediately we notice we can divide the entire equation by y, removing the variable. Basically, Hit% is meaningless to how the two scale relative to each other, which means 62.5% Crit% is the magic number where they become equivalent for a really basic scenario.
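As a sanity check, here's a tiny calculator for expected DoT damage per A1 use under the same assumptions (DoT always lands; damage in % of the boss's max health per round):

```csharp
using System;

public static class DotInflection
{
    // Witchstone: a crit upgrades the DoT from 5% to 7% of max health.
    public static double Witchstone(double crit, double hit) => hit * (5 + 2 * crit);

    // Swiftsteel: 25% chance of a bonus A1, each attack applying a 5% DoT.
    public static double Swiftsteel(double hit) => hit * 1.25 * 5;

    public static void Main()
    {
        for (double crit = 0.50; crit <= 0.751; crit += 0.05)
            Console.WriteLine($"Crit {crit:P0}: Witchstone {Witchstone(crit, 1.0):F2}" +
                              $" vs. Swiftsteel {Swiftsteel(1.0):F2}");
        // Crossover where 5 + 2x = 6.25, i.e. x = 62.5% Crit.
    }
}
```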


How Reality Completely Breaks My Model

Of course, Swiftsteel is more interesting than I've allowed for in my modeling. It can actually proc off any ability, meaning that if you use Prowl, you could end up making an extra attack where the Witchstone build gets nothing except an undispellable Stealth buff. On the other hand, if Swiftsteel procs on a move that has no target, I believe it picks a target at random, so it might be a wash depending on who it targets (unless there's only one target left).

This actually makes Swiftsteel significantly more valuable than that 62.5% Crit% inflection point would have you believe. A fully skilled-up cat on auto-battle will only use its A1 every other round based on cooldown rotation, which if you squint kinda makes it a 50%ish proc rate instead of 25%--counting each A1 usage as a double chance to proc, which is a small fallacy but close enough for demonstration purposes. That would put the inflection point at 125% Crit% (solving 5 + 2x = 7.5), which is absurd, as anything above 100% is wasted (also, good luck hitting that much Crit%). This also doesn't take into account the extra initial damage each Swiftsteel attack grants, though in the case of the cats it's usually small enough to be negligible.

But Witchstone has other benefits. For example, Sunslash, the Order cat, has an A3 that Marks all targets for 3 rounds, increasing the amount of damage anybody does to those targets by 30%--or 50% on a critical strike if you have Witchstone--which means a significant chunk of extra damage to potentially the entire enemy party. That makes Witchstone a better bet for Sunslash's overall DPS (assuming you can stick those debuffs). He'll likely have fewer DoTs, but critical DoTs will help make up that difference a little.

However, the 62.5% Crit% inflection is something to remember if we run into other DoT classes. Enough Crit%, and Witchstone will outstrip Swiftsteel's performance. But at the end of the day, it also comes down to what other abilities your hero is rocking, and what you need that hero for.

But if you're just using Flameclaw/Razormane for Boss Shredding DoT application, Swiftsteel is the way to go. #Theorycraft, #AllianceHotS