
I've done sandbox FPS PvP balances before, AMA (Destiny)

by Kahzgul, Saturday, February 11, 2017, 05:59 (2655 days ago) @ Cody Miller

There's a third, sadder option, and that's that Bungie is using a metrics-based approach to game balance. This seems to be the case, since they LOVE to trot out the metrics and show us charts about which classes have the highest K/Ds, etc. The problem with a fully metrics-based approach is that if your metrics are off at all, you screw up the balance. For example, Bungie is showing us that Defenders have the lowest K/D and says "so they need a buff." No. No they don't. Your metrics aren't looking at how having a Defender on the team improves the K/D of the surrounding players. K/D is not everything. If you pretend it is, then your balance will suck.
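To make that blind spot concrete, here's a minimal sketch (the match records, class names, and numbers are all made up for illustration, not actual Bungie telemetry) of comparing teammates' combined K/D with and without a Defender on the team:

```python
# Hypothetical data shape: each match record is one team's per-player
# (class, kills, deaths) tuples.
matches = [
    [("defender", 2, 8), ("gunslinger", 14, 6), ("striker", 12, 7)],
    [("gunslinger", 9, 9), ("striker", 8, 10), ("sunsinger", 7, 11)],
]

def teammate_kd(team, anchor_class):
    """Combined K/D of everyone on the team *except* the anchor class."""
    kills = sum(k for c, k, d in team if c != anchor_class)
    deaths = sum(d for c, k, d in team if c != anchor_class)
    return kills / max(deaths, 1)

# Teammates' K/D when a Defender was present vs. absent.
with_def = [teammate_kd(t, "defender") for t in matches
            if any(c == "defender" for c, _, _ in t)]
without_def = [teammate_kd(t, "defender") for t in matches
               if not any(c == "defender" for c, _, _ in t)]
```

If teammates' K/D is clearly higher when a Defender is present, then the Defender's own low K/D is the wrong number to balance against.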


Why not look at WINS then? Especially in certain gametypes, K/D is less meaningful. They should see which subclasses can't win games. Ultimately, that's what matters in the end when it comes to balance.

Win rate is a pretty good metric, but the fact of the matter is that no single metric is useful enough on its own to justify a sandbox change unless the margin is massive. For example, if Defenders had a 0 K/D across the board, they'd obviously need changes. I don't really want to say "a buff," because maybe their teammates never die or something, but it's a clear sign that something is broken (again, unless the design intent is for them to never be able to kill anything).

In team gametypes it's also tricky, because unless wins keep increasing with the number of, say, Gunslingers on a team - all the way up to 6 Gunslingers having the highest win percentage - there's usually some break-even point where adding more of a single class and/or spec actually hurts the team rather than helps it. Positive reinforcement for same-class-ness is good, but you also want to counter it with an opportunity cost for combo play between classes and specs. In short, you want your players to feel like they can bring pretty much whatever class and spec they want into a game and be successful. At the highest tier of play, you want teams to be able to plan class and spec composition for ideal synergy, but you don't want that to be more important than basic game skill.
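Finding that break-even point is just a matter of bucketing win rate by how many of a given class each team ran. A sketch, with entirely made-up records:

```python
from collections import defaultdict

# Made-up records: (num_gunslingers_on_team, team_won)
records = [(0, False), (1, True), (1, True),
           (2, True), (2, False), (3, False), (3, False)]

wins = defaultdict(int)
games = defaultdict(int)
for n, won in records:
    games[n] += 1
    wins[n] += won  # True counts as 1

win_rate = {n: wins[n] / games[n] for n in games}

# The class count where win rate peaks before falling off is the
# break-even point for stacking that class.
break_even = max(win_rate, key=win_rate.get)
```

In this toy data, one Gunslinger helps and three hurt, so the break-even lands at one; with real telemetry you'd also want enough games per bucket for the rates to mean anything.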

For balance testing, you want to eliminate variables. A test plan for, say, class balance (and I'm just pulling this out of my ass, but this is basically how I'd go about setting it up) would start with one class: make all of my testers play that one class, with identical specs, for several games of Rumble. This gives us a baseline for "player X is better than player Y in this scenario." Then we take a player who landed in the middle of the pack, have him change his spec to our designed "ideal" spec, and run the same number of matches again. Did that change affect his standing in a meaningful way? Now let's have everyone go to the ideal spec. Are we back to the original numbers, or did we find a magnifying or minimizing effect in the variations? This is where the internet is hugely useful, because you can quickly determine which specs people view as the most powerful, regardless of the intent of the design (and you can compare the two if they don't match). We'd want to do that with an intentionally stupid spec as well, and then with some random specs and some "try this and see if it works" specs, too.

After running, say, Gunslingers, we'd run this whole test with all Bladedancers, and then all Nightstalkers (identical weapons for everyone at every step of this process). Then we'd do the same for Titans and Warlocks. Only after all of this would we take our testers into class-vs-class and spec-vs-spec games. Again, the internet is helpful for metrics here, but you have to normalize for popularity of classes, whether or not the guns have ideal rolls, etc. Now you can tweak the classes around to get them all behaving about how you want them to relative to one another.
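The "did his standing change?" question is easiest to answer if you normalize each tester's score within the run, so lobby-wide shifts between runs don't confound it. A sketch with invented K/D numbers:

```python
from statistics import mean, stdev

def standing(scores, player):
    """Z-score of one player's K/D within the group for a single test run."""
    vals = list(scores.values())
    return (scores[player] - mean(vals)) / stdev(vals)

# Invented numbers: everyone on an identical spec, then B alone
# switches to the designed "ideal" spec for the second run.
baseline = {"A": 1.8, "B": 1.2, "C": 1.0, "D": 0.7}
ideal_run = {"A": 1.7, "B": 1.6, "C": 1.0, "D": 0.6}

shift = standing(ideal_run, "B") - standing(baseline, "B")
```

A clearly positive shift for the one player who changed spec - while everyone else held still - is evidence the spec itself mattered, not the player.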

Next you have to tweak guns. Start with designed perfect rolls on everything and balance the entire weapon class assuming such a roll. Because players *want* ideal rolls and don't want crap rolls, it's safer to start from a place of perfection than from a place of mediocrity - on a long enough timeline, everyone will end up with a perfect roll eventually, so that's where balance should be done. For weapons testing you'd want everyone running identical classes so you can, again, eliminate variables.

Then you need to balance the weapons with the class abilities... This takes a very long time.

Throughout this whole process, I'd probably be reassigning the very best testers (not necessarily the best players) to separate playtest teams, letting them get creative in their builds to see if they could find class and weapons combos that would break balance. And of course we need to stay in constant communication with the pvp sandbox designer to ensure that our sense of balance is meeting the design goals and vice versa.

So there's at least a month of testing for initial balance. BUT... once you attain that, you can make small tweaks on a weekly basis to reduce anomalies. Maybe ARs are getting way more kills than you expected. Adjust the range 1 yard closer. Or very slightly increase the recoil spread. If it's not enough, you can make further tweaks next week, and so forth, until the gun is performing where you want it to.
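That weekly loop is basically a one-knob feedback controller. A minimal sketch (the target band, range value, and step size are all invented for illustration):

```python
def weekly_tune(observed_kills_per_game, target=(9.0, 11.0),
                range_m=30.0, step=1.0):
    """Nudge one tuning knob a single small step toward the target band."""
    lo, hi = target
    if observed_kills_per_game > hi:
        return range_m - step   # overperforming: pull effective range in
    if observed_kills_per_game < lo:
        return range_m + step   # underperforming: push it back out
    return range_m              # in band: leave it alone this week
```

One small step per week, re-measure, repeat: the gun converges on its target without the whiplash of a wholesale rework.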

Generally speaking, I find Bungie's MO to be giant, sweeping sandbox changes every few months, which essentially resets the entire testing process and shifts the meta largely to favor (or disfavor) certain weapons. This would not be my desired approach. Testing for PvP is never good enough to match the real world, which is why I much prefer small, incremental changes to large meta shifts in any given patch.

