Statistics and Analysis
- $VALUE1$ were headline features
- $VALUE1$ were small features or balance changes
- $VALUE1$ were bug fixes
- $VALUE1$ were engineering changes
The most headline features I designed in a single update was $VALUE1$, during $UPDATENAME$.
The most balance changes and small features I designed in a single update was $VALUE1$, during $UPDATENAME$.
The most bugs I fixed in a single update was $VALUE1$, during $UPDATENAME$.
The most engineering changes I made in a single update was $VALUE1$, during $UPDATENAME$.
I fixed an average of $VALUE1$ bugs per update cycle!
This graph shows the aggregate number of changes made in an update divided by the days since the previous update. For example, if an update had 50 total changes across all categories and came out 50 days after the previous update, that update's aggregate score would be 1. That's not the full story, though: this data also accounts for the fact that some changes launched post-release, since those were being worked on during the next update's release cycle. So the true number of changes in a given update is actually the number of launch changes plus the total number of patch-released changes from the previous update. In our earlier example, if those 50 changes were actually 20 launch changes and 30 patch changes, and the previous update had 5 patch changes, the update's true aggregate score is 0.5, since (20 + 5) / 50 = 0.5.
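To make that concrete, here's a minimal sketch of the scoring in Python. The `Update` record and its field names are just illustrative shorthand for the data described above, not how I actually store it:

```python
from dataclasses import dataclass

@dataclass
class Update:
    name: str
    launch_changes: int   # changes shipped at launch
    patch_changes: int    # changes shipped in post-release patches
    days_since_prev: int  # days since the previous update shipped

def aggregate_scores(updates: list[Update]) -> dict[str, float]:
    """Compute the adjusted aggregate score for each update.

    True change count = this update's launch changes
                      + the *previous* update's patch changes,
    since patch changes overlap with the next update's release cycle.
    """
    scores = {}
    prev_patch = 0
    for u in updates:
        true_changes = u.launch_changes + prev_patch
        scores[u.name] = true_changes / u.days_since_prev
        prev_patch = u.patch_changes
    return scores

# The worked example from above: 20 launch changes plus the previous
# update's 5 carried-over patch changes, released 50 days later -> 0.5.
example = [
    Update("Previous Update", launch_changes=40, patch_changes=5, days_since_prev=30),
    Update("This Update", launch_changes=20, patch_changes=30, days_since_prev=50),
]
print(aggregate_scores(example))  # 'This Update' scores 25 / 50 = 0.5
```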
It should also probably be noted that all of this analysis on the basis of "productivity" is, by definition, exceptionally flawed, because it gives a large feature that took months of effort and many hands the same weight as a two-minute fix for a typo in a decade-old piece of content. It also downplays the tremendous, amazing efforts of my team, without whom I'd never be able to do any of the things that I do. That being said, I like graphs - so here we are!