KPI Programmer
Team performance, on the other hand, is far more visible. Perhaps the best way to track it is to ask:
does this team consistently produce useful software on a timescale of weeks to months? This is
simply one of the principles behind the Agile Manifesto restated: "Deliver working software
frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."
A team that produces useful software on a regular basis is productive. A team that doesn’t should be
asked why not.
And since teammates tend to be well aware of each other’s contributions (whether measurable or
not), any serious failings in individual productivity can be discovered by means of good
organizational habits, such as having frequent one-on-one interviews between managers and their
direct reports; regularly gathering honest, anonymous feedback; and encouraging each team
member to exercise personal accountability by reporting their accomplishments and taking
responsibility for their failures.
Productivity tracking tools and incentive programs will never have as great an impact as a positive
culture in the workplace. And when accountability and healthy communication are baked into this
type of culture, critical moments for productivity will quickly become visible to the people most able
to address them.
You should only measure what you really, truly want, whether or not it can be drawn as a line
graph. For some, it can be frustrating to do or manage work that can’t be reduced to a number. But
with work as nuanced and abstract as software development, the further we entrench ourselves in
details, the more we defeat our own purposes. Useful software is our goal, and we shouldn’t settle
for (or measure) anything less.
The advice of Ramaswamy and Davis is consistent with what I feel is the most important agile KPI.
Ultimately, teams should first measure business outcomes and then qualify them with metrics that
illustrate targeted behaviors. Presenting KPIs that combine outcomes with delivery qualifications
helps answer the who, what, why, and how of delivering high-quality software. Such KPIs are far
more meaningful than a grab bag of low-level metrics.
Once these KPIs are defined, technology and process improvements gain a lot more context
and meaning for both software developers and the business. For example:
A development team asked to improve customer satisfaction as a business outcome
may elect to focus on improving mean time to repair (MTTR) and the defect escape
rate (both are sketched after this list).
As development teams demonstrate improvements to these metrics, they should show
where they are investing in productivity, such as automation, documentation, or
reducing technical debt.
As development teams deliver improvements, they should measure customer
satisfaction and report on their selected metrics.
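As a minimal sketch of those two supporting metrics, here is how MTTR and the defect escape rate might be computed in Python. The defect records and field names are hypothetical, not from any particular tracker:

```python
from datetime import datetime, timedelta

# Hypothetical defect records; field names are illustrative.
defects = [
    {"opened": datetime(2024, 3, 1), "resolved": datetime(2024, 3, 3), "found_in": "production"},
    {"opened": datetime(2024, 3, 2), "resolved": datetime(2024, 3, 2), "found_in": "qa"},
    {"opened": datetime(2024, 3, 5), "resolved": datetime(2024, 3, 9), "found_in": "production"},
]

def mean_time_to_repair(defects):
    """Average time from a defect being opened to being resolved."""
    repairs = [d["resolved"] - d["opened"] for d in defects if d["resolved"]]
    return sum(repairs, timedelta()) / len(repairs)

def defect_escape_rate(defects):
    """Share of defects found in production rather than before release."""
    escaped = sum(1 for d in defects if d["found_in"] == "production")
    return escaped / len(defects)

print(f"MTTR: {mean_time_to_repair(defects)}")
print(f"Defect escape rate: {defect_escape_rate(defects):.0%}")
```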
KPIs that combine business outcomes and developer productivity metrics help answer the
question, “Is the team delivering on prioritized business outcomes while improving their
productivity?” Given that teams can’t improve everything at once, organizations should look
to champion teams that make prudent decisions that deliver outcomes while improving
productivity.
Meeting times
Stick to standard time limits for Scrum meetings. If you find your team is extending the
Standup meeting times on a regular basis, then the stories in the Sprint were not written or
prepared sufficiently before the start of the Sprint. If your Sprint Planning meetings are
taking longer than expected, the team needs to spend more time discussing stories during
Backlog Grooming.
If all team members do not fully participate in all Scrum ceremonies, the length of a Scrum
meeting is not a true indicator of the health of a project. Any concerns a team member has
about meeting times or participation should be discussed in the Retrospective. During the
next Sprint, the team can course correct.
Measuring meeting times: Provide a time-tracking tool that makes it easy to record time
separately for different meeting types. At the end of a Sprint, review the meeting times for
each meeting type. Address the good, the bad, and the ugly findings during the Retrospective.
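If the time-tracking tool can export its entries, that end-of-Sprint review can be largely automated. A minimal sketch in Python, with hypothetical entries and per-type limits (set the limits to whatever your team has agreed on):

```python
from collections import defaultdict

# Hypothetical time-tracking entries for one Sprint: (meeting_type, minutes).
entries = [
    ("standup", 15), ("standup", 25), ("standup", 40),
    ("sprint_planning", 120), ("backlog_grooming", 60),
    ("retrospective", 45),
]

# Agreed per-type limits in minutes; these numbers are assumptions.
limits = {"standup": 15, "sprint_planning": 120,
          "backlog_grooming": 60, "retrospective": 60}

totals, counts = defaultdict(int), defaultdict(int)
for meeting_type, minutes in entries:
    totals[meeting_type] += minutes
    counts[meeting_type] += 1

for meeting_type, total in totals.items():
    avg = total / counts[meeting_type]
    flag = "  <-- over limit" if avg > limits.get(meeting_type, avg) else ""
    print(f"{meeting_type}: avg {avg:.0f} min over {counts[meeting_type]} meeting(s){flag}")
```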
The best team requires little prodding from the ScrumMaster as the workday clicks by. An
established team should understand the flow of a Sprint board. It's like playing mini golf: if you
putt and only make it halfway down the green, you're going to have to putt again until you reach
the hole. The ScrumMaster should not have to nudge a team member to make that next putt.
Team members should be clear about their role in advancing the subtasks in a story. A good rule
of thumb is that no single subtask should be estimated at more than 4 hours; that way, the whole
team isn't waiting around for one person to complete a subtask. The team should have standard
rules for signaling that a subtask is complete, so everyone knows when to take on the next one.
Measuring time spent on a subtask: Throughout the day the ScrumMaster can use a
Burndown Chart to understand the progress of the Sprint. If things look “off,” they can head
to the task board to see which subtasks may be holding up progress.
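A minimal sketch of that check, assuming the subtask estimates and remaining hours can be pulled from the board (the story keys and numbers here are made up):

```python
# Hypothetical subtask records: estimated hours and hours still remaining.
subtasks = [
    {"story": "PAY-12", "estimate": 4, "remaining": 0},
    {"story": "PAY-12", "estimate": 3, "remaining": 3},
    {"story": "PAY-15", "estimate": 4, "remaining": 2},
]

total = sum(t["estimate"] for t in subtasks)
remaining = sum(t["remaining"] for t in subtasks)
print(f"Burndown: {remaining}/{total} hours remaining")

# Surface the subtasks that may be holding up progress.
for t in subtasks:
    if t["remaining"] == t["estimate"]:
        print(f"Not started: {t['story']} subtask ({t['estimate']}h)")
```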
The client does not need to have a pulse on the day-to-day productivity of the team. The true
indicator of efficiency and productivity will be if new features are being introduced at or
before the time the client expects them.
A successful team will provide the client with new features to test, play with and discuss on a
regularly scheduled basis. Keep in mind that a client is likely using new features to advance
the product within the company or the marketplace. They should be able to trust that at the end of
every Sprint the team will demonstrate one or more features that will improve the client's
product pitch.
Measuring new feature completion: Maintaining a Version Report provides a clear picture of
the progress of a team and the development of the product. The Product Owner can use the
report to track the projected release date for a defined version. The Product Owner should
work with the client to determine which new features are to be included in a version. The
Version Report can then be used as a tool for discussion with the client at the end of every
Sprint, to show progress and manage expectations.
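The projection at the heart of such a report can be reduced to remaining scope divided by average velocity. A rough sketch, with hypothetical numbers:

```python
from datetime import date, timedelta

# Hypothetical Version Report inputs.
remaining_points = 85        # story points left in the target version
velocity_per_sprint = 21     # average points completed per Sprint
sprint_length = timedelta(weeks=2)
next_sprint_start = date(2024, 6, 3)

sprints_left = -(-remaining_points // velocity_per_sprint)  # ceiling division
projected_release = next_sprint_start + sprints_left * sprint_length
print(f"Sprints remaining: {sprints_left}, projected release: {projected_release}")
```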
Burndowns, for instance, measure how many development tasks are completed over
time. Time is usually measured in sprints, which are typically two weeks long.
Each sprint starts with a set number of tasks, and the burndown shows whether tasks
are being completed fast enough to stay on the two-week schedule.
Agile burndowns help show ROI and progress in smaller bursts rather than over the
course of a long-term project.
Another efficiency measurement for applications in production is how frequently
defects are raised and how long they remain unresolved.
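Both numbers fall out of a simple pass over the defect log. A minimal sketch, assuming a hypothetical list of production defects with raised and resolved dates:

```python
from datetime import datetime

now = datetime(2024, 4, 1)

# Hypothetical production defect log; resolved is None while a defect is open.
defects = [
    {"raised": datetime(2024, 3, 1), "resolved": datetime(2024, 3, 4)},
    {"raised": datetime(2024, 3, 20), "resolved": None},
]

open_defects = [d for d in defects if d["resolved"] is None]
ages = [(now - d["raised"]).days for d in open_defects]
rate = len(defects) / 4  # defects raised per week over a 4-week window

print(f"Defects/week: {rate:.1f}, open: {len(open_defects)}, "
      f"oldest open: {max(ages, default=0)} days")
```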
Shipping software that works but doesn't create value is not a good measure. The best metrics for
measuring the productivity of your software development are the metrics you use to measure
business results. And the best measure of software development efficiency is how quickly your
software improves business results.
To wit:
Customer satisfaction: The most important thing for us is that customers are happy
with the work we are doing. Regular check-ins to ensure that the client feels we
are making adequate progress are crucial for our team. The Scrum process
that we use at AndPlus ensures that we demonstrate progress for clients every two
weeks, and this gives us a perfect touch point with them.
Peer Code Reviews: Every line of code that gets put into a project at our firm goes
through a peer code review, and our senior-most members of the technical team will
also spot-check projects to ensure that code quality is being maintained. We will also
compare the amount and quality of code written to the amount of time spent and
logged on an issue — this gives us a feeling of how productive (or not) an engineer is
being.
QA Kickback Rate: Once a ticket is dev-complete, we count on our engineers to
ensure that the feature works. Once they are confident, they will push the issue to our
QA team for review. Kickbacks from QA to the engineering team are common, but if
we see a significant number of issues (especially simple ones) being kicked back
more than once, that is a leading indicator of problems with the engineering team's
effectiveness and productivity (a rough version of this check is sketched after this list).
Time Logs versus Historical Data: After several years of writing custom software,
we have thousands of completed issues in our JIRA instance, all associated with time
logs. We can use this data to track historical time records for story point levels (e.g.,
the median 2-point user story takes n hours from birth to death). While any one user
story may take far longer or far less time than the median, if we see a large number
of stories taking longer than average, it is an indicator that the team may not be as
performant as it should be. On the other hand, if a team is consistently taking less
time than the median, it indicates either a highly performant team or a team that is
padding estimates.
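A rough Python sketch of the last two checks, the QA kickback rate and the historical-median comparison. The issue export, field names, and the 1.5x outlier threshold are all assumptions for illustration:

```python
from statistics import median

# Hypothetical export of completed issues: points, hours logged, QA kickbacks.
issues = [
    {"points": 2, "hours": 6.0, "kickbacks": 0},
    {"points": 2, "hours": 7.5, "kickbacks": 1},
    {"points": 2, "hours": 18.0, "kickbacks": 3},
    {"points": 5, "hours": 20.0, "kickbacks": 0},
]

# Median hours per story-point level, used as the historical baseline.
by_points = {}
for issue in issues:
    by_points.setdefault(issue["points"], []).append(issue["hours"])
baselines = {pts: median(hrs) for pts, hrs in by_points.items()}

# Flag issues that ran well past the baseline for their point level.
for issue in issues:
    if issue["hours"] > 1.5 * baselines[issue["points"]]:
        print(f"Outlier: {issue['points']}-pointer took {issue['hours']}h "
              f"(median {baselines[issue['points']]}h)")

# QA kickback check: share of issues kicked back more than once.
repeat = sum(1 for issue in issues if issue["kickbacks"] > 1)
print(f"Repeat-kickback rate: {repeat / len(issues):.0%}")
```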
“The truth is, there’s no good way to measure software development efficiency and productivity.
But you can measure things that have a positive or negative effect on productivity….”
1. Bugs. Not how many, as there will always be some bugs, but rather how much time you're
spending on them each week. This includes both fixing issues once you've identified them and
troubleshooting issues when they come up. If it's more than 20% of your engineering time,
you might have a quality or architecture problem that is a drain on your productivity (a rough
calculation of all three measures is sketched after this list).
2. Uptime. Related to the above. If you have a product on the internet, how much of the time
is it unavailable to customers? Every time that happens it’s a distraction to the engineering
team (and a cost to your business!).
3. Time from code complete to done. This is a tough one to measure, but incredibly
impactful if you can improve it. When a developer is done writing the code for a new
feature, there are always some steps before that feature is available to your customers.
Those steps could be a code review, running the build including automated tests, QA and/or
User Acceptance Testing, and the actual deployment/release process. Any one of those steps
could surface an issue that requires the developer to go back into the code to fix or change
something. Depending on how long this process takes and how many times the developer has
to get back into the code, it can be a massive drain on productivity: the developer may have
moved on to something else and has to context-switch back and forth, rebuilding the
context every time.
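A rough calculation of all three measures, with hypothetical numbers throughout:

```python
from datetime import datetime

# 1. Share of engineering time spent on bugs this week.
bug_hours, total_hours = 34.0, 160.0
share = bug_hours / total_hours
warning = "  <-- above the 20% warning line" if share > 0.20 else ""
print(f"Bug time: {share:.0%}{warning}")

# 2. Uptime for the month: available minutes over total minutes.
downtime_min, month_min = 95, 30 * 24 * 60
print(f"Uptime: {1 - downtime_min / month_min:.3%}")

# 3. Time from code complete to done: last commit to production release.
code_complete = datetime(2024, 3, 4, 15, 0)
released = datetime(2024, 3, 8, 10, 30)
print(f"Code complete to done: {released - code_complete}")
```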
“Measuring software development efficiency and productivity depends on the type of
organization…”
Consulting firms will tend to measure efficiency and productivity per project more
quantitatively since every hour will be billable. Software product companies might not be
able to measure efficiency and productivity as easily, so different project management
methodologies can help. We use the Scrum methodology at Badger, which includes a built-in
way of measuring software development efficiency and productivity at the team level using a
team’s velocity (Scrum stresses team collaboration). Velocity is basically how much work a
team can complete in a given period; averaged over time, it becomes a good baseline for
measuring how efficient or productive a team is.
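Computing velocity from past Sprints is straightforward; what matters is using enough Sprints to smooth out one-off spikes. A minimal sketch with hypothetical point totals:

```python
# Completed story points from recent Sprints (hypothetical).
completed = [18, 22, 19, 25, 21]

velocity = sum(completed) / len(completed)
print(f"Average velocity: {velocity:.1f} points/Sprint")

# A rolling average over the last few Sprints reflects recent performance.
rolling = sum(completed[-3:]) / 3
print(f"Rolling 3-Sprint velocity: {rolling:.1f}")
```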
With that said, any single metric will never perfectly reflect reality, so at best they are an
estimate instead of a perfect measure. A better measure of software development efficiency
and productivity is simply to look at how well the business goals are being met. Instead of
counting hours or trying to squeeze every last drop from a single hour, you can instead look
at how the software development efforts contribute to meeting the overall business goals.
That measures the efficiency of a whole organization instead of just a single work group.