Friday, July 01, 2016

Setting up the role of testing

We have tried assigning testing to three different roles: the developers themselves, a tester in the loop, and a quality assurance officer.

Presupposing that manual testing is a good thing, we have discovered pros and cons for the different setups.

Developers as testers

Having the developers cross-test each other's issues has the benefit of transferring knowledge of the code and of best practices, and of improving the art of communication [arguably this same objective can be fully covered with peer-reviews --Ed.].

Developers, however, are not all equally good at testing, and probably not as good as a trained tester; it might not be the optimal use of resources. Some even find testing boring and feel that it breaks up their coding flow.

Tester in-the-loop

Taking the task of testing from the developers and placing it on a dedicated tester has the benefit of localizing the responsibility for testing. A tester sets up his/her own testing environment, and improves his/her processes through greater exposure to testing. The tester has a slightly different approach to the issues, seeking out those special corner cases that lie off the happy path.

There is a danger that the developers might grow sloppy and assume that the tester will catch all their errors. And if the throughput of developers and testers is unbalanced, bottlenecks can develop.

Quality assurance

Zooming out, what testing needs to provide is a bug-free customer product. So ultimately what matters most is that the delivered functionality of the product is tested. This reflects back on how requirements are specified, in particular that they are testable. Keeping issue testing with the developers and higher-level functional testing with the tester takes the tester out of the loop and places him/her on a parallel track with the developers.

Errors, however, typically surface later, and in Scrum often during another sprint. This can of course be costly, and it makes tracking issues and burn-downs a bit trickier.


The verdict was not in when I left my last employment, but my feeling was that testing should not be a reactive role (in-the-loop only) but more of a proactive one, and that the tester should be able to proxy the product owner to some extent.

Monday, January 18, 2016

Managing and leading software product development

I have been contemplating the roles of the development manager and the product manager in software development. In particular, who has which responsibility?

I recently read something that resonated with me, that leaders are needed to bring about changes and managers are needed to create order out of chaos (complexity). The point being made was that both roles are legit, but require different skills and have different end-goals. The tools of the manager are processes and structures while the leader uses motivation through values and emotions. The goal of the manager is to accomplish the plan and that of the leader to achieve the vision.

In software development it is necessary to decide on doing the right thing and then do it right. To achieve this we need both leadership and management. But how is that best put in place? It is tempting to consider this best done by having a leader-manager. I, however, think that is generally too much to ask of one person (even though such people no doubt exist). In my view the role of the development manager requires more management than leadership, and the opposite is true for product management.

Development managers should not be pure bureaucrats without vision and the ability to motivate people. They might have the goal of bringing structure to the work, but they will not succeed unless what they propose is accepted by the organization, and the teams in particular. To accomplish this they might need to find novel solutions and be strong in communicating and negotiating for them. The development manager must not undermine the leadership of the product manager by wielding his official authority (embedded in the org chart).

What I am getting at is that I don't think management deserves the derogatory connotation it has acquired. Leadership and management are both necessary ingredients of successful software product development.

Saturday, January 09, 2016

Team structures

For the last 11 years we have tried out different setups for the development teams, looking for the one right setup. Actually, I now think the right approach here (borrowed from others) is to treat team structure as a product development in its own right; in particular, to make incremental adjustments ('developments', 'experiments') and keep track of these through release notes and retrospectives.

This is an attempt to create these release notes/version history after the fact.


Version 1

We were three developers and we split the responsibility based on architecture. There was no overall lead, but we had an IEEE requirements document that we had created together with others.

This was a time when responsibility was very clear and code ownership strong.

Transition driver: It did not make sense to drive every project with a single developer owner; it would be too slow. New functionality needed to be implemented by a group.

Version 2

We introduced Scrum, with the product owner and scrum-master roles. A single team.

I think everyone felt good about having clear task lists. Responsibility for non-code work was also clarified, such as taking meeting notes, setting up and maintaining the development stack, and making budgeting decisions.

Transition driver: We started a new code-base, a new issue-tracking project, and a dedicated split of the developers, but kept the single Scrum-team structure. The thought was that having two teams would create too much overhead; we also thought knowledge transfer would be best served by both groups continuing to work as a single unit.

Version 3

It became obvious that during planning, demos and retrospectives, half the group kept silent while the other half went through their development, and vice versa. Time was being wasted and people were bored. So we split into two Scrum teams, although keeping a single product owner and scrum-master.

Transition driver: We started getting more and more load from the service department, and we were starting to see more customer-specific development projects. We started out by allocating a fixed amount of time and a dedicated developer for 3rd-level support issues. We called this role "batman" at first and later "the hat"; it rotated between the developers, one having the role for a full sprint.
Allocating a fixed time slot for 3rd-level support was rather artificial, since it was quite unpredictable how much time would be needed. Frequently this would mess up the burn-down chart and could erode the commitment to completing sprints.

Version 4

We created a specialized 3rd-level support + custom development team, called Quicksilver. We pulled all bug and custom development issues from the Scrum boards and put them on Quicksilver's Kanban board. We populated the team with the two most senior developers. They also got the go-ahead to do the refactoring they thought necessary to ease the future maintenance burden.

Transition driver: The developers on the Quicksilver team resigned. 

Version 5

We restructured the teams into three. This was actually the second attempt at going from one product owner to three. We discussed quite a bit whether we should do this split based on code-bases/architecture or based on cross-code-base features. The first option won because we thought code ownership was more important than having more flexibility in creating teams around end-to-end functionality. We were still doing quite a lot of maintenance, and that usually meant going into closely contained areas of the code. The latter option, however, has a greater appeal to top management, who figure that it will give them greater throughput of new (sellable) features.


There are some versions missing. One is where we, for the first time, went from a single product owner to three, first placing them within the teams and later moving them into the sales department, putting greater responsibility on the teams. The distribution of responsibility between product owners and the teams is a topic deserving a blog post of its own.

Also missing are our attempts at incorporating testing into the development: from letting the developers do it, to having a tester pick up issues as they completed within the sprints, to having complete sprints tested. This too is a topic that requires a dedicated blog post of its own.

Lastly, there is also the question of how the team organizes its work internally. Should everyone be able to do everything, or does it make sense to have some specialization, and if so, to what degree? We have done some experimentation here as well. Blog++.




Sunday, January 03, 2016

Useful measures for the development manager

After establishing a functional structure for the development department, this structure needs to be monitored (re-evaluated) to see if it still fits its purpose. For this monitoring task some measures are needed. Being an engineer, I tend to gravitate towards quantitative measures rather than qualitative ones.

At first my focus was internal to each team, in typical Scrum-master style, basically tracking the burn-downs within the sprints. We used hours for the estimation, so the focus was very much on hours delivered, as opposed to functionality delivered.

This way of estimating has been helpful in improving the planning of individual issues before heading into programming; it creates a communication platform for the team. It also gives the product owner some estimate of the ETA for increments to the product. However, it has not functioned well as a motivator; it has always felt rather artificial. Having an actual deadline where the team commits to someone external has been a much bigger motivator. The teams have been moving from these time-based metrics to story points; the verdict is not in yet on this new approach.
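For concreteness, an hours-based burn-down like the one described above is just a running subtraction of logged hours from the sprint's total estimate. This is a minimal sketch; the function name and the example figures are illustrative assumptions, not our actual tracker's data:

```python
# Hedged sketch of an hours-based sprint burn-down.
# The estimate and the daily logged hours below are made-up numbers.

def burndown(total_estimate_hours, hours_logged_per_day):
    """Return the remaining estimated hours after each sprint day."""
    remaining = total_estimate_hours
    series = [remaining]
    for logged in hours_logged_per_day:
        remaining -= logged
        series.append(remaining)
    return series

# Example: a 10-day sprint estimated at 120 hours.
print(burndown(120, [14, 10, 12, 8, 15, 11, 13, 9, 14, 10]))
```

Plotting such a series against the ideal straight line from the estimate down to zero is what makes unplanned work (like the 3rd-level support load mentioned earlier) visible as a bend in the chart.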

Recently my focus has moved from internal team metrics to a broader departmental scope. The metrics I track today are also based on hours, but accumulated per project, team and individual. The goal is to track:

i) Time spent vs. budgeted (estimated) per new development. Budgets are approved and, if over-run, need to be re-approved.

ii) Time spent per activity: product maintenance vs. new product development vs. custom development vs. dev-ops. These statistics can tell a lot about the state of the code-base (too much maintenance? is new development being starved?). The cost of dev-ops should not be confused with the cost of new development and maintenance. The same goes for custom development, which should generate the right profit margin.

iii) Time spent per team and per individual, which is important for determining the load on employees and whether it is fairly distributed.

These metrics are calculated monthly.
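The monthly roll-up of points i) to iii) amounts to grouping logged hours by a few keys and comparing project totals against their approved budgets. The sketch below illustrates this; the record fields, project names and figures are assumptions for the example, not any real tracker's schema:

```python
from collections import defaultdict

# Hedged sketch: accumulate logged hours per project, team, individual
# and activity, and flag projects that have over-run their budget.
# All field names and numbers are illustrative assumptions.

def monthly_report(entries, budgets):
    """entries: dicts with 'project', 'team', 'person', 'activity', 'hours'."""
    per_project = defaultdict(float)
    per_team = defaultdict(float)
    per_person = defaultdict(float)
    per_activity = defaultdict(float)
    for e in entries:
        per_project[e["project"]] += e["hours"]
        per_team[e["team"]] += e["hours"]
        per_person[e["person"]] += e["hours"]
        per_activity[e["activity"]] += e["hours"]
    # (i) projects exceeding their approved budget need re-approval
    overruns = {p: hours - budgets[p]
                for p, hours in per_project.items()
                if p in budgets and hours > budgets[p]}
    return per_project, per_team, per_person, per_activity, overruns

entries = [
    {"project": "new-dev", "team": "A", "person": "dev1",
     "activity": "new product development", "hours": 60},
    {"project": "new-dev", "team": "A", "person": "dev2",
     "activity": "new product development", "hours": 70},
    {"project": "support", "team": "B", "person": "dev3",
     "activity": "product maintenance", "hours": 40},
]
budgets = {"new-dev": 100}
print(monthly_report(entries, budgets))
```

Running such an aggregation once a month gives the per-activity split of point ii) and the per-team/per-person load of point iii) from the same raw time entries.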