The Age of Agile

This book is about agile management as a way to conduct business — not just software development.

In the first chapters, the author presents agile management by characterizing it with three “laws”: the law of the customer, the law of the small team, and the law of the network. Roughly, I would sum them up as: companies should put creating customer value at the center of all activities, they should embrace decentralisation and autonomous teams that course-correct, and they should embrace fluid communication throughout the organisation as well as leverage network effects.

The key thesis of the book is that agile management is a fundamental shift from traditional management and is thus akin to a revolution. Whereas traditional management is bureaucratic, top-down, short-term-focused, cost-oriented, and seeks to defend existing innovation to stay competitive, agile management is collaborative, decentralized, long-term-oriented, and seeks to create new innovations to stay competitive.

The laws capture many ideas. The concept of iteration is addressed in the law of the small team, and I wondered whether it would have deserved a “law of feedback” of its own. The law of the network also covers two things: fluid communication, and network effects with platforms like Amazon Marketplace, the Apple App Store, or AWS. I also wondered whether this second idea would have been better extracted into a “law of the platform”. But keeping the message simple with three laws makes the conceptual framework easier to grasp.

After the chapters about the laws of agile management come a few chapters about implementing agile management, in the form of a couple of experience reports. These are mostly chapters of the form “doing X worked for us”, each drawing its material from one primary source. They make the ideas presented earlier a bit more concrete, but they weren’t that memorable.

Finally come a few chapters where Denning returns to the law of the customer, this time in the form of a discussion of the shareholder value model and its shortcomings. The shareholder value model, in the words of Denning, leads to financial engineering and short-term, cost-oriented management strategies which do not create new customer value, but rather exploit the existing customer value. These chapters read mostly like a rant, which I admittedly enjoyed, just like I enjoy a good rant from Steve Yegge. But in the context of the book, it felt a bit too one-sided or simply too long.

This book was for me like watching a science-fiction movie with a good idea but lots of plot holes. I liked the framework with the three laws, which presents agile principles in a new way and gives new insights. I also liked that Denning, who doesn’t come from the tech world but from the business world, is ambitious about agile management and sees it at a macro scale through the lens of economic theory and business strategy, not just at the operational level. The conclusion, for that matter, is a good summary of his view. On the other hand, the chapters lacked a clear connection. There are grandiose announcements about the promise of Agile and these new ideas we should embrace, or return to, but sometimes the content lacks substance. The emphasis on creating customer value felt right, though. It’s very much in the spirit of Amazon’s principle of “customer obsession” or Y Combinator’s motto “make something people want”. I will remember the book for this emphasis.

No More QA

Companies have traditionally organized software-related activities in three silos: Dev, Test/QA, and Operations.

The QA effort happens after a long phase of development, resulting in bug spikes and making it difficult to plan the work of the development teams during this period.

When companies were engineering software “piecewise”, this was the only way. Only when all pieces were finished could you integrate them and test features end-to-end. We have now moved, however, to an approach where products and teams are organized so that features can be delivered end-to-end incrementally. The whole product is engineered iteratively.

Evidence suggests that a centralized QA phase does not bring additional quality in this case, but rather actively harms quality.

As a result, they hired a VP of QA who set up a QA division. The net result of this, counterintuitively, was to increase the number of bugs. One of the major causes of this was that developers felt that they were no longer responsible for quality, and instead focussed on getting their features into “test” as quickly as they could.

There is no such thing as a devops team, Jez Humble

A similar story is told in The Age of Agile about implementing an agile organization at Microsoft.

There was a lot of learning at the start of the Agile transformation at Microsoft. “In the first sprints,” says Bjork, “there was agreement on doing three-week sprints. The leadership signed off on the idea of Agile, but they were anxious as to how it was going to work. They planned for ‘a stabilization sprint’ after five sprints. However, that encouraged some teams to think, ‘No need to worry about bugs, because we have the stabilization sprint!’ A lot of bugs were generated and all the teams had to pitch in to help fix them.

“In effect,” he says, “we had told people to do one thing, but we created an environment that prompted some teams to do the opposite. Who could blame them? The teams told us, ‘Don’t ever do that to us again!’ It was an example of unintended consequences.”

The Age of Agile, Stephen Denning

For once, fixing the problem is easy. Just get rid of your QA phase (not the testers!). Make it clear that there is no additional safety net and that teams must ship features that are “done, done, done.”

Autonomy and Microservices

Discussions about monolith vs. microservices are hotter than ever. Usually, a monolith is a synonym for “big ball of mud” in these discussions. It of course needn’t be so. A modular monolith is perfectly possible. Also, microservices aren’t an entirely new idea either. As some say, it’s SOA done right.

The usual argument in favor of microservices is that autonomy is a good thing: teams can pick the most appropriate tools, develop in parallel without friction, and scale services independently of each other. The main drawback is an increased complexity of the overall system, primarily on the operations side but also on the tools side.

The usual argument in favor of a modular monolith is that it’s simple: the code base can be modularised to enable parallel development, and the tech stack is standardized for everyone, which reduces complexity. The main drawback is that the release cycle is the same for everyone, which implies some coordination and possibly reduces the release cadence. The risk of inadvertent coupling is also higher, since modularisation boundaries are internal and not external as with microservices. A small check can keep these internal boundaries honest, as sketched below.
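
To make that coupling risk a bit more concrete, here is a minimal sketch of such a check in Python. The top-level package names (billing, shipping, catalog) are hypothetical, and a real project would more likely rely on an existing import-checking tool; the point is only that internal boundaries can be made visible and enforced cheaply.

    # boundary_check.py -- a sketch of guarding module boundaries inside a
    # modular monolith. The top-level package names are hypothetical; only the
    # facade of a package (its top-level name) may be imported from outside it.
    import ast
    import pathlib
    import sys

    MODULES = {"billing", "shipping", "catalog"}  # hypothetical monolith modules


    def violations(root="."):
        """Yield (file, import) pairs where a file reaches into the internals
        of another top-level module instead of going through its facade."""
        for path in pathlib.Path(root).rglob("*.py"):
            owner = path.parts[0] if path.parts[0] in MODULES else None
            tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
            for node in ast.walk(tree):
                if isinstance(node, ast.ImportFrom) and node.module:
                    parts = node.module.split(".")
                    # e.g. "from billing._invoices import x" inside shipping/
                    if parts[0] in MODULES and parts[0] != owner and len(parts) > 1:
                        yield str(path), node.module


    if __name__ == "__main__":
        found = list(violations())
        for file, module in found:
            print(f"{file}: reaches into the internals of {module}")
        sys.exit(1 if found else 0)

Run from the repository root, it exits with a non-zero status when a file reaches past another module’s facade, so it can serve as a cheap guard in the build.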

The distinction between microservices and monolith is a continuum, though. You can, for instance, have microservices with a standardized tech stack, or a distributed monolith with the ability to scale some parts independently.

It’s up to you to decide which levels of autonomy you want.

Autonomy: benefits and perils

Internal quality standards
  Benefits:
  • Better fit of design principles, coding conventions, or testing strategies to the problem domain
  • Increased productivity
  Perils:
  • Code and people “mobility” is weakened
  • Adherence to conventions is weakened because there are many of them
  • Best practices keep being reinvented; each team goes through the same path of failures and lessons learned
  • Best practices already in place turn out to be sub-optimal

Scaling
  Benefits:
  • Individual parts of the system can be scaled independently
  • Elasticity
  Perils:
  • Performance of the system is harder to comprehend
  • Overall operations get harder

Tech stack
  Benefits:
  • Better fit of the technologies to the problem domain
  • Increased productivity
  Perils:
  • Code and people “mobility” is limited
  • Long-term support of the technologies is harder to plan
  • More fragility to changes of licence models
  • No economy of scale for lifecycle activities; every team must manage its own lifecycle

Release cycle
  Benefits:
  • Shorter time-to-market
  • Shorter feedback loops
  Perils:
  • Versioning hell

The mindset that led to large monoliths is a mindset rooted in economy of scale. Development, testing, database, and operations work is organised in silos. The idea is that the effort is reduced if the product is large and infrequently released. You do things once, at large scale, with specialists.

With microservices, the effort for a single microservice is small enough that one cross-functional team can undertake development, testing, database, and operations work all by itself. There is less economy of scale, but also less coordination needed.

“Because you can doesn’t mean you should.” Deviations from established practices or technologies can have attractive payoffs, but they also come with some risk. Teams with lots of autonomy should be aware of the long-term consequences of their choices and balance them against the short-term benefits.

Services need complete teams while they are actively developed. Over time, some services will stabilize and their maintenance will be concentrated in fewer teams. Conversely, services might grow and require splitting across multiple teams. In either case, team ownership might change over time. If the technologies are very heterogeneous, this becomes more challenging.

Ultimately, how much autonomy you want to give to the teams is an organizational choice, not a technical one. If you trust your organisation to be able to work with autonomous teams that still converge toward shared goals, microservices might work for you. If the organizational maturity isn’t there, don’t go for microservices: you would only translate your technical issues into people issues, which are even harder to solve.

10x

Fred Brooks started it all. In The Mythical Man-Month, he quotes a study saying

individual difference between low and high performers can vary by an order of magnitude

Since then this myth of 10x productivity difference has persisted in our industry.

Nowadays it’s best seen in the use of words like rockstar, guru or wizard in job descriptions.

But is it really a myth, or reality?

It’s undeniable that individual differences exist. Not everybody can write an operating system kernel, a concurrent collection library, or a cryptocurrency protocol. These achievements are examples of outstanding technical expertise.

Like in sports, the distribution of talent is skewed, and there are outliers that outperform others.

But here’s the catch: the 10x developer isn’t working 10x faster; he’s thinking differently. The 10x developer finds new ways to address problems.

He doesn’t deal with complexity better. He finds ways to avoid complexity altogether. Not occasionally, but systematically, as part of his work ethic.

A 10x developer is also a force multiplier. His actions make the work of several people easier. He inspires others to achieve excellence and clone his habits. The payoff can go above 10x.

So, myth or reality?

For me, reality. But such developers are very rare. Over the last 10 years I’ve only met one.

The Essence of Scrum

The essence of Scrum is to ensure progress. The formal elements of the framework (the retrospective, the review, the daily standup, etc.) are not ends in themselves but ways to ensure that progress happens.

It may seem simplistic to reduce Scrum to the mere fact of ensuring progress, but ensuring progress is not that easy, and Scrum is an effective tool to do it.

To prove this point, just think of what the opposite of progress means: being stuck. A project can get stuck for many reasons. Some symptoms include:

  • Work is half done or needs to be redone
  • Work is unclear and time is spent discussing it rather than doing it
  • Work wasn’t needed (people work on the wrong stuff)
  • Work can’t be done (because of dependencies, knowledge, etc.)

When a project is stuck, people work, but the project as a whole doesn’t move forward. Time is wasted.

Scrum prevents wasting time by maintaining constant pressure on delivery and keeping the amount of work in progress low (“start finishing and stop starting”). It doesn’t matter how small the work item is. Actually, the smaller the better, since it favors focus and quality.

Scrum is a framework for micromanagement, but without a micromanager. The team micromanages itself (i.e., “self-organisation”) and decides itself which tasks to perform. Taskification happens mostly during Sprint planning, but then continues throughout the entire Sprint as the team updates and refines the tasks to be done. And then does them.

The goal is to move forward, to overcome difficulties, to get concrete results, to make progress. For this you want the whole team to engage and people to help each other. You want your team to be more than the sum of its individuals.

I want teams emerging from the daily standup saying things like, “Let’s nail this. Let’s do this.”       — The Origins of the Daily Standup, Jeff Sutherland

People want to make progress fast, but software development is so complex that the risk is not making progress too slowly but making no progress at all. As long as you can ensure that some progress happens and you’re not compromising quality, you’re on the right track.

Gall’s Law

Gall’s law states that complex systems can only be the result of an evolutionary process, and not the result of a design from scratch:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.  – John Gall (1975, p.71)

A complex system evolves from simpler systems by adding successive deltas of complexity. The only way to build a complex system is through iteration. That’s what evolution is about.

Iterations enable us to get feedback, correct and improve the system. See what works and what doesn’t. Fix mistakes.

The system must be working after each iteration. You can add new features, as long as they refine the existing system and keep it running.

A tadpole becomes a frog by developing its legs, then its arms, and finally shrinking its tail. The frog’s legs, arms and body aren’t developed individually and assembled at the end. That’s not how evolution works.

Also, you cannot evolve everything at once, since in the meantime the system might not work. A tadpole develops its legs, then its arms, and finally shrinks its tail. Each iteration needs focus.

Gall’s law is relieving. It’s OK not to be able to handle all the complexity at once. And it’s not only you, it’s everybody.

A complex system cannot be built using only theory and first principles, because there will always be details of the environment that we are not aware of. The only way to make sure something will work is to test it for real. Practice trumps theory.

Obsessing over getting it right the first time is counterproductive. Just start somewhere and iterate. Too many unknowns block our creativity. But once we have something concrete, ideas for improvement come easily.

The tadpole also teaches us a lesson here: it first develops a tail, which then disappears later on. The tail is a good idea in the water, but not so much on the ground. You will have to reinvent yourself occasionally.

Feedback Loops

Making sure that your project will be successful really boils down to two things: 1) get things done, 2) get feedback.

It’s obvious why getting things done matters: if you want to move forward, you need to get things done. Getting things done is however not a sufficient condition to be successful. You could be moving in the wrong direction! To be successful you need to constantly get feedback and steer the ongoing progress towards the goal.

These two principles are at the heart of the agile manifesto: move one step forward, adjust, and repeat. That’s the best strategy to ensure that what’s produced is really helpful for the project.

A step can be small or big. It can be the implementation of a single method with a peer review as feedback. It can be a refactoring with the automated execution of unit tests as feedback. It can be the implementation of a feature with the customer demo as feedback.

Scrum and XP are very different but are both considered “implementations” of the agile manifesto, since both promote moving forward and getting feedback in their own way.

Scrum is a technology-agnostic process to get things done. The work is split into stories and tasks, which are small actionable items. To keep the momentum high, team members should focus on one task at a time. Feedback is obtained during the daily standup, the sprint review, and the sprint retrospective. You can use Scrum to conduct any project, not just software development.

XP, on the other hand, is organised around technical software practices. It emphasizes pair programming, unit testing, continuous integration, refactoring, and collective code ownership. The first three practices are nothing else than ways to get feedback about the code. Refactoring and collective code ownership are ways to guarantee that the team can always move forward.

Unsurprisingly, XP and Scrum complement each other well. But they can also be complemented with other elements of your own. If something helps you get things done or get feedback better, add it.

Make sure that feedback doesn’t turn into noise, though. If feedback is not actionable, it’s not truly feedback. What you want is feedback that helps you get your next thing done in a better way. That’s the loop.

Software And Tactics

The image of a software engineer is that of a quiet and analytical guy working in isolation on some green-on-black code. There is some truth in this image. Studies have shown that interruptions are bad for programming, and that engineers need long streaks of uninterrupted time to fully immerse themselves in a development activity. Projects are frequently structured in modules that are owned by individual programmers. In this solo view of software engineering, the less communication, the better.

In the agile view of software engineering, people and communication are at the center. You succeed as a team, or fail as a team. The code is not owned by individuals, but collectively by the team (chapter 10, Extreme Programming Explained). Work is organized into short actionable tasks, which prevents multitasking and ensures high focus and high reactivity. These strict rules are the key ingredients of hyper-productivity.

Agility is the result of the combination of several elements, such as unit-testing, continuous integration, refactoring, etc. Amongst these elements, collective ownership is one of the hardest to implement. In contrast to the other elements, collective ownership requires a change of attitude, not just a change of technical practices. It requires moving from a solo mindset to a collective mindset.

To better understand how a collective mindset can be implemented, one should look at other professions where a collective mindset is critical. This is the case, for instance, for sports teams, firefighters, police officers, or SWAT teams (Special Weapons And Tactics).

A SWAT team’s effectiveness depends on its excellence in several practices:

Communication

SWAT team members communicate the actions they engage in, the risks, and the impediments. Communication must be concise and adhere to a common vocabulary. The team lead oversees and coordinates the activities if necessary.

These considerations apply pretty much as is for software engineering:

“I’m about to launch the stress test of the web portal. Do you copy?”
“Copy that. I’m monitoring the logs.”

Execution

SWAT team members train standard practices and procedures together to improve execution, such as the manipulation of weapons or hardware. Practice makes perfect.

Software engineers should similarly train standard practices to improve execution, and master their tools. Sample practices to train include:

  • Navigating in the IDE
  • Synchronizing and merging code
  • Updating database (scripts, data, etc.)
  • Deploying software
  • Running various kinds of tests
  • Assessing code quality
  • Gathering performance metrics
  • Keeping the wiki up-to-date
  • Organizing release notes

Pairing

SWAT teams operate in dangerous environments. Mistakes are usually fatal and threats abound. By working in pairs, members can watch one another to prevent mistakes and protect themselves.

Pairing is great for software engineering, too. It reduces the risk of mistakes during coding, deployments, and database updates. “Given enough eyeballs, all bugs are shallow,” as Linus’s Law says. While there are no external threats in software development, pairing favors knowledge transfer, and if a member is sick or leaves the team, the work can still go on smoothly.

The highly dynamic view of collective software engineering is a complete clash with the highly analytical view of solo software engineering.

There are definitely parts of software engineering, such as design, that require quietness and thinking. But a large part of daily software engineering activities aren’t like that: small refactorings, writing unit tests, fixing integration issues, measuring load and response times, etc., do not involve much thinking. They just need to be done.

There is scientific evidence that 80% of what a software developer does in a day—different steps and small microsteps— is not brain work. They do what they have done 50, 100, 1,000 times before. They just apply a pattern to new situations. — Mastermind of Programming, p.336

Lastly, collective software engineering requires redefining working time. In most working environments, individuals can work with their own schedule (hours, rhythm, pace). This is perfectly fine in the solo view of software engineering; however, it breaks the dynamics in collective software engineering. Ideally, team members always work together towards the team’s objective.

Software engineering is not always a creative endeavour. It is a fight against time and code rot. To win this fight, you need clever tactics. The challenge is to work as an effective SWAT task force — where SWAT stands for Software And Tactics.

Simplicity Prevails

We engineers are masters of self-deception when it comes to our aptitude for handling complexity. We believe we are way better at handling complexity than we actually are. In practice, the level of complexity that people can master (myself included) is disappointingly low.

Instead of deceiving ourselves, we should embrace our limitations and aim for simplicity. This is what all the geniuses of our time have been preaching. Simplicity prevails.

“Simplicity is the ultimate sophistication.” — Leonardo da Vinci

It is common for tools to be too complex. We all know that only a tiny fraction of the features of Word are used; still, the usual trend for a product is towards adding more and more features. Often, the best thing for a product is taking something away from it. Only simple tools prevail.

One problem with simplicity is that it is often confused with easiness or triviality. Easiness and triviality are subjective properties, relative to a user: they refer to a sense of familiarity or ordinariness. Simplicity is an objective property that refers to purity, the absence of mixing distinct elements.

Let us look at some simple features that are clear wins.

Using Venn Diagrams for Access Control

Have you ever been confused by the security settings of an application, not knowing what would or would not be accessible to other users? Venn diagrams are simple and can be used to make such an interface intuitive. Win!
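
As a toy illustration of why this works so well: access rules really are set operations, which is exactly what a Venn diagram draws. The group names and the sharing rule below are invented for the example.

    # A toy model of access control as set operations -- exactly what a Venn
    # diagram visualizes. Group names and the sharing rule are made up.
    friends = {"alice", "bob", "carol"}
    colleagues = {"bob", "dave"}
    blocked = {"dave"}

    # "Share with friends and colleagues, except blocked users."
    audience = (friends | colleagues) - blocked


    def can_view(user):
        """True if the user falls inside the shaded region of the diagram."""
        return user in audience


    print(sorted(audience))   # ['alice', 'bob', 'carol']
    print(can_view("dave"))   # False: dave is blocked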

Using Pictures instead of Texts to Log

Have you ever found yourself overwhelmed by the difficulty of comparing dozens of print statements to understand state mutations over time? Comparing texts is hard; comparing images is easy. Using a visual log is simple and supports better debugging. Win!

Using Examples to Test Programs

Do you test your code by exercising it with chosen inputs that serve as examples? Well, you could have invented unit testing. As Fowler says: “Kent’s framework had a nice combination of absurd simplicity and just the right features for me”. Win!
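
Here is what that looks like as a minimal sketch with Python’s built-in unittest; the shipping_cost function is invented for the example, and each test is just one concrete example pinned down as an assertion.

    # Minimal example-based tests with the standard library: each assertion is
    # one concrete example that pins down the expected behaviour. The
    # shipping_cost function is invented for the example.
    import unittest


    def shipping_cost(weight_kg):
        """Flat fee of 5, plus 2 per kilogram above the first one."""
        return 5.0 + max(0.0, weight_kg - 1.0) * 2.0


    class ShippingCostTest(unittest.TestCase):
        def test_light_parcel_pays_the_flat_fee(self):
            self.assertEqual(shipping_cost(1.0), 5.0)

        def test_heavy_parcel_pays_for_extra_kilograms(self):
            self.assertEqual(shipping_cost(3.0), 9.0)


    if __name__ == "__main__":
        unittest.main()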

Using Live Code instead of Static Code to Understand Programs

Have you ever felt exhausted trying to mentally run code in your head while inspecting static sources? The underlying question is: what prevented you from running it and seeing it live? Finding a suitable unit test and breaking at the start of the method you are inspecting can be automated to become a one-click operation. Win!
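
As a rough sketch of the idea with plain Python: the standard debugger already gets you most of the way, since pdb.runcall breaks as soon as the function under inspection is entered. The parse_price function below is invented for the example; wiring such a call to an IDE shortcut is what turns it into a one-click operation.

    # A sketch of "run it and see it live": pdb.runcall starts the function
    # under the debugger and breaks as soon as it is entered, so you can step
    # through it with a concrete input instead of simulating it in your head.
    # The parse_price function is invented for the example.
    import pdb


    def parse_price(text):
        """Parse a price like '12.50 EUR' into a float."""
        amount, _currency = text.split()
        return float(amount)


    if __name__ == "__main__":
        pdb.runcall(parse_price, "12.50 EUR")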

Optionless search

Have you ever been repelled by the sheer number of options in a search form? That’s the wrong way to tackle the problem of search. Google, Airbnb, and Facebook got it right, offering essentially a single text input and hiding the magic of relevance matching. Win!

These examples show what I consider to be the level of complexity we can handle, and should aim at. These features are so simple to use they will immediately look easy and trivial. This is a good thing.

Scrum Wall vs. Issue Tracker

Last year, Mascha Kurpicz and I conducted interviews and ran a survey to better understand the dynamics of Scrum teams and their use of tools to support agile development. We wrote a paper. Sadly, it was rejected at XP 2012, mostly due to a lack of data to support our claims. Here is the abstract:

Scrum is a lightweight iterative process that favors interaction between team members. Software development is however a complex activity and there exist many software tools aimed at supporting it. This research studies the role of software tools within Scrum practices. It focuses more specifically on comparing the strengths and weaknesses of the Scrum Wall and issue trackers, as they are frequently used together within projects. This paper presents findings from interviews that have been further validated with a survey. Results show that the Scrum Wall is highly appreciated by Scrum practitioners. It encourages positive dynamics and supports well most of the work organization. People tend to consider software tools as impediments, but use them nevertheless to control information that would otherwise remain tacit. Synchronizing information across tools is reported to be a source of troubles.

I think there are several interesting findings in our study. Team productivity and team dynamics are challenging issues to understand, but very fascinating.