Thursday, December 31, 2009

Agile Software Construction is Like Neighborhood Construction

In my last article, I commented on Dr. Jan Chong's dissertation presentation "Software Development Practices and Knowledge Sharing: A Comparison of XP and Waterfall Team Behaviors". I raised some questions about the utility of long-term design documents and provided links to Scott Ambler and Mary Poppendieck's writings about unit tests and automated tests as a form of long-term executable specifications.

In this article, I'm going to give a better version of the often-heard analogy that "software construction is like building construction". Usually you hear this when product owners, project managers, or architects say things like, "We can't do that until we've got the foundation in place. Think of a building. You have to have a solid foundation first."

OK. Maybe so, but you only need it for the very first independent feature, and you should be building features as independently as you can. Here's why:

The Simple Building to Software Analogy

Here is a simplified one-to-one comparison that maps the "layers" of a building to the layers of a software application:

Building Component | Software Component
Furnishings        | User Interface Layer
Internal Structure | Business Logic Layer
External Structure | Data Access Layer
Foundation         | Database
Blueprints         | Requirements Specifications
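
To make the mapping concrete, here is a minimal sketch in Python (with entirely hypothetical names, and sqlite3 standing in for the database "foundation") of a single feature cut vertically through every layer:

```python
# A single feature implemented vertically through every layer of the analogy.
# All names are hypothetical; sqlite3 stands in for the "foundation" database.
import sqlite3


class AccountRepository:
    """Data access layer -- the 'external structure' resting on the database."""

    def __init__(self, connection):
        self.connection = connection
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance REAL)"
        )

    def get_balance(self, account_id):
        row = self.connection.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        return row[0] if row else 0.0


class BalanceService:
    """Business logic layer -- the 'internal structure'."""

    def __init__(self, repository):
        self.repository = repository

    def formatted_balance(self, account_id):
        return "${:,.2f}".format(self.repository.get_balance(account_id))


def show_balance_page(service, account_id):
    """User interface layer -- the 'furnishings' the user actually sees."""
    print("Your balance:", service.formatted_balance(account_id))


# Wire one complete slice together, top to bottom.
connection = sqlite3.connect(":memory:")
repository = AccountRepository(connection)
connection.execute("INSERT INTO accounts (id, balance) VALUES (1, 1234.5)")
show_balance_page(BalanceService(repository), 1)  # Your balance: $1,234.50
```

The point: once this one slice stands, the foundation has already proven itself, and the next feature can be built through the same layers independently.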

Traditional Versus Agile Software Development

To set up my final point, I present the following two slides, which come from the Autumn of Agile screencast series mentioned in prior posts.

This first slide depicts the traditional sequential waterfall approach to building an application. The main theme is that waterfall projects typically result in teams developing horizontal layers that end up delivering business value only after all layers are complete.

[Slide: waterfall teams build horizontal layers, delivering business value only after all layers are complete]

 

This next slide depicts how agile (or simply iterative & incremental) projects develop systems. The main idea here is that features are developed vertically. This means that the features are independently valuable, and can represent a portion of the total desired value at any time should the project schedule get cut short.

 

[Slide: agile teams build features vertically, each slice independently valuable]

 

Neighborhoods Get Built Using Agile, Not Waterfall

Let's assume we have 5 total features in the software system depicted in slides one and two. In the first slide, all 5 features are "finished" at the same time when the user-interface is built on top of the lower layers. In the second slide, a single feature is built from "top to bottom" during each iteration.

Can You Say Fffired?

Imagine a team of developers building a neighborhood the way the traditional sequential waterfall approach builds software. Let's assume each iteration is 5 weeks for simplicity's sake. After 5 weeks, the developers would have produced 5 separate sets of blueprints for the 5 houses to be built. Think of these as 5 or more separate parts of an overall software requirements specification in a software development project. They might describe database table schemas, relationships, multiplicities, etc.

 

            | Week 1     | Week 2     | Week 3     | Week 4     | Week 5     | Elapsed Weeks
Iteration 5 |            |            |            |            |            | 25
Iteration 4 |            |            |            |            |            | 20
Iteration 3 |            |            |            |            |            | 15
Iteration 2 |            |            |            |            |            | 10
Iteration 1 | Blueprints | Blueprints | Blueprints | Blueprints | Blueprints | 5

I'm sure you already see how asinine this approach to neighborhood development is.

Nevertheless, let's continue with the next 5 weeks and see how far our team has gotten toward delivering the first "potentially habitable housing increment":

 

 

            | Week 1     | Week 2     | Week 3     | Week 4     | Week 5     | Elapsed Weeks
Iteration 5 |            |            |            |            |            | 25
Iteration 4 |            |            |            |            |            | 20
Iteration 3 |            |            |            |            |            | 15
Iteration 2 | Foundation | Foundation | Foundation | Foundation | Foundation | 10
Iteration 1 | Blueprints | Blueprints | Blueprints | Blueprints | Blueprints | 5

After 10 weeks of work, our construction company has failed to complete a single inhabitable house. What are we paying these people for? Why are they building a foundation, then moving to the next lot to build the next one? I guess they're just going to get back to it sometime later.

Let's just speed ahead and see what we have after 25 weeks on the project:

 

            | Week 1             | Week 2             | Week 3             | Week 4             | Week 5             | Elapsed Weeks
Iteration 5 | Furnishings        | Furnishings        | Furnishings        | Furnishings        | Furnishings        | 25
Iteration 4 | Internal Structure | Internal Structure | Internal Structure | Internal Structure | Internal Structure | 20
Iteration 3 | External Structure | External Structure | External Structure | External Structure | External Structure | 15
Iteration 2 | Foundation         | Foundation         | Foundation         | Foundation         | Foundation         | 10
Iteration 1 | Blueprints         | Blueprints         | Blueprints         | Blueprints         | Blueprints         | 5

This is great, finally. After 25 weeks we finally have inhabitable houses! Actually, we had the first inhabitable house after week 21, then one more per week through week 25. It almost feels like a privilege at this point to get so many houses finished in such quick succession.

Not in My Backyard!

Do construction companies ever really build houses like this? Not that I've ever seen. They try to complete a house as quickly as they can after the foundation concrete sets. Of course, that means they may indeed start on a second foundation before they build the remainder of the first house, but you get my point.

Suppose it were 30 houses instead of 5. Could you imagine if this were a new neighborhood being built just behind your backyard? What would you think if you saw the crews building 30 separate foundations, then coming back to the first to frame it, then moving to the next to frame it, and so on? You'd think they had lost their minds, or had absolutely no respect for or understanding of the Time-Value-of-Money, or no concern for Return-On-Investment.

Neighborhood Construction is More Vertical, Like Agile

As you can see below, our construction crew progresses through each house from "the ground up", which means that they finish one house completely every five weeks. This, in agile terminology, is their velocity per iteration. To learn more about agile teams and velocity, read my article "D = V * T : The formula in software DeVelopmenT to get features DONE".

 

                                 | Iteration 1        | Iteration 2        | Iteration 3        | Iteration 4        | Iteration 5
Weeks Elapsed / Houses Completed | 5 : 1              | 10 : 2             | 15 : 3             | 20 : 4             | 25 : 5
Week 5                           | Furnishings        | Furnishings        | Furnishings        | Furnishings        | Furnishings
Week 4                           | Internal Structure | Internal Structure | Internal Structure | Internal Structure | Internal Structure
Week 3                           | External Structure | External Structure | External Structure | External Structure | External Structure
Week 2                           | Foundation         | Foundation         | Foundation         | Foundation         | Foundation
Week 1                           | Blueprints         | Blueprints         | Blueprints         | Blueprints         | Blueprints

Comparing Velocity between Waterfall and Agile Approaches

In the following table, we're looking at how many houses we complete per iteration.

So, the calculation is: number of houses completed / number of iterations elapsed.

Thus, the agile velocity is constant: we can always count on 1 house being completed every 5-week iteration.

        | Total Completed Under Waterfall | Waterfall Velocity per Iteration | Total Completed Under Agile | Agile Velocity per Iteration
Week 5  | 0                               | 0                                | 1                           | 1
Week 10 | 0                               | 0                                | 2                           | 1
Week 15 | 0                               | 0                                | 3                           | 1
Week 20 | 0                               | 0                                | 4                           | 1
Week 25 | 5                               | 1                                | 5                           | 1
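
As a quick sanity check on the table above, here is a minimal sketch in Python of that velocity arithmetic, using the house counts from the example:

```python
# Velocity per iteration = houses completed / iterations elapsed.
# House counts are taken from the tables above (5-week iterations).

def velocity(houses_completed, iterations_elapsed):
    return houses_completed / iterations_elapsed

waterfall_totals = [0, 0, 0, 0, 5]  # cumulative houses after iterations 1..5
agile_totals = [1, 2, 3, 4, 5]

for iteration, (w, a) in enumerate(zip(waterfall_totals, agile_totals), start=1):
    print(
        f"Week {iteration * 5:>2}: waterfall velocity {velocity(w, iteration):.1f}, "
        f"agile velocity {velocity(a, iteration):.1f}"
    )
# Agile holds steady at 1.0; waterfall shows 0.0 until the very last iteration.
```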

Combining Return on Investment with Velocity

Finally, since we know that nobody can live in a house until it's completed, we can calculate the potential accrued Return-on-Investment (ROI) after each iteration from houses actually sold.

Let's assume we can sell each house for $100,000.

        | Potential Waterfall ROI | Potential Agile ROI
Week 5  | $0                      | $100,000
Week 10 | $0                      | $200,000
Week 15 | $0                      | $300,000
Week 20 | $0                      | $400,000
Week 25 | $500,000                | $500,000

Taking the Time-Value-of-Money (TVM) into account, it's clearly more valuable to realize returns via the agile approach to building neighborhoods than via the waterfall method!
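
To make the TVM point concrete, here is a minimal sketch that discounts each approach's cash flows back to present value. The 1% discount rate per 5-week iteration is purely an illustrative assumption:

```python
# NPV = sum of cash_t / (1 + r)**t. The 1% rate per 5-week iteration is an
# illustrative assumption, not a real cost of capital.

def npv(cash_flows, rate):
    return sum(cash / (1 + rate) ** t for t, cash in enumerate(cash_flows, start=1))

agile_inflows = [100_000] * 5              # one $100,000 house sold per iteration
waterfall_inflows = [0, 0, 0, 0, 500_000]  # all value lands at week 25

print(f"Agile NPV:     ${npv(agile_inflows, 0.01):,.0f}")      # ~$485,343
print(f"Waterfall NPV: ${npv(waterfall_inflows, 0.01):,.0f}")  # ~$475,733
```

Both streams total $500,000, but the agile stream is worth roughly $10,000 more in today's dollars because its returns arrive earlier.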

Similarly, when we talk about ROI and TVM on software projects developed with agile methods, we sometimes mean that we can realize monetary returns earlier by producing "potentially shippable product increments" faster and getting them to market. But we also mean that by focusing on building features top-to-bottom, we gain the complete experience and feedback necessary to apply toward the next feature, and we avoid the pain of refactoring code later, when it's no longer fresh and requirements have been changing all around.

Here is a good video on YouTube that depicts this relationship between time and value accrual very well:

Until next time, stay agile, not fragile!

Review: Software Development Practices and Knowledge Sharing: A Comparison of XP and Waterfall Team Behaviors

Jan Chong, Ph.D., wrote her dissertation on "Software Development Practices and Knowledge Sharing: A Comparison of XP and Waterfall Team Behaviors". I came across a recorded presentation of her discussing her dissertation at The Research Channel here:

http://www.researchchannel.org/prog/displayevent.aspx?rID=16075&fID=345

Here is the description of the recorded presentation:

My dissertation research explores knowledge sharing behaviors among two teams of software developers, looking at how knowledge sharing may be effected by a team's choice of software development methodology. I conducted ethnographic observations with two product teams, one which adopted eXtreme Programming (XP) and one which used waterfall methods. Through analysis of 808 knowledge sharing events witnessed over 9 months in the field, I demonstrate differences across the two teams in what knowledge is formally captured (say, in tools or written documents) and what knowledge is communicated explicitly between team members. I then discuss how the practices employed by the programmers and the configuration of their work setting influenced these knowledge sharing behaviors. I then suggest implications of these differences, for both software development practice and for systems that might support software development work.

Jan's full biography at the time of her dissertation:

Jan is a doctoral candidate in the Department of Management Science and Engineering at Stanford University.  She is affiliated with the Center for Work, Technology and Organization.  Her research interests include collaborative software engineering, agile methods, knowledge management and computer supported collaborative work.  Jan holds a B.S. and an M.S. in Computer Science from Stanford University.

Her complete dissertation is available here: http://www.amazon.com/Knowledge-Sharing-Software-Development-Comparing/dp/3639100840

Highlighted Slides from Recorded Presentation

It's nice to see a formal study that compares these different styles of work. Jan spent time observing both teams for about 9 months. Here is a bit more about her methodology.

Study Methodology

[Slide: study methodology]

Team Communication Styles

First, for the XP team, communication is more open, facilitated by information radiators and an open workspace:

[Slide: the XP team's open, radiator-facilitated communication]

The waterfall team's members work alone, in their own cubicles, and communicate primarily through an online chat program. Thus, they sometimes "broadcast" information to others in the chatroom:

[Slide: the waterfall team's cubicle and chat-based communication]

Observed Events and Data Coding

Through analyzing her recordings and notes, Jan classified all the different knowledge-transfer events into the following categories:

[Slide: categories of knowledge-transfer events]

Knowledge Seeking Behaviors

This slide compares the actions taken by teammates in explicitly asking for knowledge transfer from others:

[Slide: knowledge-seeking behaviors compared]

Knowledge Offering and Relevance

Below, Jan categorizes the types of communication offered and their respective relevance.

[Slide: knowledge offering and relevance]

Recorded Knowledge

It's very interesting to note that the waterfall team recorded far more knowledge for the "long term", percentage-wise, than the XP team did. What is not clear, however, is whether the XP team simply recorded more personal items in addition to the same kinds of long-term items, or whether it left out certain long-term items that the waterfall team properly recorded.

[Slide: short-term vs. long-term recorded knowledge]

Summary Slides

The following three slides are her concluding slides:

[Slides: three concluding slides]

 

Review and Analysis

I'm about to join a great team for an agile project that will be built using Scrum & XP practices. This opportunity is very exciting for me. That excitement is born of my own "in the trenches" experience and observations of both Agile/Scrum/XP and waterfall. If you're unfamiliar with the differences between agile and waterfall, I recommend you take both for a "test drive" by reading or listening to my article entitled "From Waterfall to Agile Development in 10 Minutes: An Introduction for Everyone". Also read "Don't Get Drowned by Waterfall: Break out of the Delusion" and "D = V * T : The formula in software DeVelopmenT to get features DONE".

If you cannot tell by now, my preference is for agile development, not waterfall. Waterfall, at least in its pure form, is and always has been a mistake for software systems development, as you can read in the first article. While Jan stops short of claiming a preference for one style of development or the other in her analysis, it's important to note that this is because doing so was not her intention. She is working on improving software methodology as a whole, and she seeks to synthesize best practices from empirically observed behavior and data.

How Useful is the Long-Term Design Documentation by the Waterfall Team?

Jan observed that the waterfall team had members who worked alone and went "back to the code" when they needed to understand something or had to work on a new module they had not worked on before. What I'm not clear on is whether she meant "automated test case code" or "implementation code". She also noted that the XP team members consulted one another more often about how things work before looking at code, and that the waterfall team created a higher percentage of recorded knowledge about the long-term aspects of the project. I am assuming this meant written documentation, judging by her comments on video.

Question: How often did the waterfall developers actually refer back to those long-term design documents, and which of those documents did they originally anticipate being useful to other developers?

The reason I would ask is that she already noted that the waterfall team members spent a lot of time reading code, and also reading CVS check-in messages when others checked in changes, but she didn't address whether they (or anyone) read the long-term design documents for any useful purpose.

Experience the Highest Communication Bandwidth via Face-to-Face Whiteboard Collaboration

It has been my personal experience as a developer and architect that when working closely with other members of a team in a collaborative, open workspace, I do not need to refer to implementation code or documentation as often anyway. Instead, we rely more on a constantly evolving shared language, basic metaphors, automated test cases, and whiteboards to communicate the "gist" of how something works; then we refer to detailed implementation code when we need the details or to pinpoint a trouble area. Scott Ambler has written extensively on the subject of agile documentation and communication. Here is a chart he produced based on a survey about the most effective forms of communication:

 

Source: http://www.agilemodeling.com/essays/agileDocumentation.htm 

As you can see, paper documentation is by far the least effective form of communication, and face-to-face at a whiteboard is the most effective. Kevin Skibbe, a friend I used to work with, is the most effective whiteboard communicator I know. He explained to me that when two or more people try to communicate via a whiteboard, they must focus on developing a shared mental model. With face-to-face communication alone, without a whiteboard, both people still maintain independent, non-shared mental models of what the other person is thinking.

 

Building Long-Term Executable Knowledge / Documentation via Automated Tests

One measurement I didn't see explicitly noted is the notion of "Executable Knowledge", more frequently called "Executable Specifications". Scott Ambler writes about this here: http://www.agilemodeling.com/essays/executableSpecifications.htm, and Mary Poppendieck writes about it in her books and presentations: http://www.poppendieck.com/

I suggest that teams, whether waterfall or agile, incorporate this practice into their development in order to produce fewer defects, increase explicit knowledge in the code-base, and reduce the need to continually read implementation code.
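
As a minimal illustration (the discount rule and all names here are hypothetical, not taken from Ambler or Poppendieck), an executable specification is simply an automated test that states a business rule in runnable form:

```python
# A hypothetical business rule captured as an executable specification:
# "Orders of $100 or more receive a 10% discount."
import unittest


def discounted_total(order_total):
    """The production rule the specification below verifies."""
    return order_total * 0.9 if order_total >= 100 else order_total


class DiscountSpecification(unittest.TestCase):
    def test_orders_of_100_or_more_get_ten_percent_off(self):
        self.assertAlmostEqual(discounted_total(200), 180)

    def test_smaller_orders_pay_full_price(self):
        self.assertAlmostEqual(discounted_total(99), 99)


if __name__ == "__main__":
    unittest.main()  # the suite re-verifies the rule on every run
```

Unlike a design document, this record of the rule fails loudly the moment the code-base stops honoring it.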

We should think about software systems in light of their intrinsic nature and in terms of the change process required to modify or enhance them while still producing the desired result. That result might be faster ROI in the market, increased product quality, or what have you.

What we never want, however, is broken functionality after we release. This is why we build regression test suites that can be executed at will to confirm, as far as computationally possible, the encoded knowledge about the business domain and user requirements.

This is how Test-Driven Development (TDD) works. Here is how Ambler depicts this view:

 

That is a nice technical view of things, but what about the view from the business side? In the article linked above, Ambler presents a written form of the requirement that a business analyst would provide to developers.

 

Waterfall is Always Wrong Compared to Agile for Building a New System

I'm going to assume here a definition of waterfall as the standard sequential approach: requirements come first, followed by detailed design, development, then (test, drop scope, rework) repeated as necessary until completion, then deployment and maintenance.

If a project is about developing a completely new product from the ground up, then adopting the sequential model for a system of any complexity beyond perhaps an estimated month of duration is simply asking for pain. I'm not going to explain why in this article; instead, I refer you to the three articles I referenced above for a complete explanation. But, to summarize:

When you begin a moderate to large custom development project, there are many requirements you simply cannot know until you have started to develop a subset of the entire envisioned project and that subset makes its way into the hands of the critical stakeholders like the project owner and the target users.

For more evidence, refer to Craig Larman's research and explanation about the history of waterfall in this article: http://www.highproductivity.org/r6047.pdf


Waterfall is Still Wrong for Enhancing an Existing System

However, suppose a software system is already released and "in production": should a team then use waterfall techniques to build additional features?

I believe the answer is no. The reason is reflected in the TDD diagram above. You want to reduce the impact of changes and build up a suite of regression tests as you develop the solution. And you want to seek feedback as early as possible, to reduce time spent working on incorrect or undesired functionality.

 

Waterfall is Especially Wrong for Rebuilding an Existing System

In my experience, and I've been through it a couple of times now, waterfall is notoriously wrong when you are asked to rebuild an existing system in a new technology. The reason is that the project sponsor will often state little more than that they want the existing system's functionality essentially duplicated. Unfortunately, the team will waste much time if it tries to clone the existing product piece by piece.

A much better approach is to use the existing system as a very high-fidelity model. The team should then use iterative and incremental agile practices to deliver features, focused on making and keeping the project potentially shippable as soon as possible. This ensures a complete vertical "slice" of functionality through all the application's layers gets built as soon as possible. To learn more about these practices, see the screencast series Autumn of Agile at http://www.autumnofagile.net. The first episode gives a comprehensive and very compelling explanation of why agile delivers better business value than waterfall. My article "Don't Get Drowned by Waterfall: Break out of the Delusion" references a few key slides from that series.

 

The Embarrassment of Waterfall's Persistence

Waterfall persists in the technology world because it sounds easy to understand on the surface, even though everyone also recognizes the intrinsic contradiction in software development: requirements constantly change. I don't blame product owners or business people for waterfall's persistence. I blame ourselves, the developers and project managers. It is our fault for not being more responsive to changing environmental conditions and changing requirements.

But balancing the ability to change on demand with the requirement to remain stable at all other times is what agility is all about. That is why agile focuses on rigorous empirical testing, visual monitoring, and continuous feedback. And that is why automated tests as executable specifications speed up the ability to change while simultaneously increasing quality and confidence.

Stay tuned for my next article, which will update the "age old" building construction metaphor that says building software is like building a building. In many ways it is, but I will make key distinctions that enable agility!

Until then, stay agile, not fragile.

Monday, December 7, 2009

Don’t Get Drowned by Waterfall: Break out of the Delusion

Many development projects are built with an approach traditionally called “Waterfall”, or “big bang, all at once”. This means that the entire system is defined, designed, developed, tested, then released, in that precise order. In its purest form, a system built using waterfall techniques is utterly useless until nearly all of its features are claimed to be “feature complete” and “ready for testing”.

To put it very simply: if a project has 50 features, a team will attempt to build all 50 and only then test all 50 at the same time, without any formal, professional QA or automated testing during the construction of the features. This is a recipe for wildly missed delivery dates, at best, and utter disaster, at worst.

Irrational Justifications for Waterfall

Why in the world would a team try to build a system this way? You might hear something like “The whole system must work before we go live, so we’re going to test everything together”. On the surface, this sounds reasonable. Of course the whole system needs functional, end-to-end testing. However, in any non-trivial system, this is, at best, an ignorant statement that betrays a lack of experience in developing complex systems. At worst, it’s a statement of learned helplessness, or of laziness born of a lack of desire, or a perceived lack of time, to break the system down into smaller, independently testable parts.

The Fantasy of Waterfall

The fantasy that people running a waterfall project hold is something like the blue line in the chart below, from Steve Bohlen’s Autumn of Agile screencast series. In this vision, late in the project, all the components that have never yet worked together during the first 80% of the project get tested all at once, suddenly start to work, and the project is released on time and on budget.

Yeah. Right.

Here’s the fantasy vision, compared with an agile approach, in terms of business value accrued over time, especially if the project is suddenly called to a halt:

[Chart: business value accrued over time, waterfall fantasy vs. agile]

Continuing with the Waterfall delusion, the idea is that all aspects of the system can be developed in isolation from each other, never needing feedback or rework:

 

The Reality of Waterfall

The reality is shown below. The sharp decrease in component stability represents the big “Oops, wish we had thought of that sooner” moments that we all know and love during waterfall projects.

I can think of two instances off the top of my head during recent real-world projects where I’ve seen this:

  1. While working on a large electronic commerce application rewrite, our team warned senior management that the legacy COM architecture would not scale well under .NET. Despite our warnings and evidence, management wanted us to “just forge ahead”, as if it were more brave or honorable to continue doing something completely stupid than to sit down with all of us, think through the difficulties of properly solving the problem, and devise a plan to reach the market and achieve positive ROI. Psychologists call this cognitive dissonance. I call it lack of planning, ignorance, or maybe just cognitionless dumbness. I eventually left that position and have kept in contact with my old colleagues, who eventually had to rewrite the entire COM layer in pure .NET to attain the desired business value (release).
  2. On another recent project, I was assigned responsibility for designing and implementing a complex security model on top of the custom entity model the new application was already using. Hundreds of entities had already been created in this system, along with a large, wide horizontal stretch of pages and controls. But no features had yet been designed vertically deep enough to use security. The system had thus far been built entirely with static stored procedures, but the complex security model required that all SQL statements be appended with additional where clauses and custom filters. Fixing this took weeks of refactoring the data-access layer and reworking static stored procedures to use more of a Query Object pattern (see the sketch after this list).
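
To give a feel for that refactoring, here is a minimal sketch of the Query Object idea in Python (the table and column names are hypothetical, not from the actual project): criteria compose, so a row-level security predicate can be appended to any query instead of being hard-coded into each stored procedure.

```python
# A minimal Query Object: criteria compose, so a row-level security predicate
# can be appended to any query rather than baked into static stored procedures.

class Query:
    def __init__(self, table):
        self.table = table
        self.criteria = []  # SQL fragments combined with AND
        self.params = []

    def where(self, fragment, *params):
        self.criteria.append(fragment)
        self.params.extend(params)
        return self

    def to_sql(self):
        sql = "SELECT * FROM " + self.table
        if self.criteria:
            sql += " WHERE " + " AND ".join(self.criteria)
        return sql, tuple(self.params)


def apply_security_filter(query, user_id):
    """The one place the security model touches every query."""
    return query.where("owner_id = ?", user_id)


query = apply_security_filter(Query("invoices").where("status = ?", "open"), user_id=42)
print(query.to_sql())
# ('SELECT * FROM invoices WHERE status = ? AND owner_id = ?', ('open', 42))
```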

Special Note: Microsoft’s Advice for Systems Re-architecture and Migration

Microsoft has a document entitled “Microsoft .NET/COM Migration and Interoperability”, located here: http://msdn.microsoft.com/en-us/library/ee817653.aspx. It will serve all .NET or COM/C++ developers well to read this document.

Microsoft recommends that when you re-architect a system in .NET from an older technology, such as COM/C++, you do not attempt a big-bang, horizontal migration. Instead, they recommend that you create a completely functional, vertical slice of the application first, before expanding horizontally.

Here is an excerpt:

“You might choose to adopt a vertical migration strategy for a number of reasons:

  • Planning to re-architect
    If you plan to re-architect your application, vertically migrating part of your application to the new architecture provides a good test bed for the new design. The .NET Framework also makes it easier to provide the functionality that newer architectures are built on. For example, you can use HttpHandlers to perform many of the tasks for which you would previously use ISAPI extensions, but with a much simpler programming model.”

Here is a diagram from the same document depicting a vertical migration:

[Diagram: vertical migration, from the Microsoft document]

How to Stop The Horizontal Waterfall Madness

If you or your project is on a path of waterfall, horizontal development, then you have your work cut out for you, but it’s not too late. It takes discipline, honesty, and courage to set things upright and vertical.

Here are a few key practical steps:

  1. Stop adding code to the system that is not scheduled for testing in the current or next month. If you do not have a product road-map and a product back-log prioritized by business value, and thus don’t know when a feature is going to need that code, then stop adding it, immediately.
  2. Focus instead on the fact that you need a road-map and a prioritized product back-log, defined in terms of users’ needs. If your product owner cannot or won’t prioritize the backlog or features, then simply list them in the order that your users encounter the features, or the order that your help-desk team tells you needs the most improvement (see the sketch after this list).
  3. Now, identify a vertical slice of the application that the whole team can focus on implementing from top to bottom. Determine how to get this slice as close to 100% functional as possible.
  4. Work daily with your users, test team, developers, and other stakeholders to create, together, a strategy for standing up and ruthlessly testing that vertical slice.
  5. Once you’ve gotten critical user feedback on usability and functionality, apply those lessons learned to the next vertical slice, and so forth!
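
As a trivial sketch of step 2 in Python (all feature names and scores invented for illustration), even a crude ordering by business value, falling back to the order users encounter the features, is enough to get started:

```python
# Order a backlog by business value, falling back to the order in which users
# encounter each feature when no value estimate exists. All data is invented.

backlog = [
    {"feature": "checkout", "business_value": 90, "user_order": 3},
    {"feature": "search", "business_value": 70, "user_order": 2},
    {"feature": "login", "business_value": None, "user_order": 1},
]

prioritized = sorted(
    backlog,
    key=lambda item: (-(item["business_value"] or 0), item["user_order"]),
)

print([item["feature"] for item in prioritized])
# ['checkout', 'search', 'login']
```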

Whose Responsibility is it to Stop Waterfall Thinking?

It’s far too easy for us as developers or architects to say that we’re just doing what we’re told by our managers or our business owners. After all, they “Just want the product done”, right?

Certainly, we must listen to what our managers and owners ask of us. It’s very important that we do so. However, what’s more important, to them and to the success of projects, is that we as professionals bring our expertise and knowledge to the problem at hand and that we act with resolve and courage to do the job correctly. Often this means sitting down with our managers and owners and explaining to them, however uncomfortable it makes them or us, what it really means to do iterative development with high quality. Often we must educate them about the history of waterfall and its miserable rate of success.

Thus, the responsibility is yours. The responsibility is mine. The responsibility is the team’s.

Craig Larman on the History of Waterfall and Iterative & Incremental Development

Horizontal waterfall approaches, despite being popular and widespread today, came after early practitioners used iterative & incremental development techniques to greater success. The sad but true history of waterfall is that the U.S. Department of Defense (DoD) misinterpreted Winston W. Royce’s paper on systems development and went on to enforce it as a government standard, to which large contractors then adhered. Commercial industry followed suit from the government contractors. The DoD went on to revise its standards, and within the last twenty years it has recommended iterative & incremental practices.

Object-oriented development guru Craig Larman has written several books about iterative & incremental project management over the years. He also wrote an extensive article about the history of iterative & incremental development (IID) here: http://www.highproductivity.org/r6047.pdf

Here are crucial excerpts on that history from the article:

In the 1970s and 1980s, some IID projects still incorporated a preliminary major specification stage, although their teams developed them in iterations with minor feedback. In the 1990s, in contrast, methods tended to avoid this model, preferring less early specification work and a stronger evolutionary analysis approach.

The DoD was still experiencing many failures with “waterfall-mentality” projects. To correct this and to reemphasize the need to replace the waterfall model with IID, the Defense Science Board Task Force on Acquiring Defense Software Commercially, chaired by Paul Kaminski, issued a report in June 1994 that stated simply, “DoD must manage programs using iterative development. Apply evolutionary development with rapid deployment of initial functional capability.”

Consequently, in December 1994, Mil-Std-498 replaced 2167A. An article by Maj. George Newberry summarizing the changes included a section titled “Removing the Waterfall Bias,” in which he described the goal of encouraging evolutionary acquisition and IID:

Mil-Std-498 describes software development in one or more incremental builds. Each build implements a specified subset of the planned capabilities. The process steps are repeated for each build, and within each build, steps may be overlapping and iterative.

Mil-Std-498 itself clearly states the core IID practices of evolving requirements and design incrementally with implementation:

If a system is developed in multiple builds, its requirements may not be fully defined until the final build…. If a system is designed in multiple builds, its design may not be fully defined until the final build.

Tom Gilb’s Principles of Software Engineering Management was the first book with substantial chapters dedicated to IID discussion and promotion. Meanwhile, in the commercial realm, Jeff Sutherland and Ken Schwaber at Easel Corp. had started to apply what would become known as the Scrum method, which employed time-boxed 30-day iterations. The method took inspiration from a Japanese IID approach used for non-software products at Honda, Canon, and Fujitsu in the 1980s; from Sashimi (“slices” or iterations); and from a version of Scrum described in 1986. A 1999 article described their later refinements to Scrum.

Stop lying down flat. Stand up straight. Have more fun, and kick waterfall to the curb!