Waterfall

Waterfall's ancestry can probably be traced back to manufacturing and/or construction planning. In many respects, it's a very simple approach to planning and implementing a development effort, and is essentially broken down into defining and designing what to build, building it, testing it, and deploying it.

More formally, it consists of six separate phases, intended to be executed in this order (a brief sketch of that ordering follows the list):

  • Requirements
  • Analysis
  • Design
  • Implementation
  • Testing
  • Installation and Operation
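
Purely as an illustration (the methodology itself prescribes no code, and the names below are invented for this example), that strict one-way ordering can be modeled in Python as an enumeration whose only legal transition is forward to the next phase:

    from enum import IntEnum

    class WaterfallPhase(IntEnum):
        """The six Waterfall phases, in their intended order of execution."""
        REQUIREMENTS = 1
        ANALYSIS = 2
        DESIGN = 3
        IMPLEMENTATION = 4
        TESTING = 5
        INSTALLATION_AND_OPERATION = 6

    def next_phase(current):
        """Return the phase that follows current. Waterfall's flow is
        strictly forward: no skipping ahead, and no going back."""
        if current is WaterfallPhase.INSTALLATION_AND_OPERATION:
            raise ValueError('Installation and Operation is the final phase')
        return WaterfallPhase(current + 1)

Calling next_phase(WaterfallPhase.DESIGN), for example, yields WaterfallPhase.IMPLEMENTATION; there is no path from any phase back to an earlier one.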

These phases correspond fairly neatly with the sequence of phases in the SDLC. They are very similar, whether by accident or design, and are intended to accomplish many of the same goals. Their focus is probably best summarized as an effort to design, document, and define everything that's needed for development to succeed, before handing that design off to development for implementation. In an ideal execution, the design and requirement information will give developers everything they need, and the project manager may be completely hands-off once implementation starts.

Conceptually, there is some merit to the approach—if everything is thoroughly and accurately documented, then developers will have everything that they need, and they can focus entirely on writing code to accomplish the requirements. Documentation is already created as part of the initial project specifications, so once the software is deployed, anyone managing the resulting system will have access to it, and some of that documentation may even be user-oriented and available to end users.

If done well, it almost certainly captures and allows for dependencies during implementation, and it provides an easily followed sequence of events. Overall, the methodology is very easily understood. It's almost a reflexive approach to building something: decide what to do, plan how to do it, do it, check that what was done is what was wanted, and then it's done.

In practice, though, a good Waterfall plan and execution is not an easy thing to accomplish unless the people executing the Requirements, Analysis, and Design phases are really good, or sufficient time (maybe a lot of time) is taken to arrive at and review those details. This assumes that the requirements are all identifiable to begin with, which is frequently not the case, and that they don't change mid-stream, which happens more often than might be obvious. Since it focuses on documentation first, it also tends to slow down over long-term application to large or complex systems (the ongoing updating of a growing collection of documentation takes time, after all), and additional (and growing) expenditure of time is almost always required to keep unmanageable bloat from creeping into other support structures around the system.

The first three phases of a Waterfall process (Requirements, Analysis, and Design) encompass the first five phases of the SDLC model:

  • Initial concept/vision
  • Concept development
  • Project management planning
  • Requirements analysis and definition
  • System architecture and design

These would ideally include any of the documentation/artifacts from those phases, as well as any System Modeling items (Chapter 3, System Modeling), all packaged up for developers to use and refer to. Typically, these processes will involve a dedicated Project planner, who is responsible for talking to and coordinating with the various stakeholders, architects, and so on, in order to assemble the whole thing.

In a well-defined and managed Waterfall process, the artifact that comes out of these three phases and gets handed off to development and quality assurance is a document or collection of documents that make up a Project plan. Such a plan can be very long, since it should ideally capture all of the output from all of the pre-development efforts that's of use in and after development (a rough data-structure sketch follows the list):

  • Objectives and goals (probably at a high level)
  • What's included in, and expected of, the finished effort:
    • Complete requirement breakdowns
    • Any risks, issues, or dependencies that need to be mitigated, or at least watched for
  • Architecture, design, and system model considerations (new structures or changes to existing structures):
    • Logical and/or physical architecture items
    • Use cases
    • Data structure and flow
    • Interprocess communication
  • Development plan(s)
  • Quality assurance/testing plan(s)
  • Change management plans
  • Installation/Distribution plans
  • Decommissioning plans
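
As a rough sketch only (the field names are invented here to mirror the list above, and imply nothing about how a real plan is organized), the top-level structure of such a plan might be modeled as a simple data class:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProjectPlan:
        """A hypothetical, heavily simplified model of the sections listed
        above. A real Project plan is a set of documents, not code, and
        carries far more detail than a handful of string lists."""
        objectives: List[str] = field(default_factory=list)
        requirements: List[str] = field(default_factory=list)
        risks_and_dependencies: List[str] = field(default_factory=list)
        architecture_and_design: List[str] = field(default_factory=list)
        development_plans: List[str] = field(default_factory=list)
        qa_testing_plans: List[str] = field(default_factory=list)
        change_management_plans: List[str] = field(default_factory=list)
        installation_plans: List[str] = field(default_factory=list)
        decommissioning_plans: List[str] = field(default_factory=list)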

The Implementation and Testing phases of a Waterfall process, apart from having the Project plan as a starting point reference, are probably going to follow a simple and very typical process:

  • Developer writes code
  • Developer tests code (writing and executing unit tests; see the sketch after this list), fixing any functional issues and retesting until it's complete
  • Developer hands finished code off to quality assurance for further testing
  • Quality assurance tests code, handing it back to the developer if issues are found
  • Tested/approved code is promoted to the live system
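
To make the unit-testing step in that loop concrete, here is a minimal sketch using Python's standard unittest module; normalize_name is a hypothetical function standing in for whatever the developer is actually building:

    import unittest

    def normalize_name(value):
        """Collapse runs of whitespace and title-case a person's name.
        (A hypothetical function under test, invented for this example.)"""
        return ' '.join(value.split()).title()

    class TestNormalizeName(unittest.TestCase):
        def test_collapses_internal_whitespace(self):
            self.assertEqual(normalize_name('john   smith'), 'John Smith')

        def test_strips_surrounding_whitespace(self):
            self.assertEqual(normalize_name('  jane doe '), 'Jane Doe')

        def test_title_cases_the_result(self):
            self.assertEqual(normalize_name('ADA LOVELACE'), 'Ada Lovelace')

    if __name__ == '__main__':
        unittest.main()

In the flow above, the developer would only hand code off to quality assurance once all such tests pass.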

This process is common enough across all development efforts and methodologies that it will not be mentioned again later unless there is a significant deviation from it.

Waterfall's Installation and Operation phase incorporates the Installation/Distribution and Operations/Use and Maintenance phases from the SDLC model. It may incorporate the Decommissioning phase as well, since that may be considered a special Operation situation. Like the Implementation and Testing phases, chances are that these will progress in an easily anticipated manner. Again, apart from the presence of whatever relevant information might exist in the Project plan documentation, there's not really anything to dictate any deviation from a simple, common-sense approach, for whatever value of common sense applies in the context of the system.

While Waterfall is generally dismissed as an outdated methodology, one that tends to be implemented in a too-rigid fashion, and that more or less requires rock-star personnel to work well on a long-term basis, it can still work, provided that the following conditions hold:

  • Requirements and scope are accurately analyzed, and completely accounted for
  • Requirements and scope will not change significantly during execution
  • The system is not too large or too complex for the methodology to manage
  • Changes to a system are not too large or too complex for the methodology to manage

Of these, the first is not something that can usually be relied upon without policy and procedure support that is often well outside the control of a development team. The latter two will, almost inevitably, be insurmountable given a long enough period of time, if only because it's rare for systems to become smaller or less complex over time, and changes to larger and more complex systems tend to become larger and more complex themselves.