Friday, August 03, 2007

Minimal Software Development Process

It's not uncommon for me to find myself in debates regarding what constitutes a sufficient software development process. On one side, there are the Agile folks who argue that code-and-fix is sufficient, with user representatives determining what should be fixed. Of course they have lots of rules to regulate the code-and-fix, some of which are quite sensible and some of which are quite dubious, to make it appeal to closet process engineers. On the other side you have old-school developers who believe in such rarities as requirements signed in blood, requiring human sacrifice to alter. Ok, so perhaps I'm exaggerating a little bit. But that's how it often feels. So I'm going to propose my own process, one that is every bit as simple as what the Agile crowd pushes while being more all-encompassing than what the traditionalists demand. Here it is:

  1. Define the problem
  2. Define the solution
  3. Apply the solution
  4. Validate the result
Now you are probably thinking that this is nothing novel. I'm just using weird words for:
  1. Develop requirements
  2. Design and Code to the Requirements
  3. uhh....Test?
  4. Customer Acceptance Testing
Wrong! Even if you were more creative than me, or pulled out RUP's phases or anything like that, that's not what I mean. The misreading is to be expected, though, as I rarely see steps 1 and 4 completed. Let me explain.

Define the Problem

This is where most projects get in trouble. Their problem definitions look something like this:
  • We need to rollout a corporate standard ERP system.
  • We need a web-based foo tracking database accessible to the entire corporation.
  • We need an automated workflow for the bar process.
I could go on and on. Sometimes these are followed by phrases like "will save X million dollars" or "will reduce cycle time by X%." Really, the problem is that someone said a competitor was saving money by applying an automated workflow to the bar process in order to track foos in the ERP system with a web-based frontend, and the company at hand doesn't have one, so that's a problem. Anyway, these statements are often masking genuine problems such as:
  • My old college roommate is an ERP salesman and wants a new boat.
  • We have a ton of foo, but no one really knows where it is or whether it's being used. So we keep buying more foo, even though we probably have enough. The problem is that when we find some foo, the people with it always claim they need all of it, even though they often clearly aren't using it. We have some data, but it's massive and impossible to interpret. We need a way to find unused foo and prove that it is unused, so that we can transfer it to where it is needed.
  • Some cowboys in department X keep ignoring the bar process. They think they are heroes because it saves time and money upfront, but really they just end up creating costly, time-consuming problems down the line (for me). I need a way to force them to follow the bar process, but it can't be too hard, otherwise they'll convince upper management to let them ignore it.
So why is the difference important? Don't you just end up with the first set of statements as your project charter anyway? No. A couple of months ago I faced a situation similar to the second item. A consultant had (unfortunately) convinced a couple of senior managers that they wanted a big, fancy database integrated with half of our internal systems and requiring a whole bunch of data maintenance. Fortunately, the consultant also directed them to me. They had tons of data, and had spent countless hours fiddling with spreadsheets trying to turn it into actionable information. Having failed, they decided they needed a huge database that would cost hundreds of thousands of dollars to develop and then require staff to keep up-to-date. They also needed something in about a couple of weeks.

So I poked and prodded until I finally understood what they needed to know, what data they had, and what they needed to decide based on that data. Then I wrote a few hundred lines of Python to analyze the data and make pretty graphs, along with a couple dozen lines of VBA to stick the outputs of the Python program into a PowerPoint presentation. They were thrilled with the result. Hundreds of thousands of datapoints were transformed into actionable charts that even the most impatient executive could correctly interpret. This took me about two weeks of effort. Their original requirements would have taken a couple of man-years to implement, and the result would not have solved their problem. Traditionalists would have wasted the time to implement the requirements (which were actually fairly well developed), or at least a portion of them. Agilists would have fiddled around for a while and achieved the same result. Now, I'll admit that on the majority of projects it's the other way around: understanding the problem makes the cost of the solution grow by an order of magnitude, rather than shrink.
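The sort of throwaway analysis script I'm describing can be sketched in a few lines. Everything here is invented for illustration: the column names, the sample data, and the 30-day "unused" cutoff are all assumptions, not the actual data model from the project.

```python
# A minimal sketch of a "find the unused foo" analysis script.
# The CSV columns and the idle-days threshold are hypothetical.
import csv
import io
from collections import defaultdict

SAMPLE_CSV = """\
foo_id,department,days_since_last_use
F001,X,2
F002,X,120
F003,Y,45
F004,Y,0
F005,Z,300
"""

UNUSED_THRESHOLD_DAYS = 30  # assumed cutoff for "clearly not being used"

def find_unused_foo(csv_text):
    """Return {department: [foo_id, ...]} for foo idle past the threshold."""
    unused = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        if int(row["days_since_last_use"]) > UNUSED_THRESHOLD_DAYS:
            unused[row["department"]].append(row["foo_id"])
    return dict(unused)

if __name__ == "__main__":
    print(find_unused_foo(SAMPLE_CSV))
```

The real script went further, of course: it rendered charts with a plotting library, and a little VBA pasted them into PowerPoint. But the core of the job was this kind of simple aggregation, not a big integrated database.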
My guess is that only 1 in 4 projects can actually be simplified by understanding the problem, while 2 in 4 become significantly more complex. But solid estimates that can be tied to solid business cases are extremely important. Delivering a cheap solution that doesn't deliver value is a waste of money. In my experience, development teams assume that "the customer" or "the business" already understands the problem and is defining requirements that will solve it. In reality, the problem is usually vaguely understood at best, and the description of requirements is every bit as much a design activity as coding is.

Define the Solution

This is where most of the traditional software engineering activities occur. You have a customer (or marketing) who has given you some high-level requirements defining the general shape of the system, and then you go about gathering more detailed requirements, followed by architecture, design, code, and test. Or maybe you are agile, so you do all of those activities at once and only bother writing down code (in the form of application code and test code). Either way, knowing the problem really helps.

Some people would probably object to lumping all of these activities under one heading because they take so much time. I would agree, but they are rarely done entirely sequentially. Prototyping is every bit as valid a method for eliciting requirements as interviews. Sometimes it is a lot more effective. Also, there are strong feedback loops among all of the elements. So really, they are all done pretty much at the same time. It just happens that requirements kick off the process and testing finishes it up. Others would object because "the completed system is the solution." Well, no. It's not. You don't really know if you have a solution until after you've deployed and run the system long enough for the business to adjust itself.

Apply the Solution

This is just another way of saying "deploy," plus all the other things you have to do, like training.
If you think of it at a really high level (too high for effective engineering), the organization is the problem, and you apply the software to the organization to see if the organization gets better.

Validate the Result

This is where you measure the impact of deploying the software on the problem, to see if the problem has indeed been solved. I don't think anyone will disagree that a piece of software can meet all of its requirements and still fail to make a positive impact. So you need to measure the impact of the deployed system. In practice, success is declared as soon as a piece of software that meets "enough" of its requirements is rolled out. This puts the organization in a very bad position, because if the software subsequently fails to deliver value, then the declaration of success, and those who made it, are called into question. In most cases the organization will just end up either ignoring the system or limping along until enough time has passed to call the system a problem.