Time - Bane or Innovation Catalyst?

Time. What time is it? How much time do we have? When do you want/need it? What's the deadline? I need more time!

If we had all the time in the world for software development, would the delivered results really be of better quality?

A co-worker at a past employer wrote the following when someone sent an email submission for a fun, internal contest the day after the deadline:
The contest ended a long time ago. Trying to submit something now is like submitting your late university assignment.
One of my profs told me:
"I don't care if you have something that's better than all the works of Shakespeare. If you can't get it in before the deadline it's worth nothing to me."

Ha, ha. It was intended as a funny remark at the time, but there's some truth in there too.

So, if someone submits an assignment "on time" but of lesser value/quality than they might produce if they had more time, would they still continue to work on their opus or would they give it up to move onto the next project? Do we (as a collective group of intelligent human beings) lose out by putting Time ahead of Quality?


The traditional "Project Management Triangle" puts the emphasis on: functionality/scope, cost and schedule. An experienced consultant can tell the employer: pick/fix any two and we can estimate the third.

I noticed years ago that "quality" isn't in this "triangle". As a novice, I took "scope" and "quality" to be part of the same point. Clearly I was mistaken. When people are focussed on delivering something, on time, at a fixed cost, everyone interprets "quality" in different ways.

I think the Agile manifesto/movement is an interesting response to the "traditional" (a.k.a. Waterfall) approach to software development. It takes the same 3 constraints (of scope, cost and time) and changes up the order of activities to integrate quality into the deliverable products. This is done by embedding customer involvement (via collaboration, user stories, automated acceptance tests) and rapid delivery releases to allow for quicker feedback into the design and implementation. For example, in a traditional/waterfall project, it may take anywhere from 6-18 months to find out your interface/implementation fails to meet the needs of the customers. Or, using agile methods, it might take anywhere from 2-14 days. Your choice.

So what about software testing?

In every waterfall project I have worked on, development always delivered software late into the "test" phase. This meant less time to provide feedback, because the release deadline was fixed. Time is my bane here. I've got less of it and need more of it! ... or do I?

If I stick to a waterfall approach to testing - i.e. develop & document test plans, test strategies, test cases, execute the tests, log the results and communicate the summaries - then, no, time is not my friend here.

But is it a requirement to do testing this way? Whose requirement? How much does their opinion really matter?

I watched my son play a game recently and describe the "glitches" (his lingo, not mine) to his younger brother so that he could try to work around them. I'm pretty sure my boys don't care whether the software team used waterfall or agile methods, or how well their test cases and processes were documented. They found bugs in their game, were annoyed by them, and figured out ways to work around them. Sometimes they just give up on a game altogether.

Personally, I'd say that the customer doesn't really care about how you do your testing - as long as the end result is of good enough quality that it doesn't interfere with their intended use of the software or system.

Here's a secret: Nobody cares.

Some lawyers may pretend to care when they are paid to do so, but the reality is that I don't know of a single tester who has ever been charged with manslaughter for failure to document critical test cases that may have caught the bugs that resulted in loss of life.

The FDA doesn't care. Their lawyers tell them that they should care about documented tests and results, so they impose regulations. But the FDA doesn't really care about your documented test cases or test processes. What they really care about is that a minimum standard of due diligence has been performed to demonstrate that a particular product will not harm anyone. That's it in a nutshell. You may not even need testers to achieve that level of quality either.

I could go on, but I think I made the point - nobody cares how you do your testing as long as the collective development effort produces a quality product. You remember "quality" - it's that thing that project managers leave off their project management triangle.

So, if we disregard the premise that testing needs to happen in a "waterfall" fashion, what's left? Well, what do we know? We know that (1) we don't have a lot of time, and (2) we have a lot of features to cover. Oh, and it's also very likely that (3) you have limited resources and people - most likely fewer than you'd like. (Hey, if we're screwed on the 'time' factor, why not get screwed on the 'cost' factor too, right? ;))

So where does that leave us? Time to innovate! Time to become agile! Talk to your customers; collaborate with your developers and business analysts/product managers; learn the software and functionality as you design and execute the tests because there really isn't time to do those things separately.

Risk-based testing (RBT) works on the premise that something bad or undesirable could happen, so why not start by looking in those places first? RBT is also an appropriate response to the practical impossibility of complete testing coverage for any useful software program with more than 2 lines of code. That is, if it would take an infinite amount of time to test something exhaustively, how about narrowing it down to just the areas we think might be risky in some way (e.g. popular, critical, complex, and so on)?
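The prioritisation idea behind RBT can be sketched in a few lines. Assume each feature gets a rough score for likelihood of failure and impact if it fails - the feature names, scores, and the simple likelihood-times-impact formula below are all illustrative assumptions, not a standard:

```python
# Sketch of risk-based test prioritisation: score each feature by
# likelihood of failure and impact if it does fail, then spend your
# limited testing time on the riskiest areas first.
# Feature names and scores here are invented for illustration.

features = [
    {"name": "checkout", "likelihood": 4, "impact": 5},   # popular AND critical
    {"name": "report export", "likelihood": 3, "impact": 2},
    {"name": "login", "likelihood": 2, "impact": 5},      # critical
    {"name": "help pages", "likelihood": 1, "impact": 1},
]

for f in features:
    f["risk"] = f["likelihood"] * f["impact"]  # simple product; weight as needed

# Test in descending risk order until the time budget runs out.
test_order = sorted(features, key=lambda f: f["risk"], reverse=True)
print([f["name"] for f in test_order])
# -> ['checkout', 'login', 'report export', 'help pages']
```

The point isn't the arithmetic - it's that the ordering makes the "where do we look first?" conversation explicit, so the team can argue about the scores instead of silently testing whatever is in front of them.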

What else can you do? You have a lot of features to cover in a short amount of time. Well, start by ditching all the test documentation requirements and focus on what is necessary to establish a minimum level of understanding of what's going on.

Do you really need all those documented steps for every test case? No, you don't. Unintelligent automated systems and robots need step-by-step instructions; humans don't. And most humans don't follow the steps consistently anyway, so just let that one go. Instead, describe the scope of the testing you want to do using checklists and decision tables. The important things need to be discussed in person to ensure clarity of requirements and information, but everything else should be fine in point form.
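A decision table doesn't need a special tool, either - a plain data structure is enough to describe scope without step-by-step scripts. The conditions and outcomes below are invented for illustration:

```python
# Sketch of a decision table as lightweight test documentation.
# Each rule maps a combination of conditions to an expected outcome;
# the conditions and outcomes are made up for this example.

decision_table = [
    # (logged_in, cart_empty) -> expected behaviour
    ((True,  False), "show checkout button"),
    ((True,  True),  "show 'cart is empty' message"),
    ((False, False), "redirect to login"),
    ((False, True),  "redirect to login"),
]

def expected_behaviour(logged_in, cart_empty):
    """Look up the agreed behaviour for a combination of conditions."""
    for conditions, outcome in decision_table:
        if conditions == (logged_in, cart_empty):
            return outcome
    # A missing combination is itself a finding: a gap in the table.
    raise ValueError("combination not covered by the decision table")

print(expected_behaviour(True, False))  # -> show checkout button
```

Four rows replace four scripted test cases, and a missing row jumps out at a glance - something a pile of prose test steps rarely makes visible.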

Worried about how you will capture the test results if you are denied the Pass/Fail test status column? Work it out! Figure out a solution that fits your project's (and organisation's) needs. There are a number of far more useful alternatives out there - e.g. application logging, screen captures, note taking, and so on.
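One such alternative, sketched below, is a timestamped session log of free-form observations instead of a Pass/Fail column per scripted case. The tags like "BUG" and "QUESTION" are just an assumed convention - use whatever your team agrees on:

```python
# Sketch of session-based note taking as a test record, in place of a
# Pass/Fail status column. Tags ("SETUP", "BUG", "QUESTION") are an
# invented convention for this example.

from datetime import datetime, timezone

session_notes = []

def note(tag, text):
    """Record a timestamped observation for the session report."""
    session_notes.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "tag": tag,
        "text": text,
    })

note("SETUP", "build 1.4.2, clean user profile")
note("BUG", "export dialog freezes when the filename contains a comma")
note("QUESTION", "is the 30-second timeout on login intentional?")

# A quick summary for the daily debrief:
bugs = [n for n in session_notes if n["tag"] == "BUG"]
print(f"{len(session_notes)} notes, {len(bugs)} bug(s) found")
```

The notes capture what actually happened - including questions and surprises that a binary Pass/Fail column would flatten away.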

If you don't have enough time to complete a project using the same approach you've used in the past, it's time to try something new. Time to think up new solutions and new processes, and to identify or create new tools to help you reach those goals.

The end goal is a high-quality product - or maybe just "good enough" quality, depending on your situation. The end goal is not to produce sparkling, publishable test documentation. (If it is, consider changing your title from "tester" to "test biographer".)

Don't lose sight of what's important. What will you do with the time you've been given? How will you choose to react to the situation?
