Competitive Testing: The Good, The Bad and The Ugly

The notion of competitive testing has been spinning around in my brain for the past few days and I’m not sure how I feel about it. Maybe putting it in writing will help.

I’m naturally averse to competition as a catalyst for self-importance. While I may feel good about having done well in a competition, I don’t feel the results define me as a person. I think this goes back to high school, where a B was just as good as an A in my opinion. “So what if I didn’t know the date of <insert historical event>? At least I understood the concept of what happened and why it’s important!”

I suppose we should start with “The Good”. Competitive testing in the form of bug battles is a great way to sharpen your skills, meet other testers and maybe learn a thing or two along the way. I have no problem with this.

So we move on to “The Bad” and, of course, this is completely subjective depending on the person.

Recently I’ve been involved in a crowdsourcing movement where most of the pay is based on finding bugs through exploratory testing. Basically, you throw a bunch of testers into an arena with a thin test plan and a broad scope and say, “Go find bugs.” While I believe this is a great way for a company to find problems with its software at a much lower cost than a salaried QA department, I’m looking at it from the other side.

Now, I don’t want to come across as being against the crowdsourcing movement. Quite the contrary, in fact: it has been a boon to my personal finances and networking connections at a time when I truly needed it. I was able to navigate the system successfully in a short time and make it work for me. But I have concerns for truly skilled testers in a similar position who may not be able to navigate it as successfully as I did.

The problem I see is that the client will pay for a bug, but the tester needs to be the first to find and report it in order to get paid. Others may find the same bug, but they weren’t quick enough, or their internet connection wasn’t fast enough. Miss by a millisecond and you get nothing for the effort, and in some cases you’ll be reprimanded for entering a duplicate. Does this mean the faster tester is the better tester? I don’t think so, and I can’t help but wonder how many testers take this situation as a hit to their credibility.

Now for “The Ugly”. I am a firm believer in being clear and concise when reporting a bug, building a test case, and so on. Reporting “Got an error” doesn’t cut it for me. In my opinion, a tester needs to give as much information about an issue as he or she can gather within the time constraints imposed by the scope: steps to reproduce, expected versus actual results, the environment, and any supporting screenshots or logs.

I have seen many bugs reported in these crowdsourcing projects where the first report filed did not contain quality information about the problem. I’m sure the client or PM had to go back and request more details. Yet that report is the one accepted, while someone else may have found the same bug and submitted it later with actionable, quality information. That tester gets nothing for the effort of creating a quality bug report.

Now, of course, these are extreme examples and the exception to the rule. For the most part I’ve seen crowdsourcing work very well when it’s well executed: I’ve watched the community self-regulate and deliver an overall quality report on the maturity and market-readiness of an application. It’s these edge cases that have had me thinking.

I should also mention that I’ve only been involved with this type of crowdsourced testing for just under two months now. And as I had suspected, more and more “high profile” projects are being made available based on my performance here. Even for a veteran tester, it’s great to have such exposure to a diverse set of projects (for example, I’m about to get a taste of mobile testing in the coming week).

Before posting this blog entry, I raised my concerns with managers at the crowdsourced testing company I am working with. I’m pleased that they recognize the need for continued improvement and have concrete plans to level the playing field for all testers and allow quality testing to rise to the top, thus improving the overall quality of the service they are selling. I truly believe that they care about quality and, just as importantly, about fairness between testers and customers. It’s refreshing to know that they genuinely strive to raise the bar.


3 Responses to “Competitive Testing: The Good, The Bad and The Ugly”

  1. Inder P Singh Says:

    Great post on the competitiveness of finding bugs.

  2. John B Says:

    Crowdsourcing takes away one benefit… repeatable tests. So what happens during the next release? They reinvent the wheel and start over with the tests. So while the initial savings may be enough to make this attractive, the overall concept does not lend itself to a long-term, sustainable testing methodology or the retention of knowledge.

    Nice Read. John B.

    • John Kotzian Says:

      That all depends on the company and how they utilize the service. Many companies I’ve seen provide test plans that must be completed each sprint. These companies “get it”. Not only do they get testing across the broad spectrum of technologies their software is supposed to run on, but they also get wide-scale regression coverage by providing the test plans as part of the scope.

      Again, it really depends on the Test Manager at the client company and how they choose to run the project. If they care, they will provide tests. If not, then you may be correct.
