Friday, June 29, 2012

(Dark) Secret of a Great Tester


I recently read a Finnish article (http://www.taloussanomat.fi/ihmiset/2012/06/26/hyvan-tyontekijan-synkka-salaisuus-ilmainen-ylityo/201232164/137) from a newspaper which said the “dark secret of great employees is free overtime work”. This started a chain of thoughts in my mind that I would like to open up in this blog post. At the same time, I am hoping writing will clarify my thinking.

The article refers to another article which estimates that the workers of the Finnish labor union Pro put in about 2 million unpaid hours yearly. Specialists and people in managerial roles are said to accumulate the most unpaid extra hours. Some questions that popped into my mind:
- Are they working at their fullest through the whole day, or do they take additional “breaks” (like checking their Facebook messages)?
- Is this a problem of bad management or, for example, of the employees’ own time management skills?
- Do these people record their hours, or is this based on gut feeling? If they record the hours, how accurately is it done?
- What are the reasons they work extra hours? Why don’t they get paid for them?
- Is anyone looking at what gets done instead of how many hours are worked?

When I look back at my work history, I see myself being one of those doing long days, being enthusiastic about the product we have been building/testing, having very late and early meetings/e-mails/discussions with customers, taking responsibility for doing a great job and working for the team. So, did I work extra hours? Yes, always when I saw it was needed. So, did I get appreciation for it? Yes, always when it was seen by the team/management/customer. So, did I get paid for those hours? Yes, always when I reported the hours in the ERP. Obviously, I could not be paid for hours I didn’t mark.

Do I study “on my own time”? Of course! Do I think my employer should pay for this? No, because it’s something I am doing mostly for myself. I don’t expect them to compensate me for what they didn’t ask me to do, but I will greatly appreciate it if they do. I don’t want to be someone who is just hanging around and executing test cases someone else has written. I don’t want mediocrity. I want to carry kittens from burning houses. I want to throw myself over an exploding grenade. I want to become great in whatever I am going to do.

So, what’s the problem here? According to the article, this should not be the case and even a person who is not going to do all that should be considered a great member of staff.

The article also raises the concern of people not getting any kind of compensation for their work. I don’t see this as relevant to the “dark secret”, because clearly it’s not the secret of someone who is successful. That is a person who is being abused. That is about bad management, not greatness. Maybe this is what the journalist actually meant, but in that case she should rewrite the article.


I am not saying everyone has to do it (= work hard; really hard). I am saying you have to do it if you want to be great. It’s no dark secret that all great people have put in huge amounts of work. It’s easy to say “oh, but Einstein was so intelligent” and do nothing to achieve even a bit of what he did. If you don’t want to do it, don’t do it. But don’t come asking for the same recognition as those who do.

Thursday, June 28, 2012

Testing Challenge - Puzzle #1, part 2


This is the second part of the first (http://jarilaakso.blogspot.com/2012/06/testing-challenge-puzzle-1.html) puzzle I wrote. 


The reason for the second part is that James Marcus Bach (http://www.satisfice.com/blog) found a solution to the original one. His solution was quite different from what I had in mind, and he advised me how to change the wording. Nicely solved, and thanks for the tip!


Please remember, I don't want you to ruin the puzzle for anyone in the comments section so if you want to solve this, send me an e-mail or ping over Twitter so we can sort out the details. Thanks! 


So here goes!


What comes next in the series? Replace the X's with the correct letters to continue the series, and present your logic:
gra, avar, rvtXX

Wednesday, June 27, 2012

15 CAPTCHA Testing Ideas


This is a continuation of my friend Santhosh Tuppad's blog entry (http://tuppad.com/blog/2012/06/26/captcha-testing-dedicated-to-andy-glover/). He decided to make a fantastic list of things that could be used when testing a CAPTCHA. I read it, liked it, and decided I want to add my few cents to the topic! Before I go to the test ideas, I want to note that a CAPTCHA will not increase security and should not be used for such a purpose.

Now, I am sure some of these things will overlap with each other and/or with Santhosh's list. This is fine by me, because I wrote the list down as things came to my mind:

1. Don't pass the information to the server in plain text.
2. Don't let the same CAPTCHA repeat (within a reasonable interval).
3. Try sending a decoded old CAPTCHA value with an old CAPTCHA/session ID.
4. Catch the HTTP(S) request and check all parameters (for example, a possibly encrypted CAPTCHA ID).
5. Server-side validation for inputs (e.g., injections like here http://osvdb.org/show/osvdb/82267).
6. Dynamic noise in the CAPTCHA is harder to break automatically than static noise (however, obscurity is not security).
7. Avoid the possibility of "random success", like selecting an answer from a list.
8. If you are using a visual CAPTCHA, check here whether it's any good: http://www2.cs.sfu.ca/~mori/research/gimpy/#results.
9. Test the CAPTCHA with some CAPTCHA-breaking tools to see if it's any good.
10. If you have access to the code (or someone can tell you technical details), check the Open Source Vulnerability DataBase for known issues in the CAPTCHA: http://osvdb.org/search/advsearch.
11. Test that the session is destroyed after a correct phrase is entered. Reusing the session ID of a known image could make it possible to automate requests to the page.
12. Try to avoid using I, l, 1, 0, o, O, etc., because users have problems with them.
13. Can your granny register? Should she be able to?
14. Will the CAPTCHA make people disappear from the service?
15. Do you actually need a CAPTCHA or could you use another means for the purpose you had on your mind?
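Idea #11 above can be sketched in code. This is a toy in-memory store, not any real CAPTCHA library; `CaptchaStore`, `issue`, and `verify` are hypothetical names used only to illustrate destroying the session once a correct answer comes in:

```python
import secrets

class CaptchaStore:
    """Toy in-memory CAPTCHA session store (illustrative only)."""
    def __init__(self):
        self._sessions = {}  # session_id -> expected phrase

    def issue(self, phrase):
        """Create a session for a freshly generated CAPTCHA phrase."""
        session_id = secrets.token_hex(8)
        self._sessions[session_id] = phrase
        return session_id

    def verify(self, session_id, answer):
        expected = self._sessions.get(session_id)
        if expected is not None and answer == expected:
            # Idea #11: destroy the session on success, so a bot cannot
            # replay the same known-good session/answer pair.
            del self._sessions[session_id]
            return True
        return False

store = CaptchaStore()
sid = store.issue("x7krq")
assert store.verify(sid, "x7krq") is True   # first use succeeds
assert store.verify(sid, "x7krq") is False  # replay must fail
```

If the second `verify` call succeeded in your implementation, you would have found exactly the automation hole described in idea #11.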

Saturday, June 23, 2012

Challenging claims, part 1 – Test Automation


Yesterday I was reading this http://qatestlab.com/knowledge-center/qa-testing-materials/can-automated-testing-replace-the-smart-software-tester/ article and noticed it contains many claims I don’t (fully) agree with. Before I started writing this post, I chose to read a few more of their posts to get a better idea of what they are saying – maybe I had missed something. The more I read, the more I am convinced this is a company selling fake testing (putting “QA” in the name of the company doesn’t help at all with this feeling). I hope I am wrong about that and there is some misunderstanding somewhere. However, in the meantime, I will challenge some claims from their blog, starting with the article I quoted above.

“Beginning the automated testing before the software can support the testing will only make more work through additional maintenance and rework.”

Firstly, sentences like this are usually of no value to testing, in my opinion. To me, it’s like saying “don’t write code too early” or “wasting money on irrelevant documentation is useless”. Nonetheless, there is also a mistake here. Test automation can be written before any product code is done, and it can still be rather free of maintenance and rework. If you don’t believe me, ask a TDD evangelist.
(Yes, I am making an assumption here that the author meant “beginning to write the test automation” when he wrote “beginning the automated testing”. I am also assuming he means test automation scripts when he writes about test automation. I wish to be corrected if this is not what he meant.)
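To illustrate the TDD point: a test can be written before any product code exists and then survive unchanged once the code arrives. A minimal sketch, with hypothetical names (`slugify` is not from the article):

```python
# The test is written first, before any product code exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# The implementation is written afterwards, driven by the test.
# The test itself needed no maintenance or rework along the way.
def slugify(text):
    """Lowercase the text and join the words with hyphens."""
    return "-".join(text.lower().split())

test_slugify()
```

The point is not that this is how all automation should be built, only that "automation before code" does not automatically mean extra maintenance.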

“Eventually, automated testing takes the daily monotonous work of conducting the same action over and over away from software testers.”

I have heard this multiple times. It’s usually said by testers who execute test cases someone else told them to run. Sometimes it’s said by testers who, for example, want to create a lot of user accounts at the start of each sprint.
Testing is not monotonous. It’s monotonous only when done wrong (in my opinion). Think of it like sex. It most likely gets boring if you only do the missionary position under the blanket on Saturday evenings when the lights are off. Maybe not my best analogy, but surely you see the connection?
What I would like to ask in this case is why I am doing the same things over and over again, could I stop doing it, what other means there are, etc. I would also like to ask the author what he considers “automated testing” in this context.

“Automation will repeat test after test for days on end, never failing to conduct them in exactly the equal way.”

I’ve never seen this happen in real life. I’ve never heard of this happening in real life. Automation fails because of source code changes, infrastructure updates, porting to different systems, timing errors… even for the same reasons the code it’s supposed to test fails! There are false positives, false negatives, blockages, crashes, etc. And just to mention it, in the universe I see around myself, it’s pretty much impossible to conduct a test in the exact same way it was done previously. Sounds philosophical? Good, it means you started thinking about it!
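A "pass" is also not the same as "nothing broke". Here is a sketch, with invented names, of a weak assertion that keeps passing run after run, in exactly the same way, while the code under test quietly violates its rule:

```python
def apply_discount(price, percent):
    # Intended business rule: discounts are capped at 50%.
    # Bug: the cap is missing entirely.
    return price - price * percent // 100

result = apply_discount(100, 80)

# Weak assertion: "the price went down" passes on every run
# and never reveals the missing cap.
assert result < 100

# A check encoding the real rule would fail and expose the bug:
#     assert result == 50   # actual result is 20
```

Repeating that weak check "test after test for days on end" produces a perfectly green dashboard and zero information about the defect.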

“Automated testing never gets tired or burnt out or forgets to do a step.”

Indeed, automation doesn’t get tired or forget, but it does, however, fail. We don’t call it the computer getting tired; we call it, for example, a memory leak or hogging the CPU. We don’t call it forgetting to do a step; we call it missing a step, having a step that was changed, getting a timeout on a step… Testing is (in most cases I am aware of) not an endless struggle to repeat the same things while hoping something will (not) break at some point.

“Automated testing can just confirm that the software is as good today as it was yesterday.”

Automated testing cannot confirm that. Just like testers don’t assure quality. This was the initial claim which led me to believe you are selling fake testing. If you really claim you/your automation can confirm this, you are either lying or ignorant. Good test automation can "provide some confidence that nothing really big and obvious broke", like Matt Heusser wrote.

Wednesday, June 20, 2012

Metrics - reply to Mike Talks

This post was originally supposed to be a comment in Mike Talks' blog (http://testsheepnz.blogspot.co.nz/2012/02/are-we-there-yet-metrics-of-destination.html) but it became a bit lengthy so I decided to post it here...

Hi Mike,

There are great posts about metrics already available, so I don't want to dig too deeply into the subject. Nonetheless, I would like to comment on your interesting blog post!

I like the start with "are we there yet"! I wish more bloggers would make such associations between "real life" and software projects / testing.

When you mentioned you are estimating how long testing will take, do you mean a case where you and the project manager have a long history together and you both know what kind of testing you will do in that time? I am asking because testing is never done, and I don't think that is too secure a way for a PM to create a testing budget. Will he/you have the code done at that point? Do you (as in, you with the PM) have a lot of history with the same product?

"I know some people hate recording hours on a project. I personally think it's vital, because it helps a manager to determine spend on a project."

Yes, in some (possibly even most) cases this is very good. From a testing point of view, I am usually interested to know how much time (e.g., in 90-minute sessions) is used to test certain features/functionalities. For example, if no bugs are found in a week, I could start asking questions like why we are not finding bugs and whether we could use testing somewhere else more effectively. (If the goal were to raise bugs.)
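As a sketch of what I mean by tracking session time per feature (the session log and feature names below are invented for illustration):

```python
from collections import Counter

# Hypothetical log: (feature tested, bugs found) for each
# 90-minute exploratory testing session.
sessions = [
    ("login", 3), ("login", 1), ("search", 0),
    ("search", 0), ("checkout", 2),
]

minutes = Counter()
bugs = Counter()
for feature, found in sessions:
    minutes[feature] += 90
    bugs[feature] += found

for feature in minutes:
    print(f"{feature}: {minutes[feature]} min tested, {bugs[feature]} bugs")
# Plenty of time on "search" with zero bugs raises exactly the
# question above: should that testing effort go somewhere else?
```

The numbers themselves decide nothing; they are only there to prompt the question.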

"What about the number of test requirements tested and the number passed? Personally I like this metric, as it gives a good feel of how many paths and features we've tested, and I do think it's useful to keep track of this (as long as it's relatively painless)."

In many cases this can be a good thing to track. There can be, for example, legal requirements that need to "pass". However, like you point out in your text, it can also be very misleading. Even if 100% of the requirements pass, it doesn't mean the product is good or free of critical bugs.

"Another metric I've seen is simple number of test cases run, and number passed. ... However it's more than likely a lot easier to track this number than the number of requirements if you're running manual test scripts which are just written up in Word (unless you're an Excel wizard)."

This is easy to track, but I don't see it really telling anything interesting. What would be interesting to know is how you use this metric. Is it used to raise questions or to make decisions (inquiry or control)?

One thing I want to stress: passed tests can be dangerous to follow because they don't tell us much (if anything) about the product. They might even give management false confidence in the stability/quality of the product.

"What about measuring defects encountered for each build?"

I like to see how these change over time and what questions the change might raise. Just like your text explains a situation where earlier bugs were blocking further testing.

When it comes to the regularity of metrics, I think automated/scripted systems would be great for producing numbers in many cases. Numbers can raise good questions, but I would prefer not to use too much testing time to collect them. Depending, for example, on project size, of course.

"I had to keep daily progress number updates. After day 5 I had still not finished script 1. In fact after day 8 I still wasn't done on script 1. On day 10 all 3 of my scripts were completed."

Maybe the progress should have been communicated differently? It sounds like there was too big a piece to be reported as one progress step. I don't know what the script involved, but here is an example breakdown for a UI check:
- Needed (navigation) controls are coded
- The flow between controls is automated
- The assertions/verifications/requirements of each successful step are automated
- Minor changes to produce script #2
- Minor changes to produce script #3
- Testing, fixing and re-factoring the scripts

As we saw, because your comments helped the manager understand there was nothing to worry about, the metric was (close to) useless. Your explanation was a better report, and the number could have been tossed away.

Thursday, June 14, 2012

Testing Challenge - Puzzle #6

My mind was full of riddles when I wrote up these puzzles!

Please remember, I don't want you to ruin the puzzle for anyone in the comments section so if you want to solve this, send me an e-mail or ping over Twitter so we can sort out the details. Thanks!

10 philosophers meet and one decides to make a bet. He says he will put all ten of his 100-euro bills in a basket. Then each of the philosophers will take one and only one 100-euro bill; he will be the last one. After he takes his bill, the basket will still have a 100-euro bill inside. If he succeeds, the other philosophers will each pay him 100 euros. If he doesn't succeed, he needs to give them the money. What is he going to do?

Testing Challenge - Puzzle #5

So here we go again with a puzzle that will require you to send me questions in order to solve it. I'll start these with an easy one, so you might get this even with the first question.

Please remember, I don't want you to ruin the puzzle for anyone in the comments section so if you want to solve this, send me an e-mail or ping over Twitter so we can sort out the details. Thanks!

There was a road construction project, and a lot of people didn’t like it. After a while of construction, people on pension started calling an old lady to complain about the construction, even though she wasn’t part of the firm and had nothing to do with it. Can you explain why this happened?

Testing Challenge - Puzzle #4

This time we will talk about trains. Some of you are more familiar with them than others, which might give you an edge, but anyone with good questioning skills will solve this.

Please remember, I don't want you to ruin the puzzle for anyone in the comments section so if you want to solve this, send me an e-mail or ping over Twitter so we can sort out the details. Thanks!

Two trains are heading toward each other and will crash in a matter of seconds. There are no secondary tracks, and the brakes don't work. How can the accident be avoided?

Testing Challenge - Puzzle #3

This is the second lateral puzzle. I got huge help from Ilari Henrik Aegerter (www.ilari.com/blog/), James Bach (www.satisfice.com/blog/), Pekka Marjamäki (www.how-do-i-test.blogspot.com/) and Michael Bolton (www.developsense.com/blog/). I'd like to thank them for helping with the setup, clarifying a lot of questions, bringing insights and of course a lot of good time!

Please remember, I don't want you to ruin the puzzle for anyone in the comments section so if you want to solve this, send me an e-mail or ping over Twitter so we can sort out the details. Thanks!

There is a 15-year-old boy studying in a high school. He loves ice hockey and is the best player on the team from his year. The team has been excellent in the high school championships. Recently, the dean and the teachers’ council had a meeting where they decided he is so good they must dismiss him from the team. Explain why.

Testing Challenge - Puzzle #2

This is the first (they might get a bit harder after the easy start) of the "yes/no/not relevant" kind of lateral puzzle I am publishing. More will follow. Please remember, I don't want you to ruin the puzzle for anyone in the comments section so if you want to solve this, send me an e-mail or ping over Twitter so we can sort out the details. Thanks!

In the world championships of relay running in 2654, the Chinese team will be the last to cross the finish line (as in, the slowest team). However, they still won. Explain why this happened.

Testing Challenge - Puzzle #1

After thinking about this for a long time, I decided I will start publishing puzzles I have made. Because I keep coming up with new ones, most likely I will add them here every now and then.

I have not yet fully decided, but my initial idea was to have problem-solving/mathematical/logical puzzles in the blog so that everyone can try to solve them here, and lateral/creative puzzles presented only with the setup. If a reader is interested in solving a puzzle of the latter kind, we could do it, for example, over Skype or Twitter. I am also planning to add these to the TdT Cluj-Napoca (if you don't know what that is, check out http://tabaradetestare.ro/) workshops, but maybe more about that later.

Please remember, I don't want you to ruin the puzzle for anyone in the comments section so if you want to solve this, send me an e-mail or ping over Twitter so we can sort out the details. Thanks!

So here is the first logical one!

Continue the series (as in, replace the X's with the correct letters) and present your logic:
gra avar rvtXX