Saturday, July 30, 2016

Quality and Solo work

With the Agile2016 conference, I got yet another glimpse into how a group of programmers works together, and how that shows up in results. Llewellyn Falco wrote a blog post on five ways to do decision trees in C#. I feel it nicely shows how you can have many different solutions, and how some of them have more limitations than others, but it doesn't show what I always find fascinating: how the deeper understanding behind having the five options was born.

Each morning, there would be some group working on a coding kata. For the piece of code in question, the mob in the morning came up with only one solution. We often talk about mobs as places where a bunch of great ideas emerge, but in this particular time-limited mob, they got to one solution.

What then created the others was that the limitations of the first solution left a nagging feeling for one of the participants. With a little sink-in time, solution 2 emerged. Another mob or pair came together to implement that, and then the magic started.

From two competing solutions, they quickly got to five competing solutions and a grounded discussion on which of the actual implementations they would consider the best.
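To make the idea of competing solutions concrete, here is a minimal, hypothetical sketch of two shapes a small decision in C# can take - nested conditionals versus a data-driven lookup. These are my own illustration, not the five options from Llewellyn's post.

```csharp
using System.Collections.Generic;

public static class ShippingDecision
{
    // Shape 1: nested conditionals - quick to write, harder to extend.
    public static decimal RateWithIfs(bool isExpress, bool isInternational)
    {
        if (isExpress)
        {
            return isInternational ? 40m : 20m;
        }
        return isInternational ? 15m : 5m;
    }

    // Shape 2: the same decision expressed as data - branches become a lookup table.
    private static readonly Dictionary<(bool Express, bool International), decimal> Rates =
        new Dictionary<(bool Express, bool International), decimal>
        {
            { (true, true), 40m },
            { (true, false), 20m },
            { (false, true), 15m },
            { (false, false), 5m }
        };

    public static decimal RateWithLookup(bool isExpress, bool isInternational) =>
        Rates[(isExpress, isInternational)];
}
```

Both return the same answers; what differs is what each shape makes easy or hard later, which is exactly the kind of trade-off a grounded comparison surfaces.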

The rule in a mob that when you have two competing solutions, you do both, just made even more sense. With two available, the creativity of the group steps into play and produces more.

My lessons learned on watching this unfold:
  • Actively try to find several ways to solve a problem
  • Individuals with a nagging feeling are a powerful driving force for improvement
The niceness of the solution (an aspect of technical quality) improved because of a persistent individual. But as that idea of a different solution was fed back into a group, the group got much better solutions out than the individual alone.

With mobbing, we keep repeating the idea that it is not about the most you can do, but about the best you can do. We need to recognize more often how much of an impact passing an idea around through different minds actually has on what ends up in the code. An individual is a trigger, but the group is needed to take that trigger to its full potential. Solo work misses a lot of this. Except of course for the wonderful people who keep talking about the five voices in their own heads.

Thursday, July 28, 2016

I paid to speak. Again.

Sometimes I think I have learning disabilities. I again submitted to Agile201x. I again got accepted. I again paid 1200 euros for the flights. I again gave up a week of my vacation to be here. Why do I do that?

I wonder this in particular with having 15 amazing, wonderful and brilliant people in my session. Out of 2500.

I've grown to realize that I'm seriously introverted, to a point where I experience social anxiety if I don't manage my situations well. I've just paid to live a week in what I could frame as a nightmare: 2500 people, mostly strangers and very few of my peers I would find easier to connect with. All moving around in a mass of people.

I usually speak at conferences because the personal connections to people I can learn with are valuable. Speaking enables people to approach me, to mention they share some of my interests (yay!), and after that all the connection problems vanish for me. I love talking about things I love, and that is probably why no one ever believes I'm introverted.

With that said, I believe there is a big problem we have with conferences: a lot of them make the speakers pay to speak. This means that only those of us with the privilege to afford paying all this and taking time away from work have a chance of getting our voices heard at the conferences.

I believe we are losing two main categories of voices:
  • People who have done this enough to know that paying to speak isn't necessary, and that you can choose the conferences you speak at more smartly.
  • People who can't take the financial burden, because their companies don't support them and they are not self-employed needing to keep themselves visible for sales purposes
As participants of conferences, you should care about this. It means your conference has chosen to give you people who have something to sell. There are some pretty amazing consultants out there selling their knowledge to you, so it might not be a problem. But it skews what we get to hear about and who we get to hear from.

The big conferences could afford to pay their speakers. I don't think they will, because this works for them. 

Here's the score from the session I did with Llewellyn Falco on Monday. 



Llewellyn is a consultant; he has an incentive to sell his services. I don't. I'm an amazing tester with real, daily, long-term experience pairing and mobbing with people who are hesitant about all of this. For the session, our differences made it great. At least I'd like to believe so.

My company does not pay for my travel. My company gives me 5 days away from "real work" a year, and for all the other conferences I'm on my own time. I *know* this is expensive.

The #PayToSpeak model is erasing voices close to mine - pretty much all of those with less insistence and privilege than I have. We have a lot of work to do towards real diversity and representation of the industry.

(Fortunately, the rest of my year is not pay to speak. I warmly recommend seeking speaking engagements with the TestBash family of conferences or European Testing Conference. The Agile Testing Days family of conferences is not bad either, but is likely to cap your hotel stay inconveniently. Also, ask to be paid for expenses. Even then, you're often effectively loaning the money to the conference, as they pay it back afterwards.)

Exploratory Testing: It's not what you think it is

I just finished my small session on "Exploratory Testing an API" at the Agile2016 conference. The session led to two insights I feel compelled to share.

Imagine being handed the Google search box and being asked how you would test it. You come up with all sorts of different inputs quickly. Some of the very basic searches (character sets, number of words, types of search criteria) might lead you to think about something more, like exact matches or searching for something that has connected functionality, like airline codes or currencies or vocabulary definitions. You know that's not all the testing for Google, and you probably consider that it is not the best way to test Google search overall. But ideas start flowing quickly.

When presented with an API, with a "fill in the blank" type of place, we freeze. The same things we could easily come up with while looking at a GUI, we don't come up with here, and we end up blankly staring at the IDE, which is really just a GUI for using an API.

But the more relevant thing I started thinking about is how much of a disservice it is for all of us testers to let exploration examples always be guided into "fill in the blank" types of exercises. I had someone in the audience who clearly decided that was the purpose of the exploration, and surely, it can be one of the purposes of exploration.

But as a professional exploratory tester, I must say that I find that the examples of quick tests we're presented at conferences poorly reflect my work. I look at an API, and the only reason for me to fill in the blanks is to acquire information - about the environment this all sits in, about why anyone would want to use this, about what this is for.

As my session gravitated this time towards filling in the blanks and less towards the actual exploration to learn, I feel even more strongly about this. I need better ways of helping testers onto the right track.

Because the testers who stay on the "fill in the blanks" track, thinking just about inputs, should have been out of a job a while ago already.

It's not about finding any problems, it's about finding problems that matter.

I find it much more relevant to find out why a product isn't selling (well, it's free, so selling is not really what I mean here...) and how it could be easier for new people to get started with it, than to figure out that the combination "a§" ends up incorrectly saved in a text file.



A Quality Conundrum

This post is a goodbye to my latest addiction of the week: Pokemon Go. Like so many others, I've played it and enjoyed it.

I've had three typical scenarios of use to keep me engaged in the game.

  1. Connection errors on all devices 
  2. Crash on opening on iPhone 4s
  3. Restarting the game at every sight of a pokemon and after every catch of a pokemon on iPad 2
The first two days, I tried connecting every now and then, and never made it into the game. But the game was still interesting, and I kept coming back to it. And at some point, it actually allowed me to log in. 

My joy of it working was premature though, as I was naturally thinking I'd like to play it on my phone. It just seems it does not agree with my phone: with every opening, the first click in the actual game closes it with a crash.

In the last five days, I've been able to level up to 11, but all of this comes with a great cost. A minute of gameplay means at least a minute of wait time.

Thinking about this has led me to understand that there is a quality conundrum. Clearly it hasn't mattered much to me that the game is completely or half unavailable. The only thing I recognize the problems have done to me is that they postponed my commitment to the game, as in paying real money for something. And now they bring out the rational side of me that deletes the game as a time-waster.

The game apparently has loads of users regardless of its problems. Released with problems, it has made its creators millions (missing a reference) that it wouldn't yet have made if it had been kept in development without releasing. People are clearly enjoying it. The millions they might be losing in addition to the millions they are already making seem irrelevant.

My time did not cost them anything. It did not cost me enough to consider it relevant for the first week. But multiplied by the number of players, the cost has been high, yet distributed to come out of many pockets in small streams.

I can only hope that acceptance of this level of "it works" has not come here to stay. But I fear that changing it is already too late. We accept problems without compensation. Our lost time does not come out of the creators' pockets. And in some contexts like this, the lost revenue does not matter in relation to the revenue that is already flowing in.

Contexts have never been as clearly different to me as they are today. And I worry about this becoming the best practice that starts defining the quality of my life. 



Sunday, July 24, 2016

Calling automators good testers

I could have seen this post coming as I was writing my previous post - a longer explanation of why I would go about calling some automators good testers when they show no understanding of what good manual (exploratory) testing actually does.

The core of it is that I believe there are two kinds of testing: testing as artifact creation (automation, documentation) and testing as performance (exploration). When you're good at one, you might not be good at both. But both are testing. The first focuses more on known knowns and known unknowns, whereas the latter focuses more on unknown unknowns and unknown knowns.

I believe you can't spend decades automating a process (with good results) without learning to understand it and its problems and solutions. While your understanding might not be perfect or complete, it can still be good. And looking at exploratory testers not actively going into the realm of automation (pairing with programmers qualifies - friends with pickup trucks is a valid approach), that understanding is not perfect or complete either.

When test automators give the impression of believing in 100% automation, they often compare only things within testing as artifact creation. You learn stuff around creating those artifacts, and you include them. The manual testing they compare to is the manual testing exploratory testers loathe - the test case driven dumbed down commodity testing that drives down skill. 

I've come to think I understand this a little better these days through two friends. 

Pekka Klärck is a good friend of mine, and the creator of the Robot Framework. We've been members of the same local community for a long time, and in Finland we're lucky to get along even if we disagree heavily. Where I've been an exploratory tester most of my career, he's been a test automator most of his career. We can easily get into the arguments around manual / automation, and respectfully agree to disagree. But with all of that, he has taught me to respect test automators as a different specialty, who are not any less of a tester than the likes of me (exploratory testers).

Llewellyn Falco is another friend, and the creator of ApprovalTests. Working side by side with him, I've come to realize that there are generalist developers who are really good at testing, and who become better when exposed to good exploratory testing. I've also learned that the testers he's met over the years are nothing like what I perceive a skilled tester to be.

Hanging out with people different than you can be exhausting and frustrating. They might walk over you, fail to hear you and you will need to try again. 

So we have four types of testers (at least):
  • commodity testers
  • exploratory testers
  • test automators
  • programmers becoming good at testing
  • (test managers / architects - working in scale)
When we argue about what a tester must know and learn next, we should look at the balance. In organizations with loads of programmers becoming good at testing, you could use an exploratory tester. The ratio is more like 1:10 or 1:100 than 1:1. In organizations where programmers are bad at testing, even commodity testers provide value, but a mix of test automators and again a bit of exploratory tester skillset could be better.

Not everyone can hire the guru. Most organizations need to have homegrown gurus. The good homegrown gurus look around and learn a little more each day - regardless of what corner of testing / programming they end up starting with. 

I repeat this a lot but it's worth mentioning again. If this industry doubles in size every five years, as Uncle Bob has mentioned, half of the industry has less than five years of experience. Let's spread out the learning so that all relevant corners end up covered. More time means more learning, and there's no reason for any of us to stay in an assigned box; instead we should move as our interests guide us.

Saturday, July 23, 2016

Mind the style - sharing vs. correcting

Today I learned something that is useful to me as a context-driven tester. My responsibility is to teach myself and share, not to teach others. It's pull, not push for others too. Understanding other contexts and viewpoints starts with listening, not telling what I have to add.

Let me elaborate this a little.

There was a tweet.
Seeing this, I felt the need to respond that while I agree that (good) automated testing was much needed there, they could have used any good testing.

I've come to understand that part of the automation magic of people like Bret Pettichord, Noah Sussman and Jason Huggins is that they are strong testers and automators. There are a lot of people who are strong in one and weak in the other, and to do the great things, you need strength in both. So far, I've come to know personally many amazing programmers who are also great at testing, but I've followed a lot fewer people I'd identify as primarily programmers in the area of test automation (automation testers).

So bringing in someone like Jason Huggins, you bring in both good testing and good automation. The statement saying "too much manual testing derailed" can include the idea that bad testing and lack of automation together derailed, and that fixing problems of bad testing and lack of automation can happen both at the same time.

As soon as I had commented on the tweet, I read more tweets in my Twitter stream. I realized that Anna Royzman and Dan Ashby had also commented, and I felt a surge of empathy. Imagine I was saying good stuff, and the masses' focus was on correcting me - how would I feel? I deleted the tweet that had existed for 5 minutes, and made a commitment to pay attention to my interactions on Twitter.

A good heuristic is that Twitter is for making new friends (Facebook is for keeping in touch with the ones you have). Making new friends by correcting them is an awful approach in the online world.

If you're wondering why context-driven people come off as anti-automation even though we're not, this could be one of the reasons. We see we're adding new data points when we're correcting. The other person is not in a place to accept the information we're pushing. Focus on the good of automation. Feed what you want to see grow. 

Origin stories

I listened to the latest episode of the Let's Talk About Tests podcast, which was on the topic of origin stories. How did you end up as a tester, was there a moment that defined your path, and can you pinpoint it? Listening to the podcaster's story, I felt compelled to think about mine in writing.

I think there are three defining key moments for me. The first is about getting started, the second is about finding freedom, and the third about making a commitment.

Getting Started

It seems a lot of us kind of fall into testing. For me this happened through direct recruitment. Someone knew two details about me: I had been accepted to study computer science in Helsinki University of Technology (must mean some interest in computers and software) and I had studied the basics of the Greek language before university (must mean can recognize some Greek words). There was a localization project in 6 languages starting up in Helsinki back then, and Greek was one of the languages given to this location. 

I just went with the flow. I scheduled a test of my natural testing abilities and observation skills with the company (seeded bugs, testing a localized version against an English reference) and got sucked into part-time work.

The localization stuff was pretty routine, with Microsoft supporting many sites and projects. We had test cases and we had QA - the contractual idea of someone testing after us with the same test cases and telling us if we were missing stuff that could be found around those cases. The feedback was great, although contractually quite intimidating at that point of my career.

Finding Freedom

I changed jobs, and did more localization testing with a Finnish product company with the experience I had. Then it became time to extend from localization to functional testing. I remember the moment when, instead of test cases, I was handed a (bad) specification with the responsibility to test a mail server on Solaris. Getting out of the box of someone else's test cases that I would carefully tread through, doing enough but not too much, I was now set free. I learned to love what I was doing. I did not know the name of it then, but exploratory testing made me an active learner back then. The difference was really just in how I perceived my responsibility area.

Making a commitment

A while into the testing work, I came to the conclusion that everyone just hates testers and that I would be wasting my life sticking to it, whether I liked it or not. I had the youthful bloated ego thinking I could be so much more, and more meant being a developer. Developers command respect, right? I had already experienced that (junior) testers don't get their voices heard, and it frustrated me.

I moved into a developer job and learned within that job that I could take a step forward and, while testing my own stuff, come up with five steps that would take me backwards. I learned that average developers don't automatically command any more respect, and that there's such a thing as assembly-line programming, where you're just handed pieces to blindly implement. I did not last long; I came to the conclusion there was no reason for me to be unhappy in one job when I could be happy in another. And that some (many) developers command just as little respect from the "important people" as testers do, and that we could unite to make the world a better place.

I committed to being a great tester. I took a job that enabled me to read and think about testing, and teach testing. I started my journey on the road of learning every day, for the purpose of recognizing and filling relevant gaps in software (product) development.

Eventful career

There have been many moments of joy and frustration and life-changing insight for me. I think particularly fondly of the moments that made me completely change my mind: like realizing that I was teaching test cases while doing exploratory testing; like realizing continuous integration was actually a better idea than controlling change so that I could test "full builds"; like understanding that while smart manual testing can cover more ground than test automators give it credit for, I like automation too; like realizing that over planning and thinking things through, experimentation gives a chance for "bad" things to turn out good in collaboration.

I’ve changed jobs often, every 2-3 years. I’ve been a tester, test manager, project manager, developer, teacher and trainer, and a consultant. I’ve been given a great platform to learn, and I feel privileged having been allowed to share stuff throughout my career. I’ve had some amazing managers, and worked with mostly wonderful developers. The local Finnish testing community has been my lifeline, and I’ve learned a lot through published authors and more recently, blogs. 


I love being a tester and helping developers be more productive. But most of all, I love how we are allowed to learn and change, and have many forms.

Friday, July 22, 2016

The new terminology wars

I kind of wish you could be unaware of what has been going on in the online world of testing in the last few weeks. My summary is that some people got tired of quiet approaches to dealing with negative behaviors and started campaigning to make them more visible. And as is often the case with campaigns, there can be casualties.

If you end up somewhere on the outskirts of this, there seems to be a new vocabulary in play that I wanted to write about - a vocabulary of abuse.

First of all, people will talk about privilege. There's all kinds of privileges, that is, special rights available only to a particular person or group. A great example of privilege is that not everyone gets to talk 1:1 to the so-called community leaders to resolve their conflicts.  Or that there's plenty of experiences white males don't have that people who are not white males do have. And that what doesn't happen to me might (and probably does) happen to others anyway.

We all come with privilege, and it's a great practice to learn to recognize yours. Other people tend to see that better than you.

The other words people have started bringing in are from the world of abuse terminology. I was told in private a few weeks back that a friend of mine is using gaslighting on me. Gaslighting is a form of abuse where the abused is made to question their own beliefs and role in the abuse, as in "perhaps I deserved this, perhaps I did something". I don't think that was the case though, but I appreciate being made aware that such terms exist.

Yesterday, I was told I was using classic derailing techniques in my attempt to explain that while more people became vocal in this argumentation, it could be that as many or more are stepping out of these discussions and online media completely. Derailing is making someone else's experience about you.

Also, there's sealioning. That's about hogging discussions.

All of this starts from the discussions of bullying. It's one of those concepts where the vocabulary definition helps you see that it has something to do with using status (superior strength or influence) for intimidation.

With months of therapy after a bad experience of people using the last word on me some years back (see the post on my thoughts about this 2 years ago), I learned that these words and their definitions don't belong to me. They belong to the victim. And when someone tells me I'm doing any of these, the best I can do is step back and stop doing it. It's not about how the other perceives me. Being called out can be a very destructive experience, and I would advise doing it with care.

The reason I'm writing this blog post is that I feel that now that these words have been found, there is a risk that we use them without understanding their power, especially on people like myself. They should hold a lot of power for the victims, and as such, they can be used as a new form of bullying: overpowering the sensitive, conflict-averse ones into shying away from discussions, or the ones with existing triggers on these topics.

My call for action is: let's just try to be kind and considerate. We fail and learn. I hope. There's so much work to do in the field of software that we shouldn't alienate people. You see the new voices joining, but you don't see the ones who never joined or the ones that stepped away.

(Note: in case it was unclear, I do not think lying about these makes these exist. So I choose to start from good faith. And if the other would choose to lie to me, there's no use in me continuing in the discussion anyway.)

Wednesday, July 13, 2016

Thinking at the keyboard in Strong-Style pairing

For the last two weeks, a little researcher in me has enjoyed the opportunity to eavesdrop on two programmers pairing. Don't get this post confused with stuff I talk about when I speak of my work, as for the past two weeks, I've been on vacation. That means I do whatever I feel like, without having to consider what my company expects or would benefit from.

This pair of programmers is an interesting mix. They shared an interest in a problem that needs an open source solution (a testing tool). They are pairing on a problem one knows deeply, and in a language the other knows deeply. In fact, the first developer had his first experience with the language three weeks ago at a code retreat, trying it out while pairing on Game of Life.

They pair remotely, sharing voice and screen, and I can recognize that they are doing strong-style pair programming, without ever having agreed specifically on the style.

In strong-style pair programming, for an idea to get from your head to the keyboard, it must go through someone else's hands. A phrase often used when teaching this style is "no thinking at the keyboard" as the driver, and this dynamic was particularly fascinating to monitor.

It's clear in this pair that the one who knows the problem deeply is the one navigating - hands off keyboard. This is probably also a result of the fact that he does not have a development environment in the language set up at all, so they will be working on the other's computer. He is navigating by concepts. Also, it's clear that over the two weeks, the navigator who did not know the language before has become very comfortable with the language, picking up ways of how to do the conceptual things as they go on with the implementation.

Listening to them, I feel more strongly that the simplification of saying "no thinking at the keyboard" is a harmful one. It's "no decisions of direction at the keyboard". There's plenty of thinking happening, and in this pair, it's very obvious that both programmers bring in a piece of the puzzle and neither could implement the solution all by themselves.

Eavesdropping on programmers also reminds me of the stuff I think about while eavesdropping. It makes me generate ideas of what I would test if I were testing that stuff. Some of it would be stuff that in a mob I would correct/contribute right away. Other things I would intentionally park for later, as I feel they would just divert focus now.

Teaching developers to explore

After the RST namespace announced the deprecation of the term "exploratory testing", I've been using it even more. For me, in the beginning there was just testing. Later, there was an understanding that there are two kinds of testing: one that starts with performance and learning in mind (exploratory) and the other, just as important, that starts with artifacts and automation in mind (unit testing / test automation). Both can end up in the same place, because around both, learning and thinking happen. It just happens with a different kind of emphasis. This way of thinking around words is akin to the way we think of guitars. There was only one kind of guitar, but with the birth of the electric guitar, the other one became acoustic.

When I talk with people who are strong in testing with the artifacts view, on the level of discussing our perceptions and ideas, we can easily end up in an argument. I get to hear I will be replaced by automation and that there's nothing I do that would provide value with respect to what they have already put into the automation. The other gets to hear that the automation he's created with love is missing something abstract. Unhappiness on both sides is the result. I've left numerous agile sessions with the feeling that I will just leave the industry so as not to feel this bad.

Instead I've grown to realize that if I get to pair up on real and practice problems with one of these people instead of the abstract conceptual ideas, very different results emerge. I quoted one of those from a private discussion (with permission) yesterday:
For me it's been clear that in order to build things, developers explore around their problem and solution. It seems to me that sometimes their need of coming to a solution within all the constraints takes so much of their focus that they can't let their mind wander to the extent I can, as the tester. Being able to see what developers think enables me to identify parallel thought tracks that lead me to insights about problems and ideas of how we could be better. It's like I'm backtracking tasks, and only now feeding back the ones that are directly applicable to the ongoing flow. Yet the rest might come back, as soon as I have empirical evidence of their existence or a chance to set up a task to go find that empirical evidence.

As we test together, things stick. Some of the things that used to be testing to me are now design and discovery for the developer. We both learn on, every day picking up new stuff. And with the differences in focus, there's again something different I can contribute, often from the side of product ownership in a very practical, empirical and hands-on way.

In exploratory testing, I actively and knowingly build conditions that enable learning while testing. I avoid repetition; I vary my paths, my approaches, my pace, my tools, everything. When given a spec, I see what it says, but most of what I pay attention to is between the lines. I focus wider than what I think I already know.

Seeing developers find appreciation of discovery through experiencing exploratory testing is great.  But this only comes from shared experiences, not from sharing theories of relevance. Test together. And learn to fight to get your voice heard - it was one of my big challenges on this path, to convince the developers to just do things my way instead of immediately transforming them to "automation" and missing out on what I actually do.

Tuesday, July 12, 2016

A theory of risk - when sample size is relevant

I've been having these discussions about how words may be very different but interests very similar between programmer and tester communities. One of my favorite examples of that is that the testing community talks of oracles, heuristic oracles and partial oracles. It includes two main ideas: how would I recognize a problem, and how could I create a program that recognizes problems? The same thing, with the focus on creating programs, seems to go in programmer communities under the names of theory testing or property-based testing.

I feel a part of the reason we're not making enough progress in the field of smarter testing is that our worlds of solutions and ideas don't meet enough. We too rarely make the effort to understand what people mean, and in particular, we too often work with abstractions: concepts and discussions, instead of pairing up and doing stuff, and learning from that. 

Looking through Cem Kaner's oracle post with a programmer today made me think back to an interesting experience, and made the programmer interested in working on a common kata where all unit tests would be theories.
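As a rough illustration of what "all unit tests as theories" could look like, here's a minimal C# sketch using xUnit's [Theory] attribute. The sorting example and the properties checked are my own invention, not from Kaner's post or the kata we discussed; the point is that the test acts as a partial oracle, recognizing classes of wrong answers without knowing the one right answer.

```csharp
using System.Linq;
using Xunit;

public class SortTheories
{
    // A deliberately simple sort standing in for "the code under test".
    public static int[] Sort(int[] input)
    {
        var result = (int[])input.Clone();
        for (var i = 1; i < result.Length; i++)
        {
            for (var j = i; j > 0 && result[j - 1] > result[j]; j--)
            {
                (result[j - 1], result[j]) = (result[j], result[j - 1]);
            }
        }
        return result;
    }

    // A theory states a property that must hold over many inputs, instead of
    // one expected value per input. These properties are a partial oracle:
    // they can't prove the sort is right, but they catch whole classes of
    // wrong answers (lost elements, wrong order) without knowing the answer.
    [Theory]
    [InlineData(new int[] { })]
    [InlineData(new int[] { 5 })]
    [InlineData(new int[] { 3, 1, 2 })]
    [InlineData(new int[] { 7, 7, -1, 0, 7 })]
    public void SortKeepsElementsAndOrdersThem(int[] input)
    {
        var output = Sort(input);

        // Nothing added or lost: same multiset of elements.
        Assert.Equal(input.OrderBy(x => x), output.OrderBy(x => x));

        // Ordered: each element is no larger than the next.
        for (var i = 1; i < output.Length; i++)
        {
            Assert.True(output[i - 1] <= output[i]);
        }
    }
}
```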

One of the very cool tests I performed in one organization came out of a consideration of risks. We were replacing an existing system with a new one, and of course while replacing, we were also changing things. We could compare some stuff between the old and the new system (and did), but that was not very straightforward. The system was generating decisions ending up in letters, and a lot of the time the letters were barely, if at all, reviewed by a person.

I remember one of my big insights about the old system, from having sat with users of the system: while the new system was fully automatic in sending the letters, the old system included a minor human interaction. With a lot of tacit knowledge, people using the old system would review the decision for rough correctness and, interestingly, could change any inputs if they spotted problems. The insight was that with the old system, the problems that would leak out of the organization were cases where decisions were off only by a little, so that the person applying her knowledge would have a hard time spotting them. That would not be the case with the new system, which aimed to further lighten the load of the people involved in the process.

Speaking with various people, I came to the realization that one of my main concerns was that with the old system, the organization had over decades turned a fully manual handling process into a mostly automated process. There were very few people using the system. We were expecting to need fewer people with the new system. Anything that would increase the need for manual work would be bad.

Once I could state my theory of risk - an increasing need for manual work - I could start working on how I could test that. So, asking around, I figured out that the new system had specific error codes that automatically moved letters from automatic generation and mailing to manual processing - one perspective on the thing I was concerned about.

I set up a little script to take names and identifiers from an Excel file with thousands of pairs of info, generated messages into the decision engine, and captured the return messages as files in a folder. I could then very easily search the files and count the number of instances of the error code I was looking for.
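The shape of that script, sketched in C# with everything specific (file names, endpoint, message format, error code) made up for illustration - the real one read the Excel file and spoke the decision engine's actual message format:

```csharp
// Sketch of the harness described above; all names and formats are hypothetical.
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class DecisionEngineProbe
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task Main()
    {
        // Thousands of name/identifier pairs, here exported to CSV for simplicity.
        var pairs = File.ReadLines("persons.csv")
            .Select(line => line.Split(';'))
            .Select(parts => (Name: parts[0], Id: parts[1]));

        Directory.CreateDirectory("responses");

        foreach (var (name, id) in pairs)
        {
            // Generate a decision request and capture the response as a file.
            var request = $"<decisionRequest><name>{name}</name><id>{id}</id></decisionRequest>";
            var response = await Client.PostAsync(
                "http://decision-engine.example/decide",
                new StringContent(request));
            File.WriteAllText(
                Path.Combine("responses", $"{id}.xml"),
                await response.Content.ReadAsStringAsync());
        }

        // Count how many decisions fell out of automatic handling.
        var manualCount = Directory.EnumerateFiles("responses")
            .Count(file => File.ReadAllText(file).Contains("ERROR_MANUAL_PROCESSING"));
        Console.WriteLine($"Decisions routed to manual processing: {manualCount}");
    }
}
```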

Within a short amount of time, I went from the contractor telling us that the system would be ready for production very soon, to understanding that this system in production would require us to manually work on more than half of the decisions we'd be using it for. The expectation was a small percentage, and empirical evidence from a relevant dataset showed that this was not the case.

It was just one test amongst many others, but it had a lot of power, as it revealed information on a wider scale. From a theory of risk of manual processing, one test took me to understanding how much processing could be expected. Individual data points could have hinted at the same, but they would never have been as powerful as the wider sample set.

Kaner's post mentions business models as partial oracles. We all probably have our stories and experiences on how we figure out stuff like this. Are we sharing that so that others could learn from it? 


Monday, July 11, 2016

Programming for practice

There's tons of little things listed out there to take up as practice problems when you're fine-tuning your programming skills. The things you find as programming katas are not really just problems to solve to completion, but the idea is to focus on deliberate practice.

Some weeks back I did my fourth coding dojo that keeps repeating the same problem: Game of Life. The idea of deliberate practice in that is to get new pairs and various constraints on the same problem, and to learn without trying to deliver anything. In coding dojos, the end result gets deleted at the end of the session. The deleting did not bother me, but the fact that I had still never finished the problem, even outside the sessions, did. So I ended up spending a late Sunday afternoon pairing with the main intent of not just getting to a good start, but actually finishing.

Our final version was to choose one of the more complicated oscillator patterns to display on a little GUI. The "finished" product still has a ton of features I know we should add - scrolling for spaceships and this fun idea of color coding the age of cells into the animation, among other things.

But getting to this point in a deliberate practice session was relevant. Having done so many that are time boxed instead of focused on the result, this was great.

We did the implementation test-driven with approvals. With any deliberate practice thing, there's always the doing and the meta parts, and we had a great discussion on the meta after the doing. In particular, we talked about how the ApprovalTests-style tests drove us and where the focus was.

Our first test was on a cell having no neighbors and thus dying. The very first test already set us up for deciding how we'd want to represent our tests as before-and-after ASCII art in approvals. ASCII art sounds weird and fancy, but it's the best way for me to describe this - the visual of the board in two stages.

All the tests that drove the logic creation followed the same format, so I took an example from a later test's approved file of what this looks like.
There was this one part of the doing where we were looking at the need of having something other than an empty board on the second snapshot. As we were working with Approvals, there were two choices for how to set up the test: we could manually create the expected file before any code that creates it exists (what I thought we should do, red first), or we could just use the power of recognition to stop and think about the result we were creating, locking it down by accepting it when we saw it was good. With the latter, the idea of the test was held in our heads, and a green test was actually driving us to turn it red, to be again turned into green through acceptance.

In code retreats, you never get to see other people's code, so we ended up publishing our solution from yesterday to GitHub.

(If you're puzzled about the text file, here's an attempt to explain it. With ApprovalTests, we serialize objects into a file and can verify them in various file formats. To create the file, we override toString() to make the stuff returned for an object more sensible, and to check the objects we just say Approvals.Verify(object). Approvals are available in a number of languages, Go being the most recent, and I take joy in giving the framework some exploratory testing in various conference sessions, showing how unit testing misses stuff.)
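To make the mechanics concrete, here is a minimal C# sketch of an approval test in this style. The Board class is a made-up stand-in (our actual session solution is on GitHub and may well differ); what matters is that the board's string representation is the ASCII art, and Approvals.Verify locks the before/after picture into an approved file.

```csharp
using System.Text;
using ApprovalTests;
using ApprovalTests.Reporters;
using Xunit;

public class GameOfLifeApprovalTests
{
    [Fact]
    [UseReporter(typeof(DiffReporter))]
    public void LonelyCellDies()
    {
        // A hypothetical board with one live cell and no neighbours.
        var before = new Board(3, 3).WithLiveCell(1, 1);
        var after = before.NextGeneration();

        // The approved file holds the before/after ASCII art of the board.
        Approvals.Verify("Before:\n" + before + "\nAfter:\n" + after);
    }
}

// Minimal stand-in implementation so the sketch is self-contained.
public class Board
{
    private readonly bool[,] cells;

    public Board(int width, int height) => cells = new bool[width, height];

    public Board WithLiveCell(int x, int y)
    {
        cells[x, y] = true;
        return this;
    }

    public Board NextGeneration()
    {
        // Only the underpopulation rule, which is enough for this one test.
        var next = new Board(cells.GetLength(0), cells.GetLength(1));
        for (var x = 0; x < cells.GetLength(0); x++)
            for (var y = 0; y < cells.GetLength(1); y++)
                next.cells[x, y] = cells[x, y] && LiveNeighbours(x, y) >= 2;
        return next;
    }

    private int LiveNeighbours(int x, int y)
    {
        var count = 0;
        for (var dx = -1; dx <= 1; dx++)
            for (var dy = -1; dy <= 1; dy++)
                if ((dx != 0 || dy != 0) &&
                    x + dx >= 0 && x + dx < cells.GetLength(0) &&
                    y + dy >= 0 && y + dy < cells.GetLength(1) &&
                    cells[x + dx, y + dy])
                    count++;
        return count;
    }

    public override string ToString()
    {
        // Render the board as ASCII art: '*' for live cells, '.' for dead ones.
        var art = new StringBuilder();
        for (var y = 0; y < cells.GetLength(1); y++)
        {
            for (var x = 0; x < cells.GetLength(0); x++)
                art.Append(cells[x, y] ? '*' : '.');
            art.AppendLine();
        }
        return art.ToString();
    }
}
```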

Saturday, July 9, 2016

Test environments and organizational aspects

Once upon a time there was this project I was on. It was a significant system, multi-million euros for each layer. To simplify, there were three main layers.

The bottom layer had a bunch of legacy services for accessing decades of data from massive databases owned by various parties. There was decades of complicated data, which represents history of lifetime of millions of people.

The middle layer was a calculator. It accessed the legacy services to get to the data, and twisted it around for particular types of decisions and summary information.

The top layer had extensive business process logic and all sorts of user interface and batch processing functionality, representing what the organization I worked with wanted to use the data on.

The system had a lot of contractors. A lot, as in tens of them. The middle layer was contracted from one, and top layer from another.

As contractors tend to be, everyone was very much into their own responsibility only. I routinely negotiated the amount of testing that the middle layer and top layer contractors could do with mocks/stubs, as they would have loved to keep their life isolated from everyone else. Requiring that the contractors partially test with an integrated system usually meant we paid more, unless we had understood to specifically require it early on. And often we had not.

When we allowed testing with mocks/stubs only, we experienced two main problems: 1) the parts of the system wouldn't work integrated, as some changes were not compatible after all, and we'd learn this late; 2) the mocks/stubs required maintenance, and the contractors would minimize the testing they did with those, making the testing not representative in the first place.

The stubs gave a lot of good too. They helped keep systems isolated and tested in isolation. They allowed for scrambling data so that a lot of the work could be done outside EU area. They made for very practical ways of noticing when the interface contract was unexpectedly changing, comparing the messages that were expected and received.
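As an illustration of that last point, here's a small hypothetical sketch of the kind of check a stub can do: compare the message it actually receives against the message the interface contract says to expect, and flag an unexpected change. The names and file layout are invented.

```csharp
// Hypothetical contract check inside a stub: compare what the caller actually
// sent against the recorded message the interface contract says to expect.
using System;
using System.IO;

public static class ContractGuard
{
    public static bool MatchesContract(string receivedMessage, string expectedMessagePath)
    {
        var expected = File.ReadAllText(expectedMessagePath).Trim();
        var received = receivedMessage.Trim();

        if (expected == received)
        {
            return true;
        }

        // Persist both sides so the mismatch can be investigated and the
        // contract discussion started with the other contractor.
        File.WriteAllText(expectedMessagePath + ".received", received);
        Console.WriteLine($"Interface contract mismatch against {expectedMessagePath}");
        return false;
    }
}
```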

We put a lot of effort into figuring out ways of making this easier. Thinking about it, I remember in particular the design discussions we had around a hardware router box with a 1M price tag that could run fast transformations for us with more complicated logic on what to get and when.

There was a lot of negotiating in trying to get the pieces to match, with everyone sticking to the idea that they knew exactly what their corner was supposed to be. It often felt we were the only ones actually concerned with whether the system as a whole would work.

Sometimes when I look at the stuff people talk about around test automation, there's this idea that whatever is needed is immediately possible. Or that there's a massive amount of people you can somehow get onto the problems.

Context-driven testing means to me that I end up with all sorts of situations. Sometimes I can't buy new hardware.

It took me 3 years to get an extra IBM mainframe for our test environment, but I got it. Another 1M. Just installing and configuring it was almost a year-long project. Meanwhile, we did magic with choosing our risks and working around anything we couldn't isolate.

I still don't have a duplicated production environment for my current project, and it seems 3 years will not be enough to win that argument. Believe me, I've tried.

It took me 6 months in one of the more agile companies I've worked with to get the test lab upgraded to the fastest available network connection (the switches cost some money, while people are already paid for), regardless of how much time moving the test images over a slower network would eat up.

When people talk about all the cool new technical ways of delivering software without downtime, I still work on getting the organization to be willing to invest in the infra that would enable this, and to be willing to find new ways of understanding how good testing could get done, over following the responsibility-averse plan they made 2 years ago.

Companies and what is possible in them are hugely different. That's why context is relevant. We can still do a great job when the companies are not all the way in on the technological advances the field has to offer.


Improving our features

The core premise of skilled testing, I feel, is to recognize that the smart individuals doing the work are at the center of the whole thing. Every day we do the best we can with what we know at that point, and every day is a chance of learning something more. Every day is a chance of improving our features as the core tool of great testing, extended with the abilities of automation (or persuasion, when you're using developers as the friends with pickup trucks to do stuff you don't invest in personally - right now).

Throughout my career, I've felt every day at the office is a chance to know something that I did not know yesterday. The life I've lived as a tester with this attitude has given me a multitude of lessons. I've been looking at the core lessons that made me the tester I am today, and appreciating the fact that there are other brilliant testers with very different choices. I've learned that results matter, yet my way is not the only way.

Improving our features is a personal choice: what's the next thing for *me*?  The next thing you need to learn is the next thing *you* need to learn. I find that for me, it is likely to be something the others around me aren't yet bringing to the whole. I get out to meetups and conferences, and figure out ways of improving our results. I pick ideas from people I meet, talks I listen to and learn deeper through trying things out at office.

Helena Jeret-Mae made a point at a conference talk saying "Nothing happens when nothing happens" that is really insightful. If you want to develop yourself, go out and do stuff outside your own organization. When things happen elsewhere, they also feed stuff inside your organization.

There's a great conference coming up in autumn in New York - Test Master's Academy Reinventing Testers. I get to open the conference sharing my lessons learned on improving my features. My choices and lessons will give you ideas. The core is in continuous, deliberate learning in style that suits you.

We're all work in progress. We use what we have to do the best we can and stretch a little more each day. Conferences are a great way of finding the things to stretch on, and I would love to meet you at the conference at the end of September. Make something happen, for yourself, and show up.



Thursday, July 7, 2016

The tester and technical debt

Technical debt is one of my favorite topics, as I feel that the fact that I as a tester have heavily addressed it in support for my team for the last four years is the reason why we're able to release on a daily basis even without test automation.  So whenever I get a chance to eavesdrop and start to think about it, I do.

Yesterday, I tweeted an insight.
Since we call technical debt by that term, there's a lot of talk around it that keeps thinking of the mechanism of how it's born as if it were a deliberate choice, like walking into a bank and taking on debt. But that is really not my experience. Technical debt is much more like the infinite number of small decisions you make on how you eat, which determine whether you gain weight or not. It's not really deliberate; it's more of an accident. And just like weight, gaining some technical debt can be really easy, but the work of losing it will be significant and continuous, and needs to focus on changing habits for sustainable results. With every relapse, it will get more difficult to find the willpower to stay on it. There's a difference between something being a choice and being a consequence of your choices. You can influence both, but the latter is harder.

As a tester in my team, I've been a beacon reminding the team to stay fit on technical debt, because when we don't, I see the consequences first hand in the surprise bugs that messy code gives us. I see the slowness of adding something seemingly simple. And unlike a lot of managers, I won't be fooled by the "coding is magic and magic just takes time without explanations", because I'm all in with the team.

I speak to the managers about letting the team address technical debt as they recognize it. Recognition comes through learning more about what is and is not good. I provide information that enables making the room for this through continuous positive feedback and evidence that I see while testing that we're on the right route. And I act as someone who keeps reminding that none of us is ever alone with this in a team.







Sunday, July 3, 2016

From podcast to mob testing ideas

For the #30daysoftesting challenge, I sampled four podcasts today. The last one was The Testing Show, and the episode was interviewing James Bach around his Reinventing testers course some time ago.

There was one piece that stuck with me in particular. James was talking about skill, and the idea that to recognize a skilled tester, the testers need to either build up a vocabulary to make the tacit knowledge transferable, or he needs to see them test so that he can then call out what they are doing.

This reminded me of two things. Firstly, of research from a local university that I need to look back at; I remember the researchers doing work to label stuff that they saw testers doing. Secondly, it reminded me of why I appreciate Mob Testing so much.

With Mob Testing, I don't need to rely on a vocabulary to transfer the skills, there's finally a way of transferring something as complex as exploratory testing through being in a shared experience of testing. Sometimes the group might recognize a pattern you wouldn't pay attention to yourself, and a name for it may emerge. The names are often group specific, and that is perfectly ok.

The interview also emphasized another thing where Mob Testing excels. Instead of having a long discussion about how something should be done, you do it both ways. You will notice that when done one way (take turns on what goes first), a lot of times you see it is good enough, or a third option emerges.

Deep testing is possible and desired even in collaborative settings. 

Saturday, July 2, 2016

The appearance of anti-automation


I've been thinking about the perception of anti-automation in context-driven testing. Knowing people who identify as context-driven, it just seems weird that we're labeled as anti-automation.

However, there would appear to be two things going on that can generate this kind of perception: 1) believing automating is not something every tester must do, and 2) believing in a discuss-first intellectual process of deciding.

Believing automating is not something every tester must do

Our systems and applications are insights turned into code. To build a system that solves a problem with code, the code is a must. I don't think anyone is denying that.

If there were people who could do all kinds of things really well in the process of turning insights into code, we'd probably love to fill our positions with those people. The agile community has been a place for me to meet some of these exceptional individuals, who work well in both the business and technical domains - deep and wide into both.

Most individuals, myself included, don't seem to have all the bits together in one package. Especially for people who are just getting started in software development, there are many corners from which to start tackling things.

To simplify, I call non-programming testing a corner. I call programming another corner. I call business analysis, UX, performance, security, you-name-it also corners. Each corner has its deep knowledge, with almost a lifetime's worth of stuff to look into.

And if the software industry doubles every five years, half of us have less than 5 years of experience. Half of us have just the start of a corner. We need to bring a team together to have a full view of the puzzle.

Even with years behind us, we're different individuals with different interests. The joy of discovery through manual testing can be definitive, and the struggles related to automation might not feel like stuff I need to personally do.

I've seen exceptional business testers who never write a line of code or even read code. It's a shame if people who want to do code want to take away the people who care for the system but not the code it consists of. But it's also a shame if we don't let testers who want to code do just that.

People who want to be a commodity are a different story. Unskilled people who can come and go are just not the people I'm thinking of here.

Believing in a discuss-first intellectual process of deciding

The other thing that I feel I'm seeing feels like a bigger obstacle. A lot of people seem to still be missing a core agile lesson about experimentation with regard to the way we work.

I see too much of someone suggesting automation and getting trampled on with intellectual arguments showing how it is not worth it and how there are other options to do the same thing. We discuss first, analyze to death. We think we know, when we actually haven't done it.

"It's in the doing of the work we discover the work that needs to be done" -Woody Quill
There are a lot of great working examples. They are not "automate all testing", but some relevant, helpful part of it. With all the choices, sometimes we just don't get started. And sometimes, to get started, we start a discussion first that will eat away the energy of doing things.

There's no shame in experimenting: trying something, learning it's not helping quite as we thought, and learning other ideas that would still take us forward.

Like testing isn't one thing, test automation isn't one thing. And good stuff emerges only if we let it grow. We need to let go of the need for analysis, and learn through discovery.

That would seem to be how my brain works. I think I know more than I do. I need to give things a chance. Time boxing and doing something small are great.

Testing without action - a story of performance

I'm reading Daniel Knott's Hands-On Mobile App Testing and at 33 percent, I'm disappointed. I reserve the right to change my perception in the next 67 percent, but so far it hasn't lived up to the expectation set by Cem Kaner suggesting it as a context-driven approach to test automation.

There's a main theme so far that seems to bug me: there's a lot of "you must" advice, including advice that would be very application specific. Following the advice for some of the apps, I'd end up testing a lot of the OS features instead of our application. But even more, I'm bugged by the total lack of discussion of the idea that testing provides information, and that for that information to be valuable, it should be something someone wants to act on. The action could be fixing, or knowing what consequences to face when the time comes. And if there is information we know from experience that our developers and product owners just don't care about, at least we should be advised to be careful about how much of our limited time and effort we spend on finding that type of stuff.

Reading the book made me think of an experience with performance testing that illustrates more of what context means for me.

I was working with a C# web application, and we knew the performance experience was not up to par. We had no performance tests, but you did not need tests to know this: hands on the application was enough. We had already spent a lot of time and energy optimizing whatever we could, but we just had an architectural issue and a lot of code based on that architecture.

No amount or sophistication of testing would have helped us solve that problem. We knew the solution: moving from one technology stack to another. With the current one, all data was going back and forth. With the other, we could update only the information being touched at the moment.

But there was always something more relevant to work on in the implementation. So instead of testing for performance, I used my time advocating for performance, helping negotiate a time box in which we could start the change. The action was more important than the testing. The shallow information was enough; we did not need details.

Finally, we got to change the technology. And now, having an actual possibility to design things to improve the performance in use, we cared about measuring it. At first we just had someone manually time basic workflows, to learn what we cared about. Very soon the programmers jumped in to say no human should suffer that assignment, and automated the task so that the person could just focus on analyzing changes in the numbers.
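A minimal sketch of what that first automated timing could look like, with the workflow call and URL as placeholders for our application's actual basic workflows:

```csharp
// Hypothetical timing harness: repeat a basic workflow and report the numbers
// so a person can focus on analyzing changes rather than holding a stopwatch.
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

public static class WorkflowTiming
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task Main()
    {
        for (var run = 1; run <= 5; run++)
        {
            var stopwatch = Stopwatch.StartNew();

            // Placeholder for one basic workflow, e.g. opening the main view.
            await Client.GetAsync("http://our-app.example/main-view");

            stopwatch.Stop();
            Console.WriteLine($"Run {run}: {stopwatch.ElapsedMilliseconds} ms");
        }
    }
}
```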

So, when I read a testing book, I would like to see more discussion about how much time and effort we invest in what type of information. And if there really is information that we won't be interested in acting on, perhaps we should think twice about why we spend so much time and advice on saying it must be tested.