Friday, February 16, 2018

Things the Frog Did Not Notice Before It Was Late


Looking back at my quarter of a century in the software testing industry, I can compare what I am now with what I've been earlier, and realize I've gone through some major learning experiences. Having gone through those learnings, they all now seem evident and obvious. But I've done myself a favor over the years, clarifying my stances in writing, allowing myself to see how my views change. At first I worried about sharing anything I thought was true because none of it might be, but writing more helped me deal with the concern and just embrace the positive.

Many of the foundational changes are things I did not see coming; they sank in slowly over longer periods of time. They are foundational in hindsight, and could easily pass for things I thought I always knew. Here are four that I think caused my whole belief system to pivot and find new possibilities.

1. Test Cases aren't What Good Testing is About

Early in my career, I taught testing at a university. The course had 120 students a year while it was part of a major, and I reached a substantial number of local young minds. Part of the course was a four-phase hands-on lab where the students would write a test plan, write test cases, execute the test cases and report their testing, as well as automate a subset of their tests.

A few years passed, and some of my students taught me my first foundational lesson. We met in a real-world testing project, where I guided them into exploratory testing and cautioned against premature documentation of test cases at times when we know the least, and against the opportunity cost of the documentation work in relation to time we could spend actually testing the products. My students reminded me of my earlier teachings with words I've come to cherish: "Great to hear you're doing this in a smart way, not the way you taught us it must be done at the university course".

I used to believe test cases were what good testing is about. But good testing is about finding relevant information on a limited budget, balancing what is best for today against what is best for the future. Test cases have little to do with either.

2. Continuous Integration and Delivery Wins Over Change Management

I grew up with testing in mostly waterfall projects. By the time we were testing, many moons had passed without us seeing the software, hearing only rumors through requirements documents. We tested in phases, with a huge scope and a fixed schedule. And when the software finally reached us, we needed to be careful with change management. Every fix would take us back to testing again, and we only wanted fixes that were absolutely mandatory. And we did not want them to come to us whenever they were available; after all, we were approving a build, not making sure that the end system was the best possible for the users by enabling change. Because change was a risk, and one that usually materialized as something even worse.

I remember the person who first tried to talk me into the idea that continuous integration was a good thing, and how fiercely I resisted. Later, living through continuous integration and delivery, I can't imagine wanting to control change in the way we used to. Small changes with small impacts, and lots of them over time, make life much simpler.

3. Test Managers Can Make Testing Worse

A few years into testing, someone decided I could lead testing efforts and made me a test manager. I created strategies and plans, discussed with the testers I was working with how we'd follow through on those plans, and coached people into being better testers. I sat through meetings, building a great holistic picture of what was expected of us. And I tested some, usually something less time-critical, because my attention could be pulled elsewhere. Then agile hit us. External managers trying to manage small self-organized teams did not make so much sense anymore.

I stepped down from my "career path" and became a tester again. The testers I used to manage became better because they did not leave "my work" for me to do, but found better ways of doing it all. They became better testers when part of their work wasn't handed off to a "manager". When I knew things others didn't, I could contribute just as much if not more as a colleague.

4. Test Automation is a Core Part of Testing

I've spent years honing the thinking part of testing. I've learned to work with software and hear what it tells me, combining all sorts of information while using the software to test as my external imagination. The thinking part and the manual execution part supported each other, and automation in testing was something that helped me reach things I wouldn't be able to do manually. But a lot of the automation was throwaway code. I had colleagues with a different focus in testing, creating test automation scripts that could run reliably over time, detecting unwanted changes. And I thought of those as separate things - the artifact creation and the performance.

Then I learned to do the part I used to watch others doing, and it changed the way I looked at it. It made me realize my previous company would have been better off with its three-year investment in me if they had gotten both the great exploratory testing results and a piece of automation that documents, in an executable format, some of my lessons that the team could use to hold their stuff together.

I realized the only reason for me to hire someone who does not do both exploratory testing and test automation, intertwined, is that people have not yet learned the other half. And we have lots of test automation specialists who are bad at testing; we even have lots of test automation specialists who are bad at coding. But they leave behind, in the long term, something that could help when they are not around. Those who don't automate make their impact on quality as we see it NOW.


There's the story of a frog not noticing it is being boiled, moving on to a different purpose as food. The frog story might be a fable without a foundation in empiricism, but as a fable it describes the feeling of how things change. Many of the things that changed my views are like that. I did not notice them while I was in the middle of the process. But where I started and where I ended up are very different states.

Saturday, February 10, 2018

Test Automation Legacy Code

10 years ago I left an organization that was top-notch in exploratory testing but had no test automation. With exploratory testing alone, I helped lay the foundation for what would later aim at continuous delivery, introducing continuous delivery (with a lot of manual steps) to a beta program. The technology preview concepts and ideas are still easily recognizable a decade later, even without memories of the history: it's just the way things are and have been.

While I was away, test automation got a foothold. Looking at it in hindsight, I'm happy I wasn't there to mess it up: great testers favoring the thinking part of testing and speaking up a lot about it are one of the most relevant blockers useful automation has, stopping automation from being born while it's still finding its place and form. Lesson learned: give room for things you don't believe in to grow, and they may grow into things you do believe in.

Now that I'm back, I look at the test automation that was created and feel joy at the accomplishment of it being introduced there. I did not do it. Or maybe I did, by stepping away and leaving the battle of opinions unbalanced, for the automation side to win. But it is there, it is doing real testing, and while it has many, many problems, it is a cornerstone of the way we build and release products.

In the 10 years, I've changed. I've come to remember that I was 12 when I wrote my first program. I've come to appreciate internal code quality, and to recognize when it's lacking. I've stopped looking only at the testing testers do, and started to look at programming productivity to produce the right quality. I've trained with Llewellyn Falco, a legacy code expert, and re-learned programming legacy code first, test-driven development second, and always driven by hands-on work over reading about it.

This week brought me new appreciation of the role of legacy code in what I now do for our test automation system. I'm helping us clean up the mess without removing the value, so that we can add more value. I draw from lessons on legacy code, lessons on (test) product ownership, and intertwine those so that the automation we have would better serve a product line.

I look at this as a lifecycle. There's someone to select (or create) the framework. There's someone to use that framework, adding tests to the best of their abilities, doing real useful work. And still, there's the time when the code running the test automation is legacy, still living and breathing, and needing attention so it does not block us from our future enhancement aspirations.

We're inclined towards a rewrite, while refactoring is a better option. When the existing structure emerges from the mess of duplicated details, changing pieces becomes timely. Mending the systems, not making them.


Friday, February 9, 2018

The War of Ownership


Agency. It's the fancy new word introduced to coin an insight in a war I don't want to be fighting. The fact that it's referred to as a war is the first hint that what this says has little to do with the collaborative, non-violent software development we aspire to.

This is a war that says that in the kinder, collaborative way of working, testers don't feel safe believing their existence is justified. They're struggling for their life as they know it. I don't feel like I want to join that war; I want the war to stop. Stopping wars isn't my specialty, but I suspect it has something to do with finding options and making working agreements. And when a party at war isn't willing to make any agreements, the stronger wins. Newsflash: the tester profession isn't winning this war. It is stepping further into alienating itself from the tables where decisions about the future of software are made.

I'm selecting a few points from Twitter to emphasize my takes; the words are Michael Bolton's.
"Testing doesn't make your code better. Testing doesn't make your code testable either. YOU make your code better, and YOU make it more testable, and those are fine things.
Testing isn't an abstract thing that happens. There's someone who does it. And there's two clear choices of who that someone might be in the jobs we have in the industry. It could be a tester. It could be a programmer.

The programmer is YOU in the clipping above. The programmer makes the code better, the programmer makes it more testable. And the programmer tests. And in my experience, programmers who don't test rarely create good software.

There's the other option of who it might be: the tester. It is in the tester's interest to create a clear line and separation. But the trend is to remove that clear line. Many organizations report great results from blurring the lines. They aren't making everyone the same, but they are removing man-made absolute lines between who does what. They are saying everyone does what their skills allow. Everyone learns. And everyone learns also about testing and about ways to build software that makes users awesome.
"...make explicit a central theme of our Rapid Software Testing classes and consulting work: agency. We want to help empower people; shine light on what they do; help to liberate them."
What I read in this statement is that people = testers. Testers who fit the Rapid Software Testing methodology requirements. I've been told in the past that I'm not a tester (per the RST terms, at least). Yet that is exactly the position I hold, and have been holding, hands-on working with products for the last 25 years.

Empowering people would include allowing them to see the world their way, and mediating - not creating a clear distinction between roles that are job functions and that don't need such a clear distinction in a world of collaboration. But that is not the world RST seems to serve. It serves a world that I, as a practitioner, don't see in the companies I work with.

I recognize I'm selective. I choose product companies. I choose agile methodologies. I choose ones that believe in empowering and listening to all their experts.
"I don't think there's enough salt"
I can notice a lack of salt, and add it. I don't remove myself as an actor when the salt needs adding, if (and when) I know what the appropriate amount is. Being hired as a tester doesn't require me to remove myself as an actor in fixing.

When we talk of the concerns about limited time and the choices we make in splitting roles to ensure different concerns get covered, we are using concepts from the time before continuous delivery. The world has changed. Quoting the author of Neuromancer from memory: "The future is already here. It's just not evenly distributed." We don't need and can't have one true way anymore.

Software development is a process of transforming ideas into code. Which of the ideas get labeled what isn't as relevant as we think, except for reasons of keeping the profession we love. What could be the ways to add meaning to this conversation that is stuck on violence and war? Isn't there a more constructive way to build a profession, one that draws the lines around testing for the purpose of understanding the tester?


Note added later: I did not need to read JB's article to pick up words from the title. This is not a response to his article. This is a response to the tone MB runs on Twitter.

Thursday, February 8, 2018

It's just semantics

I work in product development, building and testing a product. The product is a Software-as-a-Service type of product, extending beyond the idea of renting an app from the cloud. Some parts of the product change as much as 20 times a day, introducing new functionality to provide the service. When I test, I don't test only the software components, but the whole customer journey and experience of dealing with our product. And with some millions of customers and a long-term commitment to them, striving for better for them is a fun area to work in. There are no projects. There's the product that lives on.

So I wrote a piece of my mind about test automation as a product. It too has users, a long-term commitment to them, and is intertwined in appropriate ways with the way we develop. And an ex-colleague decided to comment on Twitter:


My first reaction is to say "it's just semantics" - "wordplay". Semantics is the meaning of words, and surely the meaning of words matters? In this case, I don't care about the difference between "product" and "ecosystem". I don't care for the focus on a single word when I've just used many to explain a lot more than just that word.

To say "it's just semantics" is to say that in this conversation, I'm done. The way you approach the discussion with me just turned sour, and I'm  not committed in continuing. You're derailing me. 

I read a wonderful book called Crucial Conversations, which talks about these types of dynamics in conversations that matter. And conversations around the nature of testing matter a lot to us testers. The book introduces two ways of closing the flow of meaning into the shared pool: violence and silence. Correcting words is a form of violence. My default reaction is silence, keeping the violence option of "it's just semantics" hidden in the back of my mind.

As we would want to add meaning to the pool when discussing, closing communication isn't a good thing. We can choose to stop and ponder on our reactions, and work back towards a place of trust. We can learn more, add more meaning to the pool, if we just keep at it.

I know Valera as an ex-colleague I have the utmost respect for, and explaining to myself other possible meanings of his corrective statement isn't hard. He means well, just playing on my triggers. I've needed the same reminder about good intentions a lot with men who explain things to me, without me knowing them or them knowing me.




Counting test cases

Confession: I count test cases. Before you get all riled up, read further. I count test cases for the purpose of understanding how much of something there is. A typical example for my counting is "30 test cases in our test automation" for understanding how many conceptual program pieces there are or "100 lines of functionality added, yet number of unit tests stays the same". Counting things is useful, but it is not all there is.

On the other hand, it's been at least 8 years since I last counted test cases in the sense of understanding how many there are in a manual test set, how many have passed, how many failed, and how many are yet to be discovered for our list through exploring. To be more precise, it's been 8 years since anyone managed to coerce me to write down a test case, or to guide anyone close to me to write them down. Instead, I write test cases into automation and free up the majority of my time for freeform exploratory testing.

It is also 6 years since I last did session-based test management, counting sessions or time in functional areas as a measure of progress. And even then, I did it for two weeks to prove a point: I was worth trusting to do good testing without paying extra time to impose a visibility framework of this sort.

These became irrelevant to me, as I helped my teams move to continuous delivery. When we manage scope of hours or days instead of weeks or months, the numbers no longer matter. Quality of testing we do matters. And we learn about that as we deliver continuously, carefully tuning so that our customers could forget we ever updated their software. 

I started this post with an idea of examining my views on counting test cases, if I was asked to do them again. With all the experience I have, would I? When would I? And is there anything I would advise to those who still do?

Finding the Least Amount of Meaning

A core principle in testing is one coined by Dijkstra quite a while ago: we can't prove absence of bugs, just show the presence of those. So even if a million test cases passed, the tests that are worthwhile are the ones failing.

Twitter brings me a haha moment:
The image with its texts captures the least meaningful way of counting test cases: counting the ones passed. Counting the ones that did not find bugs. Counting only the ones that pass. Forgetting that each change made for the bugs found invalidates the ones already passed, introducing a new test target.

Adding More Meaning

Thinking back 8 years to the time I last counted test cases, I remember a futile battle turning into a productive negotiation. I started off with the premise that the way things had always been done - counting passed and failed tests - was a way to take us to bad testing and a bad relationship with management. I was faced with the fact that in a 30-day acceptance testing project after a multi-year delivery project, no one was comfortable without a way to see how testing was progressing. I couldn't go full-on session-based test management. I found that replacing what was in place would have been a poor choice, given the amount of work needed to ramp up the business specialists' skills in the methodology.

I approached the problem at hand with experimentation. Experiments are a way of asking to try something different, just this time without commitment to doing it again because it may go bad too. We started off where the organization was before me: writing test cases in advance, and following pass/fail numbers throughout the 30 days.

In the first 30-day acceptance testing, I started stretching what I perceived as the biggest risk of using test cases as a measure in the traditional way: the quality of the testing that gets performed. With pre-designed test cases, you create the ideas of what to test when you know the least. You have no software in your hands, just the promiseware of requirements. The way test cases were created was by looking at an old version of the product, imagining how the promises would change it, and writing scenarios that would walk us through seeing the changes in action.

Under my lead, we introduced two kinds of test cases. The first batch was just like it had always been: details of where to go, what to look for. The second batch was different. We used the HP toolset to create a template test case, an idea of reusable steps for test cases. The template test case's steps were a high-level version of the process the system was supporting us through, with no details. The actual test cases were test data: people whose data we could use to walk through the process in different ways. We split the time available so that we first tested with the traditional type of tests for half the time, and the other half was left for what was essentially exploratory testing.

All the bugs we found - and we did find quite a few - were found with the latter type of tests. We learned the mix was really good for us at that point in time. Jumping directly to freedom would have made people nervous. The mix of the old and the new allowed us to do great work, stretching people without taking them too far from their current skills and comfort zones. We reported tests planned, passed, failed, and started-yet-not-finished across both types of test cases.

In the second 30-day acceptance testing I led, for a different product, we stretched further into exploratory testing. The system we were testing had complex processing logic, with one step reaching out to a third-party system that included manual processing. We again created test data as test cases and template test cases as reusable steps, and step 7 in the 12-step process was the information the third-party system needed to pass to us. The group doing the testing was seasoned in the business process and had never used test cases before, and this was a perfect fit for them.

The results in what testing found before going live were equally great. The test numbers showed us that a big portion of tests were in the started-yet-not-finished state, and helped us encourage the third party in tracking whether our requests for information arrived at both ends.

The third 30-day acceptance testing I led experimented with the secondary risk of using test cases as a measure of progress: conveying the nature of testing as an activity. In the first two efforts, I was aware of the illusion tests marked passed or failed were creating. As we found a problem, a new version of the system was introduced. When we found a critical, cross-system, change-introducing bug with 80% of tests passed, the remaining 20% wasn't really enough. The metric was not only founded on guidance that lowered the quality of the testing that could happen, it also encouraged lying about coverage by assuming there was no change.

We still used test case counts, but we changed our graphs and communication to the metaphor of a progress bar. We all know how progress bars behave. The time you wait for something and the number shown on the screen often have some connection, but it is neither predictable nor reliable. It's something that just says 'hold on, wait, be patient - working on something'. With the progress bar, we introduced a 30% "invisible tests" number, showing the allocation we expected for repeating tests or introducing tests while testing. By the time we were at the old 100% of tests passed, we really needed the extra 30% to run tests again against change, and we avoided the old trap of non-testing managers deciding we were done when everything planned had been done once.
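The arithmetic behind that kind of progress bar is simple. Here is a minimal sketch in Python, with invented numbers, of how an "invisible tests" allowance keeps finishing the written plan once from ever reading as 100% done:

```python
# Sketch of the "invisible tests" idea: pad the planned test count with an
# allowance for reruns and tests invented while testing, so completing the
# written plan once never shows up as 100% done. Numbers are illustrative.
PLANNED = 200            # test cases written up front (invented number)
INVISIBLE_SHARE = 0.30   # allowance for repeats and tests found while testing

def progress(passed_once: int) -> float:
    """Share of the total expected testing effort, not of the written plan."""
    total_expected = PLANNED * (1 + INVISIBLE_SHARE)
    return passed_once / total_expected

# With all 200 planned tests passed once, the bar reads about 77%,
# leaving visible room for retesting the areas that changed.
print(f"{progress(200):.0%}")
```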

Why Would a Project Need Test Case Counts?

I'm not for test case counts. However, when I have to deal with them, I've learned the core of playing them for the goals of doing a good job testing:
  • Free the "test case". It's just any placeholder of things to do. It could be an exploratory testing charter. They don't need to be the same size. Trying to make them same size is just foolish. 
  • Communicate 'best before' idea for results. A test passed today can be not executed tomorrow. And how quickly the 'best before' date hits you, depends a lot on the organization.  
The projects need test case counts if they have no other measure of progress and are not ready to place trust in getting a spoken reliable measure of progress without a forced test case counting methodology. 

When I started looking at testing as a time investment and reporting against time, things got more straightforward for me. Given a week, I can always say that with 4 days used, I have only one left. While exploring, I can explain what I've discovered in that time, and what I would use the next week on. I can do that, teams of exploratory testers can do that, but not all business specialists temporarily assigned as testers can do that.

I know counting test cases is meaningless. I know the same test case done early on can take more time, because I can't stop myself from exploring around whatever I was given. I know the same test case done again later can find a problem that was there in the first place, but that I was just not in a state with my learning that enabled me to see it. Constraining on test cases when the process is about learning makes absolutely no sense.

But I accept that sometimes I have to do things that make little sense to me, because they help others. I also know that I can experiment and offer alternatives that slowly take people towards where I am in understanding the dynamics around testing. Sometimes, asking people to trust me on my perceptions of status is enough. I've learned to be away enough to build ways of working that don't crumble in my absence.

A great option to take people towards is more frequent deliveries. When meeting an organization that counts test cases, that is now the default change I would go about introducing.



Wednesday, February 7, 2018

Driving test automation forward as a product

I'm in a middle of a very complicated relationship, best defined by love-hate. On some aspects of it, I just LOVE what we've done. Yet on other aspects, I HATE where we are. It feels both a little schizophrenic and balanced. And I'm talking about the test automation I work with.

I work with it by being on the sidelines. I know I can step in whenever I feel like it, but no one requires me to. I can look at it both as an insider and an outsider. My place and position is unique. I find that I see things others don't pay attention to, and my attention brings out things others wouldn't otherwise notice. And I share this position with you, my dear reader, because there's something you could consider here:

  • if you are deep into automation, what a step back can give you as perspective
  • if you are not deep into automation, what you can make sense of just by seeing concepts and reading code "as if it was English"
I'm working out my relationship with test automation because I'm no longer ok with test automation doing a bad job at testing or myself being a blocker for others by focusing on what it cannot do over what it can do. 

There's things that I love, and where other people's appreciation helps me appreciate things more. 
  • Our ability to run automation that kicks off 14 000 clean OS instances up and down a day is quite an achievement, and getting from "I want a clean OS to install on" to "I can start installing" is a matter of a few seconds. 
  • When a new person joins and isn't left to discover the environment on their own, it takes a day to get started. Compared to a new person discovering it on their own - weeks to get started, basic proficiency closer to 6 months - I'm an even keener fan of pairing new hires on their first tasks. 
  • It runs and it is kept running. It enables releasing in a way products of this complexity could not be released without it.

    There's things that I hate, that others seem to hate much less.
    • It guides new hires to create a corner of their own over sharing common assets
    • It has tons of decisions embedded over time that allow others to be judgmental about later hires "not doing things right"
    • Reusing things has a manual coding element, taking days of coding just to introduce a concept like "same tests in another environment". And people would rather spend the days on the manual task than create an abstraction. 
    • People think of it as "testing a lot" because it runs often, even if for a very limited set of things to test. It distorts *managers'* concepts of how well we've tested, when the same thing 1000 times is not really 1000 times more testing.
So when I said I will reframe myself as an architect, I find I reframe myself first as a test automation architect. I choose to work on things that drive the overall structures for the better. And just expressing things I would like to see us work on brings me to an interesting place of shining a light on things that have just been that way.

Since I don't spend all my days dwelling in the code and implementation details, I see concepts. I see that there are tests that are small (that I want more of) and tests that are large (that I want less of) - and the structure does not help me see them. I see tests, test-specific methods and common methods - and again the structure does not help me see them. I see products, applications and components - and again the structure does not help me see them. I see similar uses of resources, like having malware samples, temporary data and persistent data, and I see that the use of those isn't consistent.
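As one sketch of what making such a distinction visible could look like (the marker names and test names are invented for illustration, not our codebase), pytest markers are a cheap way to let the structure itself tell a reader whether a test is small or large:

```python
# Invented sketch: making the small/large distinction visible with pytest
# markers instead of leaving it implicit in where a test happens to live.
import pytest

@pytest.mark.small
def test_installer_accepts_default_settings():
    ...  # quick, isolated check

@pytest.mark.large
def test_full_update_cycle_across_environments():
    ...  # slow, end-to-end scenario

# "pytest -m small" then runs only the quick tests, and the marker tells a
# reader at a glance which kind of test they are looking at.
```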

I'm in a place where I have a vision of where we might head, for good or better, with limited ability to implement it all by myself. I might be paralyzed by my abilities alone, but others with different abilities may be paralyzed by not seeing the things I see, or not requiring the things I require. In the last three years, I've acquired a superpower that allows me to still do much about this: pairing and mobbing. That superpower, in addition to making it possible to turn my great ideas into code, gives us all a chance of learning together. And I'm looking forward to it.

    Test automation is a product that tests our other products. Caring for overall quality of it is just as necessary as caring for the details of each test. 

    Tuesday, February 6, 2018

    Security, Testing and My Place in All of This

Where I work, we have this constant struggle between us (the good guys - cyber security specialists) and them (the bad guys - in their various forms). The struggle is kind of fascinating to look at from a tester's perspective.

We know software always has bugs. The good ones among us are good at catching them, and at catching some of the relevant ones before we ever deliver our software. And we catch some of them after release, listening carefully to signals and confirming suspicions. As a tester, I've come to the ultimate conclusion that what matters is our ability to change. It matters because we will miss something, regardless of our best (and improving) efforts. But it matters more because we have an adversary that plays the game from a different angle.

The people who create malware are software developers, just like us in many ways. I find it fascinating to wonder whether they have the same need to invest in testing. Any software introducing the right kind of bugs can give an attacker a route into a system they shouldn't be in. Does it matter if the software using an exploit is fine-tuned and tested?

I had a chance to talk with an old tester colleague of mine who now uses his analytical skills working with security after something bad happens. Something bad these days could be that a company has ended up with ransomware, lots of critical data inaccessible because it has been encrypted. You could try restoring from backups, but what if your incremental backups don't sum up to a full one? Apparently you can also use your testing skills, testing the malware and finding bugs in it, to be able to open the encrypted files again. The joy on my colleague's face at finding the bug and using it to open up all the files was the same joy we feel when we catch relevant bugs in our software. The skills used are the same or similar; the target of testing is just completely different.

    Working in a security company makes me wonder about the role of testing. A lot of the bugs I routinely handle have a lot to do with lost time, lost services, inability to do what needs doing. But some bugs, the security ones, have to do with lost access and lost data.

We're starting to see the value of security, as data is crucial. I wonder if we will ever see the value of people's time, or if other solutions to freeing up time will win out over delivering well-functioning (tested and fixed) software.

    We live in interesting times. And today I stop to appreciate that what I do for *this* software, I could do for any software. 

    Friday, February 2, 2018

    Reading with rose-colored glasses

    As a tester, I specialize in feedback. I both find things that no one else was bringing to the table, and amplify things someone else did so that the feedback gets the attention it needs. One of my favorite sayings is from Cem Kaner's book Lessons Learned in Software Testing.


I got to think about this today, as I had a pair of people to give a piece of positive feedback to.

    I approached the first, using a phrase of "waking up a security bear" to emphasize that something they did resulted in a positive outcome of identifying and addressing a vulnerability. The positive feedback was taken at face value. 

I approached the second, explaining a little more of the context of why this was important. And while I thought I was still trying to say "well done", I got into a spiraling discussion of what was wrong with what they did. Reflecting on the interaction, there was *one thing wrong* - the immediate response that the bug had been previously reported and dismissed back then.

    The first one approached the feedback as something positive. The second approached the feedback as something negative. I ended up with two completely different interactions for the same message: job well done, I would like to turn up the good and this was good. 

    The whole experience took me back to a one-liner from my boss: "I want to talk with you". My immediate response was "bad or good". I chose to wear my rose-colored glasses and assume positive intent. 

    Putting these two things together, I realize that wearing the glasses of good intent is the single most relevant thing I have done to feel happier and more successful as a tester. The world is filled with good, and even the (negative) feedback we are bearers of is positive from a constructive angle. 

    Thursday, January 25, 2018

    I hate it when the rules change

I have a blessing and a curse in feeling empathy at the office. It is a blessing in the sense of being able to help people do emotional labor and eventually be happier and more fully themselves at work. It's a curse, as the emotional labor isn't free but drains my energy, and can easily distract me from the so-called technical content.

Sometimes others' feelings are not hard to pick up at all. One of these days, I arrived at my place of work to see angry pacing and hear swearwords. Understanding that someone was upset wasn't hard at all. Offering myself as someone to listen to what was wrong wasn't hard at all. Comparing the vocal expressions in the privacy of our room with what ended up being said in writing on a pull request wasn't hard at all. And realizing there was a difference between the unfiltered and filtered expressions, I found ways of seeing how this discrepancy could manifest in quality as we experience it.

Sometimes feelings can be really hard to pick up, as some people hide them better. And I confess to the possibility of projecting my own feelings onto people. Me hating rule changes could just as well be my own feeling, projected and understood through an experience someone else was going through.

    There was a pull request, introducing a new test into test automation. There was nothing special about it. It was business as usual. Until the rules changed, without a warning.

If things had flowed just like they always flow, we could have expected to get two comments. There was a mistake in the logic of what gets verified, and these typically get caught and corrected by someone else reviewing the code. But this time, things were different.

I sat at my computer, looking at this piece of code, and invited a developer to look at it with me. I asked about things that bothered me in it, expressing uncertainty and a need to learn what our standard was. Instead, I learned that our bar was sometimes low. The comments would be the minimum to make the pull request acceptable. Not my idea of actively fixing and putting things in the right order.

    Just talking about it started a change of rules. What used to be enough no longer was. But on the other side of that pull request now getting 12 comments and basically advice to completely rewrite it was a real person, who was not part of changing the rules. 

    If this happened to me, I would be devastated. They turned quiet and worked on other things. 

I expressed the need to go and check how they felt on the receiving end, only to be surprised by the idea that others considered a change of rules like this business as usual, and wondered why I might be concerned.

The whole experience just reminded me that while we like to think of the feedback we're giving as objective and striving for improvement, there's always a person on the receiving end.

When rules change, having a system in the middle may hide how the other person felt. It does not take away the fact that they struggled. Addressing this face to face, person to person, is a much nicer way of changing rules for the better.

    Wednesday, January 24, 2018

    Plans make us slower and worse

    Over time, I've had the pleasure of working with different kinds of projects we like to call "agile".

    There's ones that I absolutely love, like what my "home team" does. We frequently talk about business directions, needs and visions and contribute to those equally from a technology perspective. But we don't have a plan. We don't know what we will do next. When we are starting something, we try to figure out how we could make it smaller. When something is started, then there is an idea of a plan. And we don't do estimates. No story points, no t-shirt sizes, no hours to work on it. Again we ask if we could make it smaller.

Then there are the ones that require "visibility", which I don't particularly appreciate. They start with the idea of someone outside "approving and following a plan". The plans, as flexible as we try to make them, SHOULD fail in the face of reality and be adjusted. But making the plans was painful. So we rather end up faking that we work to a plan. We reject doing the right things that emerged, because the plan suggested an order of things. For each part of the plan, we refine and we estimate. We refine estimates. We monitor what was done and what was not. Every ceremony discourages doing the right things over doing the planned things.

    The first one succeeds when you learn to deliver parts that take you days, not months. And these are parts of valuable functionality, not "an api I could use later on".

    A lot of times, plans just make us slower and worse. The time you use on planning could be used on doing.


    Tuesday, January 23, 2018

    Test Automation Smells so Obvious Anyone Could Notice

    Like so many exploratory testing enthusiasts before me, I can easily find things to do with my time so that I don't have to go dig into the realms of test automation. But every now and then, unlike so many exploratory testing enthusiasts, I go and take a dig at automation anyway. It reads mostly like English anyway.



Here's my list of things to work on, inspired by looking at one set of tests today.

Tests that don't test, only tour

So you have some kind of structure with your tests. You have test suites / sets somewhere that bundle things together. You have some tests going into those suites / sets. And you have some libraries you can use. Great start. But take a look at the things in this that are considered tests. Can you find any that haven't got a single verification asserting that something must be true? When you do, these are tests that don't really test, they tour. They are a path to getting to a place where you actually want to do some testing. Don't muddle them together on the same level as things that actually check something.
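As an invented illustration (the app fixture and its methods are hypothetical stand-ins, not from any real suite), the difference looks roughly like this:

```python
# Invented illustration: the first "test" only tours, the second one tests.
def test_open_settings(app):
    app.login("user", "secret")
    app.open_page("settings")          # navigates somewhere, asserts nothing

def test_settings_show_valid_license(app):
    app.login("user", "secret")
    page = app.open_page("settings")
    assert page.license_status() == "Valid"   # actually checks something
```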

    Tests inheriting tests over libraries

Inheritance can make things very muddled for your random stroller. You're trying to read the steps of what happens, and for the sake of similarity, someone came up with the great idea of inheriting and overriding to create the differences. The same things look like they could be done with libraries instead of test cases. Do you really have to conceptually mix up test cases with inheritance?
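A made-up sketch of the two shapes, to show why the inherited version is harder on a reader; all class, method and file names here are invented:

```python
# Made-up sketch of the smell: a variation expressed by inheriting a test
# class and overriding a step, so the reader has to walk the class chain
# to see what actually runs.
class TestInstall:
    def get_installer(self, app):
        return app.download("installer.msi")

    def test_install(self, app):
        app.run(self.get_installer(app))
        assert app.is_installed()

class TestInstallFromCache(TestInstall):
    def get_installer(self, app):          # the override hides the difference
        return app.copy_from_cache("installer.msi")

# The same variation as a plain library function keeps the difference
# visible in the test that uses it:
def install_from(app, source):
    app.run(source.get("installer.msi"))

def test_install_from_cache(app, cache):
    install_from(app, source=cache)
    assert app.is_installed()
```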

    Randoms inside a test

You see a test that says TestABC. Great, it looks like this is a test that tests ABC. You continue reading, and a call to random pops up in front of you. Whenever this test gets run, it always does ABC, but there are a few different ways to do ABC. Someone decided to leave which of the ways you get to fate, making sure that every time it fails you need to go and check which of the options was broken. Isn't there enough non-deterministic behavior in the tests without adding randoms in?
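For instance (invented names; `do_abc` and `abc_succeeded` are hypothetical helpers), the smell and one deterministic alternative:

```python
import random
import pytest

# Invented example of the smell: which way ABC gets done is decided at run
# time, so a failure first requires finding out which path was taken.
def test_abc():
    path = random.choice(["keyboard", "menu", "command_line"])
    do_abc(via=path)                 # hypothetical helper
    assert abc_succeeded()           # hypothetical helper

# A deterministic alternative: every path is its own, named test run.
@pytest.mark.parametrize("path", ["keyboard", "menu", "command_line"])
def test_abc_via(path):
    do_abc(via=path)
    assert abc_succeeded()
```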

    Avoiding passing values

Reading the code, you start realizing a whole lot of method calls look like they're the same, yet they are not - quite. There's some little detail in each that differs. Your method could take an argument, and you could do some magic with the argument. But for some reason it seemed a better idea to create a number of separate methods to call, without an argument, each carrying in its name what looks awfully like a thing that would have belonged in the place of an argument.
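A minimal made-up example of the shape this takes, with the would-be argument smuggled into the method names (the names and credentials are invented):

```python
# Made-up example: the differing detail lives in the method name, so every
# new variation means another near-identical copy.
def login_as_admin(app):
    session = app.connect()
    session.authenticate("admin", "admin-password")
    return session

def login_as_guest(app):
    session = app.connect()
    session.authenticate("guest", "guest-password")
    return session

# The same thing with the detail passed in as arguments:
def login_as(app, username, password):
    session = app.connect()
    session.authenticate(username, password)
    return session
```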

    Duplicating to my corner

There's a directory there somewhere that uses a label you recognize as a concept meaning ownership, and it draws you in. And you find a nice cozy corner of clearly laid-out tests. Reading through their names, you start to feel like you've seen some of them before. Searching confirms it - there's a well-organized little corner where everything sits concisely together. There is, however, a lot of duplicated code elsewhere. But that must be someone else's problem.


    A Fool's Coverage

As an exploratory tester, you start realizing how much coverage there could be if what you learn and know turned into automation through better collaboration. You identify some of the English in the tests that makes sense in the world of using the application for exploring it, and realize how little of your ideas has ended up encoded into the tests. Whatever is in gets continuously monitored. But running the same thing a thousand times isn't really that much coverage yet - it's a version of a fool's coverage.



    Here's a thing: anything I can name and recognize, I can fix. And while these feel obvious to me today, they clearly have not been equally obvious when they were introduced. What does your "let's fix these" list look like?


    Monday, January 22, 2018

    Learning Programming Test-Driven

    I've been delivering a test automation course, in a mob format. I get the whole group together, to work on test automation activities. We first get an experience that we share, and then we talk about what we learned and need to know about automation.

We've typically spent a couple of sessions on Selenium WebDriver, first just creating a test that runs, but soon after refactoring the tests so that we'd have more of a page object pattern in use. We spend a session on ApprovalTests, just to figure out what could be tested if we approached testing by creating and comparing against a "golden master". We test an API in a "fill in the blanks" kind of style. And finally, we spend a session starting a program from blank, seeing where test-driven development takes us.
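To give a flavor of the direction that refactoring takes (the page, URL and locators below are invented for illustration, not the course material), a Selenium page object in Python looks roughly like this:

```python
# Rough sketch of a page object: the page knows its own locators and actions,
# and the test reads as a sequence of intentions. Names and URL are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By

class SearchPage:
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/search")
        return self

    def search_for(self, text):
        box = self.driver.find_element(By.NAME, "q")
        box.send_keys(text)
        box.submit()
        return self

    def result_titles(self):
        return [e.text for e in self.driver.find_elements(By.CSS_SELECTOR, ".result h3")]

def test_search_returns_results():
    driver = webdriver.Firefox()
    try:
        titles = SearchPage(driver).open().search_for("testing").result_titles()
        assert titles, "expected at least one search result"
    finally:
        driver.quit()
```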

    The Test-Driven activity has turned out to be my favorite. I love how the tests grow with the application. I love how small steps end up taking us to a coverage we wouldn't create later on if we tested after. But what I love the most is the way the group - usually full of people who don't identify as programmers, some people who have never written code in their life - reacts to it.

We usually do very simple problems like FizzBuzz. Getting the first test out of the group by asking for a single example is often the hardest part. Turning the example into code, some syntax needs to be revealed, but none of the cruft around classes - just a single test. And when most of the programming in the IDE is learning to use the automatic generation of methods, people come out with a very different feel for programming than if they had to know how to write all the magic words around it.
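For readers who haven't seen it, the first step usually looks something like this in Python - a sketch of where a group typically starts, not a transcript of any particular session:

```python
# One example becomes one test, and the simplest code that makes it pass.
# The design grows from here, test by test.
def test_one_returns_one():
    assert fizzbuzz(1) == "1"

def fizzbuzz(number):
    return "1"      # deliberately naive; the next example forces it to grow
```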

    We've done this in Python. We've done it in C# while I still had Visual Studio. We most often do it in Java. And there is little difference in these two when the full editor support is available.

The reactions of the learners I've worked with make me want to experiment with this more. What if we could teach programming so that we did not go through the usual moves of naming concepts, but taught test first and experience first? Naming what you did can come later. Even much later, when you've seen several examples of how some things are done or used.


    Thursday, January 18, 2018

I'm an awesome tester who also happens to be a woman

At a test automation conference, I started talking with another participant. We exchanged details of the craft through name-dropping and content appreciations. We talked about his upcoming talk, and I shared some ideas on the same topic. Excited about the discussion we were having, he decided to suggest I could do a lightning talk in the evening.

    It was great that he suggested - I was not aware there was one, let alone that I could still get listed and that the voting was going on throughout the day. I immediately made up my mind.

    But he continued: "we need women speaking".

    I said nothing, more like shrugged. But decided to address it when speaking.

I introduced myself. I shared that I was there doing a lightning talk because another participant had encouraged me to, so that there would be women speaking. Needless to say, I was the only woman in the lightning talks round. I also mentioned that while it might feel that my main reason to be there was my gender, I was there as a person with credentials: this was talk number 353 that I was delivering, and Kent Beck had said just the day before that he loved my book. I made a joke about it, because that was a better option for dealing with feeling uncomfortable than staying silent about it.

After the talks, I talked again with the other participant. He apologized and invited ideas on how he could have better expressed his excitement about what I had the potential to share. I suggested that reminding me of my gender (any woman of their gender) could be a pattern to avoid. They're already painfully aware.

The conference had no women on the organizing committee.
    There was one woman speaker out of 9 talks.
    There were LOTS of women in the audience.

    There's no absolute "best" in speakers. Speakers tell stories that are based on the life lived. We need diverse voices. What I said today to open my lightning talk was something that just happened to me: all three details on my 'credentials'. I bet no one encouraged any of the men in a gender-based fashion.

I make lists of awesome testers. Notice the list never mentions "awesome women testers". Because the qualifier of gender isn't relevant. They are awesome, just as they are. Yet it looks like no one can share and promote that list without mentioning gender.



    Sunday, January 14, 2018

    Why positive discrimination is equality over time

I remember a spring day 20 years ago. I was a university student who had just taken a course on public speaking with a teacher who turned out to be the most transformative in my life. What I remember is not that her forcing me to watch myself speak on video made me realize my inner world was much messier than my presentation. I remember her for one of my first discussions on feminism.

I was a hopelessly shy student who believed she possessed few opinions. And even if I did, I was very uncomfortable sharing them. While on that course, I read the news every day to force myself to be even remotely able to take part in group discussions on day-to-day topics. That just wasn't me.

So when at the end of the course my teacher told me in private that she thought I was a feminist, I responded like so many women: I wasn't. I did not need to be. There was nothing wrong with equality. If anything, I had only ever been positively discriminated.

I did not think about that discussion for a very long time, but obviously the years since have changed my perspective and raised my awareness of the need for feminism. There are tons of wonderful writings on the problems and solutions, and my concern is still not that I get regularly mistreated, but that I've needed in many ways to be exceptional when normal should be enough.

    A few days ago, I retweeted this:

    Let's look at what it claims:

    • Women generally apply for jobs only when they meet all the requirements
    • Some women apply without meeting all the requirements, and that requires extra effort from them because it is against what they'd naturally do.
Just sharing this tweet meant there was someone puzzled, asking to be educated (extra work for women, when all the resources are already available). To be honest, it did not sound like asking for education, much more like explaining to me why I shared a tweet that was just wrong, using "one woman got selected with us even though she did not fill all the requirements" as evidence that this is not a general trend. Still, see point 2 above: she might have needed to exert extra effort to apply. Regardless, one data point isn't enough.

    In our discussion we got soon to a point I see commonly coming from women: There should not be positive discrimination - "I don't want to be selected for my gender"

The thing is, acts of discrimination are a long-term phenomenon, and we need to look at discrimination over the long term, not as an individual event happening at an individual job interview.
    • When I was 10, my family purchased our very first computer, and we had different ones ever since. They were always located in my (younger) brother's room, and I asked for permission to use them as the budget rules gave me space. His access was less limited. 
    • He started working with programming seriously at the age of 12 (I was 14). His friends were all into it. I coded games by typing them in from magazines already at 12, but I never had a single friend who'd do that with me. 
    • By the time we both went to university to study computer science, he had 7 years of hobbyist programming because "computers were boys' toys". I had a starting interest, with time spent on BBSs and the rudimentary programming I had done teaching myself Turbo Pascal, as schools back then gave you space to learn but did not teach anything. Most girls were not quite so advanced. 
    • Most of my fellow university students had backgrounds akin to my brother's. I was years behind. In addition, if I ever did group work, I was told I probably did not contribute anything - by other students but also by some teachers. I needed to continuously keep proof of my contributions, or work alone while others got to work in groups. 
    • Any course with classroom exercises was my nightmare. There were 2% women, and many teachers believed both genders needed to speak every time. I learned to skip classes to suffer less. Again, more work just to survive.
    • If I was ready to ask for help, I had lots of classmates helping me. Usually at the price of them figuring out whether I was single or not. 
    Back then, this was how things were. I wasn't brave enough to call out any of this. I thought it was normal. That was the world I had always been in. I had no feminist friends to make me aware this was exceptional. I went through it all with plain stubbornness. 

    I know I'm not alone with my experience.

So when I then get invited to a conference past the call-for-proposals process, I recognize that as positive discrimination. Similarly, if two candidates in a job interview seem equal and the woman gets selected, that could also be positive discrimination. But we really don't hire just for the skills of today, but for the potential of tomorrow. So it is less straightforward.

With all the debt from the negative discrimination I've had to go through, I'm nowhere near equality yet.

So I believe in equity. We need to help those who need more help, more than those who started off in a more privileged position. Positive discrimination is equality over time - equity today.

    My story is one of a privileged white woman. The stuff other underprivileged groups go through means we need to compensate for them much longer. 

    PS: I spent 30 minutes writing this post and I've had thousands of discussions like this in my lifetime since I realized I'm a feminist. Imagine what those not needing to have these discussions get accomplished with that time. 



    Friday, January 12, 2018

    All I got for a week of programming was one lousy test script


    From the title, you might think the post is about venting on how slow it is to learn automation. If that's what you are looking for, this is not that post. Instead, this is a post about insights of what happens while we program test automation.

    There was a fairly simple end to end scenario that needed testing. The tool of choice was Python, and examples of doing something fairly similar were plentiful. 

To maintain focus, the scenario was first drafted just as code comments: the steps the script should go through, the verifications that needed to happen along the way, the way we would determine what to make a note of while the test was running, and what things would need to stop the test from proceeding because continuing just makes no sense.
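As an invented illustration of what such a comment-only draft can look like before any of it is code (the product steps below are made up, not the actual scenario):

```python
# Invented illustration: a scenario drafted as comments before any code.
# Steps, verifications along the way, and the points where continuing
# makes no sense.

# 1. Install the product on a clean machine
#    - STOP if installation does not finish: nothing below makes sense
# 2. Register the installation with a new account
#    - VERIFY the account shows up on the management portal
# 3. Trigger an update from the oldest supported version
#    - NOTE how long the update takes, don't fail on it
# 4. Run a scan against the prepared sample set
#    - VERIFY every sample is detected and reported
# 5. Uninstall
#    - VERIFY no services are left running
```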

It could all be very simple, except it almost never is.

First of all, to figure out the scenario, some details of what to check require the external imagination: the product we are testing. Seeing the details of what could be verified needs hands on the computer. We could call that automating, but what we actually do is mostly manual. Sometimes we can run the start of the script to get to the point of pondering. But the pondering is still a manual process. We look at what is available that we could programmatically access. We think about what is good enough to determine whether things work, and how the actual application would allow us to see things with code.

As we get into this manual process, we learn that while I wanted to do a thing, for some reason it does not work. We find bugs. Some of the bugs we notice when we just run through the scenario manually. Other bugs we notice because automation is picky. Where a person can just work around some deficiencies, automation may get us momentarily stuck. Something else needs changing before the script can proceed. And we end up with todo markings in our automation code, even fixing the problems in the application ourselves just to be able to make progress.

Towards the end of the week, multiple little learnings later and with blocking bugs fixed, we finally get the script to a point where it runs in its intended scope. Allowing ourselves then to think outside this little agreed box that took the whole week, there's more. But also, just going through this one scenario is already making the work of adding another easier. There will again be bugs, but they will be different. The scenario we already automated has been run ever since its introduction, alerting us to possible regressions.

I write this post because I read that "Testing, as an exploratory, investigative activity, cannot be replaced by automated checks". It bothers me how often we testers say this. The automated checks are done by people too. The human part of a check precedes creating the automation that successfully executes things. It grows as we add more checks. Many times when automating, we need to look in more detail.

The risk to good testing isn't in including automation in the way we work. It is in not looking wide, if automation gives you the sense of already covering whatever scenarios are relevant. The risk is the automators who say "this is fully tested" when there really is one happy-day scenario with one set of very limited data and selections.

    Automation has so much power as a way of executable documentation.

    Thursday, January 11, 2018

    It ain't bragging if it is true

    I'm ready to blog about this:

Let me start off by quoting the colleague in question, with her permission:
    I had a session with Maaret, where we went through things I do in my job as a software tester. It amazed me how difficult it was to brag about myself and even more, how difficult it was for me to see all the things I do.

    We used white table and Maaret wrote all the thing she has seen me doing. I was speechless. I just nodded, yes yes, that's what I do - I just didn't understand it was worth for mentioning. It was just "business as usual".

    It is really hard to try to prove to your boss how useful you are, when all the things you do happen in the background without any "hard evidence".

    I'm so grateful Maaret took time and went all this through with me, it was an eye-opening session for me, too, and now I have hard evidence to present to my boss :)
You've been here, right? Feeling that you do a good job, feeling that talking about what you consider good is bragging, and that bragging is just awful. It's so awful you can't even find the words to speak the truth to power when it matters to you personally the most: when someone is deciding on your future.

    So how do you learn to brag about your contributions?

    Ask someone else to brag for you

It is often easier to notice the good in others than in yourself. Listen when people say nice things about you, and in addition to avoiding the "oh, that was nothing" that comes out all too often, replace it with "thank you" and a deep mental note of what it was about. You belittle it. Don't. Small things are much bigger than you sometimes give them credit for.

You can also start specifically asking for feedback. And asking for help, like my colleague did, is also ok. Saying things that you hide deep because you don't give them credit can be hard without giving the other person a chance to observe you. I worked in the same room for a month to be able to recognize some of the unique ways of working that fit her personality. We do the same work differently, even to the same results.

    Practice

    Start your bragging small, for a safe audience. If you start to learn to brag on your work, effort and results to your boss, you could frame it as "I'm practicing making my contribution visible". Invite feedback. If your boss isn't the person you feel safe with, find someone who is.

    Within Women in Testing Slack Community, we have (from Gitte Klitgaard's initiative, isn't she awesome!) a #BragAndAppreciate channel that gives excellent opportunities on trying out ways of saying something in positive tone. Small and big brags are equally welcome.

    I've had chances of assessing my feelings on bragging with various coaches, guiding (forcing) me through bragging exercises. Realizing almost everyone sucks at bragging and that we are culturally and structurally conditioned to not brag helps in giving yourself a permission to try it out.

    Start small, grow. Encourage others to share positive. Share positive of others actively, and look at which ones also apply to you. 

    Focus on the positive

Play to your strengths; everyone knows you have weaknesses anyway. It is not dishonest to just focus on the positive, building a case for why you are doing good work. If you're asked why you don't speak up more in design meetings, focus on what you do instead: listen fully without preparing to answer, digest, let things sink in. Focus on describing what you do with the information after it has sunk in. Make the 1:1 discussions that no one else pays attention to visible.

    You will feel guilty about things and you feel you'd want to say out loud some of them. Learn not to. Say them at a different time. Don't belittle yourself. Others in this world do too much of it already.

    Tell stories

    To have really good bragging is to channel being proud and boastful when you talk about things you do and achieve (learning is an achievement, failing is an option). Include a story of what really happened, examples keep things real. For example my colleague is a thorough, patient, detail-oriented tester. Instead of saying she finds a lot of bugs to discuss and then report, tell a story of the time you tested. Choose one that is exemplary or recent. You can't tell all the stories, choose one that shows you in a good light.

    Manage up

    "It ain't bragging if it's true" is attributed to actor/humorist Will Rogers. You could say it's lying, not bragging if it isn't true. But the difference here is to look at yourself and your work in the best possible light. Shine the light on the good parts. We'll notice the others if they are relevant without you personally pointing them out as disclaimers every time you speak of yourself. We can talk of those at another time.

Appreciate what you do. All of what you do. You use a lot of time doing it. It is more worth appreciating than you realize. You need to appreciate yourself so that your boss can learn to appreciate you more through your views. Your career is yours, and too important to be left to your manager. It's more of a collaboration; you drive your own future.

    And to end the story my colleague started, one step further: she presented the list of what she does to her boss, only slightly apologizing that she needed to share all of this stuff. But the best part to me was what she said right after: "My throat is hurting, I was talking so much in this meeting". Mild bragging accomplished and adored.