Thursday, March 29, 2018

A Developer's Idea of Exploration

"We did a neat thing exploring today", a developer exclaims. I look, interested, wondering what is the source of excitement this time. It's not the first time they've been excited about doing things clearly very close to my heart. But a lot of times we find our ideas of exploring take very different, yet fascinating turns.

"We did this combinations test", they explain. "We took bunch of values when we did not feel like thinking too much, and passed them all in, and created combinations", they continue. "We learned about behaviors we did not think of", they finish. And we agree that is wonderful. Learning while testing, appreciating the new information, absolutely something we share.

There have been little remarks like this coming my way a lot recently, and while I can share the excitement of learning something we did not know, I also find that a developer's way of getting "as close to exploratory testing as I usually do" isn't quite where my exploration is.

There was a session about property-based testing, generating test cases to run through the same partial oracles. Simply testing wider does reveal things of an unexpected nature, especially when you have a way of identifying some relevant aspect of correctness with a property.
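
As a minimal sketch of the style - assuming Python and the hypothesis library, with an illustrative encode/decode pair standing in for the code from the session:

```python
# A round-trip property acts as a partial oracle: whatever text the
# framework generates, decoding the encoded value must return it.
from hypothesis import given, strategies as st

def encode(text: str) -> bytes:  # illustrative stand-in for code under test
    return text.encode("utf-8")

def decode(data: bytes) -> str:
    return data.decode("utf-8")

@given(st.text())
def test_round_trip(text):
    assert decode(encode(text)) == text
```

The framework generates a wide spread of inputs and shrinks any failing case to a minimal example - wider than any hand-picked list of values.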

There was an exercise of creating combinations for a three-variable method, finding out the application does not work as specified on its boundaries. Just having more cases easily available and visually verifiable revealed information of an unexpected nature.
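
A rough sketch of that kind of combination run, assuming Python; the classify function and its boundary values are hypothetical stand-ins for the exercise's method:

```python
# Run every combination of candidate boundary values through a
# three-variable method and print the results for visual verification.
from itertools import product

def classify(a, b, c):  # hypothetical stand-in for the method under test
    return "valid" if all(0 <= x <= 10 for x in (a, b, c)) else "invalid"

for a, b, c in product([-1, 0, 10, 11], repeat=3):
    print(f"classify({a}, {b}, {c}) = {classify(a, b, c)}")
```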

All three of the examples I've seen recently are ways of programmatically doing more. While they uncover relevant information, there's still more to exploratory testing.

This makes me think back to the exploration of someone else's web app that we did in a mob yesterday evening. Here are just some of the things I remember us learning:
  • For a system aiming to enhance datasets until they are acceptable, it makes no sense to first require filling in extra info when a condition already exists that will prevent the dataset from ever being acceptable (a problem in the order of tasks in the process)
  • For uploading files, we'd like to be able to use a file explorer over just drag-and-drop. It matters how most others do things.
  • When a user interface view includes many things to show in tables, having relevant tooltips for some but forgetting placeholder texts for other, less obvious ones creates confusion.
  • When you must choose one of two options but not both, automatically emptying the other field isn't exactly the best way to present that.
  • When rejecting an input, logging it might be useful.
  • When failing so that there's a big visible error in the log, it would be very nice if that error was made visible also for the user. 
  • When there is a recurring element for guiding users, filling it in three different ways makes little sense.
  • When you can get to a functionality with one click as there is just one option, hiding it in a menu requiring an extra click won't be helpful.
None of these would have been found by the "let me just play with my unit tests" approach to exploring. Then again, none of what we did would have found the things that approach could find.

It's not this or that, but this and that. And it's lovely when developers find ways of applying the same ideas with the tools at their hands. I hope to get to experience a lot more of it going forward.

Tuesday, March 27, 2018

The Test Automation Trap

There's a pattern that we keep seeing in "agile" projects again and again.

We work together as a team to implement a feature. We automate tests for that feature as part of its definition of done. As an end result, we have some more tests than before, on all layers of testing. We get the tests to run blue and we make a release.

We work together to implement the next feature. The previously added tests make our test runs light up like a Christmas tree, and in addition to adding new tests for the new functionality, we clean up the previous tests.

The longer we continue, the worse the Christmas tree lights get. The more time we spend fixing the past tests, the less time we have for the new ones. And we take shortcuts in fixing past tests, just removing the ones we deemed so necessary before.

And no one talks about it. It is a ritual that we must go through. Like a rite of passage.

Over time no one cares about how well the automation tests things. All we care about is that it passes so we can get through the gate.

I've seen so many people trapped in the cycle of being too busy to think about *why the tests exist* and *what value they are really giving us*. These people have no time for manual testing, because - very honestly - automation eats up all their time. And they might not even see that the approach is not really working out for them.

The test automation trap creates testing zombies: ones that make the moves, but have stopped learning about what they're doing.

The best way I know out of the trap is to start caring about testing again. Put testing, not the scripts, at the center. It's time to talk about risks and strategies again. It's time to build up a test automation asset that supports whatever strategies you're going for. Stop going through the motions, and think. Learn. Look at where your time goes. Experiment your way out of the trap of magical moves that feel like a better idea than they are.

Thursday, March 22, 2018

The tester work asymmetries in a team

In the organization, I work with a team. I sit in the same room with this team. I use a shared label to identify our togetherness. We go through rituals together: planning, working, delivering, demoing, and improving. There's just enough of us so that we can do relevant things, yet not so many that coordinating our work would be a problem.

The best of these kinds of teams work together over a longer time, and on problems they can feel ownership of. That's where my life gets complicated.

The wonderful little team I work with works in an organization built on the ideals of an internal open source project. The team has no neat list of microservices they'd be responsible for; anything and everything anyone has ever created in the overall system is up for grabs.

As a tester in a team like this, I find it fascinating to look at how people approach the problem of modeling their responsibilities differently.

One tester seems to model their actions on the team's developers' actions. If a developer goes and changes something, the tester follows and helps test the change. A lot of this activity happens by the developer pulling someone other than themselves in to implement automation.

Another tester seems to model their actions on the end-to-end flows of the system, from the perspective of mention-worthy functionalities being introduced. None of this activity happens by the developer pulling people in, but by the tester pushing ideas of seeing value in the system perspective.

A third tester seems to model their actions on collecting any work anyone would wish a tester to do. Whatever needs doing and looks like it's being dropped by others ends up as things they do.

Explaining and understanding where the time goes and what activities belong with "a team" can get very complicated. I guess it also makes sense that it's a high-trust environment, where doing is considered more relevant than explaining.

Tuesday, March 20, 2018

Working with all levels of ignorance

There's a view of the world of testing on the loose that I don't really recognize. It's a view driven by those identifying primarily as developers, and it looks at testing as a programming problem. It suggests that we already know what we know, and that testing is about keeping tally of that again and again as the applications we're testing change.

It is evident that I come to testing from a different place. I approach it primarily as an exercise in figuring out things we don't even know we don't know, through spending time and thought with the applications we're testing. I expect to find illusions to break, and to show how things really are different from what we imagined they should be - in relevant ways.

I think of it as a quest for four types of information.
1) Known knowns - things we know with certainty
2) Known unknowns - things we know with caution
3) Unknown knowns - things we forget
4) Unknown unknowns - things we ignore

So many times over the years, I've been fooled by the unknown unknowns, my own self-certainty about my analytical skills, and a lack of focus on the serendipitous nature of many of the bugs. But even more, I've been around to save the same developers, again and again, from their self-certainty about their analytical skills and complete ignorance of information beyond what they already remember.

The idea of orders of ignorance is powerful. As a tester I come to testing much more from the idea of not knowing at all what I don't know, but with a keen quest to keep experimenting until I find out.

When I drew the image some years back, I was trying to find imagery related to building houses. We know with certainty things a house needs. A house without a door, window, or roof wouldn't be much of a house. Yet even with things we know for certain, we can end up with different expectations, because what one of us thinks is certain, another takes as a mere suggestion. We also know with caution what a house needs. We may know it needs windows without knowing their exact shape, number, or position, but we certainly know we need to figure that out. With a house sufficiently complex, we start forgetting some of its nooks and need to rediscover what has become lost. And there are things we completely miss out on, that could end up shaking the very foundation of what a house should be like.

Thinking back to a particular example of testing a remotely managed firewall, it is also possible to map the activities I came across. I knew that if I introduced a rule remotely, it was supposed to show up as a rule locally. I knew I did not know if there was a rule name length limitation, so testing for it made sense. I knew I had created rules before using the local UI where very short names were allowed, and trying it again reminded me that names as short as a single character worked. Yet when using a single-character name remotely through an API, I witnessed completely unexpected performance issues, resulting in us forcing a three-character limit for stability reasons. All levels of ignorance were in play.

Sunday, March 18, 2018

The Lure of Specifications

There's a fun little exercise from Emily Bache called Gilded Rose. The exercise is intended as a piece of software to extend, and naturally you'd want to have tests before you go on changing it. Coming to it from a more purely testing / tester perspective, my fascination with the exercise is in how people end up modeling the work.

Gilded Rose makes available a specification and the code to change. When setting up the exercise, I hand people the spec, and create a combination approval test in the sample unit test's scope that is easy to extend with new values.
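
A minimal sketch of such a setup - assuming the Python version of the kata (gilded_rose.py with Item and GildedRose) and the ApprovalTests library's combination approvals; the value lists here are example seeds, not the ones I hand out:

```python
# Approve the output of update_quality for every combination of the
# listed name, sell_in and quality values.
from approvaltests.combination_approvals import verify_all_combinations

from gilded_rose import GildedRose, Item

def update_quality_for(name, sell_in, quality):
    items = [Item(name, sell_in, quality)]
    GildedRose(items).update_quality()
    return items[0]

def test_update_quality_combinations():
    verify_all_combinations(
        update_quality_for,
        [
            ["foo", "Aged Brie", "Sulfuras, Hand of Ragnaros"],  # names
            [-1, 0, 11],                                         # sell_in values
            [0, 1, 50],                                          # quality values
        ],
    )
```

The first run produces a received file to inspect and approve; from then on, adding a single value to any of the lists multiplies the covered combinations.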

The question goes as so many times before: how would you test this?

Given a specification, most people jump at the specification. As the first values based on the spec get added, I usually introduce the idea of watching code coverage as we add tests; some people pick it up, others don't.

This particular exercise lets me model how people connect three types of coverage when testing: covering the spec, covering the code, and covering the risk.

The better ones have
1) Refused to follow the spec step by step, because someone else must have done that already
2) Thought of ways to test that neither the spec nor the code introduces
3) Not stopped testing at covering the spec while code coverage is still low.

There's something about a specification that drives people's focus, making them less likely to see other things without added effort. Sometimes it might make sense to step away from the lure of the specification's answers, and see if the answers you'd naturally arrive at make any more sense.

Sunday, March 11, 2018

Building a relationship with developers

At a conference talk, I again haphazardly shared the idea of not writing bug reports. I call it haphazard because my talk was about increasing your impact as a tester, and this was just one of the “try this” solutions I shared. But it is one that rocks people’s worlds and beliefs, makes them approach me with disbelief, and even come off as attacking me for having an experience and sharing it.

In all these interactions, I found a lot of value for myself in recognizing how different the environments we inhabit and set up can be. While “stop writing bug reports” is the thing I say, what is really behind it is the idea of starting to pay attention to the cost-value structure of your work, with a particular focus on opportunity cost. Each one of us has more power to decide what we do and how we do it than we realize. If we are asked to execute preplanned test cases and a manager asks us which ones we executed at the end of each day, we are more constrained than I believe great testing should ever be. Yet even in that setting, we can choose the level of focus we exert on each of the tests. We can add emphasis on some, quickly browse through others, and add our own ideas in between the lines. If we book a meeting with ourselves for an hour to practice using tools and approaches that don’t fit into our normal day, most organizations don’t even notice. And instead of asking permission for all this, think of it as a possibility to ask for forgiveness - but only if it is needed.
The environments I inhabit are essentially different. It’s been years since anyone told me specifically what I need to do and what my next task is. Even the constraints that appear to be in place may not be real. But what is very real is the sense of responsibility and continuous value delivery. I know what good and better results could look like. And I know I don’t know how to get to the best results without experimenting.

So I experiment with stopping bug report writing. I end up working my first year in an organization where, on another business line, a colleague is being scolded for lack of “evidence of value”: they don’t write enough bug reports, and since they don’t automate or review automation, they are not visible in the pull requests either. The number of Jira tickets I raise in the whole year can be counted on the fingers of my two hands. Yet the number of issues I find, address, and get fixed is on a whole different scale.
It isn’t easy to tell myself to take the harder route and go talk to the developer who could fix this, actively seeking a decision on it on the spot - it either matters (and we fix it now) or it doesn’t (and we fix it when we realize it matters, hearing it back from the users). Delivering continuously supports - even enables - this way of working, because you cannot leave issues of relevance lying around without them impacting the users, very soon.

The not-easy route is rewarding, though. In those moments where I used to enjoy my private time writing a bug report I could be proud of later (one that never warranted such care and love for any reason other than being the evidence of “me”), I’m building a relationship with the developer that I need so that my work has real impact. I learn more in that interaction. I have better chances of getting my message across. And more often than not, the bug turns not only into a fix but into a unit test too, in that little collaboration we end up having.

When I need to choose between the time to write a bug report and the time to communicate the bug in a way that creates a better relationship, I stretch for the latter. Not because it is comfortable (it isn’t; sometimes the reactions are downright mean) but because it makes us and our software better.

Friday, March 2, 2018

Results from No Product Owner Experiment

Four months ago my team embarked on an experiment to change the way the team worked, in an uncomfortable way. We called it the "No Product Owner" experiment because that captures one core aspect of it. It was essentially about empowering a team to be customer-obsessed without a decision proxy, in an organization where many people believe that finding one person responsible is a core practice.

Four months later, the experiment is behind us. We continue working in the product-ownerless mode as the team's de facto way of working. The team is still very much an experiment within the organization, and our ways are not being spread elsewhere, as we in the team like to keep our focus on the team, technical excellence, and delivery.

Experiment hypothesis

We approached the No Product Owner suggestion as an experiment, as it had many aspects none of us had experience with. There was still a person in the organization who had been hired to be the team's product owner. The team wasn't all super-experienced mega-seniors, but a diverse group.

When thinking of the assumption we would be testing with this, we came to frame it as: a customer-obsessed team directly in touch with its customers performs better without a proxy.

Better is vague, so we talked about looking at the released output from the team. Not all the tasks we could tinker on given full freedom, but the value delivered for the customers' benefit.

Happened before this experiment...

To understand what happened, there are some things that had happened already before. We did not talk about them as "grand experiments" that would be shared anywhere. They were just our way of tweaking our ways of working by trying out what could work - and not all of it did.

We had experimented with backlog visibility by using post-its on a wall in the form of a kanban board, using an all-electronic kanban board, and not using a kanban board at all but a list of things we were working on within the team. The last worked best for us; we did not find much value in the flow, just in the visibility (and discussions). We had experimented with the product owner's location in relation to the team, having him first in the team room and later on a different floor. We had learned to do frequent releases, and through learning to do that, stopped estimating and focused on fast delivery of continuous value.

The frequent releasing, in particular, was the reins of the team, keeping us synchronized. Value sitting on a shelf in the codebase, not visible and available to our users, wasn't value but just the potential of it. It had transformed the way we designed our features, and helped us learn to split features, always asking if there was something smaller in the same direction we could deliver first.

We also had no scrum master. At all. For years. No team-level facilitator, and our manager is the very hands-off, always-available-when-called type, with about 50 people to manage.

Introducing No Product Owner

I blogged about the first activity of No Product Owner already months ago. We listed together all the expectations we had towards a product owner, and talked about how our expectations would change. We moved the person assigned as Product Owner to a role we framed as Product Management Expert, and agreed his purpose towards the team was very straightforward: requirements trawling. He would sit through meetings, pick up the valuable nuggets, and bring them back to the team for the team to decide what to do with the information.

The team embraced the change, and the level of energy became very high. The discussions were more purposeful. We started talking directly to our sales organization, and to our real customers over email and in various communities. We increased our transparency, but also our responsiveness.

In the first month, there were several occasions where the PME would join team meetings on a cadence and express things in the format "I want you to...", only to find themselves corrected. The team wants. The team decides. The team prioritizes. The power is with the developers.

And our team of developers (including testers who are also developers) did well.

From High Energy to New Impacts

Before starting the experiment, we were preparing a major architectural change effort, with certain business-critical promises attached to it. As soon as the experiment started, we sat down with sales engineers to talk about the problem. An hour later, we had a new solution. A week later, the new solution was delivered. The impossible-without-architecture-change turned possible once we understood (and found motivating) the real needs and the real pain.

Throughout the experiment, I kept a diary of the new impacts. The impacts are visible in a few categories:
  • Taking responsibility for a real customer's experience. We had a fix that gets delivered in a complicated way through the organization's various teams, and we did not only do the fix as usual, but followed through to the exact date the solution was available to solve the customer's problem.
  • Fixing customer problems over handing them off through prioritization organizations. We hooked real users with problems directly to the people fixing the problems. The throughput time improved, and we did fixes I can claim we would not have done before.
  • Delivering customer-oriented documentation when there was a solution but it needed guidance.
  • Coordinating work across the organization at the level of technical details to increase the speed of solutions, removing handoffs.
  • Coming up with ways of doing requested features that brought down the risk and scope of first deliveries, enabling continuous flow of value. 
  • Coming up with features that were not requested that the team could work on to improve the product.
  • Adding configurable telemetry to understand our product better in a data-driven way
There were two particular highlight days.

21 days into the experiment, the team received feedback that their latest demo was particularly good and focused on customer value. When confronted with the feedback, the team considered it "that's what we're supposed to do now" - we are customer-focused.

65 days into the experiment, the team realized that the last appearance of the product management expert in planning had been around day 55. There were other channels than the structure for keeping a pulse on what might be important.

There was one particular low, or risky, day.

40 days into the experiment, the team reallocated three of four programmers and one of three testers to work on things outside the usual team scope.

Interestingly, the reallocation after 40 days took the already customer-obsessed developers and moved them to work on something where they could still carry the same sense of responsibility. The subteam ended up representing the business line in the cross-business-line effort without needing a role matching the other business line's product owner. Progress on the tasks, with high motivation and a feeling of empowerment, has also been great. 2.5 months into a 9-month plan, there is an idea that we might be done in 4 or 5 months instead, while still extending that effort with necessary improvements over merely following the plan.

Team Retrospective

After the 90-day period, we had a team retrospective with the ex-product owner and talked about what had changed. The first, almost unanimous feeling was that nothing changed. Things flowed just as before.

The details revealed that there might have been change we did not appreciate at first:
  • We delivered about twice the amount/size of valuable things as in the two previous 3-month intervals, all of them assessed afterwards through discussions, not through estimates.
  • We were more motivated, regardless of the temporary team split (even if it is for 9 months).
  • We did things we were not doing before, without having to drop things we were doing before.

I can now believe in magical things happening in a very short timeframe. I couldn't before. Some of the things never reached us before - filtered away to help us keep focus - and turned into big things that could never be done.

We did not magically have more people available. But the people we had available were more driven, more focused, more collaborative, and believed more in their ability to take things through to customers.

The value of not using time and energy on estimating became evidently clear when the task-forced subteam was inflicted on an environment where estimates were the core. The thinking around opportunity cost - what else one could do with the time used on estimating - became clearer.

Finally, we looked at what the Product Management Expert did. They reported higher job satisfaction and less stress. They reported they focused on strategic thinking and business analysis.

No one remembered any piece of information that the PME's requirements trawling or strategic thinking would have brought in during the three months, so there is potentially value not delivered through to the customer (or work wasted, as it has no impact).

Improving the ways of connecting product management and R&D efforts is a worthwhile area to continue on. There may be a need to rethink what an R&D team is capable of without an allocated, named product owner.

There were also some rumours around that I had really assumed the de facto product owner role, but I assure you I haven't. Things flowed just as well while I took my 3-week winter vacation, and spent at least another 2 weeks conferencing around the world.

Every single team member acted in the product owner role. Every one. Including the 16-year-old intern.

I couldn't be much more proud of my colleagues. It is a pleasure to change the world in our little way with them. Without a product owner.  

Grow your Wizard before you need them

Making teams awesome is something I care deeply about, so it is no wonder that the discussions I have with people are often about problems around that. Yesterday I again suggested pairing/mobbing at work, only to receive cold stares and the unspoken words I heard in the last place I worked: "You are here to ruin the life of an introvert developer". I won't force this on people, but they can't force me not to think about it or care about it.

As I talked about the reactions, I was pointed to a story someone has been telling many times before. And with "just the right slot" in my calendar, I went and wrote about it. Someone else will probably make an awesome video when they get to it.

Some of us have some sort of history with computer games. Mine is that I was an absolute MUD (multi-user dungeon) addict back in the day, and I still irregularly start up Nethack just for nostalgic reasons. In many of these fantasy game types, we fight in teams. And we have characters of different types. If you play something that is strong in the beginning, you survive the early game more easily. Wizards at low levels are particularly weak, and in team settings we often come to places where we need to actively, as a team, grow our wizard. Because when the wizard grows to its high-level potential, leveling up with the others' support, that's an awesome character to have on your team.

A lot of times we forget that the same rule applies to growing people in our teams. The tester who does not program, and does not learn to program because you don't pair and mob, could be your wizard. At least the results of being invited into the "inner circle" - fixing problems by identifying them as they are being made - feel magical.

Just like in the role-playing games, where you need to bring the wizard fully into the battle and let them gain the XP, you need to bring all your team members into the work, and find better ways for them to gain experience and learn.

Pairing and mobbing isn't for you. It is for your team.