Tuesday, October 25, 2016

Away from being a gatekeeper

It sneaked up on me. I can't really pinpoint the exact moment of revelation, but now, looking back, I can see I've done yet another 180-degree turn on my beliefs.

I used to believe I, as a tester, was around for the benefit of the stakeholders, shining light on all sorts of perspectives so that we understand what we're releasing. Advocacy was a big part of what I did: making sure people would understand the risks and implications, both of untested features (and I had quite an imagination for things that could go wrong) and of bugs we had found.

At some point, I started understanding that risks to business people mean opportunities to win: there's a chance I don't have to pay for this and things will still be alright! So I participated in all sorts of efforts to find actual bugs and problems, evidence of the risks being true and real, so that we couldn't just close our eyes and wish things would not go wrong.

Whatever the developers would change, I would test with my tester colleagues. Whatever features those changes were introducing, I would dwell on the implications, the chances of bad things happening, and then seek the evidence. I would often find myself estimating how much time we needed for testing and getting my estimates dismissed, being given a shorter time.

Thinking back to those times with the way I perceive things now, I think I've found a less stressful world of testing.

Now I start with the realization that when things fail in production, it's not my fault. I did not change the environment or the code, and I will not be the one staying late and sacrificing my weekends to fix it. The developers (programmers, if you will) will do that. They are the ones ultimately held accountable for quality. I'm here to help them. I'm here to help them figure out ways to prevent escalations from bad bug fixes, from not quite understanding the implications of a change, or from not quite getting all the "requirements" right. I offer my help, and they can choose to accept it. I no longer need to push it down their throats and sit there guarding a release, making sure nothing untested gets out. I no longer work with estimates; I work with time boxes. I commit to doing the best possible testing I can within whatever time I'm allocated.

So today, I stopped to think about where this change of mind comes from. Here are some of my observations:
  • I worked as a solo tester with many developers who tested. I know they can do things without me, even if they can do things better with me. 
  • I saw the developer pain of late nights and lost time away from new features, and realized I could help relieve that pain by channeling stakeholders directly to developers.
  • I experimented with letting features go out without testing them, and the world did not explode.
  • I helped change our release cycle to daily, allowing bug fixes and new feature development both to go into the same cycles. It gave me full time for exploratory testing, as some would happen pre-release and a lot more post-release.
  • I got more done by skipping the endless fights for testing time and the advocating of risks to people who would only understand empirical evidence.
  • I heard other testers, like Elisabeth Hendrickson speak of this: we are not gatekeepers. 
I realized all of this, with a slight smile on my colleague's face, when I wanted to undo a release-freezing process I had created 10 years ago, stating my learning out loud: releases should belong to the developers, not to testers as gatekeepers. Developers, as we know them, often need help in understanding some of the implications of their changes. But they learn through feedback. And they deeply care. I want to side with them, not against them. And we all side with the success of our business through creating the best products we can.

Monday, October 24, 2016

The two language trap

For a good number of years, I worked with an organization that was doing production and test automation code all in C#. The organization got there partially through trial and error with VendorScript, discarding all the work of an individual because no one else could maintain it. And partially from a timely recommendation from me. I had an opinion: it would be better to work in one language and not go for a different language for the testing code.

And I'm simplifying things here. In addition to C#, there was at first a lot of JavaScript, both in production code and tests. Later on, there started to be a lot of TypeScript. And there was PowerShell. So clearly the developers could move around languages. But the common thing was that each of the languages was motivated primarily by production, and testing followed.

The good thing about working with the same language selection was that as the lone tester, I would not be alone even when I contributed to the automation efforts. The automation was there to support the developers, and while e.g. Python and Robot Framework were pitched heavily (it's from Finland, after all), a new language, I believed, would create a distance.

Then I changed jobs, and any timely recommendation from me was way past being timely, with years of effort already committed to a two-language environment. With the production language being C++, I could see why the choice of the same language for testing was not quite as straightforward. So the test language was Python.
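To make the split concrete, here is a minimal sketch of what the Python side of such a two-language setup can look like: a test driving the production binary over its command-line interface. This is an illustration of my own, not code from either workplace; "echo" stands in for the real C++ binary, and the function names are invented.

```python
# Hypothetical sketch: a Python test harness for a C++ production
# binary, driven over its command-line interface. "echo" stands in
# for the real binary; swap in the actual executable and flags.
import subprocess


def run_production_binary(args, binary="echo"):
    """Run the (stand-in) production binary and capture its output."""
    result = subprocess.run(
        [binary, *args], capture_output=True, text=True
    )
    return result.returncode, result.stdout.strip()


def test_binary_responds():
    # The Python tester only sees the binary's external behavior;
    # the C++ internals stay on the other side of the language gap.
    code, output = run_production_binary(["hello"])
    assert code == 0
    assert output == "hello"
```

The point of the sketch is the seam: everything below the `subprocess.run` call lives in C++, everything above it in Python, and in my experience the two groups rarely cross that seam.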

The more I look at this, the more I understand that there isn't a real choice of one language. But the two languages still look a lot like a trap. Here are some differences I'm observing:
  • The gap between 'real' production programmers and the test automation programmers is deeper.
  • The perceptions of the 'betterness' of the languages deepen the gap.
  • People seldom develop equal skills in both, and as new learners they strictly start from one or the other.
  • The self-esteem of the test automation programmers in their programmer identity is lower, as they work on the 'simpler' language.
  • There is a lot more separation of dev and test, even if both are code.
  • As an exploratory tester, I will be in a two-language trap: learning both enough to get the information in two different formats, to the extent I feel I need it.
I feel the two-language trap exists because a lot of people struggle early in their programming careers to just work in one language. There are a lot of language-specific tricks and developments to follow in both ecosystems, taking the groups further apart.

So this whole thing puzzles me. I have ideas of how, through collaboration, I could improve what I perceive as suboptimal siloing. How there's cultural work in promoting the two languages as equals in the toolset. How clean code and technical excellence might work as a bridge.

But I wish I did not have to. I wish I could live in a shared-language environment. And most of all, I wish that those who are not forced by their realities into separating production and testing into two different languages would think twice about their choices.

It's not only about what language your tester is comfortable with. It's also about who will help her in the work, and how much she will have to push to get her work to be the feedback the developers need.

When a test fails, who cares? I would hope the answer extends beyond the group of test automators.

Sunday, October 23, 2016

Mentorship and sponsorship

I think I saw a tweet passing by on my Twitter stream, planting an idea of the difference between having a mentor and having a sponsor. The goal of both is similar: supporting you with your goals. At first, when I stopped to think about it, I was convinced that I've had many mentors but very few sponsors. And that I have always acted as a mentor, not a sponsor.

Edit: here's the tweet that inspired me.

Looking deeper revealed another truth. And as usual, looking deeper needed a friend to point out things that should be obvious.

Sponsors are people who will advocate for you when you need to be more visible. Mentors are a source of guidance, advice and even inspiration. Mentors advise, sponsors act. And surely people have acted for me.

Some of My Sponsors

Thinking about this makes me think about the people I feel have made significant differences in my professional career through their actions that I never had to ask for.

There's an old boss who was willing to go to court with my previous employer to get to hire me. He supported me while we worked together and spoke positively of me, both to me and to others. Often the positive words directed at me were the most powerful. They nurtured me from potential insecurity to trusting in my skills and my potential. When I'm reminded of the Cindy Gallop quote, "Men are hired and promoted for potential, women are hired and promoted on proof", thinking of this boss makes me feel he always saw the potential in me and played a big part in making that potential develop further.

Similarly, I can recognize two other jobs I've ended up in because I've had powerful sponsors. I ended up with a great consulting gig (and later a job) at an insurance company because a woman I had studied with, and in particular mentored through her first year of new studies, was in a position of power and worked hard to hire me. And when presented with the idea of my latest job, I never realized to appreciate the actions of my significant other, who negotiated a higher salary on the spot and made sure the job, if it emerged, would fit my dreams and expectations better, in particular not giving up my speaking. He spoke for me, so that I did not have to. I was not even considering changing jobs during that discussion, which made it easy to dismiss his contribution. The job I ended up considering was one he helped create. It took me another month to start considering it, making it even easier to forget the connection.

Another set of sponsors are the people who have taken me forward in my speaking career, and those people I want to mention by name, as they are more known in the community. Helena Jeret-Mae gave me my first keynote a few years back. Rosie Sherry started picking articles from my blog to share, and taught me that there are people who make things easier for others. Rosie Sherry and Anna Royzman invited me & Llewellyn Falco to do an opening talk for TestBash NY, and Anna Royzman later allowed me to do an opening keynote for her conference. Giving people a stage is an act of sponsorship, and I've been very fortunate with that.

While I have never had a named mentor or sponsor, I've had plenty of people in both roles teaching me and supporting me.

Paying the good forward?

Similarly, I could easily recognize that I've been a mentor. I've mentored quite a number of new speakers, both local and international, to smooth their way into speaking or into delivering just one talk. Trying to support the dreams of the people (both testers and developers) around me at work is a big part of what I do.

I often go well beyond mentoring into sponsoring. I work a lot to raise money to help people with things that have been hard for me, like the financial side of speaking at conferences. I share my extra-free-entry-as-speaking-fee with people I feel need the nudge of inspiration a conference could give. I've sponsored people in my own organization, with my employers making room to allow me the time to speak, but also people meeting the criterion "your organization wouldn't pay for your entry".

I hope I've opened a few doors with recommendations, and smoothed someone's professional journey along the way.

Active seeking of sponsors and mentors

The tweet made me realize I've never actively sought a mentor or a sponsor. Not doing so actively makes them harder to name and recognize, but they most likely are still there.

It's great to see there are programs for mentorship; sponsorship, as it includes someone acting on your behalf, takes more trust. But it's a fascinating thought experiment to ask whether there is more we all could be doing for one another. Encouraging, mentoring and sponsoring.

Friday, October 21, 2016

Safety in being heard

Today, I've been thinking about asking. Let me tell you a few stories that are serving as my inspiration.

You don't know to ask for things you've never experienced

I'm speaking at a conference, and as my speaker fee, I negotiated a free ticket - something I've been doing in Finland for quite a while. It means that not only do I get to go, but I get to take someone with me. In past years, this has opened expensive commercial courses to people in my community and people at the same company I work at. The last time I passed a ticket to a colleague, he did not use it. I wanted to make sure that this time my work would go to a good purpose, so I kept checking with the tester at work I had in mind to take.

In the process of discussing all this, I learned that this was the tester's first ever conference (something I really did not expect) and that things like "food is included" were a surprise. In the discussion, I realized that as a regular conference speaker and goer, I take a lot of things for granted. I no longer notice that they might not be clear to others.

So I felt grateful for having enough interaction to pick up on the unspoken questions in the puzzled looks. The tester might not have known enough to ask the questions. Then again, here not knowing would clearly have been ok, and could be learned later.

You get answers when you know to ask

When you have a question, people rarely say no to answering it. I'm new to my job, so I have a lot of questions, and as long as I come up with the questions, things move along nicely.

Yesterday, I was feeling back pain. Sitting in my office chair, I suddenly realized that I had been sitting long days in a non-ergonomic, unadjustable chair. I never paid attention until my body made it obvious I should have, basically crippling me for the day. As soon as I asked for a proper chair, I got it. But I had to ask. It was still not too late to learn to ask.

People tend to reject info they don't ask for

I've been experiencing a recurring pattern over the last weeks where I point out unfinished work (usually of a surprising kind) and the developer I talk to brushes it off. It's often "some other team's responsibility" or "agreed before I joined" or "will be done later". Having been hired to test (provide feedback), having my work rejected categorically feels bad. And it feels worse when I follow up on the claim and come back with info on what the other party says, and only then the unfinished work gets acknowledged.

This has led me to think about the fact that whoever asked me to provide the information as a tester is different from the developer who gets to react to my feedback. And as a new person on the job, I would love a little consideration for my efforts. They are not noise; I pay a lot of attention to that.

Why all this?

All of this makes me again think of psychological safety. Being safe means being heard. Being safe means being heard without fighting for your voice. Being safe means being heard even if you had no words to describe your questions.

As a tester, I've learned to never give up, even when I feel unsafe. And simultaneously, I look around and wonder what makes some of the other testers so passive, accepting what they are told. And yet, they work hard in their tester jobs.

It makes me think that while I'm comfortable with confrontation, it still eats up my energy. Everyone should be allowed to feel safe.

And to get there, we need to learn to listen. 

Thursday, October 20, 2016

Testing in the DevOpsian World

There is an absolutely wonderful blog post that describes Dan Ashby's reaction to being in non-testing conferences that seem to make testing vanish. The way Dan brings testing back is almost magical: testing is everywhere!

At first, I was inclined to agree. But when I looked at the DevOps model with more empathy for the DevOpsers and less for the tester profession, I no longer did.

The cycle, as I've experienced it with the DevOpsers, is used to explain the continuous flow of new features, through learning about how the system works in production. It's not about setting up branching systems or strategies. It's not about questioning the mechanisms we use to deploy multiple times a day - just the delivery of value to the application.

I drew my version of the testing-enhanced model:

In this model, testing isn't everywhere. But it is in places where DevOpsers can't really see it. Like the fact that code is much more than writing code: code is just the end result of whatever manual work we choose to put into the value item delivery. All the manual work is done in a branch, isolating the changes from whatever else is going on, and it includes whatever testing is necessary. With a DevOpsian mindset, we'd probably want exploratory testing at this point to be driving the automation creation. But we wouldn't mind finding the occasional oops, where we just adjust our understanding and deliver something that works better. And while some portion of this turns into automation, it's exactly the same as with other code: not all the thinking around it ends up in the artifact, and that is ok, even expected.

But as we move forward in the value delivery cycle, we expect the systems that help us move quickly to production to be automated. And even if there is testing there, there's no thinking going on in running the automated tests, the build scripts, the deployment scripts and whatever else is related to getting the thing into production. Thinking comes in if the systems alert on a problem, and instead of moving forward in the pipeline, we go back to code. Because eventually, code is what needs to change to get through the pipeline, whether it's test code or production code.

On a higher level, we'd naturally pay attention to how well our systems work. We'd care about how long it takes to get a tested build out, and whether that ever fails. We would probably test those systems separately as we're building and extending them. But all of that thinking isn't part of this cycle - it's the cycle of infrastructure creation, which is invisible in this image. Just as the cycle of learning about how we work together as a team is invisible in this image.

However, in the scope of value delivery, exploratory testing is a critical mindset for those operating and monitoring production. We want to see problems our users are not even telling us about - how could we do that? What would be relevant metrics or trends that could hint that something is wrong? Any aspects that could improve the overall quality of our application/system need to be identified and pushed back into the circle of implementing changes.

I find that by saying testing is everywhere, we bundle testing into the perspectives a tester thinks testing is. A lot of the activities testers would consider testing are, for non-testers, design and proper thinking around implementation.

By bringing in testing everywhere, we're simultaneously saying the model of value delivery is extended with elements of
  • Infrastructure creation 
  • Team working practice improvement
And it's natural we'd say that as testers, because those are all perspectives we consider part of what a tester facilitates. But are they testing of the application, and does testing need to go everywhere in a model that isn't about all things development? I would disagree.

My feeling is that the tester community does a disservice to itself by saying testing is everywhere. It's like saying only things we label testing make it good. As if things programmers label programming or code wouldn't have the same potential.

To stay at the same table, discussing and clarifying what truly happens in the DevOpsian world, we need to consider speaking in the same scope. Well, I find that useful, at least.

Wednesday, October 19, 2016

Entitlement - extending our contract

I've got a few examples of things I need to get off my mind - of things where people somehow assume it is someone else's duty to do work for them.

The word on my mind is entitlement. It really puzzles me how there can be so many of these cases where someone assumes they have free access to my time, just because they had some access to my thoughts in a way I chose to make available. It leads to what I perceive as a lack of thoughtfulness in requesting services, as if you were entitled to them. And it puzzles me why I think of this so differently, taking it for a fact that I should appreciate what I'm getting from the "free" services, and that I might actually need to make it bidirectional in some way if I have specific requirements to fulfill my personal needs.

The Uninvited Debates

The first thing where entitlement comes into play is the idea of debates - whenever, wherever. When you say something and someone questions you, that someone is somehow *entitled* to your answer. Not that I would have the free choice of giving that answer in the spirit of dialog and mutual learning, but that I owe people an answer and an explanation.

I love the idea that my time is mine. It's mine to control, mine to decide on, mine to invest. And investing in a debate (from my perspective) means that I get to choose which debates I stop early and which ones I continue further. And it's not about fear of the other party - it's awareness of the rathole that isn't doing anything but wasting our time.

The Burden of Proof

So I wrote a book. So it's kind of obvious Mob Programming and Mob Testing are close to my heart. The thing that puzzles me is the people who feel that for *evangelizing* something this wasteful (from their perspective), I now need to start a research project or share private company data with numbers to prove mobbing is a good use of time.

I'm happy to say it's a thing you either believe in or not. And that successes with it will most likely be contextual. I also say that my experience was that it made no sense to me before I tried it. None of the rational arguments anyone could have said would have convinced me.

There's a lot of research on pair programming. Yet I see most people saying it can't work. I welcome anyone to do the research and come to whatever conclusion they come to, but I'm not planning on setting that up. Again: my time, my choices. Writing a book on something isn't a commitment to have answers to all the questions in the world.

I also find these labels interesting. I've been told I'm an evangelist (for mob programming) and a leader (for testing). I label myself as a sharing practitioner. And my label is what drives my time commitments, not the labels other people choose for me.

The Conference Requirement

I speak at conferences. A lot. And sometimes I run into conferences that feel that, by giving me the space to speak, they are entitled to a lot of services, with requirements on how those services are delivered.

It's not enough that often these conferences don't pay the expenses, meaning you *pay to speak*. In addition, they can have very specific requests. My favorite thing I don't want to do is the use of a conference template on anything beyond the title slide. It's a lot of work moving elements around, and that work isn't exactly something I would love to volunteer my time for. Reserving a right to change *my slides* is another. I'm fine with removing ads and obscenities, but asking for full editing rights and requiring my compliance to change per feedback sounds to me like I shouldn't be speaking in the first place.

We're not entitled to free services. Sometimes we're lucky to get them. Seeing paid services go down, I'm reminded that we are not entitled to those either. We're lucky to have things that are good. Lucky to have people who work with us and share with us.

Saturday, October 15, 2016

Two testers, testing the same feature episode 2

There are two testers, with a lot of similarities but also a lot of differences. Tester 1 focuses on automation. Tester 2 focuses on exploration. And they test the same feature.

And it turns out, they collaborate well, and together can be the super-tester people seem to look for. They pay attention to different things. They find different things (first). And when that is put together, there's a good foundation for the testing of the feature, both now and later.

Tester 1, focusing on automation, makes slow progress adding automation scripts and building coverage for the feature. Any tester with unfinished software to automate against would recognize her struggles. As she deeply investigates a detail, she finds (and reports) problems. As her automations start to be part of the regular runs, she finds crashes and peculiarities that aren't consistent, warranting yet more investigation (and reports). The focus on detail makes her notice inconsistencies in decision rules, and when the needed bits are finally available, not only can the other automators reuse her work directly, but she can also now easily scale to volume and numbers.

Tester 2, focusing on exploration, has also found (and reported) many bugs, each leading into insights about what the feature is about. She has a deep mind map of ideas, to do and done, and organizes it into a nice checklist that helps tester 1 find better ways of automating and adds to the understanding of why things are as experienced. Tester 2 reports mistakes in design that will cause problems: omissions of functionalities that have in the past been (with evidence) issues relevant customers would complain about, but also functionalities that will prove useful when things fail in unexpected ways. Tester 2 explores the application code to learn about the lack of use of common libraries (more testing!) and about placeholders, only to learn that the developer had already forgotten about them. Tester 2 also personally experiences the use of the feature, and points out many things about the experience of using it that result in changes.

Together, tester 1 and 2 feel they have good coverage. And looking forward, there is a chance that either one of them could have ended up in this place alone just as well as together. Then again, that is uncertain.

One thing is for sure. The changes identified by tester 2 early on are the ones that seem most relevant early on, leaving more time for implementing the missing aspects. The things tester 1 contributed could have been contributed by the team's developer without a mindset shift (other than a change of programming language). The things tester 2 contributed would have required a change in mindset.

The project is lucky to have the best of both worlds, in collaboration. And the best of it all is the awesome, collaborative developer who welcomes feedback, acts on it in a timely fashion, and greets all of it with enthusiasm and curiosity.