Thursday, September 29, 2016



A test automation study group, and a round of collecting things people feel like studying. Easy problems, complicated problems. Something for us all, something to do just between two people. I bring out one thing I'd like us to do: review a fix to a test automation script that we're now sharing between my team and another. All I knew of it was that a test was failing too often and a Jira issue had been filed for it. A pull request was waiting. My internal lazy kicked in.

With the study group, we reviewed the pull request, and just as we were about to accept it together, one of the developers beat us to it. My main goals were fulfilled: the fix would improve the state of both of our automation information radiators (less red), and I never had to go through the Jira issue, the pull request and the code alone, uncertain of the intent of the changes. Well, to be honest, I wouldn't have done that. I have an automation specialist in my team.

There was one more thing that came out of that study circle, though. Reviewing the fix, it became clear that it was a workaround. The test automation did magic to get around a product that did not behave the way the script needed. I suggested we go to the developers to get a change into the product. It was clear that my suggestion was atypical, even if not unheard of. To me, it seems like a good idea to optimize overall development so that we can trust the feedback and not run in circles creating workarounds, when in collaboration with the developers we could have a product change that makes things more straightforward. A nice discussion emerged.
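To make the shape of that discussion concrete, here is a minimal sketch of the kind of workaround we were looking at. Everything in it is hypothetical - the names, the timeouts and the readiness check are mine, not from the actual pull request - but it shows the pattern: the test polls and sleeps because the product gives it no trustworthy signal, where a small product change could remove the guessing.

# Hypothetical sketch of a test-side workaround: poll and sleep because the
# product offers no reliable "I'm ready" signal. Names and numbers are mine.
import time

def wait_until_ready(is_ready, timeout_s=30.0, poll_s=0.5):
    """Poll a readiness check until it passes or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(poll_s)
    return False

# The workaround keeps the radiator green-ish, but the test still guesses.
# The product-change alternative would be an explicit, queryable status
# (an endpoint, an event, a log marker) the test can assert on directly.

The study-group discussion was exactly about this trade-off: the workaround is quick, but the product change keeps the feedback trustworthy.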

Coming out of that study group, I summed my lesson up in a tweet. Susanne Sporys put it into a picture.


Sometimes we stop ourselves from asking because we think we know the answer. And yet, we often don't. What we tell ourselves is possible comes from past experiences. The past is supposed to be past, and the future is full of options.

Wednesday, September 28, 2016

Slide incident, the long version


 I tweeted an image yesterday from James Bach's conference talk.



Let me be clear: I don't think it was ok to have this slide in the deck. It was less ok to spend a good ten minutes on it, only to be interrupted by the audience halfway through the explanation with "could we move on to hearing about how the tester role changes?" - the talk topic. If there was a critique of my morning keynote, the place for it was the time reserved after the keynote, where there was a brilliant facilitator ready for that discussion. Or if this was indeed the place for the critique, I was there the whole day, available for a little chat to clarify facts.

The slide isn't quoting me. It's stating how James Bach differs from me. And surely, I prefer to be nice and kind, and to care for the safety of others in addition to my own. Right now I'm less worried about my feelings (I'm ok, just emphasizing this is _unacceptable_) and more worried about the impact attacks like this have on the speaker community so many of us work so hard to build. This behavior makes the community unsafe.

I've been second-guessing my choice to share this, because if only the people at the conference knew this happened, it wouldn't scare others. But it still happened, and being quiet about it isn't really a choice. It's like being abused and not telling anyone. There are already comments out there saying I deserved it (because I speak in public). And that I could avoid it (by not speaking in public).

The majority of voices recognize this for what it is. Not ok. Not acceptable. Not something to be referred to as "academic debate". Academic debate references what the other person is not just saying but has thoughtfully written. Not statements starting with "I", made without the decency of fact-checking the "facts" the statements are supposedly based on.

It's kind of awful that I was really expecting this to happen. I was hoping there was a chance it wouldn't, while instinctively knowing it would. It's just a continuation of a theme. A theme of wanting to tell what his keynote is about only after I define mine. A theme of reminding me that "it must feel bad for me that he told me that I'm not a tester" and that he'll "see if he can retract that after the peer workshop day" on Sunday. He didn't, so he couldn't. A theme of interrupting me five minutes into my talk with "ha, you just said you are not a tester", refusing to listen to the story first before jumping to conclusions about the words I've chosen.

James even says he knows I must feel awful because of his actions. And he chooses them very deliberately, understanding how I feel. That doesn't make it better, but worse.

The worst part about these conference attacks is the feeling of alienation they leave me with. It's as if everyone is afraid to choose sides, and I would just want us all to be on the side of "curious about good testing". I can best describe it as if my child had died and everyone knew about it. No one would really know what to say. Some people would avoid the discussion completely - we did not know each other before, no need to mingle now and risk a difficult topic. Others would feel compelled to come and say something. They say they're sorry. But you feel they're somehow awkward and forced. Others genuinely connect and soon move on to normal topics if I'm up for it, unsure whether I am or not, so a lot of probing is included. Similarly, I feel like hiding. I don't want to be the party ruiner who talks about this. How about doing some testing or powerpoint karaoke instead?

In my talk, I talked about how safety is a prerequisite for learning. I talked about how testing is really about learning. And how we need to feel safe when we test and when we learn about testing. I talked of a deep love of testing and how a lot of the tacit knowledge is hard to transfer without sharing experiences.

I hope this stops happening to people. My heart aches for the person who, on hearing of this, let me know it had happened before and that someone left the industry over it. Same person: James Bach. Same pattern. Worse results.

This conference's organizer was aware of the risk and was there for me when it happened. She did not cause this. But she, like other conference organizers, has the power to stop this from happening again. I respect their choices on that, and realize that the middle ground of speaker facilitation could also provide a working solution.

With these thoughts, let me go back to testing as a non-tester and programming as a non-programmer. Let's just do good stuff, learn a lot, and enjoy the big portion of our lives that work makes up.


The way we teach through stories

We all have our lived lives and the experiences we've acquired. We use those experiences to channel stories that illustrate the points we make in our presentations. My whole talk at a conference yesterday was about lessons I've picked up over the years on the traits that would make me a better tester and could be of value to others. Working to become more kind and considerate, and understanding that this is not about individuals but about collaborative effort, are some of my key learnings that I feel transfer well over time.

There were absolutely wonderful sessions and discussions throughout the day. Richard Bradshaw was articulate and delivered fun, experience-filled stories to illustrate his points about the things he has included in his tester job to take things forward in the projects he's been on recently. Anders Dinsen led people on a thought-provoking journey to think of the really big problems that could take businesses down, and our potential for missing those. Ash Coleman (in her debate with James Bach) gave tangible ideas on how we could find the future testers for a growing industry, and I appreciated the insight the debate gave into understanding that experts can feel threatened by new entries to the field. Bernie Berger shared testing insights illustrated with movie clips, and a test tools panel showed the very versatile range of experience available in the room.

At one point in the day, I had kind of tuned out and was thinking about what kinds of experiences, in particular (but not limited to) those around tools and automation, I would find useful, and I tweeted:
Surely there is historical value in remembering all the things that color our past, but we also often talk to rooms of people who were not there 20 years ago. Who are not working with technologies from 20 years ago. Who are not in organizations stuck where we were as an industry 20 years ago.

Some of my personal experiences and stories can be entertaining lessons of history, but more from a folklore point of view than as helpful advice to current everyday life.

The conference yesterday lived in the present day. It was a New Testing Conference. Surely there was a glimpse of folklore here and there. And there were people from organizations that even today feel like they live 20 years in the future. But the glimpses of folklore made me aware of the amount of folklore I introduce when I talk. And I'd like to be mindful of speaking (and of choosing speakers) about recent experiences, reflected against a relevant base of past experience, to see the recent learnings and approaches deeply.

My developers haven't hated me for being a tester in almost 10 years. Why do I still keep mentioning that, transferring the expectation of bad relations to new people? Do I need the old stories, or could I just draw from the new stories of wonderful developer relations to set them as an example of what I expect to see?

The mechanisms I had for managing releases and release dates are completely different from the agile approaches of continuous releases. How relevant is it to tell of the hard times of the past, when there is a way out of them?


Saturday, September 24, 2016

Conferences against the structural problems

I've been buzzing around all morning organizing all the things that need organizing before I jetset off to New York to do a keynote at the Test Master's Academy conference. That's kind of awesome. But simultaneously, it's not. It's a lot of (unpaid) work. Just like any other conference where I speak. I have higher expectations for this one than for many others, as I've been positioned as the opening keynote and there are some internet friends I look forward to meeting.

If you take a look at the conference lineup, you see plenty of men and women, and even people of color. There are people who believe in exploratory approaches, and people who are keen to see automation rule the world of testing. And it's all amazing, just as it should be.

But like I said, speakers put a lot of work into conferences. There's the time for traveling (which isn't paid time even when the conference is). There's the time for organizing all the things that need organizing for being away. And there's the time for the conference itself. I have a great chance of actually enjoying this conference, as I get to go first. Oftentimes, I'm preoccupied with my own preparations throughout a conference.

So I wonder, many times, if this all is worth the hassle. It's fun and all that, but is it really giving me as much as it is taking from me? 

Then I enter the twitterverse and see this:

I retweeted it with my note: "There's a systemic problem under this, as the kindest and most encouraging of us see this. It's not up on women but the allies to change this". 

Allies is a word I don't really like, but I lack a better one. I mean men who care that the world changes for the better for their daughters, partners and friends in underprivileged groups.

The first step is to recognize that there is such a thing as privilege, and I appreciate that might be hard. I have tons of it, even if I lack some. I'm a woman of a certain age (I love the term and just learned it from Sandi Metz), which gives me courage I did not have when I was younger. I'm very white and from a society that has given me opportunities I wouldn't have had somewhere else. Privilege is a special right, advantage, or immunity granted or available only to a particular person or group. As someone who is underprivileged, it takes more work to get the same things done.

When talking of women, I hate talking of a minority, because where I come from, women are equal in numbers in conference attendance and in workplaces in the field of software testing. Women are not a minority, but they are underprivileged. Even where I come from, a majority of speakers at testing conferences are often men.

It's clear other conferences end up with a different result because they balance things out by not just encouraging submissions but inviting specific topics and people. And that is justified because of the systemic forces working against women.
  • Fewer women are encouraged to speak for their companies in public.
  • To be considered the one at your office who gets to go (or even submit), you often need years of work against corporate cultures in IT that favor men. To get to a speaking position, you might need to keep all your focus on keeping that position instead of donating time for free to conferences that are businesses.
  • Women tend to carry a bigger share of the emotional labor (organizing family life) than their partners and thus have less time available to use.
  • It can be harder to find time to be away from other duties; it can require direct financial investment and carry a mental load from judgement on the choices you've made in leaving family behind.
  • When speaking, you continuously face "chosen for gender" allegations.
  • When speaking, you need to exert extra effort to present yourself in an acceptable tone.
  • You feel you need to be good, not average, to justify your existence.
  • You need to work against the stream, with a lack of role models.
  • You need to accept that your feedback can be harsher and more personal just because of your gender.
  • It's common to ask for (unpaid) diversity work from minority groups rather than offering to pay them for the work they do.
  • It can be harder to find the extra money to even lend the conference for expenses that come out of your own pocket, even if they are expensed later. You know you chose a conference over something for your family, and there's a gendered expectation on how that choice must go.
  • Coming back from a conference, it still seems that women report harsher treatment of the "stupid ideas" they picked up.
When the problems are systemic, it's not up to the underprivileged group to change them. It's up to all of us. Conference organizers have a lot of power in the change if they choose not to be passive victims of the system.

After a discussion with a friend, I managed to sum this up in just a few sentences. 


What the conference organizers can do:
  • Pay for the work. All the work. Including submission work. When you stop thinking of submissions as your right to free labor, you start finding better ways of investing your conference budget than a call for proposals. A good proposal is hard work. My talks tend to take me a week of work before the first proposal.
  • Pay on time, or early. Yes, I know it is hard because you don't have the money yet. Stop treating the speakers of the world as a loan office. The underprivileged are less likely to submit when they know they would have to admit to being challenged by this.
Before numbers like "20% of submissions came from women" are relevant, we need to consider what reasons stop women from submitting, and be open to the possibility that their reasons might differ from the ones men give. And if 20% of 100 proposals is 20 talks for a conference that needs 10 talks in total, that might not be a bad situation at all. There's actual potential for an all-female conference.


Why would you? Because you believe that diversity - learning from people different from you - makes the world a better place. Being representative matters because different backgrounds bring different lessons. I do. Do you?

Friday, September 23, 2016

Fighting the urge for Jira

I've spent a day deepening my personal understanding of end-to-end scenarios and of the reliability of all the test automation we have around here. I have not come to a conclusion about it yet, but I've started to slowly frame my role as an exploratory tester as someone who tests the reliability of the test automation systems. And coming from my interest in clean code and reuse, I also seem to be taking an active role in testing that the solutions for sharing test automation make sense now, and even more so in the future.

As I was testing end to end with an exploratory approach, I was bound to find some issues. I'm in an easy situation now in the sense that I have an old version that "works" to compare against, kind of like back when I was doing localization testing. If the comparison version was broken in the same way, we just mostly did not need to flag the problems.

All the issues I found ended up in a mindmap while testing. There was a color coding: problems with the new, not yet confirmed against the old; problems with the new, confirmed not to be in the old; problems with the old that have vanished from the new; and problems with both.
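The color coding boils down to four buckets. Here is a minimal sketch of how one might represent them outside the mindmap - the class names and the example entry are purely illustrative, my own, not any actual tooling we use:

# Purely illustrative: the four color-coded buckets for findings from
# comparing a new version against an old reference version.
from dataclasses import dataclass
from enum import Enum


class Bucket(Enum):
    NEW_UNCONFIRMED = "problem in new, not yet confirmed against old"
    NEW_ONLY = "problem in new, confirmed not to be in old"
    GONE_IN_NEW = "problem in old, vanished from new"
    BOTH = "problem in both old and new"


@dataclass
class Finding:
    summary: str
    bucket: Bucket


# Hypothetical example entry:
findings = [Finding("End-to-end scenario times out on upgrade", Bucket.NEW_UNCONFIRMED)]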

As the data was collected and I was pretty convinced I knew enough for now, I stopped for a moment. Normally, this would be the moment when I, at the latest, go to Jira and log some bugs. I had to fight the urge to do that.

I fight the urge, because I want to keep trying the fix-and-forget approach. Instead of taking these to Jira and moving on, I want to:
  • Find the test automation that isn't catching these (and pair up to make it catch them)
  • Find the developer contributing to these, to understand their work priorities and when my feedback on these (and other findings) would be most timely, so we're not just randomly jumping around the product but completing a feature or theme at a time
  • If these are known issues, figure out a way to get, and keep, the main branch more release-ready
I believe I don't need to prove my worth by the number of issues in Jira. I find that in a new organization, the fear of someone coming to check my work based on Jira cases lurks in the background. And I fight the urge to go for Jira, which would be the easy route.

Thursday, September 22, 2016

Same same but different

I've been receiving tons of congratulations on my change of jobs. Some mention that my new work sounds awesome, some note that it looks like I went back to doing exactly what I left 8 years ago.

This is how I described my job, on a whim, on LinkedIn:
A hands-on tester with enough seniority to figure out what is right for whatever goal assigned. Working on making the Corporate Security Client awesome through empirical evidence and smart testing approaches. 
With two weeks in the company and some hands-on time with both the production versions and the upcoming versions of the product, I recognize that many (most) things are the same. This led me to ask myself the question: why is it that I see this as a way forward in my career? I clearly do.

On my previous round at F-Secure, I was in a test manager position - or at least, that is how I thought of it. There was a bunch of other testers I was helping to be more awesome at testing, and rarely, when I felt there was time left over from all the meetings, I tested. Mostly localizations and UI, perhaps because I was particularly adept at those tasks compared to many others.

Test Management wasn't completely new to me, but it was new enough to hog a major part of my days. I relied more on information I could get from other people than on information the software would give me.

This has changed. I lost a money-worthy argument with a significant vendor at a job after F-Secure because I had done what my manager told me: "you are too expensive / valuable to test hands-on, just guide the others". I would have had better empirical evidence if I had instead spent two days a week locked away from useless meetings, testing hands-on. I would have known things I ended up speculating about. I could have shown better what works and what doesn't.

I notice that at F-Secure, I'm still fighting the old habits now that I'm back. I don't need to be part of all the discussions. I don't need to talk to every product manager and owner. I need to be selective, and I need to be able to trust people to tell me things - or trust that the software, when tested, tells me the things they forgot to tell me. And I can do that now, with a few more years of experience under my belt.

There's another thing that is clearly very different. Now we have significant amounts of automation. And programmers who unit test! I'm delighted to collaborate with my team's test automation specialist on the endpoints we want to use when testing, on approaches to make our existing automation more versatile and smart, and on things to add from what I learn through exploration.

Where the company is now and where I am now - all of this makes it different. And I believe becoming a better empirical technologist is a better step forward in my career than becoming a manager. Seniority gives me similar things to what managers get from their role. But practicality, and impact through practicality - that's what I strive for.

Tuesday, September 20, 2016

The Reasonable Expectations exercise

So, you land in an unknown project. You collect a bit of info, organize the claims by the sources you've heard them from, pay attention to differences between sources, and decide who you're going to primarily believe, for now. And then you dig into testing.

A lot of what you're experiencing does not match any of the claims. And there are a bunch of claims you're making now that no one else made.

You realize there might be a usual problem going on: "no man's land". With enough small teams around, there are things you see end to end that none of the people focused on their own bits have seen. So what do you do?

I clearly remember a time when I would write clear and well-investigated bug reports about these, and then the ping-pong game would start. It often continued until either a developer with a system perspective showed up, or I played on personal relations to get someone who didn't want to look at it to look at it and help pinpoint it further. I feel exhausted just thinking about this.

Today I ran into something of this sort. And instead of writing a bug report, I made a list of reasonable expectations I was planning to talk about. Reasonable expectations are my claims about what I find reasonable to expect, and I let the developers tell me I'm wrong - or find out that I'm right, and then we discuss the bugs I did not want to log, against the end-user-perspective claims that turned out, based on evidence, not to be true after all. For now, until we've changed our software or our perceptions.

I realized this isn't something new for me. I've had great success with it before, with a difficult project manager in the past. The mechanism is just something I'd never labeled before. I play a lot with the dynamics of communication when I deliver messages as a tester. The dynamic of making me the one who can be proven wrong about the reasonable claims turns around the feeling, compared to how we'd talk about the exact same thing through a bug report.