Friday, August 26, 2016

How would I describe my new job?

Counting down to the last week of my time at Granlund, a job I've wholeheartedly enjoyed, I'm starting to drop mentions of life after Granlund.

I have a new job. I've signed the contract. I've talked about the job, but I have not done the job. Inspired by a tweet, I wanted to write down what I think it will be about.

My new job is with F-Secure. I'll work on the Corporate products, and in particular, the corporate security client product. I used to work for F-Secure 7 years ago, so I'm returning to a place I loved, one that has changed just as much as I have while I've been gone.

Last time I was there, I worked on the consumer client. Corporate is different. If there's one perception I have of how it is different, it is that control over the environment isn't with the product company. Moving from a world where I could install new versions for customers at will, on a daily basis, this environment is bound to be more constrained.

My role is that of a Lead Quality Engineer. I have no idea what it really means or if people have expectations of it. For me it means I get to be a hands-on tester, who is senior enough to do test leadership by focusing on empirical evidence. I will never be "just a tester", but I will never be "just a manager" either. I'm both. And I will be a programmer whenever I feel like it, even if that identity still gets overshadowed by others I hold dearer.

There are a few things I look forward to in particular:
  • Solving the continuous feedback puzzle for the corporate side
  • Pair-testing with a dedicated automation specialist
  • Figuring out team work when there's an invisible wall of technology selection (C++ / Python) 
  • Untangling interrelations through an empirical focus in a larger, cross-team organization setting
  • Delivering the first version on a seemingly unrealistic schedule by focusing on small incremental pieces of value - do less, smarter
  • Working with other testers who want to be awesome (this was what I loved about F-Secure 7 years ago - co-creating innovations on how we work) 
  • Organizing a bunch of local meetups with a company that has a great location and openness to invite others to learn with us
  • Having conference (keynote) speaking as part of my work instead of my hobby on the side
I can't wait to start on these. Sad to go, and excited to start on something new. 

Wednesday, August 24, 2016

Visual Talk Feedback

Last spring, a Finnish colleague was preparing for his short presentation at an international conference. I've had a long-standing habit of practicing my talks with the local community, and he decided to do the same. He invited a group together to hear him deliver the talk.

It was an interesting talk, and yet we had a lot to say on the details of it. Before we got to share what was on our minds, one of the people in the audience suggested a small individual exercise. We jotted down the main points of the story line chronologically on a whiteboard together. Then each of us took a moment to think about how engaged we felt at different points of the presentation. Finally, we all took turns drawing our feeling timeline on the whiteboard.


What surprised us all in the visualization was how differently we saw which moments and key takeaways were engaging for us. A diverse group appreciated different points!

Knowing that some of us liked almost all parts of the presentation provided a great frame for talking through the improvement details each of us had. It also generated improvement ideas that would not have been available without the image we could refer to.

Today, I was listening to a talk in a coaching session over Skype. I was jotting down the story line and thinking about how I could visualize the same thing online. Next time, I'll get Paper on the iPad out early on and share that with whoever I'm mentoring. It will provide us with an anchor to see how the talk improves. And it gives a mechanism for inviting your other practice audiences to engage in giving feedback.


It was just a typo

As a tester, I'm a big believer in fixing UI string typos as I see them. You know, just going into the code, and spending a little time fixing the problem instead of spending the same (or more) on reporting it. These are just typos. These are changes that I find easy to understand. And yet, I test after I make those changes.

In the last week, I was testing a cleanup routine. Over the years, our product database has piled up quite a lot of cruft, not least because of an earlier (imperfect) implementation that created duplicate rows for everything that was touched in a common user flow. I asked the developer who created the cleanup routine what it was about and for his advice on testing it, only to learn that it was "straightforward" and that he had spent time on testing it.

As we ran the cleanup routine on the full data, it became obvious that "testing it" was not what I would mean by testing it. The cleanup for a copy of production data took 6 hours - something that no one could estimate before the run started. Meanwhile, the database shouldn't be touched or things will get messy.

So we talked about how we cannot update production like we do the test environments - hoping no one will touch it. We need to actually turn things off and show a message for the poor users.

The six hours and a third of the database size vanishing hint to me that this is anything but straightforward, because our data is far from straightforward. With the very first tests, I discovered that data was lost that shouldn't have been, resulting in what we would call a severe problem: the product's main feature, reports, not working at all. To find the problem, all I needed to do was try out the features that rely on the data, starting from the most important: this one.

Fast-forward a few days, and there's a fix for the problem. And the developer tells me it was just a typo. Somewhere in the queries with IDs, one of the groupings was missing a prefix. We talk about the importance of testing and I share what I will do next, only to learn he had not thought of it. I joke that at least he did not tell me it was just a 10-minute fix after a significant amount of my time went into making him even aware that a fix was needed.
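
To make that class of mistake concrete, here is a hypothetical sketch - the ids, values and toy "cleanup" below are made up for illustration and are not the product's actual queries or data model - of how a grouping key that loses its prefix quietly merges rows from different objects and throws real data away:

```python
# Hypothetical illustration only: a grouping key missing its prefix merges
# rows from different objects, so the "deduplication" deletes real data.
from itertools import groupby

rows = [
    {"id": "report:1", "value": "quarterly totals"},
    {"id": "plan:1", "value": "floor plan"},  # different object, same numeric part
]

def cleanup(rows, key):
    """Keep only the first row of each group -- a toy 'deduplication'."""
    ordered = sorted(rows, key=key)
    return [next(group) for _, group in groupby(ordered, key=key)]

buggy = cleanup(rows, key=lambda r: r["id"].split(":")[1])  # "typo": prefix dropped
fixed = cleanup(rows, key=lambda r: r["id"])                # full id kept

print(len(buggy), len(fixed))  # 1 vs 2 -- the one-character mistake silently loses a row
```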

The phrase it was just a typo is interesting. We're an industry that has blown up space rockets for just a typo. We have lost significant amounts of money for various organizations for just a typo. Just a typo might be one of the most common sources of severe problems.

For any developers out there - I love the moment when you learn that it's not about the extent of the fix but the extent of the problem the lack of a fix causes. I respect the fact that there are fixes that are hard and complex. Just a typo is a way of expressing that this isn't one of those. 

Tuesday, August 23, 2016

Just read the check-in!

Today was one of the days when something again emerged. At first, there was a hunch of a bug messing up the sort order, and all of a sudden there was a fix for it.

The fix came with very little explanation. So my tester detective hunch drove me to the routines that I do. I went to see the check-in in version control and routinely opened up the three changed files without any intention of actually reading them.

The first thing I realized is that the files that were changing had names that matched my idea of what I would be thinking of testing. More often than I care to remember, this has not been the case.

The first nagging feeling came from realizing there were three files. A small fix, and three files changing. So I looked at the diffs to see that the changes were more extensive than "I fixed a bug" warranted.
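
The routine itself is nothing fancy. A minimal sketch of it, assuming git as the version control system (the actual tool isn't named here):

```python
# A minimal sketch, assuming git and that this runs inside a working copy.
import subprocess

def peek_at_checkin(commit: str = "HEAD") -> None:
    # Which files did the check-in touch, and how much?
    subprocess.run(["git", "show", "--stat", commit], check=True)
    # The full diff: does the change match the "I fixed a bug" description?
    subprocess.run(["git", "show", commit], check=True)

peek_at_checkin()  # inspect the latest check-in
```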

I walked up to the developer and asked about the changes - "So you needed to rewrite the sorting?" - to learn that it had been long overdue.

With a little routine investigative work, I had two things I wouldn't have otherwise:
  1. An actual content discussion with the developer who thought that the change he was making was obvious
  2. A wider set of testing ideas I would spend time on to understand whether the newly re-implemented feature would serve us as well as the bad old one had.
There's so much more to having access to your version control as a tester than reviewing code or checking in your code/changes. Looking at check-ins improves communications and keeps absent-minded developers honest. 

Circular discussion pattern with ApprovalTests

At Agile 2016 on Monday evening, some people from the Testing track got together for a dinner. Discussions led to ApprovalTests with Llewellyn Falco, and an hour later people were starting to get a grasp of what it is - even though I'd think Golden Master could be quite a common concept.

Just a few weeks earlier, I was showing ApprovalTests to a local friend and he felt very confused by the whole concept.

Confusion happens a lot. For me it was helpful to understand, over a longer period of time, that:
  • The "right" level of comparison is a choice between Asserts (hand-crafted checks) and Approvals (pushing results to a file and recognizing / reviewing them for correctness before approving them as checks) - see the sketch after this list. 
  • You can make a golden master of just about anything you can represent in a file, not just text. 
  • The custom asserts are packaged clean-up extensions for particular types of objects that make verifying that type of object even more straightforward. 
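To make the Asserts-vs-Approvals distinction concrete, here is a minimal hand-rolled sketch of the golden master idea - deliberately not the real ApprovalTests API, just the core loop of writing the received output to a file and failing until a human has reviewed and approved it:

```python
# A hand-rolled sketch of the Approvals idea, not the ApprovalTests library API.
from pathlib import Path

def verify(name: str, received: str) -> None:
    approved_file = Path(f"{name}.approved.txt")
    received_file = Path(f"{name}.received.txt")
    received_file.write_text(received)
    approved = approved_file.read_text() if approved_file.exists() else None
    if received != approved:
        raise AssertionError(
            f"{received_file} differs from {approved_file}: "
            "review the received output and copy it over the approved file to approve it."
        )
    received_file.unlink()  # matches the approved golden master: clean up

# Assert style: one hand-crafted expectation at a time, e.g.
#     assert report_total(rows) == "total: 42"
# Approval style: capture the whole output once, review it, approve it, e.g.
#     verify("report", render_report(rows))
# (report_total and render_report are made-up names for illustration.)
```

The real library layers reporters - for example, launching a diff tool when the received and approved files differ - and the packaged custom asserts on top of this basic loop.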
Last week, I watched my European Testing Conference co-organizers Aki Salmi and Llewellyn Falco work on the conference website. There was content I wanted to add that the platform did not support without a significant restructuring effort. The site is nothing fancy, just Jekyll + markup files built into HTML. It has just a few pages.

As they paired, the first thing they added was ApprovalTests for the current pages, to keep them under control while restructuring. For the next couple of hours, I just listened in as they stumbled on various kinds of unexpected problems that the tests caught, moving fast to fix things and adjust whatever they were changing. I felt I was listening to the magic of "proper unit tests" that I so rarely get to see as part of my work.
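
I didn't see their actual tests, but the shape of the idea is roughly this - a sketch reusing the hand-rolled verify() from above, where _site is just Jekyll's default output directory rather than anything I know about this particular repository:

```python
# A sketch of locking the existing pages down before restructuring.
# Assumes the hand-rolled verify() above and Jekyll's default _site output folder.
from pathlib import Path

def test_built_pages_unchanged():
    for page in sorted(Path("_site").glob("**/*.html")):
        name = page.as_posix().replace("/", "__")  # one approval file per page
        verify(name, page.read_text())
```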

Aki tweeted after the session: 
If you go and see the tweet I quoted, an exemplary confusion unfolds in response to it.
  1. Someone states ApprovalTests are somehow special / a good idea.
  2. Someone else asks why they are different from normal tests.
  3. An example is given of how they are different.
  4. The example is dismissed as something you wouldn't want to test anyway.
I don't mean to pick on the person in this particular discussion, as what he says is something that happens again and again. It seems that it takes time for the conceptual differences of ApprovalTests in unit testing to sink in before the potential becomes visible.

I look at these discussions more for the positives of what happens to the programming work when these tests are around, and I see it again and again. In the hands of Llewellyn Falco and anyone who pairs with him, ApprovalTests are magical. Finding a way of expressing that magic is a wonderful puzzle that often directs my thinking around testing and ApprovalTests. 

Thursday, August 18, 2016

Defining our SLA retroactively

I've been fluctuating between being focused and energetic about getting all the stuff in as good an order as I possibly can before changing jobs (my last day at Granlund is 2.9) and being sad and panicky about actually having to leave my team.

Today was one of those sad and panicky days, as I learned that the last three things coming out of our pipeline did not really work quite as expected but feedback was needed.

We changed a little editing feature with new rules, resulting in an inability to do any editing for that type of object after the change - the data does not yet adhere to the new rules, and it was not "part of the assignment" to care for the data. And yet, we never intentionally release things that would break production.

We cleaned up some data with straightforward rules that shouldn't impact anything. Except they completely broke our reporting feature, which is based on the unclean data.

We nearly finished the main feature area we've been working on for months (too big!!), except that I know that after today's "just five more fixes" there are bound to be 3, 2 and 1 more to go.

I love the fact that my team's developers have a great track record of fixes not making things worse. That they take the time and thought to work out what the feedback means instead of patching around it. And that they go the extra mile beyond what I had realized, if only they can make the connection. They care.

All of these experiences led me to a discussion with our product owner about the time after I have left. I was suggesting he might want to pay more attention to what comes out of the pipeline after I am gone. His idea was different, and interesting.

He said that the team's current ability to deliver working software without his intervention, just pulling him in as needed, is how he sees the R&D SLA (service level agreement). He expects the team to re-fill the positions to continue delivering to the SLA.

Remembering the same person's shock four years back - "The software can work when it comes to me?!?!? I thought that was impossible!" - we've come a long way.

I'm proud of my contribution, but even more, I'm proud of my team for accepting and welcoming my help in making them awesome. It's great to see that we've created an SLA retroactively to define that good is what we have now. And yet, it can still get better.

The team is looking for a tester to replace me. It's a great job, one I wouldn't have left behind unless there was an even greater one that I couldn't have without the experiences this job gave me. You can find me at maaret@iki.fi. 


Friday, August 12, 2016

The programming non-programmer

Over the years, I've time and time again referred to myself as a non-programmer. For me, it means that I've always rather spent my time on something other than writing code. And there's a lot to do in software projects other than writing code.

This post is inspired by a comment I saw today "I found Ruby easy to learn, I'm not a programmer". It reminded me that there's a whole tribe of people who identify as programming non-programmers. Some of us like code and coding a lot. But there's something other than programming that defines our identity.

Many of my tribe are testers. And amongst these testers, there are many who are more technical than others give them credit for, yet identify as non-programmers.

I've spent over a year trying to learn to say that I'm a tester and a programmer. It's still hard. It's hard even though, over the years starting from my university studies, I've written code in 13 languages - not counting HTML/CSS.

Why would anyone who can program identify as a non-programmer?

People have wondered. I've wondered. I don't know the reasons of all the others. For some, it might be a way of saying that I'm not writing production code. Or that I write code using Stack Overflow (don't we all...). Or that I'm not as fluent as I imagine others to be.

For me, being a non-programmer is about safety.

I'll share a few memories.

Back to School

At university, on one of the programming courses, I was working like crazy to get the stuff done by the deadlines. Learning all that was needed to complete the assignments, not having a demo programming background from the age of 12, meant a lot of catching up. The school never really taught anything. They passed you a challenge and either you got it done, or you did some more research to learn enough to get it done. We did not have much of a Stack Overflow back then.

There was no pair programming. It was very much solo work. And the environment emphasized the solo work, reminding me regularly that one of my male classmates must have done my assignments. That girls get coding assignments done by smiling. I don't think I really cared, but looking back, it was enough to make me not ask for help. I can do things myself - an attitude I struggle to let go of decades later.

Coming to Testing

I soon learned software development had things other than programming. And awareness of programming would not hurt anyway. I fell in love with testing and the super-power of empirical evidence.

There was a point in time when everyone hated testers - or so it felt. Not respected, cornered into the stupid work of running manual test cases, reporting issues of no relevance at the end of the waterfall, where nothing could be done based on the feedback provided. A lot of attitudes. Attitudes the testers' community still reports, even if my day-to-day has luckily been free of those for quite some time.

My gender was never an issue as a tester. My role was an issue that overshadowed everything else.

Programming tasks

When I was a non-programmer, it wasn't really about me when I got to hear that a friend had never seen a woman who is any good as a programmer. I cared about being good at what I do, but as a non-programmer, that wasn't about me. I got to hear that two colleagues talked about women never writing anything but the comments in code. Well, I didn't write even the comments in any code they had seen - again, not about me. And if there was code I for any reason wrote, the help in reviewing it and the extensive feedback to help me learn were overwhelming. It felt like everyone volunteered to help me out, to the point of making me want to escape.

Every time I write code, I can't forget that I'm a woman. Every time I go to coding events, I can't forget that I'm a woman. Even when people are nice and say nothing about it.

As a tester, I'm just me. And you know, no one is *just* anything. If I was a programmer, I would have left this industry a long time ago. Saying I am a programmer makes me still uneasy - after 13 languages. I get most of the good stuff (geeky discussions) but much less of the bad when I'm a tester - tester extraordinaire!

Being a programming non-programmer is safe. Being a non-programmer is safe.