Monday, June 29, 2015

Friends with pick-up trucks

When you have friends with access to information, skills or tools you don't have, the smart thing is to make use of them, as long as they can make use of you too. The friend with a pick-up truck is a metaphor James Bach has introduced many times when talking about non-programming testers getting stuff done that requires programming: befriend a programmer.

In November 2014, I blogged about a friend who changed my world at work in an hour, helping us get rid of estimates. Today I'm blogging about a friend who seems to be doing the same, on a scale I couldn't have hoped for, in a day or two.

About six months ago, I was starting to feel desperate at work. We had been putting effort (as in developer time) into unit tests, I had argued for giving the developers all the room they needed to shape up their skills through learning, and yet we were at a point where everyone wanted to delete all the unit tests instead of adding any more of them. From all the discussions I've had with developers who unit test a lot, I could guess that the problem was with skills, and with even finding the right direction on what (and how) to practice while working. Since I was told our software "cannot be unit tested", the first step I took was to see for myself whether that was true. With help and guidance from the friend, I managed to create a few tests, learn about the Approval Tests framework (realising how powerful a concept the golden master is compared to hand-crafting asserts was a major thing for me) and get just a little more convinced that the problem was with skills.
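To make concrete what struck me about the golden master, here is a minimal sketch in C# with NUnit and the ApprovalTests.Net library. The ReportGenerator class and its methods are hypothetical stand-ins for illustration, not our actual code:

    using ApprovalTests;
    using ApprovalTests.Reporters;
    using NUnit.Framework;

    [TestFixture]
    public class ReportTests
    {
        // Hand-crafted asserts: every detail we care about needs its own
        // assert, and every change to the output means rewriting them by hand.
        [Test]
        public void Report_AssertStyle()
        {
            var report = ReportGenerator.Generate(); // hypothetical code under test
            Assert.AreEqual("Luminaire schedule", report.Title);
            Assert.AreEqual(34, report.RowCount);
        }

        // Golden master with ApprovalTests: the whole output is compared
        // against an approved file; on a change we inspect the diff and,
        // if the change is intended, simply re-approve.
        [Test]
        [UseReporter(typeof(DiffReporter))]
        public void Report_GoldenMaster()
        {
            Approvals.Verify(ReportGenerator.Generate().ToText());
        }
    }

The first style spells out a few hand-picked details; the second captures everything the output contains, which is what made the concept feel so powerful to me.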

I started working on a major area, printouts (they are the main result our product produces, and keep causing me grey hair as I come up with ways they will break), and got stuck right away. Printouts are an area where I have a 34-point high-level checklist of things I need to remember to consider whenever anything changes. I did not have the code knowledge of how our software was built, nor the energy to dig into it by myself (I prefer taking help in these things...), so I quickly drew in one of the developers to show us what to call. The three of us got forward a little, but got stuck on an Excel problem: we could not generate two Excel files with the same contents that would pass a diff; there was always a difference. With other work to focus on, we left the problem to rest for months.

This morning we finally revisited the problem. The friend agreed to show up at my office and work with us for a day on getting us started with unit testing. I got together a small group: myself (the goal of this is still to teach _me_ unit testing, at least that is how I frame it), a developer familiar with the implementation and a summer intern working on unit testing.

In retrospect, the day ended up splitting into three very different, relevant parts.
  1. Solving the Excel output consistency problem
  2. Adding unit tests around Excel (and classes in the chain that don't need Excel)
  3. Discussing our observations
The observations really sum up the different parts. 

Discussing our observations

At the end of the day, my friend invited us to write down observations on post-it notes. The exercise reminded me again of how powerful it is to hear what others feel they learned, something we do way too little. There were items I wanted to share that I felt the others should notice. But more than that, there were things others were noticing that I would have completely missed.

 (sample of our notes)

From the discussion, I got the idea that just this day alone was probably the best thing I could have done to make our unit testing efforts go forward. The main developer said he found ideas of things he could actually test, something he did not have prior to this day despite having spent a significant amount of time browsing through the code trying to find those ideas. I saw us working in a more powerful way (as per results, we got more done in a day than we usually get in a week), all contributing in relevant ways, even though two of the four of us were basically beginner-students. And I saw the intern not learning when shown, but learning when doing, in a pairing situation.

I took a picture of our observation post-its and have summarised our observations from them below. Each observation, as it was written down, is in bold.
  • What's really blocking us? Missing examples of "harder" stuff. All we did today was coding as usual, solving mostly the "Do this" part of the tests, not the "Check this" part: find the method to call, find the arguments to pass it, find ways to populate the data we'd want to see. One coding problem at a time, we churned through things that were "impossible". It all seemed very possible, and actually very much flowing, now that the friend was around. Sometimes it was the pairing (we don't normally do that...) that solved problems. Sometimes it was them kindly pushing us into not giving up and giving hints on the potentially right direction.
  • More clear what should be tested. Test outside of excel. HTML odd. Untested pieces used by print. There are a lot of pieces the Excel code calls that we could test outside it, which we had not realised. Setting things up so that we can test the basics through Excel helped identify other stuff we could test, including weird HTML. While the starting point for these tests is the end product (the printout), there's little reason to test things at scale if the tests can be more focused.
  • Pairing helps with not giving up. More mobbing. We really saw the power of group work. Part of it was the ability to be in a group with an outsider. It's sort of funny to realise that he could be "a developer on his first day at a job", transforming pretty much anything we're working on. Having people to turn to helped us go for the solution. I was particularly impressed with myself for making us stop on a hard solution to try the next option, which turned out to be a lot easier. I made a good step forward in my efforts to convince the team that we'll mob amongst ourselves after the vacations.
  • Contribute as driver. Contribute by talking. At first, the experienced developer was at the keyboard and the rest of us were navigating. The intern was silent and watched. Later, the intern was at the keyboard, stumbling on things he had seen us do. In theory he could know how it was done from having seen it, but in practice he needed to do it himself, as the driver, for the thing to really stick. Sitting passively at the table isn't a way to contribute or learn. Both navigating (talking) and driving (learning to do things at the keyboard) are valuable. And a non-developer like me apparently can contribute, at least today, by speaking up on ideas.
  • Harder to explain than to show. The navigator role was a good experience for the experienced developer too. He realised how hard it is to talk about things you do routinely. It's a skill to practice for pairing; it did not come naturally and needed a few reminders about using words. Practice speaking about things. It's also harder to explain than to show how to do unit tests with Approvals.
  • Learned hotkeys. Can move debug cursor. We again learned things about how to use the tools more efficiently. And on tools, it was fun to notice that even the friend picked up something small they weren't aware of before. Everyone learns.
  • Verify Excel. Excel zips modified (ATM) dates. Zip contains file timestamps. Zip store metadata. C# zip process. Getting the Excel inconsistency away. We did significant work on finding a way to verify Excel files. We learned that an Excel file can be unzipped, and that the zipping process that creates Excel files includes metadata with dates that change. We learned to set the dates so that the files become comparable (a minimal sketch of what that amounts to follows after this list). And all this will, according to them, end up as Excel Approvals, extending the current set of things available in Approval Tests. I find the idea of contributing a feature to a relevant open source project very rewarding.
  • Graphics mess up. The dates weren't the only source of inconsistency in the comparisons we ran into. Images also messed things up, and we still need to look into how to test that, as it seems that OpenXML has issues with changing images in ways we'd not want.
  • Reading database should be outside printing. There should be a step of reading the database to get the data and a step of printing the data; those two should not be combined. So we learned about structuring our code to make it easier to test. When testing the code, deficiencies of the architecture become easier to see.
  • Found and fixed a bug. While doing the tests, we found and fixed a bug. This in particular is something I really like: that we could fix it while testing, instead of logging it to be fixed separately. The group brought enough courage to extend. The courage should always be there, but I know from practice that for us it isn't.
  • Saves time on maintenance (approving). Doing this, especially as another developer on the side was apparently about to go and solve the same printout testing problem with asserts and xpaths for elements in Excel, made me realise that how you do your tests will have a significant impact on maintaining the unit tests later on. With approvals, when we change things, we visually verify the changed file and re-approve it. With asserts, we handcraft new asserts that match the changed situation. The other developer visited us for half an hour and gave up while we were in the process of finding ways of calling things. His view was that his time should be used on his assigned task; he tends to think tasks are assigned, instead of emerging as they really are. I'm sure he'll come back to this, and it would have been great if he had experienced this afternoon instead of working it out with the developer who was there for the whole day. Just today we had a weekly meeting where some of us again suggested we'd throw away tests because changing the asserts is so much work. Approvals would save us effort, to an extent I had not realised before.
  • I want to see more code. I added an observation saying I want to do more of this. I want to be a part of the team, not just a service from the side. I really want to get to a point of us testing in this format, and I'm still convinced mobbing after vacations is the right thing for us to do. 
  • Hard for users to print. We also learned that there are things that "cannot be done" that probably can be done, and that make things harder for our end users. Accepting these as "cannot be done" seems wrong when there's evidence that it's just a question of finding the right idea to make them doable.
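As mentioned in the Verify Excel observation above, here is a minimal sketch of what the date fix amounts to, assuming the standard System.IO.Compression API; the helper class and file name are made up for illustration. An .xlsx file is a zip archive, and pinning every entry's timestamp removes the noise that made our diffs fail:

    using System;
    using System.IO.Compression;

    public static class ExcelNormalizer
    {
        // Each entry in the .xlsx zip archive carries a last-write timestamp
        // that differs between two generation runs even when the contents
        // are identical. Pinning the timestamps makes the files comparable.
        // Note: zip timestamps cannot be earlier than 1980-01-01.
        public static void NormalizeTimestamps(string xlsxPath)
        {
            using (var archive = ZipFile.Open(xlsxPath, ZipArchiveMode.Update))
            {
                foreach (ZipArchiveEntry entry in archive.Entries)
                {
                    entry.LastWriteTime = new DateTimeOffset(1980, 1, 1, 0, 0, 0, TimeSpan.Zero);
                }
            }
        }
    }

With both generated files normalised this way, a diff (or a file-based approval) has a chance of passing when the contents really are the same.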
I still have the little activity of "having a chat" with the manager who controls the money on this. Finding (and normally paying for) the right help is so important. The value of just this day alone is significant. Having someone to turn to with the next roadblocks we imagine we have would be valuable. And like I said to the group today: I can call in a day of a friend helping for free, but I know, and feel strongly, that the friend should be paid. Finding the right person is the key. We did a workshop on unit tests earlier that drove us deeper into despair, whereas this day was a light at the end of the tunnel.

I find the thought experiment that results from this quite intriguing. Imagine we had hired the friend; this would have been his first day with the company. With the skill of inviting in people who understand the domain (I did, as a tester) and people who understand the code and can get to the right places quicker (the experienced developer did), he could get us together to do things we couldn't have done without him, things we were supposed to be able to do ourselves. He saved us a lot of time and effort. There's a quite impressive skill set in working with code you have only shallow knowledge of. And having seen a bunch of developers over the years, I can also see that these skills are not common amongst the developers out there. I guess that sets one example to strive for in developer skills.

Some friends come with pick-up trucks that move all of the stuff in one go. With others you take a few batches. And carrying the items one by one just isn't right, even if it gets the job done. I need to get better at using the assigned friends (developers) at work. Calling in personal favours works for a day. For real heavy lifting, pay for the work.




Friday, June 26, 2015

Blog while testing, an experiment of a way to test

For a while, I've been struggling. There's a new feature I should test, and even before I start, I feel bored. It's relevant but somewhat small (calendar months in development, though). It's not obviously easy, but the moves I would need to make feel too similar to what I always do. And whenever I'm bored, I need to do something differently. So I'm experimenting with blogging while testing.

In the product I'm testing, there's an area we refer to as the luminaire schedule. Simply stated, it is a view into luminaires (lamps, in layman's terms) and their attributes, organized into different buildings: an inventory of to-be-physical devices and information about them. The area has been around a while. The new feature I'm just now getting started with is about sharing the same luminaires across different buildings, because design-wise some buildings will just end up being identical, and maintaining identical lists by copy-paste would make later maintenance hard.

Just as I write this, I realize that the purpose the feature serves for its end users is maintenance of the information. They could easily copy-paste once, but a change of mind during design (they learn too) needs to happen through this feature. During the existence of the buildings, though, the luminaires may go their own directions, like someone changing a living room lamp in one building to a different one while letting the neighbors make their own choices of what and when. So the ability to turn the linking on and off for different design-construction phases will be relevant for the designer. I add these things to my mindmap and start thinking about reading the specification for this, deciding to leave it for later. I want to first think about and explain what the feature is.
With my mindmap skeleton set up, I move to take a screenshot of how the feature appears, just to explain it for this blog. Instead of the screenshot I intended to take, I take another one that I pass to my developers: the environment is broken. And I take another screenshot of the same with less info, for the purposes of blogging.
Everyone seems to be out for lunch, and I think this is my cue to decide between two things I could do now. I could go to lunch and hope to continue after. Or I could think more about the feature without actually using it. I dislike the latter option, mostly because I have a strong preference for building my ideas of a feature while using it, and I almost always postpone reading about a feature until I have already used it without instructions.

Writing about it makes me realize there's a pattern I could just as well break for today. I go dig out the specification and realize it is one page of text amongst 31 pages describing other stuff related to this area. This makes me realize I'm highly likely to find a lot of problems in how this feature works together with the other ones. And I make a note to myself that I will also need to go through the 30 pages for ideas of connections, but that I will leave for later.

The spec has one picture of the new dialog this feature needs, and a bit of explanation. For purposes of blogging, I finally take a picture of what the user interface looks like. Simple: I just choose which buildings use the same data. And there are no exceptions; it's all or none in the linking. But I see a new concept: the first feature to connect to other properties and their buildings. That should be an interesting source of inconsistencies, as it breaks our current assumptions of how things get grouped.
My train of thought with the spec gets interrupted, as it turns out I have found (with some other tests) a way of breaking our data so that login fails. I make a note of that on my to-do list; I need to isolate what I did in the last two days that caused it. I have a pretty good idea, but don't want to run for it right now. My notes allow me to do that next week too, and I want to finish this today. So I start from clean data, visiting our SQL database with a query to set things up as I want them.

With the database set, I revisit my mindmap, updating my current ideas there.

I get interrupted with a face-to-face question about what I'm doing. I explain I'm experimenting with a new way of testing I call "Blog while Testing". As usual, I get a "Why?". I explain I feel bored, and as the asker is a developer, I get the usual answer: perhaps I should automate, if I keep doing the same things. I explain that the "same things" are the same activities on a high level, where I feel alone, while the contents of the testing are very much different from all other features. I give him the example of what I just learned about linking properties, and that with an automation-focused approach, I would not have anything ready for that particular perspective. He accepts my notion that automation is not the solution here, and I thank him for voicing the idea so that we could clarify it.

By this time, I realize that writing things down slows me down, but I feel more engaged and want to run through the experiment. But I need to break for lunch. Despite my intention to get up and go, I come back to make notes of the bug I need to isolate, as a theory of what it is, based on what I have done, is building in my head. I still update my mindmap.
As I'm ready to continue, I remember more clearly than usual that I decided to look at the spec first. It doesn't give me much. There's a current and target state description. I learn that the dialog changes to a bit more complicated one. Looking at the new dialog, I'm immediately convinced that with the ability to mix properties, I can get this badly messed up. I resist the urge to go try out that theory. I just make a note: "what if I link a third to a pair". As usual, writing it down helps me keep focused on where I was going.

I add some ideas about the spec's claims to my mindmap, and then try logging in again. Turns out it still fails with fresh data. I would want to try again with a restarted browser, but since I don't want to close this browser right now as it has other windows open, I mark down different browsers and switch the application to Chrome. I guess this is as good a reason as any for switching browsers; I will anyway move between GC, FF and IE10/11 before I'm done. I also add a bit about data (what data a luminaire consists of), as the spec mentions data being copied.
I know from past testing experiences with the product that there are clearly three different types of pieces. And while I write this, I further split one of the types into more subtypes. I know I could get this stuff by reading other specifications, but having worked with the product, a lot of the relevant stuff is already available to recall, if anything just triggers my memory. I remember a few types and feel uncertain whether I forgot something, so I make a note to go check that later. I also remember there's a nicely hidden feature of three intertwined attributes that needs special attention.

I've looked at the spec for just this feature, and my need to get on the application again is getting urgent. I feel a little hindsight regret about my choice of working on the spec first: it's getting late, and by deciding not to use the application first after the claim that it is now OK, I may have missed the developers' availability. This might soon block my testing for real.

I log in with fresh data and a fresh browser, and no problems yet. Time to find that new dialog. I have the starting data, with two buildings and no luminaires. On a shallow UI level, I seem to be able to select one and link it to the other. So far so good. I decide to start simple, with the case where the designer links the buildings within the same property without yet having any luminaires. I color my note saying this is where I am now, just to remind me of what I was planning to do, and add a couple of other options I could start from. A nagging feeling is telling me that there's more than just these three, but I can worry about those later. I need to see if this works at all, now.
Linking could have worked, as in: no error messages. But what happens now if I add a new luminaire? It was supposed to be the same for both buildings. How does that show in the application? I realize I don't know, but I'm about to find out. Just as I'm about to do this, thinking of taking a screenshot, I realize the UI speaks Finnish. It doesn't have to, so I just change it. And I add languages to my mindmap of things to cover. I open the dialog again, only to notice it's partially Finnish. I decide to log that in Jira. I write a title stating the problem, "Unlocalized text in the new building selection dialog", and a quick note as description, "check the picture". I edited the screenshot as I was taking it, to mark the places I was making a note of.

As I was about to take another screenshot for this blog post, I realized there's Finnish text that comes from the fact that the data I use is Finnish. I consider starting a new property with different data, but the blogging cannot distract me that much. And with that thought, I realize there must be a problem when we mix properties and link two properties that were started with different language data, and there is nothing to block that. I add the ideas to my list of things that could go wrong with the new kind of concept. I also realize this concept stays on top of my mind as I process my ideas, and I instinctively compare anything against it to find new ideas.

I take the pictures. Here's what the dialog looks like: similar enough to my spec. I can't resist clicking on the link I decided to leave for later, just to see what opens there. And after a 30-second wait, I first see the frame of it and finally also the data.


The second dialog, and the wait for it at a time when my patience is at its shortest, leads me to log another issue requesting a new feature: this needs to have paging. We can't load all buildings at once. And I need to be able to search the list. Why? Because these features reflect principles we have elsewhere in the application.

I decide to spend time logging those requests now, as two separate things. But I end up writing them both on the same issue. And I decide that while I'm not going to work on this session that much longer today, I will want to see if even the basic thing works as the spec claims.

I add my first luminaire: position 1, just one text attribute with the content being the name of the attribute, and I click OK. I notice I'm expecting a visible error message any moment, as I feel almost surprised when there is none. But I realize I still don't know if this worked. So I have this for one building; what is there for the other building? Changing buildings, I find out: nothing. Hmm, I could expect this; it seems plausible. But now what if I add with the same position? I add the same position but another attribute, same naming principle for my data. And this seems to work too. I'm puzzled. I go check if the buildings were linked; they were. So something is off.

I change to the other building, only to realize the data there is as I just changed the second building to be. So my expectation could be off; I was expecting a popup of some sort that would warn me. I go back to read what exactly the spec says. This is not correct. But how it is not correct isn't obvious.

The spec states that when adding a new luminaire, you go fetch the other building's information for the same position. No fetching that I could see as a user, but it was fetched in the sense that the other got overwritten. The spec states that when updating the luminaire - which is sort of what I could be doing here, except I don't see anything on the visible list that I'm updating, it's "hidden" - the update goes with the latest owner. But buildings are not really owners in our terminology. I make a note to talk to the team about how the owner concept has been used in implementing this, or to check the code. But later.

I decide the problem, as I would state it, is the fact that there's a hidden list of luminaires, which confuses the user. I decide that based on all the things I know about how we do things. It would make more sense to show the luminaires that exist through linking, as opposed to getting the information as I'm adding a position number in the add dialog. If that dialog were the place for the functionality, it would be dependent on what information the user first enters in it - complicated. But making the luminaires visible isn't complicated. I ponder a moment between "unfinished work" and "program improvement". We do both anyway; the latter just has lower priority. I decide this is more of an unfinished-work type of thing, as we see things.

With this problem, I realize there might be more of a problem when I have created luminaires in two buildings, never edited them in the other, and then unclick the linking. Or if I have changed them in the other, will the buildings end up having the right luminaires at that point? I make a note of that. And I take another picture of my mindmap.

I've been painfully aware how long this post has been getting, so I decide to wrap it up. A little reflection is still in order.

How did this work for me?

My highlights of this testing are:
  • Being blocked from use of the application directed me to specification
  • I felt awful spending so much time on the spec (low info, low value) as opposed to spending time on the application, and felt sorry for people who have to write test cases before they test.
  • I was aware of doing things in a different order and making more active choices on that
  • Automation would have had little to do with the boredom of repeating this activity again
  • I should have had many of these discussions earlier, before the feature was implemented; this comes back whenever we have an excuse not to do that. But it's never too late.
Blogging while testing makes me more analytical than I really am. I explain things I normally wouldn't explain, and I'm not convinced that all of my explanations of what and how I think are actually true. Humans have a tendency to come up with a rationalization even when there wasn't one.

It clearly changed how I felt about getting on this feature. I had more fun. I know I'm not _really_ more connected by writing about this, but I feel I am. I feel just a little less alone. I'm guessing, as I have no comparison data. But I'm guessing that given how little motivation I had for this feature when I was getting started and how much more I have right now, I tested better today this way, even though writing slowed me down and took time away from testing. For now, at least, while this experiment is new and fun. It stopped me from procrastinating on Twitter and kept me on the thing I was supposed to work on. I can imagine a lot of other techniques that could give me that. Change suits me. It makes me energetic.

There were many realizations about how things link. The session was focused on identifying what is there, and I just scratched the surface on actually testing it. Writing down what I think slows me down, as expected. But I did not expect that slowing down would stop me in ways that enabled coming up with ideas I usually only come up with in different settings, with more effort and focus. Slow leaves room for thinking. Writing formalizes the way I think.

I love the fact that in my work I can do this. I can share. That adds to my motivation in general.

It would be great to compare notes with other testers on how they think with similar testing problems. Real products. Anyone up for that?




Sunday, June 21, 2015

Investment in code literacy in Finland

There are a lot of experiences that define our ideas, and we feel the need to pass them on. Since I've taken on a thread of teaching kids (mine in particular, but others while practicing), I've spent a significant amount of energy thinking about things I do not want to pass on. Things that are very much at the heart of my experience in the IT industry.

The first one is kind of evident: I'm the one woman amongst 20 male colleagues. But whenever I'm teaching kids things related to creativity with software, I try not to mention the gender gap. I'm committed to "fake it till you make it", with the idea that by the time my kids are at the age of realising that coding could in any way be a gendered choice, they will have been through courses modelled by a man-and-woman teaching pair and the whole problem will have vanished. Talking about it would just make girls consider their stance on whether they want to fight for their right to exist, like the current generations in IT have, and make boys think the division is what is supposed to be, as it has always been there. There's no reason for the division. There's no reason to waste their time early on managing a bias that could be gone by the time they get there.

The second bias I have just hit me today: I'm biased against having kids aim high in their programming skills. "We're not trying to make them professional programmers" is a common message. But actually, we are. We are investing a significant amount of their learning time, and the level of professionalism in programming as we know it now is so varied that I'm sure we can easily target better skills than those of the worst professional programmers I've worked with.

The investment in code literacy is significant. The school system in Finland changes in 2016 so that coding penetrates all grades of elementary school. It's a significant investment of time to help the kids learn to use coding to help their other studies or whatever personal aspirations they have. I've grown to like the analogy that we're teaching code sort of as we needed to start teaching reading in the middle ages, to make reading and writing a common thing instead of a small group's privilege: we need to enable everyone's contributions. I'm thinking of a phrase I picked up from the "What is Code" article: "If coders don't run the world, they run the things that run the world".

I need to stop letting my bias limit how high we aim. A step at a time, they will have years of practice by the end of 9th grade that I never had. And it will not depend on whether they are "hobbyists" in computers, like my brother and I were back when we were young. The aim of being able to program is available to everyone. And we need to start working to make the best of it, instead of passing on the expectations from our (my) personal history.


Wednesday, June 17, 2015

Lessons learned on avoiding testing

I'm a tester. I love testing. It's great to look at an interface from a user perspective and find out how it could fail. Send in something, see what the response is, learn and come up with new ideas. The same intellectual stimulus is there whether the interface is directed at a human user or a computer user. This can fail, and I'll find out when.

As I was talking about test strategy with a group of great people over a testing dinner last night, I was testing what I was saying. I had this out-of-body experience (very mild!) of looking at what I was explaining, and I realised that while I tried really hard to focus on ideas that guide my test design, I ended up mentioning, over and over again, ways of avoiding testing that I've focused on.

Don't get me wrong, I test a lot. My testing is far from shallow. But I spend some of my effort on building an environment where things don't fail when I'm not around to test. And where I can do new things, instead of repeating the same old ideas.

1. Enable technical excellence and beautiful code
Here's a discussion that I've seen happen in organizations. A developer mentions that something should be refactored, because it's actually not that easy to comprehend. A project manager / product owner hears her out and puts the thing on the backlog. And prioritizes it so low it will most likely never climb up from there.

The "refactoring" here isn't a small "let's change it just a little and tests will protect us". It could be ripping out a self-made solution and replacing it with an external component we do not need to maintain ourselves. It could be that whatever was coded, should be significantly reorganized or even partly rewritten, as it was done when the smart developer knew a little less than today as she too learns every day.

As a tester hearing that discussion, I used to look at it as something between the developer and the project manager. But having spent enough time with developers, I've grown empathy. I know how much pain it causes a developer to even say out loud that the code he thought was good six months ago isn't good. I know how much a developer suffers when the cleaning work gets postponed, indefinitely. And when a developer suffers, quality suffers. I suffer, as I run into stupid bugs a happy developer would never leave around for me. Morale is the key.

So nowadays, whenever I hear this discussion starting, I take a proactive, constructive approach and start negotiating on how soon we could do the changes. I explain the long term costs. I build a business case if needed. And over time, I've built an atmosphere where my team's developers actively identify how we could improve and we drive the improvements through.

In the past, I wasn't only passive; I was sometimes actively against it. With year-long projects leading to a single release, the clean-up work was a risk: it would probably break everything. But with agile, we can contain the risks.

2. Build room and skills for unit testing
Most of the refactoring we do happens with little to no automated tests around. Our unit testing goes in cycles: we add tests, think they are not useful in the way we implemented them, remove them, and then start missing them again.

We started from none. I did a lot of talking to convince the people in control of the money (time) on giving developers learning time for unit testing. I got us training. And I still get us training.

Requiring something without making proper room for it in the schedules just won't work. And the skills take time to build. Time, and examples of successes. I'm here to not let us give up. Many of the things that cannot be done, cannot be done because we just don't yet know how to do them.

3. Call for the best possible contributions for quality
I hang out with my team, and while I look at myself as a helper providing quality-related information, I'm also very much a senior, active team member. I'm often the one who mentions risks I feel I could not address by myself: security, performance, maintainability, usability. And when called for, we agree together on how we address them. Performance testing with proper tools is done by people with proper tools; licenses dictate who can build and run them.

I often suggest pre-testing activities too. When something is ready to be tested, I'm often the one who suggests that another developer review the code before I get started on it. Judging from facial expressions, I've gotten somewhat good at guessing when someone is hoping to get through by lowering the team's bar on technical excellence. And I usually also hear right away an estimate of how much this will delay the point when my testing would be useful. But over the years, I've learned that things that look great in the user interface can be the biggest mess of all time when the first changes or fixes must be made. Maintainability of the code plays a big role in that, and reviews are the key to it.

I also often speak about avoiding reviews, because they are always late feedback. Wouldn't everyone like to get help while doing the work, instead of hearing they did it wrong? Calling for pair programming and even mob programming has ended up being something I do, because better code makes it easier to test-and-fix without breaking things in surprising ways. Regression happens for a reason, and spaghetti code without structure is a major one. Something we can avoid, though.

4. Holding the space
Before I test, I ask developers to give me a demo of what they've implemented. This is often a very funny event. If I call it "pairing", I get negative reactions, so I call it a demo. Sometimes I guide the developer to show me things I would test first, just to move the experience of things not working from me to them. But over time, I've started being more passive as the expectations of what will happen have been established. And I've started to see amazing things.

A developer comes in with a feature they have tested. We sit together and I don't say a word. He shows it, and starts showing things I could ask for without me asking. And pointing out bugs. It's been amazing to see how well a developer can test their own feature when their mind is in the right place. And to get it in the right place, they seem to need a "totem", in this case me, holding the space and quietly reminding them there's a purpose we're here for.

5. Being undependable and hard to predict
I've heard the argument that, as testers, people need to know what services they can expect from us and what is out of our scope. Through experiments, I've learned that for me, in this particular project, it works great to be someone others cannot depend on: to be predictably unpredictable.

Sometimes I don't test a feature. After all, all the features have been tested by the developer before they reach me (I participate before implementation starts too). So we can also release them to production without me looking at them. But whenever I won't look, I say so out loud. And from the reactions, sometimes I can see that it is the right thing to do.

When a developer asks me for help in testing, I never refuse. But I pass on some of the common expectations, forcing people to ask when they need something. I do this with a consideration of risks, but with agile and continuous delivery, small changes just don't carry as big a risk as some things I've played with in the past.

6. Passing work forward 
When I test, I find ideas of things I wouldn't want to have to notice are broken: key functionalities, key flows for the user, fragile components or use cases. I collect these and discuss them with developers. From the discussion, the developers automate things for us in testing. It could be a unit test. It could be a Selenium test. It could be a database monitoring test.

Our automation grows from ideas of work I pass forward.
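To make the idea concrete, here is a minimal hypothetical sketch in C# of the kind of key-flow check a developer might build from such a discussion, using Selenium WebDriver with NUnit. The URL and element ids are made up for illustration, not from our product:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class KeyFlowTests
    {
        // A key flow I wouldn't want to have to notice is broken:
        // logging in and landing on the main view.
        [Test]
        public void UserCanLogIn()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("https://example.test/login");      // hypothetical URL
                driver.FindElement(By.Id("username")).SendKeys("designer");   // hypothetical ids
                driver.FindElement(By.Id("password")).SendKeys("secret");
                driver.FindElement(By.Id("login-button")).Click();

                Assert.IsTrue(driver.FindElement(By.Id("main-view")).Displayed);
            }
        }
    }

The point isn't the specific flow; it's that the idea of what to protect came from exploratory testing, and the automation grew from passing that idea forward.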





Saturday, June 13, 2015

Checking: How to feel like an idiot while not being one

I write to clarify my thoughts and build my understanding. I use words that other people will understand differently. I get corrected on my choice of words much more than on my intentions or understanding. I write with the idea that it's a form of communication, always between two parties. I'm not telling you how things are or should be; I'm opening a window into my head, trusting that you will take whatever I write as well-intentioned. And if it matters, continue the dialogue. So I'd like to start this post by reminding you of that: believe people have good intentions.

This post is really about the idea that today I feel like an idiot. I know I'm not an idiot. But the feeling that you share ideas with thousands of people, get something wrong and everyone else got it right makes me want to crawl under a rock and hide. How could I miss the idea that creating checks is testing and programming, not checking? Then again, I'm most likely not alone with my lack of understanding. I have no need to be right; all I need is to learn to make sense of my own head driving my actions.

Over time, I've read James Bach's article on Testing and Checking Refined many times, and with all the reading, and all the video and talk listening I've done, it never got through to me that they might be saying that checking does not encompass the creation of checks. Coming up with the ideas of what to put in the checks is testing. Implementing the checks is programming. And checking is only running the checks that exist. Checking is done by a machine, or by a human acting as if they were a machine.

So their checking is not only a part of testing, but a part that gets twisted into a complex relationship of activities: testing (anything a human would do that is testing), programming a check (anything a human would do that is programming) and checking (anything a human would not do unless made to act like a computer). What I thought was their checking is three things in their universe of words.
  1. Skilled cognitive work of coming up with ideas of checks = testing. 
  2. Skilled cognitive work of turning the ideas into checks = programming. 
  3. Applying a decision rule again and again = checking.
The way of describing this that I'm looking for is to distinguish the activities around getting a check done (my way of thinking of checking, with coming up with ideas of checks and programming the checks included) from getting learning delivered (my way of thinking of exploring, using tools to extend my reach). As a tester, I feel I think differently when I test with the purpose of coming up with algorithmic decision rules that I can then program as checks, and when I test with the purpose of finding information in ways that I never plan to reuse as checks.

I am using checking and exploring as words to describe those purposes, and will still do so, despite now understanding someone else's use differently than before.

Reading the twitter discussion on Toby the Tester's blog post on feelings, I note we appeared to share the idea that coming up with checks and writing checks are checking. That discussion finally led me to the realisation this blog post is about. His post on feelings makes sense without defining the words. It communicates its purpose well without universal (or even, by some definitions, correct) terminology. I understand what he is explaining without adhering to the right labels and meanings of checking/testing. And with the short definition of checking, taken outside the context of the original article by James Bach, I could still easily come to the idea that the process of making evaluations by applying algorithmic decision rules would include coming up with ideas of decision rules (testing) and implementing them (programming).

It has been suggested to me that the wordplay I chose to get into again is a game I can only lose, that it's not about being productive but about being superior. But since I believe that people have good intentions, I can't believe it's just that. Still, it's good to remember how little specific words can mean. I'll leave this post with less than two minutes of Richard Feynman, a video that gives me comfort.

Wednesday, June 10, 2015

The courage to get a "no"

Introspection and learning about yourself is a great resource to drive for personal change. I've been reflecting a lot on what is important to me (the coding crisis) and paying attention to feedback I get on how I do things. This is a story of one of the pieces I've learned during the last year.

I've learned that getting a "No" is very difficult for me. It is so difficult that I often convince myself I don't want something I really would want, just to avoid getting a No. And as such, I'm accidentally denying myself opportunities. Changing a well-learned habit like this is not easy. Sometimes I feel it is not even possible, as it requires pushing myself so far out of my comfort zone that I won't even recognise myself. I've been telling myself that I avoid the No less in professional life than in private life, but I may be fooling myself.

Let me give you an example. Last autumn I met a man. Clearly there was something special about him, since we're nowadays dating, but how we ended up on a date is interesting in retrospect. I was about to go out dancing by myself, and instead of asking him if he would like to join me (a direct question that could be replied to with a No), I made a statement about my plans, leaving him to ask if he could join.

Looking at this later, I started to see a pattern, with other incidents of clearly avoiding the No. I would think I'm asking by stating a possibility that can be interpreted as opening an option. But I would also feel very safe from the potential No, as I wasn't really asking. And since I realised this, I've been seeing myself doing this a lot, everywhere, and in particular at work: trusting people to pick up questions framed as statements. Sometimes they do. But when they don't, I either let myself believe it wasn't relevant, or I ask directly, with more courage.

There have been suggestions that this might be a very typical female trait, and that a great way to stretch the limits of what women can achieve is to start collecting No's: asking for something you are not even sure you want or could get, and embracing the idea of getting the No. And great people around me report this works brilliantly.

For work I've asked for things, and looking back, I've had great results. I got us the expensive router that "this company will never acquire". I got my developers to travel to conferences abroad in "a company that never trains their developers". I changed the ways we work when "things have always been done this way". But most of my examples are about working for the team. The No I might get isn't personal, so it isn't intimidating.

So today is a day to remind myself again: dream big, go for it and never mind the No's. There's a Yes there somewhere. Trying and failing is better than not trying. I need to start collecting and celebrating No's.


Inclusive language and non-testers need to drive for exploratory testing in agile

The context-driven testing community often talks about checking and testing, as defined by James Bach and Michael Bolton. In this language, testing is the superset and checking is a subset of testing. There's really no name for the non-checking testing, as the word "exploratory testing" is deprecated. Checking is what computers can do. Checking is to testing what compiling is to programming.

Trying to use words in this way has resulted in an experience of correcting developer language in a tone that does not seem helpful. Talking to a happy developer, whose eyes gleam with joy and pride as they tell me how they've become test-infected and just love testing, I don't want to tell them they use the wrong word, as they in fact are check-infected. And that what they do is to testing as compiling is to programming. They do a lot more than that. But they miss many of the aspects of exploring, as the exploration they tend to do is only done with the intent of identifying things to automate, and it narrows down the problem space to miss relevant types of feedback testing provides.

I too use the word checking, but when I talk about the part they miss, I try not to redefine their testing as checking but to introduce another concept. Kind of like the exercise I remember doing way back on a Scrum course: continue sentences with but -- continue sentences with and. The latter makes the same idea more inclusive and forward-driving. Instead of "You are doing testing BUT that is checking", try "You are doing testing AND there's another style of testing I call exploratory". I'm not ready to deprecate exploratory testing. I go with Elisabeth Hendrickson's concept: testing = checking + exploring. And I specialise in exploring; while I understand the checking part well too, that is not where my passions lie.


And there's another side to inclusive language. While I don't want to go correcting developers' terminology in everyday settings, I find it very uncomfortable when co-training with a developer who is great at automated unit testing and testing gets accidentally redefined to mean only that. Exploring is a different type of activity, focused more on idea generation than on plan implementation. As such, I'm not convinced that it can all be summed up with the same ideas that checking (automated unit testing) can.

Purposes of "test" in automated unit testing context:

  • spec: a clear idea of what you are doing
  • feedback: information about whether it works (as you thought it would)
  • regression: keeping things in place with change
  • granularity: pinpointing when things went wrong
I haven't yet taken the time to identify what in this list makes me feel excluded as an exploratory tester, but there's at least a heavy bias towards making automated unit testing visible and leaving the exploration as a side note. To understand it better, I'm thinking of a re-labeling exercise where the words testing, checking and exploring are forbidden. Exploring looks at testing as performance - something you do. Checking looks at testing as artefact creation - something you write. They are different.

I'm missing a more inclusive language that would make me (and the likes of me) welcome as significant contributors, instead of always being on the sideline fighting to be included and understood. The non-programming exploratory tester is useful. The 15 years of misrepresenting testing should stop, so that non-testers would actively drive the idea of the need for exploration forward with us testers. Agile and breaking down the silos should work both ways. And for many people it does. I remember to be thankful for having met some of those people, especially on days when the other kind dominates my signal reception.