Tuesday, September 26, 2017

Mob Programming on a Robot Framework Test

A year ago, when I joined my current place of work, I eagerly volunteered to run a session on Mob Programming. As I shared, I told people across team lines that I would love to do more of this here. It took nearly a year for my call to action to sink in.

A few weeks back, a team lead from a completely different business line and product than mine pinged me on our messaging system. They had heard that I did this thing called Mob Programming, and someone on the team was convinced they should try it, so what would the practical steps be? They knew they wanted to mob on test automation, and a challenge they faced was that one person did pretty much all the work on it, so sharing that work within the team was not as straightforward as one might wish for. Talking with three members of the team, including the team lead and whoever had drawn me in, we agreed on a time box of two hours. For setup, they would bring in the test environment (which was a little more than a computer, as we were testing a router box) and ideas for some tests we could be adding.

It took a little convincing (not hard, though) to get the original three people to trust me with the idea of not starting with a session of teaching and introducing the test automation setup, but just diving in. And even though we agreed on that in advance, the temptation to explain bits that were not in the immediate focus of the task at hand was too much to resist.

The invitation for the team of 7 said “test automation workshop”, without a mention of the mechanism we would be using. As we got to work on adding a test, I asked who knew the least and had them sit at the keyboard first, stating only the rule of “no thinking on the keyboard”. I also told them we would rotate every 3 minutes, so they would all get their turns on and off the keyboard, except for the one automation expert. Their rule was to refrain from navigating unless others did not know (count to three before you speak).

Watching the group work for 1.5 hours was a delight. I kept track of the rotation, and stepped in to facilitate only when the discussion got out of hand and selecting between options proved too hard. I found myself unable to stop some of the lecture-like discussions where someone felt the need to explain, but the balance of doing and talking was still good. People were engaged. And it was clear to an outsider that the team had strong skills and knowledge that was not shared, and in the mob format the insights and proposals on what the “definition of done” for the test case should be got everyone’s contributions.

I learned a bit more about Robot Framework (and I’m still not a fan of introducing the Robot language on top of Python when working with plain Python would seem a route of less pain). I was reminded of Emacs (which I had somewhat forgotten) and found I could still live without it. I learned about the different emphasis people placed on naming and documenting tests. I learned about their ideas of grouping tests into suites and tagging them. I learned to think in terms of separating tests when the Do is different, not when the Verify needs a few more checks for completeness. I learned to Google Robot Framework library references. And I learned that this team, just like many other teams at the company, is amazing.
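To illustrate the “separate tests when the Do is different” idea, along with tagging, here is a minimal Robot Framework sketch. The keywords, tags, and credentials are hypothetical stand-ins, not the team’s actual ones; the real tests against the router box would look different.

```robotframework
*** Settings ***
Documentation     Sketch: one action ("Do") per test, several checks
...               ("Verify") inside it. All keywords below are hypothetical.

*** Variables ***
${ADMIN_USER}     admin      # assumed credentials for illustration only
${ADMIN_PASS}     secret

*** Test Cases ***
Admin Can Log In To Router
    [Documentation]    One Do (log in), then several Verifies for completeness.
    [Tags]    smoke    login
    Log In To Router    ${ADMIN_USER}    ${ADMIN_PASS}
    Status Line Should Show    Logged in
    Admin Menu Should Be Visible

Admin Can Reboot Router
    [Documentation]    A different Do gets its own test case,
    ...                rather than piling more Verifies onto the login test.
    [Tags]    regression    reboot
    Reboot Router
    Router Should Respond Within    60 seconds
```

With tags in place, a run could then pick subsets, for example `robot --include smoke tests/`, and suites would simply be the directory and file structure the test files live in.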

Asking around in the retro, here’s what I got from the team: This was engaging and entertaining. We expected to cover more. The result was different from our first idea of what the test would be, and in this case different means better.


My main takeaway was to do this with my team on our automation. If I booked a session, they would show up. 
