Tuesday, October 25, 2016

Away from being a gatekeeper

It sneaked up on me. I can't really pinpoint the exact moment of revelation, but now, looking back, I can see I've done yet another 180-degree turn on my beliefs.

I used to believe I, as a tester, was around for the benefit of the stakeholders, shining light on all sorts of perspectives so that we understand what we're releasing. Advocacy was a big part of what I did: making sure people would understand the risks and implications, both of untested features (and I had quite an imagination for things that could go wrong) and of bugs we had found.

At some point, I started understanding that risks to business people mean opportunities to win: there's a chance I don't have to pay for this and things will still be alright! So I participated in all sorts of efforts to find actual bugs and problems, evidence of the risks being true and real, so that we could not just close our eyes and wish things would not go wrong.

Whatever the developers would change, I would test with my tester colleagues. Whatever features those changes were introducing, I would dwell on the implications, the chances of bad things happening, and then seek the evidence. I would often find myself estimating how much time we needed for testing and getting my estimates dismissed, being given a shorter time.

Thinking back to those times with the way I perceive things now, I think I've found a less stressful world of testing.

Now I start with the realization that when things are failing in production, it's not my fault. I did not change the environment or the code, and I will not be the one staying late and sacrificing my weekends to fix it. The developers (programmers, if you will) will do that. They are the ones ultimately held accountable for quality. I'm here to help them. I'm here to help them figure out ways to prevent escalations because of bad bug fixes, because they did not quite understand the implications of a change or did not quite get all the "requirements" right. I offer my help, and they can choose to accept it. I no longer need to push it down their throats and sit there guarding a release making sure nothing untested gets released. I no longer work with estimates, I work with time boxes. I commit to doing the best possible testing I can with whatever time I'm allocated.

So today, I stopped to think about where the change of mind comes from. Here are some of my observations:
  • I worked as a solo tester with many developers who tested. I know they can do things without me, even if they can do things better with me. 
  • I saw the developer pain of late nights and lost time away from new features, and realized I could help relieve that pain by channeling stakeholders directly to developers. 
  • I experimented with letting features go out without testing them, and the world did not explode. 
  • I helped change our release cycle to daily, allowing bug fixes and new feature development both to go into the same cycles. It gave me full time for exploratory testing, as some would happen pre-release and a lot more post-release. 
  • I got more done by not going into the endless fights for testing time and advocating risks to people who would only understand empirical evidence. 
  • I heard other testers, like Elisabeth Hendrickson, speak of this: we are not gatekeepers. 
I realized all of this with a slight smile on my colleague's face, when I wanted to undo a release freezing process I had created 10 years ago, stating my learning out loud: releases should belong to the developers, not to testers as gatekeepers. Developers, as we know them, often need help in understanding some of the implications of their changes. But they learn through feedback. And they deeply care. I want to side with them, not against them. And we all side with the success of our business through creating the best products we can.

Monday, October 24, 2016

The two-language trap

For a good number of years, I worked with an organization that wrote production and test automation code all in C#. The organization got there partially through trial and error with VendorScript, discarding all the work of an individual because no one else could maintain it, and partially from a timely recommendation from me. I had an opinion: it would be better to work in one language and not go for a different language for the testing code. 

And I'm simplifying things here. In addition to C#, there was at first a lot of JavaScript, both in production code and tests. Later on, there started to be a lot of TypeScript. And there was PowerShell. So clearly the developers could move around languages. But the common thing was that each of the languages was motivated primarily by production, and testing followed. 

The good thing about working with the same language selection was that as the lone tester, I would not be alone even as I contributed to the automation efforts. The automation was there to support the developers, and while e.g. Python and Robot Framework were pitched hard (it's from Finland, after all), a new language, I believed, would create distance. 

Then I changed jobs, and a timely recommendation from me was way past being timely, with years of effort already committed to a two-language environment. With the production language being C++, I could see why the choice of the same language for testing was not quite as straightforward. So the test language was Python. 

The more I look at this, the more I understand that there isn't a real choice of one language. But the two languages still look a lot like a trap. Here are some differences I'm observing:
  • The gap between 'real' production programmers and the test automation programmers is deeper. 
  • The perceptions of the 'betterness' of the languages deepen the gap. 
  • People seldom develop equal skills in both, and as new learners they strictly start from one or the other. 
  • The self-esteem of the test automation programmers in their programmer identity is lower, as they work in the 'simpler' language. 
  • There is a lot more separation of dev and test, even if both are code. 
  • As an exploratory tester, I will be in a two-language trap - learning both enough to get the information, in two different formats, to the extent I feel I need it. 
I feel the two-language trap exists because a lot of people struggle early in their programming careers to work in just one language. There are a lot of language-specific tricks and developments to follow in both ecosystems, taking the groups further apart. 

So this whole thing puzzles me. I have ideas of how, through collaboration, I could improve what I perceive as suboptimal siloing. How there's cultural work in promoting the two languages as equals in the toolset. How clean code and technical excellence might work as a bridge.

But I wish I did not have to. I wish I could live in a shared-language environment. And most of all, I wish that those who are not forced by realities into separating production and testing into two different languages would think twice about their choices. 

It's not about what language your tester is comfortable with. It's also about who will help her in the work, and how much she will have to push to get her work to be the feedback the developers need. 

When the test fails, who cares? I would hope the answer extends beyond the group of test automators. 

Sunday, October 23, 2016

Mentorship and sponsorship

I think I saw a tweet passing in my Twitter stream, planting an idea of the difference between having a mentor and having a sponsor. The goal of both is similar: supporting you with your goals. At first, when I stopped to think about it, I was convinced that I've had many mentors but very few sponsors. And that I have always acted as a mentor, not a sponsor.

Edit: here's the tweet that inspired me.
Looking deeper revealed another truth. And as usual, looking deeper needed a friend to point things out that should be obvious.

Sponsors are people who will advocate for you when you need to be more visible. Mentors are a source of guidance, advice and even inspiration. Mentors advise, sponsors act. And surely people have acted for me.

Some of My Sponsors

Thinking about this makes me think about the people I feel have made significant differences in my professional career through their actions that I never had to ask for.

There's an old boss who was willing to go to court with my previous employer to get to hire me. He supported me while we worked together, and spoke positively of me both to me and to others. Often the positive words directed at me were the most powerful. They nurtured me from potential insecurity to trusting in my skills and my potential. When I'm reminded of the Cindy Gallop quote "Men are hired and promoted for potential, women are hired and promoted on proof.", thinking of this boss makes me feel he always saw the potential in me and played a big part in making that potential develop further.

Similarly, I can recognize two other jobs that I've ended up in because I've had powerful sponsors. I ended up with a great consulting gig (and later a job) at an insurance company because a woman I had studied with, and in particular mentored through her first year of new studies, was in a position of power and worked hard to hire me. And when the idea of my latest job came up, I did not at the time appreciate the actions of my significant other, who negotiated a higher salary on the spot and made sure that the job, if it would emerge, would fit my dreams and expectations better, in particular not requiring me to give up my speaking. He spoke for me, so that I did not have to. I did not even consider changing jobs while in that discussion, which made it easy to dismiss his contribution. The job I ended up considering was one he helped create. It took me another month to start considering it, making it even easier to forget the connection.

Another set of sponsors are the people who have taken me forward in my speaking career, and those people I want to mention by name, as they are more known in the community. Helena Jeret-Mae gave me my first keynote a few years back. Rosie Sherry started picking articles from my blog to share, and taught me that there are people who make things easier for others. Rosie Sherry and Anna Royzman invited me & Llewellyn Falco to do an opening talk for TestBash NY, and Anna Royzman later allowed me to do an opening keynote for her conference. Giving people the stage is an act of sponsorship, and I've been very fortunate with that.

While I have never had a named mentor or sponsor, I've had plenty of people in both roles teaching me and supporting me.

Paying the good forward?

Similarly, I could easily recognize that I've been a mentor. I've mentored quite a number of new speakers, both local and international, to smooth their way into speaking or into delivering just one talk. Trying to support the dreams of the people (both testers and developers) around me at work is a big part of what I do.

I often go well beyond mentoring into sponsoring. I work a lot to raise money to help people with things that have been hard for me, like the financial side of speaking at conferences. I share my extra-free-entry-as-speaking-fee with people I feel need the nudge of inspiration a conference could give. I've sponsored people in my own organization, with my employers making room to allow the time to speak, but also people meeting the criterion "your organization wouldn't pay for your entry".

I hope I've opened a few doors with recommendations, and smoothed someone's professional journey along the way.

Active seeking of sponsors and mentors

The tweet made me realize I've never actively sought out a mentor or a sponsor. Not seeking them actively means it's harder to name and recognize them, but they most likely are still there.

It's great to see there are programs for mentorship, but as sponsorship includes acting on your behalf, it takes more trust. Still, it's a fascinating thought experiment whether there's more we all could be doing for one another: encouraging, mentoring and sponsoring.

Friday, October 21, 2016

Safety in being heard

Today, I've been thinking about asking. Let me tell you a few stories that are serving as my inspiration.

You don't know to ask for things you've never experienced

I'm speaking at a conference, and as my speaker fee I negotiated a free ticket - something I've been doing in Finland for quite a while. It means that not only do I get to go, but I get to take someone with me. In past years, this has opened the expensive commercial conference to people in my community, and to people in the same company I work at. The last time I passed a ticket to a colleague, he did not use it. I wanted to make sure that this time my work would go to a good purpose, so I kept checking with the tester at work I had in mind to take.

In the process of discussing all this, I learned that this was the tester's first ever conference (something I really did not expect), and things like "food is included" were a surprise. In the discussion, I realized that as a regular conference speaker and goer, I take a lot of things for granted. I no longer understand that they might not be clear to others.

So I felt grateful for having enough interaction to pick up on the unspoken questions in the puzzled looks. The tester might not have known enough to ask the questions. Then again, here not knowing would clearly have been ok, and could have been learned later.

You get answers when you know to ask

When you have a question, people rarely say no to answering your question. I'm new to my job, so I have a lot of questions, and as long as I come up with the questions, things are moving on nicely.

Yesterday, I was feeling back pain. Sitting in my office chair, I suddenly realized that I had been sitting long days in a non-ergonomic, unadjustable chair. I never paid attention until my body made it obvious I should have, basically crippling me for the day. As soon as I asked for a proper chair, I got it. But I had to ask. It was still not too late to learn to ask.

People tend to reject info they don't ask for

I've been experiencing a recurring pattern over the last weeks where I point out unfinished work (usually of a surprising kind) and the developer I talk to brushes it off. It's often "some other team's responsibility" or "agreed before I joined" or "will be done later". Having been hired to test (provide feedback), having my work categorically rejected feels bad. And it feels worse when I follow up on the claim and come back with what the other party says, and only then the unfinished work gets acknowledged.

This has led me to think about the fact that whoever asked me to provide the information as a tester is different from the developer who gets to react to my feedback. And as a new person on the job, I would love a little consideration for my efforts. They are not noise; I pay a lot of attention to that.

Why all this?

All of this makes me again think of psychological safety. Being safe means being heard. Being safe means being heard without fighting for your voice. Being safe means being heard even if you had no words to describe your questions.

As a tester, I've learned to never give up even when I feel unsafe. And simultaneously, I look around and wonder what makes some of the other testers so passive, so accepting of what they are told. And yet, they work hard in their tester jobs.

It makes me think that while I'm comfortable with confrontation, it still eats up my energy. Everyone should be allowed to feel safe.

And to get there, we need to learn to listen. 

Thursday, October 20, 2016

Testing in the DevOpsian World

There is an absolutely wonderful blog post that describes Dan Ashby's reaction to being in non-testing conferences that seem to make testing vanish. The way Dan brings testing back is almost magical: testing is everywhere!

At first, I was inclined to agree. But when I decided to look at the DevOps model with more empathy for the DevOpsers and less for the tester profession, I no longer did.

The cycle, as I've experienced it with the DevOpsers, is used to explain the continuous flow of new features through learning about how the system works in production. It's not about setting up branching systems or strategies. It's not about questioning the mechanisms we use to deploy multiple times a day - just the delivery of value to the application.

I drew my version of the testing-enhanced model:
In this model, testing isn't everywhere. But it is in places where DevOpsers can't really see it. Like the fact that code is much more than writing code: code is just the end result of whatever manual work we choose to put into the value item delivery. All the manual work is done in a branch, isolating the changes from whatever else is going on, and it includes whatever testing is necessary. With a DevOpsian mindset, we'd probably want even the exploratory testing at this point to be driving the automation creation. But we wouldn't mind finding some oops moments where we just adjust our understanding and deliver something that works better. And while some portion of this turns into automation, it's exactly the same as with other code: not all the thinking around it ends up in the artifact, and that is ok, even expected.

But as we move forward in the value delivery cycle, we expect the systems that help us move quickly to production to be automated. And even if there is testing there, there's no thinking going on in running the automated tests, the build scripts, the deployment scripts and whatever else is related to getting the thing into production. Thinking comes in if the systems alert on a problem, and instead of moving forward in the pipeline, we go back to code. Because eventually, code is what needs to change to get through the pipeline, whether it's test code or production code.

On a higher level, we'd naturally pay attention to how well our systems work. We'd care about how long it takes to get a tested build out, and whether that ever fails. We would probably test those systems separately as we're building and extending them. But all of that thinking isn't part of this cycle - it's the cycle of infrastructure creation, which is invisible in this image. Just as the cycle of learning about how we work together as a team is invisible in this image.

However, in the scope of value delivery, exploratory testing is a critical mindset for those operating and monitoring the production. We want to see problems our users are not even telling us about - how could we do that? What would be relevant metrics or trends that could hint that something is wrong? Any aspects that could improve the overall quality of our application or system need to be identified and pushed back into the cycle of implementing changes. 

I find that by saying testing is everywhere, we bundle testing into the perspectives a tester thinks testing covers. A lot of activities testers would consider testing are, for non-testers, design and proper thinking around implementation.

By bringing in testing everywhere, we're simultaneously saying the model of value delivery is extended with elements of
  • Infrastructure creation 
  • Team working practice improvement
And it's natural we'd say that as testers, because those are all perspectives we consider part of what a tester facilitates. But are they testing of the application, and does testing need to go everywhere in a model that isn't about all things development? I think not.

My feeling is that the tester community does a disservice to itself by saying testing is everywhere. It's like saying only things we label testing make it good. As if things programmers label programming or code wouldn't have the same potential.

To stay at the same table, discussing and clarifying what truly happens in the DevOpsian world, we need to speak in the same scope. Well, I find that useful, at least. 

Wednesday, October 19, 2016

Entitlement - extending our contract

I've got a few examples of things I need to get off my mind - of things where people somehow assume it is someone else's duty to do work for them.

The word on my mind is entitlement. It really puzzles me how there are so many of these cases where someone assumes they have free access to my time, just because they had some access to my thoughts in a way I chose to make available. It leads to what I perceive as a lack of thoughtfulness in requesting services, as if you were entitled to them. And it puzzles me why I think of this so differently, taking it for a fact that I should appreciate what I'm getting in the "free" services, and that I might actually need to make it bidirectional in some way if I have specific requirements to fulfill my personal needs.

The Uninvited Debates

The first thing where entitlement comes into play is the idea of debates - whenever, wherever. When you say something and someone questions you, that someone is somehow *entitled* to your answer. Not that I would have the free choice of giving that answer in the spirit of dialog and mutual learning, but that I owe people an answer and an explanation.

I love the idea that my time is mine. It's mine to control, mine to decide on, mine to invest. And investing in a debate (from my perspective) means that I get to choose which debates I stop early and which ones I continue further. And it's not about fear of the other party - it's awareness of the rathole that isn't doing anything but wasting our time.

The Burden of Proof

So I wrote a book. So it's kind of obvious Mob Programming and Mob Testing are close to my heart. The thing that puzzles me is the people who feel that for *evangelizing* something this wasteful (in their perspective), I now need to start a research project or share private company data with numbers to prove mobbing is a good use of time.

I'm happy to say it's a thing you either believe in or not. And that successes with it will most likely be contextual. I also say that my experience was that it made no sense to me before I tried it. None of the rational arguments anyone could have said would have convinced me.

There's a lot of research on pair programming. Yet, I see most people saying it can't work. I welcome anyone to do the research and come to whatever conclusion they come to, but I'm not planning on setting that up. Again, my time, my choices. Writing a book on something isn't a commitment to have answers to all the questions in the world.

I also find these labels interesting. I've been told I'm an evangelist (for mob programming) and a leader (for testing). I label myself a sharing practitioner. And my label is what drives my time commitments, not the labels other people choose for me.

The Conference Requirement

I speak at conferences. A lot. And sometimes I run into conferences that feel that by giving me the space to speak they are entitled to a lot of services and requirements on how those services are delivered.

It's not enough that often these conferences don't pay for the expenses, meaning you *pay to speak*. In addition, they can have very specific requests. My favorite thing I don't want to do is the use of a conference template on anything beyond the title slide. It's a lot of work moving elements around, and that work isn't exactly something I would love to volunteer my time for. Reserving a right to change *my slides* is another. I'm fine with removing ads and obscenities, but asking for full editing rights and requiring my compliance to change things per feedback sounds to me like I shouldn't be speaking in the first place.

We're not entitled to free services. Sometimes we're lucky to get them. Seeing paid services go down, I get reminded that we are not entitled to those either. We're lucky to have things that are good. Lucky to have people who work with us and share for us.

Saturday, October 15, 2016

Two testers, testing the same feature episode 2

There are two testers, with a lot of similarities but also a lot of differences. Tester 1 focuses on automation. Tester 2 focuses on exploration. And they test the same feature.

And it turns out, they collaborate well, and together can be the super-tester people seem to look for. They pay attention to different things. They find different things first. And when that is put together, there's a good foundation for testing of the feature, both now and later.

Tester 1, focusing on automation, makes slow progress on adding automation scripts and building coverage for the feature. Any tester with unfinished software to automate against would recognize her struggles. As she deeply investigates a detail, she finds (and reports) problems. As her automations start to be part of regular runs, she finds crashes and peculiarities that aren't consistent, warranting yet more investigation (and reports). The focus on detail makes her notice inconsistencies in decision rules, and when the needed bits are finally available, not only can the other automators reuse her work directly, but she can now easily scale to volume and numbers.

Tester 2, focusing on exploration, has also found (and reported) many bugs, each leading to insights about what the feature is about. She has a deep mind map of ideas to do and done, and organizes it into a nice checklist that helps tester 1 find better ways of automating and adds to the understanding of why things are as experienced. Tester 2 reports mistakes in design that will cause problems - omissions of functionalities that have in the past been (with evidence) issues relevant customers would complain about, but also functionalities that will prove useful when things fail in unexpected ways. Tester 2 explores the application code to learn about the lack of use of common libraries (more testing!) and about placeholders, only to learn that the developer had already forgotten about them. Tester 2 also personally experiences the use of the feature, and points out many things about the experience of using it that result in changes.

Together, tester 1 and 2 feel they have good coverage. And looking forward, there is a chance that either one of them could have ended up in this place alone just as well as together. Then again, that is uncertain.

One thing is for sure. The changes identified by tester 2 early on are the things that seemed most relevant early on, leaving more time for implementing the missing aspects. The things tester 1 contributed could have been contributed by the team's developer without a mindset shift (other than a change of programming language). The things tester 2 contributed would have required a change in mindset.

The project is lucky to have the best of both worlds, in collaboration. And the best of it all is the awesome, collaborative developer who welcomes feedback and acts on it in timely fashion and greets all of it with enthusiasm and curiosity.

Tuesday, October 11, 2016

The three ways to solve 'Our Test Automation Sucks' in Scrum

Scrum - the idea of working in shorter increments. The time frame could be a month and when you struggle with a month, you'll try two weeks. Or even one week. But still there's the idea of plan, do, and retrospect.

When we work in short increments, a common understanding is that moving fast can make us break things. And when things could break, we should test. And with the short cycles, we're relying on automation as if it were our lifeline. But what if our test automation sucks - is there no hope?

Option 1. Make it not suck.

I would love this option. Fix the automation. Make it worthwhile. Make it work.

Or like someone advised when I hinted at troubles with automation: hire someone better. Hire a superstar.

No matter what you need to do to make it not suck, do it. And with a lot of things to test, there's a lot of fixing work if a lot of it sucks. And what sucks might just be the testability of the application. So don't expect an overnight change.

Also, don't give up. This is the direction you will go to. But it might not be quick enough to save you.

Option 2. Freeze like crazy.

This option seems to be one that people resort to, and it is really an antipattern. It feels like the worst of both worlds. You slow down your development to make time for your testing. You test, you fix, you despair. And you repeat this again and again, since while the mainline is "frozen", some work gets bottled up somewhere, just to cause a big mess when unfreezing takes place.

Freezing brings in the idea that change is bad now that we need to fix things. Hey, maybe change in a way that breaks things is the bad thing, and making developers wait while we improve things isn't exactly helping.

Let go. We're not the gatekeepers, remember. Freezing is, a lot of the time, gatekeeping. Would there be a safe-to-fail way to get to the lesson of letting go?

Option 3. Do continuous releases with exploratory testing

I've worked with options 1 and 2 long enough to know that while we work for option 1 to become reality, there's a feasible option too. What if we only put things in main that can be released now?

What if, instead of thinking of the programming as the only manual task, we'd realize the testing is one too? Couldn't we find a way not only to program but also to test before we merge our changes into the mainline?

I've lived with option 3 for a few years (with gradually less sucking automation), and I'm having a hard time seeing why anyone would choose to work any other way. This basically says: stop doing Scrum. Do a feature at a time, and make your features small. Deliver them all the way through the pipeline.

Continuous Delivery without Automation is awesome. With automation, it gets even better. But the exploratory part (the 'manual' thinking work, just like programming the changes) isn't going away any time soon.

An Old Story of a Handoff Test

It was one of those projects where we were doing a significant system with a contractor. Actually, all software development was done by contractors, and on the customer side we had a customer project manager and the need to set up a little acceptance testing project at the end of it all.

Acceptance testing was supposed to be 30 days at the end of the whole development effort. If the thing to be delivered was super big, you might have several rounds of deliveries. So it was in this particular one.

As the time of acceptance testing was approaching, preparations were in full steam. No early versions of the software were made available. A major concern was that when the 30 days of testing starts, there’s no return. You test, you get fixes and you accept when you have no fixes pending. If the quality is bad enough and blocks testing, you’re not well off. 

The state-of-the-art approach for dealing with the risk of bad quality that would block your testing and thus eat away your test time was to set up a handoff test just before the testing would start. It would often serve a few purposes of confidence:
  • the system to test was properly installed, so that testing could happen
  • we’re not wasting our specialists’ time on work the contractor was hired to do

For a typical handoff test, you needed to define your tests in advance and send the documentation to the contractor at least a week before the day of handoff test. And so we did, fine-tuned and tailored our tests to be prepared for the big day.

As the big day came, we all got together in one location to test. We executed the tests we had planned for, logged bugs and were in for a big surprise. 

The contractor project manager and test manager rejected all the reports. All of them. They reviewed them against the test cases as they read them, forged in iron: “You couldn’t find this problem with exactly these steps and these steps alone.” They did not dispute that the problems were real. They rejected them based on the test cases.

Some hours (and arguments) later, we were back on track and the real bugs were real bugs.

This experience just popped back from my memories as I was reading about Iron Scripts, where deviation isn’t allowed. I can just say that I’m lucky not to have seen any of this in … about 6 years. I’m sure my past is still the current struggle for someone. 

Sunday, October 9, 2016

Details into Mob Exploratory Testing

I love exploratory testing, and have a strong belief in the idea that exploration can have many paths forward and still end up in great testing. Freedom of choice for the tester is a relevant thing, and I've grown to realize I have a dislike for guidelines such as "find the happy path first" when exploring.

Surely, finding the happy path is not a bad idea. It helps you understand what the application is about and teaches you about putting the priority of your bugs into a context. It gives you the idea of "can this work", before you go digging into all the details that don't work.

I've had to think about the freedom of choice more and more, as I'm doing exploratory testing with a mob. While I alone can decide to focus on a small piece (and accept that I don't yet know the happy path and the basic use case), people who join a testing mob are not as aware of the choices they are making. People in the mob might need the frame of reference the happy path gives in order to collaborate. For me, each choice enables something but also leaves something out. Playing with the order in which I go about finding things out can be just as important for my exploration as getting the things done in the first place.

For example, I often decide to postpone reading about things and just try things out without instructions, recognizing that documentation will create an expectation I care about. I want to assess quality in the experience of use both without and with documentation, and unseeing is impossible. Yet, recognizing that reading documentation matters, I can look at the application later too, trying to think of things (with my mind map there to support me) while simulating the old me that had not read the documentation.

In the latest mob I led, I ended up with stricter facilitation. I asked the questions "what will you do next?" and "what are you learning?" much more than before, and enforced a rule of making quick notes of the agreements and learnings in the mind map.

When the group got stuck in thinking about a proper phrasing of a concept in the mind map or location of an idea, I noticed myself referring to rules I've learned around mobbing on code. Any name works, we can make it better later, "just call it foo" and learn more to rename it. Any place in the mind map works, we can rearrange as our understanding grows, we don't need to do it at the time we know the least.

Finally, I was left thinking about a core concept of mobbing around code: intentional programming. The shared intention of what we're implementing, working in a way where the intention does not need to be spoken aloud, because the code shows it. Test-driven development does this in code, as you first define what you'll be implementing. But what does it in mob exploratory testing?

Working from a charter is intentionally open-ended and may not give the group a shared intention. Even a charter like "Explore Verify(object) with different kinds of objects and contents using the Naughty Strings list to find inconsistent behaviors" isn't enough to keep the group on a shared intent. The intent needs to be worked out in smaller pieces of test ideas.

Looking at this group, they often generated a few ideas at a time. Making them write those down and execute them one by one seemed to work well in keeping them coherent. So it looks like I had not given enough credit to the mind map as a source of shared intent for group exploration.
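A charter like the one in the quote could even be seeded with a quick script. This is only a sketch: `verify` below is a hypothetical stand-in for the Verify(object) call under test, and the strings are a tiny hand-picked sample in the spirit of the Big List of Naughty Strings.

```python
# Sketch: probing a hypothetical verify() with "naughty" inputs and
# grouping the outcomes, so the mob can chase inconsistencies.
# verify() here is a stand-in, not the real system under test.

NAUGHTY_SAMPLE = ["", " ", "null", "undefined", "0", "試験", "' OR '1'='1"]

def verify(obj):
    # Stand-in behavior: accept anything with visible content.
    return bool(str(obj).strip())

def group_outcomes(inputs):
    """Map each distinct outcome to the inputs that produced it."""
    groups = {}
    for item in inputs:
        groups.setdefault(verify(item), []).append(item)
    return groups

outcomes = group_outcomes(NAUGHTY_SAMPLE)
# Inputs in the minority group are candidates for the mind map:
# why does the system treat these differently?
```

The script doesn't decide what's a bug; it just surfaces inputs that behave differently, which is exactly the kind of smaller, concrete test idea a mob can share intent around.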

Saturday, October 8, 2016

Two testers, testing of same feature

Two testers, working in the same team with very different assignments as their starting points. Both the same age, in years of life and career, both women, but with very different backgrounds. Both starting in the same organization at almost the same time.

The first tester is hired as an automation engineer. She has spent her career with code and used to identify as a programmer. Her previous company moved her from programmer to tester, still programming the tests on a highly technical interface-based system. She approaches testing one automation idea at a time, ticking off test by test to add to her test suite.

The second tester is hired as an exploratory tester. She reads code regularly, but will instinctively approach any testing as a wide scale learning effort, where detail of automation is best used when there first is a good understanding of what we have and how it works.

In a week, the first tester automates one test. The test fails in the following weeks as other teams intentionally break functionalities in order to introduce a new feature. The second tester creates one mind map and has lots of conversations that change her understanding of what the state of the feature is supposed to be now, and which of the observations are meaningful.

While the first tester takes tools as they're given, the second tester questions everything. The second tester insists on testing manually in pieces to tweak inputs more easily, and finds ways of breaking down the long chain to run thousands of little tests, finding out what kinds of things are relevant. The first tester automates the long chain, until a discussion with the second tester adds the idea of automating in shorter bits, only to realize that it's not as straightforward with automation as you might hope.

I suspect that over time, you could get to good testing from both of these angles, but it is also evident that there is hardly enough time for the first tester's approach to grow into coverage. The exploratory approach brings long-term benefits not through the available test automation, but through improved code and programmer knowledge.

We're lucky to have both. I've been lucky to have the first as something developers do, while the latter has been my assignment.

I'm looking forward to playing more with time-boxing my time to be more like tester one, and seeing if that makes me more super than I already am with my focus on exploration. It is clear that those are, for me, conflicting learning goals, and my head just isn't ready to intertwine the two yet. Perhaps in a year.

And the most interesting question of them all: when tester one and two collaborate, from their different starting points, will they become the same? Both are *testers* after all. 

Wednesday, October 5, 2016

Sharing is a Way of Learning

There’s a big underlying theme on everything I do in my career: I’m here to learn. 

I think of software testing (and development) as learning activities, and I’m a particular fan of exploratory testing, which isn’t only founded on learning, but is an approach to learning with an external imagination: any piece of the system we’re building.

I like learning from people, and I find inspiration and energy in people. So for me, in hindsight, it has been a natural progression to become a community facilitator from a selfish starting point. What better way to access people and their ideas than to have them available at an event, sharing what they do?

I speak in conferences to learn. I share, and people tell me of their similar or conflicting experiences. Speaking is a great way of meeting people with similar interests, especially for someone like me who is comfortable talking deeply about testing and development topics I’m into, but close to panic when the topic is general, like sports, the current American presidential election (unlike politics otherwise, this is a small-talk topic), movies, books or beers. My idea of random small talk I’m comfortable with in conferences is to ask people if they speak in public, and if they would, what their topic would be. But I prefer a discussion over a lecture, and while there’s a lecture or two in my back pocket, I try to pay attention to both parties in a conversation contributing.

I sometimes wonder if the time I spend on conferences is giving me what I seek, and if there would be better uses of the same time. But usually I find someone at the conference who makes the investment worthwhile - a meaningful discussion that inspires me. The time I spend in conferences is time I spend growing a speaker community.

The very same reason of learning has driven me to volunteer to coach speakers, both with Speak Easy and with the Ministry of Testing Gems webinar series. I’ve had a chance to hear (and discuss) in detail some great experiences making their way into a polished talk for some conference. I feel privileged to have these people’s attention one on one. 

I suggest you try learning this way too. Volunteer to help a local meetup, or just organize a meetup of your own. Think of something to share, be a speaker and let the meetup organizers know of your existence; local stages often have a hard time finding their speakers. And volunteer as a mentor for SpeakEasy; your view can be really relevant in helping find a great story the world of conferences needs to hear. While learning about our craft, you might learn a thing or two about being active and organizing, and those lessons might be invaluable in your career. They have been in mine.

Don’t do it because it’s the right thing for the community to contribute and give back a little, do it because it’s the right thing to amplify your own learning. 

My response to Jon Bach's post

Since I write long comments on blog posts where the blog owner has full control over what they choose to publish, I will cross-post my comment here. This one is for Jon Bach on My brother, the Tester.


Good thoughts, and well presented, thank you. I wanted to add a few perspectives though.

I don't think people (including me) are upset about the use of my name, but about using my name to describe my character and misrepresent me. A lot of the first reactions came from people who (unlike James) know me, and felt that the slide misrepresents me. It does not dissect claims I've made; it includes statements of how James sees himself as different from me. And quite frankly, I find many of them quite insulting.

In his blog post, on the other hand, he continues the theme. He misquotes slides from my presentation (not what I said, but the text - the slides were already available for fact-checking during the conference, and in particular when the article was written).

With great power comes great responsibility. I find that James isn't, in this particular case, acting very responsibly; instead he chooses to continue to misrepresent me and not publish my response on his blog.

I love how you talk about congruence and being true to your values, and I agree that is what James is doing. He says things as he sees them, and I value that. I'm not against that. I'm against forcing people into 'debates' with fishy tactics. I'm against intimidating people and not realizing that while some find it clear, others find it so scary that we lose relevant voices in the community. A lot of the voices I recognize as lost are women's, and I find we're in a place where we need to stop accepting losing those voices. It's reasonable to expect a change.

You mention the dysfunctional archetypes. I read his post very differently. He starts by blaming me for copyright infringement, stating that while it's a fact, he's not using it against me - which he just did by stating it. He corrected it after I quoted a piece of US copyright law, and I find it interesting that I needed to do that. Similarly, he blames me for posting the slide. I would never have had the chance of posting it unless the slide was there in the first place. This is the stance I find I'm often witnessing in James.

When James 'tests' you, he does so knowing you. He does not know me. He has not studied my writings apart from some tweets. He has boxed me, unjustifiably, as 'not a tester'.

I also find his actions in this particular case are not congruent with the message he portrays of himself as The Tester. He did not test the characterizations he made of me; he did not fact-check his evidence. He lost his focus on evidence to emotions around thinking I'm attacking our craft.

I find, unlike James, that we break through our different beliefs and paradigms by being kind and working on real tasks with one another. Inflicting cognitive dissonance (doing and enjoying things people thought were worthless, like exploratory testing) is far more powerful than a debate.

Tuesday, October 4, 2016

It's not about debate, it's about the style and tactics welcomed

I'm in need of therapeutic writing, because I made the mistake of going into James Bach's blog post and reading some of the comments. Realizing that it is not enough that James misrepresents my character, but that with the incident there's a whole bunch of others who feel they now know me ('nice', 'avoids debate'), feels wrong.

I've left two comments on that post myself. One that corrects the contents of the post, which has not been published by now. And another that corrects something from the early comments, which has been published. This just goes to show one aspect of the nature of the 'debate' going on: only the aspects the moderator chooses to include are included, giving an appearance of different voices which isn't the whole picture. And the same debate tactic is in use on Twitter, where discussions ('debates') happen only between people who have not blocked each other. The need for blocking and selective publishing just emphasizes the need to be kind to one another. When we're not, we just block voices that could teach us something if we were open to it. Or, they could learn something from us.

Selective voices are one problem with the debate as it is framed in the James Bach School of Context-Driven Testing, but there are also others. The twitterverse just taught me the name of another tactic that I think is low: the 'red herring'. The idea is that when you're discussing something, one of the debating parties introduces something that just diverts the discussion. For me, the 'copyright infringement claim' was a clear red herring.

And then there's style. Being physically intimidating, banging tables, shouting and pouting. All of these are tactics that I see in the example debates I've endured with James and while I find them hilarious, I also find them harmful.

I believe in a debate when it is about seeking understanding through investigating difficult questions, not when it's framed as a fight with one winner. The underlying belief system must be one where we look for a win-win. This is not a zero-sum game where the stronger idea should win. The weaker idea should be nurtured, and allowed to grow to its full potential too.

I've had my share of experiences of trying things that at first seemed absolutely silly and outside my view of the world, and I could have debated and discussed them indefinitely. Unit testing was one of them. Mob programming is another. I wouldn't have learned and loved either one if I had not allowed the experience to happen without first understanding all the basis of it.

Some things we deepen with a discussion (even a debate, if we leave the dirty style and tactics out - the argument culture). Some things we just need to try out in practice, in context and see if it teaches us anything.

I value experimentation culture over debate culture. Both include deep, meaningful discussions, but the latter too often settles for intellectually and theoretically addressing things that require an experience to make sense.

Sunday, October 2, 2016

My response to James Bach's explanation of the slide incident

This is my response I posted on James Bach's post on the slide incident. I cross-post it here in case people don't feel like reading all the details and want to see my picks.

TL;DR. When you're at conferences, it is a place to share and tell stories. When you make it so that one keynoter attacks another's character (not statements), it's not safe to share and we all lose a lot.

(my short is way too long :) )


You make a simple thing very complicated with a long post. It took me over an hour to read this, and I'm not going to try to go through all your points. No one is going to read all of the stuff I could have to say about this. So I'll cover the ones I find key. Opportunity cost: this post is time away from learning more about testing, and I'm trying to educate someone who seems very stuck in his views.

I declined my right to proofread your post in the hope that I wouldn’t have to be the one stopping you from being even more insensitive and misrepresenting than having that slide in your talk made you. Your blog post is an open forum. Your conference talk isn’t. It has a time constraint, and a topic different from our dispute. You might not care about the conference audience’s good use of time, but I do, and it keeps me from wasting their time on the long defense that your misrepresentation of me would require. Thank you for finding something to apologize for, but I see this as a non-apology. Instead you insinuate that tweeting you talking so that your slide is visible is a copyright infringement, and that the slide somehow had its place in the talk. You claim it was presented better than the words on it, but I quote a participant: “IMO, the words he spoke on the Maaret slide were way nastier than what was written. He chastised her and said if she wants to be a leader she needs to be able to handle this.” Unsurprisingly, I can handle this. And I fight for the safety of the community by saying that this is not ok. What you did was wrong. Stop doing it. To me and to anyone. Please.

I was in the talk and in the context of the talk. The keynote was recorded. Don’t try to say it was private to that room. It wasn’t. And it wouldn’t be even if I had not published the slide. Your talk was not a second talk on the role of the tester. Our talks were not designed to be connected, or I missed the memo. I tweeted the slide to express that such a slide about another presenter is unprofessional. I tweeted it to express that I don't approve of the idea that you have the right to talk about me (not about my points) in your keynote. The many comments on Twitter confirmed I was not alone with my judgement. James, you have hurt many people who just walked away. I have listened to the stories of people leaving context-driven testing, testing and even software because of you. What you do is not ok. I can’t stop you, but you should stop yourself.

I have not said, even once, that “you can’t talk about me without my permission”. I’ve said you can’t talk about me as part of your keynote that isn’t about me, and if you want to talk about me in your keynote, you could have at least had the decency of fact-checking your claims with me. I was unnecessary to your point, and it is just misuse of your power to include me in your presentation in front of an audience who couldn’t care less about our imagined dispute. You did not use my statements; all you say is that you differ from me in three self-centric I-statements that misrepresent my position.

You still don't see it, but you haven't done your homework on me. Try reading more of my blog. You're spouting personal judgements of me based on some tweets and the *one talk* you've ever sat through with me. That one talk surprised you with how similar we are, but it did not do enough for you to realize we have even more in common.

Here are examples of how you misrepresent my talk in this article:

  • “Maaret said that eliminating the testing role is a good thing”. No, I did not. I said that I find a lot of value in being identified as a tester. Quoting my slide: “Tester identity helps you find community to learn with, tester skillset makes you blend into the system and overall value chain.” None of this says I’m ok with getting rid of testers. 
  • “She has a slide that says ‘from tester to team member’” - it actually said “From a tester to a software professional with a testing emphasis”. “She confirmed to me that I hurt her feelings by saying that” - you confirmed I was hurt when you told me on Twitter I was not a tester, but that happened on Sunday, outside the conference. In the conference, I said that the two terms are synonyms for tester to me, and I use the latter as my wider identity - with a testing emphasis. 
  • “Maaret has expressed the belief that no one should name another person in their talk without getting their permission first.” No, I’ve expressed that when you choose to name me, you should use things I say / state, and not things that describe you as opposed to me - misrepresenting me. And that if you wanted to go into representing me, you should either use statements I made, or check with me on the truthfulness of claims that have no backing in what I said in my talk. The thing that backs up your statements is your false belief about me, because you never got to know me, as I choose not to engage in fights but in deep and meaningful discussions.
  • “Anyone who takes vocation seriously”. Implying I don’t is offensive. I’m just different, but just as serious about my vocation. Why otherwise would I take offense at you telling me repeatedly that I’m not a tester (I just test for my work)? I’ve repeatedly given you a reference on argument vs. dialogue culture. You’ve repeatedly decided not to hear what I say on the difference of what a debate could be. Read Deborah Tannen’s book on the topic. She says it in so many words that I don’t have the energy to repeat all of it. 

And here's how you misrepresent me with the points on the slide:

1. I’m authentic and compassionate, just like you. I just want to step out of an argument based on opportunity cost and/or an inability to continue without seriously offending the other. In my talk, I spoke of both points. I don’t know where you think your evidence for my lack of authenticity and compassion is. I think this is where we are the same. We are different in caring about how our actions leave others feeling. You’re ok leaving people offended and scarred. I’m ok having a dispute, but I try to get to shaking hands in the end and agreeing we disagree on a fundamental level, without compromising my integrity. I hurt people when I’m not nice/kind, so I work hard to learn to be nice/kind, because it makes a positive difference in the things I care for. Being heard. Doing great testing. Contributing to projects in a meaningful way. Learning from others.

2. I think listening and understanding the other’s viewpoints is critical to learning. I think safety is a prerequisite for learning. If I attacked people, they wouldn’t feel safe to learn with me. Since great testing is about deep learning, I feel we are doing a disservice to great testing when we attack people in the name of debate. I’m not against a good discussion or an academic debate. It doesn’t include the “aha, I got you, you’re not a tester” comments or the character descriptions. It works through statements and is often well prepared for. It’s not an act of “you now met me, I challenge you to a duel in debate” but a scheduled activity that has a reserved time slot just for that discussion to take place. You’re right that we’re different, but in a different way than you think. I think that with your style of attacking people, many other people than the ones you attack start to feel unsafe to learn with you. We’re losing diversity because of you. You might not care, and I can’t push you to care. That is where my “we can agree to disagree but still be kind to one another” rule kicks in. But I know you care. You care just as deeply as I do. Your experiences of why this would be relevant are not the same as mine, and we may never come to a place where we learn to agree on this. I surely will not go on being intentionally mean and inconsiderate. It happens often enough by accident when I live by my other rule: ask for forgiveness, not permission. I do what I believe is right.

3. I don’t think excellence can be maintained without focus and energy. I was trying to state this in my talk, when I spoke of my belief that new people in particular need to have one area of focus to go deep in, or we will lose out on having people who can deliver the value we now recognize skilled testers deliver. I’ve had a lot of focus on being and becoming good at testing. I work as a tester in my teams. I logged on average 4 issues that got fixed for a period of 2.5 years, until I went to experiment with a fix-and-forget style of helping my team learn to avoid bugs. The hours I’ve tested over the last 20 years probably exceed yours, because training and speaking in conferences is just a side thing for me. I think your focus and energy does not go into testing; it goes into arguing. It goes into being available for whatever challenge, whenever. My aversion to useless Twitter discussions has freed up a lot of time to pair test with people, and to have deep focus on complicated product problems ranging from idea to use. Implying I don’t have excellence because I don’t have focus and energy like you is just plain wrong. My choices are different, but they go just as deep as yours.

You did not fact-check the differences. You did not ask clarifying questions after my talk. You did not talk to me about how you were thinking of presenting me. You chose to misrepresent me, even if with the good intention of clarifying your surprise at how much you ended up agreeing with me. I’ve not questioned your intention; I’ve questioned and keep questioning your execution. A slide with statements about another speaker in a keynote talk is a questionable approach. The words as they were written were hurtful. You said something nice too, but most of the stuff was just bad and misrepresented me in a place where there was not enough room for disagreement. Even though I was trying to be time-conscious, you got interrupted by the audience on the second point and moved on. That was not a place for a debate on my character. Or rather, your perception of my character in comparison to yourself.

Being accountable does not mean I’m available to discuss things with you indefinitely, until you’re done. It means I stay true to my words and beliefs. I’m in a discussion as long as I too get value out of it. A lot of the discussions I step out of are ones that are just not going anywhere. I stand behind my words, until I change my mind. And that is known to happen. It’s called learning. It happens to me all the time. Even then, I work hard to realize when and what I’ve learned. I would imagine that being part of accountability in an industry where learning is a prerequisite rather than an afterthought.

I do get to say I’m not appointing myself as a leader but as a learner. I speak to learn. I have every right to frame my need of speaking in public in whatever way feels authentic to me. I recognize others will listen to me, and even take action following that. And I try to represent my experiences so that they wouldn’t do harm but help. Saying a leader (someone with followers) can be attacked for speaking in conferences is like saying people can be beaten up because they walk in dark alleys. You must know that bad behaviors are not the victim’s fault, but the choice of the attacker. You’re misbehaving, and it would be time to see that after 15 years of abuse of Rex Black and Lisa Crispin, just to name a few.

I will not, ever again, approve of you mistreating anyone. I will never stay silent when I see you trash Lisa. I will say I disagree with your conduct when you trash Rex Black. I will say out loud that it is not ok to call Dawn Haynes a chipmunk. But I will also say that you were lovely towards Ash Coleman while debating with her. I will say I’ve seen you be wonderful to so many new people joining testing in conferences and in your classes. I will say that you’re divided and incoherent but mostly wonderful. Wonderful people can still do bad things. I’m not accepting your bad things because you’re wonderful or smart. I will annoy the hell out of you by reminding you that I still haven’t changed my mind about what is appropriate behavior, and that I believe people with your smarts could choose to change.

I’m not calling for a boycott, because your voice is important to me and to many testers. I’m asking you to be who you say you are and start exercising more of that compassion by dismissing people rather than attacking them. By pointing out the claims you disagree with rather than going into their character.

Saturday, October 1, 2016

Learning on Exploratory Testing from other teachers

If there's anything I think I know something of, exploratory testing would be it. It's the thing I love, it's the thing I learn through practice and trying to get into the heads of others who practice it.

I'm absolutely excited for next week, when I get to learn more about teaching exploratory testing together with Maaike Brinkhof. Maaike seems to look at me as I look at her: with respect, and expecting to learn tons. Finding people who are into exploring as much as I am is rare, and I believe that interest is a driving factor in learning to be good at it. We already had a blast remote pair testing, finding problems I did not expect we'd run into, even though the application we were testing was something I had tested before.

At the Test Master's Academy workshop day, I got to spend half a day in a tutorial about Exploratory Testing in Agile. It turned out to be almost like a peer workshop, with people comparing experiences of what made exploration hard in their organizations and what approaches we had used to include exploring in agile. I learned that most people framed exploration as something they time-boxed into their processes, while for me it's the thinking that drives testing and also engulfs the decisions about automation. I learned that the level of trust we had managed to establish in our organizations differed significantly.

The workshop was run by Alessandra Moreira, and I absolutely loved the depth of her experiences. And yet, she did not make it about her lecturing to us, but about letting us have a dialog while she pitched in from her experiences of the challenges and solutions.

My favorite insight I came out with was that while what I call the artifact-oriented view of testing (others prefer the word checking) gives us, in contractor settings, information on how well we're fulfilling the contract, the performance-oriented exploratory testing (others prefer the word testing) gives us information about the reputational risk. We might fulfill the letter of the contract, yet without exploratory testing we risk an unhappy customer.

Bernie Berger, a fellow tester in the workshop, led me to a realization. There are two kinds of reputational risk. There's my reputation at risk as a tester, and the contractor's reputation towards the client. I thought of the latter at first, but the first is one that I find I've needed to manage a lot in non-contractual settings. "Why did you miss this bug?" used to be a phrase that I needed an almost automatic answer for.

An exemplary debate

I got a chance to see a really good debate as part of the Test Master's Academy: one by Ash Coleman and James Bach.

The structure of the debate was insightful. First, both presented their opposing positions. That was followed by a very short round of replies to the other's position. Then the debate took a pause for both sides to collect arguments for their position with the audience, which split nicely between the views. Finally, the debate continued with replies and arguments until it was time to close.

Ash Coleman was wonderful and articulate. She defended her position from personal experience, having come into testing just five years ago from a chef background: that perhaps we're looking for the mindset instead of the deep skills when hiring new testers, and that this approach could widen the door.

James Bach was calm, clear and explained his position well. Having spent years in deep learning of what testing is about and how you could be better at it, the idea that anyone can be a tester seems wrong to him. There is such testing that requires deep knowledge and expertise, that a new tester couldn't do unsupervised.

The conclusion was that perhaps the disagreement is less of a disagreement than it seems: Ash is talking from the perspective of widening the intake to the industry, whereas James weighs more heavily the excellence and ability of whoever is in. We all come from somewhere, and having a baseline of abilities to build on helps us along our growth path.

If you had watched that debate, you'd feel comfortable being a part of it. James was kind and considerate. Ash did not need him to be: she rocked her arguments, knowing she is doing well in her tester position now. Both parties brought insightful perspectives into the discussion.

The debate had been prepared for. It had mutual learning in mind. There was no shouting over one another. The views were more opposing before the preparation, and the dialogue increased understanding of where each party came from.

There's more depth to that debate than could be handled within the timeframe. I have many open questions, like "If nowadays really anyone can start coding and making software products, why wouldn't we want to allow the same for testers?" and "What is this fear I'm sensing of becoming obsolete if the world does not understand the uniqueness of testing skills?", but I'm happy to leave them for another time.

I can find two things to criticize that would make this even more exemplary:

  • It shouldn't take a pre-debate debate to feel safe.
  • The use of time between the opponents could have been more equal, especially at the end, where just a hint of each explaining for the other was emerging. 
If a debate looks like this, who would not be up for it?

Not Closing Down the Debate

Twitter is getting overheated on the slide incident with James Bach, and I feel I need to add my perspective into this.

I am not calling for a boycott, and I would hope other people wouldn't either. In my experience over the years, James Bach is a caring, intelligent individual, and many of us have a lot to learn from him. But we also have a lot to learn from others, and some of the things he does are not encouraging others to join the discussions.

Looking at James' Twitter replies, there are claims like "I think they are trying to close down debate", "A community that discourages debate is a community for children not adults. I believe in a free press.", and "Consent is not part of public debate. Otherwise there would be no free press."

Let me clarify: I'm for a free press too. I have no problem with debate. But "debate" as James exercises it isn't what academics would call a debate.

In this particular case it was:
  • An attack on the person rather than the arguments she made
  • Wrongly timed: it came as part of his keynote talk, which wasn't framed as a response, instead of as part of the discussion around my keynote talk, where there was room for it
  • Constrained to be a one-sided attack, not a discussion (again, due to the chosen timing as part of the keynote, when any other time during the conference would have been more appropriate)
  • A slide written to stand better with the words spoken around it, and more awful as standalone text than as part of the whole presentation. 
I do not agree that my speaking at conferences gives people license to negatively characterize me in the form of an attack. Say which of my claims you disagree with. James said in his spoken remarks that he agrees with much of my material, quoting: "I could have even used some of your slides". 

There are two things here that I want to change; I do not want to close down the debate.
  1. It's not a debate in the first place if it is not mutual. Debate is about learning deeply. Safety is a prerequisite for learning. Both parties need to want to be in it for it to be a debate. I'm happy to have discussions with James as he is in person. He is warm, caring and wonderful. 
  2. It has rules. We debate experiences and statements, not a person's character. We seek understanding, built mutually (dialogue), over a winner (argument). And we avoid personal attacks, like me telling James he was a jerk (he only behaved like one for a moment, again; that does not define him) or James telling me I am shallow in my testing (I'm not, even if I say something he interprets that way). Some rhetoric used for winning (argument) is bad for learning (dialogue).
We're a professional community, and making debates about people's character seems wrong in general. Holding a belief that differs from yours shouldn't be a step towards ridicule and attack. When you fundamentally disagree and neither party is budging, there is no common ground, and you can just call it at that, respectfully. We don't have to agree to be kind to one another. Trying to continue is just a waste of time; find someone else to talk to.

I'm calling a stop to this kind of debate: every time, everywhere it happens without consent from the other party or from the people stuck in the middle of it. If there is no time, there is no depth, and I'm not interested in debate for debate's sake. Free speech allows you to write a blog post or tweet as much as you like, as long as you don't slide into slander. That approaches the illegal. Professional codes of conduct make exercising your free speech in a talk to mischaracterize other people possible, but inappropriate. James could do what he did, but it was not the right place.

I'm not against debate. I just think debate has a form, a style, and rules. And one of the rules is that you can't debate without having the other person debate with you. You can exercise your free speech, and even attack. But attacks are unprofessional.

James, your style isn't getting you deep. It's alienating the people who would have something to share, and making them appear shallow to you because you intimidate them. We need to take each other into consideration to have a real dialogue. You then represent people based on that shallow understanding, which makes it even more of an attack.

I encourage deep learning about how to be nice and kind without sacrificing bluntness and honesty. For me, learning to be nice is much harder than learning to find insightful problems. I'm not nice and kind by nature; I work to become so, because learning is magic. And there is no learning unless we feel safe.