Monday, September 18, 2017

Announcing an Awesome Conference - European Testing Conference 2018

TL;DR: European Testing Conference 2018 in Amsterdam February 19-20. Be there! 

Two months of Skype calls with the 120 people who submitted to European Testing Conference 2018 in Amsterdam have now transformed into a program. We're delighted to announce the people you get to hear from and the topics you get to learn in the 2018 conference edition! Each one of these has been hand-picked for practical applicability and diversity of topics and experiences in a pair-interview process. Thank you to the awesome selection team of 2018: Maaret Pyhäjärvi, Franziska Sauerwein, Julia Duran and Llewellyn Falco.

We have four keynotes for you, balancing testing as testers and programmers know it and cultivating cross-learning:
  • Gojko Adzic will share on Painless Visual Testing
  • Lanette Creamer teaches us how to Test Like a Cat
  • Jessica Kerr gives the programmer perspective with Coding is the easy part - Software Development is Mostly Testing
  • Zeger van Hese shares the Power of Doubt - Becoming a Software Sceptic
With practical lessons in mind, we reserve 90-minute sessions for the following hands-on workshops. You get to choose two to participate in, as we repeat the sessions twice during the conference:
  • Lisa Crispin and Abby Bangser teach on Pipelines as Products Path to Production
  • Seb Rose and Gaspar Nagy teach on Writing Better BDD Scenarios
  • Amber Race teaches on Exploratory Testing of REST APIs
  • Vernon Richards teaches on Scripted and Non-Scripted Testing
  • Alina Ionescu and Camil Braden teach on Use of Docker Containers
While workshops get your hands into learning, demo talks give you a chance to watch someone experienced doing something you might want to mimic. We wanted to run three of these side by side, but added an organizer bonus talk on something we felt strongly about. Our selection of demo talks is:
  • Alexandra Schladebeck lets you see Exploratory Testing in Action
  • Dan Gilkerson shows you how to use Glance in making your GUI test code simpler and cleaner
  • Matthew Butt shows how to Unit/Integration Test Things that Seem Hard to Test
  • Llewellyn Falco builds a bridge to more complicated test oracles, sharing on Property-Based Testing
Each of our normal talks introduces an actionable idea you can take back to your work. Our selection of these is:
  • Lynoure Braakman shares on Test Driven Development with Art of Minimal Test
  • Lisi Hocke and Toyer Mamoojee share on Finding a Learning Partner in Borderless Test Community
  • Desmond Delissen shares on a growth story of Two Rounds of Test Automation Frameworks
  • Linda Roy shares on API Testing Heuristics to teach Developers Better Testing
  • Pooja Shah introduces Building Alice, a Chat Bot and a Test Team mate
  • Amit Wertheimer teaches Structure of Test Automation Beyond just Page-Objects
  • Emily Bache shares on Testing on a Microservices Architecture
  • Ron Werner gets you into Mobile Crowdsourcing Experience
  • Mirjana Kolarov shares on Monitoring in Production 
  • Maaret Pyhäjärvi teaches How to Test A Text Field
In addition to all this, there are three collaborative sessions where everyone is a speaker. First there's a Speed Meet, where you get to pick up topics of interest from others in fast rotation and make connections even before the first lunch. Later, there is a Lean Coffee, which gives you a chance for deep discussions on testing and development topics of interest to the group you're discussing with. Finally, there's an Open Space where literally everyone can be a speaker and bring out the topics and sessions we did not include in the program, or where you want to deepen your understanding.

European Testing Conference is different. Don't miss out on the experience. Get your tickets now from http://europeantestingconference.eu

Saturday, September 16, 2017

How Would You Test a Text Field?

I've been doing tester interviews recently. I don't feel fully in control there, as there's an established way of asking things that is more chatty than actionable, and my bias for action is increasing. I'm not worried that we hired the wrong people, quite the opposite. But I am worried we did not hire all the right people, and that some people would shine better given a chance of doing instead of talking.

One of the questions we've been using where it is easy to make the step from theory to practice is: How would you test a text field? I've asked it around a whiteboard when not on my private computer with all sorts of practice exercises on it. And I realized that the question tells a lot more when done on a practice exercise.

In the basic format, the question reveals how people think of testing and how they generate ideas. The basic format, as I'm categorizing things here, is heavily based on years of thinking and observation by two of my amazing colleagues at F-Secure, Tuula Posti and Petri Kuikka, and was originally inspired by discussions on some of the online forums some decades ago.

Shallow examples without labels - wannabe testers

There's a group of people who want to become testers but as yet have little idea of what they're getting into, and they usually tend to go for shallow examples without labels.

They would typically give a few examples of values, without any explanation of why a value is relevant in their mind: mentioning things like text, numbers and special characters. They would often try showing their knowledge by saying that individual text fields should be tested in unit testing, and suggest easy automation without explaining anything else about how that automation could be done. They might go on about hardware requirements, just to show they are aware of the environment, but take their idea of what is connected too far. They might jump into talking about writing all this into test cases so that they can plan and execute separately, and generate metrics on how many things they tried. They might suggest this is a really big task and propose setting up a project with several people around it. And they would have a strong predefined idea of their own of what the text field looks like on screen, like just showing text.

Seeing the world around a text box - functional testers

This group of people have been testers and picked up some of the ideas and lingo, but often also an over-reliance on one way of doing things. They usually see there's more than entering text into a text box that could go wrong (pressing the button, trying enter to send the text) and talk of the user interface more than just the examples. They can quickly list categories of examples, but also stop that list quite quickly, as if it were an irrelevant question. They may mention a more varied set of ideas, and list alphabetic, numeric, special characters, double-byte characters, filling up the field with long text, making the field empty, copy-pasting to the field, trying to figure out the length of the field, erasing, fitting text into the visible box vs. scrolling, and suggest code snippets of HTML or SQL, the go-to answer for security. They've learned there are many things you can input, and that the field is not just about basic input - it has dimensions too.
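To make those categories concrete, here is a minimal sketch in Python of what the listed inputs might look like as values fed to a text field. The submit_text helper is a hypothetical stand-in for whatever route (UI driver, API call) the real application offers, and the oracle is deliberately weak.

```python
import pytest

def submit_text(value: str) -> str:
    """Stand-in for the real submission path (UI driver, API call, ...).
    Hypothetical: replace with however the application under test accepts text."""
    return value  # echoes the value back so the sketch runs on its own

INPUT_CATEGORIES = {
    "alphabetic": "hello",
    "numeric": "1234567890",
    "special_characters": "!#%&/()=?+*",
    "double_byte": "テスト入力",                # double-byte characters
    "empty": "",
    "very_long": "a" * 10_000,                 # far past the visible box
    "markup_like": "<b>bold?</b>",             # HTML snippet, a common security probe
    "query_like": "'; DROP TABLE users;--",    # SQL snippet, another common probe
}

@pytest.mark.parametrize("label", list(INPUT_CATEGORIES))
def test_field_handles_category_gracefully(label):
    value = INPUT_CATEGORIES[label]
    # Deliberately weak oracle: the field should neither crash nor silently
    # mangle the input. Real expectations depend on where the field lives.
    assert submit_text(value) == value
```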

This group of people often wants to show the depth of their existing experience by moving the question away from what it is (the text field) to processes, emphasizing how relevant it is to report to developers through bug reports, how the developers may not fix things correctly, and how a lot of time goes into retesting and regression testing.

Tricks in the bag come with labels - more experienced functional testers

This group of testers have been looking around enough to realize that there are labels for all of the examples others just list. They start talking of equivalence partitioning and boundary values, of testing positive and negative scenarios, and can list a lot of different values and even say why they consider them different. When the list starts growing, they start pointing out that priority matters and not everything can be tested, and may even approach the idea of asking why anyone would care about this text field and where it is. But the question isn't the first thing; the mechanics of possible values are. The prioritization focus takes them to the use of time in testing it, and they question whether it is valuable enough to be tested more. Their approach is more diversified, and they are often aware that some of this stuff could be tested on the unit level while other things require it to be integrated. They may even ask if the code is available to look at. And when they want to enter HTML and SQL, they frame those not just as inputs but as ideas around security testing. The answer can end up long and show off quite a lot of knowledge. They often mention they would talk to people to get more, and that different stakeholders may have different ideas.
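As a rough illustration of how that vocabulary turns into values, here is a sketch of equivalence partitions and boundary values for the field's length; the 255-character maximum is an assumption made only for the example.

```python
# A sketch of how "equivalence partitioning" and "boundary values" turn into
# concrete values for a text field. The maximum length of 255 characters is
# an assumption made purely for illustration.
MAX_LEN = 255

partitions = {
    "empty": "",                                # boundary: shortest possible input
    "single_char": "a",                         # just inside the valid class
    "typical": "a" * 40,                        # middle of the valid class
    "at_max": "a" * MAX_LEN,                    # boundary: longest valid input
    "just_over_max": "a" * (MAX_LEN + 1),       # first value of the invalid class
    "far_over_max": "a" * (MAX_LEN * 100),      # deep inside the invalid class
}

def expected_to_be_accepted(value: str) -> bool:
    # The oracle under the assumed rule: anything up to MAX_LEN is valid.
    return len(value) <= MAX_LEN

if __name__ == "__main__":
    for name, value in partitions.items():
        print(f"{name:15} length={len(value):6} expect_accepted={expected_to_be_accepted(value)}")
```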

Question askers - experienced and brave

There's a group who seem to know more even though they show less. This group realizes that testing is a lot about asking questions, and that a mechanistic approach of listing values is not going to be what it takes to succeed. They answer back with questions, wanting to understand the user domain and, at best, also the technical solution. They question everything, starting with their understanding of the problem at hand. What are they assuming, and can that be assumed? When not given a context for where the text field is, they may show a few clearly different ones to be able to highlight their choices. Or if the information isn't given, they try to figure out ways of getting to it.

The small group I had together started just with brainstorming the answer. But this level wasn't where we left off.


After the listing of ideas (and assumptions - there were a lot of those), I opened a web page on my computer with a text field and an OK button and had the group mob to explore, asking them to apply their ideas on this. Many of the things they had mentioned in the listing exercise just before were immediately dropped - the piece of software and the possibility to use it carried people away with it.

The three exercises

The first exercise was a trick exercise. I had just had them spend 10 minutes thinking about how they would test, and mostly they had not thought about the actual functionality associated with the text field. Facing one, they started entering values and looking at output. Over time, they came up with theories but did not follow up on testing them and got quite confused. The application's text field had no functionality; only the button had. After a while, they realized they could go into the dev tools and the code. And they were still confused about what the application did. After a few rounds of three minutes each on the keyboard, I had us move on to the next example.

The second exercise was a text box in the context of a fairly simple editor application, but one where focusing on the text box alone without the functions immediately connected to it (the unit test perspective) would miss a lot of information. The group was strong on ideas, but weaker on execution. When giving a value, what a tester has to do is stop (very briefly) and look at what they learned. That learning wasn't articulated. They missed things that went wrong. Things where, to me as an experienced exploratory tester, the application is almost shouting out how it is broken. But they also found things I did not remember, like the fact that copy-pasting did not work. With hints and guidance through questions, I got them to realize where the text box was connected (software tends to save stuff somewhere), and eventually we were able to understand what we could do with the application and what with the file it connects to. We generated ideas around automation, not through the GUI but through the file, and discussed what kinds of things that would enable us to test. When asked to draw a conceptual picture of the relevant pieces, they did well. There were more connections to be found, but that takes either a lot of practice in exploring or more time to learn the layers.
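A minimal sketch of that file-level automation idea, assuming a plain-text save file named notes.txt; the real editor's format and location would of course differ, and a real test would exercise the editor between the checkpoints.

```python
from pathlib import Path

# Sketch of testing through the file the editor saves to, instead of the GUI.
# The file name and plain-text format are assumptions; a real editor may use
# a different location, encoding or structure.
SAVE_FILE = Path("notes.txt")

INTERESTING_CONTENT = [
    "plain text",
    "",                          # empty document
    "å ä ö 漢字",                # non-ASCII characters
    "line one\nline two",        # line breaks
    "a" * 100_000,               # content far bigger than the visible box
]

def seed_file(content: str) -> None:
    # Put known content where the application will read it from.
    SAVE_FILE.write_text(content, encoding="utf-8")

def read_saved() -> str:
    # Read back what was persisted after the application has handled it.
    return SAVE_FILE.read_text(encoding="utf-8")

def test_content_survives_the_round_trip():
    for content in INTERESTING_CONTENT:
        seed_file(content)
        # In a real setup you would open the editor, touch the text box and
        # save here; this sketch only demonstrates the file-level checkpoints.
        assert read_saved() == content
```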

Again with the second exercise, I was left puzzled by what I observed. As a group they had a lot of ideas on what to try, but less discipline in trying those out and following up on what they had tried. While they could talk of equivalence partitioning or boundaries, their actual approaches to thinking about which values are equivalent and learning more as they used the application left something to be desired. The sources of actual data were interesting to see: "I want a long text" ended up as something they could measure, but they were not immediately aware of an online tool that would help with that. They knew some existed but did not go get one. It could have been a priority call, but they also did not talk about making a priority call. When the application revealed new functionality, I was making a mental note of new features of the text box I should test. And when that functionality (ellipsis shortening) changed into another (scroll bars), I had a bug in mind. Either they paid no attention, or I pointed it out too soon. Observation and reflection on the results was not as strong as idea generation.

The third exercise was a text field in a learning API, and watching that testing unfold was one of the most interesting parts. One person in the group quickly created three categories of outputs that could be investigated separately. This one was on my list because "works right / wrong" is multifaceted here, and depends on where the functionality would be used and how reliable it would need to be. Interestingly, in the short timeframe we stuck with data we could easily generate. This third one gave me a lot to think about, as I later had one of my team's developers test it, which got them even more strongly into testing one output at a time and insisting on never testing an algorithm without knowing what it includes. I regularly test their algorithms, assessing whether the algorithm is good enough for its purpose of use, and found that the discussion turned into "you shouldn't do that, that is not what testers are for".
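To show what "three categories of outputs, investigated separately" could look like in code, here is a hedged sketch against a hypothetical text-analysis endpoint; the URL, payload and response fields are all invented for illustration, not the actual API from the exercise.

```python
import requests  # assumed to be available

# Sketch of "three categories of outputs, investigated separately" for a
# text-analysis style API. The endpoint, payload and response fields are
# hypothetical; only the structure of the checks is the point.
API_URL = "https://example.invalid/analyze"   # placeholder, not a real endpoint

def analyze(text: str) -> dict:
    response = requests.post(API_URL, json={"text": text}, timeout=10)
    response.raise_for_status()
    return response.json()

def check_outputs_one_at_a_time(text: str) -> None:
    result = analyze(text)
    # Category 1: what the API understood from the input text
    assert result.get("normalized_text"), "no normalized text returned"
    # Category 2: the classification the algorithm produced
    assert result.get("label") in {"positive", "negative", "neutral"}
    # Category 3: how confident the algorithm claims to be
    assert 0.0 <= result.get("confidence", -1.0) <= 1.0
```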

The session gave me a lot of food for thought. Enough that I will turn this into a shorter session teaching some of the techniques of how to actually be valuable. And since my planned conference tracks are already full, I might just take an extra room for myself to try this out as a fourth track.





Friday, September 15, 2017

Fixing Wishful Thinking

There's a major feature I've been testing over a period of six months now. Things like this are my favorite type of testing activity. We work together over a longer period of time, delivering more layers as time progresses. We learn, and add insights, not only through external feedback but also because we have time to let our own research sink in. We can approach things with realism - the first versions will be iterated on, and whatever we are creating remains indefinitely subject to change.

I remember when we started, and drew the first architectural diagrams on the wall. I remember when we discussed the feature in threat modeling and some things I felt I could not get through before got addressed with arguments around security. I remember how I collected dozens of claims from the threat modeling session as well as all discussions around the feature, and diligently used those to drive my learning while exploratory testing the product.

I'm pretty happy with the way that feature got tested. How every round of exploration stretched our assumptions a little more, giving people feedback that they could barely accept at that point.

Thinking of the pattern I've been through, I'm naming it Wishful Thinking. Time and time again with this feature and this developer, I've needed to address very specific types of bugs. Namely ones around design that isn't quite enough.

The most recent examples came from learning that certain standard identifiers have different formats. I suggested this insight should lead to us testing the different formats. I did not feel much support for it, but no active resistance either. I heard how the API we're connecting to *must* already deal with it - one of the claims I write down and test against. So I tested two routes, through a provided UI and through the API we connect with, only to have my hunch confirmed - the API was missing functionality that someone else's UI on top of it added, and that we, connecting with the API, missed.
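A sketch of what that two-route check might look like; the identifier formats and the submit_via_ui / submit_via_api helpers are hypothetical placeholders, not the actual formats or integrations involved.

```python
# Hypothetical sketch of the two-route check: the same identifier, in each of
# its formats, pushed through both the provided UI and the API we connect
# with. The formats and helper functions are illustrative, not the real ones.
IDENTIFIER_FORMATS = [
    "1234567890",       # plain digits
    "12-3456-7890",     # hyphenated
    "12 3456 7890",     # space-separated
    "FI1234567890",     # with a country prefix
]

def submit_via_ui(identifier: str) -> bool:
    raise NotImplementedError("drive the provided UI here")

def submit_via_api(identifier: str) -> bool:
    raise NotImplementedError("call the API we connect with here")

def test_routes_agree_on_every_format():
    for identifier in IDENTIFIER_FORMATS:
        # The claim under test: the API *must* already deal with every format.
        assert submit_via_ui(identifier) == submit_via_api(identifier), (
            f"the two routes disagree on {identifier!r}"
        )
```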

A day later, we no longer missed it. And this has been a pattern throughout the six months.

My lesson: as a tester, sometimes you work to fix Wishful Thinking. The services you are providing are a response to the developers you're helping out. They might not understand at all what they need, but they appreciate what they get - as long as it comes in small, digestible chunks, timed so the message turns into something actionable right now.


Learning through embarrassment

Through using one video from users to make a point, we created a habit of users sharing videos of problems. That is kind of awesome. But there's a certain amount of pain in watching people in pain, even if it wasn't always the software failing in software ways, but the software failing in people ways.

There was one video I got to watch that was particularly embarrassing. It was embarrassing because the problem shown in that video did not need a video. It was simply a basic thing breaking in an easy way. One where the fix is probably as fast as the video - with only a bit of exaggeration. But what made it embarrassing was the fact that while I had tested that stuff earlier, I had not done so recently, and I felt personal ownership.

The soundtrack of that video was particularly painful. The discussion was around "how do you test this stuff, you just can't test this stuff" - words I've heard so many times in my career. A lot of the time, problems attributed to a lack of testing are known issues incorrectly prioritized for fixing. But this one was very clearly just that - a lack of testing.

It again left me thinking of over-reliance on test automation (I've written about that before) and how, in my previous place of work, the lack of automation made us more careful.

Beating oneself up isn't useful, but one heuristic I can draw from this learning: when all eyes are on you, the small and obvious becomes more relevant. The time dimension matters.

I know how I will test next week. And that will include pairing on exploratory testing. 

Saturday, September 9, 2017

Burden of Evidence


This week, I shared an epic win around an epic fail with the women in testing Slack community, as we have a channel dedicated to brag and appreciate - sharing stuff in more words than on Twitter, knowing your positive reinforcement of awesome things won't be held against you. My epic win was a case of being heard on something important, where not being heard and not fighting the case had been the epic fail.

On top of thinking a lot about the ways I make my case to get feedback reacted on, like bringing in the real customer voice, the person I adore the most in the world of testing Twitter tweeted on just that topic.
I got excited, thoughts rushing through my head on the ways of presenting evidence, and making the case both objectively and emotionally appealing. And it is so true, it is a big part of what I do - what we do in testing.

As great people come, another great one in the twitterverse tweeted in response.
This stopped me to think a little more. What if I did not have to fight, as a tester, to get my concerns addressed? What if the first assumption wasn't that the problem isn't relevant (it is clearly relevant to me, why would I talk about it otherwise?) and that the burden of evidence is on me? What if we listened, believed, trusted? What if we did not need to spend as much time on the evidence as testers - what if a hunch of evidence was enough and we could collaborate on getting just enough to do the right things in response?

Wouldn't the world be more wonderful this way? And what is really stopping us from changing the assumed role of a tester from the one carrying the burden of evidence to someone helping identify what research we would need to conduct on the practical implications of quality?

Creating evidence takes time. We need some level of evidence in our projects to make the right decisions. But I, as a tester, often need more evidence than the project really needs, just to be heard and believed. And a big part of my core skill set is navigating the world in a way where, when there's a will, there's a way: I get heard. It just takes work. Lots of it.

Talking to 120 people just for a conference

I'm doing the last two days of intensive discussions with people all around the world. Intensive in the sense that I hate small talk. These people are mostly people I have never met. It could be my worst nightmare. But it isn't. It's the best form of socializing I can personally think of. 

We had 120 people submit to European Testing Conference 2018. Instead of spending time reading their abstracts, we spend time talking with the people, to hear not just their pitch but their stories and lessons. Each one gets 15 minutes. In those 15 minutes, they introduce all their topics and get feedback on what we heard (and read during the session), and as an end result, we walk out with one more connection.

Entering the call, some people are confused about why we care so little about introductions and professional history. We assume awesome. We assume relevance. And we are right to assume that, since each and every one of us has unique perspectives worth sharing. Connection is a basic human need, and volunteering to speak on something that matters to you makes you worthy.

The discussion is then centered around whatever topic the person with a proposal brought into the discussion. It's not about small talk, but about sharing specific experiences and finding the best voice and story within those experiences to consider for the conference.

When I tell people about the way we do things for the call for collaboration, the first question tends to be about the use of time. 120 people, 15 minutes each - that is 30 hours! And not just that, we pair on the discussions, making it an even bigger investment. But it is so worth it.

The investment gives the conference a real idea of what each person will bring and teach, and enables us to help avoid overlap and build a fuller picture of what testing is about. Similarly, it gives us the best speakers, because we choose based on speaking, not writing. It brings forth unique aspects of diverse perspectives that enable us to balance our selection. The conference gets an awesome program, and I do not know of any other mechanism to build a program like this.

The investment gives the people collaborating with us a piece of feedback they usually never get. They get to talk to the organizers and hear how their talks and topics balance against the topics people are currently sharing. Many people come in with one topic and walk out with several potential talk topics. And even if they get nothing else, they get to meet new people who love testing from some angle just as much as they do.

The investment gives most to me personally. With only 30 hours, I get to meet some of the most awesome people in the world. I get private teaching, specifically answering the questions I raise on the topics we talk about. I get to see what people who want to speak share as experiences, and I get to recognize what is unique. I become yet better at being a source for all things testing, who can point out where to go for deeper information. It improves my ability to google, and to connect the things I hear into a view of the world of all things testing.

I've met awesome developers, who reinforce my restored belief in the positive future of our industry. I've met testers and test managers, who work the trenches getting awesome value delivered. I've met designers and UX specialists, who want to bridge the gaps we have between professions and share great stuff. Some stories teach stuff I know with a personal slant. Some bring in perspectives I wasn't aware of.

It's been a privilege to talk to all these people. I see a connection from what we do in the collaboration calls to what we do with our speed meet session. We give every one of our participants a chance to glimpse into the stuff the other knows, without small talk. A connection made can turn into a lifetime of mutual learning.


Monday, September 4, 2017

Funny how we talk of testing now

Most of the time, I just do what I feel I want to do. And I want to do loads of exploratory testing, from various APIs to whatever is behind them, all the way to a true end to end.

Today, however, to finally get to the point of end-to-end exploration, I needed an idea of when the last piece in my chain would be available. So I asked:
I want to test X end-to-end, any idea when the piece-of-X is available? 
The response really surprised me. It was:
A is working on such an end-to-end test.
I was almost certain that we were confusing test (the artifact) and test (the activity) here, so I went on to clarify:
This test is exploratory (not automated) and with focus on adding information on end user experience. Is that what A does as well?
I got a quick no. And a link to the one single test automation case that the team has agreed to add, for a quite simple positive end-to-end case.

As I have not tested yet, I have no idea what more I will find. Most likely some. Almost every time, some.

I'm happy that the end-to-end automation case will end up existing and monitoring what there is. But surely that is not what testing is all about?

It's fascinating how quickly this degeneration of how we talk about testing happens. How quickly the activity turns into specific artifacts.

It takes a belief in the unknown unknowns to get people to explore when they think they can plan the artifact. Communication gets "fixed" by always using more words, rather than fewer.