Wednesday 6 April 2016

NWEWT #1 Regression Testing

Introduction

Last weekend I attended my first ever peer conference, the North West Exploratory Workshop on Testing (NWEWT). This conference was organised by Duncan Nisbet with help from the Association for Software Testing (AST). I’d met Duncan at the Liverpool meetup for the North West Tester Gathering after he had presented a great talk on ‘Testers! Be more salmon!’. Shortly after that meetup, he contacted me asking if I was interested in attending this brand new peer conference. After mulling it over, I accepted for two reasons:
  1. Why not? (pretty much the main reason I’ve done a lot of things!)
  2. It sounded like an opportunity to learn by deep, critical thinking - I liked the challenge of presenting my ideas and having people really critically analyse how I think about testing. If nothing else I was going to get a lot out of it by forcing myself to think about a subject deeply in preparation!

Attendees

The attendees were as follows. The content of this blog post should be attributed to their input as much as mine; the thoughts here were brought together through collaboration:
Ash Winter
Callum Hough
Christina Ohanian
Dan Ashby
Duncan Nisbet
Emma Preston
Gwen Diagram
Joep Schuurkes
Jean-Paul Varwijk
Liam Gough
Lim Sim
Richard Bradshaw
Simon Peter Schriver
Toby Sinclair
Tom Heald

Theme

The main theme for this conference was ‘Regression Testing’, specifically what we loved or loathed about it, whether we even do it, whether we should automate it and just generally what our experience and thoughts were.

What the hell is a ‘peer conference’?

I had no idea before I went! I did some research beforehand and knew it followed a format in which deep discussion and debate are encouraged between professionals, but as with many things, I didn’t really understand it until I started doing it.
Basically, there were 15 or so of us gathered, each bringing our own ‘experience report’ on the theme or topic for the conference. We each took turns in presenting our experience report through slides, flipchart or just simply talking. We then held a Q&A session where the real action started.
The Q&A session featured green, yellow and red cards. People were only allowed to talk when indicated to do so by the facilitator. If people wished to ask a question or contribute to the discussion, they held up one of the cards which had the following uses:
  • The green card indicated to the facilitator that you would like to ask a new question or start a new thread based on the current discussion. So at the start, everyone would show green cards because there was no thread yet.
  • The yellow card indicated to the facilitator that you would like to ask a further question or talk about the current thread. This is how the discussions got deeper and deeper into particular threads.
  • The red card indicated to the facilitator that you felt the current discussion needed to stop or that a ‘fact’ being stated by another person was wrong. We didn’t see a lot of use for this card; it came out only once or twice, when particular threads ran too long or something needed clarifying. Red cards can only be used in situations the facilitator deems genuine, so they can be taken away from people who abuse them.
Typically, the discussions were mainly between the presenter of the experience report and the person asking the question. However, they could shift to a discussion between two other people - when you showed a yellow card you could directly challenge the person who had caused you to raise the card.

On a personal note

I actually think one of my biggest takeaways was cementing the feeling that meetups and conferences are not as scary as they might seem. What I mean by that is it’s easy to feel like members of the community who are quite outspoken or actively involved are not approachable. I’ve thought about this a lot recently, and I think for me it comes from the assumption that because people are experienced, they already know everything I know and have come to the same conclusions. At the start of this year, I felt like I didn’t have anything new to add and that more experienced people had taken their thoughts to a more advanced level. I guess I also felt that well-known people must get a lot of questions because of their public position, so I naturally felt like leaving them alone, especially if I thought my questions or thoughts were less developed than theirs.
So if you’re reading this and feel the same way about meeting testers and asking people questions, then fight those thoughts! My experience at all of the testing events I’ve attended so far is that everyone is more than happy to take the time to listen to you and help you! The key is being open to suggestions and ideas from other people, and this is the one element of my “oh god, I don’t know anything” thought process I’d like to keep: I’d like to remain humble. But don’t be afraid to ask the questions and approach people!

Takeaways

Enough about my personal development, what about the content? I think the biggest takeaway, and one everyone agreed upon, was that ‘regression testing’ is a phrase that is poorly defined and definitely not consistently used in our industry. I found myself agreeing a lot with Joep’s idea of not even talking about it. He suggested that instead of saying ‘regression testing’ we could just describe whatever we are doing, e.g. “I’m performing these tests to find out if these two systems are functioning as we desire”. That doesn’t mean we should never use the phrase; within a particular circle of people or within a company, there may be a very clear shared understanding of it. The point is more about being aware that phrases such as this may not be as clearly understood as people might think, and can even be used to avoid thinking about what you are doing. Again, Joep gave a funny example of it being a ‘Jedi mind trick’ whereby managers are told “we’re regression testing!”, to which they respond “great” and walk away.

Several people also shared their different approaches to regression testing. Richard shared his F.A.R.T. Model, which I had seen before, and Ash also shared his own similar model for exploring the large unknowns of systems. Toby took a different approach and discussed the idea of ‘regression in testing’ - the idea that your skills and knowledge regress - and what we might do to try and combat that.

One of the best parts of attending events like this is learning exactly how people conduct testing within their companies and the different situations and problems they have to deal with. Christina, Simon and Tom all shared different situations that generated a lot of useful discussion and debate; they definitely gave me plenty to think about in terms of how I would approach those situations myself. Richard gave a particularly useful piece of advice that I really love, which I can only paraphrase as ‘don’t focus on the politics; make sure you’re still doing a good job first and foremost at all times’. This really struck a chord with me personally, as I have experienced some very political situations that I haven’t agreed with, but I value my own professionalism enough to still deliver good work despite them.

Another idea that stuck in my head (unfortunately a lot of our notes were binned by the hotel staff on the second day so I’m stuck with just a few notes and my memories!) was Jean-Paul’s idea of using the phrase ‘continuous testing’ to help highlight the need to still perform manual testing throughout ‘continuous delivery’ pipelines - in other words combat the feeling that continuous delivery leads to people forgetting about testing. However, we did also discuss that potentially this could have the opposite effect where people treat testing as a separate concern because we are using a separate phrase for it.

In summary, there was a nice mix of ideas and approaches that I felt I could apply to my work now or in future, as well as a lot of food for thought. Some threads left me with even more questions - I’m guessing this is normal for these things! Unfortunately, many of the attendees had similar points of view, so we ended up agreeing on a lot of topics without much debate. However, I think I still learnt a great deal, and it was useful to find a lot of validation of my current line of thought on this topic.

My experience report

Not everyone got a chance to share their experience report but I was lucky enough to be one of the chosen! I’ve written up my views on regression testing here:

Summary

I really enjoyed my first experience of a peer conference a lot and it left me wanting more. I really liked the chance to start digging deep into a topic. It was also nice to find a lot of validation of my own ideas on regression testing and to learn new ideas and approaches from other people.
