A few weeks ago I was lucky enough to spend a few days far north of the Arctic Circle, first in Tromsø where we got to watch some majestic Northern Lights from a graveyard, then farther north on Svalbard. Thought I’d share a few photos.

Clue: if there are trees, it’s Tromsø – trees can’t grow on Svalbard because of the permafrost.

Schrödinger’s Test Case: A Cautionary Tale on the Importance of Test Granularity

A test plan review with a colleague yesterday left me kind of baffled. Test Rationalisation is a great tool in capable hands, reducing waste and diagnosing problems as rapidly as possible, providing a shorter feedback loop and reducing effort to the minimum.

Whilst I’m all for using the leanest viable approach and cutting waste where necessary, sometimes being too concise causes confusion – and in the case of test cases, it can leave tests no longer testing anything much at all.

The Requirement

We are testing a new count included in an export.

  • Types A, B and C count as 1.
  • Also, multiples of A, B and/or C count as 1 (e.g. AB, BC, ABC).
  • Types X, Y and Z don’t contribute to the count, and no combination of X, Y or Z counts.
  • A combination of A/B/C and X/Y/Z is systematically impossible.
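To make the rules concrete, here’s a minimal sketch in Python of how the intended counting logic might look. The function name and the record-as-string representation are my assumptions for illustration, not the real export code:

```python
# A minimal sketch of the counting rule. Each record is represented as a
# string of its types, e.g. "A", "AB", "YZ" (an illustrative assumption).

COUNTED_TYPES = {"A", "B", "C"}

def export_count(records):
    """Each record counts once if it contains any of A, B or C; X/Y/Z never count."""
    return sum(1 for rec in records if set(rec) & COUNTED_TYPES)
```

So A, AB and ABC each contribute exactly 1 to the count, while X, YZ and XYZ contribute 0.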

The Test

My colleague was busily setting up the scenarios to create types A, B, C, X, Y and Z. They also had two multiples: AB, to test this was counted once, rather than as two separate counts, and YZ.

Assuming everything goes to plan, they should get a count of 4 – that is, A, B, C and AB.

See the problem?

One possible interpretation of the count 4 is everything worked. 4! We passed the test! We have working code! We all get a free car and two months paid holiday!


Another interpretation is that AB counted as 2, A counted, B counted, but C didn’t. 4!
Another is that X counted as 2, Y and Z each counted as 1, and A, B, C and AB weren’t counted. 4!
Another is that C counted as 4, and none of the others were counted at all. 4!
Another is that A counted as 8, B and C counted as -2 each, and AB didn’t count. 4!
Etc etc etc… the permutations are essentially endless. 4!

From this sort of “one hit”, “happy path” test, in this scenario, you really can’t tell whether the system behaved as expected – just that the test passed. Has the code worked? Both yes and no… we got 4, but we don’t know how.
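To illustrate just how little the “4” tells us, here’s a sketch of two deliberately broken counters that both return 4 for this exact scenario. The names and the record representation are mine, purely for illustration:

```python
# Two deliberately broken counters that still return 4 for the scenario
# A, B, C, X, Y, Z, AB, YZ (record-as-string representation is an
# illustrative assumption).

scenario = ["A", "B", "C", "X", "Y", "Z", "AB", "YZ"]

def buggy_letter_counter(records):
    # Bug: AB counts as 2 (one per letter) and C doesn't count at all.
    return sum(rec.count("A") + rec.count("B") for rec in records)

def buggy_wrong_types_counter(records):
    # Bug: X counts as 2, Y and Z count as 1 each, and A/B/C/AB aren't counted.
    return sum(2 if rec == "X" else 1 for rec in records if rec in {"X", "Y", "Z"})

print(buggy_letter_counter(scenario))       # 4 – the combined test "passes"
print(buggy_wrong_types_counter(scenario))  # 4 – it "passes" again
```

Both implementations are badly wrong, and the combined test is blind to it.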

Killing No Birds with One Massive Stone

It’s easy to fall into this trap, isn’t it? You think you’re being sensible by building contiguous test runs, typically in system areas where the process (for example creating scenarios and then performing a lengthy export) takes a lot of time. If we can hit multiple variables in a single pass why wouldn’t we?

The problem is without testing each of these rules individually, you really don’t know how each individual rule behaves. As it happens, the code paths for A, B and C are all completely separate, as are X, Y and Z, as are combinations thereof. Finding this out was as easy as checking with the developer who was working on the story. This sort of information is crucial in determining our approach.

So, the reason we shouldn’t test multiple variables in a single pass in instances like this is that we have no visibility of individual passes or fails as we go. Where we have distinct outputs for the various variables, there may be more of an argument for merging test cases – although there is a risk of cross-pollination, where certain combinations are problematic – but a more sensible approach is to check each rule individually, then hit the combinations in a further round of tests.

Just because testing something is painful, that doesn’t mean we get to skip it.

The Unhappy Path of Brittleness

If we assume things will work, we’re fundamentally failing to perform our job role. This is why “happy path” testing gets so much ire – happy path meaning we assume everything will work just fine and just set out to prove it, much like the test case at the top of this article. I mean, we got 4, right? Good enough?

This leads to the other crucial issue: if something’s wrong, by approaching our testing this way we have no key to diagnosing the problem. Sure, we know something went wrong. But what? Say the count of our test case ended up being 6. Or (null). Or X. What caused that? Was it the behaviour of A? The combination of A and B? What are the reproduction steps? All we have is a blank “fail”, and no real intel on why it failed.

By designing and executing more tests up front, we can avoid this situation entirely – we know A worked, B worked, but C? Well, C blew up the export. We have certain code paths which a dev can exclude as unproblematic, and we have the focus area ready to go. Our whole test doesn’t live or die by its weakest link; instead we have a number of smaller, more granular tests which have individual passes or fails.
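The granular alternative can be sketched as one small test per rule, pytest-style. Here `export_count` is a hypothetical stand-in I’ve assumed for the real export process, so each rule gets its own individual pass or fail:

```python
# One small test per rule: each pass or fail is individually visible,
# so a failure points straight at the broken rule.
# export_count is a hypothetical stand-in for the real export process.

def export_count(records):
    return sum(1 for rec in records if set(rec) & {"A", "B", "C"})

def test_single_types_count_once():
    for t in ("A", "B", "C"):
        assert export_count([t]) == 1, f"{t} should count as 1"

def test_combinations_count_once():
    for combo in ("AB", "BC", "ABC"):
        assert export_count([combo]) == 1, f"{combo} should count as 1, not {len(combo)}"

def test_excluded_types_never_count():
    for t in ("X", "Y", "Z", "XY", "YZ", "XYZ"):
        assert export_count([t]) == 0, f"{t} should not count"
```

If C blows up the export, only `test_single_types_count_once` fails, and the reproduction steps are right there in the test.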

Your test plan should be able to withstand bugs – it’s there to find them!

Chopping things down into smaller chunks like this increases visibility, reduces the overhead of diagnosis, increases confidence in system behaviour and, frankly, makes an indisputable amount of sense.

Granularity: Follow The Thread

Granularity is the key to all this. Things may well be as cut and dried as the example above, but often they are less clear. Where can we make rationalisations? Where can we cut or merge tests?

When I started my life as a tester I was forever making matrices, alternating common variables and testing lots of permutations in longer runs. This was, in all honesty, immaturity as a tester. I hit every major combination, right? That’s the same as knowing every part works? It’s a heuristic I rely on less and less, as I have experienced a world of pain in false passes, undiagnosable failures and other errors introduced by this coarse granularity in my testing.

As I have developed, as I have moved into more agile, multi-functional co-located environments, I have learned to just ask the devs I work with the damn questions. Are the code paths distinct (ie do I need to hit each component individually)? Is there a point where you can expose what’s being processed and how to me (sometimes these multi-threaded pathways can be broken down behind the scenes, such as by stepping through them with a debugging tool)? If not, the tests need to stay distinct, with a fine enough granularity that I can identify that each component variable is behaving as expected.

Sometimes there’s no shortcuts – you just have to get started!

Moving Test Upstream – How to test the whole SDLC

Fourth in a series of posts summing up my thoughts on the Ministry of Testing’s latest success, TestBash Manchester 2016.

As testers, we’re used to hearing the value of “shifting testing left” in the software development life-cycle. By finding problems sooner, we can be instrumental in saving the business money, effort and time rectifying costly mistakes. But it’s not always clear how to go about this, as a ground-level tester. In this article I’ll discuss some of the techniques, ideas and strategies which can make your testing practice more holistic, ensuring quality is never just an afterthought.

Testers! Be more salmon!

At TestBash Manchester, the talk “Testers! Be More Salmon” by Duncan Nisbet specifically called for testers to drive testing practice out of its little pool at the end of the waterfall, and into other areas of development. I feel a little arrogant to say this was one talk where I felt a hint of smug satisfaction – whilst there’s always more to do, in our organisation we’ve succeeded in getting test into pretty much every stage of development, and have earned considerable buy-in and respect by doing so. But in former roles and other organisations, things haven’t been so good – and now I know how important early test involvement is to quality software.

The only Boehm’s Curve graph I will ever use

To those who are used to testing being a “final activity” prior to release, this brave new world of testing moving outside of its dark corner can seem a bit alien. I started life as a SIT tester, literally in a basement 2 floors below the devs, taking the work of various scrum teams from other areas of the business and giving them a final “integrated” once-over for two months at a time. I had nothing to do with the design process, little visibility of the user stories, and really only a glancing understanding of what the business wanted from its changes.

My role now couldn’t be further removed from this – I’m involved from the story formation stage, helping the Product Owner build testable stories, with a suitable granularity and slicing, ensuring acceptance criteria are not only realistic, but quantitative and testable. A few things have allowed me to keep a stronger handle on what’s being asked for, designed, built – and tested.

Story Refinement (3 Amigos)

Something fairly new to me is the idea of the Three Amigos. This is a kind of “pre-grooming” session, where a Product Owner, developer and tester get together to discuss and refine user stories prior to a major estimating session with the wider team. Prior to 3 Amigos, a lot of time and effort was expended in estimation sessions, working out if stories should be split, if investigations/spikes would be required, or if acceptance criteria were complete or appropriate.



In 3 Amigos, the Product Owner presents unrefined stories, and through a process of discussion, suggestion and gradual improvement the stories are brought up to a higher standard. If splits are required, this is the stage where the split is undertaken – perhaps some ACs are deliverables in their own right, and deserve to be viewed as separate stories, for example. Perhaps investigations are required to reduce doubt prior to estimating a story.

One of the key functions of a tester in these sessions is to develop an understanding of how one would go about proving the acceptance criteria. Many times, AC say words like “should”, as in “the button should produce a notification on-screen”. That’s not an AC! OK, so it should – that doesn’t mean it will. Another common AC is for something to be “better” or “faster” or “improved”. How do we prove that? Say we have to “improve page load time”. How much is enough? Improve it by 0.000001%? That’s satisfied the AC, after all. These are trite examples but they give the idea that by being present in the early story formation stage, a tester can ensure their requirements of testable AC can be met.


I have been a little shocked to hear several testers say they only gave test estimates after dev work was underway. My team estimates in effort, via planning poker, using a totally subjective application of the modified Fibonacci scale to represent story sizes. Test is a key consideration here – a story may imply a 1-line code change, which a dev will naturally estimate as a very very small change. But it may be a fundamental part of the system, affecting almost every transaction – it would be a MAMMOTH test task!

So, it makes a lot more sense to me for test to be considered as part of this process up-front, before the Product Owner decides which stories to prioritise. Unless, of course, the organisation’s test resource is unlimited.

Planning Poker – the only kind of gambling I do

Key considerations for testers during estimation are not only testing to prove the AC, but validating an absence of regression – and, where appropriate, supporting the writing and maintenance of automated tests. It’s likely these will be at the forefront of a tester’s mind, and unlikely they will be big concerns for anyone else. I’m sure it will be no surprise to anyone reading this that they should be!


A step which I feel is often missed in this process of “continuous testing” is during the time in which developers are designing their approach. A user story specifies behaviours, but the solution is often largely in the hands of the developers, to solve as they see best given their deep technical understanding of the system, and the skills in the dev team.

Testers may feel a little intimidated by these highly technical discussions, but in my experience there is tremendous value in being present while devs do their brainstorming. I use this time to come up with my key test approaches (of course, more almost always fall out of the actual process of testing), key considerations which I feel need testing. As I hear the devs discuss their designs and approach, it’s rare that I interject (although of course I will, if I feel it’s relevant).

Models are inherently testable in their own right

However at the end of a particular story being discussed, I take a moment to review my test approaches with the devs. I literally “run the tests” on their design, and ask them to prove the key things I’ll need to know to show that the solution matches the AC, what I’m planning to test and what, from my distinct perspective as a tester, needs to happen for the user story to be delivered. This has been enormously valuable to both myself and the devs, and has trapped a fair few functional problems in designs or models before a single line of code has been written.

Can’t emphasise the value of this one enough – one of my secret weapons as a tester, and tremendously powerful. I’m reminded of the old saying usually misattributed to Einstein:

If you can’t explain something to a six-year-old, you really don’t understand it yourself.

That’s kind of nonsense – as Richard Feynman responded, “If I could explain it to the average person, I wouldn’t have been worth the Nobel Prize.” – but it is a great way of checking the dev’s understanding, proving testability and checking the suitability of the design in one step. Plus it gives you a head start in designing your test approach, which is a neat side effect.


When our developers want to merge code, they use Bitbucket to comment on changes and considerations before those changes hit our codebase. The “hive mind” comes up with better solutions, notices mistakes or pulls out approaches which don’t meet our coding standards. It’s a valuable step – although as a tester-centric sticker on my laptop says “code reviews are overrated” – and provides a lot of insight and knowledge sharing. But as testers in agile teams, we often have no such oversight in our approaches, and no such learning opportunity.

In the review process, everyone has something different to offer

A key standard we’ve introduced into our test planning is a review process, whereby a high level test plan (allowing for lots of exploratory goodness when the actual testing commences) is reviewed by both a developer working on the project, and a fellow tester from another team. This ensures key test considerations are given a “technical review” (the number of times one particular dev has suggested weird character strings with special significance to the types of changes we’re making… I should keep a list), and a “methodology review” enabling unconsidered approaches to come to the fore.

As with everything in this article, it’s vital to keep these reviews at a high level and complete them before a lot of work has been done. There’s a major advantage for devs to have the test considerations reinforced while they’re still producing code (“Oh shit, I’d forgotten about that bit” syndrome), and also to complete the review before a lot of data prep or (whisper it) test cases have been written, which may need to be changed or scrapped altogether.


I won’t go too deep here as many others have written reams of good information about approaches to testing, but there’s always something to test or check. I’ve written test cases for reviewing documentation to ensure it meets requirements (these really are a “final checklist”), exploratory charters for things I need a dev to show me on a system (“Pre-Requisites: Capture a dev”) and all manner of weird and wonderful things. But those things have always started upstream.


Hopefully some of these techniques will be new to you, or at least a new way of approaching the test process. I’m a firm believer that testers can be (and should be) instrumental throughout the development process, rather than siloed off as “the first users”. By maintaining communication, visibility, asking questions and building the team’s understanding of both what the business want, and what we hope to see in the final product, testers can deliver far more value.

We are a positive part of the product development process – not just “monkeys with typewriters” trying to prove how useless everyone else is at their job. If you want to be more than just a checker with a clipboard, at the end of the factory production line, it’s time to swim a little farther upstream.

Duncan Nisbet – Testers! Be More Salmon!

Top 3 New Skills from TestBash Manchester 2016

Third in a series of posts summing up my thoughts on the Ministry of Testing’s latest success.

Note: The summaries below are based on my takeaways from the discussions, and are not necessarily representative of either the original intent or meaning of their authors! I only offer my perspective on what I heard, so my apologies if anyone feels misquoted. With that said…

3. Sketchnoting

The conference opened with the very engaging, very experienced James Bach, someone I’ve been aware of for about as long as I’ve been a tester. His insights have helped me before, and seeing him in the flesh for the first time I was doing my best impression of a diligent student, scrawling down snippets and key themes of his presentation on managing social and critical distance in testing. It was all going well, until I looked over my first complete sheet of notes and it struck me – they were useless!

Um… what?

A whole load of waffle, a few detached ideas and themes, but otherwise a real waste of effort. If I had spent a little longer listening and less frantically transcribing, I may have crystallised my own personal takeaways in a few more concise (and more complete) statements. And then I looked around, and saw everyone else doing something different.

Various people could be seen scribbling away with coloured pencils, highlighters and felt tips, and many chose to draw out their reactions to the talks being presented rather than producing a blank wall of scribbled handwriting. The notes were oddly beautiful, akin to mind maps or other brainstorming techniques, full of personal connections and imagery meaningful only to the author.

I feel like I’m arriving very late to this particular party, but something which seems de rigueur at software conferences is the mysterious art of sketchnoting.

Sketchnotes are purposeful doodling while listening to something interesting. Sketchnotes don’t require high drawing skills, but do require a skill to visually synthesize and summarize via shapes, connectors, and text. Sketchnotes are as much a method of note taking as they are a form of creative expression.

From Mike Rohde’s The Sketchnote Workbook:

Friends in the sketchnoting community constantly share how they use sketchnotes to document processes, plan projects, and capture ideas in books, movies, TV shows, and sporting events.

Craighton Berman at Core77 does a nice job of describing sketchnotes as:

Through the use of images, text, and diagrams, these notes take advantage of the “visual thinker” mind’s penchant for making sense of—and understanding—information with pictures.

From SketchnoteArmy

Now, I’m a largely verbal thinker. My background is in English Literature and Philosophy, I write poetry and I’m a real devotee of the written word. But the reality is, sketchnoting seems to offer something I don’t have in my arsenal, and even if I decide it’s not for me, I want to take it for a spin! So, in the weeks ahead, I’m going to be learning the dark art for myself.

Below are some of the most promising resources I’ve identified so far. I look forward to giving an update on this in the weeks to come!

From The Sketchnote Handbook

Sketch note resources:
Sketchnotes 101: The Basics of Visual Note-taking
How To Get Started With Sketchnotes
Sketchnoting in Education (great list of resources)

2. Using The Dark Side

Iain Bright was on to a winner in his talk on The Psychology of Asking Questions, comparing testers to Jedi, and testing to using the Force. Who wouldn’t want to be a lightsaber-wielding warrior, against the forces of evil in the galaxy? Evidently most of the testers in the room agreed.

But the Force represents a balance, between the light and the dark. By only ever using one approach – that of positivity and harmony – we fail to acknowledge the “power of the dark side”.


A tester’s main tool is not a crystal-cored laser sword (although admittedly that would be useful in some estimation sessions) – it is the humble question. By asking the right questions, we stimulate thought, encourage fresh perspectives, and develop our own understanding.

Below is an excerpt from Iain Bright’s article on Testing Huddle:

After a bit of research, I found three ‘real-life’ psychological techniques which we probably use but at an instinctive level:

  • Why Not: when refused, ask “Why Not….?” This should only be asked after you have asked yourself “Why?” and you are clear in your objectives.
  • Foot in the door: start off small and build up.
  • Door in the face: make a large request then follow up with a smaller request if the first is refused.
From Psychology of Asking Questions

Whilst my main takeaway from the whole conference was Kim Knup’s excellent points around positivity (combined with Stephen Mounsey’s discussion of styles of listening), there is also something to be said for actively engaging in “dark side” activities – that is, proving something is brittle, be that code, an idea, an approach, a design. Considering this I was reminded of James Bach’s example of the Challenger Shuttle safety committee, who failed to speak up, thereby failing to execute their role. People died because those who were entrusted to do their job were too “light side” to rock the boat.

Testers: Rock the boat! Rattle the cage! Dissent! Disagree! Shout from the top of your lungs “I’m as mad as hell, and I’m not gonna take this anymore!!”


Maybe not that last one. But remember that friction at work is not always bad, and is often a sign people are doing their jobs properly. In my workplace we have a culture of “robust conversations”. These look a lot like arguments to outsiders, the main difference being we are all friends again at the end. That honesty, friction and – let’s face it – brutality lets us get a lot done without a lot of wasted time. It is a lean approach to interaction and in my experience many testers would benefit from a better understanding that speaking up, speaking out and saying “hell no!” is something required in the role.

Dark side resources:
Psychology of Asking Questions
Ritual Dissent (a dark side workshop)
10 Ways to Protect Yourself from NLP Mind Control (amusingly crackpot, but gives a nice “defence against the dark arts” primer with dark side techniques implied throughout)

1. System Monitoring

Gwen Diagram’s sweary, surreal and superb spin through the wonderful world of what I’d have considered infrastructure considerations was a great eye-opener to some of the tools I’ve been missing out on as a tester. Whilst I’ve recently dabbled with performance testing (via JMeter) for the first time, system performance bottlenecks, release strategies and the more techy system tools have otherwise remained in the hands of developers.

Gwen put an end to that with one simple sentence:

Monitoring is a form of testing.

This seems obvious when you put it that way, doesn’t it? I mean… what’s testing about? Providing information about the state, performance and functionality of a system. Discovering what works and what doesn’t, making observations about the quality of what’s there. There are so many definitions of the purpose of testing but whatever school of thought you subscribe to, it seems pretty clear monitoring is a very closely linked activity.

So why have so few testers traditionally looked into the infrastructural level? The answer seems fairly obvious: many testers are not technical, and as such the “stuff behind the code” seems obscure by several dimensions. If we can’t read code, we can’t work out configuration problems… right?

The reality is, modern tools like New Relic and Kibana make the arcane business of interpreting esoteric data strings something anyone can do – often at a glance. A view of the least performant screens in your app? A breakdown of your user base by browser? Why wouldn’t these things be of vital importance to a tester, someone who understands the importance of context in all scenarios?

A great start in this has been talking to my dev team, and a quick chat with the infrastructure guys. Whilst they had some suggestions for the “most relevant” screens for my purposes, those were of varying relevance – a better idea is to put on your explorer’s hat and go for a wander through the various types of data available (ideally with a dev nearby to confirm or deny any technical assumptions you make along the way).

I’m new to this and I don’t pretend to have a great grasp of it yet; however this is a real gap in my CV as a tester and one I’m relishing the opportunity to plug.

System Monitoring resources:
Zen and the Art of System Monitoring
Some of the stuff New Relic can do for Testers
Build impactful Test Automation dashboards using ELK stack

Quiz – Are You a Professional Tester, or a Well-Paid Amateur?

Second in a series of posts summing up my thoughts on the Ministry of Testing’s latest success, TestBash Manchester 2016.

Maybe you test for a living, but does that mean you’re any good? What is it that makes a tester a “professional”, as opposed to a well-intentioned amateur? As James Bach suggests, “novice testers may find some bugs by romping around like kittens”, but surely there’s more to becoming a test maestro than that?

An article which was very useful to me as a fledgling tester came up again last Friday, when its author Huib Schoots presented it as part of his presentation “A Road to Awesomeness”. Something which has always appealed to me is running his 16 (now 18) criteria on myself, to see how near or far I am from his definition. The 18 criteria Huib presented at the conference are as follows:

1. Have a paradigm of testing and can explain approach
2. Love what they do and are passionate
3. Consider context first and continuously
4. Consider testing a human activity to solve complex problems
5. Know that software development is a team sport
6. Know that things can be different
7. Ask questions before doing anything
8. Use diversified approaches
9. Know that estimation is more like negotiation
10. Use test cases and test documentation wisely
11. Continuously study their craft
12. Have courage and refuse to do bad work
13. Are curious and like to learn new things
14. Have important interpersonal skills
15. Have excellent testing skills
16. Have sufficient technical skills
17. Do not fear to learn and are not afraid to make mistakes
18. Happy to share their knowledge

In a future article I’ll measure up my own adherence in each category, but in the meantime, FOR FUN (please don’t kill me), I’ve set up a little quiz around some of the thoughts and ideas in the list, allowing you to “self assess” your professional status. Disclaimer: use at your own professional (or otherwise) discretion, I accept no liability for destroyed careers or inflated egos!

Please don’t refer back to Huib’s list until you’ve completed the quiz. To the quiz:


How did you do? Let me know in the comments below!

Top 3 Big Ideas from TestBash Manchester 2016

First in a series of posts summing up my thoughts on the Ministry of Testing’s latest success.

Note: The summaries below are based on my takeaways from the discussions, and are not necessarily representative of either the original intent or meaning of their authors! I only offer my perspective on what I heard, so my apologies if anyone feels misquoted. With that said…

3. James Bach – The Two Villas of Software Development

James was clearly a big draw at TestBash Manchester, and given the quality of his material and presentation it’s not hard to see why. His talk through the positives and negatives of Social Distance and Critical Distance was referred to throughout the conference, so fundamental were its key points. One of the illustrations of this was the idea of the testing and dev villas.

James suggests that developers operate from a “building mindset” which is not the same one testers generally use. It’s common knowledge that testers are valuable specifically because we approach software differently – James describes our mindset as “defocusing” – and therefore discover things devs wouldn’t necessarily look for or notice. We have distinct “primary roles”, based around our specific strengths; however, that doesn’t mean we should work siloed from devs – far from it: the best testing is often done in collaboration with a developer.

But how to approach this? By learning code and tech, becoming a “mini dev” and thereby losing some of the mindset which makes our testing so valuable? Or by insisting the dev become a “mini tester”, adopt our way of seeing the world and thereby lose some of their advantage? James suggests that roles like these are a “semi-private space” for which one role is most “accountable”, but which others can “alternate into”.

A particularly powerful analogy was that of the “testing villa and development villa”. James suggests that as testers, we can invite devs to the party at our place. In doing so, we do things “the test way”, with a dev in tow, understanding what we’re seeing, investigating alongside us. Once the party is over, the tester clears up – it’s their place, after all – by writing bug reports, devising new tests and otherwise doing the testerly thing.

At the same time, the devs can also hold a party at the dev villa, and testers can come along. In doing so, the tester gets to explore how devs work, see under the hood of the software and understand key decisions and behaviours whilst they are being written (or fixed). Similarly, after this has happened, the dev clears up their place – fixing code, writing new feature elements they’ve overlooked, etc.

By alternating their mindset, the tester is able to provide more value, to see from both sides of the fence and collaborate in both the traditional “test space”, and also while the code is still being written.


2. Stephen Mounsey – Reductive Vs Expansive Listening

I found Stephen to be the most quotable speaker at the conference, dropping such gems as “Testers make other people’s thinking better”, and suggesting that a key role of testers is “creating thinking spaces”, enabling other team members to better work towards their goals.

One idea which stood out for me was that of Reductive vs Expansive Listening. To add a little colour to those:

Reductive Listening is listening for “the bottom line”. Someone is listening reductively if they are only listening in order to find a solution – they want to respond, to give some insight back which the speaker doesn’t have yet.

Expansive listening is listening “for listening’s sake”. It is understanding sometimes people just need to let off steam, and need someone to listen without judgement, or an expectation of finding a solution.

Stephen suggested that whilst we’re used to being highly performant members of our teams, and therefore try to play a part in finding solutions to all the team’s problems, it’s often the case that we’re not being approached for a solution at all.

This reminded me of the notion of rubber ducking. As a tester I’ve often played rubber duck to devs who only needed a sounding board, a non-technical one at that, before they could unlock the solution themselves. By talking things through, whilst I listen expansively – that is without the intent of providing a solution – the dev would solve the problem themselves, uncover a new avenue of investigation, or simply let off enough steam to get on with the onerous or ugly task they felt like talking about in the first place.

It’s a valuable thing to remember that sometimes, you’re not expected to take the team’s problems onto your own shoulders. Sometimes you’re more valuable by choosing the right style of listening, and letting the team solve its own problems.


1. Kim Knup – The Relevance of Positivity

Kim’s talk was a real goldmine of excellent ideas, and a very fresh way of viewing our approach as testers. Rather than suggesting a technology, a new way of writing test cases (more on that in a future post!) or reporting bug metrics, “recovering pessimist and misanthrope” Kim spoke to us about the relevance and power of positivity in the role of Testers.

Whilst testing can seem like an inherently negative, critical exercise – searching for the faults in other people’s work – we can still approach what we do in a positive manner. And whilst there is undeniable pleasure to be had in finding bugs (“Us testers get to raise bugs and make the devs cry!”), there may be greater rewards in working together. That can mean a greater focus on positive testing, to confirm the features actually do work as expected rather than simply seeking failures; it can also mean doing what we do with a smile on our faces, in an attitude of collaboration rather than negativity. To quote James Bach earlier in the day: “Why don’t you love me when I criticise you?!”


With statistics and CBT-based research to back this up, Kim suggests there is a key link between creativity and positivity – that by remaining positive in what we do, we do our jobs better (including designing negative tests). We move from a “fight or flight” mindset, which is the antithesis of collaboration and is often highly confrontational, towards a “broaden and build” mindset, where dev and test are both working creatively towards the same end goal: quality software.

As Kim put it: “90% of long term happiness comes from the way our brain processes the world.” By choosing to adopt a more positive attitude, we not only make ourselves happier, but encourage those around us to do the same.

Kim emphasised the importance of quality information over quantity of bug tickets, and suggested we can form more positive working relationships with devs by simply pointing out bugs, rather than getting caught up in documentation and metrics. I couldn’t agree more! Being a slave to documentation (beyond what’s necessary) is a surefire way to sour the relationship between dev and test, and we should not see our output as “bug tickets and test cases”, but rather “collaborative approaches to software development”.

An oblique angle on what we do, and a really relevant and useful one. Today has been day one of positivity in the role for me – whilst I think some of the devs are expecting a punchline, all told I’m already finding the shift in mindset useful.

James Bach – Don’t Think So Close To Me: Managing Critical and Social Distance in Testing

Stephen Mounsey – Listening: An Essential Skill For Software Testers

Kim Knup – On Positivity – Turning That Frown Upside Down



Hello Worldb – new testing blog in town

( ^ How to drive testers to your blog? I’m going with the irritant marketing approach… ^ )

Hi, I’m Stu, and I test software for a living.

After some brief forays in the past, my recent trip to TestBash Manchester (2016) has left me fired up and very keen to start posting my ramblings and observations on the weirder than weird world of quality assurance, software testing, quality testing, software assurance and every contentious combination, rebuttal and restoration of the above.

I arrived at testing via the wholly unexpected Product Ownership/UAT (although I’d never have known to call it that at the time) of a small in-house project at a college where I was an administrative coordinator. Through the Open University’s inaugural “Testing Academy” I was driven through the ISTQB, plopped in a very-very-not-at-all-waterfall SIT team, then moved upstream to a somewhat more agile scrum team.


Since then, I’ve moved on to a very-very-actually-agile (our whole business – sales and services teams included – is fundamentally agile) team developing SaaS Smart Scheduling software for tier 1 and 2 retail customers. Whilst the coding side of the development team understood agility, the test side was deeply old school, and much of my role has been building respect for, understanding of, and use of modern test practice across the test team, dev team and wider business. This has included exploratory charters, 3 Amigos (I was the permanent test presence on the panel) and beginning to drive automation.


I know I’m still a beginner – I love that – and that there’s a huge amount I don’t know about the test space. Whilst I’ve wowed the business by introducing exploratory test charters and scrapping some of the old, wrought-iron processes, what I’ve been doing up to now has been well within my comfort zone as a tester.


Made by my current lead dev, honestly wonder if the compression artefacts everywhere were deliberate

This blog is the first step in resolving that; it’s a place to publish my projects going forward, to drive myself to comment and to build on what I know, and to share what I discover along the way. I’ll try and bring you something worth reading, if you have an interest in test, and hopefully in time I’ll start sharing ideas of my own. You can also find me on twitter @tzb.

Outside of my professional life I spend most of my money on travel; this year I’ve done Spain, Norway (2000 miles of it, at least), Sweden and Denmark, and next week I’m off to Svalbard for a few days of the polar night, before Stockholm at Christmas. I have two beautiful children (currently 3 and 4) and a very wonderful partner in crime called Mary. I live in Woburn Sands, near Milton Keynes, where I work.

Anyway, enough about me. Just wanted to say a quick hello.