Blogs from Other Sites
The BCS Specialist Interest Group in Software Testing recently held its December Day Conference. With automation expert Dot Graham as keynote, the day naturally had a focus on test automation.
Mike Bartley, Programme Secretary of the SIGiST, has written up his comments from the day. The SIGiST is also looking for speakers for its March event; if you are interested in speaking, please contact Mike at firstname.lastname@example.org.
“I’m new to testing – where do I start?”
I get asked the above question A LOT. It’s a very common question for those who are brand new to testing, those who are shifting from another business function, and those who are returning to testing after many years away.
I repeat roughly the same answers quite often so I’m writing a blog post to point people at.
Of course, a little self-promotion – if you do nothing else then buy my own remaining relevant book. It is packed with ideas on how to learn and how to network, plus hints and tips on how to rock an interview. You could consider it a much longer-form version of this blog post.
Join the Software Testing Club. Period.
Check out the AST training courses and learn as much as possible.
Follow the big bad list of test bloggers being curated by the Software Testing Club – there are a lot of them – mostly good – pick and choose carefully though.
Download and read my Blazingly Simple Guide To Web Testing – all of the hints and tips were created from bugs I’ve found in the past.
Join Twitter and follow the #softwaretesting and #testing hashtags – find interesting people and follow them.
As part of the above question I also often get asked what the day-to-day activities of a tester are.
It’s tricky to say what the day-to-day activities of a tester commonly are, as the role is so incredibly varied. You might be following pre-defined scripts and checking that the software matches the test case.
You might be exploring the product to discover what it does. You might be analyzing specs, writing user stories, writing automated tests, performance testing, security testing, doing customer visits, studying usability and a whole host of other stuff. You might do some of these things during one working week at some companies, you might do nothing but following scripts at others.
The industry is so varied that I would suggest, if you can, taking the time to carefully choose the testing role you want. I would always suggest seeking out companies that put exploration and learning above scripted testing, but not everyone has the luxury of holding out for such companies.
Some companies will insist on a certification. It’s your choice as to whether you want to get one. I’m not a fan – but I’m a realist – some companies require them – and if you need a job then go for it. But take the certification for what it is – a certification that you sat the course and got a favorable result. It is NOT a marker of excellence and shouldn’t be your single point of learning.
If you follow some of the above you’ll encounter people and communities that will help you find the resources you need, the people you need to know and hopefully the sources that can help you skill up in the right way. You might even land a job through your networks and community.
It's not a typo, the title. I've been thinking about how - like a professional critic - a professional tester frequently needs to bridge the gap between the connoisseur and the consumer, to take the desires, constraints, needs, sensitivities and complaints of both into account when trying to make sense of and assess a product.
I used to spend a lot of time on music: I'd listen to tens of new records every week and go to gigs all the time. I could, and did, talk about subtleties of a drum sound, listen to tracks because of who produced them, argue about whether this or that 12" single was dark acid techno or acid dark techno or dark/acid techno or whatever. I would play records because I appreciated technical aspects of the sounds on them as much, if not more than, the music. I would deliberately seek out ever more subtle or challenging sonic experiences, such as Dog Pound Found Sound (which I actually played on the radio). Or records made by someone linked to someone who had once played with someone I had once liked a record by. I was a connoisseur.
My wife often thought the records sounded the same, or were simply crap. What she did like she just knew she liked, usually a decision based purely on how what she heard made her feel. Any aspect of how or why or where it was made was irrelevant to her. She was a consumer.
When I wrote about or broadcast a piece of music I, as the critic, would attempt to respect the depth of knowledge and zealousness of the connoisseur and try to place records in some kind of context that would make sense to the consumer, while at the same time not compromising the integrity of either party. Similarly, when testing, it can be important to bear in mind both the internalised view of the product and the end user view (or rather views) of it. Being able to do this, and trade them off against one another, is a skill, a craft, an art.
One example; here's the bones of a particular kind of conversation I've had many times over the years:
Person 1: When I do A, the product does B. This was a surprise to me.
Person 2: Ah, yes. It does B because of X, Y, Z. It's how we were asked to do it. It's expected.
Person 1: But as far as I can see, A has nothing to do with X; Y is not visible at this point; Z is a concept that only exists in the product internals.
Person 2: Yes, but that's how it was requested and implemented. It's expected.
Person 1: But when I do A, getting B looks wrong when all I have to go on is X.
Person 2: Yes, but given how we coded it - which was according to the requirement - it's expected.

I have played both roles in this going-nowhere-fast dialogue. When I was a software engineer - or, to avoid offending actual software engineers, when I wrote the code - I was more often Person 2, a specialist with detailed knowledge and perhaps an entrenched position. (The specification as a shield. Discuss.)
As a tester I have sometimes found myself as Person 2 when discussing a feature I'm working on with someone who isn't working on it. When I've invested time, effort and emotional capital I can become defensive; closer to the end of the cycle I can become more of an apologist for flaws. Even on a feature I feel free to criticise myself - and probably have done at length - I can feel uncomfortable with others' criticism of it. Even when I agree with the criticism I can feel obliged to justify the observed behaviour.
In my experience the Person 1 role seems to come naturally to those with less baggage or, perhaps better, background. They can be gloriously indignant at the slightest provocation, focussed on the detail that offends them to the exclusion of all others and unreceptive to any justification. It seems to be a natural trajectory that, once they've been around the product for a while and inevitably been inducted into the world of Person 2, it becomes hard for them to decline its comforting, if stifling, embrace. (The specification as blinkers. Discuss.)
At the other end of that trajectory, I find that testers with more experience can tend to become more comfortable maintaining both Person 1 and Person 2 positions simultaneously, expressing both without prejudice, balancing them out in context. As it happens, I don't think experience is necessary for this duality, but I do think self-awareness is and that frequently comes with experience.
So, what kind of person will you be today? A connoisseur, a consumer, a critic?
I’m a big fan of the mind mapping tool Mindmup and logged in today using the Opera browser.
Here’s what I saw:
This is an excellent approach to communicating the limitations and restrictions around testing – you wouldn’t expect any less from Gojko (one of the guys behind Mindmup).
It’s a great way of setting expectations but without limiting the choices made by the end users. I can still choose to continue using Opera, or I can switch to one of the other stated browsers. I have a choice – but I also know it might not perform as the developers expected.
For many companies it’s tricky to simply say “no” to supporting the mass of different browsers now available, so they try to test them all. Using web analytics and analysis it’s now possible for many web companies to work out which browsers their customers actually use (and how many people use each), and then test against those.
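As an illustration of that analytics-driven approach, here is a minimal Python sketch. The browser names and visit counts are made up, and `browsers_to_test` is a hypothetical helper, not a real analytics API: it simply picks the smallest set of browsers, most popular first, that covers a chosen share of traffic.

```python
from collections import Counter

def browsers_to_test(visits, coverage=0.95):
    """Return the most-used browsers, in order, until the chosen
    share of total traffic is covered."""
    total = sum(visits.values())
    chosen, covered = [], 0
    for browser, count in Counter(visits).most_common():
        chosen.append(browser)
        covered += count
        if covered / total >= coverage:
            break
    return chosen

# Made-up monthly visit counts, as exported from a web analytics tool
stats = {"Chrome": 6200, "Firefox": 1900, "Safari": 1100, "IE11": 500, "Opera": 300}
print(browsers_to_test(stats))  # ['Chrome', 'Firefox', 'Safari', 'IE11']
```

In this made-up data Opera falls below the threshold, which is exactly the Mindmup situation: still usable, just not in the tested set.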
One of the things that I have observed from a number of testing conferences is that none of them have any sustained focus on hiring or getting hired *.
There have been one or two sessions about the topic of hiring but nothing sustained.
The occasional tracks that I have seen have been mostly focused on the hiring strategies of big corporates, where bums on seats is sometimes more important than cultural team fit.
Most testers don’t know how to get hired – I wrote a book to help bridge that gap. Those that do know how to get hired are truly in the minority and appear, at least on the surface, to be overall better testers. Mostly this is not true – they are good, but they are often no better at testing than others, it’s just they are much better at getting hired. Getting hired is a skill.
Hiring and getting hired is a vast topic, and one fraught with contextual challenges, but I believe that a dedicated set of talks from hiring managers in a wide variety of contexts, and maybe some sessions and tutorials on writing CVs, interviewing and so on, would go down well at most testing conferences. It’s great being good at testing, but how do you then go on and get hired…
There are supporting topics such as social profiles, writing clear CVs, networking, self education and interpersonal communication that might also make interesting tracks. Or maybe they wouldn’t. Maybe people go to testing conferences to learn about testing and not the other stuff that comes with our working world…
What are your thoughts?
* The conferences that I have been to
I’ve often wondered why we don’t have more centralized reports about the state of testing and future trends that are somewhat less biased than some of the big vendor reports out there.
If enough people respond it could be quite an illuminating report, which will hopefully show that the industry is advancing and changing to meet economic and business demands.
I hope it tells us that. I can keep my fingers crossed.
I’m putting my support behind this survey as I think the results will be interesting. Let’s see how it goes.
At the moment the survey is not live, but you can subscribe to the QA Intelligence blog if you want to keep updated, or if you don’t want to subscribe you could keep an eye on Twitter for updates.
I’ll be sharing the updates (@rob_lambert) and no doubt Joel (@joelmonte) will be also (Joel is the guy behind the QA Intelligence blog).
"Testing helmets the old fashion way" the tweet said.
"It'd be better if that was a brick wall" one of my team said.
"Yeah, that is what the specs asked for" I said.
And how we all laughed, for just a little too long, those sad chuckles of shared recognition.
Some people say they “don’t have enough requirements to start testing,” or that the requirements are unclear or incomplete or contradictory or out of date. First, those people usually mean requirements documents; don’t confuse that with requirements, which may be explicit or tacit. There are plenty of sources of requirements-related information, and it’s a tester’s […]
I’ve got a second blog which will be feeding out to my main Twitter account. It can be found at – http://idlethoughts.postach.io/
Ok – so why another blog?
Well this is an experiment on two fronts.
Firstly – this blog is connected to my Evernote account. It’s really cool.
Secondly – I’m writing another book and as with most book writing there is inevitably a shed load of research to be done, notes to be made and observations to be aired – this is what the blog is for.
Expect to see interesting learning items, questions about software testing (especially about test management) and things that will inform my next book.
The blog posts on it are essentially a stream of consciousness and will be shorter in form than this Social Tester main site.
Many people seem certain about what happened to cause the healthcare.gov fiasco. Stories are starting to trickle out, and eventually they’ll be an ocean of them. To anyone familiar with software development, especially in large organizations, these stories include familiar elements of character and plot. From those, it’s easy to extrapolate and fill in the […]
Our community is not best served by one single group or organization. [Opinion piece follows]
As an individual it’s important to be skeptical when we have just one single source of learning and direction for our community. If we tie ourselves to a single source (i.e. group, organization, business, scheme) we are tying ourselves to a narrow (and potentially narrowing) point of view.
If we do narrow our focus to a single source we will hinder our knowledge growth and our learning scope. I believe there is another side effect too – the wider community will become more fragmented and distant as we become less tolerant of alternative views… (I have no evidence for this, just observations.)
Groups that were once a mouthpiece and meeting ground for the unheard and diverse minorities soon narrow as they find a niche, or attract a tipping point of like minded people – this is natural which is why there is always room for new groups and communities to emerge to fill the gaps.
As groups narrow they will focus on specific areas. Some of these groups will inevitably try to make money by selling services (or information) to survive, some will just tumble along whilst others will seek external funding. Some will disappear. Some that do disappear will leave a gap to be filled, some will not be missed.
We need to be sure to keep our minds open and notice when we start to become too narrowly focused in our learning and our community involvement. It’s not heresy to switch communities or to exist across several seemingly different communities. In fact, I would positively encourage mixing views and opinions together. Our interests and personas are elastic; we must try not to resist this.
Look at the standardization schemes. In order to scale (i.e. to make money – assuming you believe this is the primary goal of those behind them) the content must be filtered down, made consistent and changed as infrequently as possible (what a bind it would be to re-print the marketing and other collateral every week to keep up with industry innovation).
In order to embark on such a dramatic process those behind it will seek to own the learning material contained within. They may want to protect it. They may want to ensure they are the only ones offering it. They may tell you that you cannot get this learning elsewhere. (note: some communities do this also)
They are wrong. Some, if not all, of the information is available freely (or at least cheaply) to us, on any device or platform we care to consume it from. Not only that but it may be opinionated (in a good and/or bad way), will naturally be diverse (if we look far enough for it) and is hopefully being shared by people actually doing the work. It will therefore change often. This is good.
And as it’s freely available we could, and probably should, mash it around, mix it up, fine tune it, fix it, extend it, delete it, try it, ignore it and make of it what we need it to be. This will be where the giant leaps in our thinking about testing will come from. From us; the testing community mashing together ideas to see what works, and what doesn’t.
And once we’ve made of it what we want then we could share it so that others can do the same. This will lead us to an evolution (or a revolution) in the way we approach testing.
Instead of small incremental improvements on the standards/norms we might see a major sea change and a dramatic shifting of our craft – I look forward to this day.
I believe the testing community needs more people to seek out diversity in our sources of learning and inspiration.
I also believe we could challenge anyone and anything that suggests a single source of information and direction is the right thing for us. We could seek out the free and open source learning that is available to us. We could challenge the old guard and stale approaches to learning (and teaching) of software testing.
We could create a community of interest if one does not exist. We could seek clarity as to whether someone is protecting a mass of knowledge for the right reasons (and no-one should begrudge anyone making a living from selling what they know) or whether it is to seek conformity and standards of the masses.
But most of all we should try hard not to let ourselves sink into the sea of conformity and oblivion that is consuming so many people, where we simply become nodding and compliant members of a single source of direction for our community. I know we can do better. Our craft is evolving and we need more people to help gain the momentum to nudge it towards a diverse future rather than a single path of conformity. We can do that.
A long time ago I coded a now defunct modelling tool to help me with my testing. Half the battle with managing and reporting testing involves deciding how you will model it for the project you work on.
The generic set of formal modelling techniques I use, I often map on to:
Lightweight Subjective Status Reporting
On a recent project we wanted a lightweight way of tracking progress/thoughts/notes over time. I really wanted a subjective 'daily' summary report which provided interested viewers insight into the testing without having to ask.
As part of my normal routine I have become used to creating a daily log and updating it throughout the day, oftentimes creating a summary section that I can offer to anyone who asks.
How to do this using Jira?
We created a custom entity called something similar to "Status Tracking Summary".
Every day, someone on the team would create this, and title it with the date "20 November 2013".
We only really cared about the title and the description attributes on the entity.
The description took the form of a set of bullets that we maintained over the day to document the status e.g.
- waiting for db schema to configure environment
- release 23.45 received - not deployed yet
- ... etc.
Over the day we would maintain this, so at the end of the day it might look like
- db schema and release 23.45 deployed to environment
- initial sanity testing started see Jira-2567
- ... etc.
I initially thought that the title would change at the end of the day to represent a summary of the summary e.g. "Environment setup and sanity testing", "Defect retesting after new release". But this never felt natural and added no real value so the title normally reflected the date.
Typically, as a team of 3-4, we had 5-15 bullets on the list.
Use Dashboards to make things visible
To make it visible, we added a "Filter" on this entity, and added a Filter display gadget to the testing dashboard which displayed the last 2 status updates.
This meant that anyone viewing the testing dashboard could see subjective statements of progress throughout the day, and historical end of day summaries throughout the project.
But people don't like writing reports
I have grown so used to tracking my day through bullets and actions that I take it for granted that everyone can do this. Still, I had initial concerns that not everyone on the team would add to the status and that I might have to chase.
Fortunately that didn't happen.
The team used the Dashboard throughout the day to see what defects they had allocated to them, and to work on tasks and defects in the appropriate state. Therefore they always saw the subjective daily status report when they visited the Dashboard and updating it became a natural task during the day.
You can report daily, with minimal overhead
Very often stakeholders ask us to prepare daily reports. I find that creating, and updating, a summary log throughout the day often satisfies that requirement.
As a team, building it into our documentation process throughout the day added very little overhead and made a big difference to the visibility stakeholders had into our testing.
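If you wanted to script the same idea rather than maintain the Jira entity by hand, the shape of it is simple. This is only a sketch: `DailyStatus` is a hypothetical name of my own, and the actual Jira REST call to create the entity is deliberately left out.

```python
from datetime import date

class DailyStatus:
    """A hypothetical stand-in for the 'Status Tracking Summary' entity:
    a dated title plus a bullet list the team appends to during the day."""

    def __init__(self, day=None):
        self.day = day or date.today()
        self.bullets = []

    def add(self, note):
        self.bullets.append(note)

    @property
    def title(self):
        # "20 November 2013" - matches how we titled the entity
        return self.day.strftime("%d %B %Y").lstrip("0")

    def description(self):
        # Render the running notes as the bullet list shown on the dashboard
        return "\n".join("- " + b for b in self.bullets)

status = DailyStatus(date(2013, 11, 20))
status.add("waiting for db schema to configure environment")
status.add("release 23.45 received - not deployed yet")
print(status.title)  # 20 November 2013
```

The point is not the code but the discipline: one dated record, appended to as the day goes on, rendered wherever people already look.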
I was chatting to someone at EuroSTAR last week and we got talking about personal productivity.
I shared with her my way of working using a concept I’ve been calling Shipping Forecasts. It’s based around the simple premise that I will be shipping something (a project). It is called a forecast because no amount of planning is a guarantee, so I am forecasting about what is involved in shipping this project.
My view is that projects are simply containers for tasks, and completing the tasks is what’s most important. But these tasks should be viewed in the context of what I’m trying to achieve – i.e. why am I doing this project?
Anything worth shipping will take a significant amount of effort and will need some form of forecasting.
This forecasting could be a quick scribble in a notebook or a full-on project plan – a lot depends on your own style and way of working. I like to visualise my work and list out what I believe needs to be done to complete the project.
By breaking a bigger project into smaller chunks we can start to see what is truly involved. I also believe that any project that will take more than about one month should be broken down into multiple projects. Each one of those projects should be shippable, and feedback should be sought before moving on to the next.
In a sense it’s the basics of iterative software development.
I thought I would share with you my Shipping Forecast idea, which I use to break my own projects down into manageable chunks.
Since I’ve started using this technique I’ve been uber productive.
There are times when I get a little lost or don’t feel like producing anything but rarely does a project sink because I didn’t understand it, or couldn’t actually complete it, or didn’t know what was involved in completing it.
A few people have been using the Shipping Forecast for some time now, so the templates have been through a few reviews, but there is always room for improvement – don’t expect the templates and the idea to be complete – I’m still hacking them.
How to use the Shipping Forecast templates
To start with you’ll need to define a project in the format of
This……(date, time period, month, year, etc)
If you cannot fill in these sentences then you need to question why you are doing the project.
The project must have a deadline, otherwise it will meander on and on. Don’t fall into the trap of relying on your own enthusiasm and energy. Most projects require hard work and tiresome commitment – a deadline will help. You don’t always have to specify an exact date, but the information you fill in should mean something to you. For example, “This Week” is fine if you know that your weeks finish on Saturday.
You should be able to describe what it is you are building at a high level. You must know how to recognise the end result. Is it a product? A website? A new blog post? A new t-shirt design? A new test automation tool? You must also think about how complete you need it to be. Are you shipping the finished item, or just phase/design 1 of it?
Your project should also have a reason why you are doing it. I’ve seen too many projects stumble because the project owners didn’t know why they were doing them. Don’t do something because you think you should. Do something because you need to or want to. Why are you bothering to commit to this project?
There are some prompts below the description on the template to help you think about how you will measure your progress, how you will know you are done and whether you are reliant on others. Projects can fail because they rely on other people and these other people didn’t know that.
There is an action section with 20 spaces. If your project takes more than 20 tangible actions that can be marked as complete, then it may be that your project is too large or you have broken the activity down too much.
Some people use this form to work out the 20 activities they need to complete and then break those 20 items down further in another tool, like a To Do list manager. This could work really well but I’ve found that any more than 20 deliverable items to achieve Shipping is just too much. I find it’s better to have more projects and ship each one than try to do too much.
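To make the structure concrete, here is a sketch of the template as a data model. The class and field names are my own invention for illustration, not part of the printable templates: a project with a what, a why, a deadline, and a hard cap of 20 actions.

```python
from dataclasses import dataclass, field

@dataclass
class ShippingForecast:
    """Hypothetical model of one Shipping Forecast sheet."""
    what: str       # what ships - how you'll recognise the end result
    why: str        # why you're committing to this project
    deadline: str   # e.g. "This week", "End of June" - must mean something to you
    actions: list = field(default_factory=list)

    MAX_ACTIONS = 20  # more than this suggests the project is too large

    def add_action(self, action):
        if len(self.actions) >= self.MAX_ACTIONS:
            raise ValueError("More than 20 actions - split into smaller projects")
        self.actions.append({"action": action, "done": False})

    def complete(self, action):
        for a in self.actions:
            if a["action"] == action:
                a["done"] = True

    def shipped(self):
        # Shipped means every tangible action is marked complete
        return bool(self.actions) and all(a["done"] for a in self.actions)

garden = ShippingForecast(
    what="A usable garden",
    why="Somewhere to relax in summer",
    deadline="End of June",
)
garden.add_action("Clear the beds")
garden.add_action("Lay the turf")
garden.complete("Clear the beds")
print(garden.shipped())  # False - one action still open
```

The cap on actions is the interesting design choice: hitting it is a signal to split the project, not to squeeze more in.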
And that’s it. The Shipping Forecast – a tool for helping you work out what you need to do to ship stuff.

Some examples
Here are some examples of Shipping Forecasts that I have done.

Our Garden Project
Here is the PNG (image) of the Shipping Forecast for you to download. I’m working on getting a better quality one created.
Christian Hassa and I are running a free open-space discussion on flexible scope and agile requirements in London on December 6th. We plan to talk about effective user stories, impact mapping, story mapping and so on. If this is of interest, sign up now – we have space only for 20 people.
Five years ago, Lisa Crispin and Janet Gregory brought testing kicking and screaming into agile, with their insanely influential Agile Testing book. They are now working on a follow-up. This got me thinking that it’s about time we remodelled one of our sacred cows: the Agile Testing Quadrants. Although Brian Marick introduced the quadrants a few years earlier, it is undoubtedly Crispin and Gregory that gave Agile Quadrants the wings. The Quadrants were the centre-piece of the book, the one thing everyone easily remembered. Now is the right time to forget them.
XP is primarily a methodology invented by developers for developers. Everything outside of development was boxed into the role of the XP Customer, which translates loosely from devspeak to plain English as “not my problem”. So it took a while for the other roles to start trying to fit in. Roughly ten years ago, companies at large started renaming business analysts to product owners and project managers to scrum masters, trying to put them into agile boxes. Testers, forever the poor cousins, were not an interesting target group for expensive certification. So they were left utterly confused about their role in the brave new world. For example, upon hearing that their company was adopting Scrum, the entire testing department of one of our clients quit within a week. Developers worldwide, including me, secretly hoped that they’d be able to replace those pesky pedants from the basement with a few lines of JUnit. And for many people out there, Crispin and Gregory saved the day. As the community started re-learning that there is a lot more to quality than just unit testing, the Quadrants became my primary conversation tool to reduce confusion. I was regularly using that model to explain, in less than five minutes, that there is still a place for testers, and that only one of the four quadrants is really about rapid automation with unit testing tools. The Quadrants helped me facilitate many useful discussions on the big picture missing from typical developers’ view of quality, and helped many testers figure out what to focus on.
The Quadrants were an incredibly useful thinking model for 200x. However, I’m finding it increasingly difficult to fit the software world of 201x into the same model. With shorter iterations and continuous delivery, it’s difficult to draw the line between activities that support the team and those that critique the product. Why would performance tests not be aimed at supporting the team? Why are functional tests not critiquing the product? Why would exploratory tests be only for business stuff? Why is UAT separate from functional testing? I’m not sure if the original intention was to separate things into those during development and after development, but most people out there seem to think about the horizontal Quadrants axis in terms of time (there is nothing in the original picture that suggests that, although Marick talks about a “finished product”). This creates some unjustifiable conclusions – for example that exploratory testing has to happen after development. The axis also creates a separation that I always found difficult to justify, because critiquing the product can support the team quite effectively, if it is done timely. Taking that to the extreme, with lean startup methods, a lot of critiquing the product should happen before a single line of production code is written.
The Quadrants don’t fit well with all the huge changes that happened in the last five years, including the surge in popularity of continuous delivery, devops, build-measure-learn, the big-data analytics obsession of product managers, and exploratory and context-driven testing. Because of that, a lot of the stuff teams do now spans several quadrants. The more I try to map the things that we do now, the more the picture looks like a crayon self-portrait that my three-year-old daughter drew on our living room wall.
The vertical axis of the Quadrants is still useful to me. Separation of business oriented tests and technology oriented tests is a great rule of thumb, as far as I’m concerned. But the horizontal axis is no longer relevant. Iterations are getting shorter, delivery is becoming more continuous, and a lot of the stuff is just merging across that line. For example, Specification by Example helps teams to completely merge functional tests and UAT into something that is continuously checked during development. Many teams I worked with recently run performance tests during development, primarily not to mess things up with frequent changes – more to support the team than anything else.
Dividing tests into those that support the team and those that evaluate the product is not really helping to facilitate useful discussions any more, so it’s time to break that model.
The context-driven testing community argues very hard that looking for expected results isn’t really testing – instead they call that checking. Without getting into an argument about what is or isn’t testing, the division has been quite useful to me in many recent discussions with clients. Perhaps that is a more useful second axis for the model: the difference between looking for expected outcomes and analysing aspects without a definite yes/no answer, where results require skilful analytic interpretation. Most of the innovation these days seems to happen in the second part anyway. Checking for expected results, both from a technical and business perspective, is now pretty much a solved problem.
Thinking about checking expected outcomes vs analysing outcomes that weren’t pre-defined helps to explain several important issues:
Most importantly, by using that horizontal axis, we can raise awareness about a whole category of things that don’t fit into typical test plans or test reports, but are still incredibly valuable. The 200x quadrants were useful because they raised awareness about a whole category of things in the upper left corner that most teams weren’t really thinking of, but are now taken as common sense. The 201x quadrants can help us raise awareness about some more important issues for today.
That’s my current thinking about it. Perhaps the model can look similar to the picture below.
What do you think?