Test Management Forum - 28 January 2015
The 45th Test Management Forum will take place on Wednesday 28 January 2015 at the conference centre at Balls Brothers, Minster Pavement.
Generously sponsored by Quotium and Grid Tools.
Llyr Wyn Jones, Grid Tools, "A Critique of Testing (Design)"
Niels Malotaux, Project and Management Coach, "How to Move towards Zero Defects"
Adam Brown, Quotium, "Agility in Security Testing"
Stephen Janaway, The Net-a-Porter Group, "How to Focus On Testing When There Are No Test Managers"
Mark Gilby, Sopra, "Non Functional Testing in an Agile world"
Joanna Newman, Ericsson, "Tapping millennials: Techniques to attract, retain and motivate millennials"
Joanna Newman, Ericsson, Tapping millennials: Techniques to attract, retain and motivate millennials
As Test Managers, you probably already have some millennials* on your team with cutting-edge skills. Or perhaps you will be recruiting soon and want to understand what millennials look for in a role, to ensure you’re attractive to them. Either way, the millennials are coming, bringing with them skills in high demand coupled with very clear requirements of the workplace (not least the opportunity to jump to new roles and opportunities much faster – which in some ways makes them even more suitable for test roles). Come to this session to hear the latest research on millennials and tips for integrating them seamlessly into your existing team.
This session will also include a substantial Q&A for us to share our experiences and learn from one another.
*Millennials are the generation formerly known as “Generation Y”, sometimes called the “digital generation”.
Llyr Wyn Jones, Senior Programmer, Grid Tools, A Critique of Testing (Design)
There are many tools and techniques out there for designing the perfect test cases, but until now there have been few objective analyses of how they compare to each other. Part of the problem is the absence of a generic framework in which objective analysis is possible. Using a mathematical framework based on information theory, this presentation will cover as many test case design techniques as possible. One key criterion is the idea of measuring the amount of application knowledge that can be encoded into the test case design: it can be shown that the quality of testing is directly proportional to the amount of information encoded. This talk will give a brief overview of the framework (with no intimidating mathematics) and will go on to discuss how each of the techniques under review fares against the criteria. Unsurprisingly, formal models fare very well under this treatment, and further benefits of such modelling will also be outlined.
Stephen Janaway, Net-a-Porter Group, How to Focus On Testing When There Are No Test Managers
About three months ago my employer decided to remove the Test Management role and transition to a purely product-management-based structure. This has meant many changes to testing, touching everything from test management to test planning and test execution. In this session we will look at what changes were made, why and how the changes happened, and discuss whether this new view of test management is a better fit for the future in general.
I believe test management needs to adapt more quickly to the new world of cross-functional, agile teams and continuous delivery. I hope through this workshop and structured discussion that the audience will be able to compare the changes that I've experienced to their own situation, and decide whether there's a need for change in their own roles and the Test Manager role in general.
- The role of Test Management in the agile world today.
- How Test Managers can adapt.
- Ways in which the whole team benefits from a new approach to the Test Management role.
- How to form a strong testing community and foster bottom up learning.
- Sharing of experiences of having gone through the move away from Test Manager.
- How you can prepare for the future.
Niels Malotaux, Project and Management Coach, How to move towards Zero Defects
How many defects would you like to find? How many issues would you find acceptable for users to experience? If you don’t think the answer should be “Zero!”, you’d better come and discuss, otherwise your team may gradually be put out of business by those who did learn to achieve Zero Defects.
Of course it’s the development team that should prevent defects and make sure the users don’t experience any problems. Some testers fear that if there are no defects, they may no longer be needed. Don’t worry. Instead, we should discuss what testers can do to help the developers achieve this goal. Testing becomes even more challenging and interesting once no issues are found.
In this session I’d like to discuss with you what Zero Defects actually means: that it is not ‘flipping a switch’ upon which we suddenly stop making mistakes, but rather an attitude, one which improves the quality of the deliveries dramatically almost overnight; what ‘Root Cause Analysis’ really means, as there seems to be some misunderstanding here, even among testers; and how a simple mantra, “No questions, no issues”, proved a good technique for turning the Zero Defects concept into practice.
Adam Brown, Quotium, Agility in Security Testing
In the run up to Christmas we saw some interesting details about the affairs of a large media company, from internal emails to wage discrepancies and casual racism. As a member of the public, this might be interesting, but not being directly affected, it probably wouldn't keep you awake at night. But if Santa brought your kids a certain console and you tried to connect it to certain online games - you would think differently. At that point it becomes personal.
Security is of course risk based; what's really at risk is data. All organisations have critical data and all that data has risks associated with confidentiality, integrity and availability (CIA). Testers have a lot of expertise when it comes to risk that I believe could be very valuable when calculating and managing application security risk.
For every build and every release of every application that risk is present and must be managed. If we test an application at one point in time we can say we know the state of that application at that time, but what can we say about the next release? As we did with testing, we should build a process around application security, and judging by the leaks we see in the press, that process isn't always there.
In the talk I'd like to get us started with some real-world examples from the press and some of the top 10 application security risks today, and then discuss:
- How can we calculate the cost of data risk and what is the cost of security testing in an agile project?
- What can we do to reduce risk in each release of an application? Is it important to test each build?
- What methods, technologies and services exist to help evaluate the security of an agile application?
Other abstracts to follow.