Test Management Forum - 29 July 2015

The 47th Test Management Forum will take place on Wednesday 29 July 2015 at the conference centre at Balls Brothers, Minster Pavement.


Timetable

13:00 Tea/Coffee
14:00 Introductions
14:15

Session A

Mike Bartley, Test and Verification Solutions, 'Constrained Random verification for Software'


Session B

James Thomas, Linguamatics, 'You're Having a Laugh'

Session C

Mark Winteringham, MW Test Consultancy, 'What's so great about WebDriver?'

15:30 Tea/Coffee
16:00

Session D

James Walker, Grid-Tools, 'Is size everything? An investigation of testability in regards to software complexity'

Session E

Chris Ambler, Edge Testing Solutions, '1984 in 2015'

Session F

Paul Gerrard, Gerrard Consulting, 'State of the Automation'

17:15 Drinks Reception

Programme

James Thomas, Linguamatics, 'You're Having a Laugh'

As a test manager I don't test as much as I'd like to, so I try to find ways to stay loose and ready for those occasions when I get the chance.

In this talk I'll describe one activity, based on joking, that I think can fit the bill. How? Well, the punchline for a joke could be a violation of some expectation, the exposure of some ambiguity, an observation that no one else has made, or simply a surprising connection. Jokes can make you think and then laugh. But they don't always work. Does that sound familiar?

It started with the weekly caption competition at Linguamatics, where I noticed parallels between my approach to it and to testing. For instance, I might take each of the key entities in the picture and "factor" them: generate a list of features, related concepts, synonyms and so on. In testing I might then look for overlapping factors as potentially interesting test ideas; in the quest for a caption I might use the same approach to find an ambiguity and hence a joke. Doing this, I've found analogies between joking and testing concepts such as oracles, heuristics, factoring, stopping strategies, bug advocacy, and the possibility that what is a bug today, in this context, might not be one tomorrow or in another.
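
To make the factoring idea concrete, here is a minimal sketch; it is not from the talk, and the entities and factor lists are invented. List factors for each entity, then look for overlaps as candidate ambiguities, each of which might become a punchline or a test idea.

```python
from itertools import combinations

# Hypothetical factor lists for two entities in a picture (or a product).
factors = {
    "login form": {"field", "validation", "entry", "submission", "lockout"},
    "front door": {"key", "lock", "entry", "bell", "letterbox"},
}

# Overlapping factors hint at an ambiguity: a possible joke, or a test idea.
for (a, fa), (b, fb) in combinations(factors.items(), 2):
    shared = fa & fb
    if shared:
        print(f"'{a}' and '{b}' overlap on: {', '.join(sorted(shared))}")
        # e.g. "entry": what does it mean for each entity, and is it testable?
```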

I'm interested to find out from the audience what things they "just do" that they feel helps them.

Mark Winteringham, Surevine, 'What's so great about WebDriver?'

As testers we seem doomed to make the same mistakes again and again when it comes to test automation, because we have a limited range of ideas for solving problems with automation. But what if these limitations aren't the fault of the tester but of the testing community and industry? What if we aren't doing enough to promote different tools and ideas? In this session I will explore this issue in detail before opening up a discussion about how we, as a community and an industry, can do better at promoting different tools and approaches.
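
For readers who haven't used the tool under discussion, here is a minimal WebDriver script using the Python bindings for Selenium. The URL and the check are placeholders, and a locally installed browser driver is assumed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome and a matching ChromeDriver are installed
try:
    driver.get("https://example.com")
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text  # a trivial check standing in for a real one
finally:
    driver.quit()
```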

Chris Ambler, Edge Testing Solutions, '1984 in 2015'

Sometimes things come together in my mind and end up as big questions. The whole Big Data concept has made me think about George Orwell's 1984 and how powerful information really is in our technology-driven world. We are already seeing how data can be used to manipulate habits and drive people's thinking. The Internet of Things is leading us towards a 'big brother' world in which gathering information becomes ever easier, with storage methods and technology improving every day. I'm not trying to paint a bleak picture of the future; rather, I want to provoke thinking about the things we need to get under control to ensure our future technologies work for us and not against us. This talk discusses these issues and how we, as testers, need to deal with them.

Mike Bartley, Test and Verification Solutions, 'Constrained Random verification for Software'

Based on the paper 'Constrained Random verification for Software'.
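
For readers unfamiliar with the technique, constrained-random verification (borrowed from hardware verification) generates random stimuli within declared constraints and checks the system's response against an oracle. The sketch below is illustrative only and is not taken from the paper; every name in it is hypothetical.

```python
import random

def random_transfer():
    # Unconstrained random stimulus: deliberately wider than the legal space.
    return {"amount": random.randint(-1000, 10_000),
            "currency": random.choice(["GBP", "EUR", "XXX"])}

def satisfies_constraints(t):
    # Constraints narrow the stimuli to legal but still varied inputs.
    return t["amount"] > 0 and t["currency"] in ("GBP", "EUR")

def apply_transfer(balance, t):
    # Hypothetical system under test.
    return balance + t["amount"]

for _ in range(100):
    stimulus = random_transfer()
    if not satisfies_constraints(stimulus):
        continue  # rejection sampling: keep only constraint-satisfying stimuli
    assert apply_transfer(100, stimulus) > 100  # invariant acting as a simple oracle
```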

James Walker, Grid-Tools, 'Is size everything? An investigation of testability in regards to software complexity'

Testability is formally defined as the capability of a software product to be validated: in other words, the level of effort required to test a system against its requirements. High testability is desirable because it helps improve quality and detect defects with minimal resources. However, it is often unrealistic to achieve, given the ever-increasing number of inter-dependencies that arise in software through maintenance and enhancement; this accumulation is referred to as software complexity. Improper design can increase software complexity and therefore lower testability. Many methods have been proposed for measuring complexity, yet measuring testability is still often regarded as an unsolved problem in the testing domain. In this session we explore methods for increasing the testability of a system (to localise faults) with minimal resources by exploiting good system design.
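
As one concrete example of the complexity measures mentioned above, McCabe's cyclomatic complexity can be computed from a control-flow graph as M = E - N + 2P (edges, nodes, connected components). The graph below is invented for illustration.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe: M = E - N + 2P for a control-flow graph.
    return len(edges) - len(nodes) + 2 * components

# Control-flow graph of a function with one if/else and one loop.
nodes = {"entry", "if", "then", "else", "loop", "exit"}
edges = {("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "loop"), ("else", "loop"),
         ("loop", "loop"), ("loop", "exit")}

print(cyclomatic_complexity(edges, nodes))  # 7 - 6 + 2 = 3
```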

Paul Gerrard, Gerrard Consulting, 'State of the Automation'

Approaches such as Behaviour-Driven, Acceptance Test-Driven and Test-Driven Development are becoming increasingly popular. DevOps, Test Analytics and production experimentation are emerging and influencing more and more software businesses. The Internet of Things (soon to be 'Everything') is on the horizon and will soon be upon us.

The disciplines of DevOps, with their dependency on automation (of build, test, deployment and monitoring), are succeeding where structured/Waterfall approaches failed and where the 'softer' methods of Agile are inappropriate.

"Automation is the future!" But what exactly is possible and impossible with automation, right here, right now? Where are the DevOps and Continuous Delivery crazes leading us? Where do testers fit? How do testers work in high-paced, automation-dominated environments?

Let's look into the near future and discuss how we survive or thrive.
