9.2.6 Test Tool Deployment
Every automated tool is software in its own right and may have hardware or software dependencies. Whether a tool is purchased as-is, adapted, or created in house, the tool itself should be documented (installation, configuration, and usage instructions) and tested (verified to work correctly in the target environment) before the test results it produces are relied upon. Some tools integrate tightly with the environment, while others work best as stand-alone tools.
When the system under test runs on proprietary hardware or operating systems, contains embedded software, or uses non-standard configurations, it may be necessary to adapt an existing tool, or to develop a custom one (see Section 9.2.8), to fit the specific environment.
Before committing to deployment, perform a cost-benefit analysis that covers both the initial implementation effort and the long-term maintenance cost (see the Cost-Benefit Analysis discussion in Section 9.2.1).
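As a minimal sketch of such an analysis, the following computes the break-even point at which the automation investment is recouped. All figures and the function itself are illustrative assumptions, not data from any real tool evaluation.

```python
import math

def breakeven_cycles(initial_cost, maintenance_per_cycle,
                     manual_cost_per_cycle, automated_cost_per_cycle):
    """Return the number of test cycles after which automation pays off,
    or None if the automated approach never becomes cheaper.
    All costs are in the same unit (e.g. person-hours)."""
    saving = manual_cost_per_cycle - (automated_cost_per_cycle + maintenance_per_cycle)
    if saving <= 0:
        return None  # automation never recoups its initial cost
    return math.ceil(initial_cost / saving)

# Illustrative example: 200 hours to deploy the tool, 2 hours maintenance
# per cycle, 30 hours manual execution vs 10 hours automated execution.
print(breakeven_cycles(200, 2, 30, 10))  # -> 12
```

Note that the maintenance term often dominates over time, which is why it belongs in the per-cycle cost rather than being treated as a one-off.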
During deployment of a test automation tool it is highly inadvisable to automate manual test cases as-is. Instead, rework the test cases so that they exploit the strengths of automation.
Reworking includes structuring the test cases consistently so the tool can process them, factoring common steps into reusable components, and parameterising inputs by replacing hard-coded values with variables so that a single script can be driven by many data sets (data-driven testing). It also means exploiting capabilities that manual execution lacks, such as traversing, repeating, and reordering test steps, and richer analysis and reporting facilities. For most test automation tools, programming skills are necessary to create efficient and effective test scripts and test suites. Large test suites become very difficult to update and manage if they are not designed with care, so appropriate training in the test tools, in programming, and in design techniques is valuable to ensure the full benefits of the tools are realized.
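The parameterisation and data-driving described above can be sketched as follows. The function under test, `validate_discount`, is a hypothetical stand-in; the point is the shape of the data-driven runner, where adding a test case means adding a data row, not writing a new script.

```python
def validate_discount(order_total):
    """Toy system-under-test logic (hypothetical): 10% discount above 100."""
    return round(order_total * 0.9, 2) if order_total > 100 else order_total

# Data table: each row is (input, expected result). The same script
# covers all rows instead of one hard-coded script per input.
CASES = [
    (50.00, 50.00),
    (100.00, 100.00),   # boundary: no discount at exactly 100
    (200.00, 180.00),
]

def run_data_driven_tests():
    """Execute every row; return the rows that failed."""
    failures = []
    for order_total, expected in CASES:
        actual = validate_discount(order_total)
        if actual != expected:
            failures.append((order_total, expected, actual))
    return failures

print(run_data_driven_tests())  # -> [] when all cases pass
```

Most commercial and open-source test frameworks offer this pattern natively (for example, parameterised test fixtures), but the underlying idea is the same separation of test logic from test data.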
Even after test cases have been automated, it is important to periodically step through them under close observation, for example by executing the equivalent steps manually or watching the automated run in detail. This retains the team's knowledge of what each test actually checks and guards against tests that pass without genuinely verifying anything, a failure mode that an unattended green run will never reveal.
As tool usage matures and the number of test scripts grows, there may be a need to extend the tool with capabilities it lacks, such as test data generation, result comparison, or reporting, by integrating it with other tools.
Such integration is not always possible, for two distinct reasons. First, tools do not always have open interfaces through which other tools can exchange data with them or drive them. Second, some tools use proprietary, non-standard scripting languages; when a tool's scripts are written in a widely used language instead, programming skills transfer between tools and the scripts can be ported or reused if the tool is ever replaced.
It is wise to prefer tools that offer plug-ins to open frameworks or a documented API (Application Programming Interface). This improves, though it cannot guarantee, the longevity of the test scripts as testware: scripts written against an open, documented interface can be maintained, extended, and migrated far more easily than scripts locked into a closed, proprietary environment.
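One common way to exploit such an interface is to keep the test scripts tool-agnostic behind a thin adapter. The sketch below assumes a hypothetical `VendorToolClient` standing in for a real tool's API; if the tool is replaced, only the adapter changes, not the scripts.

```python
class VendorToolClient:
    """Hypothetical stand-in for a specific tool's proprietary API."""
    def run_suite(self, suite_id):
        # A real client would invoke the tool here; this stub just
        # returns a result in the vendor's own format.
        return {"suite": suite_id, "status": "passed"}

class TestRunner:
    """Tool-agnostic interface that the test scripts depend on.
    Swapping tools means writing a new backend, not rewriting scripts."""
    def __init__(self, backend):
        self._backend = backend

    def execute(self, suite_id):
        result = self._backend.run_suite(suite_id)
        return result["status"]

runner = TestRunner(VendorToolClient())
print(runner.execute("smoke"))  # -> passed
```

The design choice here is simply dependency inversion: the large, long-lived asset (the script base) depends on a stable interface the team controls, while the volatile part (the vendor API) is confined to one replaceable class.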
For each type of tool, regardless of the phase in which it is to be used, consider the characteristics listed below. Not every characteristic applies to every tool type, so begin by selecting the relevant subset for the tool in question. A tool can be weak or strong in each area, and scoring similar tools against the same characteristics makes their capabilities directly comparable, both when evaluating candidate tools and when building one in house.
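Such a characteristic-by-characteristic comparison can be made concrete as a weighted scoring matrix. The characteristic names, weights, and scores below are illustrative assumptions only, not an evaluation of any real tools.

```python
# Weights reflect how much each characteristic matters in this project
# (hypothetical values on a 1-5 scale).
WEIGHTS = {"reporting": 3, "open_api": 5, "script_language": 4, "maintenance": 4}

# Scores per candidate tool on the same 1-5 scale (hypothetical).
CANDIDATES = {
    "ToolA": {"reporting": 4, "open_api": 2, "script_language": 3, "maintenance": 3},
    "ToolB": {"reporting": 3, "open_api": 5, "script_language": 4, "maintenance": 2},
}

def weighted_score(scores):
    """Sum of weight * score over the shared characteristic list."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

ranked = sorted(CANDIDATES, key=lambda t: weighted_score(CANDIDATES[t]),
                reverse=True)
for tool in ranked:
    print(tool, weighted_score(CANDIDATES[tool]))
```

Because every candidate is scored against the identical list, the totals are directly comparable; changing the weights lets the same matrix be reused for projects with different priorities.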