
What are Test execution tools in software testing?

When people talk about a ‘testing tool’, it is mostly a test execution tool that they have in mind: basically, a tool that can run tests. This type of tool is also known as a ‘test running tool’. Most tools of this type get started by capturing or recording manual tests; hence they are also known as ‘capture/playback’, ‘capture/replay’ or ‘record/playback’ tools. The idea is similar to recording a television programme and playing it back.

Test execution tools need a scripting language in order to run; the scripting language is essentially a programming language. So any software tester who wants to use a test execution tool directly will need programming skills to create and modify the scripts.

The basic advantage of programmable scripting is that tests can repeat actions (in loops) for different data values (i.e. test inputs), they can take different routes depending on the outcome of a test (e.g. if a test fails, go to a different set of tests) and they can be called from other scripts giving some structure to the set of tests.
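The advantages above can be sketched in a few lines of code. The following is a minimal illustration (not tied to any particular tool, and the function under test is an assumption made for the example): the script repeats an action in a loop over several data values and branches on the outcome.

```python
# Illustrative structured test script: loops over test inputs and
# branches on the result, as a programmable scripting language allows.

def apply_discount(price, percent):
    """Stand-in for the application under test (hypothetical)."""
    return round(price * (100 - percent) / 100, 2)

def run_discount_tests(cases):
    verdicts = []
    for price, percent, expected in cases:   # repeat for different data values
        actual = apply_discount(price, percent)
        if actual == expected:               # take a different route on failure
            verdicts.append("PASS")
        else:
            verdicts.append("FAIL")
    return verdicts

cases = [
    (100.0, 10, 90.0),
    (59.99, 0, 59.99),
    (200.0, 50, 100.0),
]
print(run_discount_tests(cases))  # ['PASS', 'PASS', 'PASS']
```

Because `run_discount_tests` is an ordinary function, it could also be called from other scripts, giving structure to a larger set of tests.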

However, during testing, the tests are not simply played back for someone to watch: the replayed script interacts with the system under test, and the system may react slightly differently when the tests are repeated.

Hence captured tests are not suitable if you want to achieve long-term success with a test execution tool because:

  • The script doesn’t know what the expected result is until you program it in – it only stores the inputs that were recorded, not the test cases.
  • A small change to the software may invalidate dozens or even hundreds of scripts.
  • The recorded script can only deal with exactly the same conditions as when it was recorded. Unexpected events (e.g. a file that already exists) will not be interpreted correctly by the tool.
  • The test input information is ‘hard-coded’, i.e. it is embedded in the individual script for each test.

There are many better ways to use test execution tools so that they can work well and actually deliver the benefits of running unattended automated tests.

There are at least five levels of scripting, which are described below, as well as different comparison techniques:

  • Linear scripts which could be created manually or captured by recording a manual test
  • Structured scripts using selection and iteration programming structures
  • Shared scripts where a script can be called by other scripts so can be re-used – shared scripts also require a formal script library under configuration management
  • Data-driven scripts, where test data is held in a file or spreadsheet to be read by a control script
  • Keyword-driven scripts, where all of the information about the test is stored in a file or spreadsheet, with a number of control scripts that implement the tests described in the file.
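A data-driven script can be sketched as follows. This is a minimal illustration under assumed details (the CSV column names and the `login` stand-in are inventions for the example): the control script stays fixed while the test inputs and expected results live in an external data source.

```python
# Data-driven sketch: test inputs and expected results are read from a
# CSV source by one fixed control script.

import csv
import io

# In a real tool this would be a spreadsheet or file on disk.
CSV_DATA = """username,password,expected
alice,secret,True
bob,wrong,False
"""

def login(username, password):
    """Stand-in for the application under test (hypothetical)."""
    return username == "alice" and password == "secret"

def run_data_driven(csv_text):
    verdicts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        expected = row["expected"] == "True"
        actual = login(row["username"], row["password"])
        verdicts.append("PASS" if actual == expected else "FAIL")
    return verdicts

print(run_data_driven(CSV_DATA))  # ['PASS', 'PASS']
```

Adding a new test case now means adding a row of data, not writing a new script.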

Data-driven scripting is an advance over captured scripts, but keyword-driven scripts give significantly more benefits. They have also been described as ‘control synchronized data-driven testing’.
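The keyword-driven level can be sketched in the same spirit. This is an illustrative example, not any tool’s actual design: each test row names an action (the keyword) plus its arguments, and a dispatch table of small control scripts interprets the rows. The keywords and the tiny in-memory “application” are assumptions made for the example.

```python
# Keyword-driven sketch: the test is a table of (keyword, args) rows;
# control scripts implement each keyword.

app_state = {"logged_in": False}   # stand-in for the application under test

def do_login(user):
    app_state["logged_in"] = (user == "alice")

def do_logout():
    app_state["logged_in"] = False

def check_logged_in(expected):
    return app_state["logged_in"] == (expected == "yes")

KEYWORDS = {"login": do_login, "logout": do_logout, "check": check_logged_in}

def run_keyword_test(rows):
    """Interpret each row; fail as soon as a 'check' keyword fails."""
    for keyword, *args in rows:
        result = KEYWORDS[keyword](*args)
        if result is False:        # only 'check' returns a verdict
            return False
    return True

# In a real tool this table would come from a file or spreadsheet.
test_table = [
    ("login", "alice"),
    ("check", "yes"),
    ("logout",),
    ("check", "no"),
]
print(run_keyword_test(test_table))  # True
```

Here non-programmers can write tests purely as tables of keywords, while the control scripts are maintained separately by those with programming skills.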

Although they are commonly referred to as testing tools, they are actually best used for regression testing, so they could be referred to as ‘regression testing tools’ rather than ‘testing tools’.

A test execution tool mostly runs tests that have already been run before. One of the most significant benefits of using this type of tool is that whenever an existing system is changed (e.g. for a defect fix or an enhancement), all of the tests that were run earlier can be run again, to make sure that the changes have not disturbed the existing system by introducing or revealing a defect.

Features or characteristics of test execution tools are:

  • To capture (record) test inputs while tests are executed manually;
  • To store an expected result in the form of a screen or object to compare to, the next time the test is run;
  • To execute tests from stored scripts and optionally data files accessed by the script (if data-driven or keyword-driven scripting is used);
  • To do the dynamic comparison (while the test is running) of screens, elements, links, controls, objects and values;
  • To initiate post-execution comparison;
  • To log results of tests run (pass/fail, differences between expected and actual results);
  • To mask or filter the subsets of actual and expected results, for example excluding the screen-displayed current date and time which is not of interest to a particular test;
  • To measure the timings for tests;
  • To synchronize inputs with the application under test, e.g. wait until the application is ready to accept the next input, or insert a fixed delay to represent human interaction speed;
  • To send the summary results to a test management tool.
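The masking/filtering feature above can be illustrated with a short sketch. This is an assumption-laden example (the screen texts and the timestamp pattern are invented for illustration): dynamic values such as the displayed time are replaced by a placeholder before comparison, so only meaningful differences cause a failure.

```python
# Sketch of comparison with masking: the on-screen timestamp is not of
# interest to the test, so it is masked out before comparing.

import re

TIMESTAMP = re.compile(r"\d{2}:\d{2}:\d{2}")   # e.g. 09:15:02

def mask(screen_text):
    return TIMESTAMP.sub("<TIME>", screen_text)

def compare(expected, actual):
    return mask(expected) == mask(actual)

expected = "Report generated at 09:15:02 - 3 records"
actual   = "Report generated at 14:47:33 - 3 records"
print(compare(expected, actual))  # True: the times differ but are masked
```

Without masking, every rerun of the test would fail simply because the clock has moved on.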


All content is copyright of tryqa.com, tryqa.com was earlier called ISTQBExamCertification.com
