Online Test :: (Intermediate) Programming Test
Total Number of Questions: 67
Test Analysis and Design.
Test Implementation and execution.
Test Closure Activities.
Evaluating the Exit Criteria and reporting.
Statement coverage.
Cause and effect coverage.
Multiple condition coverage.
Decision coverage.
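The coverage levels listed above differ in strength. A minimal Python sketch (the function `grant_discount` is hypothetical, invented for illustration) shows a suite that reaches 100% statement coverage while still missing a decision outcome and condition combinations:

```python
def grant_discount(age, is_member):
    # Hypothetical function used only to illustrate coverage levels.
    discount = 0
    if age >= 65 or is_member:
        discount = 10
    return discount

# This one test executes every statement (100% statement coverage)...
assert grant_discount(70, False) == 10

# ...but decision coverage also needs the False outcome of the if,
# and multiple condition coverage needs every combination of
# (age >= 65) and is_member.
assert grant_discount(30, False) == 0    # decision outcome: False
assert grant_discount(30, True) == 10    # another condition combination
```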
Usability Assessment.
Installation Test.
Coverage Analysis.
Code Inspection.
A fault.
An error.
A failure.
A mistake.
Test Planning and control.
Test Analysis and Design
Evaluating exit criteria and reporting.
Equivalence class partitioning.
Boundary value testing.
Decision tables.
Boundary value testing AND Equivalence class partitioning.
18, 29, 30
29, 30, 31
17, 29, 31
17, 18, 19
Designing the Tests.
Comparing actual results.
Creating test suites from the test cases.
Executing test cases either manually or by using test execution tools.
A set of test cases for testing classes of objects.
An input or output range of values such that every tenth value in the range becomes a test case.
An input or output range of values such that each value in the range becomes a test case.
An input or output range of values such that only one value in the range becomes a test case.
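The last option describes equivalence partitioning: one representative value per partition stands in for the whole range. A minimal sketch, assuming a hypothetical input field that accepts integers 1..100:

```python
def pick_representatives(partitions):
    # Equivalence partitioning: pick a single representative value
    # from each partition instead of testing every value in it.
    return [values[len(values) // 2] for values in partitions]

# Hypothetical field accepting integers 1..100: three partitions.
invalid_low = list(range(-10, 1))      # values below the valid range
valid = list(range(1, 101))            # the valid range itself
invalid_high = list(range(101, 111))   # values above the valid range

tests = pick_representatives([invalid_low, valid, invalid_high])
# One test case per partition: [-5, 51, 106]
```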
A small team to establish the best way to use the tool.
The independent testing team.
Everyone who may eventually have some use for the tool.
The managers to see what projects it should be used in.
Supplements formal test design techniques.
Is not repeatable and should not be used.
Can only be used in component, integration and system testing.
Is only performed in user acceptance testing.
Modules are not tested by the team again and again.
Major decision points are tested early.
None of the mentioned.
No stubs need to be written.
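The stub mentioned in the last option is a stand-in for a lower-level module that is not yet integrated; top-down integration relies on them, bottom-up does not. A minimal sketch with hypothetical names (`checkout`, `payment_gateway_stub`):

```python
# Top-down integration tests high-level modules first; lower-level
# modules that are not yet integrated are replaced by stubs.
def payment_gateway_stub(amount):
    # Stub for an unfinished lower-level module (hypothetical;
    # the real module would contact an external payment service).
    return {"status": "approved", "amount": amount}

def checkout(amount, gateway):
    # High-level module under test; the gateway is injected so the
    # stub can take the real module's place during integration testing.
    result = gateway(amount)
    return result["status"] == "approved"

assert checkout(25, payment_gateway_stub) is True
```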
Incidents are raised when expected and actual results differ.
Incident resolution is the responsibility of the author of the software under test.
Incidents may be raised against user requirements.
Incidents require investigation and/or correction.
Each test stage has a different purpose.
It is easier to manage testing in stages.
We can run different tests in different environments.
The more stages we have, the better the testing.
To ensure that the test case specification is complete.
To set the criteria used in generating test inputs.
To plan when to stop testing.
To know when test planning is complete.
Software fault.
Testing fault.
Documentation Fault.
Environment Fault.
It decreases the software development speed.
It is usually conducted by the development team.
It can’t be expected to catch every error in a program.
In this, the tester evaluates whether individual units of source code are fit for use.
Combination of all.
Black box.
Grey box.
White box.
Integration Testing.
System Testing.
Unit Testing.
Acceptance Testing.
Facilities to compare test results with expected results.
The precise differences in versions of software component source code.
Restricted access to the source code library.
Linkage of customer requirements to version numbers.
None of these.
Defect.
Failure.
Both Failure and Defect.
Being diplomatic.
Able to be relied on.
Able to write software.
Having good attention to detail.
Smoke and sanity tests can be executed using an automation tool.
Sanity Testing is also called tester acceptance testing.
When executing both, first execute the sanity tests and then the smoke tests.
Smoke testing performed on a particular build is also known as a build verification test.
It states that modules are tested against user requirements.
It only models the testing phase.
It specifies the test techniques to be used.
It includes the verification of designs.
Improve supervision; more reviews of artifacts or the program mean stage containment of the defects.
Extend the test plan so that you can test all the interdependencies.
Test the interdependencies first, then check the system as a whole.
Divide the large system into small modules and test the functionality.
Analysis and Design.
Implementation and execution.
Planning and Control.
Evaluating exit criteria and Reporting.
A distinct set of test activities collected into a manageable phase of a project.
A test environment comprised of stubs and drivers needed to conduct a test.
A high level document describing the principles, approach and major objectives of the organization regarding testing.
A set of several test cases for a component or system under test.
Lifestyle.
Management.
Vocabulary.
Internal.
Are most useful in uncovering defects in the process flows during the testing use of the system.
Are most useful in covering the defects at the Integration Level.
Are most useful in uncovering defects in the process flows during real world use of the system.
Are most useful in covering the defects in the process flows during real world use of the system.
The answer depends on the risks for your industry, contract and special requirements.
The answer should be standardized for the software development industry.
The answer depends on the maturity of your developers.
This question is impossible to answer.
Test manager and project manager faults.
Test lead faults only.
Test manager’s faults only.
Testers faults only.
Expected outcomes should be predicted before a test is run.
Expected outcomes include outputs to a screen and changes to files and databases.
Expected outcomes are defined by the software’s behavior.
Expected outcomes are derived from a specification, not from the code.
Retesting
Sanity Testing
Regression Testing
Ad hoc Testing
alpha testing
beta testing
None of the mentioned
regression testing
Error Guessing Technique
Design Based Testing
Experience Based Technique
Structural Testing
Greybox testing
Test Automation
White box testing
Beta Testing
Testing a system feature using only the software required for that action
Testing a system feature using only the software required for that function
Testing a system feature without using the software required for that function
Testing quality attributes of the system including performance and usability
Both Performance Testing and Usability Testing
Performance Testing
Usability Testing
System Testing
Are the supporting utilities, accessories and prerequisites available in forms that testers can use?
Is the test environment (lab, hardware, software and system administration support) ready?
Are the necessary documentation, design and requirements information available that will allow testers to operate the system and judge correct behavior?
All of these
Only uses components that form part of the live system.
Tests interactions between modules or subsystems.
Tests the individual components that have been developed.
Tests interfaces to other systems
Functionality Testing
Security Testing
Recovery Testing
Quality Control
Verification
Quality Assurance
Validation
Product Metric
Test Metric
Process Metric
None of these
Labels of the username and password fields must be visible.
Check the field validation of the username and password.
Check that the password is displayed in encrypted (masked) form.
Check that the URL of the login screen is working.
To prevent propagation of defect in next level
To gain the confidence in the system
To find defects during dynamic testing
To meet project deadline
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side-effects
Re-testing uses different environments, regression testing uses the same environment
Re-testing looks for unexpected side effects; regression testing is repeating those tests
Re-testing is done after faults are fixed; regression testing is done earlier
Neither Validation nor Verification
Both Validation and Verification
When no faults have been found by the tests run
When the test completion criteria have been met
When all planned tests have been run
When time for testing has run out
Beta Test
Smoke Test
Regression Test
Alpha Test
Top-down Integration
Bottom-up Integration
Module Integration
Parallel Integration
Technical knowledge of development team
Project types and associated risks
Users
Unit Testing
Regression testing
Black box testing
White box Testing
Boundary value analysis
State transition testing
LCSAJ
Equivalence partitioning
When resources (time and budget) are exhausted
When quality criterion is reached
Testing never ends.
When some coverage is reached
Book
book
BOOK
Boo01k
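Inputs like the four above are typical test data for probing case sensitivity. A minimal sketch, assuming a hypothetical case-sensitive credential check with "Book" as the stored value:

```python
def is_valid_password(candidate, stored="Book"):
    # Hypothetical case-sensitive comparison; a case-insensitive
    # system would compare candidate.lower() == stored.lower() instead.
    return candidate == stored

assert is_valid_password("Book") is True
assert is_valid_password("book") is False
assert is_valid_password("BOOK") is False
assert is_valid_password("Boo01k") is False
```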
Poor software and poor testing
Bad luck
Poor quality software
Insufficient time for testing
Component Testing
Integration Level Testing
System Level Testing
Unit Level Testing
Black Box Testing
White Box Testing
Grey Box Testing
Inter system testing
Integration testing
Re-testing
Path testing
Statement testing
Data flow testing
Tests combinations of input circumstances
Is used in white box testing strategy
Is the same as equivalence partitioning tests
Tests boundary conditions on, below and above the edges of input and output equivalence classes.
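The last option describes boundary value analysis: test on, just below and just above each edge of a range. A minimal sketch, assuming a hypothetical field that accepts integers 1..100:

```python
def boundary_values(low, high):
    # Boundary value analysis: for an input range [low, high],
    # generate values on, just below and just above each edge.
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Hypothetical field accepting integers 1..100.
assert boundary_values(1, 100) == [0, 1, 2, 99, 100, 101]
```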
After the software has changed
Every week
As often as possible
When the project manager says
Stop execution at a particular point
Examine memory & registers
Search for references to particular variables, constants and registers
All of the mentioned
It is executed in parallel with software development activities.
It provides management with insights into the state of a software project.
It strives to ensure that quality is built into software.
Behavior or performance errors
Incorrect or missing functions
Interface errors
Increases as we move the product towards live use
Can never be determined
Decreases as we move the product towards live use
Is more expensive if found in requirements than functional design