Which of the following statements about static testing and dynamic testing is true?
Static testing is better suited than dynamic testing for highlighting issues that could indicate inappropriate code modularization
Dynamic testing can only be applied to executable work products, while static testing can only be applied to non-executable work products
Both dynamic testing and static testing cause failures, but failures caused by static testing are usually easier and cheaper to analyze
Security vulnerabilities can only be detected when the software is being executed, and thus they can only be detected through dynamic testing, not through static testing
Dynamic testing requires the execution of the software to evaluate its behavior and performance. In contrast, static testing examines the software's code, design, and documentation without executing it, which makes it applicable to non-executable work products such as requirement documents and design documents, as well as to source code. Because static testing analyzes the code itself rather than its runtime behavior, it is better suited than dynamic testing for highlighting issues that could indicate inappropriate code modularization or other maintainability problems.
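As a small illustrative sketch (the snippet and checker are hypothetical, not from the syllabus): a statement placed after a `return` can never execute, and a simple static scan of the parsed source finds this defect without ever running the code.

```python
import ast
import textwrap

# Hypothetical code under review: the print() after `return` is
# unreachable -- a defect static analysis can flag without execution.
source = textwrap.dedent("""\
    def apply_discount(price):
        return price * 0.9
        print("discount applied")
""")

def find_unreachable(tree):
    """Return line numbers of statements that follow a `return` in the same block."""
    findings = []
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for i, stmt in enumerate(body[:-1]):
            if isinstance(stmt, ast.Return):
                findings.append(body[i + 1].lineno)
    return findings

print(find_unreachable(ast.parse(source)))  # [3]: the unreachable print()
```

Dynamic testing of `apply_discount` would never reveal this line, since no input can reach it.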
Which of the following statements about traceability is false?
Traceability between test basis items and the test cases designed to cover them makes it possible to determine which test basis items have been covered by the executed test cases
Traceability between test basis items and the test cases designed to cover them enables experience-based test techniques to be applied
Traceability between test basis items and the test cases designed to cover them enables identification of which test cases will be affected by changes to the test basis items
Traceability can be established and maintained through all test documentation for a given test level, such as from test conditions through test cases to test scripts
Traceability is an essential aspect of software testing that ensures each test case can be traced back to its corresponding test basis items, such as requirements, design documents, or user stories. This linkage helps in determining which test basis items have been covered by executed test cases, identifying the impact of changes, and maintaining overall test documentation. However, the statement that traceability enables experience-based test techniques to be applied is false, as experience-based test techniques, such as exploratory testing, rely on the tester's skills and experience rather than documented traceability.
References:
ISTQB® CTFL Syllabus 4.0, Chapter 1.4.4, page 19: Importance of Traceability
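The coverage and impact-analysis benefits described above can be sketched with a toy traceability matrix (all requirement and test case IDs are hypothetical):

```python
# Traceability matrix: test basis items (requirements) mapped to the
# test cases designed to cover them.
traceability = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-03"],
    "REQ-03": [],               # no covering test case yet
}

executed = {"TC-01", "TC-03"}   # test cases executed so far

# Which test basis items have been covered by executed test cases?
covered = [req for req, tcs in traceability.items()
           if any(tc in executed for tc in tcs)]
print(covered)                  # REQ-01 and REQ-02 are covered

# Impact analysis: which test cases are affected if REQ-02 changes?
print(traceability["REQ-02"])
```

Note that nothing in this matrix helps a tester apply experience-based techniques, which is exactly why that statement is the false one.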
Which of the following statements is true?
Experience-based test techniques rely on the experience of testers to identify the root causes of defects found by black-box test techniques
Some of the most common test basis used by white-box test techniques include user stories, use cases and business processes
Experience-based test techniques are often useful to detect hidden defects that have not been targeted by black-box test techniques
The primary goal of experience-based test techniques is to design test cases that can be easily automated using a GUI-based test automation tool
Experience-based test techniques are test design techniques that rely on the experience, knowledge, intuition, and creativity of the testers to identify and execute test cases that are likely to find defects in the software system. Experience-based test techniques are often useful to detect hidden defects that have not been targeted by black-box test techniques, which are test design techniques that use the external behavior and specifications of the software system as the test basis, without considering its internal structure or implementation. Experience-based test techniques can complement black-box test techniques by covering aspects that are not explicitly specified, such as usability, security, reliability, performance, etc. The other statements are false, because:
Experience-based test techniques do not rely on the experience of testers to identify the root causes of defects found by black-box test techniques, but rather to identify the potential sources of defects based on their own insights, heuristics, or exploratory testing. The root causes of defects are usually identified by debugging or root cause analysis, which are activities that involve examining the code or the development process to find and fix the errors that led to the defects.
Some of the most common test basis used by white-box test techniques include the source code, the design documents, the architecture diagrams, and the control flow graphs of the software system. White-box test techniques are test design techniques that use the internal structure and implementation of the software system as the test basis, and aim to achieve a certain level of test coverage based on the code elements, such as statements, branches, paths, etc. User stories, use cases, and business processes are examples of test basis used by black-box test techniques, as they describe the functional and non-functional requirements of the software system from the perspective of the users or the stakeholders.
The primary goal of experience-based test techniques is not to design test cases that can be easily automated using a GUI-based test automation tool, but rather to design test cases that can reveal defects that are not easily detected by other test techniques, such as boundary value analysis, equivalence partitioning, state transition testing, etc. Test automation is the use of software tools to execute test cases and compare actual results with expected results, without human intervention. Test automation can be applied to different types of test techniques, depending on the test objectives, the test levels, the test tools, and the test resources. However, test automation is not always feasible or beneficial, especially for test cases that require human judgment, creativity, or exploration, such as those designed by experience-based test techniques.
References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.2.1, Black-box Test Design Techniques
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.2.2, White-box Test Design Techniques
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.2.3, Experience-based Test Design Techniques
ISTQB® Glossary of Testing Terms v4.0, Experience-based Test Technique, Black-box Test Technique, White-box Test Technique, Test Basis, Test Coverage, Test Automation
As a result of the joint evaluation of a product version with the customer, it has been concluded that it would be appropriate to retrieve an earlier version of the product and carry out a benchmark. Depending on the result, further development will be carried out based on the current version or the retrieved version.
Which mechanism, process, and/or technique will allow the specific version (including the testing work products) of a given software product to be restored?
Defect management
Configuration management
Change management
Risk management
Configuration management (B) ensures that versions of software and test artifacts are properly tracked, stored, and retrievable. It allows teams to:
Restore earlier versions of software and test work products
Maintain traceability between requirements, tests, and code
Avoid discrepancies due to mismanaged versions
(A) is incorrect because defect management tracks issues but does not restore versions.
(C) is incorrect because change management controls changes but does not track past versions.
(D) is incorrect because risk management assesses risks but does not manage software versions.
Effective configuration management ensures the ability to roll back changes and maintain system stability.
Which ONE of the following options is NOT a test objective?
Verifying whether specified requirements have been fulfilled
Triggering failures and finding defects
Finding errors
Validating whether the test object is complete and works as expected by the stakeholders
The primary objectives of testing, as outlined in the ISTQB CTFL v4.0 syllabus, include verifying whether specified requirements are met (A), detecting failures and defects (B), and validating that the test object functions as expected (D). However, "finding errors" (C) is not a direct objective. Errors result from human mistakes, but testing primarily identifies defects, which are flaws in the system that cause failures. Testing aims to reveal defects rather than directly identify errors in the code.
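The error/defect/failure distinction can be made concrete with a small hypothetical example: a programmer's mistake (error) leaves a flaw in the code (defect), and executing a test exposes the wrong result (failure).

```python
def average(xs):
    # Defect: the divisor should be len(xs); this off-by-one flaw is
    # the result of a human error made while writing the code.
    return sum(xs) / (len(xs) - 1)

# Dynamic testing triggers a failure: 6.0 is observed instead of the
# expected 4.0. Testing reveals the defect; the underlying error (the
# human mistake) is only found later, during debugging.
print(average([2, 4, 6]))
```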
Which ONE of the following is a characteristic of exploratory testing?
Effectiveness depends on the individual testers' skills
Usually conducted when there is sufficient time for testing
Test cases are written once the specifications become available
Testing without defined time-boxes
Exploratory testing is characterized by its reliance on the skills and experience of the tester. The effectiveness of exploratory testing depends heavily on the tester's ability to design and execute tests based on their intuition and knowledge of the application. This type of testing is often performed without predefined test cases, making the individual tester's expertise crucial.
References:
ISTQB CTFL Syllabus V4.0, Section 4.4 on experience-based testing techniques, including exploratory testing, which highlights the importance of the tester's skills.
Shripriya is defining the guidelines for the review process implementation in her company. Which of the following statements is LEAST likely to have been recommended by her?
Independent of the size of the work products, planning for the review should be performed
Review initiation is the stage when the review team starts the discussion on the review comments
Large-sized work products should be reviewed in one go, because too much time would be spent if they were split into multiple reviews.
Defect reports should be created for every defect found
In a structured review process, it is essential to plan reviews carefully and manage them effectively. Reviewing large work products in one go is not recommended because it can lead to oversight of issues due to fatigue or information overload. It is more efficient to break down large work products into smaller, manageable parts and review them incrementally. This ensures a thorough and effective review process. Additionally, other practices such as planning for the review, starting discussions during review initiation, and creating defect reports for found issues are standard recommendations for an effective review process.
Which ONE of the following options BEST describes Behavior-Driven Development (BDD)?
Expresses the desired behavior of an application with test cases written in a simple form of natural language that is easy to understand by stakeholders—usually using the Given/When/Then format. Test cases are then automatically translated into executable tests.
Defines test cases at a low level, close to the implementation, using unit test frameworks.
Is primarily focused on non-functional testing techniques to ensure system reliability and performance.
Requires testing to be performed after development is completed to validate software functionality.
BDD emphasizes collaboration between developers, testers, and business stakeholders to define system behavior in a readable format (A). It typically uses the Given/When/Then syntax. Unlike unit testing (B), BDD is at a higher level of abstraction. It does not focus solely on non-functional testing (C) and encourages early testing rather than post-development validation (D).
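A minimal sketch of the Given/When/Then style is shown below, with hand-written Python step functions standing in for the glue code that BDD tools such as Cucumber or behave generate automatically from the natural-language scenario (the scenario and step names are illustrative):

```python
# A BDD scenario in natural language, readable by all stakeholders.
scenario = """
Given a registered customer with an empty cart
When the customer adds 2 items priced at 10 each
Then the cart total should be 20
"""

# Hand-written step functions; a BDD framework would bind these to the
# scenario text automatically.
def given_empty_cart():
    return {"items": [], "total": 0}

def when_add_items(cart, count, price):
    cart["items"] += [price] * count
    cart["total"] = sum(cart["items"])
    return cart

def then_total_is(cart, expected):
    assert cart["total"] == expected

cart = given_empty_cart()
cart = when_add_items(cart, count=2, price=10)
then_total_is(cart, expected=20)
print("scenario passed")
```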
Which of the following applications will be the MOST suitable for testing by use cases?
Accuracy and usability of a new Navigation system compared with previous system
A billing system used to calculate the monthly charge based on a large number of subscriber parameters
The ability of an antivirus package to detect and quarantine a new threat
Suitability and performance of a multimedia (audio/video based) system on a new operating system
A new navigation system compared with a previous system is the most suitable application for testing by use cases, because it involves a high level of interaction between the user and the system, and the expected behavior and outcomes of the system are based on the user’s needs and goals. Use cases can help to specify the functional requirements of the new navigation system, such as the ability to enter a destination, select a route, follow the directions, receive alerts, etc. Use cases can also help to compare the accuracy and usability of the new system with the previous system, by defining the success and failure scenarios, the preconditions and postconditions, and the alternative flows of each use case. Use cases can also help to design and execute test cases that cover the main and exceptional paths of each use case, and to verify the satisfaction of the user’s expectations.
The other options are not the most suitable applications for testing by use cases, because they do not involve a high level of interaction between the user and the system, or the expected behavior and outcomes of the system are not based on the user’s needs and goals. A billing system used to calculate the monthly charge based on a large number of subscriber parameters is more suitable for data-driven testing, a technique for testing the functionality and performance of a system or component by using a large set of input and output data. The ability of an antivirus package to detect and quarantine a new threat is more suitable for exploratory testing, a technique for testing the functionality and security of a system or component by using an informal and flexible approach, based on the tester’s experience and intuition. The suitability and performance of a multimedia (audio/video based) system on a new operating system is more suitable for compatibility testing, a technique for testing the functionality and performance of a system or component by using different hardware, software, or network environments. References: CTFL 4.0 Syllabus, Section 3.1.1, pages 28-29; Section 4.1.1, pages 44-45; Section 4.2.1, pages 47-48.
What type of testing measures its effectiveness by tracking which lines of code were executed by the tests?
Acceptance testing
Structural testing
Integration testing
Exploratory testing
Structural testing is a type of testing that measures its effectiveness by tracking which lines of code were executed by the tests. Structural testing, also known as white-box testing or glass-box testing, is based on the internal structure, design, or implementation of the software. It aims to exercise the code paths, branches, statements, conditions, or data flows of the test object, rather than only its externally specified behavior. Structural testing uses various coverage metrics, such as function coverage, statement coverage, or branch coverage, to determine how much of the code has been tested and to identify any untested or unreachable parts of the code. Structural testing can be applied at any level of testing, such as unit testing, integration testing, system testing, or acceptance testing, but it is more commonly used at lower levels, where the testers have access to the source code.
The other options are not correct because they are not types of testing that measure their effectiveness by tracking which lines of code were executed by the tests. Acceptance testing is a type of testing that verifies that the software meets the acceptance criteria and the user requirements. Acceptance testing is usually performed by the end-users or customers, who may not have access to the source code or the technical details of the software. Acceptance testing is more concerned with the functionality, usability, or suitability of the software, rather than its internal structure or implementation. Integration testing is a type of testing that verifies that the software components or subsystems work together as expected. Integration testing is usually performed by the developers or testers, who may use both structural and functional testing techniques to check the interfaces, interactions, or dependencies between the components or subsystems. Integration testing is more concerned with the integration logic, data flow, or communication of the software, rather than its individual lines of code. Exploratory testing is a type of testing that involves simultaneous learning, test design, and test execution. Exploratory testing is usually performed by the testers, who use their creativity, intuition, or experience to explore the software and discover any defects, risks, or opportunities for improvement. Exploratory testing is more concerned with the behavior, quality, or value of the software, rather than its internal structure or implementation. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 syllabus, Chapter 4: Test Techniques, Section 4.3: Structural Testing Techniques, Pages 51-54; Chapter 1: Fundamentals of Testing, Section 1.4: Testing Throughout the Software Development Lifecycle, Pages 11-13; Chapter 3: Static Testing, Section 3.4: Exploratory Testing, Pages 40-41.
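The idea of tracking executed lines can be sketched with Python's tracing hook, as a toy stand-in for real coverage tools such as coverage.py (the function under test and the statement count are illustrative):

```python
import sys

def classify(x):
    if x >= 0:
        return "non-negative"
    return "negative"          # never reached by the test input below

executed = set()

def tracer(frame, event, arg):
    # Record each source line executed inside classify()
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)                    # a single test input
sys.settrace(None)

total = 3                      # classify has three executable statements
print(f"statement coverage: {len(executed)}/{total}")
```

With only one test input, two of the three statements execute; the untested `return "negative"` line is exactly the kind of gap that coverage metrics expose.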
Which of the following lists factors that contribute to PROJECT risks?
skill and staff shortages; problems in defining the right requirements; contractual issues.
skill and staff shortages; software does not perform its intended functions; problems in defining the right requirements.
problems in defining the right requirements; contractual issues; poor software quality characteristics.
poor software quality characteristics; software does not perform its intended functions.
Project risks are the uncertainties or threats that may affect the project objectives, such as scope, schedule, cost, and quality. According to the ISTQB Certified Tester Foundation Level (CTFL) v4.0 syllabus, some of the factors that contribute to project risks are:
Skill and staff shortages: This factor refers to the lack of adequate or qualified human resources to perform the project tasks. This may result in delays, errors, rework, or low productivity.
Problems in defining the right requirements: This factor refers to the difficulties or ambiguities in eliciting, analyzing, specifying, validating, or managing the requirements of the project. This may result in misalignment, inconsistencies, gaps, or changes in the requirements, affecting the project scope and quality.
Contractual issues: This factor refers to the challenges or disputes that may arise from the contractual agreements between the project parties, such as clients, suppliers, vendors, or subcontractors. This may result in legal, financial, or ethical risks, affecting the project delivery and satisfaction.
The other options are not correct because they list factors that contribute to PRODUCT risks, not project risks. Product risks are the uncertainties or threats that may affect the quality or functionality of the software product or system. Some of the factors that contribute to product risks are:
Poor software quality characteristics: This factor refers to the lack of adherence or compliance to the quality attributes or criteria of the software product or system, such as reliability, usability, security, performance, or maintainability. This may result in defects, failures, or dissatisfaction of the users or stakeholders.
Software does not perform its intended functions: This factor refers to the deviation or discrepancy between the expected and actual behavior or output of the software product or system. This may result in errors, faults, or malfunctions of the software product or system.
References = ISTQB Certified Tester Foundation Level (CTFL) v4.0 syllabus, Chapter 1: Fundamentals of Testing, Section 1.5: Risks and Testing, Pages 14-16.
Which of the following statements is CORRECT about White-box testing?
White-box testing helps find defects because it can be used to measure statement coverage
White-box testing helps find defects even when specifications are vague because it takes into account the code.
White-box testing helps find defects because it provides for requirements based coverage
White-box testing helps find defects because it focuses on defects rather than failures
White-box testing, also known as structural testing, involves testing the internal structures or workings of an application, as opposed to its functionality (which is tested by black-box testing). The correct statement about white-box testing is that it helps find defects by measuring aspects such as statement coverage.
Statement Coverage: White-box testing techniques like statement coverage measure whether each statement in the code has been executed at least once. This helps ensure that all parts of the code are tested and can reveal defects in areas that might not be reached by black-box testing alone.
Other statements are less accurate in the context of white-box testing:
Specifications being vague: White-box testing is code-focused, not requirement-focused. If specifications are vague, it affects both white-box and black-box testing. The main advantage of white-box testing is that it allows testers to create tests based on the code's structure and logic.
Requirements-based coverage: This is typically associated with black-box testing, which derives tests from specifications and requirements. White-box testing, on the other hand, derives tests from the code itself.
Focus on defects rather than failures: Both white-box and black-box testing aim to identify defects, but white-box testing does this through code coverage and examining the code paths directly. It does not focus exclusively on defects rather than failures; it is just another method to identify potential issues.
Consider a review for a high-level architectural document written by a software architect. The architect does most of the review preparation work, including distributing the document to reviewers before the review meeting. However, reviewers are not required to analyze the document in advance, and during the review meeting the software architect explains the document step by step. The only goal of this review is to establish a common understanding of the software architecture that will be used in a software development project.
Which of the following review types does this review refer to?
Inspection
Audit
Walkthrough
Informal review
This answer is correct because a walkthrough is a type of review where the author of the work product leads the review process and explains the work product to the reviewers. The reviewers are not required to prepare for the review in advance, and the main objective of the walkthrough is to establish a common understanding of the work product and to identify any major defects or issues. A walkthrough is usually informal and does not follow a defined process or roles. In this case, the review for a high-level architectural document written by a software architect matches the characteristics of a walkthrough. References: ISTQB Glossary of Testing Terms v4.0, ISTQB Foundation Level Syllabus v4.0, Section 2.4.2.2
What is test oracle?
The source of test objectives
The source for the actual results
The source of expected results
The source of input conditions
A test oracle is a mechanism or principle that can be used to determine whether the observed behavior or output of a system under test is correct or not. A test oracle can be based on various sources of expected results, such as specifications, user expectations, previous versions, comparable systems, etc. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 Syllabus, Section 1.2.1, Page 9; ISTQB Glossary of Testing Terms, Version 4.0, Page 33.
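The oracle as "the source of expected results" can be sketched by using a trusted reference implementation as the oracle for an implementation under test (both functions are illustrative):

```python
def insertion_sort(xs):
    """Implementation under test."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def oracle(xs):
    """Test oracle: a trusted source of expected results (here the
    built-in sorted() serves as the reference implementation)."""
    return sorted(xs)

data = [3, 1, 2, 2]
actual = insertion_sort(data)     # actual result from the test object
expected = oracle(data)           # expected result from the oracle
assert actual == expected, f"failure: {actual} != {expected}"
print("actual result matches the oracle's expected result")
```

The oracle supplies what the result *should* be; the test object supplies what it *is*, and the comparison of the two is the verdict.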
Which ONE of the following is a CORRECT example of the purpose of a test plan?
A test manager should always create a very simple test plan, because the purpose of a test plan is to ensure that there is documentation for audits.
A test manager decides to create a one-page test plan for an agile project, communicating the broad activities and explaining why detailed test cases will not be written, as mandated by the test policy.
A test plan is a good document to have for the agile projects because it helps the test manager assign tasks to different people
A test lead decides to write a detailed test plan so that in future, in case of project failure responsibilities could be assigned to the right person
A test plan serves multiple purposes, such as defining the scope, approach, resources, and schedule of the testing activities. It also helps in communicating important information and managing stakeholder expectations. In agile projects, test plans might be concise to align with agile principles of simplicity and flexibility. A one-page test plan can effectively communicate broad activities and strategic decisions, such as not writing detailed test cases due to the project's agile nature. This approach ensures that essential information is conveyed without unnecessary documentation overhead, adhering to the agile manifesto's value of "working software over comprehensive documentation".
A typical test objective is to:
determine the most appropriate level of detail with which to design test cases
verify the compliance of the test object with regulatory requirements
plan test activities in accordance with the existing test policy and test strategy
verify the correct creation and configuration of the test environment
In the ISTQB CTFL Syllabus, it is stated that a key objective of testing is to verify that the test object meets regulatory requirements. This is crucial as compliance with regulatory standards ensures that the software adheres to necessary laws, guidelines, and safety standards which are often mandatory in various industries such as healthcare, finance, and aviation. Ensuring regulatory compliance helps prevent legal issues and promotes user safety and trust.
You are working on creating test cases for a user story -
As a customer, I want to be able to book an ISTQB exam for a particular date, so that I can choose my time slot and pay the correct amount, including discounts, if any.
The acceptance criteria for this are:
1. The dates shown should be from the current date to 2 years in the future
2. Initially there should be 10 timeslots available for each day, 1 hour each, starting at 8 AM GMT
3. A maximum of 5 persons should be able to select a time slot, after which that time slot should become unavailable
4. The first timeslot should have a 10% discount.
Which of the following is the BEST example of a test case for this user story?
Logon to the site and book an exam for the 8 AM (GMT) timeslot. Expected result: You should get 10% discounted price. Change the time to any other timeslot. Expected result: Discount should be removed
Logon to the site. Book 5 exams for the current date. Expected result: Exams should be booked. Book 6th timeslot for the same date. Expected result: The exam should be booked but no discount should be given.
Logon to the site. Expected result: Default 8 AM (GMT) timeslot should be selected. Change the time to any other timeslot. Expected result: New slot should be booked
Logon to the site. Book an exam for the current date. Expected result: timeslots should be shown. Change the time to any other date prior to the selected date. Expected result: New slot should become visible.
The best example of a test case for this user story should cover the acceptance criteria comprehensively. Option A addresses the critical aspects of the acceptance criteria:
Verifying the discount for the first timeslot (8 AM GMT) - ensuring it provides a 10% discount.
Verifying that changing the time slot removes the discount - ensuring the discount logic is correctly applied.
This test case effectively validates the functionality related to both the discount and the ability to change time slots, which are key parts of the user story's requirements.
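Under the stated acceptance criteria, the key checks in option A could be sketched as an automated test; the pricing function and base price below are hypothetical stand-ins for the real booking system:

```python
BASE_PRICE = 100  # hypothetical exam price in whole currency units

def slot_price(slot_hour_gmt):
    """Hypothetical pricing rule implementing criterion 4: the first
    timeslot (8 AM GMT) gets a 10% discount; other slots pay full price."""
    discount_pct = 10 if slot_hour_gmt == 8 else 0
    return BASE_PRICE * (100 - discount_pct) // 100

# Option A as an automated check:
assert slot_price(8) == 90     # first timeslot: 10% discounted price
assert slot_price(9) == 100    # changing the slot removes the discount
print("discount behaviour matches the acceptance criteria")
```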
Consider the following defect report for an Exam Booking System
Defect ID: ST1041 | Title: Unable to cancel an exam booking | Severity: Very High | Priority: Very High | Environment: Windows 10, Firefox
Description: When attempting to cancel an exam booking using the cancel exam feature, the system does not cancel the exam even though it shows a message that the exam has been cancelled.
Which ONE of the following indicates the information that should be added to the description for reproducing the defect easily?
Repeating the test case with different browsers and logging a separate defect for each one of them
Providing exact steps that lead to this defect. It is not clear from the description if this is a problem for any scheduled exam or for a specific user.
Providing better severity and priority. It is not clear why this is a high severity problem as the exams can be booked without any problem.
The defect Id used is alphanumeric in nature. It should be a number only.
To reproduce a defect easily, the defect report should include detailed steps that clearly describe how to encounter the issue. This includes the specific actions taken, the expected result, and the actual result observed. In the given defect report, it is not clear if the issue occurs for any scheduled exam or if it is user-specific. Providing exact steps helps developers and testers replicate the issue and understand its context better, leading to quicker and more effective resolution.
In a review, which of the following is the responsibility of the manager?
Organizing the logistics of the review meeting
Measuring the performance of each reviewer
Ensuring that sufficient time is available for review
Performing detailed review based on past experience
In a review process, the manager's responsibility is to ensure that sufficient time is allocated for the review activities. This includes planning and scheduling the review sessions, making sure that the team has enough time to conduct a thorough and effective review.
References:
ISTQB CTFL Syllabus V4.0, Section 3.2.3 on the roles and responsibilities in a review process, specifically mentioning the manager's role in ensuring adequate time for reviews.
Consider a given test plan which, among others, contains the following three sections: "Test Scope", "Testing Communication", and "Stakeholders". The features of the test object to be tested and those excluded from the testing represent information that is:
not usually included in a test plan, and therefore in the given test plan it should not be specified neither within the three sections mentioned, nor within the others
usually included in a test plan and, in the given test plan, it is more likely to be specified within "Test Scope" rather than in the other two sections mentioned
usually included in a test plan and, in the given test plan, it is more likely to be specified within "Testing Communication" rather than in the other two sections mentioned
usually included in a test plan and, in the given test plan, it is more likely to be specified within "Stakeholders" rather than in the other two sections mentioned
The features of the test object to be tested and those excluded from the testing represent information that is usually included in a test plan and, in the given test plan, it is more likely to be specified within “Test Scope” rather than in the other two sections mentioned. The test scope defines the boundaries and limitations of the testing activities, such as the test items, the features to be tested, the features not to be tested, the test objectives, the test environment, the test resources, the test assumptions, the test risks, etc. The test scope helps to establish a common understanding of what is included in and excluded from the testing, and to avoid ambiguity, confusion, or misunderstanding among the stakeholders. The other two sections, “Testing Communication” and “Stakeholders”, are also important parts of a test plan, but they do not directly address the features of the test object. The testing communication describes the methods, frequency, and responsibilities for the communication and reporting of the testing progress, status, issues, and results. The stakeholders section identifies the roles and responsibilities of the people involved in or affected by the testing activities, such as the test manager, the test team, the project manager, the developers, the customers, the users, etc.
References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.1, Test Planning
ISTQB® Glossary of Testing Terms v4.0, Test Plan, Test Scope
Which of the following is not an example of a typical generic skill required for testing?
Be able to apply test-driven development
Be able to use test management tools and defect tracking tools
Be able to communicate defects and failures to developers as objectively as possible
Possess the necessary social skills that support effective teamwork
Test-driven development is not an example of a typical generic skill required for testing, but rather an example of a specific technical skill or a development practice that may or may not be relevant for testing, depending on the context and the objectives of the testing activities. Test-driven development is an approach to software development and testing, in which the developers write automated unit tests before writing the source code, and then refactor the code until the tests pass. Test-driven development can help to improve the quality, the design, and the maintainability of the code, as well as to provide fast feedback and guidance for the developers. However, test-driven development is not a skill that is generally expected or needed for testers, especially for testers who are not involved in unit testing or who do not have access to the source code. The other options are examples of typical generic skills required for testing, which are skills that are applicable and beneficial for testing in any context or situation, regardless of the specific testing techniques, tools, or methods used. The typical generic skills required for testing include:
Be able to use test management tools and defect tracking tools: These are tools that help testers to plan, organize, monitor, and control the testing activities and resources, as well as to record, track, analyze, and resolve the defects detected during testing. These tools can improve the efficiency, the effectiveness, and the communication of the testing process, as well as to provide traceability, metrics, and reports for the testing outcomes.
Be able to communicate defects and failures to developers as objectively as possible: This is a skill that involves the ability to report and describe the defects and failures found during testing in a clear, concise, accurate, and unbiased manner, using relevant information, evidence, and terminology, without making assumptions, judgments, or accusations. This skill can facilitate the collaboration, the understanding, and the resolution of the defects and failures between the testers and the developers, as well as prevent conflicts, misunderstandings, or blame games.
Possess the necessary social skills that support effective teamwork: These are skills that involve the ability to interact, cooperate, and coordinate with other people involved in or affected by the testing activities, such as the test manager, the test team, the project manager, the developers, the customers, the users, etc. These skills can include communication, negotiation, leadership, motivation, feedback, conflict resolution, etc. These skills can enhance the quality, the productivity, and the satisfaction of the testing process, as well as to foster a positive and constructive testing culture. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.1.1, Testing and the Software Development Lifecycle
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.1.2, Testing and Quality
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.2.1, Testing Principles
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.2.2, Testing Policies, Strategies, and Test Approaches
ISTQB® Glossary of Testing Terms v4.0, Test-driven Development, Test Management Tool, Defect Tracking Tool, Defect Report, Failure, Social Skill
Which of the following statements about exploratory testing is true?
Exploratory testing is an experience-based test technique in which testers explore the requirements specification to detect non testable requirements
When exploratory testing is conducted following a session-based approach, the issues detected by the testers can be documented in session sheets
Exploratory testing is an experience-based test technique used by testers during informal code reviews to find defects by exploring the source code
In exploratory testing, testers usually produce scripted tests and establish bidirectional traceability between these tests and the items of the test basis
Exploratory testing is an experience-based test technique in which testers dynamically design and execute tests based on their knowledge, intuition, and learning of the software system, without following predefined test scripts or test cases. Exploratory testing can be conducted following a session-based approach, which is a structured way of managing and measuring exploratory testing. In a session-based approach, the testers perform uninterrupted test sessions, usually lasting between 60 and 120 minutes, with a specific charter or goal, and document the issues detected, the test coverage achieved, and the time spent in session sheets. Session sheets are records of the test activities, results, and observations during a test session, which can be used for reporting, debriefing, and learning purposes. The other statements are false, because:
Exploratory testing is not a test technique in which testers explore the requirements specification to detect non testable requirements, but rather a test technique in which testers explore the software system to detect functional and non-functional defects, as well as to learn new information, risks, or opportunities. Non testable requirements are requirements that are ambiguous, incomplete, inconsistent, or not verifiable, which can affect the quality and effectiveness of the testing process. Non testable requirements can be detected by applying static testing techniques, such as reviews or inspections, to the requirements specification, before the software system is developed or tested.
Exploratory testing is not a test technique used by testers during informal code reviews to find defects by exploring the source code, but rather a test technique used by testers during dynamic testing to find defects by exploring the behavior and performance of the software system, without examining the source code. Informal code reviews are static testing techniques, in which the source code is analyzed by one or more reviewers, without following a formal process or using a checklist, to identify defects, violations, or improvements. Informal code reviews are usually performed by developers or peers, not by testers.
In exploratory testing, testers usually do not produce scripted tests and establish bidirectional traceability between these tests and the items of the test basis, but rather produce unscripted tests and adapt them based on the feedback and the findings of the testing process. Scripted tests are tests that are designed and documented in advance, with predefined inputs, outputs, and expected results, and are executed according to a test plan or a test procedure. Bidirectional traceability is the ability to trace both forward and backward the relationships between the items of the test basis, such as the requirements, the design, the risks, etc., and the test artifacts, such as the test cases, the test results, the defects, etc. Scripted tests and bidirectional traceability are usually associated with more formal and structured testing approaches, such as specification-based or structure-based test techniques, not with exploratory testing. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.2.3, Experience-based Test Design Techniques
ISTQB® Glossary of Testing Terms v4.0, Exploratory Testing, Session-based Testing, Session Sheet, Non Testable Requirement, Static Testing, Informal Review, Dynamic Testing, Scripted Testing, Bidirectional Traceability
Which of the following statements is not correct?
Looking for defects in a system may require ignoring system details
Identifying defects may be perceived as criticism against product
Looking for defects in a system requires professional pessimism and curiosity
Testing is often seen as a destructive activity instead of constructive activity
Looking for defects in a system does not require ignoring system details, but rather paying attention to them and understanding how they affect the system’s quality, functionality, and usability. Ignoring system details could lead to missing important defects or testing irrelevant aspects of the system.
Identifying defects may be perceived as criticism against product, especially by the developers or stakeholders who are invested in the product’s success. However, identifying defects is not meant to be a personal attack, but rather a constructive feedback that helps to improve the product and ensure its alignment with the requirements and expectations of the users and clients.
Looking for defects in a system requires professional pessimism and curiosity, as testers need to anticipate and explore the possible ways that the system could fail, malfunction, or behave unexpectedly. Professional pessimism means being skeptical and critical of the system’s quality and reliability, while curiosity means being eager and interested in finding out the root causes and consequences of the defects.
Testing is often seen as a destructive activity instead of constructive activity, as it involves finding and reporting the flaws and weaknesses of the system, rather than creating or enhancing it. However, testing is actually a constructive activity, as it contributes to the system’s improvement, verification, validation, and optimization, and ultimately to the delivery of a high-quality product that meets the needs and expectations of the users and clients.
Which ONE of the following options CORRECTLY describes one of the seven principles of the testing process?
The objective of testing is to implement exhaustive testing and execute as many test cases as possible.
Exhaustive testing can only be carried out using behavior-based techniques.
It is impossible to test all possible combinations of inputs and preconditions of a system.
Automated testing enables exhaustive testing.
Exhaustive testing (testing all input combinations) is practically impossible except in trivial cases (C). Instead, testers focus on risk-based, prioritized, and efficient test techniques. The seven principles of testing in the ISTQB syllabus highlight that exhaustive testing is infeasible, and therefore, techniques such as equivalence partitioning, boundary value analysis, and risk-based testing are used to optimize test coverage.
Which of the following coverage criteria results in the highest coverage for state transition based test cases?
Can't be determined
Covering all transitions at least once
Covering only start and end states
Covering all states at least once
Covering all transitions at least once is the highest coverage criterion for state transition based test cases, because it ensures that every possible change of state is tested at least once. This means that all the events that trigger the transitions, as well as the actions and outputs that result from the transitions, are verified. Covering all transitions at least once also implies covering all states at least once, but not vice versa. Therefore, option D is not the highest coverage criterion. Option C is the lowest coverage criterion, because it only tests the initial and final states of the system or component, without checking the intermediate states or transitions. Option A is incorrect, because the coverage criteria for state transition based test cases can be determined and compared based on the number of transitions and states covered. References: CTFL 4.0 Syllabus, Section 4.2.3, pages 49-50.
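The relationship between the two criteria can be illustrated with a small sketch. The three-state machine and its transitions here are hypothetical, invented only to show that all-states coverage does not imply all-transitions coverage, while the reverse implication holds:

```python
# Hypothetical three-state machine, given as (source, target) transition pairs.
transitions = [
    ("Idle", "Running"),
    ("Running", "Paused"),
    ("Paused", "Running"),
    ("Running", "Idle"),
]
states = {"Idle", "Running", "Paused"}

def states_covered(executed_transitions):
    # Every state touched by at least one executed transition.
    return {state for pair in executed_transitions for state in pair}

# A suite that covers all states does not necessarily cover all transitions:
state_suite = [("Idle", "Running"), ("Running", "Paused")]
assert states_covered(state_suite) == states        # 100% state coverage
assert len(set(state_suite)) < len(transitions)     # but only 2 of 4 transitions

# Covering all transitions implies covering all states, not vice versa:
assert states_covered(transitions) == states
```

This is why all-transitions coverage is the stronger (higher) criterion of the two.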
Which ONE of the following options MOST ACCURATELY describes statement testing?
In statement testing, the coverage items are control flow transfers between branches. The aim is to design test cases to exercise branches in the code until an acceptable level of coverage is achieved, expressed as a percentage.
In statement testing, the coverage items are decisions and statements. The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved, expressed as a percentage.
In statement testing, the coverage items are branches, and the aim is to design test cases to exercise branches in the code until an acceptable level of coverage is achieved, expressed as a percentage.
In statement testing, the coverage items are executable statements. The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved, expressed as a percentage.
Statement testing aims to execute every executable statement in the source code at least once.
(D) is correct, as statement testing ensures maximum statement execution.
(A) describes branch testing, which focuses on flow transfers.
(B) and (C) incorrectly mix decision testing and branch testing concepts.
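The distinction between statement and branch coverage can be seen in a minimal sketch. The `apply_discount` function below is a made-up example: a single test can reach 100% statement coverage while still leaving one branch of the decision untested.

```python
def apply_discount(price, is_member):
    # One decision with a statement only on its True branch; no else clause.
    if is_member:
        price = price * 0.9
    return price

# One test with is_member=True executes every statement: 100% statement coverage.
assert apply_discount(100, True) == 90.0

# But the False outcome of the decision was never exercised, so branch
# coverage is incomplete; a second test is needed for that:
assert apply_discount(100, False) == 100
```

This is why statement coverage is a weaker criterion than branch coverage, even though both are expressed as percentages.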
Scenario 2 “Big Drop”:
A company “The Big Drop” provides bulk discounts and frequent customer discounts as follows:
How many possible decision rules can be extracted from this table?
5 decision rules
6 decision rules
32 decision rules
8 decision rules
Decision rules define possible combinations of conditions and outcomes.
For this scenario:
Bulk discount has 3 categories (0%, 5%, 10%)
Frequent customer discount has 2 categories (Yes = +5%, No = 0%)
The total number of decision rules is: 3 bulk discount options × 2 frequent customer options = 6 rules.
Thus, the correct answer is 6 decision rules (B).
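The rule count is simply the Cartesian product of the condition values. A sketch, assuming the three bulk-discount tiers and the yes/no frequent-customer condition described above:

```python
from itertools import product

# Condition values assumed from the scenario: three bulk-discount tiers (in
# percent) and a yes/no frequent-customer condition.
bulk_discount = [0, 5, 10]
frequent_customer = [True, False]

# Each combination of condition values is one decision rule of the full table.
rules = list(product(bulk_discount, frequent_customer))
assert len(rules) == 6  # 3 x 2 = 6 decision rules
```

The same product rule generalizes: a full decision table with conditions taking n1, n2, ... values has n1 × n2 × ... rules before any collapsing of "don't care" entries.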
A virtual service emulating a real third-party service and the automated test scripts (aimed at testing the system under test) that interact with that service are test work products that are typically created during:
Test monitoring and control
Test implementation
Test design
Test analysis
This answer is correct because test implementation is the activity where test work products, such as test cases, test data, test scripts, test harnesses, test stubs, or virtual services, are created and verified. Test implementation also involves setting up the test environment and preparing the test execution schedule. A virtual service emulating a real third-party service and the automated test scripts that interact with that service are examples of test work products that are typically created during test implementation. References: ISTQB Glossary of Testing Terms v4.0, ISTQB Foundation Level Syllabus v4.0, Section 2.2.2.3
Which of the following is not an example of a typical content of a test completion report for a test project?
The additional effort spent on test execution compared to what was planned
The unexpected test environment downtime that resulted in slower test execution
The residual risk level if a risk-based test approach was adopted
The test procedures of all test cases that have been executed
This answer is correct because the test procedures of all test cases that have been executed are not a typical content of a test completion report for a test project. A test completion report is a document that summarizes the test activities and results at the end of a test project. It usually includes information such as the test objectives, scope, approach, resources, schedule, results, deviations, issues, risks, lessons learned, and recommendations for improvement. The test procedures of all test cases that have been executed are part of the test documentation, but they are not relevant for the test completion report, as they do not provide a high-level overview of the test project outcomes and performance. References: ISTQB Foundation Level Syllabus v4.0, Section 2.5.3.2
Which of the following statements about error guessing is true?
Error guessing is a system that adopts artificial intelligence to predict whether software components are likely to contain defects or not
Experienced testers, when applying error guessing, rely on the use of a high-level list of what needs to be tested as a guide to find defects
Error guessing refers to the ability of a system or component to continue normal operation despite the presence of erroneous inputs
Experienced testers, when applying error guessing technique, can anticipate where errors, defects and failures have occurred and target their tests at those issues
This answer is correct because error guessing is a test design technique where the experience and intuition of the tester are used to anticipate where errors, defects and failures have occurred or are likely to occur, and to design test cases to expose them. Error guessing can be based on factors such as the complexity of the system or component, the known or suspected weaknesses of the system or component, the previous history of defects, or the common types of errors in the domain or technology. Error guessing can be used as a complementary technique to other more systematic or formal techniques, or when there is insufficient information or time to apply them. References: ISTQB Glossary of Testing Terms v4.0, ISTQB Foundation Level Syllabus v4.0, Section 2.3.2.5
Confirmation testing is performed after:
a defect is fixed and after other tests do not find any side-effect introduced in the software as a result of such fix
a failed test, and aims to run that test again to confirm that the same behavior still occurs and thus appears to be reproducible
the execution of an automated regression test suite to confirm the absence of false positives in the test results
a defect is fixed, and if such testing is successful then the regression tests that are relevant for such fix can be executed
Confirmation testing is performed after a defect is fixed, and if such testing is successful then the regression tests that are relevant for such fix can be executed. Confirmation testing, also known as re-testing, is the process of verifying that a defect has been resolved by running the test case that originally detected the defect. Confirmation testing is usually done before regression testing, which is the process of verifying that no new defects have been introduced in the software as a result of changes or fixes. Therefore, option D is the correct answer.
References: ISTQB® Certified Tester Foundation Level Syllabus v4.0, Section 2.4.1, page 28; ISTQB® Glossary v4.0, page 15.
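The ordering described above (confirmation testing first, then the relevant regression tests) can be sketched as a simple driver. The function and test names here are hypothetical, chosen only to illustrate the sequence:

```python
def confirmation_then_regression(retest, regression_suite):
    # Confirmation testing: re-run the test that originally detected the defect.
    if not retest():
        return "defect not fixed - regression tests not started"
    # Only after a successful confirmation are the relevant regression tests run.
    failed = [test.__name__ for test in regression_suite if not test()]
    return "all passed" if not failed else "regression failures: " + ", ".join(failed)

def login_still_works():
    # Hypothetical regression test: checks a feature adjacent to the fix.
    return True

assert confirmation_then_regression(lambda: True, [login_still_works]) == "all passed"
assert confirmation_then_regression(lambda: False, []).startswith("defect not fixed")
```

Running regression tests before the fix is confirmed would waste effort, which is why option D describes the correct order.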
Can "cost" be regarded as Exit criteria?
Yes. Spending too much money on testing will result in an unprofitable product, and having cost as an exit criterion helps avoid this
No. The financial value of product quality cannot be estimated, so it is incorrect to use cost as an exit criterion
Yes. Going by cost as an exit criterion constrains the testing project, which will help achieve the desired quality level defined for the project
No. The cost of testing cannot be measured effectively, so it is incorrect to use cost as an exit criterion
Cost can be regarded as an exit criterion for testing, because it is a factor that affects the profitability and feasibility of the software product. Testing is an investment that aims to improve the quality and reliability of the software product, but it also consumes resources, such as time, money, and human effort. Therefore, testing should be planned and executed in a way that balances the cost and benefit of testing activities. Having cost as an exit criterion helps to avoid spending too much money on testing, which may result in an unprofitable product or a loss of competitive advantage. Cost can also help to prioritize and focus the testing efforts on the most critical and valuable features and functions of the software product. However, cost should not be the only exit criterion for testing, as it may not reflect the true quality and risk level of the software product. Other exit criteria, such as defect rate, test coverage, user satisfaction, etc., should also be considered and defined in the test plan.
The other options are incorrect, because they either deny the importance of cost as an exit criterion, or they make false or unrealistic assumptions about the cost of testing. Option B is incorrect, because the financial value of product quality can be estimated, for example, by using cost-benefit analysis, return on investment, or cost of quality models. Option C is incorrect, because going by cost as an exit criterion does not necessarily constrain the testing project or help achieve the desired quality level. Cost is a relative and variable factor that depends on the scope, complexity, and context of the software product and the testing project. Option D is incorrect, because the cost of testing can be measured effectively, for example, by using metrics, such as test effort, test resources, test tools, test environment, etc.
Which of the following is a typical potential risk of using test automation tools?
Reduced feedback times regarding software quality compared to manual testing
Reduced test execution times compared to manual testing
Reduced repeatability and consistency of tests compared to manual testing
Underestimation of effort required to maintain test scripts
One of the typical potential risks associated with using test automation tools is the underestimation of the effort required to maintain test scripts. While test automation can reduce test execution times and provide more consistent and repeatable tests compared to manual testing, maintaining test scripts can be labor-intensive and often requires significant effort. Changes in the application under test can lead to frequent updates in the test scripts to keep them functional and relevant.
References:
ISTQB CTFL Syllabus V4.0, Section 6.2 on the benefits and risks of test automation tools
The syllabus outlines that while automation can improve efficiency, it also introduces maintenance challenges.
Use Scenario 1 “Happy Tomatoes” (from the previous question).
When running test case TC_59, the actual result for t = 35 degrees Celsius is OUTPUT = X instead of the expected output.
Which information should NOT be included in the defect report?
Identification of the test object and test environment
A concise title and a short summary of the defect being reported
Description of the structure of the test team
Expected results and actual results
A defect report should contain relevant details to help developers reproduce and fix the defect efficiently. The essential elements include:
Test object & environment (A) – to ensure reproducibility.
Title & summary (B) – for quick identification.
Expected vs. actual results (D) – to describe the discrepancy.
The structure of the test team (C) is irrelevant for defect tracking and resolution.
What does the "absence-of-defects fallacy" refer to in software development?
The belief that thoroughly testing all requirements guarantees system success.
The need for constant system quality assurance and improvements.
A misconception that software verification is unnecessary
The idea that fixing defects is NOT important to meeting user needs.
The "absence-of-defects fallacy" in software development refers to the mistaken belief that if a software system has been thoroughly tested and all requirements have been met without any defects, it guarantees the success of the system. However, this is not necessarily true. Even if no defects are found, the system might still fail to meet the user's needs or business objectives. This fallacy highlights the importance of validation in addition to verification to ensure that the system fulfills the intended use and requirements.
Use Scenario 1 “Happy Tomatoes” (from the previous question).
Using the Boundary Value Analysis (BVA) technique (in its two-point variant), identify the set of input values that provides the HIGHEST coverage.
{7,8,21,22,29,30}
{7,8,22,23,29,30}
{6,7,8,21,22,29,31}
{6,7,21,22,29,30}
Boundary Value Analysis (BVA) focuses on test cases at the edges of partitions because defects often occur at boundaries. The temperature ranges are:
≤7 (Too cold → W)
[8-21] (Standstill → X)
[22-29] (Ideal → Y)
≥30 (Too hot → Z)
A two-point BVA means testing both the lower and upper boundary values of each partition. The correct selection {7, 8, 21, 22, 29, 30} includes:
7 → Boundary of Too Cold (W)
8 → Lower boundary of Standstill (X)
21 → Upper boundary of Standstill (X)
22 → Lower boundary of Ideal (Y)
29 → Upper boundary of Ideal (Y)
30 → Lower boundary of Too Hot (Z)
This ensures maximum boundary coverage.
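Given the partition edges above, the two-point boundary values can be derived mechanically. A sketch, assuming integer-valued temperatures so that each edge n pairs with n + 1:

```python
def two_point_bva(edges):
    # For each partition edge n (last value before the behavior changes),
    # two-point BVA tests n (upper bound of one partition) and n + 1
    # (lower bound of the next partition).
    values = set()
    for n in edges:
        values.update({n, n + 1})
    return sorted(values)

# Edges from the scenario: 7 (Too cold / Standstill), 21 (Standstill / Ideal),
# 29 (Ideal / Too hot).
assert two_point_bva([7, 21, 29]) == [7, 8, 21, 22, 29, 30]
```

The output matches option A, {7, 8, 21, 22, 29, 30}, which is why that set gives the highest two-point BVA coverage.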
You are testing a room upgrade system for a hotel. The system accepts three different types of room (in increasing order of luxury): Platinum, Silver, and Gold Luxury. ONLY a Preferred Guest Card holder is eligible for an upgrade.
Below you can find the decision table defining the upgrade eligibility:
What is the expected result for each of the following test cases?
Customer A: Preferred Guest Card holder, holding a Silver room
Customer B: Non Preferred Guest Card holder, holding a Platinum room
Customer A: doesn't offer any upgrade; Customer B: offers upgrade to Gold Luxury room
Customer A: doesn't offer any upgrade; Customer B: doesn't offer any upgrade.
Customer A: offers upgrade to Gold Luxury room; Customer B: doesn't offer any upgrade
Customer A: offers upgrade to Silver room; Customer B: offers upgrade to Silver room.
According to the decision table in the image, a Preferred Guest Card holder with a Silver room is eligible for an upgrade to Gold Luxury (YES), while a non-Preferred Guest Card holder, regardless of room type, is not eligible for any upgrade (NO). Therefore, Customer A (a Preferred Guest Card holder with a Silver room) would be offered an upgrade to Gold Luxury, and Customer B (a non-Preferred Guest Card holder with a Platinum room) would not be offered any upgrade. References: The answer is derived directly from the decision table provided in the image; specific ISTQB Certified Tester Foundation Level (CTFL) v4.0 documents are not referenced.
The following chart represents metrics related to testing of a project that was completed. Indicate what is represented by the lines A, B and the axes X, Y.
A)
B)
C)
D)
Option A
Option B
Option C
Option D
Option D correctly explains what is represented by the lines A, B and the axes X, Y in a testing metrics chart. According to option D:
X-axis represents Time
Y-axis represents Count
Line A represents Number of open bugs
Line B represents Total number of executed tests
This information is essential in understanding and analyzing the testing metrics of a completed project.
References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 Syllabus, Section 2.5.1, Page 35.
Which of the following answers describes a reason for adopting experience-based testing techniques?
Experience-based test techniques provide more systematic coverage criteria than black-box and white-box test techniques
Experience-based test techniques completely rely on the tester's past experience for designing test cases
Experience-based test techniques allow designing test cases that are usually easier to reproduce than those designed with black-box and white-box test techniques
Experience-based test techniques tend to find defects that may be difficult to find with black-box and white-box test techniques and are often useful to complement these more systematic techniques
Experience-based testing techniques are adopted for several reasons, most importantly because they can identify defects that might be missed by more systematic approaches like black-box and white-box testing. While systematic techniques follow predefined procedures and cover specific criteria, experience-based techniques leverage the tester's knowledge, intuition, and experience, which can be especially effective in uncovering subtle and complex issues.
Experience-based testing techniques include methods such as error guessing and exploratory testing. These methods rely on the tester’s background and intuition to predict where defects might be located and how the system might fail. These techniques are particularly useful in situations where the requirements and specifications are incomplete or ambiguous, and where creative and ad-hoc approaches can provide significant value.
References:
The official ISTQB® CTFL syllabus emphasizes that experience-based techniques can find defects that more systematic techniques might miss, which makes them valuable complements to other testing methods.
The fact that defects are usually not evenly distributed among the various modules that make up a software application, but rather their distribution tends to reflect the Pareto principle:
is a false myth
is expressed by the testing principle referred to as 'Tests wear out'
is expressed by the testing principle referred to as 'Defects cluster together'
is expressed by the testing principle referred to as 'Bug prediction'
The fact that defects are usually not evenly distributed among the various modules that make up a software application, but rather their distribution tends to reflect the Pareto principle, is expressed by the testing principle referred to as ‘Defects cluster together’. This principle states that a small number of modules contain most of the defects detected, or that a small number of causes are responsible for most of the defects. This principle can be used to guide the test analysis and design activities, by prioritizing the testing of the most critical or risky modules, or by applying more rigorous test techniques to them. Therefore, option C is the correct answer.
References: ISTQB® Certified Tester Foundation Level Syllabus v4.0, Section 1.2.1, page 11; ISTQB® Glossary v4.0, page 16.
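Defect clustering can be illustrated with made-up per-module defect counts; the module names and numbers below are hypothetical, chosen so that a small fraction of modules holds most of the defects:

```python
from collections import Counter

# Hypothetical defect counts per module, illustrating defect clustering.
defects = Counter({
    "billing": 45, "auth": 35,            # two "hot" modules
    "ui": 6, "reports": 5, "search": 4,
    "export": 3, "admin": 1, "help": 1,
})

total = sum(defects.values())                       # 100 defects overall
top_two = sum(n for _, n in defects.most_common(2))
assert total == 100
assert top_two / total == 0.8   # 2 of 8 modules (25%) hold 80% of defects
```

A distribution like this is what motivates focusing test effort on the historically defect-dense modules.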
Calculate the measurement error SD for the following estimates made using the three-point estimation technique:
Most optimistic effort (a): 120 person days
Most likely effort (m): 180 person days
Most pessimistic effort (b): 240 person days
20
180
197
120
In the three-point estimation technique, the measurement error (standard deviation) is calculated as SD = (b - a) / 6. With a = 120 person days and b = 240 person days, SD = (240 - 120) / 6 = 20 person days. The expected effort, E = (a + 4m + b) / 6 = (120 + 720 + 240) / 6 = 180 person days, corresponds to another option, but the question asks for the measurement error. Therefore, 20 is the correct answer.
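The arithmetic can be checked with a few lines implementing the standard PERT/three-point formulas:

```python
def three_point_estimate(a, m, b):
    # Standard three-point (PERT) formulas:
    #   expected effort    E  = (a + 4m + b) / 6
    #   standard deviation SD = (b - a) / 6
    expected = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return expected, sd

e, sd = three_point_estimate(120, 180, 240)
assert e == 180.0   # expected effort in person days
assert sd == 20.0   # measurement error: the answer to this question
```

Note how the distractor options correspond to other quantities: 180 is the expected effort and 120 is the optimistic estimate itself.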
Which of the following statements about the shift-left approach is false?
The shift-left approach can only be implemented with test automation
The shift-left approach in testing is compatible with DevOps practices
The shift-left approach can involve security vulnerabilities
The shift-left approach can be supported by static analysis tools
The statement that the shift-left approach can only be implemented with test automation is false. The shift-left approach emphasizes early testing activities in the software development lifecycle to detect and address defects as soon as possible. While test automation can support shift-left practices, it is not the only method. The shift-left approach can also involve practices such as static analysis, early requirement reviews, and integrating security vulnerability assessments early in the development process.
Which of the following is an advantage of the whole team approach?
It helps avoid the risk of tasks associated with a user story not moving through the Agile task board at an acceptable rate during an iteration
It helps team members understand the current status of an iteration by visualizing the amount of work left to do compared to the time allotted for the iteration
It helps the whole team be more effective in test case design by requiring all team members to master all types of test techniques
It helps team members develop better relationships with each other and make their collaboration more effective for the benefit of the project
The "whole team approach" in Agile methodologies emphasizes collaboration and communication among all team members, including developers, testers, and business representatives. This approach fosters better relationships and effective collaboration, which ultimately benefits the project by leveraging diverse skills and perspectives. It helps ensure that everyone is aligned with the project's goals and quality standards, thus improving the overall effectiveness and efficiency of the team.
Which ONE of the following elements is TYPICALLY part of a test plan?
The budget and schedule for the test project.
A detailed analysis of the defects found and their causes.
A detailed report on the test results after the test project is completed.
A list of test logs from the test execution.
A test plan is a management document that outlines the scope, objectives, schedule, resources, and risks of the testing process. The budget and schedule (A) are essential components as they help plan resources and timeline constraints.
(B) is incorrect because defect analysis is part of the test summary report, not the test plan.
(C) is incorrect because final reports summarize execution, while the test plan is created before testing starts.
(D) is incorrect because test logs are execution artifacts rather than planning elements.
A test plan guides testing activities and ensures alignment with project objectives.
Which of the following are the phases of the ISTQB fundamental test process?
Test planning and control, Test analysis and design, Test implementation and execution, Evaluating exit criteria and reporting, Test closure activities
Test planning, Test analysis and design, Test implementation and control, Checking test coverage and reporting, Test closure activities
Test planning and control, Test specification and design, Test implementation and execution, Evaluating test coverage and reporting, Retesting and regression testing, Test closure activities
Test planning, Test specification and design, Test implementation and execution, Evaluating exit criteria and reporting, Retesting and test closure activities
The ISTQB fundamental test process consists of five main phases, as described in the ISTQB Foundation Level Syllabus, Version 4.0, 2018, Section 2.2, page 15:
Test planning and control: This phase involves defining the test objectives, scope, strategy, resources, schedule, risks, and metrics, as well as monitoring and controlling the test activities and results throughout the test process.
Test analysis and design: This phase involves analyzing the test basis (such as requirements, specifications, or user stories) to identify test conditions (such as features, functions, or scenarios) that need to be tested, and designing test cases and test procedures (such as inputs, expected outcomes, and execution steps) to cover the test conditions. This phase also involves evaluating the testability of the test basis and the test items (such as software or system components), and selecting and implementing test techniques (such as equivalence partitioning, boundary value analysis, or state transition testing) to achieve the test objectives and optimize the test coverage and efficiency.
Test implementation and execution: This phase involves preparing the test environment (such as hardware, software, data, or tools) and testware (such as test cases, test procedures, test data, or test scripts) for test execution, and executing the test procedures or scripts according to the test plan and schedule. This phase also involves logging the outcome of test execution, comparing the actual results with the expected results, and reporting any discrepancies as incidents (such as defects, errors, or failures).
Evaluating exit criteria and reporting: This phase involves checking if the planned test activities have been completed and the exit criteria (such as quality, coverage, or risk levels) have been met, and reporting the test results and outcomes to the stakeholders. This phase also involves making recommendations for the release or acceptance decision based on the test results and outcomes, and identifying any residual risks (such as known defects or untested areas) that need to be addressed or mitigated.
Test closure activities: This phase involves finalizing and archiving the testware and test environment for future reuse, and evaluating the test process and the test project against the test objectives and the test plan. This phase also involves identifying any lessons learned and best practices, and communicating the findings and suggestions for improvement to the relevant parties.
References = ISTQB Certified Tester Foundation Level Syllabus, Version 4.0, 2018, Section 2.2, page 15; ISTQB Glossary of Testing Terms, Version 4.0, 2018, pages 37-38; ISTQB CTFL 4.0 - Sample Exam - Answers, Version 1.1, 2023, Question 88, page 32.
Match the Work Product with the category it belongs to:
Work Product:
1. Risk register
2. Risk information
3. Test cases
4. Test conditions
Category of work products:
A. Test planning work products
B. Test analysis work products
C. Test design work products
D. Test monitoring and control work products
1-C, 2-A, 3-D, 4-B
1-A, 2-C, 3-B, 4-D
1-A, 2-D, 3-C, 4-B
Risk register is a test planning work product as it documents identified risks and their mitigation strategies.
Risk information falls under test monitoring and control work products as it involves ongoing evaluation and reporting of risks.
Test cases are part of test design work products as they are derived from test conditions and designed to execute the testing scenarios.
Test conditions belong to test analysis work products as they define the items or events of a system that are to be tested.
Which of the following is an example of scenario-oriented acceptance criteria?
Verify that a registered user can add a new project with a name having more than 100 characters
An unregistered user shouldn't be shown any report.
The user should be able to provide three inputs to test the product: the AI model to be tested, the data used, and an optional text file
If a user is already logged in, then on navigating to the AI model testing page, the user should be directly shown the report of the last test run.
Scenario-oriented acceptance criteria describe how a system should behave in a specific situation or scenario. These criteria are typically written from the end-user's perspective and focus on user interactions and system responses. Option D fits this description as it outlines a specific scenario where a user is already logged in and describes the expected behavior when the user navigates to a particular page, which is to show the report of the last test run. This type of criterion ensures that the system meets user expectations in that scenario.
Which of the following statements refers to good testing practice to be applied regardless of the chosen software development model?
Tests should be written in executable format before the code is written and should act as executable specifications that drive coding
Test levels should be defined such that the exit criteria of one level are part of the entry criteria for the next level
Test objectives should be the same for all test levels, although the number of tests designed at various levels can vary significantly
Involvement of testers in work product reviews should occur as early as possible to take advantage of the early testing principle
The statement that refers to good testing practice to be applied regardless of the chosen software development model is option D, which says that involvement of testers in work product reviews should occur as early as possible to take advantage of the early testing principle. Work product reviews are static testing techniques, in which the work products of the software development process, such as the requirements, the design, the code, the test cases, etc., are examined by one or more reviewers, with or without the author, to identify defects, violations, or improvements. Involvement of testers in work product reviews can provide various benefits for the testing process, such as improving the test quality, the test efficiency, and the test communication. The early testing principle states that testing activities should start as early as possible in the software development lifecycle, and should be performed iteratively and continuously throughout the lifecycle. Applying the early testing principle can help to prevent, detect, and remove defects at an early stage, when they are easier, cheaper, and faster to fix, as well as to reduce the risk, the cost, and the time of the testing process. The other options are not good testing practices to be applied regardless of the chosen software development model, but rather specific testing practices that may or may not be applicable or beneficial for testing, depending on the context and the objectives of the testing activities, such as:
Tests should be written in executable format before the code is written and should act as executable specifications that drive coding: This is a specific testing practice that is associated with test-driven development, which is an approach to software development and testing, in which the developers write automated unit tests before writing the source code, and then refactor the code until the tests pass. Test-driven development can help to improve the quality, the design, and the maintainability of the code, as well as to provide fast feedback and guidance for the developers. However, test-driven development is not a good testing practice to be applied regardless of the chosen software development model, as it may not be feasible, suitable, or effective for testing in some contexts or situations, such as when the requirements are unclear, unstable, or complex, when the test automation tools or skills are not available or adequate, when the testing objectives or levels are not aligned with the unit testing, etc.
Test levels should be defined such that the exit criteria of one level are part of the entry criteria for the next level: This is a specific testing practice that is associated with sequential software development models, such as the waterfall model, the V-model, or the W-model, in which the software development and testing activities are performed in a linear and sequential order, with well-defined phases, deliverables, and dependencies. Test levels are the stages of testing that correspond to the levels of integration of the software system, such as component testing, integration testing, system testing, and acceptance testing. Test levels should have clear and measurable entry criteria and exit criteria, which are the conditions that must be met before starting or finishing a test level. In sequential software development models, the exit criteria of one test level are usually part of the entry criteria for the next test level, to ensure that the software system is ready and stable for the next level of testing. However, this is not a good testing practice to be applied regardless of the chosen software development model, as it may not be relevant, flexible, or efficient for testing in some contexts or situations, such as when the software development and testing activities are performed in an iterative and incremental order, with frequent changes, feedback, and adaptations, as in agile software development models, such as Scrum, Kanban, or XP, when the test levels are not clearly defined or distinguished, or when the test levels are performed in parallel or concurrently, etc.
Test objectives should be the same for all test levels, although the number of tests designed at various levels can vary significantly: This is a specific testing practice that is associated with uniform software development models, such as the spiral model, the incremental model, or the prototyping model, in which the software development and testing activities are performed in a cyclical and repetitive manner, with similar phases, deliverables, and processes. Test objectives are the goals or the purposes of testing, which can vary depending on the test level, the test type, the test technique, the test environment, the test stakeholder, etc. Test objectives can be defined in terms of the test basis, the test coverage, the test quality, the test risk, the test cost, the test time, etc. Test objectives should be specific, measurable, achievable, relevant, and time-bound, and they should be aligned with the project objectives and the quality characteristics. In uniform software development models, the test objectives may be the same for all test levels, as the testing process is repeated for each cycle or iteration, with similar focus, scope, and perspective of testing. 
However, this is not a good testing practice to be applied regardless of the chosen software development model, as it may not be appropriate, realistic, or effective for testing in some contexts or situations, such as when the software development and testing activities are performed in a hierarchical and modular manner, with different phases, deliverables, and dependencies, as in sequential software development models, such as the waterfall model, the V-model, or the W-model, when the test objectives vary according to the test levels, such as component testing, integration testing, system testing, and acceptance testing, or when the test objectives change according to the feedback, the learning, or the adaptation of the testing process, as in agile software development models, such as Scrum, Kanban, or XP, etc. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.1.1, Testing and the Software Development Lifecycle1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.2.1, Testing Principles1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.2.2, Testing Policies, Strategies, and Test Approaches1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.3.1, Testing in Software Development Lifecycles1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.1, Test Planning1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.2, Test Monitoring and Control1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.3, Test Analysis and Design1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.4, Test Implementation1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.5, Test Execution1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.6, Test Closure1
ISTQB® Glossary of Testing Terms v4.0, Work Product Review, Static Testing, Early Testing, Test-driven Development, Test Level, Entry Criterion, Exit Criterion, Test Objective, Test Basis, Test Coverage, Test Quality, Test Risk, Test Cost, Test Time2
Which of the following statements is true?
Unlike functional testing, non-functional testing can only be applied to conventional systems, not artificial intelligence-based systems
Functional testing focuses on what the system is supposed to do, while white-box testing focuses on how well the system does what it is supposed to do
Functional testing can be applied to all test levels, while non-functional testing can be applied only to system and acceptance test levels
Black-box test techniques and experience-based test techniques may be applicable to both functional testing and non-functional testing
Both black-box test techniques and experience-based test techniques can be applied to functional and non-functional testing. Functional testing focuses on what the system does, while non-functional testing examines how the system performs. These techniques provide flexible and effective methods for assessing various aspects of the system.
References:
ISTQB® CTFL Syllabus 4.0, Chapter 4.4, page 42: Experience-Based Testing Techniques
Which ONE of the following options MOST ACCURATELY describesbranch testing?
In branch testing, the coverage items are executable statements. The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved, expressed as a percentage.
In branch testing, the coverage items are control flow transfers between decisions, and the aim is to design test cases to exercise flow transfers in the code until an acceptable level of coverage is achieved. Coverage is measured as the number of branches exercised by the test cases divided by the total number of branches expressed as a percentage.
In branch testing, the coverage items are branches, and the aim is to design test cases to exercise branches in the code until an acceptable level of coverage is achieved. Coverage is measured as the number of branches exercised by the test cases divided by the total number of branches expressed as a percentage.
In branch testing, the coverage items are executable decisions. The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved. Coverage is expressed as a percentage.
Branch testing is a structural testing technique that ensures each branch (decision point) in the control flow is executed at least once. The goal is to measure branch coverage, which is the number of branches exercised divided by the total number of branches.
(A) describes statement testing, not branch testing.
(B) and (D) introduce confusion between decisions and statements, whereas branch testing focuses on control flow branches.
In simple terms, branch testing checks that all possible decision outcomes (true/false) are executed, whereas statement testing only ensures that each line of code is executed.
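The distinction above can be illustrated with a small sketch. The `senior_discount` function below is purely hypothetical; it shows how one test can reach 100% statement coverage while leaving a branch unexercised.

```python
def senior_discount(age):
    discount = 0
    if age >= 65:       # one decision -> two branches (True and False)
        discount = 10
    return discount

# A single test with age = 70 executes every statement
# (100% statement coverage)...
assert senior_discount(70) == 10

# ...but only the True branch of the decision. Full branch coverage
# needs a second test that takes the False branch:
assert senior_discount(30) == 0
print("both branches exercised")
```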
Which ONE of the following work products TYPICALLY belongs to test execution?
Test logs that document the results of test execution.
Automated test scripts used for test execution.
A test plan that describes the test strategy and test objectives.
A list of test conditions prioritized during test analysis.
Test execution involves running test cases and documenting results. Test logs (A) provide evidence of executed tests, failures, and actual outcomes. Automated test scripts (B) are part of test implementation, test plans (C) belong to test planning, and test conditions (D) are identified during test analysis.
Which of the following statements about white-box test techniques is true?
Achieving full statement coverage and full branch coverage for a software product means that such software product has been fully tested and there are no remaining bugs within the code
Code-related white-box test techniques are not required to measure the actual code coverage achieved by black-box testing, as code coverage can be measured using the coverage criteria associated with black-box test techniques
Branch coverage is the most thorough code-related white-box test technique, and therefore applicable standards prescribe achieving full branch coverage at the highest safety levels for safety-critical systems
Code-related white-box test techniques provide an objective measure of coverage and can be used to complement black-box test techniques to increase confidence in the code
This answer is correct because code-related white-box test techniques are test design techniques that use the structure of the code to derive test cases. They provide an objective measure of coverage, such as statement coverage, branch coverage, or path coverage, which indicate how much of the code has been exercised by the test cases. Code-related white-box test techniques can be used to complement black-box test techniques, which are test design techniques that use the functional or non-functional requirements of the system or component to derive test cases. By combining both types of techniques, testers can increase their confidence in the code and find more defects. References: ISTQB Glossary of Testing Terms v4.0, ISTQB Foundation Level Syllabus v4.0, Section 2.3.2.2
Consider the following user story about an e-commerce website's registration feature that only allows registered users to make purchases:
“As a new user, I want to register to the website, so that I can start shopping online”
The following are some of the acceptance criteria defined for the user story:
[a] The registration form consists of the following fields: username, email address, first name, last name, date of birth, password and repeat password
[b] To submit the registration request, the new user must fill in all the fields of the registration form with valid values and must agree to the terms and conditions
[c] To be valid, the email address must not be provided by free online mail services that allow the creation of disposable email addresses. A dedicated error message must be presented to inform the new user when an invalid address is entered
[d] To be valid, the first name and last name must contain only alphabetic characters and must be between 2 and 80 characters long. A dedicated error message must be presented to inform the new user when an invalid first name and/or last name is entered
[e] After submitting the registration request, the new user must receive an e-mail containing the confirmation link to the e-mail address specified in the registration form
Based only on the given information, which of the following ATDD tests is most likely to be written first?
The new user enters valid values in the fields of the registration form, except for the email address, where he/she enters an e-mail address provided by a free online mail service that allows the creation of disposable email addresses. Then he/she is informed by the website about this issue
The new user enters valid values in the fields of the registration form, except for the first name, where he/she enters a first name with 10 characters that contains a number. Then he/she is informed by the website about this issue
The user accesses the website with username and password, and successfully places a purchase order for five items, paying by Mastercard credit card EV
The new user enters valid values in all the fields of the registration form, confirms to accept all the terms and conditions, submits the registration request and then receives an e-mail containing the confirmation link to the e-mail address specified in the registration form
Based on the given user story and acceptance criteria, the ATDD (Acceptance Test-Driven Development) approach focuses on defining acceptance tests before development begins. The first test written typically covers the "happy path" or the most straightforward scenario to ensure the primary functionality works as expected.
Given the acceptance criteria:
The registration form must be filled with valid values.
The user must accept terms and conditions.
An email with a confirmation link must be sent after submission.
The most likely first ATDD test would ensure that a new user can successfully register by filling in all fields with valid data and confirming the registration through an email link. This ensures that the basic and most crucial functionality of the registration feature is working correctly before handling edge cases or error conditions.
References:
ISTQB CTFL Syllabus Section 2.3 on acceptance test-driven development (ATDD).
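The happy-path acceptance criterion can be written as an executable check before any production code exists. The sketch below is hedged: `register` and the in-memory `sent_emails` outbox are hypothetical stand-ins for the real website, used only to show the shape of the first ATDD test.

```python
# Hypothetical in-memory stand-ins for the real registration service.
sent_emails = []

def register(username, email, first, last, dob, password, repeat,
             accept_terms):
    """Minimal fake: accept the request when all fields are filled,
    passwords match, and terms are accepted; then 'send' a confirmation."""
    fields = [username, email, first, last, dob, password, repeat]
    if all(fields) and password == repeat and accept_terms:
        sent_emails.append({"to": email, "body": "confirmation link"})
        return True
    return False

# First ATDD test (happy path): valid values in all fields, terms
# accepted, registration submitted, confirmation e-mail received.
ok = register("jdoe", "jdoe@example.com", "John", "Doe",
              "1990-01-01", "s3cret!", "s3cret!", accept_terms=True)
assert ok
assert sent_emails[0]["to"] == "jdoe@example.com"
print("happy-path acceptance test passed")
```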
Which of the following statements about impact of DevOps on testing is CORRECT?
DevOps helps bring focus on testing of non-functional characteristics
DevOps helps shift focus of testing people to perform operations testing
DevOps helps shift focus of operations people to functional testing
DevOps helps eliminate manual testing by having focus on continuous automated testing
DevOps practices emphasize the importance of testing non-functional characteristics such as performance, security, and reliability. This focus ensures that the system not only meets functional requirements but also performs well under various conditions and is secure. DevOps promotes a continuous testing approach, which includes both functional and non-functional testing integrated into the development and deployment pipelines.
References:
ISTQB CTFL Syllabus V4.0, Section 2.1.4 on DevOps and testing, which highlights the role of DevOps in emphasizing non-functional characteristics.
Which of the following is a test task that usually occurs during test implementation?
Make sure the planned test environment is ready to be delivered
Find, analyze, and remove the causes of the failures highlighted by the tests
Archive the testware for use in future test projects
Gather the metrics that are used to guide the test project
A test task that usually occurs during test implementation is to make sure the planned test environment is ready to be delivered. The test environment is the hardware and software configuration on which the tests are executed, and it should be as close as possible to the production environment where the software system will operate. The test environment should be planned, prepared, and verified before the test execution, to ensure that the test conditions, the test data, the test tools, and the test interfaces are available and functional. The other options are not test tasks that usually occur during test implementation, but rather test tasks that occur during other test activities, such as:
Find, analyze, and remove the causes of the failures highlighted by the tests: This is a test task that usually occurs during test analysis and design, which is the activity of analyzing the test basis, designing the test cases, and identifying the test data. During this activity, the testers can use techniques such as root cause analysis, defect prevention, or defect analysis, to find, analyze, and remove the causes of the failures highlighted by the previous tests, and to prevent or reduce the occurrence of similar failures in the future tests.
Archive the testware for use in future test projects: This is a test task that usually occurs during test closure, which is the activity of finalizing and reporting the test results, evaluating the test process, and identifying the test improvement actions. During this activity, the testers can archive the testware, which are the test artifacts produced during the testing process, such as the test plan, the test cases, the test data, the test results, the defect reports, etc., for use in future test projects, such as regression testing, maintenance testing, or reuse testing.
Gather the metrics that are used to guide the test project: This is a test task that usually occurs during test monitoring and control, which is the activity of tracking and reviewing the test progress, status, and quality, and taking corrective actions when necessary. During this activity, the testers can gather the metrics, which are the measurements of the testing process, such as the test coverage, the defect density, the test effort, the test duration, etc., that are used to guide the test project, such as planning, estimating, scheduling, reporting, or improving the testing process. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.1, Test Planning1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.2, Test Monitoring and Control1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.3, Test Analysis and Design1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.4, Test Implementation1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.5, Test Execution1
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.6, Test Closure1
ISTQB® Glossary of Testing Terms v4.0, Test Environment, Test Condition, Test Data, Test Tool, Test Interface, Failure, Root Cause Analysis, Defect Prevention, Defect Analysis, Testware, Regression Testing, Maintenance Testing, Reuse Testing, Test Coverage, Defect Density, Test Effort, Test Duration2
Which of the following statements about how different types of test tools support testers is true?
The support offered by a test data preparation tool is often leveraged by testers to run automated regression test suites
The support offered by a performance testing tool is often leveraged by testers to run load tests
The support offered by a bug prediction tool is often used by testers to track the bugs they found
The support offered by a continuous integration tool is often leveraged by testers to automatically generate test cases from a model
The support offered by a performance testing tool is often leveraged by testers to run load tests, which are tests that simulate a large number of concurrent users or transactions on the system under test, in order to measure its performance, reliability, and scalability. Performance testing tools can help testers to generate realistic workloads, monitor system behavior, collect and analyze performance metrics, and identify performance bottlenecks. The other statements are false, because:
A test data preparation tool is a tool that helps testers to create, manage, and manipulate test data, which are the inputs and outputs of test cases. Test data preparation tools are not directly related to running automated regression test suites, which are test suites that verify that the system still works as expected after changes or modifications. Regression test suites are usually executed by test execution tools, which are tools that can automatically run test cases and compare actual results with expected results.
A bug prediction tool is a tool that uses machine learning or statistical techniques to predict the likelihood of defects in a software system, based on various factors such as code complexity, code churn, code coverage, code smells, etc. Bug prediction tools are not used by testers to track the bugs they found, which are the actual defects that have been detected and reported during testing. Bugs are usually tracked by defect management tools, which are tools that help testers to record, monitor, analyze, and resolve defects.
A continuous integration tool is a tool that enables the integration of code changes from multiple developers into a shared repository, and the execution of automated builds and tests, in order to ensure the quality and consistency of the software system. Continuous integration tools are not used by testers to automatically generate test cases from a model, which are test cases that are derived from a representation of the system under test, such as a state diagram, a decision table, a use case, etc. Test cases can be automatically generated by test design tools, which are tools that support the implementation and maintenance of test cases, based on test design specifications or test models. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 3.4.1, Types of Test Tools
ISTQB® Glossary of Testing Terms v4.0, Performance Testing Tool, Test Data Preparation Tool, Bug Prediction Tool, Continuous Integration Tool, Test Execution Tool, Defect Management Tool, Test Design Tool
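The core idea a performance testing tool automates (simulating concurrent virtual users and collecting response times) can be sketched in plain Python. This is not a real load-testing tool; `operation_under_test` is a hypothetical stand-in for a request to the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def operation_under_test():
    # Stand-in for a real request to the system under test.
    time.sleep(0.01)

def run_load_test(virtual_users=20, requests_per_user=5):
    """Fire concurrent 'virtual users' at an operation and collect
    per-request response times."""
    timings = []
    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            operation_under_test()
            timings.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(virtual_users):
            pool.submit(user)
    # The 'with' block waits for all users to finish before returning.
    return timings

timings = run_load_test()
print(f"{len(timings)} requests, slowest: {max(timings):.4f}s")
```

A real tool adds what this sketch lacks: realistic workload models, monitoring of the system's resources, and aggregated metrics such as percentiles and throughput.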
Test automation allows you to:
demonstrate the absence of defects
produce tests that are less subject to human errors
avoid performing exploratory testing
increase test process efficiency by facilitating management of defects
Test automation allows you to produce tests that are less subject to human errors, as they can execute predefined test scripts or test cases with consistent inputs, outputs, and expected results. Test automation can also reduce the manual effort and time required to execute repetitive or tedious tests, such as regression tests, performance tests, or data-driven tests. Test automation does not demonstrate the absence of defects, as it can only verify the expected behavior of the system under test, not the unexpected or unknown behavior. Test automation does not avoid performing exploratory testing, as exploratory testing is a valuable technique to discover new information, risks, or defects that are not covered by automated tests. Test automation does not increase test process efficiency by facilitating management of defects, as defect management is a separate activity that involves reporting, tracking, analyzing, and resolving defects, which may or may not be related to automated tests. References: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 3.3.1, Test Automation1
ISTQB® Glossary of Testing Terms v4.0, Test Automation2
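The "consistent inputs, outputs, and expected results" point can be made concrete with a data-driven check: the same predefined cases run identically on every execution, removing the keying errors a human tester might make. The `normalize_username` function here is a hypothetical system under test.

```python
# Hypothetical function under test.
def normalize_username(name):
    return name.strip().lower()

# Predefined (input, expected) pairs: the automated suite replays these
# identically on every run, e.g. as a regression suite.
cases = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

for raw, expected in cases:
    assert normalize_username(raw) == expected
print("all regression cases passed")
```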