The ever-evolving ISTQB glossary can be found here.
A |
|
A/B Testing | A method of comparing two versions of a web page or application to determine which performs better in terms of user engagement, conversions, or other metrics. |
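For illustration, one common way to assign users to the two variants is deterministic hash-based bucketing, so the same user always sees the same version. This is a minimal Python sketch; the experiment name, the 50/50 split, and the `assign_variant` function are illustrative assumptions, not part of the definition above.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "checkout-button") -> str:
    """Deterministically assign a user to variant 'A' or 'B' (50/50 split)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable bucket in the range 0..99
    return "A" if bucket < 50 else "B"

print(assign_variant("user-42"))  # the same user always lands in the same variant
```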
Abstract Test Case | A high-level test case that outlines what is to be tested without specifying concrete input values or expected results. |
Acceptance Criteria | A set of conditions that a product or feature must meet to be considered complete and satisfactory by the customer or end user. |
Acceptance Testing | A type of testing that verifies whether a product or feature meets its acceptance criteria and is ready for deployment to production. |
Accessibility Testing | A type of testing that evaluates a product’s usability and accessibility for users with disabilities. |
Ad-Hoc Reports | Custom reports generated on the fly to answer specific business questions or address unexpected issues. |
Ad-Hoc Testing | A type of testing that is performed informally and spontaneously, often without a predefined test plan or test cases. Closely related to, though less structured than, exploratory testing. |
Agile Manifesto | A statement of values and principles for Agile software development, emphasising customer collaboration, flexibility, and rapid iteration. |
Agile Manifesto Principles | A set of 12 guiding principles for Agile software development, emphasising collaboration, flexibility, and customer satisfaction over rigid processes and documentation. |
Agile Manifesto Values | The four key values of the Agile Manifesto: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan. |
Agile Methodology | An iterative and collaborative approach to software development, emphasising customer feedback, continuous improvement, and rapid adaptation to changing requirements. |
Agile Model | A software development model that emphasises close collaboration between developers and customers, iterative development, and rapid feedback loops. |
Alpha Testing | A phase of software testing where a small group of users, often within the development team, test a pre-release version of the software to identify and report bugs, defects, and usability issues. |
Artificial Intelligence (AI) | The simulation of human intelligence in machines that are programmed to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. |
API Testing | A type of testing that focuses on evaluating the functionality, reliability, and security of an application programming interface (API). |
Architecture | The overall design and structure of a software system, including its components, interfaces, and relationships. |
Audit Trails | Records that document all the activities and transactions within a software system, often used for compliance or security purposes. |
Authentication | The process of verifying the identity of a user or system accessing a software system or application. |
Authorisation | The process of granting or denying access to specific resources or functionalities within a software system or application. |
Automated Testing | The use of software tools to automate the execution of tests and the comparison of actual results to expected results. |
Automated Testing Tools | Software tools designed to automate the execution and management of testing activities, such as test case management, test automation, and performance testing. |
Availability Testing | A type of testing that evaluates the ability of a software system or application to remain available and responsive under various conditions, such as high traffic, network disruptions, or hardware failures. |
B |
|
Backup And Recovery | The process of creating and storing copies of important data or system components to ensure they can be restored in the event of data loss or system failure. |
Backup Retention | The length of time backups are kept before they are deleted or replaced. |
Baseline Testing | The initial round of testing performed on a product or system to establish a baseline of performance or behaviour. |
BDD (Behaviour-Driven Development) | A software development methodology that emphasises collaboration between developers, testers, and business stakeholders to ensure that software behaviour aligns with business requirements and user needs. |
Benchmark Testing | A type of testing that compares the performance or capabilities of a system or component to established standards or competitors. |
Beta Testing | A type of testing that involves releasing a pre-release version of a product to a select group of external users for real-world testing and feedback. |
Big Data | Large and complex sets of data that require specialised techniques and technologies to manage, analyse, and derive insights from. |
Big-Bang Approach | A software development approach that involves releasing an entire system or product at once, rather than in stages or increments. |
Black Box | A testing approach that treats the system or component as a “black box,” focusing on testing its inputs, outputs, and behaviour without regard to its internal workings. |
Bottom-Up Approach | A software development approach that begins with building and testing smaller, individual components before integrating them into larger systems. |
Boundary Value Analysis | A testing technique that evaluates the behaviour of a system or component at or near its defined boundaries or limits. |
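For example, if a field accepts ages from 18 to 65 inclusive, boundary value analysis suggests testing at and immediately around both limits. A minimal pytest-style sketch; the `is_valid_age` rule and the 18–65 range are hypothetical.

```python
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: ages 18..65 inclusive are valid."""
    return 18 <= age <= 65

# Test at, just below, and just above each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False), (18, True), (19, True),   # lower boundary
    (64, True), (65, True), (66, False),   # upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) is expected
```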
Branch Coverage Testing | A testing technique that measures the percentage of code branches executed by a test suite, to ensure complete coverage of a system’s logic. |
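As a sketch, the function below has two branches; the two tests together exercise both, giving 100% branch coverage. A tool such as coverage.py can measure this when run with branch measurement enabled (e.g. `coverage run --branch -m pytest`). The function and figures are invented for illustration.

```python
def apply_discount(total: float, is_member: bool) -> float:
    if is_member:              # branch taken when is_member is True
        return total * 0.9
    return total               # branch taken when is_member is False

def test_member_branch():
    assert apply_discount(100.0, True) == 90.0

def test_non_member_branch():
    assert apply_discount(100.0, False) == 100.0
```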
BrowserStack | A cloud-based cross-browser testing tool that allows developers and testers to test their web applications on a variety of browsers and devices. |
Bug | An error or defect in a software system or application that causes it to behave in unexpected or incorrect ways. |
Bug Fix Report | A document that details a bug or defect found in a software system or application and the steps taken to correct it. |
Bug Leakage | The unintended release of a bug or defect into a production environment or to end-users. |
Business Policies & Procedures | A set of guidelines and rules that govern how a company conducts its operations and business activities. |
Business Rules | The specific rules or constraints that apply to a business process or application, typically defined by business stakeholders or subject matter experts. |
C |
|
Change Control Board (CCB) | A group of stakeholders responsible for reviewing and approving proposed changes to a software application or system, with the goal of ensuring that changes are made in a controlled and structured manner, without adversely affecting the software’s functionality, reliability, and quality. |
Chrome DevTools | A set of web developer tools built into the Google Chrome browser that allows developers to debug and diagnose issues in their web applications. |
CI/CD | Continuous Integration/Continuous Delivery is a set of practices that enable frequent, automated testing and delivery of software updates. |
Client | In software development, a client is a user or application that consumes or interacts with a software system or service. |
Cloud Computing | The delivery of computing services, including servers, storage, databases, networking, software, and analytics, over the internet (i.e., the “cloud”). |
Cloud-Based Testing | Testing conducted using cloud-based infrastructure or tools, typically for scalability, flexibility, and cost-effectiveness. |
Code Analysis | The process of reviewing and analysing software code for potential issues, including security vulnerabilities, performance bottlenecks, and code smells. |
Code Coverage | The measure of how much of a software application’s source code is executed by a given test suite. |
Code Review | A process of reviewing and analysing code for potential issues, such as code quality, maintainability, and adherence to coding standards and best practices. |
Compatibility Testing | A type of testing that ensures that software applications or systems function correctly across different hardware, software, and network configurations. |
Compliance | The adherence to laws, regulations, industry standards, or other requirements related to software development or operation. |
Compliance Testing | A type of testing that verifies whether a product or system meets specific regulations, standards, or legal requirements. |
Concrete Test Case | Low-level detailed step-by-step tests with concrete input values and test data that lead to certain expected results. |
Component Testing | A type of testing that focuses on verifying the functionality, reliability, and performance of individual software components or modules. |
Condition Coverage | A testing technique that evaluates the behaviour of a system or component under various logical conditions or states. |
Configuration Testing | A type of testing that ensures that software systems or applications function correctly under different configuration settings or parameters. |
Configuration Management | Configuration management, often called “config management,” is a process that focuses on keeping software and systems organised and consistent. It involves tracking and controlling changes to code, hardware, or other technical components. Config management tools like Puppet or Ansible help automate tasks, making it easier for developers and sysadmins to maintain stability and predictability in their IT environments. |
Consortium For Information and Software Quality (CISQ) | An industry consortium that provides standards and best practices for software quality, measurement, and analysis. |
CSS | CSS (Cascading Style Sheets). A stylesheet language used to control the presentation and layout of web pages written in HTML. With CSS, you can change colours, fonts, spacing, and more, making web content look visually appealing and user-friendly. CSS works together with HTML to bring web design to life. |
CTA | A Call to Action (CTA) is a phrase or button used in marketing that prompts the user to take a specific action, such as making a purchase or signing up for a mailing list. It can take the form of a button, link, or text message, and is designed to encourage engagement with a product or service. |
Customer | In software development, a customer is a person or organisation that commissions, funds, or uses a software system or service. |
D |
|
Daily Scrum | A daily stand-up meeting held by agile development teams to discuss progress, plan the day ahead, and identify any impediments or issues. |
Dashboards | A graphical representation of data that provides an overview of key performance indicators, metrics, and trends. |
Data Analytics | The process of extracting insights and knowledge from data using statistical and computational methods. |
Data Backup | The process of creating and storing copies of data for disaster recovery, business continuity, or archival purposes. |
Data Completeness | The degree to which data contains all the required or expected information without any gaps or omissions. |
Data Encryption | The process of converting data into a coded format to prevent unauthorised access or tampering. |
Data Flow | The movement of data between different systems, applications, or components in a software ecosystem. |
Data Format | The structure and syntax of data, including the type, size, and organisation of the data elements. |
Data Governance | The management framework and policies that ensure the accuracy, consistency, integrity, and security of data across an organisation. |
Data Integration | The process of combining data from multiple sources into a unified view for analysis, reporting, or other purposes. |
Data Model | A conceptual representation of the structure, relationships, and constraints of data entities, attributes, and operations. |
Data Profiling | The process of analysing and assessing the quality, completeness, and consistency of data. |
Data Quality | The degree to which data meets the requirements or expectations for accuracy, completeness, consistency, timeliness, and relevance. |
Data Security | The protection of data from unauthorised access, theft, loss, corruption, or damage. |
Data Sources | The origin or location of data, such as databases, files, streams, or external systems. |
Data Storage | The physical or virtual space where data is stored, managed, and accessed. |
Data Transformation | The process of converting data from one format, structure, or representation to another for processing, integration, or analysis. |
Data Visualisation | The representation of data in a graphical or visual format to facilitate understanding, analysis, and communication. |
Data Volume | The amount or size of data that is processed, stored, or transferred in a system or application. |
Data Warehouse Testing | The process of validating the accuracy, completeness, and performance of data stored in a data warehouse or data mart. |
Database Testing | The process of verifying the correctness, consistency, and integrity of data stored in a database or database management system. |
Debugging | The process of identifying, analysing, and resolving errors, faults, or defects in software code or system behaviour. |
Decision Tables | A tool used to model complex business rules and logic by defining conditions and corresponding actions in a tabular format. |
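A decision table can be expressed directly as data and checked in a test; the loan-approval conditions and the `decide` function below are purely illustrative.

```python
# Each row: (has_good_credit, has_collateral) -> expected decision
decision_table = {
    (True,  True):  "approve",
    (True,  False): "approve_with_review",
    (False, True):  "approve_with_review",
    (False, False): "reject",
}

def decide(has_good_credit: bool, has_collateral: bool) -> str:
    """Hypothetical business rule under test."""
    if has_good_credit and has_collateral:
        return "approve"
    if has_good_credit or has_collateral:
        return "approve_with_review"
    return "reject"

def test_decision_table():
    for (credit, collateral), expected in decision_table.items():
        assert decide(credit, collateral) == expected
```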
Defect | A flaw, fault, or failure in software code or system behaviour that results in incorrect or unexpected outcomes. |
Defect Analysis | The process of investigating and diagnosing defects to identify their root cause, impact, and severity. |
Defect Classifications | The categorisation of defects based on their severity, priority, type, or source, such as functional defects, performance defects, or design defects. |
Defect Closure | The process of formally closing a defect after it has been resolved and verified to ensure that it has been properly addressed and documented. |
Defect Density | The number of defects identified in a specific component, module, or phase of software development, divided by the size or complexity of that component. |
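Defect density is commonly expressed as defects per thousand lines of code (KLOC); the figures below are invented for illustration.

```python
defects_found = 12
lines_of_code = 8_000

defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 1.50
```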
Defect Fix | The corrective action taken to address a defect and restore the desired functionality or behaviour of software. |
Defect Identification | The process of detecting, reporting, and documenting defects or issues found during software testing or inspection. |
Defect Life Cycle | The stages that a defect goes through from identification to closure, including reporting, triage, analysis, fixing, retesting, and verification. |
Defect Management | The process of tracking, prioritising, and resolving defects in software development, using tools, workflows, and metrics. |
Defect Removal Efficiency | The ratio of defects found and fixed before release to the total number of defects identified, used as a measure of software quality. |
Defect Reports | Documentation that describes the characteristics, status, and resolution of defects found during software testing or inspection. |
Defect Retesting | The process of re-executing failed or erroneous test cases after defects have been fixed, to confirm that the defect has been properly addressed and resolved. |
Defect Summary Report | A report that provides an overview of the defects found during testing, including their severity, priority, and status. |
Defect Tracking | The process of monitoring and managing defects throughout their lifecycle, using tools and processes to ensure that they are addressed in a timely and effective manner. |
Defect Tracking Tools | Software tools used to track, manage, and report on defects and issues found during software development and testing. |
Defect Triage | The process of prioritising and assigning defects based on their severity, impact, and priority, to ensure that they are addressed in an appropriate and timely manner. |
Defect Verification | The process of confirming that a defect has been properly resolved and that the desired functionality or behaviour has been restored. |
Deployment | The process of installing, configuring, and activating software or systems in a target environment, such as production, staging, or testing. |
Design Phase | The stage of software development that involves creating a high-level design or architecture for the system or application, based on requirements and specifications. |
Design Reviews | Formal or informal assessments of software design and architecture, conducted by peers or experts to identify potential issues, risks, or improvements. |
Developer | A person responsible for writing, testing, and maintaining software code, using programming languages, tools, and frameworks. |
Development | The process of creating, modifying, or enhancing software or systems, using programming, scripting, and other tools, and techniques. |
Development Environment | A setup of hardware, software, and tools used by developers to write, test, and debug software code or systems. |
Development Phase | The stage of software development that involves writing, testing, and validating software code, or systems, based on requirements and design. |
Development Team | A group of developers, testers, and other professionals involved in creating, testing, and delivering software or systems. |
DevOps | A set of practices, tools, and culture that emphasises collaboration, communication, and automation between software development and operations teams. |
DevTools | Software applications and utilities that assist developers in creating and managing software, including code editors, debugging tools, version control systems, and IDEs. |
Disaster Recovery | The process of restoring data, systems, and operations after a catastrophic event, such as a natural disaster, cyber-attack, or system failure. |
Dynamic Analysis | The process of analysing software code or system behaviour during execution, using tools and techniques to detect defects, vulnerabilities, or performance issues. |
Dynamic Testing | The process of testing software or systems during execution, using tools, techniques, and scenarios to validate their functionality, behaviour, or performance. |
E |
|
E-Commerce | The buying and selling of products or services over the internet or other electronic means. |
Edge Cases | Inputs or conditions that are unlikely or rare, but still valid and can cause unexpected behaviour or errors in software or systems. |
Emulators Or Simulators | Tools or software that simulate the behaviour of hardware, software, or systems, used for testing, development, or debugging purposes. |
End User | The person or group who uses software, systems, or products for their intended purpose or task. |
End-To-End Testing | A type of testing that validates the functionality, performance, and behaviour of a system or application from end to end, covering all components and interactions. |
Endurance Testing | A type of testing that validates the behaviour and performance of a system or application under sustained or heavy load or stress conditions, for a prolonged period. |
Entry Criteria | The conditions or requirements that must be met before starting a testing activity or phase, such as completion of development, availability of test environment or data, or approval of test plan. |
Epics | High-level or broad requirements or features that are too large or complex to implement or deliver in a single iteration or release and need to be broken down into smaller user stories or tasks. |
Equivalence Partitioning | A testing technique that divides a range of inputs or values into groups or partitions that have similar behaviour or output, to reduce the number of test cases needed to cover all scenarios. |
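For instance, a shipping-cost calculation might split parcel weights into three partitions and test one representative value from each. A small pytest-style sketch; the weight bands and the `shipping_band` rule are hypothetical.

```python
import pytest

def shipping_band(weight_kg: float) -> str:
    """Hypothetical rule: up to 1 kg 'light', up to 10 kg 'standard', above that 'heavy'."""
    if weight_kg <= 1:
        return "light"
    if weight_kg <= 10:
        return "standard"
    return "heavy"

# One representative value from each equivalence partition is enough.
@pytest.mark.parametrize("weight, expected", [
    (0.5, "light"),      # partition: 0 < w <= 1
    (5.0, "standard"),   # partition: 1 < w <= 10
    (25.0, "heavy"),     # partition: w > 10
])
def test_shipping_partitions(weight, expected):
    assert shipping_band(weight) == expected
```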
Error | An unintended or incorrect action or behaviour of software or systems, resulting in incorrect or unexpected output or behaviour. |
Error Handling | The process of detecting, reporting, and recovering from errors or exceptions in software or systems, using techniques such as exception handling, logging, or debugging. |
Exit Criteria | The conditions or requirements that must be met before ending a testing activity or phase, such as completion of all test cases, meeting quality or performance goals, or obtaining approval for release. |
Expected Results | The behaviour or output that is expected or intended from a software or system component or interaction, based on requirements or specifications. |
Exploratory Testing | A testing technique that emphasises the creativity, intuition, and experience of testers to discover defects or issues in software or systems, by exploring and experimenting with them in an ad-hoc and unscripted manner. |
F |
|
Failover Testing | A type of testing that validates the ability of a system or application to recover or switch to a backup or standby system or component in case of a failure or outage, without losing data or functionality. |
Failures | The instances or events when a software or system component or interaction does not perform as intended or expected, resulting in incorrect or unexpected behaviour or output. |
False Negative | A false negative happens when a test fails to detect a real problem, incorrectly giving the all-clear when there’s an issue. |
False Positive | A false positive occurs when a test incorrectly identifies something as a problem when it’s not. It’s a “false alarm.” |
Fault | A defect, error, or flaw in software or system that causes or contributes to a failure or incorrect behaviour. |
Feature | A functionality, capability, or behaviour of a software or system component or interaction, designed to fulfil a specific requirement or user need. |
Figma | A cloud-based design and prototyping tool, used for creating and collaborating on user interface designs, user flows, and prototypes. |
Firefox Responsive Design Mode | A feature in Firefox browser that allows developers and designers to test and preview how web pages or applications will look and behave on different devices, screen sizes, and resolutions. |
Full Stack | The combination of front-end and back-end technologies that make up a web application; a full-stack developer works across both. |
Functional Coverage | The extent or completeness of testing that validates the functionality or features of a software or system, based on specified requirements or specifications. |
Functional Requirements | The set of requirements or specifications that define the desired behaviour or functionality of a software or system component or interaction, from a user or functional perspective. |
Functional Requirements Specification (FRS) | A document that describes and specifies the functional requirements of a software or system component or interaction, in detail and with examples. |
Functional Testing | A type of testing that validates the functional or behavioural aspects of a software or system, based on specified requirements or specifications. |
G |
|
Go / No Go Decisions | A decision-making process that determines whether a software or system is ready to move to the next phase or release, based on predefined criteria, such as quality, functionality, and readiness. |
Graphical User Interface (GUI) | The visual and interactive interface of a software or system, which enables users to interact with and manipulate data and functionality, using icons, menus, buttons, and other graphical elements. |
Grey Box | A testing approach that combines elements of black-box testing and white-box testing, where the tester has partial knowledge of the internal workings or code of the software or system being tested. |
H |
|
Happy Path | A testing scenario or path that validates the correct or expected behaviour of a software or system, based on normal or optimal conditions, without encountering errors or exceptions. |
High-Level Design (HLD) | A design phase that defines the overall structure, components, and functionality of a software or system, at a high level of abstraction or detail, without specifying implementation details or algorithms. |
HTML | HTML (Hypertext Markup Language). A markup language used to structure content on web pages. With HTML, you define elements like headings, paragraphs, links, and images, telling web browsers how to display your content. It’s the essential language for creating web pages and is often used alongside CSS for styling and JavaScript for interactivity. |
HTTP | HyperText Transfer Protocol, a protocol for communication between web servers and web clients, used for transmitting and receiving data, requests, and responses over the internet. |
HTTP Requests | The messages sent by a client (e.g., a web browser) to a server to retrieve web pages or other resources. |
HTTP Response | The message sent by a server in response to an HTTP request, containing status information and, where applicable, the content requested by the client. |
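The request–response round trip can be seen in a few lines of Python using the widely used `requests` library; the URL is an example placeholder.

```python
import requests

# The client sends an HTTP GET request...
response = requests.get("https://example.com/api/health", timeout=5)

# ...and the server replies with a status code, headers, and (optionally) a body.
print(response.status_code)                  # e.g. 200 for success, 404 for not found
print(response.headers.get("Content-Type"))  # a response header
print(response.text[:100])                   # first 100 characters of the body
```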
Hybrid Approach | A combination of different methodologies or approaches to development, testing, or project management, often used to adapt to specific project or organisational needs. |
Hybrid Methodologies | A software development or project management approach that combines elements of different methodologies, such as agile and waterfall, to address specific needs or challenges, based on the project or team’s context and requirements. |
I |
|
IEEE | Institute of Electrical and Electronics Engineers, a professional association that develops standards, guidelines, and best practices for various industries, including software engineering and testing. |
Impediments | Any obstacles or barriers that hinder the progress or success of a software development or testing process, such as technical challenges, resource constraints, communication issues, or external factors. |
Increment | A small and measurable unit of progress or improvement in a software development or testing process, which adds new features, functionality, or value to the software or system being developed or tested. |
Incremental Model | A software development or testing model that breaks down the overall process into multiple iterations or increments, each adding new functionality or features to the software or system, based on feedback and requirements from the previous iteration. |
Initiatives | A collection of related projects or efforts that are designed to achieve a specific objective or outcome. |
In-Person Testing | A testing approach where the tester performs the testing activities on-site, in the physical environment or location where the software or system is deployed or used. |
Inspections | A testing technique that involves a systematic review or examination of the software or system design, code, or documentation, to identify defects, inconsistencies, or opportunities for improvement. |
Integration | The process of combining or merging different components, modules, or subsystems of a software or system, to form a cohesive and functional whole. |
Integration Testing | A testing phase that verifies the correct functioning and interaction of different components, modules, or subsystems of a software or system, after they have been integrated or combined. |
Internal Testing | A compliance testing approach where the testing activities are carried out in-house by the internal management team to check if the internal standards set by the organisation are met. |
Internationalisation Testing | A testing process that ensures the software or system can function properly in different languages, cultures, or regions, and meets the internationalisation standards and requirements. |
Internet Of Things (IoT) | A network of physical devices, sensors, and software that are interconnected and can communicate with each other, enabling data collection, analysis, and automation in various domains, such as healthcare, transportation, or manufacturing. |
Interoperability Testing | A testing process that ensures the compatibility and seamless communication of a software or system with other systems, platforms, or devices, based on predefined standards and protocols. |
ISO | International Organisation for Standardisation, a non-governmental organisation that develops and publishes international standards for various industries, including software engineering and testing. |
Issue | Any problem, defect, or discrepancy that affects the functionality, quality, or performance of a software or system, and requires resolution or mitigation. |
International Software Testing Qualifications Board (ISTQB) | An international organisation that develops and promotes software testing certification and education programs, based on a standardised body of knowledge and best practices. |
Iterative Model | A software development or testing model that involves repeated cycles of planning, designing, implementing, and testing, to incrementally improve and refine the software or system, based on feedback and requirements. |
K |
|
Kanban | A method for managing workflow and visualising the progress of work, originally developed for manufacturing processes but now used in a variety of industries. |
Kanban Board | A visual tool used in Kanban that displays work items, their status, and other relevant information, typically using cards and columns. |
L |
|
Launch | The act of releasing a product or service to the public, often involving marketing and other promotional efforts. |
Legal or External Testing | Testing conducted by outside parties, such as regulatory agencies or legal teams, to ensure that a product or service meets applicable laws and regulations. |
Lessons Learned Report | A document that captures and summarises the key takeaways from a project or initiative, with the goal of identifying best practices and areas for improvement. |
Load Testing | Testing the performance of a system under simulated real-world usage conditions, often involving high volumes of traffic or data. |
Localisation Testing | Testing a product or service to ensure that it is appropriate for a particular locale or culture, including language, customs, and other factors. |
Loop Coverage | A measure of test coverage for loops in code, checking that each loop has been exercised zero times, exactly once, and multiple times during testing. |
Low-Level Design (LLD) | Detailed design documentation that describes the technical implementation of a software system, often created after high-level design and requirements gathering have been completed. |
M |
|
Maintainability Testing | Testing the ease with which a system can be maintained or updated over time, often involving measures of code complexity and modularity. |
Mandatory Testing | Testing that is required by law, regulation, or policy, often related to safety or security concerns. |
Manual Testing | Testing conducted by human testers, often involving the use of test scripts or other structured methods. |
Matrix Testing | A form of testing that involves systematically testing combinations of input values or other variables. |
Mean Time to Detect (MTTD) | A metric used in incident management to measure the average time it takes to detect a problem or issue in a system. |
Mean Time to Resolve (MTTR) | A metric used in incident management to measure the average time it takes to resolve a problem or issue in a system. |
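Both metrics are simple averages over incidents; the durations below are invented for illustration.

```python
# Hypothetical incident data: hours from occurrence to detection, and from detection to resolution.
detection_hours  = [0.5, 2.0, 1.5]
resolution_hours = [4.0, 6.5, 3.0]

mttd = sum(detection_hours) / len(detection_hours)
mttr = sum(resolution_hours) / len(resolution_hours)

print(f"MTTD: {mttd:.1f} h")  # 1.3 h
print(f"MTTR: {mttr:.1f} h")  # 4.5 h
```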
Metrics Reports | Reports that summarise key performance metrics or other relevant data, often used for tracking progress or identifying areas for improvement. |
Mobile Applications | Software applications designed for use on mobile devices, such as smartphones or tablets. |
Mobile Devices | Portable electronic devices designed for mobile use, such as smartphones, tablets, or laptops. |
Mock-Ups | A visual representation of a product or service, often used for design or prototyping purposes. |
Moderated Testing | Testing that is overseen or guided by a moderator, often involving structured tasks or scenarios. |
Monitoring | The process of observing and measuring the performance or behaviour of a system or process, often using automated tools or systems. |
Mutation Testing | A technique that deliberately introduces small changes (mutants) into software code to check whether the test suite detects them, revealing gaps in test coverage and effectiveness. |
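The idea in miniature: a mutant changes one operator in the code, and a good test suite should fail against it ("kill" the mutant). Mutation testing tools automate this; the hand-written example below, including the `is_adult` function, is purely illustrative.

```python
def is_adult(age: int) -> bool:
    return age >= 18          # original code

def is_adult_mutant(age: int) -> bool:
    return age > 18           # mutant: '>=' replaced with '>'

def test_is_adult_at_boundary():
    # This test kills the mutant: it passes against the original implementation
    # but would fail if the mutant were run in its place.
    assert is_adult(18) is True
```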
MVP | MVP (Minimum Viable Product) is a core concept in software development. It refers to the initial version of a product that contains only the essential features needed to meet the basic requirements and deliver value to users. MVP allows developers to launch a product quickly, gather user feedback, and make informed decisions about future development. It’s a practical approach to test an idea or concept with minimal resources before investing heavily in additional features. |
N |
|
Negative Scenarios | Test cases or scenarios that aim to identify potential failures, errors, or negative outcomes, and to ensure the system can handle and recover from these situations. |
Network Testing | A type of testing that focuses on the performance, stability, and reliability of a system’s network connectivity and communication protocols. |
Non-Functional Requirements | Requirements that describe the performance, usability, reliability, security, and other non-functional aspects of a system or software. |
Non-Functional Requirements Specification (NFRS) | A document that outlines the non-functional requirements for a software or system, including how these requirements will be tested and measured. |
O |
|
Obligatory Testing | Testing that is required by law, regulation, or policy, often related to safety or security concerns. |
Opera Mobile Emulator | A tool used for testing and debugging mobile web applications on a desktop computer, allowing developers to simulate different mobile devices and environments. |
Operations | The activities and processes involved in managing and maintaining a software or system after it has been deployed, including monitoring, troubleshooting, and updating. |
Orthogonal Array Testing | A statistical testing technique that involves systematically testing combinations of input values or other variables, with the goal of identifying the most efficient way to achieve test coverage. |
P |
|
Path Coverage Testing | A type of testing that aims to ensure that every possible execution path through a system or software has been tested, typically using code analysis tools or other techniques. |
Pattern Testing | A testing technique that involves testing for patterns of behaviour or outcomes in a system, often using statistical analysis or machine learning algorithms. |
Penetration Testing | A type of testing that simulates a real-world attack on a system or software, with the goal of identifying vulnerabilities and weaknesses that could be exploited by attackers. |
Performance Testing | A type of testing that focuses on evaluating the speed, scalability, and stability of a system or software under different conditions and levels of usage. |
Performance Testing Engineer | A professional who specialises in designing, executing, and analysing performance testing for software or systems. |
Portability Testing | A type of testing that evaluates how well a system or software can be moved or adapted to different hardware or software environments, including different operating systems, platforms, or devices. |
Positive Scenarios | Test cases or scenarios that aim to validate expected behaviours, outcomes, or positive results of a system or software. |
Post-Conditions | The expected state or condition of a system or software after a particular action or event has occurred. |
Post-Launch Maintenance | The ongoing process of managing, maintaining, and updating a system or software after it has been released to the public. |
Post-Release Testing | Testing conducted after a system or software has been released to the public, often involving testing for bugs, errors, or other issues that may have been missed during development or testing. |
Pre-Conditions | The expected state or condition of a system or software before a particular action or event can occur. |
Pre-Production Testing | Testing conducted before a system or software is released to the public, often involving testing for functionality, usability, and other aspects of the system or software. |
Pre-Requisites | The conditions or requirements that must be met before a particular action or event can occur. |
Priority | The level of importance or urgency assigned to a particular task, issue, or feature in a system or software. |
Product Backlog | A prioritised list of features, enhancements, and bug fixes that are planned for a software or product, often used in agile development methodologies. |
Product Owner | The person responsible for defining and prioritising the features and requirements of a software or product, often working closely with the development team. |
Product Roadmap | A high-level strategic plan for a software or product, outlining the goals, features, and timeline for development and release. |
Production Testing | Testing that is conducted on a system or software after it has been deployed to a live production environment, often focused on ensuring that the system or software is working as expected and meeting performance and user experience requirements. |
Programmer | A professional who specialises in writing and developing software code and applications. |
Q |
|
QA Tester | A professional responsible for testing software applications or systems to ensure they meet the specified requirements and quality standards. |
Qualitative Data | Data that is descriptive in nature and often subjective, such as user feedback, survey responses, or observations. |
Quality Assurance | The process of ensuring that a system or software meets a certain level of quality or standard, often involving testing, documentation, and adherence to best practices. |
Quality Control | The process of verifying and validating that a system or software meets the specified requirements and meets the expected level of quality, often involving testing and defect management. |
Quantitative Data | Data that is numerical in nature and can be measured or analysed using statistical methods, such as performance metrics or user engagement data. |
R |
|
RACI | Responsible, Accountable, Consulted, and Informed: a matrix used to clarify roles and responsibilities for tasks or deliverables. |
Root Cause Analysis (RCA) | A systematic process of identifying the underlying cause(s) of a problem or issue in a software application or system, with the goal of preventing it from recurring in the future. |
Real-World Simulation | A type of testing that involves simulating real-world scenarios or situations to test the performance and functionality of a system or software. |
Recovery Point Objective (RPO) | The maximum amount of data loss an organisation can tolerate in the event of a system or software failure or disaster, expressed as the period of time between the last recoverable backup and the failure. |
Recovery Time Objective (RTO) | The maximum amount of time that an organisation can afford to have its systems or software down before resuming normal operations. |
Refactoring Code | The process of improving the structure, design, and readability of software code without changing its functionality or behaviour. |
Regression Testing | A type of testing that is conducted to ensure that changes or updates to a system or software do not introduce new bugs or errors into previously working functionality. |
Release Notes | Documentation that accompanies a software or product release, outlining the new features, improvements, and bug fixes included in the release. |
Release Readiness Reports | Reports that assess the readiness of a system or software for release to the public, often including results from testing and other assessments. |
Reliability Testing | A type of testing that is conducted to assess the ability of a system or software to perform consistently and reliably under different conditions and levels of usage. |
Remote Testing | A type of testing that is conducted remotely, often using virtual machines or cloud-based systems, without requiring physical access to the system or software being tested. |
Request For Change (RFC) | A formal request to modify or change a system or software, often used in change management processes. |
Requirement ID | A unique identifier assigned to a specific requirement or feature of a system or software, often used for tracking and traceability. |
Requirements Analysis | The process of gathering, documenting, and analysing the requirements and features of a system or software, often involving stakeholder feedback and collaboration. |
Requirements Coverage | The degree to which the requirements and features of a system or software are covered by testing and other quality assurance activities. |
Requirements Reviews | A formal review process to ensure that the requirements and features of a system or software are accurate, complete, and testable. |
Requirements Traceability Matrix | A document or tool used to track and trace the relationship between requirements and other project artifacts, such as test cases and design documents. |
Resilience Testing | A type of testing that evaluates how well a system or software can recover from disruptions or failures and maintain its intended functionality. |
Resource Utilisation | The measurement of the amount of resources, such as memory, CPU, and disk space, used by a system or software. |
Response Time | The amount of time it takes for a system or software to respond to a user request, often measured in milliseconds or seconds. |
Responsinator | A web-based tool used for testing and previewing websites on different mobile devices and screen sizes. |
Responsive Design | A design approach that focuses on creating websites and applications that can adapt and display correctly on different devices and screen sizes. |
Responsive Website | A website that is designed to be viewed on different devices and screen sizes, adapting to the available screen space, and providing a consistent user experience. |
Responsive Website Testing | Testing conducted to ensure that a website is responsive and displays correctly on different devices and screen sizes. |
Retest | Testing conducted on a system or software after issues or defects have been identified and fixed, to ensure that the fixes have been effective and have not introduced new issues. |
Retrospective | A meeting or discussion held after the completion of a project or iteration, to review what worked well and what could be improved for future projects. |
Risk Assessment | The process of identifying, evaluating, and prioritising potential risks or threats to a system or software, often to inform risk mitigation or management strategies. |
Risk-Based Coverage | A testing approach that prioritises testing efforts based on the level of risk associated with different features or components of a system or software. |
Root Cause | The root cause is the underlying reason or source of a problem or issue. It’s the fundamental factor that, when addressed, can prevent the problem from recurring. |
S |
|
Sanity Testing | A type of testing that is performed on a software build to quickly determine if it is ready for further testing, often conducted after a minor change has been made. |
Scalability Testing | A type of testing that is performed to determine how well a system or software can handle an increasing workload, often by adding more users or data. |
Scope Creep | The phenomenon where the scope of a software project gradually increases over time, often resulting in delays, increased costs, and decreased quality. |
Screen Reader | An assistive technology that reads out text and other content displayed on a screen, often used by people with visual impairments. |
Scrum | An Agile framework used for software development, consisting of iterative and incremental development cycles, with a focus on flexibility and adaptability. |
Scrum Ceremonies | A set of regular meetings or events that are part of the Scrum framework, including Sprint Planning, Daily Scrum, Sprint Review, and Sprint Retrospective. |
Scrum Events | A term used to describe the different meetings or events that are part of the Scrum framework, including Sprint Planning, Daily Scrum, Sprint Review, and Sprint Retrospective. |
Scrum Master | A role in the Scrum framework responsible for facilitating and coaching the development team and helping to ensure the effective use of the Scrum framework. |
Scrumban | A hybrid Agile methodology that combines elements of Scrum and Kanban, often used for software development in complex environments. |
Security Audits | An evaluation of a system or software to identify potential security vulnerabilities or weaknesses, often conducted by an independent third-party. |
Security Testing | A type of testing that is performed to identify potential security vulnerabilities or weaknesses in a system or software. |
Security Testing Engineer | A role responsible for performing security testing on a system or software, identifying, and reporting potential vulnerabilities or weaknesses. |
SEO | Search Engine Optimisation, the practice of optimising a website or content to improve its visibility and ranking in search engine results. |
Server/Client | A computing model where a server provides services or resources to multiple clients, typically over a network. |
Severity | A measure of the impact or severity of a bug or issue, often classified into categories such as critical, major, minor, or trivial. |
Shift-Left | A software development approach that emphasises early testing and detection of defects, often by involving testing activities earlier in the development cycle. |
SLA | Service Level Agreement. It’s a formal agreement that defines the level of service a customer can expect from a service provider. SLAs outline specific metrics, such as response times, uptime guarantees, and support availability, ensuring both parties have clear expectations. If the service provider doesn’t meet the agreed-upon standards, there are often penalties or remedies defined in the SLA to address the issue and maintain service quality. |
SME | Subject matter expert. |
Smoke Testing | A type of testing that is performed to ensure that basic functionality of a system or software is working, often conducted after a build or deployment. |
Soak Testing | A type of testing that is performed to determine how well a system or software can perform over an extended period of time, often by subjecting it to a heavy and sustained workload. |
Software Code | The instructions written in a programming language that make up a software program or application. |
Software Development | The process of designing, coding, testing, and deploying software programs or applications. |
Software Development Life Cycle (SDLC) | The process used for software development that includes planning, design, development, testing, deployment, and maintenance. |
Software Requirements | The functional and non-functional requirements that describe the desired behaviour and performance of a software system or application. |
Software Test Methodology | A set of guidelines and principles used to guide the testing process and approach, often based on industry best practices and standards. |
Software Tester | A role responsible for planning, designing, executing, and reporting on testing activities for a software system or application. |
Software Testing Life Cycle (STLC) | The process used for testing software that includes planning, design, execution, and reporting. |
Spike Testing | A type of testing that is performed to determine how well a system or software can handle sudden spikes in workload or traffic. |
Sprint | A time-boxed iteration in the Scrum framework, typically lasting 2-4 weeks, where a team delivers a potentially releasable increment of software. |
Sprint Backlog | A list of tasks and activities that the development team plans to complete during a sprint, based on the items in the product backlog. |
Sprint Burn-Down Chart | A visual representation of the progress of a sprint, showing how much work remains and how much time is left in the sprint. |
Sprint Planning | A meeting at the beginning of a sprint where the development team and product owner plan the work to be done during the sprint. |
Sprint Retrospective | A meeting at the end of a sprint where the development team reflects on the sprint and identifies ways to improve for the next sprint. |
Sprint Review | A meeting at the end of a sprint where the development team demonstrates the work completed during the sprint to stakeholders and the product owner. |
SPOC | Single point of contact. |
SQuaRE (System and Software Quality Requirements and Evaluation) | A series of standards developed by the International Organisation for Standardisation (the ISO/IEC 25000 family) for evaluating and managing software product quality. |
Staging Environment | A separate environment used for testing and validating changes to a system or software before deploying them to production. |
Stakeholders | Individuals or groups who have an interest or stake in a software system or application, such as users, customers, managers, or investors. |
State Transition Testing | A type of testing that is performed to ensure that a software system or application responds correctly to different states or conditions. |
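A minimal sketch using a hypothetical order workflow: the valid transitions are listed explicitly, and the tests exercise both an allowed and a disallowed transition. All names are invented for illustration.

```python
import pytest

VALID_TRANSITIONS = {
    ("created", "pay"):     "paid",
    ("paid", "ship"):       "shipped",
    ("shipped", "deliver"): "delivered",
}

def transition(state: str, event: str) -> str:
    """Hypothetical state machine under test."""
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} from {state}")

def test_valid_transition():
    assert transition("created", "pay") == "paid"

def test_invalid_transition_is_rejected():
    with pytest.raises(ValueError):
        transition("created", "ship")   # cannot ship before paying
```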
Statement Coverage Testing | A type of testing that is performed to ensure that every statement in the software code is executed at least once during testing. |
Static Analysis | A technique used to analyse software code without executing it, often used to identify potential defects or security vulnerabilities. |
Static Testing | The process of analysing software or system artifacts, such as code, requirements, design, or documentation, without executing or running them, to detect issues, errors, or inconsistencies. |
Story Points | A measure used in Agile development to estimate the effort required to complete a user story or feature, often based on complexity, risk, and uncertainty. |
Stress Testing | A type of testing that is performed to determine how well a system or software can handle high levels of workload or traffic, often by subjecting it to loads beyond its normal operating capacity. |
Stubs & Drivers | Programs or components used during testing to simulate the behaviour of software components that the code being tested depends on or that call the code being tested. |
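A common way to provide a stub in Python is `unittest.mock`; here a payment gateway that the code depends on is stubbed out so the checkout logic can be tested in isolation. The gateway, the `checkout` function, and the values are hypothetical.

```python
from unittest.mock import Mock

def checkout(cart_total: float, gateway) -> str:
    """Code under test: depends on an external payment gateway."""
    if gateway.charge(cart_total):
        return "order confirmed"
    return "payment failed"

def test_checkout_with_stubbed_gateway():
    gateway_stub = Mock()
    gateway_stub.charge.return_value = True   # the stub stands in for the real dependency
    assert checkout(49.99, gateway_stub) == "order confirmed"
    gateway_stub.charge.assert_called_once_with(49.99)
```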
System Architecture | The design of a computer-based system or software that describes its overall structure, components, relationships, and how they work together to achieve the desired functionality. |
System Requirements Specification (SRS) | A document that describes the functional and non-functional requirements of a software system. |
System Testing | Testing a software system to ensure it meets its specified requirements. |
T |
|
TDD (Test-Driven Development) | Test-driven development (TDD) is a software development methodology that focuses on writing automated tests before writing code. |
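In practice the cycle is "red, green, refactor": write a failing test first, then just enough code to make it pass, then tidy up. A tiny pytest-style sketch; the `slugify` function is an invented example.

```python
# Step 1 (red): the test is written before the implementation and initially fails.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the minimum implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): improve the code while keeping the test green.
```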
Tech Stack | The set of technologies, programming languages, frameworks, and tools used to build a software application. It includes both front-end and back-end technologies that are used to create the software, as well as any additional software components such as databases, APIs, and third-party integrations. |
Technical Requirements Specification (TRS) | A document that describes the technical requirements of a software system, including hardware, software, and network requirements. |
Technical Standards | Guidelines and requirements that define how hardware, software, and systems should be designed, developed, tested, and maintained. |
Test Analyst | A professional responsible for analysing the testing requirements, designing, and developing test cases, and ensuring that the testing activities are executed effectively. |
Test Approach | Outlines the strategies, methods, and resources that will be used to ensure the quality and reliability of the product. This plan includes details on what will be tested, how it will be tested, and who will perform the testing. |
Test Approvals | The process of obtaining approvals for the testing activities, including the test plan, test cases, and test execution reports. |
Test Artefacts | Documents and deliverables produced during the software testing process. These artefacts provide documentation, traceability, and a clear record of testing activities. Common test artefacts include test plans, test cases, test scripts, test reports, and defect logs. |
Test Artefacts Handover | The process of transferring test artefacts, such as test cases and test plans, from one team or individual to another. |
Test Automation Engineer | A professional who designs, develops, and implements automated tests for software applications using various test automation tools. |
Test Automation Frameworks | A set of guidelines, standards, and tools used to create and execute automated tests for software applications. |
Test Automation ROI | The return on investment (ROI) of implementing automated testing, which is measured by the benefits gained from reduced testing time, increased test coverage, and improved software quality. |
Test Basis | The sources guiding test case creation, such as business requirements, design specifications, HLD, LLD, etc. |
Test Case ID | A unique identifier assigned to a test case that helps in tracking, managing, and reporting on the testing progress. |
Test Cases | A set of steps or conditions used to verify that a software application or system functions as intended. |
Test Closure | The process of formally ending testing activities for a software project or release. |
Test Closure Checklist | A document that includes a list of tasks that must be completed before formally ending the testing phase of a software project. |
Test Closure Meeting | Meeting held at the end of the testing phase to discuss the results, provide feedback, and decide whether to proceed with the next phase of the project. |
Test Closure Report | A document that summarises the testing results, including the issues found, their severity, and the actions taken to address them. |
Test Completion Report | A document that summarises the testing results and the status of the testing activities at the end of the project. |
Test Coordinator | A professional responsible for managing the testing activities, coordinating the test team, and ensuring the testing meets the project’s requirements. |
Test Coverage | The degree to which a software application or system has been tested. It is a measure of the percentage of the total functionality that has been tested. |
Test Cycles | A sequence of testing activities that are performed to validate a software application or system. |
Test Data | Data used to test a software application or system. It includes input data, expected output data, and any other data needed to run the tests. |
Test Design | The process of designing the tests for a software application or system. It involves identifying test scenarios, creating test cases, and defining test data. |
Test Effort & Estimation | The process of estimating the amount of effort required to complete the testing activities for a software project, including creating test cases, executing tests, and reporting issues. |
Test Engineer | A professional responsible for designing, developing, and executing automated tests for software applications or systems. |
Test Environment | The hardware, software, and network infrastructure used to support testing activities for a software application or system. |
Test Evidence | Documentation and artifacts that provide evidence of the testing activities and results, including test cases, test scripts, and test execution reports. |
Test Execution | The process of running test cases to validate a software application or system. |
Test Execution Reports | Reports that summarise the results of the test execution activities, including the number of tests run, passed, and failed, and any issues found. |
Test Governance | The management framework and policies that ensure the effectiveness, efficiency, and quality of software testing processes and practices. |
Test Harness | A software component that helps automate the testing activities by providing a set of libraries, tools, and utilities to create and run tests. |
Test KPIs | Key Performance Indicators used to measure the effectiveness of testing activities, including metrics such as test coverage, defect density, and test execution time. |
Test Lead | A professional responsible for leading the testing activities for a software application or system. |
Test Management | The process of planning, organising, and controlling the testing activities for a software application or system. |
Test Manager | A professional responsible for managing the testing activities for a software application or system. |
Test Methodology | A set of guidelines, standards, and procedures used to plan, design, and execute testing activities for a software application or system. |
Test Objectives | The goals and objectives that define the purpose and scope of the testing activities for a software application or system. |
Test Outcome | The result of the testing activities, including the number of defects found, the severity of the defects, and the impact on the software application or system. |
Test Planning | The process of defining the testing approach, scope, objectives, and resources required for a software application or system. |
Test Procedures | Step-by-step directions for running a test case. |
Test Process | The systematic sequence of activities and steps undertaken to evaluate and ensure the quality of a software application or system. It encompasses all stages of testing, from planning and designing test cases to executing them, reporting defects, retesting, and providing test results. |
Test Reporting | The process of creating and sharing reports on the testing activities and results, including test execution reports, defect reports, and test summary reports. |
Test Scenarios | High-level descriptions of situations or workflows to be tested, representing how a software application or system is used in real-world conditions. |
Test Schedule | The timeline for executing the testing activities for a software application or system. |
Test Scope | The boundaries and limitations of the testing activities for a software application or system. |
Test Scripts | Step-by-step instructions, whether manual or automated, used to execute tests for a software application or system. |
Test Sign-Off | The formal approval that testing activities for a software application or system are complete, allowing it to move to the next phase of development or release. |
Test Status | The current state of the testing activities, including the progress made, the issues found, and the remaining tasks. |
Test Steps | The individual tasks or actions that make up a test case. |
Test Strategy | The high-level plan and approach for the testing activities for a software application or system. |
Test Suite | A collection of test cases or test scripts used to validate a software application or system. |
Test Summary Report | A document that provides an overview of the testing activities and results for a software application or system. |
Test Team | A group of professionals responsible for planning, designing, executing, and reporting on the testing activities for a software application or system. |
Test Work Products | Also known as testware or test artefacts; the documents and materials created as part of the test management process in software development projects. Common test work products include test plans, test cases, test scripts, test reports, and defect logs. |
Tester | A professional responsible for executing test cases and reporting issues for a software application or system. |
Testing | The process of validating a software application or system to ensure that it meets the specified requirements and quality standards. |
Testing Frameworks | A set of guidelines, standards, and tools used to design, develop, and execute tests for software applications or systems. |
Testing Phase | The phase of the software development lifecycle where testing activities are executed. |
Testing Requirements | The set of requirements that define the testing activities and objectives for a software application or system. |
Testing Standards | A set of guidelines and best practices that define the standards for testing software applications or systems. |
Testware | Also known as test work products or test artefacts; the documents and materials created as part of the test management process in software development projects. Common test artefacts include test plans, test cases, test scripts, test reports, and defect logs. |
Themes | High-level, overarching concepts or goals that guide the direction of a project or product. They provide a framework for organising and prioritising work and help ensure that all efforts are aligned with the overall strategy. |
Threat Modelling | A process of identifying potential threats and vulnerabilities in a software application or system and developing mitigation strategies to address them. |
Three Amigos | A collaborative technique for refining user stories and ensuring shared understanding among product owners, developers, and testers. |
Throughput | The rate at which a system or process can handle and process a certain amount of data or tasks in a given amount of time. |
Time-Boxed | A project management technique in which a specific task or activity is allocated a fixed, non-extendable period of time within which it must be completed. |
Top-Down Approach | A testing approach where the high-level modules and features of a software application or system are tested first, often using stubs in place of the lower-level components, which are integrated and tested later. |
Traceability | The ability to trace and track the testing activities and results back to the original requirements, design, and development specifications. |
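The Test Harness, Test Suite, and Test Coverage entries above can be made more concrete with a minimal sketch in Python using the standard-library unittest module. The function and class names (calculate_discount, DiscountTests, build_suite) are illustrative assumptions for this example only, not part of any particular project.

```python
import unittest

# Hypothetical unit under test, used only to illustrate the glossary entries above.
def calculate_discount(price: float, is_member: bool) -> float:
    """Apply a 10% discount for members; non-members pay full price."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * 0.9, 2) if is_member else price


class DiscountTests(unittest.TestCase):
    """A group of related test cases; collected together they form a test suite."""

    def test_member_gets_discount(self):
        self.assertEqual(calculate_discount(100.0, is_member=True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(calculate_discount(100.0, is_member=False), 100.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1.0, is_member=True)


def build_suite() -> unittest.TestSuite:
    """Assemble the test cases into an explicit test suite."""
    return unittest.TestLoader().loadTestsFromTestCase(DiscountTests)


if __name__ == "__main__":
    # The test runner acts as a simple test harness: it executes the suite
    # and reports how many tests ran, passed, and failed.
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

Test coverage for a suite like this is typically measured with a tool such as coverage.py, for example by running `coverage run -m unittest` followed by `coverage report`, which expresses coverage as the percentage of statements exercised by the tests.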
U |
|
UAT – User Acceptance Testing | A type of testing where the end-users of a software application or system test its functionality to ensure that it meets their requirements and expectations. |
Unit Testing | A type of testing where individual units or components of a software application or system are tested in isolation to ensure that they work as intended (see the sketch at the end of this section). |
Unmoderated Testing | A type of usability testing where participants are given tasks to perform on a software application or system without any supervision or guidance. |
Usability Testing | A type of testing where a software application or system is evaluated based on how easy it is to use and how well it meets the needs of its users. |
Use Case | A description of a specific scenario or situation in which a software application or system is used to accomplish a particular task or goal. |
User Experience (UX) | The overall experience and satisfaction of users when interacting with a software application or system. |
User Flows | The series of steps or actions that a user takes when interacting with a software application or system to accomplish a particular task or goal. |
User Interface (UI) | The visual and interactive elements of a software application or system that allow users to interact with it. |
User Requirements | The set of requirements that define the needs and expectations of the users for a software application or system. |
User Story | A brief and informal description of a feature or functionality of a software application or system from the perspective of its end-users. |
UX Designer | A professional who designs and improves the user experience of a software application or system. |
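As a companion to the Unit Testing entry above, the sketch below shows a single unit exercised in isolation using pytest-style test functions. The slugify helper is a hypothetical example introduced purely for illustration.

```python
import re

# Hypothetical unit under test: converts a title into a URL-friendly slug.
def slugify(title: str) -> str:
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of other characters into "-"
    return slug.strip("-")


# Each test exercises the unit on its own, with no database, network, or UI involved.
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"


def test_punctuation_is_stripped():
    assert slugify("  Testing, 1-2-3!  ") == "testing-1-2-3"


def test_empty_input_gives_empty_slug():
    assert slugify("") == ""
```

Run with pytest, each function is collected and executed independently, which is what makes unit-level failures quick to localise.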
V |
|
Validation | The process of evaluating a software application or system to ensure that it meets the needs of its users and its intended use; it answers the question 'Are we building the right product?'. |
Verification | The process of evaluating a software application or system to ensure that it conforms to its specified requirements and defined standards; it answers the question 'Are we building the product right?'. |
V-Model | A software development and testing model in which each development phase is paired with a corresponding testing phase, emphasising early test design and verification activities in the development lifecycle. |
Version Control | A system that tracks changes to files, typically used in software development. With version control, different versions of code and documentation can be recorded, compared, and restored as they evolve over time. |
Volume Testing | A type of testing where a software application or system is subjected to a large volume of data to evaluate its performance and capacity under high-load conditions. |
Voluntary Testing | A type of testing where individuals or groups participate in testing a software application or system on a voluntary basis, typically without any compensation. |
Vulnerability Scanning | The process of identifying and evaluating potential vulnerabilities in a software application or system using automated tools. |
W |
|
Walkthroughs | A type of review process where team members or stakeholders go through a software application or system step-by-step to identify and address issues and improve its quality. |
Waterfall Model | A traditional software development model where the development process is divided into sequential phases, with each phase being completed before moving on to the next. |
WCAG – Web Content Accessibility Guidelines | A set of guidelines and standards developed by the World Wide Web Consortium (W3C) to ensure that web content is accessible to users with disabilities. |
Web Applications | Software applications that are designed to be accessed and used through a web browser or web interface. |
White Box | A testing approach where the internal workings of a software application or system are examined and tested, typically by developers or technical testers (see the sketch at the end of this section). |
Wireframe | A visual representation or blueprint of the layout and structure of a software application or system, typically used in the early stages of design to clarify and communicate the design concept. |
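To illustrate the White Box entry above, the sketch below derives its test cases from the internal branch structure of a function rather than from its external behaviour alone. The grade function and its thresholds are assumptions made for this example.

```python
import pytest

# Hypothetical function with several internal branches; the white box tests below
# are chosen so that every branch is executed at least once.
def grade(score: int) -> str:
    if score < 0 or score > 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 70:
        return "pass with merit"
    if score >= 50:
        return "pass"
    return "fail"


def test_out_of_range_branch():
    with pytest.raises(ValueError):
        grade(101)


def test_merit_branch():
    assert grade(85) == "pass with merit"


def test_pass_branch():
    assert grade(55) == "pass"


def test_fail_branch():
    assert grade(30) == "fail"
```

Because each test targets a specific branch of the implementation, a branch coverage tool such as coverage.py (for example, `coverage run --branch -m pytest` followed by `coverage report`) would report full branch coverage for this function.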