
A to Z Guide on Software Testing Terminologies

Adepu Bindu | Last updated: September 13, 2024

Hey! We know you came here in search of notes on some important software testing terminologies, and we hope this blog will be your perfect learning resort. We have covered, and will keep covering, the A to Z terminologies in software testing. Okay, let’s keep reading now.

A to Z Software Testing Terminologies

A

Acceptance criteria:

The formal criteria, from a general user’s point of view, that a component or the whole system must satisfy to be wholly accepted by them.

Acceptance Testing:

Testing conducted against user needs, requirements, and business processes to determine whether the system satisfies the acceptance criteria.

Accessibility testing:

Testing that ensures the software is accessible to all users, especially people with disabilities.

Accuracy:

The software’s capability to furnish the agreed, correct results to the respective end-users (clients, stakeholders, customers).

Actual result:

The outcome obtained during or after testing a particular component or the whole system.

Ad Hoc testing:

It’s an informal testing method that occurs without any documentation, test planning, or a test design technique.

Also read: What Everybody Ought To Know About Adhoc Testing

Adaptability:

The ability of the software system to adapt to different environments without requiring actions other than those provided for this purpose.

Agile Testing:

Testing that follows the rules and principles of Agile software development and begins at the start of the project itself. It also keeps development and testing continuously integrated throughout the process.

Alpha Testing:

This software test identifies bugs in the application before it reaches actual end-users. It acts as a filter to pluck out bugs left undiscovered by previously conducted tests.

Anomaly:

A condition that diverges from the expectations set by requirements, design documents, specifications, etc. Anomalies come to notice during testing, analysis, compilation, direct use of the software, and so on.

Attractiveness:

The capability of the application or software to be attractive and pleasing to the user.

Audit Trail or Audit Log:

A security-related chronological record that keeps track of activities in sequential order which could have affected a distinct operation, procedure, or event.

Availability:

The degree to which a component or the whole system is accessible and operational when required for use.

Automated Testware:

The tools and scripts used in automation testing constitute automated testware.

B

Back-to-back testing:

The testing of two or more variants of a component or system with the same inputs. The results are then compared and scrutinized for discrepancies.
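
As a minimal sketch (assuming two hypothetical variants, `sqrt_v1` and `sqrt_v2`, built to the same specification), back-to-back testing feeds both the same inputs and flags any divergence:

```python
import math

def sqrt_v1(x):
    # Variant 1: delegate to the standard library.
    return math.sqrt(x)

def sqrt_v2(x, eps=1e-12):
    # Variant 2: Newton's method for the same specification.
    guess = x / 2.0 or 1.0
    while abs(guess * guess - x) > eps:
        guess = (guess + x / guess) / 2.0
    return guess

# Back-to-back test: run both variants on the same inputs and
# scrutinize the results for discrepancies.
for value in [1.0, 2.0, 144.0, 1e6]:
    a, b = sqrt_v1(value), sqrt_v2(value)
    assert abs(a - b) < 1e-6, f"variants disagree on {value}: {a} vs {b}"
```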

Baseline:

A formally reviewed specification that becomes the base for further development and every future modification would also require a formal control process or procedure.

Basic Block:

A sequence of one or more consecutive executable statements that contains no branches.

Basis test set:

A set of test cases derived from the specification to ensure that a coverage criterion is 100% fulfilled.

Behavior:

The response of the component or the system (result) obtained from a given set of inputs and preconditions.

Benchmark test:

A set standard that helps to evaluate the quality of the software developed.

Bespoke Software:

Software that is developed for a particular set of users or customers.

Beta testing:

One of the acceptance test types, where end-users are directly involved. It is carried out as an external acceptance test to determine whether the software fulfills users’ needs and to gather market feedback.

Big-bang testing:

It’s a type of integration testing in which all the elements and components are combined and tested together to save the tester’s time.

Black box testing:

Testing the functional or non-functional aspects of the component or the system without considering the internal code.

Black box test design techniques:

A documented procedure to derive and select test cases based on any specification, functional or non-functional, of a component, without reference to its internal code.

Blocked Test case:

A test case that cannot be executed because of external reasons, such as the test environment not being ready.

Bottom-up testing:

Testing in which the lowest-level components are tested first, then proceeding to the higher-level components.

Boundary value:

The value of an input or an output that lies on the edge of an equivalence partition.

Boundary value analysis:

A software testing technique where test cases are designed based on the boundary values.
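
A minimal pytest sketch, assuming a hypothetical eligibility rule that accepts ages 18 to 65 inclusive; the test cases sit on each boundary and one step outside it:

```python
import pytest

def is_eligible(age: int) -> bool:
    """Hypothetical rule: valid applicants are aged 18 to 65 inclusive."""
    return 18 <= age <= 65

# Boundary value analysis: test on each edge of the valid
# partition and one step outside it.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_eligible(age) == expected
```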

Boundary value coverage:

The percentage of boundary values that are exercised by a test suite.

Branch:

A basic block that may be selected for execution based on a program construct in which one or more alternative program paths are available, for example goto, case, or jump statements.

Branch Coverage:

The percentage of branches that are exercised by a test suite. Note: 100% branch coverage implies 100% decision coverage and 100% statement coverage.

Branch testing:

White box test design technique where the test cases are designed to execute the branches.
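
A minimal sketch, assuming a toy `classify` function with a single decision: one test per branch is enough for 100% branch coverage here:

```python
def classify(n: int) -> str:
    if n < 0:          # one decision, two branches
        return "negative"
    return "non-negative"

# Branch testing: design one test case per branch.
def test_negative_branch():
    assert classify(-1) == "negative"

def test_non_negative_branch():
    assert classify(0) == "non-negative"
```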

Business process-based testing:

A software testing type where the test cases are designed based on business processes.

C

Capability Maturity Model (CMM):

The Capability Maturity Model is a five-level framework representing the key elements of an effective software process. It covers critical practices for planning, engineering, and managing software development and maintenance.

Capability Maturity Model Integration:

It is the successor of the CMM. This framework represents the key elements of effective software development and maintenance processes.

Capture/playback:

A test execution tool in which inputs are captured during manual testing to generate automated test scripts that can be executed later. These tools are often used to support automated regression testing.

CASE:

It stands for Computer-Aided Software Engineering: the practice of using computer-facilitated tools and methodologies in software development.

CAST:

It stands for Computer-Aided Software Testing. This cites the computing-based processes, techniques, and tools for testing applications or programs. (The term can cover both software and hardware-based tools.)

Cause-effect graphing:

It is a technique in which graphs are used to model and determine combinations of input conditions (causes) and their effects.

Certification:

The process of confirming that a component, system, or person satisfies particular requirements.

Changeability:

The potential of the software to allow specific modifications to be implemented.

Code coverage:

An analysis method to determine which parts of the software are executed by the test suite and which are not.

Classification tree method:

A technique by which test cases are designed by means of a classification tree to execute combinations of inputs or outputs.

D

Data driven testing:

A scripting technique that stores test inputs and expected outputs in a table or spreadsheet, chiefly so that a single control script can execute all the tests in the table.
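
A minimal sketch, assuming the table lives in a hypothetical `discounts.csv` with columns `order_total` and `expected_discount`; a single control script executes every row:

```python
import csv

def discount(order_total: float) -> float:
    """Hypothetical system under test: 10% off orders of 100 or more."""
    return order_total * 0.10 if order_total >= 100 else 0.0

# Data-driven testing: one control script executes every
# input/expected-output pair stored in the table.
with open("discounts.csv", newline="") as f:
    for row in csv.DictReader(f):
        total = float(row["order_total"])
        expected = float(row["expected_discount"])
        actual = discount(total)
        assert actual == expected, f"{total}: expected {expected}, got {actual}"
```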

Debugging:

The standard process of finding, analyzing, and removing the causes of failures (bugs) from the software or application.

Defect density:

In general, defect density is the total number of defects found in a component or in the whole system divided by the component or system’s size.
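
A worked example with assumed numbers: a 20,000-line component with 46 known defects has a density of 46 / 20 = 2.3 defects per KLOC:

```python
defects_found = 46   # assumed defect count, for illustration
size_kloc = 20.0     # component size in thousand lines of code (KLOC)

defect_density = defects_found / size_kloc
print(f"{defect_density:.1f} defects per KLOC")  # -> 2.3 defects per KLOC
```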

Defect report:

A standard document reporting a flaw that can potentially cause the system to fail.

Defect Management:

The process of identifying, investigating, and promptly resolving defects. The flow typically runs from recording defects to classifying them and assessing their impact.

Defect masking:

An instance where one defect prevents the identification of another defect.

Deliverable:

The product or work that is to be delivered to someone outside the project or workspace.

Design-based testing:

In this testing approach, test cases are designed based on the architecture and design of the system or application.

Development Testing:

Testing done by developers during the implementation of a component or system in the development environment.

Documentation testing:

A test to check the quality of documentation, for instance user guides, installation guides, etc.

E

Efficiency:

The real potential of the software or application to yield appropriate performance for the amount of resources and time put in.

Efficiency testing:

A testing process to determine the efficiency of the software or application.

Entry criteria:

A set of conditions that must be met before a predefined task may go forward. Entry criteria help prevent effort being wasted on tasks that are likely to fail.

Emulator:

A tool, computer program, or system that takes the same inputs and gives the same outputs as a given system.

Error guessing:

The tester’s experience is taken as a base to anticipate what defects might exist in the system.

Error seeding:

It’s the process of intentionally adding defects to the system to monitor the rate of detection and removal, and to estimate the number of defects remaining.
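
A minimal sketch of the usual estimate, under the assumption that seeded and real defects are equally easy to find: whatever fraction of the seeded defects testing caught, it probably caught the same fraction of the real ones:

```python
seeded_total = 20   # defects deliberately inserted (assumed numbers)
seeded_found = 15   # seeded defects that testing detected
real_found = 60     # genuine (non-seeded) defects detected

# Testing found 75% of the seeded defects, so assume it also
# found ~75% of the real ones.
detection_rate = seeded_found / seeded_total
estimated_real_total = real_found / detection_rate
estimated_remaining = estimated_real_total - real_found
print(f"Estimated real defects remaining: {estimated_remaining:.0f}")  # -> 20
```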

Error tolerance:

The potential of the application or system to continue its operations even in the presence of wrong inputs.

Exception handling:

The response or behavior of the system or application to wrong inputs, whether from a human user or another system, or to an internal failure or defect.
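
A minimal Python sketch: a wrong input triggers a defined recovery path instead of an unhandled crash (the `read_quantity` helper is hypothetical):

```python
def read_quantity(raw: str, default: int = 0) -> int:
    """Hypothetical input reader: fall back to a default on bad input."""
    try:
        return int(raw)
    except ValueError:
        # Exception handling: the wrong input triggers a defined
        # recovery path instead of an unhandled crash.
        print(f"Invalid quantity '{raw}', using default {default}")
        return default

assert read_quantity("7") == 7
assert read_quantity("seven") == 0   # handled gracefully
```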

Exit criteria:

A set of conditions that permit a process to be officially completed; typically, once they are met, the software or application is released to the market for use.

Equivalence partition coverage:

The percentage of equivalence partitions that are exercised by a test suite.

F

Fail:

The term used when a test does not produce the expected results.

Failure mode:

Failure mode is the manner in which a failure occurs, i.e., how or why a failure can happen.

Fault tolerance:

The software or application’s capacity to preserve a specified level of performance in the presence of defects.

Fault tree analysis:

The process of examining the causes of failures.

Feature:

The trait of a component or system as specified by the requirements documentation. For example, usability, design constraints, reliability, etc.

Functional Testing:

Functional testing is primarily a quality assurance process and one type of black-box testing. It runs by feeding inputs and checking outputs, and has very little to do with the internal program structure.

Also read: A Studious Manual On 8 Functional Testing Types 

Functional integration:

A process that brings together the components or the system to see how efficiently or effectively all components work together.

Frozen test basis:

It’s a test basis document, which in general wouldn’t be changed until a formal change process is initiated.

G

Glass box testing:

Glass box testing is another name for white box testing, in which the internal structure of the system is tested.

H

Heuristic evaluation:

A static usability test technique in which experts measure the usability of a user interface against accepted rules of thumb (heuristics).

High-level test case:

It’s a test case without concrete values for the input data and expected results.

Horizontal traceability:

Tracing of requirements through the layers of test documentation for a test level. For example, test plan, test design specification, test case specification, and test procedure specification.

I

Incident:

Any unusual occurrence in the process of testing that requires a detailed investigation is called an incident.

Incident report:

Incident report is a document that contains the details of any event that occurred during the testing and which requires an investigation.

Inspection:

Inspection is a peer review based on visual examination of documents or code to detect defects.

K

Keyword-driven testing:

It is a scripting technique that employs data files containing keywords associated with the application being tested. Special scripts define these keywords and are called by the control script for the test.
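
A minimal sketch, assuming a hypothetical calculator under test: the data file is reduced to an in-memory keyword table, and a small control script maps each keyword to a scripted action:

```python
class Calculator:
    """Hypothetical application under test."""
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
    def clear(self):
        self.value = 0

calc = Calculator()

# Keyword table: in practice this lives in a data file.
test_table = [
    ("add", 5),
    ("add", 7),
    ("check", 12),
    ("clear", None),
    ("check", 0),
]

# Special scripts that define what each keyword does.
def do_add(arg):
    calc.add(arg)

def do_clear(arg):
    calc.clear()

def do_check(arg):
    assert calc.value == arg, f"expected {arg}, got {calc.value}"

keywords = {"add": do_add, "clear": do_clear, "check": do_check}

# Control script: execute every keyword row in the table.
for keyword, arg in test_table:
    keywords[keyword](arg)
print("all keyword steps passed")
```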

L

Load testing:

It is one type of performance testing. It simulates how the system will perform under heavy loads (high volumes of concurrent users). This type of testing is best suited where high loads are expected.
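
A minimal sketch (the `handle_request` function is a hypothetical stand-in for the system under test): a thread pool simulates concurrent users and the response times are checked:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> float:
    """Hypothetical stand-in for the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated work per request
    return time.perf_counter() - start

# Load test: simulate 100 concurrent users and verify the
# system stays responsive under that volume.
with ThreadPoolExecutor(max_workers=100) as pool:
    durations = list(pool.map(handle_request, range(100)))

assert max(durations) < 1.0, "response time degraded under load"
print(f"slowest request: {max(durations) * 1000:.1f} ms")
```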

Low-level test case:

This is the test case type with concrete values for the input data as well as the expected results.

M

Maintenance testing:

It tests changes to an operational system, or the effect of a changed environment on an operational system.

Maturity:

A company’s ability with regards to its effectiveness and efficiency in all of its processes and work practices.

Moderator:

The leader responsible for an inspection or review process.

Multiple condition testing:

It is a white box testing design technique where test cases are specially designed to carry out combinations of single condition outcomes just within one statement.

N

Non-functional testing:

Non-functional testing comprises testing of security, reliability, usability, performance, and more. Its requirements don’t relate to the functional aspects of the application.

Negative testing:

It’s a testing method that ensures the application works as the requirements intend and can manage unwanted inputs or user behaviors. Invalid inputs are supplied and the actual results are compared against the expected ones.
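
A minimal pytest sketch, assuming a hypothetical `withdraw` function: invalid inputs must be rejected with a clear error rather than silently accepted:

```python
import pytest

def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function under test."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Negative testing: feed invalid inputs and verify the
# application manages them as the requirements intend.
def test_rejects_negative_amount():
    with pytest.raises(ValueError):
        withdraw(100.0, -5.0)

def test_rejects_overdraft():
    with pytest.raises(ValueError):
        withdraw(100.0, 500.0)
```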

O

Off-the-shelf software:

It is software developed for a broad market, precisely for the general public, and delivered in an identical format to all.

Operational testing:

Operational testing is conducted to assess a small component or the whole system in its operational environment.

Operability:

The ability of the software system to let the user operate and control it.

P

Penetration Testing:

It’s also called ethical hacking: hacking into the system to find and fix the software’s vulnerabilities. How extensive the penetration testing needs to be depends on the sensitivity of the data.

Performance Testing:

The performance testing checks the speed, stability, reliability, scalability, responsiveness, load, and many more performance aspects of the application or software.

Pair testing:

It is a type of testing where two testers pair up and work to find the defects while sharing one computer throughout the process.

Q

Quality:

The standard to which an application or system meets the given requirements or expectations of end-users.

QA:

QA is quality assurance, the part of quality management aimed at providing adequate confidence in the developed software.

Quality Management:

Systematized activities that an organization would follow to control and manage quality. This would typically include the inception of the quality policy, quality objectives, quality planning, quality control, quality assurance, and quality improvement.

R

Random testing:

Random testing is a black box test design technique in which test cases are selected randomly; it is often used to test non-functional attributes like reliability and performance.
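
A minimal sketch: random inputs (seeded so failures are reproducible) are checked against a general property of the system under test, here Python’s built-in `sorted`:

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so any failure is reproducible

# Random testing: generate many random inputs and check a
# general property of the output rather than exact values.
for _ in range(1000):
    data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
    result = sorted(data)
    assert all(a <= b for a, b in zip(result, result[1:]))  # ordered
    assert Counter(result) == Counter(data)                 # same elements
```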

Regression Testing:

In regression testing, previously tested programs are tested again after modifications. The test ensures that no new defects have been introduced into the software. Regression testing is also performed when the environment changes.

Requirements-based testing:

An approach to testing in which test cases are designed around objectives and test conditions derived from the requirements.

S

Software Audit:

It is a type of review done by people outside the software development team: an independent examination of software or processes to determine compliance with specifications, standards, and guidelines against objective criteria.

Security testing:

Security testing is conducted to check the application or system for vulnerabilities so that internal data is protected from external attackers.

Stress testing:

Stress testing is done to evaluate the system at or beyond its expected limits.

System Testing:

It’s a testing type where the integrated system gets tested to verify if it perfectly meets all the specified requirements.

T

Test case:

It is a set of actions performed to test a specific function or feature of an application or software. A test case typically contains the test steps, data, and preconditions designed for a particular test scenario to verify a requirement.
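
A minimal sketch of how those parts map onto an automated test, assuming a hypothetical `login` function:

```python
def login(username: str, password: str) -> bool:
    """Hypothetical function under test."""
    return username == "alice" and password == "s3cret"

def test_valid_login():
    # Precondition: a registered user "alice" exists (assumed here).
    # Test data: valid credentials.
    username, password = "alice", "s3cret"
    # Test step: attempt to log in.
    result = login(username, password)
    # Expected result: login succeeds.
    assert result is True
```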

Test automation:

It’s an approach to running tests automatically, with little manual effort, to effectively manage test data, utilize the results, and enhance the software’s quality.

U

UAT:

It stands for User Acceptance Testing: a coordinated and managed activity in which the software’s end-users try to find issues by using it in a real-world environment.

Usability testing:

Testing the application or software to check whether it is well understood, easy to use, and user-friendly.

V

Volume testing:

It tests the system’s ability to handle data in large volumes.

Vertical traceability:

It’s when requirements are traced through the layers of development documentation down to components.

W

White-box testing:

It’s a software testing method that tests a component or software’s internal structures where programming skills are used to create the test cases.

There are literally thousands of terms in use, with more added all the time in the testing sphere, but we’ll keep updating this list and keep you posted. If you like the type of content we publish, do subscribe to our blog and we’ll notify you without much noise in your inbox. You can also join our social media family for more interesting and entertaining content, and be sure to give us a thumbs up.

P.S. The QA Masterclass is back with a blazing season 2. Interesting topics and actionable insights are waiting for you. Click here to get registered and enjoy endless learning and more.
