High-Level Testing (PQ)
High-level testing of Computerized Systems (CS) in the context of Computerized Systems Validation (CSV) corresponds to Performance Qualification (PQ) in conventional validation (equipment qualification) and follows the low-level testing of the CS, which corresponds to Operational Qualification (OQ).
The objectives of high-level testing of CS (PQ) are:
- To compare the system’s results against the initial objectives set for it
- To determine whether all specified objectives are being met
- To identify and resolve any discrepancies/issues
Before beginning high-level testing of CS, test data sets are prepared, and if necessary, automated testing tools are developed.
Testing refers to the examination of a computerized system or its components in operation with the goal of detecting and correcting errors. From the GMP/GAMP5 perspective, testing means actually running the system or executing its code components, and it corresponds to the traditional OQ and PQ validation stages.
Testing is conducted according to an approved protocol by feeding the program/component with data from pre-developed test data sets and recording the results in a report. Testing can be manual or automated. In traditional software development practices, manual testing also includes code reviews (inspections, walkthroughs, desk checks), which, in the GMP/GAMP validation framework, corresponds to Stage 4: Design Review/Design Qualification (DR/DQ).
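As an illustration only, the sketch below shows what a minimal automated, protocol-driven test run might look like: test cases are read from a pre-developed data set, fed to the system, and the outcomes are recorded in a report file. The system_under_test() function, the file names, and the column layout are hypothetical placeholders, not part of any specific product.

```python
# Minimal sketch of an automated, protocol-driven test run.
# The test-data file, its columns, and system_under_test() are hypothetical
# placeholders standing in for the real system interface.
import csv
from datetime import datetime, timezone

def system_under_test(case_id: str, value: float) -> str:
    """Placeholder for the real system call being exercised."""
    return "PASS" if 0.0 <= value <= 100.0 else "FAIL"

def run_protocol(test_data_file: str, report_file: str) -> None:
    with open(test_data_file, newline="") as data, open(report_file, "w", newline="") as out:
        reader = csv.DictReader(data)                # pre-developed test data set
        writer = csv.writer(out)
        writer.writerow(["timestamp", "case_id", "expected", "actual", "result"])
        for row in reader:
            actual = system_under_test(row["case_id"], float(row["value"]))
            result = "OK" if actual == row["expected"] else "DEVIATION"
            writer.writerow([datetime.now(timezone.utc).isoformat(),
                             row["case_id"], row["expected"], actual, result])

if __name__ == "__main__":
    run_protocol("test_data_set.csv", "test_report.csv")
```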
In contrast to low-level testing, which involves exhaustive and detailed testing of individual components and system functions, high-level testing focuses on a general evaluation of the system's compliance with stated objectives and its usability for the intended purposes.
Where low-level testing involves testing all individual elements, high-level testing incorporates selectivity and subjectivity in test design. High-level testing corresponds to PQ in traditional validation, although some tests can be classified as OQ.
Several terms are used in relation to high-level testing: system testing, user testing, and acceptance testing. These terms are not mutually exclusive testing methods but represent different aspects of testing that may overlap.
System testing most closely aligns with the concept of high-level testing and is the most appropriate term to describe it as a validation stage. In system testing, test sets are based on defined system parameters that must be evaluated.
User testing is a subset of either low- or high-level testing (typically functional and/or system testing), conducted by the system's client or after installation in the final user environment.
Acceptance testing is another subset of testing (typically functional and/or system testing) performed as part of the formal acceptance of the system by the client to verify compliance with contract specifications. This type of testing places more emphasis on commercial aspects and relates to system testing similarly to how Site Acceptance Testing (SAT) relates to validation.
System Test Categories
System tests can be divided into the following 15 categories:
- Facility
- Volume
- Stress
- Usability
- Security
- Performance
- Storage
- Configuration
- Compatibility
- Installation
- Reliability
- Recovery
- Serviceability
- Documentation
- Procedures
Each category represents a set of tests related to a specific system function or aspect of its operation.
Category: Facility
- Facility testing verifies the system’s ability to perform specific tasks as outlined in the specification of objectives.
- This category is similar to functional testing but differs in that it tests more complex tasks requiring the coordinated operation of multiple elementary functions.
- An example would be verifying the existence and operation of the Audit Trail (a sketch of such a check follows this list).
- Depending on the system, facility tests may also be part of functional testing.
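A minimal sketch of such an Audit Trail check is shown below, written in a pytest style; the client object and its methods (update_record, audit_trail) are hypothetical stand-ins for the real system interface.

```python
# Illustrative facility test: perform a change and confirm it appears in the
# Audit Trail. The client fixture and its methods are hypothetical; a real
# test would use the system's actual API or UI automation layer.
def test_audit_trail_records_changes(client):
    before = len(client.audit_trail(record_id="SAMPLE-001"))
    client.update_record("SAMPLE-001", field="status", value="Approved")
    entries = client.audit_trail(record_id="SAMPLE-001")
    assert len(entries) == before + 1
    latest = entries[-1]
    # An audit-trail entry is expected to capture who, when, and what changed.
    assert latest["user"] and latest["timestamp"]
    assert latest["field"] == "status" and latest["new_value"] == "Approved"
```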
Category: Volume
- Volume testing involves evaluating the system’s ability to handle exceptionally large amounts of data.
- The goal of this test is to demonstrate that the system can process the maximum volumes of data specified in the objectives.
- An example could be importing, exporting, or batch processing very large amounts of data or documents, such as environmental parameter data for an entire facility over several years in a GMP monitoring system.
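A hedged sketch of such a volume test follows; the import_environmental_data() call, the record structure, and the five-year/one-minute sampling assumption are illustrative, not taken from a real specification.

```python
# Illustrative volume test: feed the maximum specified number of records and
# verify that all of them are imported. The system fixture, the import call,
# and the sampling figures are assumptions.
def test_import_five_years_of_monitoring_data(system):
    records = [
        {"point": "RM-101", "parameter": "temperature", "value": 21.5, "index": i}
        for i in range(5 * 365 * 24 * 60)   # ~2.6 million one-minute samples
    ]
    result = system.import_environmental_data(records)
    assert result.imported_count == len(records)
    assert result.rejected_count == 0
```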
Category: Stress
- Stress testing subjects the system to intense loads. Unlike volume testing, which concerns the total amount of data, stress testing focuses on peak rates: the volume of data or the number of operations handled within a short timeframe.
- Stress testing applies to systems with variable loads where time is a factor. An example might be a server simultaneously handling the maximum number of user requests.
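The sketch below illustrates the idea with a simple concurrent-request test; the endpoint URL and the limit of 200 simultaneous users are assumed values that would normally come from the objectives specification.

```python
# Illustrative stress test: fire the maximum specified number of simultaneous
# requests and check that none fail. The URL and the concurrency limit are
# hypothetical values from an assumed requirements document.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

MAX_CONCURRENT_USERS = 200
URL = "http://test-server.local/api/status"   # placeholder endpoint

def one_request(_):
    with urllib.request.urlopen(URL, timeout=10) as resp:
        return resp.status

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_USERS) as pool:
    statuses = list(pool.map(one_request, range(MAX_CONCURRENT_USERS)))

assert all(code == 200 for code in statuses), "some requests failed under peak load"
```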
Category: Usability
- Usability testing assesses the user interface. This type of testing is always performed manually and checks whether the interface is intuitive and whether the system is easy to use.
- An example would be a Laboratory Information Management System (LIMS), where ease of registering samples, searching, and displaying data is evaluated.
- This type of testing is sometimes called user testing.
Category: Security
- Security testing involves creating conditions that could potentially compromise the system’s security measures.
- Known vulnerabilities in similar systems can help develop tests to determine whether similar issues are present.
- An example could be any system with remote access tested for vulnerabilities to unauthorized access.
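As a simple illustration, the sketch below checks that a request without credentials is rejected; the endpoint and the expected 401/403 response are assumptions about a generic HTTP interface, not a specific product.

```python
# Illustrative security test: confirm that a request without valid credentials
# is rejected. The endpoint and expected status codes are assumptions.
import urllib.request
import urllib.error

def test_rejects_unauthenticated_access():
    req = urllib.request.Request("http://test-server.local/api/batch-records")
    try:
        urllib.request.urlopen(req, timeout=10)
        raise AssertionError("unauthenticated request was not rejected")
    except urllib.error.HTTPError as err:
        assert err.code in (401, 403)
```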
Category: Performance
- The objectives for system performance often include metrics such as response time or throughput. Performance testing checks whether the system meets these target metrics.
- An example would be evaluating response time or latency in an industrial process control and data acquisition system (SCADA).
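A minimal sketch of a response-time check is shown below; the read_tag() function and the 500 ms / 1 s acceptance limits are hypothetical and would normally be defined in the objectives.

```python
# Illustrative performance test: measure response time over repeated calls and
# compare it with target metrics. read_tag() and the limits are hypothetical.
import time
import statistics

def measure_response_times(read_tag, repetitions: int = 100) -> list:
    times = []
    for _ in range(repetitions):
        start = time.perf_counter()
        read_tag("TT-101.PV")                 # e.g. read one process value
        times.append(time.perf_counter() - start)
    return times

def test_response_time(read_tag):
    times = measure_response_times(read_tag)
    assert statistics.mean(times) < 0.5       # mean response below 500 ms
    assert max(times) < 1.0                   # no single call above 1 s
```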
Category: Storage
- Systems may use various types of memory, such as RAM, disk space, or flash memory. Since different processes or software layers may run concurrently on the same hardware, they can indirectly affect each other by consuming or locking shared system resources. In some cases, resources may not even be sufficient for a single process.
- Storage testing aims to determine the amount of system resources consumed.
- An example would be checking the amount of RAM and disk space used by an Environmental Monitoring System (EMS) to predict how long the local storage space will be sufficient.
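The sketch below illustrates one way such a check might be scripted, using the third-party psutil package; the process name, data path, and daily growth figure are assumptions.

```python
# Illustrative storage test: record how much RAM and disk the monitoring
# service consumes and project how long local storage will last.
# The process name, data path, and growth rate are assumptions;
# psutil is a third-party package (pip install psutil).
import psutil

DATA_PATH = "/var/lib/ems/data"        # hypothetical local data store
DAILY_GROWTH_BYTES = 50 * 1024**2      # assumed ~50 MB of new records per day

ems = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == "ems-server")
rss_mb = ems.memory_info().rss / 1024**2
free_bytes = psutil.disk_usage(DATA_PATH).free
days_remaining = free_bytes / DAILY_GROWTH_BYTES

print(f"EMS RSS: {rss_mb:.0f} MB, storage sufficient for ~{days_remaining:.0f} days")
```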
Category: Configuration
- Many systems allow configuration of parameters or customization of settings. Some systems also enable saving and loading user configurations from files.
- Configuration testing checks whether the process of setting parameters is functioning correctly and whether configuration changes affect system performance.
- An example would be testing recipe configuration parameters in a pharmaceutical solution preparation control system.
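A minimal sketch of a configuration round-trip check follows, in a pytest style; the recipe parameters and the JSON file format are illustrative assumptions.

```python
# Illustrative configuration test: save a recipe configuration, reload it, and
# confirm the parameters survive the round trip unchanged. The parameter names
# and file format are hypothetical.
import json

def test_recipe_configuration_round_trip(tmp_path):
    recipe = {"name": "WFI dilution", "target_volume_l": 250.0,
              "stirrer_rpm": 120, "temperature_c": 25.0}
    config_file = tmp_path / "recipe.json"
    config_file.write_text(json.dumps(recipe))          # "save configuration"
    reloaded = json.loads(config_file.read_text())      # "load configuration"
    assert reloaded == recipe
```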
Category: Compatibility
- Compatibility refers to the system’s ability to operate correctly in various scenarios:
  - Migration to a different hardware platform
  - Migration to a different software platform (e.g., a different OS version or virtualization software)
  - Software upgrades
  - Handling data files from older/newer software versions or from other programs
  - Data exchange or migration to other systems
- System compatibility must be verified for all scenarios likely to arise during operation.
- An example of compatibility testing would be verifying data transfer from a LIMS to an Enterprise Resource Planning (ERP) system, such as SAP.
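The sketch below outlines how such a transfer check might be automated; both client objects (lims, erp) and the record fields are hypothetical placeholders for the real interfaces.

```python
# Illustrative compatibility test: export records from one system, import them
# into another, and compare field by field. The fixtures and record structure
# are hypothetical.
def test_lims_to_erp_transfer(lims, erp):
    exported = lims.export_released_batches(since="2024-01-01")
    erp.import_batches(exported)
    for batch in exported:
        received = erp.get_batch(batch["batch_number"])
        assert received["material_code"] == batch["material_code"]
        assert received["quantity"] == batch["quantity"]
        assert received["release_status"] == batch["release_status"]
```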
Category: Installation
- Software installation, in addition to copying executable files, may involve several complex operations, such as:
  - Analyzing hardware and software configurations and adapting the installation accordingly
  - Creating directory structures and configuring attributes
  - Registering the software in the system registry
  - Setting up accounts, encryption keys, and services
  - Configuring permissions and system startup parameters
- Even validated systems may require a complete software reinstallation, so the installation process must be tested.
- Installation testing is particularly important if end users will be responsible for installing the software.
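A minimal sketch of an automated post-installation check is shown below; all paths are hypothetical examples of artifacts an installer might be expected to create.

```python
# Illustrative installation test: after running the installer, confirm the
# expected directories and files are present. All paths are hypothetical.
import pathlib

EXPECTED_PATHS = [
    "/opt/ems/bin/ems-server",
    "/opt/ems/config/ems.conf",
    "/var/lib/ems/data",
]

def test_installation_artifacts():
    missing = [p for p in EXPECTED_PATHS if not pathlib.Path(p).exists()]
    assert not missing, f"installation incomplete, missing: {missing}"
```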
Category: Reliability
- The primary reliability metric for computerized systems is the time they operate without failure. One formalized reliability metric is Mean Time Between Failures (MTBF).
- In pharmaceutical computerized systems, this criterion is challenging to test, and reliability is often ensured through design.
- Reliability should be confirmed using criteria defined by the process and quality system. It can be determined through failure statistics over an extended period.
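As a small illustration, the sketch below estimates MTBF from failure statistics collected over an observation period (MTBF = operating time / number of failures); the dates, failure count, and downtime are made-up example data.

```python
# Illustrative MTBF estimate from failure statistics gathered over an
# extended period. All figures below are made-up example data.
from datetime import datetime

observation_start = datetime(2023, 1, 1)
observation_end   = datetime(2024, 1, 1)
failures = 3                              # failures recorded in the period
total_downtime_h = 6.0                    # cumulative repair/outage time

total_hours = (observation_end - observation_start).total_seconds() / 3600
operating_hours = total_hours - total_downtime_h
mtbf_hours = operating_hours / failures   # MTBF = operating time / failures

print(f"MTBF over the observation period: {mtbf_hours:.0f} h")
```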
Category: Recovery
- Various system failures can occur during operation, such as power outages, hardware failures, I/O errors, or loss of communication. The system design must allow for proper recovery without data integrity loss or disruption of the process.
- Recovery testing involves intentionally simulating system failures and verifying the system's ability to recover.
- A typical recovery test is the power failure test, commonly part of Operational Qualification (OQ) in traditional validation (equipment qualification). For information systems, a typical test is Backup Data Recovery.
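A minimal sketch of a Backup Data Recovery check follows, in a pytest style; the file names and stand-in data are hypothetical, and a real test would use the system's actual backup and restore procedures.

```python
# Illustrative backup-recovery test: back up a data file, simulate its loss,
# restore it from the backup, and confirm the content is bit-for-bit identical.
# The file names and data are hypothetical placeholders.
import hashlib
import pathlib
import shutil

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def test_backup_restore(tmp_path):
    original = tmp_path / "ems_data.db"
    original.write_bytes(b"monitoring records ...")     # stand-in data file
    backup = tmp_path / "backup" / "ems_data.db"
    backup.parent.mkdir()
    shutil.copy2(original, backup)                      # scheduled backup
    checksum_before = sha256(original)

    original.unlink()                                   # simulated data loss
    shutil.copy2(backup, original)                      # restore procedure
    assert sha256(original) == checksum_before          # data integrity intact
```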
Category: Serviceability
- Systems generally cannot be maintained using only their primary functionality. They are therefore often equipped with additional features and tools designed specifically for maintenance, such as diagnostic tools, error logs, variable-value visualization tools, and data file converters.
- Serviceability testing checks whether these maintenance tools work correctly and are user-friendly. An example would be testing the simulation tools for measurement channel values in a GMP monitoring system.
Category: Documentation
- Documentation testing assesses the completeness, detail, accuracy, and clarity of the user documentation.
- This category also includes built-in help sections, tooltips, and explanations in dialog boxes.
Category: Procedures
- Some operations related to CS operation or maintenance are performed manually according to an approved procedure (SOP), such as data archiving, registering new user accounts, or system restart after a failure.
- High-level software testing should include verification of these procedures for correctness, completeness, and clarity.
- Procedure testing should be performed by personnel not involved in their development or system administration.
The specialists at Tarqvara Pharma Technologies have years of experience in conducting qualification, validation, and acceptance tests within the pharmaceutical industry, in full compliance with international, European, and national GMP/GxP regulations and standards.
See also:
Qualification / Validation / Commissioning
Computerized Systems Validation (CSV)
Acceptance Tests (FAT/SAT)
Risk-Oriented Approach