Exploring the Critical to Quality (CTQ) Factors

Using the Principles Document should be a thinking exercise, not a 'check the box' exercise. We encourage you to further explore the various Critical to Quality (CTQ) factors by category below. Additional details are provided to give the description and rationale for the factors within each category, as well as potential considerations in evaluating the relative importance of each CTQ factor for your study.

PROTOCOL DESIGN

Eligibility Criteria

Carefully designed eligibility criteria ensure that the intended study population is enrolled and that trial participants for whom participation may be harmful are not included. Ambiguity may result in inconsistent application across sites; overly restrictive criteria may limit the real-world applicability of results or impede trial participant recruitment.

Each criterion should be evaluated in terms of its utility in 1) defining the population, 2) excluding trial participants for whom there are safety concerns, 3) avoiding confounding of efficacy measures, and 4) identifying contraindicated medications or procedures. If a criterion does not have utility by these measures, the rationale for retaining it should be further considered.

  1. Describe the specific population needed for the trial to evaluate the intended question. If this specific population is not enrolled, will trial results be brought into question?
  2. Are there trial participant populations that must be excluded from enrollment due to specific safety concerns with administration of the product to that population?
  3. Evaluate the impact of “getting it wrong” with regard to eligibility. If a trial participant is found to not meet a criterion, what is the impact on the trial (e.g., if removed, replaced, considered treatment failure)?
  4. Is the trial intended to evaluate effectiveness and safety of the investigational product in a real-world population that would be likely to receive the product after approval?
  5. What are the commonly accepted criteria for diagnosing and evaluating patients:
    a.  With the disease under study?
    b.  With comorbid conditions that are exclusionary?
  6. Have PPAO and participating investigators provided input as to the feasibility of implementing the criteria?

Randomization

Randomization, when appropriately executed, addresses selection bias and provides a valid basis for making comparisons between, and drawing statistical inferences about, study groups. The integrity of randomization rests on both sponsor- and site-level processes. For example, the sponsor or its designee generates and programs randomization schemes and must ensure adequate allocation concealment; site staff must administer the treatment to which a trial participant was randomized.

  1. Is the study randomized?
  2. If the study is randomized, consider:
    1. Who will generate and implement the randomization schema?
    2. What is the method by which randomization will occur?
    3. Are any specific approvals needed to randomize a trial participant?
    4. Who is permitted to randomize trial participants?
    5. How and by whom will randomization errors be managed?

Masking

Masking may minimize biases in the management, treatment, or assessment of trial participants, or in the interpretation of results, that arise from trial participant, investigator, or study staff knowledge of treatment assignment. Prespecified controls should be considered to prevent unmasking and to manage potential unmasking events should they occur. Designs that require some staff (whether at the sponsor or site level) to be unmasked while maintaining masking for others present opportunities for inadvertent unmasking and may require additional controls.

  1. What is the impact of unmasking for this study? Does it pose a risk to interpretation of study outcomes?
  2. Does the study design:
    1. Require that some site staff members be unmasked while others remain masked?
    2. Require that some sponsor or contract or academic research organization (CRO/ARO) staff members be unmasked while others remain masked?
    3. Require study data to be unmasked for periodic interim reviews/analyses (e.g., for a data monitoring committee [DMC] or adaptive design)?
  3. If so, describe the process(es) and responsibilities for maintaining masking in these scenarios.

Types of Controls

The acceptability of the control (if used) may affect the willingness of trial participants to take part in the study, as well as how different stakeholders (e.g., patients, regulators, payers) perceive the value and reliability of the study’s conclusions.

  1. Consider the type(s) of control(s) to be used in the study (e.g., placebo/sham procedure, standard of care, historical) and the rationale for selection.
  2. Is there clinical equipoise? Do PPAO and treating physicians agree that there is clinical equipoise?
  3. Is a control group feasible, especially from the PPAO and treating physician perspective?
  4. Identify controls that may be preferred by different stakeholders (regulators, payers, PPAO).

Data Quantity

There are a variety of viewpoints and interests involved in designing a trial. Only the minimum data set that is sufficient to address the study endpoints and meet the needs of the various stakeholders should be collected (data parsimony).

  1. What data points are critical to addressing the question(s) posed by the trial?
  2. How will these critical data points be generated, collected, and reported?
  3. What is the distinction between exploratory endpoints and primary and secondary endpoints?
  4. Does the need for exploratory data endpoints unduly burden data collection?
  5. Have PPAO and participating investigators provided input as to which data points are the most important to them?

Endpoints

Clearly defining study endpoints and describing how endpoint data are to be collected and reported will support consistent trial implementation across sites and prevent errors that may interfere with analysis and bring into question study conclusions. In defining endpoints, prospective attention should be given to the degree of objectivity in assessment of endpoints, the potential for simple external verification (e.g., death certificates, central and/or bioanalytical laboratory data), and potential for unbiased adjudication or review of endpoint data.

  1. Is/are the endpoint(s) commensurate with the scientific question/objectives of the study?
  2. Will the endpoint have a clinically meaningful impact on patient care or provide a unique building block for future research?
  3. Are standardized and generally accepted endpoint definitions and methods to ascertain endpoints available?
  4. If there are multiple primary endpoints, verify and describe how each is necessary to address, and directly links to, the scientific question posed by the study.
  5. Consider the characteristics of the primary endpoint(s), including
    • How is the endpoint defined?
    • Is it assessable?
    • How and by whom will the endpoint(s) be ascertained (e.g., investigator, centrally, third party uninvolved in the study)?
    • If the endpoint is to be adjudicated, what were the criteria to determine that adjudication was necessary?
    • Is the endpoint objective (e.g., pregnancy, death) or subjective (e.g., pain score)?
    • Is the endpoint event-driven?
  6. Have patient-reported outcomes (PROs) been considered as an endpoint? What are the risks and benefits of their use?

Procedures Supporting Study Endpoints and Data Integrity

Collection of critical data and effective monitoring of trial participant safety depend on consistent conduct of key study procedures. Resources should be focused on preventing errors in the critical study procedures that support collection and reporting of critical data directly related to study endpoints, and in the study procedures necessary to ensure adequate monitoring of trial participant safety.

  1. Can the investigational product technically do what you are aiming for clinically?
  2. What procedures are critical to collecting reliable data for analysis of study endpoints? Which are non-critical?
  3. How necessary is it for these procedures to be conducted absolutely consistently across sites or in a highly specific manner or window?
  4. What procedures do not significantly impact data analysis or trial participant safety (i.e., where error or inconsistency in conduct can generally be tolerated)?

Investigational Product (IP) Handling and Administration

Appropriate controls must be in place to ensure the consistency of IP from manufacturing through administration. In addition, evaluation of both the efficacy and safety effects of an intervention requires confirmation that the assigned intervention was received as prescribed in the investigational plan.

  1. Describe the IP, including any special considerations for its handling and use in this trial.
  2. Evaluate any specific safety concerns associated with the use of the product and describe how these have been identified and managed in prior investigational or marketing experience.
  3. What IP use data are integral to evaluating trial results? Why are these data critical?
  4. For implantable devices, what information about the implant procedure is critical to trial analysis, results, and reporting?
  5. For diagnostic trials, how will appropriate handling of specimens be verified?
  6. If the protocol calls for dosage adjustments of IP or control product, are the directions and procedures for making dosage adjustment(s) clear and is the responsible entity (e.g., interactive voice response system directed, site staff) clearly defined?

FEASIBILITY

Study and Site Feasibility

As the success of a study is largely dependent on the implementation of the investigational plan by investigator sites, it is important to assess the feasibility of successful completion of the study at potential sites. Consideration should be given to what kind of site is required based on the particular study design. Typical areas considered include the site’s access to the study target population, whether site staff are qualified to conduct the study, and whether the site has adequate resources to conduct the study, especially if the experimental arm involves a change in procedure from standard care.

Expanding this inquiry beyond traditional measures can highlight important issues with trial feasibility, such as:

  • Inconsistency across countries in standard of care vs. protocol-defined procedures.
  • Important differences in study staff expertise.
  • Potential critical differences in characteristics of the patient population.
  • Disparate access to trial participant data.

Identifying such issues early in protocol development may permit the protocol or other aspects of the investigational plan to be modified in order to minimize their impact.

  1. Describe the countries and regions in which the trial is planned. Consider both the countries/regions in which the trial will initially be conducted and those that might be added to bolster enrollment. If the trial could not be conducted in these regions, would there be an impact on the trial completion or conclusions?
  2. Discuss the standard of care for the therapeutic area/indication in the different countries/regions in which the trial will be conducted.
  3. Are established research networks for the therapeutic area available?
  4. Evaluate the level of clinical experience with the trial interventions that will be needed at the clinical sites.
  5. Describe the site-level infrastructure, resources, and any specific certification or training necessary to carry out the planned study visits and procedures and to collect and report data in a timely manner.
  6. Will the protocol design be pretested with investigators, site staff, and/or PPAO during development?
  7. Consider the reimbursement issues that impact conduct of the study at the site:
    1. Will unmasking become an issue in securing reimbursement for trial participants in the control arm?
    2. Will use of the investigational product in the post-marketing setting affect reimbursement?

Accrual

A study may be well designed scientifically but still fall short, or even fail, if the appropriate number of trial participants cannot be accrued. Factors considered during feasibility assessment may enhance the likelihood that the study will accrue sufficient trial participants to address the intended objectives posed by the protocol.

  1. Describe the enrollment needed by site and overall to complete the study.
  2. Determine if historical data are available regarding enrollment and site performance, including:
    1. Recent data (if available) regarding enrollment for similarly designed trials.
    2. Whether the anticipated patient population will be available in the regions in which the study is planned.
  3. Are there competing trials for this patient population? What impact might this have on any pre-specified sample sizes for subgroups of trial participants?
  4. Are existing patient advocacy groups or support networks available that can be used to generate interest and support around the trial? Consider involving these groups from the time of initial protocol development.

PATIENT SAFETY

Informed Consent

The clinical investigator has a responsibility to ensure that trial participants’ participation in research is informed and voluntary, and that new information that may affect trial participants’ willingness to continue in the study is communicated in a timely manner. Informed consent is an ongoing process, and the consent document should be the basis for a meaningful exchange between the investigator (or designee) and the trial participant.

  1. What are the key elements of the informed consent process for this study?
  2. Have various stakeholders, especially PPAO and treating physicians, been involved in the development of the informed consent document?
  3. Does the consent document employ plain language principles, including description of symptoms rather than disease state (e.g., fatigue rather than anemia)?
  4. How does the consent process (vs. the document) fit within the study processes?
  5. Describe the study population. Is there the potential for:
    • Vulnerable trial participants?
    • Trial participants with impaired cognition or diminished capacity to consent, either initially or over time?
    • Emergency situations in which obtaining consent prospectively may not be feasible?

Withdrawal Criteria and Trial Participant Retention

Clear criteria for stopping study treatment and/or withdrawing trial participants from the study are necessary to ensure the protection of trial participants; however, consideration should be given to methods that will preserve trial participants’ safety and rights, while still minimizing loss of critical outcomes data.

  1. Describe the situations in which trial participants should or may be withdrawn from study treatment.
  2. For participants who stop the assigned treatment, what data are critical for study analysis and reporting?
  3. For this study, what steps are required prior to deeming a trial participant “lost to follow-up”? Are there critical data (e.g., survival status) that might need to be collected for these trial participants?
  4. How will trial participants with permanent device implants be followed upon withdrawal?
  5. In non-randomized trials, how are trial participants who withdraw after treatment assignment but prior to enrollment handled (i.e., will trial participants be replaced, counted as treatment failures, etc.)?
  6. For the disease under study, are there active patients, patient advocacy groups, or patient support groups that communicate to the community the importance of full and complete participation in trials? Have these groups been involved in the development of the retention plan?

Signal Detection and Safety Reporting

Implementing safety-reporting systems that are appropriate to the nature of the interventions (e.g., what is known about the investigational product and the risk to trial participants) will facilitate timely identification of safety signals and efficient, expedited reporting.

  1. Describe the planned processes for monitoring existing and identifying new or emerging safety signals.
  2. For known safety concerns:
    • What specific evaluations does the study include to further characterize the association between the investigational product and event?
    • How and in what time frame are data from these evaluations to be collected/reported?
  3. How will emerging safety issues from other sources (e.g., other trials, real-world use) that may have an impact on study design and conduct be identified?
  4. Consider what events are anticipated to occur in the study population. How and in what time frame will these events be reported in the study?
  5. For non-randomized studies, how will safety signals be assessed in the absence of comparators?
  6. What level of risk are different stakeholders, including trial participants, willing to assume?

Data Monitoring Committee (DMC)/Stopping Rules (if applicable)

When interim monitoring of accumulating efficacy and/or safety data is considered necessary to make determinations on whether to continue, modify, or terminate a trial, this process may be best accomplished by use of a DMC. Use of an appropriately convened DMC should protect the integrity of the trial from adverse impacts that might otherwise arise from access to unmasked interim trial data by individuals involved with the design, conduct, and monitoring of the trial. Prior to initiating any data review, the DMC is responsible for defining its deliberative processes, including event triggers that would call for an unscheduled review, stopping guidelines, unmasking, and voting procedures. The DMC is also responsible for maintaining the confidentiality of its internal discussions and activities, as well as the contents of reports provided to it, to prevent the introduction of bias.

  1. Describe the circumstances in which the study should be terminated early. At what point, if any, would the study be stopped early for efficacy?
  2. Evaluate whether the study should include a DMC. DMCs are generally recommended for any controlled trial of any size that will compare rates of mortality or major morbidity (FDA DMC guidance).
  3. Will the DMC be responsible only for this study, or will they monitor trials across a development program?
  4. If there is not a DMC, how will analyses be performed on accumulating safety data and how will decisions be made about necessary actions?
  5. How might new information from outside the trial (such as results from a competitor) be incorporated into ongoing assessments of the benefit/risk ratio for participants in the study?
  6. If the trial has multiple adaptive procedures (adaptive randomization, early stopping, sample size re-estimation), how will these rules interact with others to be used by the DMC?
  7. Consider, a priori, the data reporting order (e.g., DMC → steering committee → sponsor) for stopping rules or preplanned adaptations.

STUDY CONDUCT

Training

Study-specific training may involve all stakeholders, including but not limited to sponsors, third-party service providers, DMCs, adjudicators, investigators, coordinators, other local site staff, and/or trial participants. Ongoing focused training of study staff during the study can reinforce protocol requirements as well as provide needed updates when some portion of the investigational plan has been amended (e.g., protocol, CRF, EDC, monitoring plan). Study-specific training minimizes site-to-site variability in conduct of critical study procedures and ensures that all stakeholders understand and appropriately implement the protocol.

  1. Consider the critical elements of the investigational plan, including whether these activities are carried out and/or critical data generated by:
    1. Sponsor staff.
    2. CRO/ARO staff.
    3. Other third parties (e.g., adjudication committee).
  2. For what critical activities are focused and/or targeted training necessary to ensure appropriate and consistent conduct?
  3. Consider any study-specific assessments for which staff must be certified vs. trained (e.g., use of the investigational product).
  4. How applicable will the training employed during the study be in more general settings?
  5. Will roll-in trial participants be used at sites? How many? How will these trial participants contribute to the overall findings of the study?
  6. How might human factors (HF) play a role in the intended use of the investigational product? How can training be used to mitigate HF risks?

Data Recording and Reporting

The manner and timeliness in which study data are collected and submitted to the clinical trial database are critical contributors to overall trial quality.

  1. Consider how and by whom critical data will be collected and reported (e.g., CRF, EDC, PRO).
  2. Can IT systems (e.g., EDC) also be used to encourage and enforce compliance with the protocol requirements for data capture and reporting?
  3. Will standardized data definitions be used when available?
  4. Will there be eSource records, and how and by whom will they be managed?
  5. Can study data be captured in parallel with routine clinical assessments and documentation?
  6. Does the investigator need to review and/or take action on data generated directly by the trial participant or a third party?
  7. Will multiple data systems be utilized, requiring transfer and integration (e.g., central lab, interactive voice response system, imaging reader)?

Data Monitoring and Management

Sponsors have an obligation to monitor the progress of their trial. Ongoing data monitoring provides assurance that trial participants’ safety will be protected (e.g., a trial will be terminated if it presents an unreasonable and significant risk) and that the data gathered during a trial will be fit for purpose. Operational checks (e.g., on-site, remote, and centralized monitoring) and statistical surveillance can identify important data quality issues at a point at which corrective action is feasible.

  1. Identify departures from study conduct that may generate “errors that matter.”
  2. Which data are not critical to study analysis?
  3. By what methods will data be monitored while the study is ongoing? At what frequency?
  4. Will centralized statistical monitoring approaches be used in combination with on-site monitoring activities? (Additional resources are available through CTTI and the FDA.)
  5. What functional lines will be involved in ongoing data monitoring?
  6. Identify which function/individual is ultimately responsible for the decision to lock and unlock the database.
  7. What types of issues is the monitoring plan designed to detect? Is it sufficiently comprehensive?
  8. Define critical data elements for data management during protocol development.

Statistical Analysis

Details of the study design and conduct, as well as the principal features of its proposed statistical analysis, should be clearly specified in a protocol written before the study begins. The extent to which procedures in the protocol are well defined and the primary analysis is planned, a priori, will contribute to the degree of confidence in the final results and conclusions of the trial.

  1. What data are critical to the statistical analysis plan (SAP)?
  2. Does the study include multiple endpoints? What is the order of analysis?
  3. Consider how:
    1. Data that are differentially obtained will be handled (e.g., loss to follow-up or early withdrawal).
    2. Missing data will be dealt with in the analysis.
  4. Clearly identify which trial participants are to be included in intention-to-treat (ITT) analysis vs. per protocol or as treated analyses.
  5. How will evaluation and/or implementation of stopping rules affect the statistical analysis? [See PATIENT SAFETY – Data Monitoring Committee (DMC)/Stopping Rules above for additional information.]

STUDY REPORTING

Dissemination of Study Results

To assess a trial accurately, readers of a published report need complete and clear information. Study reporting may include submission of clinical study reports (CSRs) to regulators, reporting to public clinical trial registries (e.g., ClinicalTrials.gov), and other means of disclosing study results to stakeholders. Transparency of both the data and the processes for analyzing the data allows both regulators and the public to understand the scientific and ethical conduct of the trial.

  1. Identify who will have rights to publish or otherwise disseminate study results. Consider a writing committee to oversee all papers resulting from the study database; the committee should include all stakeholders involved in the development of the trial.
  2. To whom will trial results be submitted and for what purposes?
  3. Does the trial sponsor have obligations to publish or disclose study data (e.g., corporate policy, national clinical trial registry)?
  4. Will the CSR include a quality by design section describing all relevant quality findings during the study and actions taken?
  5. When/how should study data be shared with trial participants? How will important information be communicated to trial participants?
  6. Clearly identify primary vs. secondary vs. post hoc analyses in study reports.
  7. Clearly identify which subset analyses were preplanned vs. which were post hoc.
  8. Can ITT, per protocol, and as treated definitions, as defined in the protocol, be appropriately translated in the study report?

THIRD-PARTY ENGAGEMENT

Delegation of Sponsor Responsibilities

Sponsors are increasingly reliant on third-party service providers (e.g., CROs, AROs, and other study-specific vendors) to assist with activities ranging from designing a study through reporting its results. As a result, multiple parties have or share responsibility for study conduct and/or oversight at different points of the study. Sponsors should therefore have appropriate levels of internal governance and oversight when engaging third parties in the design, conduct, and reporting of clinical trials. The sponsor should ensure that CROs/AROs and other study vendors are (and remain) qualified to carry out contracted activities. Sponsors must also consider appropriate controls to ensure, in an ongoing manner, that CROs/AROs and vendors are carrying out these activities appropriately and in accordance with contractual requirements or other defined quality expectations.

  1. What activities will be delegated to a CRO/ARO or conducted by another third party?
  2. Which of these are CTQ activities?
  3. Will the entire activity be delegated, or will the sponsor retain responsibility for some aspects?
  4. Are there unique risks that matter to the trial inherent in this partnership?
  5. What infrastructure and capabilities are required to manage the relationship and provide appropriate oversight of the deliverables from the third party?
  6. Is there clarity of what needs to be escalated and when? Is there a clear escalation pathway for all parties? Do all parties understand escalation pathways?

Collaborations

Sponsors are increasingly using alternative models to develop medicines, such as co-sponsorships (where permitted), co-development programs, licensing agreements, collaborations, and acquisitions. These arrangements create the need to ensure mutual understanding of roles and responsibilities at different stages of the development life cycle. The type of collaboration will drive the nature and degree of oversight and control that is necessary and/or feasible.

  1. What is the intended use of the data?
  2. Is there a clear understanding of who the sponsor is and who holds the investigational new drug/clinical trials application?
  3. Is there a mutual understanding of what is critical to quality, so that collaborative partners give proper attention to CTQ areas?
  4. Are there unique risks that matter to the trial inherent in this partnership?