library(CohortPlat)

Simulation of cohort platform trials investigating combination treatments with a common backbone versus placebo

The following document gives an overview of the CohortPlat R package. Each function is discussed separately. For the rest of this document it is assumed that the package has been installed and loaded. Many examples are provided throughout.

Trial Design Overview

We look at an open-entry, cohort platform study design with a binary endpoint investigating the efficacy of a two-compound combination therapy compared to the respective mono therapies and placebo, whereby we assume one of the mono therapies to be the backbone of all combinations, with putative superiority over placebo. After an initial inclusion of one or more cohorts, we allow new cohorts to enter the trial over time. Each cohort consists of four arms: combination therapy, mono therapy, backbone therapy and placebo. The backbone therapy is the same for all cohorts, while the mono therapy, and as a result the combination therapy, differ in every cohort. We furthermore assume that one interim analysis for efficacy and futility is conducted for every cohort. The short-term interim endpoint is assumed to be a binary surrogate of the final endpoint, with a certain sensitivity and specificity in predicting the final endpoint.

We ultimately seek regulatory approval of the combination therapy; therefore, superiority of the combination therapy over both mono therapies and superiority of both mono therapies over placebo need to be shown. Depending on the level of prior study information available, i.e. whether or not the superiority of the mono therapies over placebo has already been shown, we differentiate three testing strategies. In the first testing strategy, we assume superiority has been shown for both mono therapies; therefore we are only interested in testing the combination therapy against both mono therapies. In the second testing strategy, we assume superiority of the backbone mono therapy versus placebo has been shown, but not for the add-on mono therapy. Therefore, compared to the first testing strategy, we additionally test the add-on mono therapy against placebo, resulting in three comparisons. In the third and last testing strategy, we do not assume any superiority has been shown yet. Therefore, compared to the second testing strategy, we additionally test the backbone mono therapy against placebo, resulting in four comparisons.

For every testing strategy a certain number of comparisons is conducted. For each of these comparisons, we allow an arbitrary number of decision rules, which can either be Bayesian, based on the posterior distributions of the response rates of the respective study arms, or frequentist, based on either a p-value or on point estimates and confidence intervals. Generally, we allow decision rules for declaring efficacy and decision rules for declaring futility. In order to declare efficacy, all efficacy decision rules must be simultaneously fulfilled. In order to declare futility, it is enough if any of the futility decision rules is fulfilled. Please note that it is possible that neither is satisfied. In case this happens at the final analysis, we declare the combination therapy unsuccessful, but due to not reaching the superiority criterion at the maximum sample size, and not due to reaching the futility criterion. While this is only a technical difference, this information should be available when conducting the simulations. Please further note that at interim only a decision to enrich or not enrich can be made, hence only success criteria are needed. Related to the flexible decision rules, we can specify whether we want to share information on the backbone mono therapy and placebo arms across the study cohorts. We allow four options: 0) no sharing, using only data from the current cohort, 1) only sharing of concurrent data, 2) using a dynamic borrowing approach further described in the appendix, in which the degree of shared data increases with the homogeneity of the treatment efficacy, i.e. sharing more if the treatment efficacy is similar, and 3) full sharing of all available data. For a detailed formulation of the decision rules, see make_decision_trial().
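
To illustrate how multiple decision rules combine into one decision, the following minimal sketch (plain R, not package code; the rule results are hypothetical) shows the all-versus-any logic described above:

# Hypothetical results of the individual decision rules for one comparison set
eff_rules_met <- c(TRUE, TRUE, FALSE)   # efficacy rules: all must hold simultaneously
fut_rules_met <- c(FALSE, FALSE, TRUE)  # futility rules: any single one suffices

decision <- if (all(eff_rules_met)) {
  "GO"
} else if (any(fut_rules_met)) {
  "STOP"
} else {
  "CONTINUE"  # neither criterion met; at the final analysis this counts as unsuccessful
}
decision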

For completeness: the package also allows an initial cohort that does not have a placebo arm, uses a balanced allocation ratio, has no interim analysis and conducts its final analysis after 210 patients. We will briefly discuss this in the simulate_trial() section.

Overview of all simulation parameters

All simulation parameters (but not decision rule parameters), which will be discussed later in the context of the functions they are required for, are listed in the following table:

Parameter Definition                                     Domain
random Indicator whether response rates for the treatment arms should be drawn randomly (otherwise they are fixed for all starting and future cohorts) \(\{0,1\}\)
random_type Option for how treatment arm response rates should be derived. Options include direct specification (“absolute”), risk-differences (“risk_difference”), risk-ratios (“risk_ratio”) and odds-ratios (“odds_ratio”)
rr_comb Vector of \(k\) non-zero values representing possible response rates, risk-differences, risk-ratios or interaction effects for the combination treatment arms \((0,\infty)^k\)
prob_comb_rr Vector of \(k\) non-zero probability values representing the probabilities with which each of the values of \(rr_{comb}\) are drawn every time a new cohort enters the platform \((0,1)^k\)
rr_mono Vector of \(k\) non-zero values representing possible response rates, risk-differences, risk-ratios or interaction effects for the mono treatment arms \((0,\infty)^k\)
prob_mono_rr Vector of \(k\) non-zero probability values representing the probabilities with which each of the values of \(rr_{mono}\) are drawn every time a new cohort enters the platform \((0,1)^k\)
rr_back Vector of \(k\) non-zero values representing possible response rates, risk-differences, risk-ratios or interaction effects for the backbone treatment arms \((0,\infty)^k\)
prob_back_rr Vector of \(k\) non-zero probability values representing the probabilities with which each of the values of \(rr_{back}\) are drawn every time a new cohort enters the platform \((0,1)^k\)
rr_plac Vector of \(k\) non-zero values representing possible response rates, risk-differences, risk-ratios or interaction effects for the placebo treatment arms \((0,\infty)^k\)
prob_plac_rr Vector of \(k\) non-zero probability values representing the probabilities with which each of the values of \(rr_{plac}\) are drawn every time a new cohort enters the platform \((0,1)^k\)
sharing_type Level of backbone-monotherapy and placebo sharing across cohorts. Either no sharing (“cohort”), sharing only concurrent trial data (“concurrent”), using a dynamic borrowing approach (“dynamic”), or full sharing (“all”).
target_rr Vector containing required superiority of the combination treatment over the mono therapies (first element) and mono therapies over placebo (second element) for alternative hypothesis to hold. The third element of the vector specifies the relation, choices are risk-difference (1), risk-ratio (2) and odds-ratio (3) \(\mathbb{R}^3\)
rr_transform List of \(v\) possible association functions between interim surrogate and final outcome response rates (sample from multinomial distribution). Direct connection between these functions and sensitivity/specificity of interim surrogate in predicting final outcome described later \(\{ f: [0,1] \to [0,1] \}^v\)
prob_rr_transform Vector of \(v\) non-zero probability values representing the probabilities with which each of the values of \(rr_{transform}\) are drawn every time a new cohort enters the platform \((0,1)^v\)
cohorts_start Number of cohorts starting simultaneously at the beginning of the platform trial \(\mathbb{N}\)
cohort_random Probability for new cohort to be added at every time step (weighted later by number of patients per time step) \([0,1)\)
cohort_fixed Fixed timesteps after which a cohort will be included \(\mathbb{N}\)
cohorts_offset Minimum number of patients between inclusion of two new cohorts \(\mathbb{N}\)
cohorts_max Maximum number of cohorts that can be added throughout the trial \(\mathbb{N}\)
safety_prob Probability for individual cohorts to stop for safety reasons at every time step (weighted later by number of patients per time step) \([0,1)\)
sr_drugs_pos Number of cohorts with a successful final analysis after which the platform trial stops automatically and immediately \(\mathbb{N}\)
sr_pats Stopping rule for total number of patients; Default = cohorts_max * n_fin + error term based on randomization \(\mathbb{N}\)
sr_first_pos Stopping rule for first successful cohort; if TRUE, after the first cohort was found to be successful, no further cohorts will be included, but ongoing cohorts will finish evaluating, unless other stopping rules are reached first. Default is FALSE.
n_int Sample size after which interim analysis will be conducted (except for optional initial cohort) \(\mathbb{N}\)
n_fin Sample size per cohort at final \(\mathbb{N}\)
trial_struc Trial Structure: “all_plac” = all cohorts have placebo arm, “no_plac” = no cohort has placebo arm, “stop_post_mono” = all cohorts start with placebo arm, but after first mono has been declared successful, newly enrolled cohorts have no more placebo, “stop_post_back” = all cohorts start with placebo arm, but after first backbone has been declared successful, newly enrolled cohorts have no more placebo

General simulation assumptions

Cohort Structure

We assume a fixed structure for all cohorts being added to the platform over time. All cohorts consist of a placebo arm, two mono therapy arms and a combination arm of the two mono therapies. One of the two mono therapies is a common backbone across all cohorts. The only exception to this cohort structure is the initial cohort, which can run without a placebo arm.

Recruitment Speed

As a simplification and since no patient-level simulation is conducted, the recruitment speed is driven by the allocation ratios of all ongoing cohorts. To give an example: if at any point of the simulation there are three currently enrolling cohorts with the allocation ratios 3:3:1:1, 3:3:1:1 and 3:3:1:1, then in this simulation iteration 24 (3+3+1+1+3+3+1+1+3+3+1+1) patients will be simulated and placed on the study arms accordingly. The recruitment speed is therefore directly influenced by the number of currently ongoing cohorts and the allocation ratios within them.
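
To illustrate, a minimal sketch (plain base R, not package code) of how the number of patients per time step follows from the allocation ratios of the currently enrolling cohorts:

# Allocation ratios (combo : mono : backbone : placebo) of three enrolling cohorts
alloc_ratios <- list(c(3, 3, 1, 1),
                     c(3, 3, 1, 1),
                     c(3, 3, 1, 1))
# Patients simulated and allocated in this time step
patients_this_step <- sum(unlist(alloc_ratios))
patients_this_step  # 24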

Availability of Endpoints

For reasons of simplicity, in the simulations we assume the endpoints to be available immediately for every patient. To illustrate: Conducting an interim analysis after 50 patients have reached the surrogate interim endpoint or conducting an interim analysis after 50 patients have been included is the same in the simulations, even though in reality in such a trial the interim analysis would of course be conducted after 50 patients have reached the interim endpoint.

Design parameters and treatment efficacy assumptions

Both design parameters and treatment efficacy assumptions need to be chosen before running simulations. In the following subsection, we explain these parameters and their domains in detail.

Target Product Profile

When calculating the operating characteristics of a trial design and the chosen decision rules, we need a notion of whether a correct or an incorrect decision was made. We assess this via the target product profile, which specifies how much better (in terms of risk-difference, risk-ratio or odds-ratio) the combo needs to perform than both of the mono therapies, and how much better the mono therapies need to perform compared to placebo, for the alternative hypothesis to hold. If any of these conditions is not satisfied, the null hypothesis holds.
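
As a minimal illustration (plain R, not package code; the response rates are assumed values), the following sketch checks whether the alternative hypothesis holds for one cohort under a risk-difference target product profile of target_rr = c(0.10, 0.05, 1):

# Assumed true response rates for one cohort
rr <- c(comb = 0.40, mono = 0.25, back = 0.20, plac = 0.12)
target_rr <- c(0.10, 0.05, 1)  # combo margin, mono/backbone margin, relation = risk-difference

alternative_holds <- unname(
  (rr["comb"] - rr["mono"] >= target_rr[1]) &
  (rr["comb"] - rr["back"] >= target_rr[1]) &
  (rr["mono"] - rr["plac"] >= target_rr[2]) &
  (rr["back"] - rr["plac"] >= target_rr[2])
)
alternative_holds  # TRUE: a GO decision for this cohort would count as a true positive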

Cohort Allocation Ratios

Since we conduct interim analyses to possibly enrich certain arms, we need to specify two allocation ratios: An initial allocation ratio, which will be used up to and after an unsuccessful interim, and an allocation ratio in case the interim was successful. Please note that the allocation ratios also influence the recruitment speed.

Sample Sizes

Independently of the cohort allocation ratios, we perform interim analyses after a certain number of patients have reached the interim endpoint (except for the optional initial cohort which has no interim analysis). If the cohort is not stopped at interim for futility, we further recruit until a certain number of patients have been enrolled.

Additional Cohort Parameters

Recruitment happens in small steps, and this is our measure of elapsed time. At every such time step, with a certain probability that depends on the number of patients in this time step, we add a new cohort to the platform. This probability can therefore be interpreted as the probability to include a new cohort for every observed patient. We also allow a minimum number of patients that must be enrolled after the inclusion of a new cohort before another cohort can be added. Furthermore, the number of cohorts allowed to enter the platform can be limited. Alternatively, cohorts can be added at fixed time intervals.
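
A minimal sketch (plain R, not package code) of one plausible way to interpret the per-patient inclusion probability within a time step; the package's exact weighting may differ:

set.seed(1)
cohort_random      <- 0.02  # probability to add a new cohort per observed patient
patients_this_step <- 12    # e.g. sum of the allocation ratios of the ongoing cohorts
# Probability that at least one new cohort is triggered in this time step
prob_new_cohort <- 1 - (1 - cohort_random)^patients_this_step
add_cohort <- runif(1) < prob_new_cohort
c(prob_new_cohort = round(prob_new_cohort, 3), add_cohort = add_cohort)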

Safety Stopping

Recruitment happens in small steps. In every such time step, we allow a cohort to be stopped for safety reasons with a certain probability. This probability depends on the number of patients in this time step; it can therefore be interpreted as the probability of stopping for safety for every observed patient. For simplicity, this safety stopping is assumed to be independent of the interim and final endpoints.

Level of Data Sharing Across Cohorts

Related to the flexible decision rules, we can specify whether we want to share information on the backbone monotherapy and placebo arms across the study cohorts. We allow four options: 0) no sharing, using only data from the current cohort, 1) only sharing of concurrent data, i.e. if cohort 1 recruits from 2020-2022 and cohort 2 from 2021-2023, we will use the 2021 and 2022 data of cohort 1 in the analysis of cohort 2, 2) using a dynamic borrowing approach further described in the appendix, in which the degree of shared data increases with the homogeneity of the treatment efficacy, i.e. sharing more, if the treatment efficacy is similar and 3) full sharing of all available data.

Platform Stopping Rule

While in theory a platform trial could run perpetually, it makes sense to foresee stopping it after a certain number of successful cohorts. Note that this number can be set to Infinity, meaning all cohorts that enter the platform will always finish evaluating. Another option is to set a maximum sample size, after which recruitment to all available cohorts will be stopped.

Treatment efficacy assumptions

In addition to specifying the trial design parameters, treatment efficacy assumptions need to be made. For every new cohort that enters the platform trial, we allow the response rates of the respective arms to be drawn independently from previously specified discrete probability distributions, which should capture the anticipated heterogeneity of treatment effects. Several different options are possible: either response rates are drawn directly, or risk-ratios or risk-differences are drawn with placebo as the reference arm. Furthermore, odds-ratios can be drawn randomly for the two mono therapies and subsequently an interaction effect is randomly drawn for the combination therapy.
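
A minimal sketch (plain R, not package code) of drawing a response rate for a newly entering cohort from such a discrete distribution:

set.seed(42)
rr_comb      <- c(0.35, 0.40, 0.45)  # possible combination response rates
prob_comb_rr <- c(0.4, 0.4, 0.2)     # probabilities with which they are drawn
# Response rate assigned to the combination arm of the new cohort
rr_comb_new_cohort <- sample(rr_comb, size = 1, prob = prob_comb_rr)
rr_comb_new_cohort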

Sensitivity and Specificity of Surrogate Interim Endpoint

When simulating the binary interim and final endpoints for each patient, we randomly draw from a multinomial distribution with four possible outcome pairs: 0/0, 0/1, 1/0 and 1/1, where the first number corresponds to the interim outcome and the second number corresponds to the final outcome. The probabilities with which these outcomes are drawn depend on both the true response rate for the final endpoint (which we will denote by \(rr\), but which depends on the cohort and arm) and the sensitivity (\(sens\)) and specificity (\(spec\)) of the interim endpoint in predicting the final endpoint. The four probabilities are as follows: \((1-rr)*spec\) for the outcome 0/0, \((1-rr)*(1-spec)\) for the outcome 1/0, \(rr*(1-sens)\) for the outcome 0/1 and \(rr*sens\) for the outcome 1/1. The sensitivity and specificity can differ across cohorts and are drawn randomly every time a new cohort enters the platform trial.
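
A minimal sketch (plain R, not package code; rr, sens and spec are assumed values) of simulating the paired interim/final outcomes for one arm via the multinomial distribution described above:

set.seed(7)
rr   <- 0.40  # true response rate for the final endpoint
sens <- 0.85  # sensitivity of the interim surrogate
spec <- 0.85  # specificity of the interim surrogate
probs <- c("0/0" = (1 - rr) * spec,
           "1/0" = (1 - rr) * (1 - spec),
           "0/1" = rr * (1 - sens),
           "1/1" = rr * sens)
# Counts of the four interim/final outcome pairs for 100 patients
rmultinom(1, size = 100, prob = probs)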

make_decision_trial()

The following section explains how to use the make_decision_trial() function by explaining the various input parameters and giving some examples. As the name suggests, this function is used to evaluate the trial results of the cohort platform trial and make a GO, STOP or EVALUATE decision with respect to given decision criteria.

General

Within the simulate_trial() function (see below), the trial results are constantly updated. A function is therefore needed that takes the current trial results and checks whether the decision criteria are met or not; the make_decision_trial() function facilitates this. In general, the function requires the user to specify the exact decision criteria that should be used for superiority and futility for all four possible comparisons: Combo vs. Mono, Combo vs. Backbone, Mono vs. Placebo and Backbone vs. Placebo, depending on whether the conducted analysis is an interim analysis or the final analysis.

Variables

Parameter Definition
res_list List item containing individual cohort trial results so far in a format used by the other functions in this package
eli_cohort Current cohort that should be evaluated
test_strat Testing strategy used; 1 = Combo vs. both monos, 2 = 1 + Add-on Mono vs. Placebo, 3 = 2 + Backbone mono vs. placebo
back_type What backbone data should be used for comparisons; Default is “all”. Other options are “concurrent”, “dynamic” or “cohort”.
plac_type Which placebo data should be used for arm comparisons; Default is “all”. Other options are “concurrent”, “dynamic” or “cohort”.
w If dynamic borrowing, what is the prior choice for w. Default is 0.5.
beta_prior Prior parameter for all Beta Distributions. Default is 0.5.
Bayes_Sup List of matrices with rows corresponding to number of multiple Bayesian posterior two-arm combination criteria for superiority
Bayes_Fut List of matrices with rows corresponding to number of multiple Bayesian posterior two-arm combination criteria for futility
Bayes_SA_Sup List of matrices with rows corresponding to number of multiple Bayesian posterior single-arm combination criteria for superiority
Bayes_SA_Fut List of matrices with rows corresponding to number of multiple Bayesian posterior single-arm combination criteria for futility
P_Sup List with sublists corresponding to number of multiple frequentist test-based combination criteria for superiority
P_Fut List with sublists corresponding to number of multiple frequentist test-based combination criteria for futility
Est_Sup_Fut List with sublists corresponding to number of multiple point estimate based combination criteria for superiority and futility
CI_Sup_Fut List with sublists corresponding to number of multiple confidence interval based combination criteria for superiority and futility
interim Is the analysis conducted an interim or a final analysis?
initial_coh Indicator whether passed on cohort is the initial cohort (Then only the first two decision rules will be used).
... Further arguments inherited from upper layer functions

The requirements for all lists of decision rules are the same: each list needs to consist of the respective decision rule's elements (described below), i.e. matrices for Bayes_Sup, Bayes_Fut, Bayes_SA_Sup and Bayes_SA_Fut, and lists for P_Sup, P_Fut, Est_Sup_Fut and/or CI_Sup_Fut.

For each of the decision rules we need to follow a certain structure:

  • For Bayes_Sup, the number of rows of the matrix determines how many criteria need to be simultaneously true to declare superiority. The first column refers to the required superiority margin and the second column to the required confidence. The third column, which is used for the concept of a “promising” drug, gives the threshold at which we declare a drug “promising” in case we did not declare it superior. To set a superiority decision rule of the form: GO, if \(P(X>Y+0.1)>0.5\), use a matrix with one row and three columns:
B1 <- matrix(nrow = 1, ncol = 3)
B1[1,] <- c(0.10, 0.50, 1.00)
  • For Bayes_Fut, the number of rows of the matrix determines how many criteria need to be simultaneously true to declare futility. The first column refers to the required superiority margin and the second column to the required confidence. To set a futility decision rule of the form: STOP, if \(P(X>Y+0.1)<0.5\), use a matrix with one row and two columns:
B2 <- matrix(nrow = 1, ncol = 2)
B2[1,] <- c(0.10, 0.50)
  • For Bayes_SA_Sup, the number of rows of the matrix determines how many criteria need to be simultaneously true to declare superiority. The first column refers to the required value and the second column to the required confidence. The third column, which is used for the concept of a “promising” drug, gives the threshold at which we declare a drug “promising” in case we did not declare it superior. To set a superiority decision rule of the form: GO, if \(P(X>0.1)>0.5\), use a matrix with one row and three columns:
B3 <- matrix(nrow = 1, ncol = 3)
B3[1,] <- c(0.10, 0.50, 1.00)
  • For Bayes_SA_Fut, the number of rows of the matrix determines how many criteria need to be simultaneously true to declare futility. The first column refers to the required value and the second column to the required confidence. To set a futility decision rule of the form \(P(X>0.1)<0.5\), use a matrix with one row and two columns:
B4 <- matrix(nrow = 1, ncol = 2)
B4[1,] <- c(0.10, 0.50)
  • For P_Sup, the number of list elements determines how many criteria need to be simultaneously true to declare superiority. Each list element needs to follow the following structure: The first element passes the test function. This function needs to take a 2x2 table as input and return a list with at least one element “p.value”. The second element passes the significance level to declare superiority, the third element passes the “promising” significance level which is used for the concept of a “promising” drug. The fourth element decides whether a Bonferroni correction (assuming two tests, therefore taking half of the specified significance levels) should be used. To set a superiority decision rule of the form: GO, if \(p<0.025\), with p stemming from a one-sided Chi-Square test without continuity correction using no multiplicity correction, use a list:
P1 <- list(list(testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE), p_sup = 0.025, p_prom = 0, p_adj = "none"))
  • For P_Fut, the number of list elements determines how many criteria need to be simultaneously true to declare futility. Each list element needs to follow the following structure: The first element passes the test function. This function needs to take a 2x2 table as input and return a list with at least one element “p.value”. The second element passes the significance level to declare futility. The third element decides whether a Bonferroni correction (assuming two tests, therefore taking half of the specified significance levels) should be used. To set a futility decision rule of the form: STOP, if \(p \geq 0.5\), with p stemming from a one-sided Chi-Square test without continuity correction using no multiplicity correction, use a list:
P2 <- list(list(testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE), p_fut = 0.5, p_adj = "none"))
  • For Est_Sup_Fut, the number of list elements determines how many criteria need to be simultaneously true to declare superiority/futility. Each list element needs to follow the following structure: The first element passes the type of the point estimate. Options are “RR” (risk ratio) and “OR” (odds ratio). The second element passes the threshold to declare superiority. The third element passes the threshold to declare futility. The fourth element passes the promising threshold which is used for the concept of a “promising” drug. To set a rule of the form: GO, if RR \(\geq 1.2\) and STOP, if RR \(\leq 1\), use a list:
P3 <- list(list(est = "RR", p_hat_sup = 1.2, p_hat_fut = 1, p_hat_prom = Inf))
  • For CI_Sup_Fut, the number of list elements determines how many criteria need to be simultaneously true to declare superiority/futility. Each list element needs to follow the following structure: The first element passes the type of the point estimate. Options are “RR” (risk ratio) and “OR” (odds ratio). The second element passes the desired coverage probability of the confidence interval. The third element passes the threshold for the upper confidence bound to declare superiority. The fourth element passes the threshold for the lower confidence bound to declare futility. The fifth element passes the promising threshold for the upper confidence bound which is used for the concept of a “promising” drug. To set a rule of the form: GO, if upper bound of 95% CI for RR \(\geq 1.2\) and STOP, if lower bound of 95% CI for RR \(\leq 1\), use a list:
P4 <- list(list(est = "RR", ci = 0.95, p_hat_lower_sup = 1.2, p_hat_upper_fut = 1, p_hat_lower_prom = Inf))

Examples

The decision rules submitted to the simulate_trial() function follow a strict structure: They need to be a list, consisting of two sublists, the first for the interim analysis decision rules and the second for the final analysis decision rules.

Bayesian

Imagine we want to specify the following Bayesian decision criteria, whereby \(\pi_x\) refers to the response rate of treatment \(x\):

At interim:

\[ \begin{align*} \text{GO, if } & (P(\pi_{Comb} > \pi_{Mono1} + 0.05) > 0.8) \ \wedge \\ & (P(\pi_{Comb} > \pi_{Mono2} + 0.05) > 0.8) \ \wedge \\ & (P(\pi_{Mono1} > \pi_{SoC}) > 0.8) \ \wedge \\ & (P(\pi_{Mono2} > \pi_{SoC}) > 0.8) \\ \\ \text{STOP, if } & (P(\pi_{Comb} > \pi_{Mono1}) < 0.6) \ \wedge \\ & (P(\pi_{Comb} > \pi_{Mono2}) < 0.6) \ \wedge \\ & (P(\pi_{Mono1} > \pi_{SoC}) < 0.6) \ \wedge \\ & (P(\pi_{Mono2} > \pi_{SoC}) < 0.6) \\ \\ \text{CONTINUE, } & \text{otherwise} \\ \end{align*} \]

At final:

\[ \begin{align*} \text{GO, if } & (P(\pi_{Comb} > \pi_{Mono1} + 0.10) > 0.8) \ \wedge \\ & (P(\pi_{Comb} > \pi_{Mono2} + 0.10) > 0.8) \ \wedge \\ & (P(\pi_{Mono1} > \pi_{SoC} + 0.05) > 0.8) \ \wedge \\ & (P(\pi_{Mono2} > \pi_{SoC} + 0.05) > 0.8) \\ \\ \text{STOP, } & \text{otherwise} \\ \end{align*} \]

We could achieve this by specifying:


# Comparison Combo vs Mono Interim Analysis
Bayes_Sup1_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup1_Int[1,] <- c(0.05, 0.80, 1.00)
# Comparison Combo vs Backbone Interim Analysis
Bayes_Sup2_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup2_Int[1,] <- c(0.05, 0.80, 1.00)
# Comparison Mono vs Placebo Interim Analysis
Bayes_Sup3_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup3_Int[1,] <- c(0.00, 0.80, 1.00)
# Comparison Backbone vs Placebo Interim Analysis
Bayes_Sup4_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup4_Int[1,] <- c(0.00, 0.80, 1.00)

# Comparison Combo vs Mono Final Analysis
Bayes_Sup1_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup1_Fin[1,] <- c(0.10, 0.80, 1.00)
# Comparison Combo vs Backbone Final Analysis
Bayes_Sup2_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup2_Fin[1,] <- c(0.10, 0.80, 1.00)
# Comparison Mono vs Placebo Final Analysis
Bayes_Sup3_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup3_Fin[1,] <- c(0.05, 0.80, 1.00)
# Comparison Backbone vs Placebo Final Analysis
Bayes_Sup4_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup4_Fin[1,] <- c(0.05, 0.80, 1.00)

# Wrapup in package format
Bayes_Sup <- list(list(Bayes_Sup1_Int, Bayes_Sup2_Int, Bayes_Sup3_Int, Bayes_Sup4_Int),
                  list(Bayes_Sup1_Fin, Bayes_Sup2_Fin, Bayes_Sup3_Fin, Bayes_Sup4_Fin))

# Comparison Combo vs Mono
Bayes_Fut1 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut1[1,] <- c(0.00, 0.60)
# Comparison Combo vs Backbone
Bayes_Fut2 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut2[1,] <- c(0.00, 0.60)
# Comparison Mono vs Placebo
Bayes_Fut3 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut3[1,] <- c(0.00, 0.60)
# Comparison Backbone vs Placebo
Bayes_Fut4 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut4[1,] <- c(0.00, 0.60)
Bayes_Fut <- list(list(Bayes_Fut1, Bayes_Fut2, Bayes_Fut3, Bayes_Fut4),
                  list(Bayes_Fut1, Bayes_Fut2, Bayes_Fut3, Bayes_Fut4))

Frequentist

Imagine we want to specify the following frequentist decision rules:

  • At interim, go if the one-sided p-value from a Chi-Square Test without continuity correction comparing the combination response rate against the mono response rate is below 0.15.

  • At final, declare superiority if:

    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the combination response rate against the mono response rate is below 0.05.
    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the combination response rate against the backbone response rate is below 0.05.
    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the mono response rate against the placebo response rate is below 0.05.
    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the backbone response rate against the placebo response rate is below 0.05.

We could do this by specifying:


# Comparison Combo vs Mono Interim Analysis
P_Sup1_Int <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.15, p_prom = 0))
# Comparison Combo vs Backbone Interim Analysis
P_Sup2_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Mono vs Placebo Interim Analysis
P_Sup3_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Backbone vs Placebo Interim Analysis
P_Sup4_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))

# Comparison Combo vs Mono Final Analysis
P_Sup1_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.05, p_prom = 0, p_adj = "B"))
# Comparison Combo vs Backbone Final Analysis
P_Sup2_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.05, p_prom = 0, p_adj = "B"))
# Comparison Mono vs Placebo Final Analysis
P_Sup3_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.05, p_prom = 0, p_adj = "B"))
# Comparison Backbone vs Placebo Final Analysis
P_Sup4_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.05, p_prom = 0, p_adj = "B"))

# Wrapup in package format
P_Sup <- list(list(P_Sup1_Int, P_Sup2_Int, P_Sup3_Int, P_Sup4_Int),
              list(P_Sup1_Fin, P_Sup2_Fin, P_Sup3_Fin, P_Sup4_Fin))

simulate_trial()

The following section explains how to use the simulate_trial() function by explaining the various input parameters and giving some examples.

General

The simulate_trial() function is used to simulate the cohort platform trial, using certain assumptions that reflect the current beliefs on the future trial structure and design. Those are:

  • The trial is based on cohorts running in parallel. Each cohort consists of up to four arms: Combo therapy, Mono therapy, Backbone and Placebo.

  • For each patient, two types of responses are defined: 1) Based on a biomarker (data available at interim) and 2) based on histology (available at final).

  • At both interim (for early efficacy or futility) and final analyses, up to four types of comparisons can be conducted: Combo vs. Mono, Combo vs. Backbone, Mono vs. Placebo and Backbone vs Placebo. At both analyses, it is assumed that data for all currently enrolled patients is available.

  • Over time, a limited number of new cohorts can randomly enter the trial. The trial stops if either a pre-specified number of successful combo therapies has been identified or all ongoing cohorts have finished evaluating.

In terms of recruitment, the simulate_trial() function works as follows: Patients are recruited to all active cohorts in parallel, according to the allocation ratios. At a particular point in time, the function checks which cohorts are active and reads their allocation ratios. If, for example, the allocation ratios are 2:2:1:1 and 2:2:1:1, 12 patients (2+2+1+1+2+2+1+1) will be enrolled in the next iteration and their responses will be determined. Afterwards, the function checks whether any cohort is ready for its interim or final analysis, which cohorts should stop for safety, and whether or not to add a new cohort. The next iteration starts, unless any trial stopping rules have been reached.

Variables

The variables used in this function are:

Parameter Definition
n_int Sample size per cohort to conduct interim analysis
n_fin Sample size per cohort at final
trial_struc Trial Structure: “all_plac” = all cohorts have placebo arm, “no_plac” = no cohort has placebo arm, “stop_post_mono” = all cohorts start with placebo arm, but after first mono has been declared successful, newly enrolled cohorts have no more placebo, “stop_post_back” = all cohorts start with placebo arm, but after first backbone has been declared successful, newly enrolled cohorts have no more placebo
cohorts_start Number of cohorts to start the platform with (only relevant if initial = FALSE)
rr_comb Response rates of combination therapies
rr_back Response rates of backbone arms
rr_mono Response rate of mono therapies
rr_plac Response rate of the placebo
rr_transform Function transforming all the above response rates to a vector of four probabilities for the multinomial simulation. First element is probability of both failures. Second element is probability of biomarker success and histology failure. Third element is probability of biomarker failure and histology success. Fourth element is probability of both success.
random Should the response rates of the arms be randomly drawn? Default is FALSE.
random_type How should the response rates be drawn randomly? Options are: “absolute”: Specify absolute response rates that will be drawn with a certain probability. “risk_difference”: Specify absolute response rates for placebo which will be drawn randomly, plus specify vectors for absolute treatment effects of mono therapies over placebo and for combo over the mono therapies (will use only rr_mono). “risk_ratio”: Specify absolute response rates for placebo which will be drawn randomly, plus specify vectors for relative treatment effects of mono therapies over placebo and for combo over the mono therapies (will use only rr_mono). “odds_ratios”: Specify the response rate for placebo, specify odds-ratios for the mono therapies (via rr_back and rr_mono) and respective probabilities. On top, specify the interaction for the combination therapy via rr_comb with prob_comb_rr. Set: odds_combo = odds_plac * or_mono1 * or_mono2 * rr_comb. If rr_comb > 1 -> synergistic, if rr_comb = 1 -> additive, if rr_comb < 1 -> antagonistic (a numerical sketch follows after this table). Default is NULL.
prob_comb_rr If random == TRUE, what are the probabilities with which the elements of rr_comb should be drawn?
prob_back_rr If random == TRUE, what are the probabilities with which the elements of rr_back should be drawn?
prob_mono_rr If random == TRUE, what are the probabilities with which the elements of rr_mono should be drawn?
prob_plac_rr If random == TRUE, what are the probabilities with which the elements of rr_plac should be drawn?
prob_rr_transform If random == TRUE, what are the probabilities with which the elements of rr_transform should be drawn?
stage_data Should individual stage data be passed along? Default is TRUE
cohort_random If not NULL, indicates that new arms/cohorts should be randomly started. For every patient, there is a cohort_random probability that a new cohort will be started.
cohort_fixed If not NULL, fixed timesteps after which a cohort will be included.
cohorts_max Maximum number of cohorts that are allowed to be added throughout the trial
cohorts_offset Minimum number of patients between adding new cohorts
sr_drugs_pos Stopping rule for successful experimental arms; Default = 1
sr_pats Stopping rule for total number of patients; Default = cohorts_max * n_fin + error term based on randomization
sr_first_pos Stopping rule for first successful cohort; if TRUE, after the first cohort was found to be successful, no further cohorts will be included, but ongoing cohorts will finish evaluating, unless other stopping rules are reached first. Default is FALSE.
target_rr What is the target to declare a combo positive? Vector of length 3 giving the thresholds by which 1) the combo needs to be better than the monos and 2) the monos need to be better than the placebo. The third element of the vector specifies the relation; choices are 1 == “risk-difference”, 2 == “risk-ratio” and 3 == “odds-ratio”. Default: c(0, 0, 1)
sharing_type Level of backbone-monotherapy and placebo sharing across cohorts. Either no sharing (“cohort”), sharing only concurrent trial data (“concurrent”), using a dynamic borrowing approach (“dynamic”), or full sharing (“all”).
safety_prob Probability for a safety stop per patient
... Further arguments to be passed to decision function, such as decision making criteria
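
The “odds_ratios” option can be illustrated with a small numerical sketch (plain R, not package code; the chosen values are assumptions, not defaults):

rr_plac <- 0.12  # placebo response rate
or_back <- 1.8   # odds-ratio backbone mono vs placebo (drawn via rr_back)
or_mono <- 1.5   # odds-ratio add-on mono vs placebo (drawn via rr_mono)
rr_comb <- 1.2   # interaction: > 1 synergistic, = 1 additive, < 1 antagonistic

odds_plac  <- rr_plac / (1 - rr_plac)
odds_combo <- odds_plac * or_back * or_mono * rr_comb
p_combo    <- odds_combo / (1 + odds_combo)  # implied combination response rate
p_combo  # approximately 0.31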

Examples

Below we provide two examples of how a certain design can be facilitated with the simulate_trial() function.

Example 1

Imagine we want to use the same Bayesian decision rules as in the make_decision_trial() examples above.

At interim:

\[ \begin{align*} \text{GO, if } & (P(\pi_{Comb} > \pi_{Mono1} + 0.05) > 0.8) \ \wedge \\ & (P(\pi_{Comb} > \pi_{Mono2} + 0.05) > 0.8) \ \wedge \\ & (P(\pi_{Mono1} > \pi_{SoC}) > 0.8) \ \wedge \\ & (P(\pi_{Mono2} > \pi_{SoC}) > 0.8) \\ \\ \text{STOP, if } & (P(\pi_{Comb} > \pi_{Mono1}) < 0.6) \ \wedge \\ & (P(\pi_{Comb} > \pi_{Mono2}) < 0.6) \ \wedge \\ & (P(\pi_{Mono1} > \pi_{SoC}) < 0.6) \ \wedge \\ & (P(\pi_{Mono2} > \pi_{SoC}) < 0.6) \\ \\ \text{CONTINUE, } & \text{otherwise} \\ \end{align*} \]

At final:

\[ \begin{align*} \text{GO, if } & (P(\pi_{Comb} > \pi_{Mono1} + 0.10) > 0.8) \ \wedge \\ & (P(\pi_{Comb} > \pi_{Mono2} + 0.10) > 0.8) \ \wedge \\ & (P(\pi_{Mono1} > \pi_{SoC} + 0.05) > 0.8) \ \wedge \\ & (P(\pi_{Mono2} > \pi_{SoC} + 0.05) > 0.8) \\ \\ \text{STOP, } & \text{otherwise} \\ \end{align*} \]

Furthermore, we want:

  • A probability to stop for safety for every patient of 0.01%.

  • Both the placebo and the backbone data should be pooled across cohorts for all comparisons.

  • In order for a positive decision to count as a true positive, we want the underlying response rate of the combination therapy to be at least 10 percentage points better than both mono and backbone, and we want the mono and backbone to be at least 5 percentage points better than placebo.

  • A maximum of 5 cohorts in total should be evaluated in the platform, and for every patient we want the probability to start a new cohort to be 2%. We want the trial to stop immediately after one successful combination therapy has been identified. The interim analysis should be conducted for every cohort after 50 patients and the final analysis after 100 patients.

  • In terms of response rates of the different arms, we want to specify the following discrete probability distributions:

    • For the combo: P(RR=0.35) = 0.4, P(RR=0.40) = 0.4, P(RR=0.45) = 0.2.
    • For the mono: P(RR=0.15) = 0.2, P(RR=0.20) = 0.4, P(RR=0.25) = 0.4.
    • For backbone: P(RR=0.15) = 0.3, P(RR=0.20) = 0.4, P(RR=0.25) = 0.3.
    • For placebo: P(RR=0.10) = 0.25, P(RR=0.12) = 0.5, P(RR=0.14) = 0.25.
  • When simulating the biomarker and histology responses, we want to transform the true histology response rate assigned to this arm (called “x”) using the following function: \(P(Bio=1, Hist=1) = Sensitivity*x, P(Bio=0, Hist=0) = Specificity*(1 - x)\), \(P(Bio=1, Hist=0) = (1-Specificity)*(1-x), P(Bio=0, Hist=1) = (1-Sensitivity)*x\). In order to account for potentially different correlation structures with respect to the combination therapy in use or the use of different biomarkers in different cohorts, we want the sensitivity and specificity to vary between 75% and 85%, each value drawn with a probability of 0.5.

We can facilitate this using the following code:


# Set decision rules ----------------

# Comparison Combo vs Mono Interim Analysis
Bayes_Sup1_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup1_Int[1,] <- c(0.05, 0.80, 1.00)
# Comparison Combo vs Backbone Interim Analysis
Bayes_Sup2_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup2_Int[1,] <- c(0.05, 0.80, 1.00)
# Comparison Mono vs Placebo Interim Analysis
Bayes_Sup3_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup3_Int[1,] <- c(0.00, 0.80, 1.00)
# Comparison Backbone vs Placebo Interim Analysis
Bayes_Sup4_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup4_Int[1,] <- c(0.00, 0.80, 1.00)

# Comparison Combo vs Mono Final Analysis
Bayes_Sup1_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup1_Fin[1,] <- c(0.10, 0.80, 1.00)
# Comparison Combo vs Backbone Final Analysis
Bayes_Sup2_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup2_Fin[1,] <- c(0.10, 0.80, 1.00)
# Comparison Mono vs Placebo Final Analysis
Bayes_Sup3_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup3_Fin[1,] <- c(0.05, 0.80, 1.00)
# Comparison Backbone vs Placebo Final Analysis
Bayes_Sup4_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup4_Fin[1,] <- c(0.05, 0.80, 1.00)

# Wrapup in package format
Bayes_Sup <- list(list(Bayes_Sup1_Int, Bayes_Sup2_Int, Bayes_Sup3_Int, Bayes_Sup4_Int),
                  list(Bayes_Sup1_Fin, Bayes_Sup2_Fin, Bayes_Sup3_Fin, Bayes_Sup4_Fin))

# Comparison Combo vs Mono
Bayes_Fut1 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut1[1,] <- c(0.00, 0.60)
# Comparison Combo vs Backbone
Bayes_Fut2 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut2[1,] <- c(0.00, 0.60)
# Comparison Mono vs Placebo
Bayes_Fut3 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut3[1,] <- c(0.00, 0.60)
# Comparison Backbone vs Placebo
Bayes_Fut4 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut4[1,] <- c(0.00, 0.60)
Bayes_Fut <- list(list(Bayes_Fut1, Bayes_Fut2, Bayes_Fut3, Bayes_Fut4),
                  list(Bayes_Fut1, Bayes_Fut2, Bayes_Fut3, Bayes_Fut4))

# Furthermore set stopping for safety probability for every patient
safety_prob <- 0.0001

# For comparisons with Backbone and Placebo, how should data be pooled?
sharing_type <- "all"

# Which differences in response rates between 1:Combo vs. Mono/Back and 2:Mono/Back vs Plac should be considered a success?
target_rr <- c(0.10, 0.05, 1)

# Set cohort rules ------------

# What is the interim and final sample size for the cohorts?
n_int <- 50
n_fin <- 100

# What is the maximum number of cohorts that can be included in the platform (including starting cohorts)?
cohorts_max <- 5

# With what probability should a new cohort be added for every patient?
cohort_random <- 0.02
cohort_fixed  <- NULL

# Set simulation rules ----------

# Should response rates for the arms and the correlation structures be drawn randomly?
random <- TRUE

# We specify the absolute response rates
random_type <- "absolute"

# What are the possible response rates for the combination therapies and with what probabilities should they be drawn?
rr_comb <- c(0.35, 0.40, 0.45)
prob_comb_rr <- c(0.4, 0.4, 0.2)
# What are the possible response rates for the mono therapies and with what probabilities should they be drawn?
rr_mono <- c(0.15, 0.20, 0.25)
prob_mono_rr <- c(0.2, 0.4, 0.4)
# What are the possible response rates for the backbone therapies and with what probabilities should they be drawn?
rr_back <- c(0.15, 0.20, 0.25)
prob_back_rr <- c(0.3, 0.4, 0.3)
# What are the possible response rates for the placebos and with what probabilities should they be drawn?
rr_plac <- c(0.10, 0.12, 0.14)
prob_plac_rr <- c(0.25, 0.5, 0.25)

# How should response rates be transformed to four probabilities in multinomial sampling where the options are:
# 1) Biomarker:0, Histology:0, 2) Biomarker:1, Histology:0, 3) Biomarker:0, Histology:1, 4) Biomarker:1, Histology:1
# Choose values such that: 1) Specificity*(1-x), 2) (1-Specificity)*(1-x), 3) (1-Sensitivity)*x, 4) Sensitivity*x
# (Sensitivity and Specificity of Biomarker in predicting Histology)
# In the following example therefore two cases, each with 50% probability:
# 1) Sensitivity = Specificity = 75%
# 2) Sensitivity = Specificity = 85%
rr_transform <- list(
  function(x) {return(c(0.75*(1 - x), (1-0.75)*(1-x), (1-0.75)*x, 0.75*x))},
  function(x) {return(c(0.85*(1 - x), (1-0.85)*(1-x), (1-0.85)*x, 0.85*x))}
)
prob_rr_transform <- c(0.5, 0.5)

# After how many identified successful combos should the trial stop?
sr_drugs_pos <- 1

# Should individual arm data be saved?
stage_data <- TRUE

We can now run the simulation using:

set.seed(12)
run1 <- simulate_trial(
  n_int = n_int, n_fin = n_fin, rr_comb = rr_comb, rr_mono = rr_mono, rr_back = rr_back, 
  rr_plac = rr_plac, rr_transform = rr_transform, random = random, random_type = random_type, 
  prob_comb_rr = prob_comb_rr, prob_mono_rr = prob_mono_rr, prob_back_rr = prob_back_rr,
  prob_plac_rr = prob_plac_rr, stage_data = stage_data, cohort_random = cohort_random, 
  cohorts_max = cohorts_max, sr_drugs_pos = sr_drugs_pos, target_rr = target_rr, 
  sharing_type = sharing_type, safety_prob = safety_prob, Bayes_Sup = Bayes_Sup,
  Bayes_Fut = Bayes_Fut, prob_rr_transform = prob_rr_transform, cohort_fixed = cohort_fixed
)

Example 2

Imagine we want to specify the following frequentist decision rules:

  • At final, declare superiority if:
    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the combination response rate against the mono response rate is below 0.10.
    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the combination response rate against the backbone response rate is below 0.10.
    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the mono response rate against the placebo response rate is below 0.10.
    • The one-sided, Bonferroni-corrected p-value from a Chi-Square Test without continuity correction comparing the backbone mono response rate against the placebo response rate is below 0.10.

Furthermore, we want:

  • A probability to stop for safety for every patient of 0.

  • Both the placebo and the backbone data should be dynamically shared across cohorts for all comparisons.

  • In order for a positive decision to count as a true positive, we want the underlying response rate of the combination therapy to be at least 10 percentage points better than both mono and backbone, and we want the mono and backbone to be at least 5 percentage points better than placebo.

  • A maximum of 5 cohorts in total should be evaluated in the platform, and for every patient we want the probability to start a new cohort to be 3%, but with an offset of at least 20 patients between cohort inclusions. We want the trial to stop after all cohorts have been evaluated.

  • In terms of response rates of the different arms, we want to specify the following discrete probability distributions:

    • For the combo: P(RR=0.35) = 0.4, P(RR=0.40) = 0.4, P(RR=0.45) = 0.2.
    • For the mono: P(RR=0.15) = 0.2, P(RR=0.20) = 0.4, P(RR=0.25) = 0.4.
    • For backbone: P(RR=0.15) = 0.3, P(RR=0.20) = 0.4, P(RR=0.25) = 0.3.
    • For placebo: P(RR=0.10) = 0.25, P(RR=0.12) = 0.5, P(RR=0.14) = 0.25.
  • When simulating the biomarker and histology responses, we want to transform the true histology response rate assigned to this arm (called “x”) using the following function: \(P(Bio=1, Hist=1) = Sensitivity*x, P(Bio=0, Hist=0) = Specificity*(1 - x)\), \(P(Bio=1, Hist=0) = (1-Specificity)*(1-x), P(Bio=0, Hist=1) = (1-Sensitivity)*x\). In order to account for potentially different correlation structures with respect to the combination therapy in use or the use of different biomarkers in different cohorts, we want the sensitivity and specificity to vary between 75% and 85%, each value drawn with a probability of 0.5.

We can facilitate this using the following code:


# Set decision rules ----------------

# Comparison Combo vs Mono Interim Analysis
P_Sup1_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Combo vs Backbone Interim Analysis
P_Sup2_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Mono vs Placebo Interim Analysis
P_Sup3_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Backbone vs Placebo Interim Analysis
P_Sup4_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))

# Comparison Combo vs Mono Final Analysis
P_Sup1_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.10, p_prom = 0, p_adj = "B"))
# Comparison Combo vs Backbone Final Analysis
P_Sup2_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.10, p_prom = 0, p_adj = "B"))
# Comparison Mono vs Placebo Final Analysis
P_Sup3_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.10, p_prom = 0, p_adj = "B"))
# Comparison Backbone vs Placebo Final Analysis
P_Sup4_Fin <- list(list(
testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
p_sup = 0.10, p_prom = 0, p_adj = "B"))

# Wrapup in package format
P_Sup <- list(list(P_Sup1_Int, P_Sup2_Int, P_Sup3_Int, P_Sup4_Int),
              list(P_Sup1_Fin, P_Sup2_Fin, P_Sup3_Fin, P_Sup4_Fin))

# Furthermore set stopping for safety probability for every patient
safety_prob <- 0

# For comparisons with Backbone and Placebo, how should data be pooled?
sharing_type <- "dynamic"

# Which differences in response rates between 1:Combo vs. Mono/Back and 2:Mono/Back vs Plac should be considered a success?
target_rr <- c(0.10, 0.05, 1)

# Set cohort rules ------------

# What is the interim sample size for the additional cohorts?
n_int <- 100
# What should be the final sample size?
n_fin <- 200

# What is the maximum number of cohorts that can be included in the platform (including starting cohorts)?
cohorts_max <- 5

# With what probability should a new cohort be added for every patient?
cohort_random <- 0.03
cohort_fixed  <- NULL

# What is the minimum number of patients between the addition of two new cohorts?
cohort_offset <- 20

# Finish evaluating all cohorts
sr_drugs_pos <- Inf

# Set simulation rules ----------

# Should response rates for the arms and the correlation structures be drawn randomly?
random <- TRUE

# We specify the absolute response rates
random_type <- "absolute"

# What are the possible response rates for the combination therapies and with what probabilities should they be drawn?
rr_comb <- c(0.35, 0.40, 0.45)
prob_comb_rr <- c(0.4, 0.4, 0.2)
# What are the possible response rates for the mono therapies and with what probabilities should they be drawn?
rr_mono <- c(0.15, 0.20, 0.25)
prob_mono_rr <- c(0.2, 0.4, 0.4)
# What are the possible response rates for the backbone therapies and with what probabilities should they be drawn?
rr_back <- c(0.15, 0.20, 0.25)
prob_back_rr <- c(0.3, 0.4, 0.3)
# What are the possible response rates for the placebos and with what probabilities should they be drawn?
rr_plac <- c(0.10, 0.12, 0.14)
prob_plac_rr <- c(0.25, 0.5, 0.25)

# How should response rates be transformed to four probabilities in multinomial sampling where the options are:
# 1) Biomarker:0, Histology:0, 2) Biomarker:1, Histology:0, 3) Biomarker:0, Histology:1, 4) Biomarker:1, Histology:1
# Choose values such that: 1) Specificity*(1-x), 2) (1-Specificity)*(1-x), 3) (1-Sensitivity)*x, 4) Sensitivity*x
# (Sensitivity and Specificity of Biomarker in predicting Histology)
# In the following example therefore two cases, each with 50% probability:
# 1) Sensitivity = Specificity = 75%
# 2) Sensitivity = Specificity = 85%
rr_transform <- list(
  function(x) {return(c(0.75*(1 - x), (1-0.75)*(1-x), (1-0.75)*x, 0.75*x))},
  function(x) {return(c(0.85*(1 - x), (1-0.85)*(1-x), (1-0.85)*x, 0.85*x))}
)
prob_rr_transform <- c(0.5, 0.5)

# Should individual arm data be saved?
stage_data <- TRUE

We can now run the simulation using:

set.seed(23)
run2 <- simulate_trial(
  n_int = n_int, n_fin = n_fin, rr_comb = rr_comb, rr_mono = rr_mono, rr_back = rr_back, 
  rr_plac = rr_plac, rr_transform = rr_transform, random = random, prob_comb_rr = prob_comb_rr, 
  random_type = random_type, prob_mono_rr = prob_mono_rr, prob_back_rr = prob_back_rr,
  prob_plac_rr = prob_plac_rr, stage_data = stage_data, cohort_random = cohort_random, 
  cohorts_max = cohorts_max, sr_drugs_pos = sr_drugs_pos, target_rr = target_rr, 
  sharing_type = sharing_type, safety_prob = safety_prob, P_Sup = P_Sup, 
  prob_rr_transform = prob_rr_transform, cohort_offset = cohort_offset, 
  cohort_fixed = cohort_fixed
)

plot_trial()

The following section explains how to use the plot_trial() function by giving some examples. As the name suggests, this function is used to plot the results of a trial simulation from simulate_trial(). Using the examples from the simulate_trial() section, the functionality will be illustrated.

General

The plot_trial() function is very simple: Assuming that the simulate_trial() function has already been used to create a trial object, the plot_trial() function can be applied directly to it. The result is an interactive plotly version of a ggplot. The plot is arranged in a 3x2 grid: The first plot on the top left shows the simulated study overview and gives information about the cohorts. The second plot on the top right shows the simulated correlation of the biomarker and the histology responses. Please note that the points are jittered, so this plot is not fully accurate and should only be used to get a quick impression of the correlation. The third plot in the middle on the left shows the empirical biomarker and histology response rates for the combination treatment, as well as the true histology response rate, by cohort. The fourth plot in the middle on the right shows the same for the mono treatment, the fifth plot on the bottom left for the backbone treatment, and the sixth plot on the bottom right for the placebo arm.

Variables

The variables are:

Parameter Definition
res_list List item containing trial results so far in a format used by the other functions in this package
unit What is unit of observation in response rate plots: N_cohort or N_total?

Example 1

#plot_trial(run1)

Example 2

#plot_trial(run2, unit = "n")

trial_ocs()

The following section explains how to use the trial_ocs() function by explaining the various input parameters and giving some examples. As the name suggests, this function is used to simulate a clinical trial given certain design parameters multiple times and then calculate the operating characteristics.

General

The trial_ocs() function takes the same arguments as the simulate_trial() function, which specify the desired study design, and additionally a few variables which define how many simulations will be run, on how many parallel cores the computation should be performed, whether to save the results as an Excel file or RData file and if so, where to save it.
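
A hypothetical call, reusing the design parameters and Bayesian decision rules from Example 1 of the simulate_trial() section, could look as follows. The simulation-specific argument names used here (iter for the number of simulated trials, coresnum for the number of parallel cores, plot_ocs for plotting) are assumptions rather than confirmed defaults; please check ?trial_ocs for the exact interface:

# Sketch only: argument names iter, coresnum and plot_ocs are assumptions
ocs1 <- trial_ocs(
  n_int = n_int, n_fin = n_fin, rr_comb = rr_comb, rr_mono = rr_mono, rr_back = rr_back,
  rr_plac = rr_plac, rr_transform = rr_transform, random = random, random_type = random_type,
  prob_comb_rr = prob_comb_rr, prob_mono_rr = prob_mono_rr, prob_back_rr = prob_back_rr,
  prob_plac_rr = prob_plac_rr, stage_data = stage_data, cohort_random = cohort_random,
  cohorts_max = cohorts_max, sr_drugs_pos = sr_drugs_pos, target_rr = target_rr,
  sharing_type = sharing_type, safety_prob = safety_prob, Bayes_Sup = Bayes_Sup,
  Bayes_Fut = Bayes_Fut, prob_rr_transform = prob_rr_transform, cohort_fixed = cohort_fixed,
  iter = 100, coresnum = 1, plot_ocs = FALSE
)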

Computed operating characteristics

Operating characteristics Definition
Avg Pat Average number of patients per trial
Avg Pat Comb Average number of patients on combination arms per trial
Avg Pat Mono Average number of patients on mono therapy arms per trial
Avg Pat Back Average number of patients on backbone arms per trial
Avg Pat Plat Average number of patients on placebo arms per trial
Avg RR Comb Average response rate of combination treatment arms across all trials
Avg RR Mono Average response rate of mono therapy treatment arms across all trials
Avg RR Back Average response rate of backbone treatment arms across all trials
Avg RR Plat Average response rate of placebo treatment arms across all trials
SD RR Comb Standard deviation of response rate of combination treatment arms across all trials
SD RR Mono Standard deviation of response rate of mono therapy treatment arms across all trials
SD RR Back Standard deviation of response rate of backbone treatment arms across all trials
SD RR Plat Standard deviation of response rate of placebo treatment arms across all trials
Avg Suc Hist Average number of responders at final analysis per trial
Avg Suc Hist Comb Average number of responders at final analysis on combination arms per trial
Avg Suc Hist Mono Average number of responders at final analysis on mono therapy arms per trial
Avg Suc Hist Back Average number of responders at final analysis on backbone arms per trial
Avg Suc Hist Plac Average number of responders at final analysis on placebo arms per trial
Avg Suc Bio Average number of responders at interim analysis per trial
Avg Suc Bio Comb Average number of responders at interim analysis on combination arms per trial
Avg Suc Bio Mono Average number of responders at interim analysis on mono therapy arms per trial
Avg Suc Bio Back Average number of responders at interim analysis on backbone arms per trial
Avg Suc Bio Plac Average number of responders at interim analysis on placebo arms per trial
Avg Cohorts Average number of cohorts per trial
Avg TP Average number of true positives per trial, i.e. on average, for how many cohorts, which are in truth superior according to the defined target product profile, did the decision rules lead to a declaration of superiority
Avg FP Average number of false positives per trial, i.e. on average, for how many cohorts, which are in truth futile according to the defined target product profile, did the decision rules lead to a declaration of superiority
Avg TN Average number of true negatives per trial, i.e. on average, for how many cohorts, which are in truth futile according to the defined target product profile, did the decision rules lead to a declaration of futility
Avg FN Average number of false negatives per trial, i.e. on average, for how many cohorts, which are in truth superior according to the defined target product profile, did the decision rules lead to a declaration of futility
FDR “False Discovery Rate”: the ratio of the sum of false positives (cohorts that are in truth futile according to the defined target product profile but were declared superior by the decision rules) to the sum of all positives (cohorts declared superior by the decision rules), where both sums are taken across all trial simulations; a sketch after this table illustrates how these aggregated ratios are computed
PTP “Per-Treatment-Power”: the ratio of the sum of true positives (cohorts that are in truth superior according to the defined target product profile and were declared superior by the decision rules) to the sum of all cohorts that are in truth superior (i.e. the sum of true positives and false negatives), where both sums are taken across all trial simulations; a low value indicates that the trial is wasteful with (in truth) superior therapies
PTT1ER “Per-Treatment-Type-1-Error”: the ratio of the sum of false positives (cohorts that are in truth futile according to the defined target product profile but were declared superior by the decision rules) to the sum of all cohorts that are in truth futile (i.e. the sum of false positives and true negatives), where both sums are taken across all trial simulations; a high value indicates that the trial frequently declares (in truth) futile therapies superior
FWER The proportion of trials in which at least one false positive decision was made, considering only trials that contain at least one cohort that is in truth futile
FWER BA The proportion of trials in which at least one false positive decision was made, regardless of whether any cohorts that are in truth futile exist in these trials
Disj Power The proportion of trials in which at least one correct positive decision was made, considering only trials that contain at least one cohort that is in truth superior
Disj Power BA The proportion of trials in which at least one correct positive decision was made, regardless of whether any cohorts that are in truth superior exist in these trials
Avg_Perc_Pat_Sup_Plac_Th Average percentage of patients on arms that are superior to placebo, where also arms that could (in theory) have entered the platform are considered
Avg_Perc_Pat_Sup_Plac_Real Average percentage of patients on arms that are superior to placebo, taking into account only arms that were actually in the platform
Avg_Pat_Plac_First_Suc Average number of patients on placebo until the first cohort was declared successful
Avg_Pat_Plac_Pool Average number of patients that could have been randomised to placebo
Avg_Cohorts_First_Suc Average number of cohorts until the first cohort was declared successful
Avg_any_P Percentage of trials where any alternative hypothesis was true
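To make the aggregated quantities above concrete, the following minimal sketch (purely illustrative, not package code; all object names and numbers are made up) shows how FDR, PTP, PTT1ER, FWER and the disjunctive power could be computed from hypothetical per-trial counts of true and false positives and negatives:

# Hypothetical per-trial decision counts, one row per simulated trial
counts <- data.frame(
  TP = c(2, 1, 0),  # in truth superior, declared superior
  FP = c(0, 1, 1),  # in truth futile, declared superior
  TN = c(1, 1, 2),  # in truth futile, declared futile
  FN = c(1, 1, 1)   # in truth superior, declared futile
)

# FDR, PTP and PTT1ER pool the counts across all simulated trials before taking the ratio
FDR    <- sum(counts$FP) / sum(counts$FP + counts$TP)
PTP    <- sum(counts$TP) / sum(counts$TP + counts$FN)
PTT1ER <- sum(counts$FP) / sum(counts$FP + counts$TN)

# FWER and disjunctive power are proportions of trials
FWER          <- mean(counts$FP[counts$FP + counts$TN > 0] > 0)  # only trials with an in-truth futile cohort
FWER_BA       <- mean(counts$FP > 0)                             # all trials
Disj_Power    <- mean(counts$TP[counts$TP + counts$FN > 0] > 0)  # only trials with an in-truth superior cohort
Disj_Power_BA <- mean(counts$TP > 0)                             # all trials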

Variables

The variables, which will be explained later in two examples, are:

Parameter Definition
iter Number of program simulations that should be performed
coresnum How many cores should be used for parallel computing
save Indicator whether simulation results should be saved in an Excel file
path Path to which simulation results will be saved; if NULL, then save to current path
ret_list Indicator whether function should return list of results
ret_trials Indicator whether individual trial results should be saved as well
filename Filename of saved Excel file with results; if NULL, then name will contain design parameters
plot_ocs Should OCs stability plots be drawn?
export Should any other variables be exported to the parallel tasks?
... All other design parameters for chosen program

Examples

We will look at the two examples from the simulate_trial() section.

Example 1

We set the parameters as:


# Set decision rules ----------------

# Comparison Combo vs Mono Interim Analysis
Bayes_Sup1_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup1_Int[1,] <- c(0.05, 0.80, 1.00)
# Comparison Combo vs Backbone Interim Analysis
Bayes_Sup2_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup2_Int[1,] <- c(0.05, 0.80, 1.00)
# Comparison Mono vs Placebo Interim Analysis
Bayes_Sup3_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup3_Int[1,] <- c(0.00, 0.80, 1.00)
# Comparison Backbone vs Placebo Interim Analysis
Bayes_Sup4_Int <- matrix(nrow = 1, ncol = 3)
Bayes_Sup4_Int[1,] <- c(0.00, 0.80, 1.00)

# Comparison Combo vs Mono Final Analysis
Bayes_Sup1_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup1_Fin[1,] <- c(0.10, 0.80, 1.00)
# Comparison Combo vs Backbone Final Analysis
Bayes_Sup2_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup2_Fin[1,] <- c(0.10, 0.80, 1.00)
# Comparison Mono vs Placebo Final Analysis
Bayes_Sup3_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup3_Fin[1,] <- c(0.05, 0.80, 1.00)
# Comparison Backbone vs Placebo Final Analysis
Bayes_Sup4_Fin <- matrix(nrow = 1, ncol = 3)
Bayes_Sup4_Fin[1,] <- c(0.05, 0.80, 1.00)

# Wrapup in package format
Bayes_Sup <- list(list(Bayes_Sup1_Int, Bayes_Sup2_Int, Bayes_Sup3_Int, Bayes_Sup4_Int),
                  list(Bayes_Sup1_Fin, Bayes_Sup2_Fin, Bayes_Sup3_Fin, Bayes_Sup4_Fin))

# Comparison Combo vs Mono
Bayes_Fut1 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut1[1,] <- c(0.00, 0.60)
# Comparison Combo vs Backbone
Bayes_Fut2 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut2[1,] <- c(0.00, 0.60)
# Comparison Mono vs Placebo
Bayes_Fut3 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut3[1,] <- c(0.00, 0.60)
# Comparison Backbone vs Placebo
Bayes_Fut4 <- matrix(nrow = 1, ncol = 2)
Bayes_Fut4[1,] <- c(0.00, 0.60)
Bayes_Fut <- list(list(Bayes_Fut1, Bayes_Fut2, Bayes_Fut3, Bayes_Fut4),
                  list(Bayes_Fut1, Bayes_Fut2, Bayes_Fut3, Bayes_Fut4))

# Furthermore, set the probability of stopping for safety, applied to every patient
safety_prob <- 0.0001

# For comparisons with Backbone and Placebo, how should data be pooled?
sharing_type <- "all"

# Which differences in response rates between 1:Combo vs. Mono/Back and 2:Mono/Back vs Plac should be considered a success?
target_rr <- c(0.10, 0.05, 1)

# Set cohort rules ------------

# What is the interim and final sample size for the cohorts?
n_int <- 50
n_fin <- 100

# What is the maximum number of cohorts that can be included in the platform (including starting cohorts)?
cohorts_max <- 5

# With what probability should a new cohort be added for every patient?
cohort_random <- 0.02
cohort_fixed  <- NULL

# Set simulation rules ----------

# Should response rates for the arms and the correlation structures be drawn randomly?
random <- TRUE

# We specify the absolute response rates
random_type <- "absolute"

# What are the possible response rates for the combination therapies and with what probabilities should they be drawn?
rr_comb <- c(0.35, 0.40, 0.45)
prob_comb_rr <- c(0.4, 0.4, 0.2)
# What are the possible response rates for the mono therapies and with what probabilities should they be drawn?
rr_mono <- c(0.15, 0.20, 0.25)
prob_mono_rr <- c(0.2, 0.4, 0.4)
# What are the possible response rates for the backbone therapies and with what probabilities should they be drawn?
rr_back <- c(0.15, 0.20, 0.25)
prob_back_rr <- c(0.3, 0.4, 0.3)
# What are the possible response rates for the placebos and with what probabilities should they be drawn?
rr_plac <- c(0.10, 0.12, 0.14)
prob_plac_rr <- c(0.25, 0.5, 0.25)

# How should response rates be transformed to four probabilities in multinomial sampling where the options are:
# 1) Biomarker:0, Histology:0, 2) Biomarker:1, Histology:0, 3) Biomarker:0, Histology:1, 4) Biomarker:1, Histology:1
# Choose values such that: 1) Specificity*(1-x), 2) (1-Specificity)*(1-x), 3) (1-Sensitivity)*x, 4) Sensitivity*x
# (Sensitivity and Specificity of Biomarker in predicting Histology)
# In the following example therefore two cases, each with 50% probability:
# 1) Sensitivity = Specificity = 75%
# 2) Sensitivity = Specificity = 85%
rr_transform <- list(
  function(x) {return(c(0.75*(1 - x), (1-0.75)*(1-x), (1-0.75)*x, 0.75*x))},
  function(x) {return(c(0.85*(1 - x), (1-0.85)*(1-x), (1-0.85)*x, 0.85*x))}
)
prob_rr_transform <- c(0.5, 0.5)
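# (Purely illustrative sanity check, not needed for the simulation: each
#  transformation maps a response rate to four probabilities that sum to one,
#  e.g. rr_transform[[1]](0.35) returns c(0.4875, 0.1625, 0.0875, 0.2625).)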

# After how many identified successful combos should the trial stop?
sr_drugs_pos <- 1

# Should individual arm data be saved?
stage_data <- TRUE

We can now set the function-specific parameters and run the simulation using:

# Set specific parameters
# How many iterations should be performed?
iter <- 10
# On how many cores should the calculation be performed?
coresnum <- 1
# Should the result be saved as an Excel File?
save <- TRUE
# Under which path?
path <- "C:\\Users\\Elias\\Desktop\\"
# Under which filename?
filename <- "Testrun"
# Should result also be returned as list?
ret_list <- TRUE
# Should individual trial data be saved?
ret_trials <- FALSE
# Should stability plots be shown?
plot_ocs <- TRUE
set.seed(50)
 ocs1 <- trial_ocs(
  n_int = n_int, n_fin = n_fin, rr_comb = rr_comb, rr_mono = rr_mono, rr_back = rr_back, 
  rr_plac = rr_plac, rr_transform = rr_transform, random = random, random_type = random_type, 
  prob_comb_rr = prob_comb_rr, prob_mono_rr = prob_mono_rr, prob_back_rr = prob_back_rr,
  prob_plac_rr = prob_plac_rr, stage_data = stage_data, cohort_random = cohort_random, 
  cohorts_max = cohorts_max, sr_drugs_pos = sr_drugs_pos, target_rr = target_rr, 
  sharing_type = sharing_type, safety_prob = safety_prob, Bayes_Sup = Bayes_Sup,
  Bayes_Fut = Bayes_Fut, prob_rr_transform = prob_rr_transform, cohort_fixed = cohort_fixed,
  ret_trials = ret_trials, iter = iter, coresnum = coresnum, save = FALSE, path = path, 
  filename = filename, ret_list = ret_list, plot_ocs = plot_ocs
 )
#ocs1[[3]]

Example 2

We set the parameters as:


# Set decision rules ----------------

# Comparison Combo vs Mono Interim Analysis
P_Sup1_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Combo vs Backbone Interim Analysis
P_Sup2_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Mono vs Placebo Interim Analysis
P_Sup3_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))
# Comparison Backbone vs Placebo Interim Analysis
P_Sup4_Int <- list(list(testfun = NA, p_sup = NA, p_prom = NA))

# Comparison Combo vs Mono Final Analysis
P_Sup1_Fin <- list(list(
  testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
  p_sup = 0.10, p_prom = 0, p_adj = "B"))
# Comparison Combo vs Backbone Final Analysis
P_Sup2_Fin <- list(list(
  testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
  p_sup = 0.10, p_prom = 0, p_adj = "B"))
# Comparison Mono vs Placebo Final Analysis
P_Sup3_Fin <- list(list(
  testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
  p_sup = 0.10, p_prom = 0, p_adj = "B"))
# Comparison Backbone vs Placebo Final Analysis
P_Sup4_Fin <- list(list(
  testfun = function(x) stats::prop.test(x, alternative = "less", correct = FALSE),
  p_sup = 0.10, p_prom = 0, p_adj = "B"))

# Wrapup in package format
P_Sup <- list(list(P_Sup1_Int, P_Sup2_Int, P_Sup3_Int, P_Sup4_Int),
              list(P_Sup1_Fin, P_Sup2_Fin, P_Sup3_Fin, P_Sup4_Fin))

# Furthermore, set the probability of stopping for safety, applied to every patient
safety_prob <- 0

# For comparisons with Backbone and Placebo, how should data be pooled?
sharing_type <- "dynamic"

# Which differences in response rates between 1:Combo vs. Mono/Back and 2:Mono/Back vs Plac should be considered a success?
target_rr <- c(0.10, 0.05, 1)

# Set cohort rules ------------

# What is the interim sample size for the additional cohorts?
n_int <- 100
# What should be the final sample size?
n_fin <- 200

# What is the maximum number of cohorts that can be included in the platform (including starting cohorts)?
cohorts_max <- 5

# With what probability should a new cohort be added at every timestamp?
cohort_random <- 0.03
cohort_fixed <- NULL

# What is the minimum number of iterations between the addition of new cohorts?
cohort_offset <- 20

# Finish evaluating all cohorts
sr_drugs_pos <- Inf

# Set simulation rules ----------

# Should response rates for the arms and the correlation structures be drawn randomly?
random <- TRUE

# We specify the absolute response rates
random_type <- "absolute"

# What are the possible response rates for the combination therapies and with what probabilities should they be drawn?
rr_comb <- c(0.35, 0.40, 0.45)
prob_comb_rr <- c(0.4, 0.4, 0.2)
# What are the possible response rates for the mono therapies and with what probabilities should they be drawn?
rr_mono <- c(0.15, 0.20, 0.25)
prob_mono_rr <- c(0.2, 0.4, 0.4)
# What are the possible response rates for the backbone therapies and with what probabilities should they be drawn?
rr_back <- c(0.15, 0.20, 0.25)
prob_back_rr <- c(0.3, 0.4, 0.3)
# What are the possible response rates for the placebos and with what probabilities should they be drawn?
rr_plac <- c(0.10, 0.12, 0.14)
prob_plac_rr <- c(0.25, 0.5, 0.25)

# How should response rates be transformed to four probabilities in multinomial sampling where the options are:
# 1) Biomarker:0, Histology:0, 2) Biomarker:1, Histology:0, 3) Biomarker:0, Histology:1, 4) Biomarker:1, Histology:1
# Choose values such that: 1) Specificity*(1-x), 2) (1-Specificity)*(1-x), 3) (1-Sensitivity)*x, 4) Sensitivity*x
# (Sensitivity and Specificity of Biomarker in predicting Histology)
# In the following example therefore two cases, each with 50% probability:
# 1) Sensitivity = Specificity = 75%
# 2) Sensitivity = Specificity = 85%
rr_transform <- list(
  function(x) {return(c(0.75*(1 - x), (1-0.75)*(1-x), (1-0.75)*x, 0.75*x))},
  function(x) {return(c(0.85*(1 - x), (1-0.85)*(1-x), (1-0.85)*x, 0.85*x))}
)
prob_rr_transform <- c(0.5, 0.5)

# Should individual arm data be saved?
stage_data <- TRUE

We can now set the function-specific parameters and run the simulation using:

# Set specific parameters
# How many iterations should be performed?
iter <- 10
# On how many cores should the calculation be performed?
coresnum <- 1
# Should the result be saved as an Excel File?
save <- TRUE
# Under which path?
path <- "C:\\Users\\Elias\\Desktop\\"
# Under which filename?
filename <- "Testrun"
# Should result also be returned as list?
ret_list <- TRUE
# Should individual trial data be saved?
ret_trials <- FALSE
# Should stability plots be shown?
plot_ocs <- TRUE
# set.seed(50)
#  ocs1 <- trial_ocs(
#   n_int = n_int, n_fin = n_fin, rr_comb = rr_comb, rr_mono = rr_mono, rr_back = rr_back, 
#   rr_plac = rr_plac, rr_transform = rr_transform, random = random, prob_comb_rr = prob_comb_rr, 
#   random_type = random_type, prob_mono_rr = prob_mono_rr, prob_back_rr = prob_back_rr,
#   prob_plac_rr = prob_plac_rr, stage_data = stage_data, cohort_random = cohort_random, 
#   cohorts_max = cohorts_max, sr_drugs_pos = sr_drugs_pos, target_rr = target_rr, 
#   sharing_type = sharing_type, safety_prob = safety_prob, P_Sup = P_Sup, cohort_fixed = cohort_fixed,
#   prob_rr_transform = prob_rr_transform, cohort_offset = cohort_offset, ret_trials = ret_trials,
#   iter = iter, coresnum = coresnum, save = save, path = path, filename = filename, ret_list = ret_list,
#   plot_ocs = plot_ocs
#  )
# ocs1[[3]]