The purpose of this vignette is to introduce the `bdpsurvival` function. `bdpsurvival` is used for estimating posterior samples in the context of right-censored data for clinical trials where an informative prior is used. The underlying model is a piecewise exponential model that assumes a constant hazard rate within each of several sub-intervals of the follow-up period. In the parlance of clinical trials, the informative prior is derived from historical data. The weight given to the historical data is determined using what we refer to as a discount function. There are three steps in carrying out estimation:

1. Estimation of the historical data weight, denoted \(\hat{\alpha}\), via the discount function
2. Estimation of the posterior distribution of the current data, conditional on the historical data weighted by \(\hat{\alpha}\)
3. For a two-arm clinical trial, estimation of the posterior treatment effect, i.e., treatment versus control

Throughout this vignette, we use the terms `current`, `historical`, `treatment`, and `control`. These terms are used because the model was envisioned in the context of clinical trials where historical data may be present. Because of this terminology, there are four potential sources of data:

- Current treatment data: treatment data from a current study
- Current control data: control (or other treatment) data from a current study
- Historical treatment data: treatment data from a previous study
- Historical control data: control (or other treatment) data from a previous study

If only treatment data are input, the function considers the analysis a one-arm trial. If both treatment and control data are input, the analysis is considered a two-arm trial.

Before we get into our estimation scheme, we will briefly describe the piecewise exponential model. First, we partition the time duration into \(J\) intervals with cutpoints (or breaks) \(0=\tau_0<\tau_1<\dots<\tau_J=\infty\). The \(j\)th interval is defined as \([\tau_{j-1},\,\tau_j)\). Then, we let \(\lambda_j\) denote the hazard rate of the \(j\)th interval. That is, we assume that the hazard rate is piecewise constant.

Now, let \(d_{ij}\) be an event indicator for the \(i\)th subject in the \(j\)th interval. That is, \(d_{ij}=1\) if the endpoint occurred in the \(j\)th interval, otherwise \(d_{ij}=0\). Let \(t_{ij}\) denote the exposure time of the \(i\)th subject in the \(j\)th interval.

Let \(D_j=\sum_id_{ij}\) be the number of events that occurred in interval \(j\), and let \(T_j=\sum_it_{ij}\) be the total exposure time within interval \(j\). Then, the \(j\)th hazard rate is estimated as \[\lambda_j\mid D_j,\,T_j \sim \mathcal{G}amma\left(a_0+D_j,\,b_0+T_j\right),\] where \(a_0\) and \(b_0\) are the prior shape and rate parameters of a gamma distribution. The survival probability can be estimated as \[p_S = 1-F_p\left(q,\,\lambda_1,\dots,\,\lambda_J,\,\tau_0,\,\dots,\,\tau_J\right),\] where \(F_p\) is the piecewise exponential cumulative distribution function.
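As a sketch of the computations above, the interval summaries \(D_j\) and \(T_j\), the gamma posterior draws of each hazard, and the survival probability at a time \(q\) can be assembled directly in R. The breaks, sample sizes, and variable names below are illustrative, not the package's internal implementation:

```
# Sketch: posterior draws of piecewise hazards under Gamma(a0, b0)
# priors, and the implied survival probability at t*.
set.seed(42)
time   <- rexp(50, rate = 1/10)   # event/censoring times (illustrative)
status <- rep(1, 50)              # 1 = event observed
cuts   <- c(0, 2, 5, 10, Inf)     # cutpoints tau_0 < ... < tau_J = Inf
a0 <- b0 <- 1

J <- length(cuts) - 1
D <- T_exp <- numeric(J)
for (j in seq_len(J)) {
  lo <- cuts[j]; hi <- cuts[j + 1]
  # exposure accumulated by each subject within interval j
  T_exp[j] <- sum(pmin(time, hi) - pmin(time, lo))
  # events occurring within interval j
  D[j] <- sum(status == 1 & time >= lo & time < hi)
}

# Posterior draws of each hazard: Gamma(a0 + D_j, b0 + T_j)
n_mcmc <- 10000
lambda <- sapply(seq_len(J), function(j) rgamma(n_mcmc, a0 + D[j], b0 + T_exp[j]))

# Survival at t* = 5: S(t*) = exp(-cumulative hazard up to t*)
t_star <- 5
expo <- pmin(t_star, cuts[-1]) - pmin(t_star, cuts[-length(cuts)])
p_S  <- exp(-as.vector(lambda %*% expo))
median(p_S)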

In the case where a covariate effect is present, a slightly different approach is used. In the `bdpsurvival` function, a covariate effect arises in the context of a two-arm trial where the covariate of interest is the treatment indicator, i.e., treatment vs. control. In that case, we assume a Poisson glm of the form \[\log\mathbb{E}\left(d_{ij}\mid\lambda_j,\,\beta\right)=\log t_{ij} + \log\lambda_j + x_i\beta,\,\,\,i=1,\dots,\,N,\,\,j=1,\dots,\,J.\] In this context, \(\beta\) is the \(\log\) hazard ratio between the treatment and control arms. With the Poisson glm, we use an approximation to estimate \(\beta\) conditional on each \(\lambda_j\). Suppose we estimate hazard rates for the treatment and control arms independently, denoted \(\lambda_{jT}\) and \(\lambda_{jC}\), respectively. That is \[\lambda_{jT} \sim \mathcal{G}amma\left(a_0+D_{jT},\,b_0+T_{jT}\right)\] and \[\lambda_{jC} \sim \mathcal{G}amma\left(a_0+D_{jC},\,b_0+T_{jC}\right),\] where \(D_{jT}\) and \(D_{jC}\) denote the number of events occurring in interval \(j\) for the treatment and control arms, respectively, and \(T_{jT}\) and \(T_{jC}\) denote the total exposure times in interval \(j\) for the treatment and control arms, respectively. Then, the approximation of the log-hazard rate \(\beta\) is carried out as follows: \[\begin{array}{rcl}
R_j & = & \log\lambda_{jT}-\log\lambda_{jC},\,\,\,j=1,\dots,\,J,\\
\\
V_j & = & \mathbb{V}ar(R_j),\,\,\,j=1,\dots,\,J,\\
\\
\beta & = & \displaystyle{\frac{\sum_jR_j/V_j}{\sum_j1/V_j}}.\\
\end{array}\] This estimate of \(\beta\) is essentially a normal approximation to the estimate under a Poisson glm. Currently, the variance term \(V_j\) is estimated empirically by calculating the variance of the posterior draws. The empirical variance approximates the theoretical variance under a normal approximation of \[\begin{array}{rcl}
\tilde{V}_j & = & V_{jT} + V_{jC},\\
\\
V_{jT} & = & 1/D_{jT},\\
\\
V_{jC} & = & 1/D_{jC}.\\
\end{array}\] In a future release of the package, we will demonstrate via simulation how well this normal approximation works in practice.
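The inverse-variance-weighted estimate above can be sketched in R. The interval summaries below are illustrative stand-ins; in practice the draws of \(\lambda_{jT}\) and \(\lambda_{jC}\) come from the fitted model:

```
# Sketch: normal approximation to the log hazard ratio beta via
# inverse-variance weighting of the interval-wise log-rate contrasts.
set.seed(42)
n_mcmc <- 10000; J <- 3
a0 <- b0 <- 1
D_T <- c(12, 20, 18); T_T <- c(90, 110, 150)   # illustrative treatment summaries
D_C <- c(10, 15, 25); T_C <- c(95, 100, 140)   # illustrative control summaries
lambda_T <- sapply(1:J, function(j) rgamma(n_mcmc, a0 + D_T[j], b0 + T_T[j]))
lambda_C <- sapply(1:J, function(j) rgamma(n_mcmc, a0 + D_C[j], b0 + T_C[j]))

R <- log(lambda_T) - log(lambda_C)  # draws of R_j, one column per interval
V <- apply(R, 2, var)               # empirical variance V_j of the draws
beta <- R %*% (1 / V) / sum(1 / V)  # weighted draws: sum_j R_j/V_j / sum_j 1/V_j
median(beta)
```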

In the first estimation step, the historical data weight \(\hat{\alpha}\) is estimated. In the case of a two-arm trial, where both treatment and control data are available, an \(\hat{\alpha}\) value is estimated separately for each of the treatment and control arms. Of course, historical treatment or historical control data must be present, otherwise \(\hat{\alpha}\) is not estimated for the corresponding arm.

When historical data are available, estimation of \(\hat{\alpha}\) is carried out as follows. Let \(d_{ij}\) and \(t_{ij}\) denote the event indicator and event time or censoring time for the \(i\)th subject in the \(j\)th interval of the current data, respectively. Similarly, let \(d_{0ij}\) and \(t_{0ij}\) denote the event indicator and event time or censoring time for the \(i\)th subject in the \(j\)th interval of the historical data, respectively. Let \(a_0\) and \(b_0\) denote the shape and rate parameters of a gamma distribution, respectively. Then, the posterior distributions of the \(j\) piecewise hazard rates for current and historical data, under vague (flat) priors, are

\[\lambda_{j} \sim \mathcal{G}amma\left(a_0+D_j,\,b_0+T_j\right)\] and \[\lambda_{0j} \sim \mathcal{G}amma\left(a_0+D_{0j},\,b_0+T_{0j}\right),\] respectively, where \(D_j=\sum_id_{ij}\), \(T_j=\sum_it_{ij}\), \(D_{0j}=\sum_id_{0ij}\), and \(T_{0j}=\sum_it_{0ij}\). The next steps depend on whether a one-arm or two-arm analysis is requested.

Under a one-arm analysis, the comparison of interest is the survival probability at a user-specified time \(t^\ast\). Let \[\tilde{\theta} = 1-F_p\left(t^\ast,\,\lambda_1,\dots,\,\lambda_J,\,\tau_0,\,\dots,\,\tau_J\right)\] and \[\theta_0 = 1-F_p\left(t^\ast,\,\lambda_{01},\dots,\,\lambda_{0J},\,\tau_0,\,\dots,\,\tau_J\right)\] be the posterior survival probabilities for the current and historical data, respectively. Then, we compute the posterior probability that the current survival is greater than the historical survival, \(p = Pr\left(\tilde{\theta} > \theta_0 \mid D, T, D_0,T_0 \right)\), where \(D\) and \(T\) collect \(D_1,\dots,D_J\) and \(T_1,\dots,T_J\), respectively.
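Given Monte Carlo draws of the two survival probabilities, the stochastic comparison is simply the proportion of draws in which the current survival exceeds the historical survival. The stand-in draws below are illustrative; in the package they come from the piecewise exponential posteriors:

```
# Sketch: the one-arm stochastic comparison p = Pr(theta_tilde > theta0)
# computed from posterior draws; rbeta draws are illustrative stand-ins.
set.seed(42)
theta_tilde <- rbeta(10000, 30, 20)   # stand-in draws of current survival
theta0      <- rbeta(10000, 35, 15)   # stand-in draws of historical survival
p_hat <- mean(theta_tilde > theta0)   # proportion of draws favoring current
```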

Under a two-arm analysis, the comparison of interest is the hazard ratio of current vs. historical data. We estimate the log hazard ratio \(\beta\) as described previously and compute the posterior probability that \(\beta>0\) as \(p = Pr\left(\beta > 0\mid D, T, D_0, T_0\right)\).

Finally, for a discount function, denoted \(W\), \(\hat{\alpha}\) is computed as \[ \hat{\alpha} = \alpha_{max}\cdot W\left(p, \,w\right),\,0\le p\le1, \] where \(w\) is one or more parameters associated with the discount function and \(\alpha_{max}\) scales the weight \(\hat{\alpha}\) by a user-input maximum value. More details on the discount functions are given in the discount function section below.

There are several model inputs at this first stage. First, the user can select `fix_alpha=TRUE` and force a fixed value of \(\hat{\alpha}\) (at the `alpha_max` input), as opposed to estimation via the discount function. Next, a Monte Carlo estimation approach is used, requiring several samples from the posterior distributions. Thus, the user can input a sample size greater than or less than the default value of `number_mcmc=10000`. Finally, the gamma prior shape and rate parameters can be changed from the defaults of \(a_0=b_0=1\) (the `a0` and `b0` inputs).

An alternate Monte Carlo-based estimation scheme for \(\hat{\alpha}\) has been implemented, controlled by the function input `method="mc"`. Here, instead of treating \(\hat{\alpha}\) as a fixed quantity, \(\hat{\alpha}\) is treated as random. For a one-arm analysis, let \(p_1\) denote the posterior probability. Then, \(p_1\) is computed as

\[ \begin{array}{rcl} v^2_1 & = & \displaystyle{t^{\ast2}\sum_{j=1}^{J\left(t^\ast\right)}\frac{\lambda_{j}^{2}}{D_{j}}} ,\\ \\ v^2_{01} & = & \displaystyle{t^{\ast2}\sum_{j=1}^{J\left(t^\ast\right)}\frac{\lambda_{0j}^{2}}{D_{0j}}} ,\\ \\ Z_1 & = & \displaystyle{\frac{\left|\tilde{\theta}-\theta_0\right|}{\sqrt{v^2_1 + v^2_{01}}}} ,\\ \\ p_1 & = & 2\left(1-\Phi\left(Z_1\right)\right), \end{array} \]

where \(\Phi\) is the standard normal cumulative distribution function (the value \(p_1\) is computed via the `pnorm` R function). Here, \(v_1^2\) and \(v^2_{01}\) are the variances of \(\tilde{\theta}\) and \(\theta_0\), respectively, derived via the Fisher information. Next, \(p_1\) is used to construct \(\hat{\alpha}\) via the discount function. Since the values \(Z_1\) and \(p_1\) are computed at each iteration of the Monte Carlo estimation scheme, \(\hat{\alpha}\) is likewise computed at each iteration, resulting in a distribution of \(\hat{\alpha}\) values.

For a two-arm analysis, let \(p_2\) denote the posterior probability. Then, \(p_2\) is computed as

\[ \begin{array}{rcl}
v^2_{2j} & = & \left(a_0 + D_j\right)^{-1},\, j=1,\dots,J,\\
\\
v^2_{02j} & = & \left(a_0 + D_{0j}\right)^{-1},\, j=1,\dots,J,\\
\\
\tilde{R} & = & \displaystyle{ \left(\sum_{j=1}^J\frac{\log\lambda_j-\log\lambda_{0j} }{1/v^2_{2j} + 1/v^2_{02j}}\right) \left(\sum_{j=1}^J\frac{1}{1/v^2_{2j} + 1/v^2_{02j}}\right)^{-1} } ,\\
\\
Z_2 & = & \displaystyle{\left|\tilde{R}\right|\left(\sum_{j=1}^J\frac{1}{1/v^2_{2j} + 1/v^2_{02j}}\right)^{-1/2} } ,\\
\\
p_2 & = & 2\left(1-\Phi\left(Z_2\right)\right),
\end{array}
\] where \(\Phi\) is the standard normal cumulative distribution function (the value \(p_2\) is computed via the `pnorm` R function). Here, \(v^2_{2j}\) and \(v^2_{02j}\) are the variances of \(\log\lambda_j\) and \(\log\lambda_{0j}\), respectively, derived via the Fisher information. Next, \(p_2\) is used to construct \(\hat{\alpha}\) via the discount function. Since the values \(Z_2\) and \(p_2\) are computed at each iteration of the Monte Carlo estimation scheme, \(\hat{\alpha}\) is likewise computed at each iteration, resulting in a distribution of \(\hat{\alpha}\) values.

With the historical data weight (or weights) \(\hat{\alpha}\) in hand, we can move on to estimation of the posterior distribution of the current data.

There are currently three discount functions implemented throughout the `bayesDP` package. The discount function is specified using the `discount_function` input with the following choices available:

- `weibull` (default): Weibull cumulative distribution function (CDF)
- `scaledweibull`: Scaled Weibull CDF
- `identity`: Identity

The Weibull CDF is the default discount function and has two user-specified parameters associated with it, the shape and scale. The default shape is 3 and the default scale is 0.135, each of which is controlled by the function inputs `weibull_shape` and `weibull_scale`, respectively. The form of the Weibull CDF is \[W(x) = 1 - \exp\left\{- (x/w_{scale})^{w_{shape}}\right\}.\]

The second discount function option is the Scaled Weibull CDF. The Scaled Weibull CDF is the Weibull CDF divided by the value of the Weibull CDF evaluated at 1, i.e., \[W^{\ast}(x) = W(x)/W(1).\] Similar to the Weibull CDF, the Scaled Weibull CDF has two user-specified parameters associated with it, the shape and scale, again controlled by the function inputs `weibull_shape` and `weibull_scale`, respectively.

The third discount function is the identity. This simply sets the discount weight \(\hat{\alpha}=p\).
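As a sketch, all three discount functions can be evaluated with base R, using the default shape and scale. The combination with `alpha_max` then gives \(\hat{\alpha} = \alpha_{max}\cdot W(p)\); the value of `p` below is illustrative:

```
# Sketch: the three discount functions at the default shape = 3,
# scale = 0.135, and the resulting weight alpha_hat.
p <- seq(0, 1, by = 0.01)
w_weibull  <- pweibull(p, shape = 3, scale = 0.135)                   # `weibull`
w_scaled   <- w_weibull / pweibull(1, shape = 3, scale = 0.135)       # `scaledweibull`
w_identity <- p                                                       # `identity`

alpha_max <- 1
p_hat     <- 0.11                                                     # illustrative value
alpha_hat <- alpha_max * pweibull(p_hat, shape = 3, scale = 0.135)
```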

Using the default shape and scale inputs, each of the discount functions is shown below.

In each of the above plots, the x-axis is the stochastic comparison between current and historical data, which we’ve denoted \(p\). The y-axis is the discount value \(\hat{\alpha}\) that corresponds to a given value of \(p\).

An advanced input for the plot function is `print`. The default value is `print = TRUE`, which simply returns the graphics. Alternatively, users can specify `print = FALSE`, which returns a `ggplot2` object. Below is an example using the discount function plot:

```
p1 <- plot(fit01, type="discount", print=FALSE)
p1 + ggtitle("Discount Function Plot :-)")
```

The posterior distribution is dependent on the analysis type: one-arm or two-arm analysis.

With \(\hat{\alpha}\) in hand, we can now estimate the posterior distributions of the hazards so that we can estimate the survival probability as described previously. Using the notation of the previous sections, the posterior distribution is \[
\begin{array}{rcl}
p_S & = & 1-F_p\left(t^\ast,\,\lambda_1,\dots,\,\lambda_J,\,\tau_0,\,\dots\,\,\tau_J\right),\\
\\
\lambda_j & \sim & \mathcal{G}amma\left(a_0+\sum_id_{ij} + \hat{\alpha}\sum_id_{0ij},\,b_0+\sum_it_{ij} + \hat{\alpha}\sum_it_{0ij}\right),\,j=1,\dots,J\\
\end{array}
\] At this model stage, we have in hand `number_mcmc` simulations from the augmented posterior distribution and we then generate posterior summaries.
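Drawing from the \(\hat{\alpha}\)-augmented posterior amounts to adding the weighted historical counts and exposures to the gamma parameters. The interval summaries and weight below are illustrative stand-ins:

```
# Sketch: draws from the alpha-augmented posterior hazards,
# Gamma(a0 + D_j + alpha*D_0j, b0 + T_j + alpha*T_0j).
set.seed(42)
a0 <- b0 <- 1
n_mcmc <- 10000
D  <- c(12, 20, 18); T_cur  <- c(90, 110, 150)   # current events/exposure (illustrative)
D0 <- c(15, 22, 16); T_hist <- c(100, 120, 140)  # historical events/exposure (illustrative)
alpha_hat <- 0.42

lambda_aug <- sapply(seq_along(D), function(j)
  rgamma(n_mcmc,
         shape = a0 + D[j]     + alpha_hat * D0[j],
         rate  = b0 + T_cur[j] + alpha_hat * T_hist[j]))
```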

Again, under a two-arm analysis, and with \(\hat{\alpha}\) in hand, we can now estimate the posterior distribution of the log hazard rate comparing treatment and control. Let \(\lambda_{jT}\) and \(\lambda_{jC}\) denote the hazard associated with the treatment and control data for the \(j\)th interval, respectively. Then we augment each of the treatment and control data by the weighted historical data as \[
\begin{array}{rcl}
\lambda_{jT} & \sim & \mathcal{G}amma\left(a_0+\sum_id_{ijT} + \hat{\alpha}_T\sum_id_{0ijT},\,b_0+\sum_it_{ijT} + \hat{\alpha}_T\sum_it_{0ijT}\right),\,j=1,\dots,J\\
\end{array}
\] and \[
\begin{array}{rcl}
\lambda_{jC} & \sim & \mathcal{G}amma\left(a_0+\sum_id_{ijC} + \hat{\alpha}_C\sum_id_{0ijC},\,b_0+\sum_it_{ijC} + \hat{\alpha}_C\sum_it_{0ijC}\right),\,j=1,\dots,J\\
\end{array}
\] respectively. We then construct the log hazard ratio \(\beta\) using the estimates of \(\lambda_{jT}\) and \(\lambda_{jC}\), \(j=1,\dots,\,J\), as described previously. At this model stage, we have in hand `number_mcmc` simulations from the augmented posterior distribution and we then generate posterior summaries.

The data inputs for `bdpsurvival` are via a data frame that must have very specific column names. The required columns are `status`, `time`, `historical`, and `treatment`. Descriptions of each column are as follows:

- `status` - an indicator (0 or 1) of whether an event was observed (1) or if the observation is right-censored (0)
- `time` - the event or censoring time
- `historical` - an indicator (0 or 1) of whether the observation source is historical data (1) or current data (0)
- `treatment` - an indicator (0 or 1) of whether the observation source is the treatment group (1) or control group (0)

Historical data are not required, but without historical data there is little reason to use this function. **At a minimum, at least one observation with `historical=0` and `treatment=1` must be present**. Each of the following input combinations of observations is allowed:

- (`historical=0`, `treatment=1`) - one-arm trial
- (`historical=0`, `treatment=1`) + (`historical=1`, `treatment=1`) - one-arm trial
- (`historical=0`, `treatment=1`) + (`historical=0`, `treatment=0`) - two-arm trial
- (`historical=0`, `treatment=1`) + (`historical=1`, `treatment=0`) - two-arm trial
- (`historical=0`, `treatment=1`) + (`historical=1`, `treatment=1`) + (`historical=0`, `treatment=0`) - two-arm trial
- (`historical=0`, `treatment=1`) + (`historical=1`, `treatment=1`) + (`historical=1`, `treatment=0`) - two-arm trial
- (`historical=0`, `treatment=1`) + (`historical=1`, `treatment=1`) + (`historical=0`, `treatment=0`) + (`historical=1`, `treatment=0`) - two-arm trial

To demonstrate a one-arm trial, we will simulate survival data from an exponential distribution. For ease of exposition, we will assume that there are no censored observations.

```
set.seed(42)

# Simulate survival times
time_current    <- rexp(50, rate = 1/10)
time_historical <- rexp(50, rate = 1/15)

# Combine simulated data into a data frame
data1 <- data.frame(status     = 1,
                    time       = c(time_current, time_historical),
                    historical = c(rep(0, 50), rep(1, 50)),
                    treatment  = 1)
```

In this example, we’ve simulated current survival times from an exponential distribution with rate `1/10` and historical survival times from an exponential distribution with rate `1/15`. With our data frame constructed, we can now fit the `bdpsurvival` model. Since this is a one-arm trial, we will request the survival probability at `surv_time=5`. Thus, estimation using the default model inputs is carried out:

```
set.seed(42)
fit1 <- bdpsurvival(Surv(time, status) ~ historical + treatment,
                    data      = data1,
                    surv_time = 5)
print(fit1)
```

```
##
## One-armed bdp survival
##
##
## n events surv_time median lower 95% CI upper 95% CI
## 50 50 5 0.6569 0.5556 0.7501
```

The `print` method displays the median survival probability of `0.6569` and the 95% lower and upper interval limits of `0.5556` and `0.7501`, respectively. The `summary` method is implemented as well. For a one-arm trial, the summary outputs a survival table as follows:

`summary(fit1)`

```
##
## One-armed bdp survival
##
## Stochastic comparison (p_hat) - treatment (current vs. historical data): 0.1108
## Discount function value (alpha) - treatment: 0.4247
##
## Current treatment - augmented posterior summary:
## time n.risk n.event survival std.err lower 95% CI upper 95% CI
## 0.2882 50 1 0.9792 0.0053 0.9672 0.9878
## 0.2883 49 1 0.9792 0.0053 0.9672 0.9878
## 0.3819 48 1 0.9725 0.0069 0.9568 0.9839
## 0.5714 47 1 0.9591 0.0102 0.9361 0.9760
## 0.9615 46 1 0.9322 0.0167 0.8948 0.9599
## 1.9834 45 1 0.8651 0.0317 0.7950 0.9190
## 2.2356 44 1 0.8493 0.0351 0.7722 0.9092
## 2.3111 43 1 0.8446 0.0360 0.7655 0.9062
## 2.7996 42 1 0.8150 0.0420 0.7234 0.8876
## 2.8349 41 1 0.8129 0.0424 0.7205 0.8862
## 3.0982 40 1 0.7974 0.0454 0.6989 0.8764
## 3.1398 39 1 0.7950 0.0459 0.6955 0.8748
## 3.5121 38 1 0.7672 0.0461 0.6680 0.8481
## 3.5455 37 1 0.7646 0.0460 0.6658 0.8449
## 3.7034 36 1 0.7521 0.0458 0.6545 0.8326
## 3.8641 35 1 0.7393 0.0458 0.6429 0.8207
## 3.9266 34 1 0.7345 0.0458 0.6380 0.8160
## 4.1013 33 1 0.7212 0.0461 0.6251 0.8041
## 4.3871 32 1 0.7001 0.0469 0.6027 0.7850
## 4.7318 31 1 0.6759 0.0484 0.5761 0.7646
## 4.7519 30 1 0.6744 0.0485 0.5746 0.7636
## 4.9759 29 1 0.6587 0.0496 0.5573 0.7514
## 5.6939 28 1 0.6114 0.0539 0.5035 0.7130
## 6.2211 27 1 0.5789 0.0571 0.4666 0.6875
## 6.5632 26 1 0.5584 0.0578 0.4458 0.6688
## 6.6090 25 1 0.5555 0.0575 0.4434 0.6655
## 6.6583 24 1 0.5523 0.0572 0.4409 0.6619
## 7.1486 23 1 0.5215 0.0553 0.4142 0.6293
## 7.1909 22 1 0.5189 0.0551 0.4120 0.6260
## 7.3750 21 1 0.5080 0.0548 0.4020 0.6143
## 7.7829 20 1 0.4846 0.0544 0.3800 0.5913
## 11.9160 19 1 0.3106 0.0501 0.2199 0.4158
## 11.9511 18 1 0.3095 0.0500 0.2191 0.4148
## 12.1138 17 1 0.3044 0.0497 0.2148 0.4093
## 12.4558 16 1 0.2940 0.0490 0.2067 0.3989
## 12.5396 15 1 0.2915 0.0488 0.2046 0.3958
## 12.8709 14 1 0.2820 0.0482 0.1969 0.3864
## 13.0849 13 1 0.2760 0.0479 0.1914 0.3803
## 13.4472 12 1 0.2663 0.0474 0.1826 0.3701
## 14.1609 11 1 0.2480 0.0466 0.1664 0.3514
## 14.6363 10 1 0.2365 0.0462 0.1563 0.3395
## 15.2313 9 1 0.2230 0.0457 0.1436 0.3257
## 17.5641 8 1 0.1772 0.0440 0.1034 0.2771
## 19.0209 7 1 0.1658 0.0416 0.0963 0.2611
## 24.0868 6 1 0.1339 0.0353 0.0762 0.2156
## 30.1856 5 1 0.1037 0.0306 0.0560 0.1762
## 41.6813 4 1 0.0643 0.0250 0.0284 0.1251
## 48.6280 3 1 0.0482 0.0223 0.0184 0.1033
## 49.9597 2 1 0.0457 0.0218 0.0167 0.0997
## 68.4657 1 1 0.0214 0.0154 0.0050 0.0627
```

In the above output, in addition to a survival table, we can see the stochastic comparison between the current and historical data of `0.1108` as well as the weight, alpha, of `0.4247` applied to the historical data. Here, the weight applied to the historical data is relatively small since the stochastic comparison suggests that the current and historical data are not similar.

Suppose that we would like to apply full weight to the historical data. This can be accomplished by setting `alpha_max=1` and `fix_alpha=TRUE` as follows:

```
set.seed(42)
fit1a <- bdpsurvival(Surv(time, status) ~ historical + treatment,
                     data      = data1,
                     surv_time = 5,
                     alpha_max = 1,
                     fix_alpha = TRUE)
print(fit1a)
```

```
##
## One-armed bdp survival
##
##
## n events surv_time median lower 95% CI upper 95% CI
## 50 50 5 0.6831 0.5958 0.7592
```

Now, the median survival probability shifts upwards towards the historical data.

Many of the values presented in the `summary` and `print` methods are accessible from the fit object. For instance, `alpha` is found in `fit1a$posterior_treatment$alpha_discount` and `p_hat` is located at `fit1a$posterior_treatment$p_hat`. The augmented survival probability and CI are computed at run-time. The results can be replicated as:

```
survival_time_posterior_flat <- ppexp(5,
                                      fit1a$posterior_treatment$posterior_hazard,
                                      cuts = c(0, fit1a$args1$breaks))
surv_augmented <- 1 - median(survival_time_posterior_flat)
CI95_augmented <- 1 - quantile(survival_time_posterior_flat, prob = c(0.975, 0.025))
```

Here, we first compute the piecewise exponential cumulative distribution function using the `ppexp` function. The `ppexp` function requires the survival time (`5` here), the posterior draws of the piecewise hazards, and the cut points of the corresponding intervals.

Finally, we’ll explore the `plot` method.

`plot(fit1a)`

The top plot displays three survival curves. The green curve is the survival of the historical data, the red curve is the survival of the current data, and the blue curve is the survival of the current data augmented by historical data. Since we gave full weight to the historical data, the augmented curve lies “in-between” the current and historical curves.

The bottom plot displays the discount function (solid curve) as well as `alpha` (horizontal dashed line) and `p_hat` (vertical dashed line). In the present example, the discount function is the Weibull CDF with `shape=3` and `scale=0.135`.

On to two-arm trials. In this package, we define a two-arm trial as an analysis where a current and/or historical control arm is present. Suppose we have the same treatment data as in the one-arm example, but now we introduce control data. Again, we will assume that there is no censoring present in the control data:

```
set.seed(42)

# Simulate survival times for treatment data
time_current_trt    <- rexp(50, rate = 1/10)
time_historical_trt <- rexp(50, rate = 1/15)

# Simulate survival times for control data
time_current_cntrl    <- rexp(50, rate = 1/20)
time_historical_cntrl <- rexp(50, rate = 1/20)

# Combine simulated data into a data frame
data2 <- data.frame(status     = 1,
                    time       = c(time_current_trt, time_historical_trt,
                                   time_current_cntrl, time_historical_cntrl),
                    historical = c(rep(0, 50), rep(1, 50), rep(0, 50), rep(1, 50)),
                    treatment  = c(rep(1, 100), rep(0, 100)))
```

In this example, we’ve simulated current and historical control survival times from exponential distributions with rate `1/20`. Note how the `data2` data frame has been constructed; we’ve taken care to ensure that the current/historical and treatment/control indicators line up properly. With our data frame constructed, we can now fit the `bdpsurvival` model. Before proceeding, it is worth pointing out that the discount function is applied separately to the treatment and control data. Now, let’s carry out the two-arm analysis using default inputs:

```
set.seed(42)
fit2 <- bdpsurvival(Surv(time, status) ~ historical + treatment,
                    data = data2)
print(fit2)
```

```
##
## Two-armed bdp survival
##
## data:
## Current treatment: n = 50, number of events = 50
## Current control: n = 50, number of events = 50
## Stochastic comparison (p_hat) - treatment (current vs. historical data): 0.0966
## Stochastic comparison (p_hat) - control (current vs. historical data): 0.2948
## Discount function value (alpha) - treatment: 0.3068
## Discount function value (alpha) - control: 1
##
## coef exp(coef) se(coef) lower 95% CI upper 95% CI
## treatment 0.6348 1.8866 0.1726 0.2962 0.9691
```

The `print` method of a two-arm analysis is largely different from that of a one-arm analysis (with a two-arm analysis, the `summary` method is identical to `print`). First, we see the stochastic comparisons reported for both the treatment and control arms. As seen previously, the stochastic comparison between the current and historical data for the treatment data is relatively small at `0.0966`, giving a low historical data weight of `0.3068`. On the other hand, the stochastic comparison between the current and historical data for the control data is relatively large at `0.2948`, giving a high historical data weight of `1`. Finally, the presented `coef` value (and associated interval limits) is the log hazard ratio between the augmented treatment data and the augmented control data, computed as `log(treatment) - log(control)`.