Fitting football models and visualizing predictions with the footBayes package

Leonardo Egidi, Vasilis Palaskas

2023-08-31

Modeling football outcomes

Modeling football outcomes has become incredibly popular over the last years. However, an encompassing computational tool able to fit many alternative football models in one step is still missing.

With the footBayes package we want to fill this gap and make it possible to fit, interpret and graphically explore the following goal-based Bayesian football models using the underlying Stan (Stan Development Team (2020)) environment:

Precisely, we’ll learn how to:

The package is also available at the following link:

https://github.com/LeoEgidi/footBayes

In my opinion, building packages is like a work of art, likely to be never-ending. In artistic terms, I love this quote from Antoine de Saint-Exupéry (partially rephrased):

“A (software) designer knows that he has reached perfection not when there is nothing more to add, but when there is nothing left to take away!”

Bivariate-Poisson model

One main concern with the double Poisson model is that the goals scored during a match by the two competing teams are conditionally independent. However, in team sports such as football, water polo, handball, hockey, and basketball it is reasonable to assume that the two outcome variables are correlated, since the two teams interact during the game. Consider, for instance, the realistic football case of the home team leading 1-0 when only ten minutes are left to play. The away team may then become more determined and take more risks in an effort to score and achieve a draw by the end of the match. Or, when one of the two teams is leading, say, 3-0 or 4-0, it is likely to relax a bit, and the opposing team could score at least one goal quite easily. Goals' correlation due to a change in the behaviour of one or both teams can therefore be captured by a dependence parameter accounting for positive correlation. Positive parametric goals' dependence is made possible by using a bivariate Poisson distribution.

Consider random variables \(X_r, r =1,2,3\), which follow independent Poisson distributions with parameters \(\lambda_r >0\). Then the random variables \(X=X_1 + X_3\) and \(Y =X_2 + X_3\) jointly follow a bivariate Poisson distribution \(\text{BP}(\lambda_1, \lambda_2, \lambda_3)\), with joint probability function

\[\begin{align} \begin{split} P_{X, Y}(x,y)&= \textrm{Pr}(X=x, Y=y)\\ &= \exp \{ -(\lambda_1+\lambda_2+\lambda_3) \}\frac{\lambda_1^x}{x!}\frac{\lambda_2^y}{y!}\times \\ & \sum_{k=0}^{\min{(x,y)}}\binom{x}{k}\binom{y}{k}k!\left( \frac{\lambda_3}{\lambda_1 \lambda_2} \right)^k. \end{split} \label{eq:biv_density} \end{align}\]

Marginally each random variable follows a Poisson distribution with \(\textrm{E}(X)=\lambda_{1}+\lambda_{3}, \ \textrm{E}(Y)=\lambda_{2}+\lambda_{3}\), and \(\textrm{cov}(X,Y)=\lambda_{3}\); \(\lambda_{3}\) acts as a measure of dependence between the goals scored by the two competing teams. If \(\lambda_3 = 0\) then the two variables are conditionally independent and the bivariate Poisson distribution reduces to the product of two independent Poisson distributions, the double Poisson case.

Let \((x_{n}, y_{n})\) denote the observed number of goals scored by the home and the away team in the \(n\)-th game, respectively. A general bivariate Poisson model allowing for goals’ correlation, as in Karlis and Ntzoufras (2003) is the following:

\[\begin{align} X_n, Y_n| \lambda_{1n}, \lambda_{2n}, \lambda_{3n} & \sim \mathsf{BivPoisson}(\lambda_{1n}, \lambda_{2n}, \lambda_{3n})\\ \log(\lambda_{1n}) & = \mu+\text{home}+ \text{att}_{h_n}+\text{def}_{a_n}\\ \log(\lambda_{2n}) & = \mu +\text{att}_{a_n}+\text{def}_{h_n}\\ \log(\lambda_{3n}) &=\beta_0 + \gamma_1 \beta^{\text{home}}_{h_n}+\gamma_2\beta^{\text{away}}_{a_n} + \gamma_3\boldsymbol{\beta} w_n, \end{align}\]

where \(\lambda_{1n}, \lambda_{2n}\) represent the scoring rates for the home and the away team, respectively; \(\mu\) represents the constant intercept; \(\text{home}\) represents the home-effect, i.e. the well-known advantage of the team hosting the game; \(\text{att}_t\) and \(\text{def}_t\) represent the attack and the defence abilities, respectively, for each team \(t\), \(t=1,\ldots,T\); the nested indexes \(h_{n}, a_{n}=1,\ldots,T\) denote the home and the away team playing in the \(n\)-th game, respectively; \(\beta_0\) is a constant parameter; \(\beta^{\text{home}}_{h_n}\) and \(\beta^{\text{away}}_{a_n}\) are parameters that depend on the home and away team respectively, \(w_{n}\) is a vector of covariates for the \(n\)-th match used to model the covariance term and \(\boldsymbol{\beta}\) is the corresponding vector of regression coefficients. The parameters \(\gamma_1, \gamma_2\) and \(\gamma_3\) are dummy binary indicators taking values 0 or 1 which may activate distinct sources of the linear predictor. Hence when \(\gamma_1=\gamma_2=\gamma_3=0\) we consider constant covariance as in Egidi and Torelli (2020) , whereas when \((\gamma_1, \gamma_2, \gamma_3)=(1,1,0)\) we assume that the covariance depends on the teams’ parameters only but not on further match covariates, and so on.

The case \(\lambda_{3n}=0\) (the scores' correlation parameter equals zero) reduces to the double Poisson model, as in Baio and Blangiardo (2010).

To achieve model identifiability, the attack/defence parameters are subject to a sum-to-zero constraint:

\[\begin{equation} \sum_{t=1}^{T} \text{att}_{t}=0, \ \sum_{t=1}^{T}\text{def}_{t}=0. \label{eq:sum_to_zero} \end{equation}\]

Another identifiability constraint, widely proposed in the football literature, is the corner constraint, which sets the abilities of the \(T\)-th team equal to the negative sum of the others, thus achieving a sum-to-zero constraint as well:

\[\begin{equation} \text{att}_T= -\sum_{t=1}^{T-1} \text{att}_{t}, \ \text{def}_T=- \sum_{t=1}^{T-1}\text{def}_{t}. \label{eq:corner2} \end{equation}\]

The current version of the package allows for the fit of diagonal-inflated bivariate Poisson (Karlis and Ntzoufras 2003) and zero-inflated Skellam models to better capture the probability of draw occurrences. A draw between two teams is represented by the outcomes on the diagonal of the probability table. To correct for the excess of draws we may add an inflation component on the diagonal of the probability function. This model is an extension of the simple zero-inflated model, which allows only for an excess of (0,0) draws. Let's focus on the diagonal-inflated bivariate Poisson model, specified as:

\[\begin{equation} P_{X, Y}(x,y) = \textrm{Pr}(X=x, Y=y) = \begin{cases} (1-p) \text{BP}(\lambda_1, \lambda_2, \lambda_3) \ \ \ & \text{if} \ x \ne y \\ (1-p) \text{BP}(\lambda_1, \lambda_2, \lambda_3) + pD(x, \eta) \ \ \ & \text{if} \ x = y, \end{cases} \end{equation}\]

where \(D(x, \eta)\) is a discrete distribution with parameter vector \(\eta\).

Now it's time to fit and interpret the models, and we'll mainly focus on the bivariate Poisson case. Classical estimates for BP models are provided, among others, by Karlis and Ntzoufras (2003) (MLE through an EM algorithm) and Koopman and Lit (2015); in the following, we quickly review the maximum likelihood approach, but we'll focus mainly on Bayesian estimation with the underlying rstan software to better capture:

Likelihood approach

Given the parameter-vector \(\boldsymbol{\theta}= (\{\text{att}_t,\text{def}_t,t=1,\ldots,T \}, \mu, \text{home}, \beta^{\text{home}}_{h_n}, \beta^{\text{away}}_{a_n}, \beta_0, \boldsymbol{\beta})\), the likelihood function of the bivariate Poisson model above takes the following form:

\[\begin{eqnarray} L(\boldsymbol{\theta}) = & \prod_{n=1}^N \exp \{ -(\lambda_{1n}+\lambda_{2n}+\lambda_{3n}) \}\frac{\lambda_{1n}^{x_n}}{{x_n}!}\frac{\lambda_{2n}^{y_n}}{{y_n}!}\times \\ &\sum_{k=0}^{\min{(x_n,y_n)}}\binom{x_n}{k}\binom{y_n}{k}k!\left( \frac{\lambda_{3n}}{\lambda_{1n} \lambda_{2n}} \right)^k. \label{eq:lik_biv} \end{eqnarray}\]

Maximum-likelihood parameters estimation can be performed by searching the MLE \(\hat{\boldsymbol{\theta}}\) such that:

\[\hat{\boldsymbol{\theta}} = \underset{\theta \in \Theta}{\text{argmax}}\ L(\boldsymbol{\theta}),\]

by imposing the following system of partial (log)-likelihood equations:

\[l'(\boldsymbol{\theta})=0.\] Wald and deviance-confidence intervals may be constructed for the MLE \(\hat{\boldsymbol{\theta}}\). A 95% Wald-type interval satisfies:

\[\hat{\boldsymbol{\theta}} \pm 1.96 \ \text{se}(\hat{\boldsymbol{\theta}}).\] As we'll see, the footBayes package allows the MLE computational approach (along with Wald-type and profile-likelihood confidence intervals) for static models only, i.e. when the model complexity is considered acceptable. As the parameter space grows, as commonly happens when adding dynamic patterns, MLE becomes computationally expensive and less reliable.

Bayesian approach

The goal of Bayesian analysis is to draw inferential conclusions from the joint posterior distribution \(\pi(\boldsymbol{\theta}|\mathcal{D})\), where \(\mathcal{D}= (x_n, y_n)_{n=1,\ldots,N}\) denotes the set of observed data for the \(N\) matches. The joint posterior satisfies

\[\pi(\boldsymbol{\theta}|\mathcal{D}) = \frac{p(\mathcal{D}|\boldsymbol{\theta})\pi(\boldsymbol{\theta})}{p(\mathcal{D})} \propto p(\mathcal{D}|\boldsymbol{\theta})\pi(\boldsymbol{\theta}),\]

where \(p(\mathcal{D}|\boldsymbol{\theta})\) is the model sampling distribution (proportional to the likelihood function), \(\pi(\boldsymbol{\theta})\) is the joint prior distribution for \(\boldsymbol{\theta}\), and \(p(\mathcal{D})= \int_{\Theta} p(\mathcal{D}|\boldsymbol{\theta})\pi(\boldsymbol{\theta}) d\theta\) is the marginal likelihood that does not depend on \(\theta\).

In the majority of cases, \(\pi(\boldsymbol{\theta}|\mathcal{D})\) does not have a closed form and for this reason we need to approximate it by simulation. The most popular class of algorithms designed to achieve this task is Markov chain Monte Carlo (MCMC) methods (see Robert and Casella (2013) for a thorough theoretical overview). These methods produce weakly correlated samples from Markov chains whose stationary and limiting distribution coincides with the posterior distribution that we wish to approximate and sample from.

The footBayes package relies on a sophisticated MCMC engine, namely Hamiltonian Monte Carlo (HMC) as performed by the Stan software: HMC borrows its name from the Hamiltonian dynamics of physics and is aimed at suppressing the random-walk and wasteful behaviours in the exploration of the posterior distribution which typically arise when using Gibbs sampling and the Metropolis-Hastings algorithm. For an excellent in-depth summary of HMC, you may read the paper by Betancourt (2017).

In terms of inferential conclusions, we are usually interested in summaries from the marginal posterior distributions of the individual parameters: posterior means, medians, credibility intervals, etc. We can write the posterior distribution of the bivariate Poisson model above as:

\[\pi(\boldsymbol{\theta}|\mathcal{D}) \propto \pi(\boldsymbol{\theta}) \prod_{n=1}^N \mathsf{BivPoisson}(\lambda_{1n}, \lambda_{2n}, \lambda_{3n}),\]

where \(\pi(\boldsymbol{\theta})= \pi(\text{att}) \pi(\text{def})\pi(\mu)\pi(\text{home}) \pi(\beta^{\text{home}}_{h_n})\pi(\beta^{\text{away}}_{a_n})\pi(\beta_0)\pi(\boldsymbol{\beta})\) is the joint prior distribution under the assumption of a priori independent parameter components.

The standard approach is to assign weakly-informative prior distributions to the team-specific abilities. These parameters are considered exchangeable and drawn from two common (prior) distributions:

\[\begin{align} \text{att}_t &\sim \mathcal{N}(\mu_{\text{att}}, \sigma_{\text{att}})\\ \text{def}_t &\sim \mathcal{N}(\mu_{\text{def}}, \sigma_{\text{def}}), \ \ t= \ 1,\ldots,T, \end{align}\]

with hyperparameters \(\mu_{\text{att}}, \sigma_{\text{att}}, \mu_{\text{def}}, \sigma_{\text{def}}\). The model formulation is completed by assigning weakly-informative priors to the remaining parameters. In what follows, some prior options can be specified directly by the user.

Installing the package

Let's install the footBayes package from GitHub:

library(devtools)
install_github("LeoEgidi/footBayes")

and load the following required packages (please install them beforehand):

library(footBayes)
library(bayesplot)
library(loo)
library(ggplot2)
library(dplyr)
library(tidyverse)

Model fit

Static fit

To start with some analysis, let's now use the italy data about the Italian Serie A, specifically season 2000/2001: the season consists of \(T=18\) teams, and we start by fitting a static bivariate Poisson model:
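As a minimal sketch of this step, assuming the italy dataset shipped with the engsoccerdata package (with columns Season, home, visitor, hgoal, vgoal); the column selection and the parameter names in the print call are illustrative, while fit1_stan matches the object name used later in this tutorial:

## Static bivariate Poisson fit (sketch)

library(engsoccerdata)   # assumed source of the italy dataset
data("italy")

italy_2000 <- italy %>%
  dplyr::select(Season, home, visitor, hgoal, vgoal) %>%
  dplyr::filter(Season == 2000)

# static bivariate Poisson model
fit1_stan <- stan_foot(data = italy_2000, model = "biv_pois")

# parameter names such as "rho", "sigma_att", "sigma_def" are assumed
print(fit1_stan, pars = c("home", "rho", "sigma_att", "sigma_def"))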

At this stage, we ignore any time-dependence in our parameters, considering them to be static across distinct match times.

The Gelman-Rubin statistic \(\hat{R}\) (Rhat) is below the threshold 1.1 for all the parameters, and the effective sample size (n_eff), measuring the approximate number of iid replications from the Markov chains, does not appear to be problematic. Thus, HMC sampling has reached convergence.

As we could expect, the home effect is positive (posterior mean of about 0.3); this implies that if two teams are equally good (meaning that their attack and defence abilities almost coincide), and assuming the constant intercept \(\mu \approx 0\), the average number of goals for the home team is \(\lambda_{1} = \exp \{0.3 \} \approx 1.35\), against \(\lambda_{2} = \exp \{0 \} = 1\) for the away team.

In the model above, we are assuming that the covariance \(\lambda_{3n}\) is constant and does not depend on the match and/or on team characteristics/further covariates:

\[\begin{align} \lambda_{3n} =&\ \exp\{\rho\}\\ \rho \sim & \ \mathcal{N}^+(0,1),\\ \end{align}\]

where \(\rho\) is assigned a half-Gaussian distribution with standard deviation equal to 1. According to the fit above, we get an estimate of \(\lambda_{3n}= \exp\{-4.25\} \approx 0.014\), suggesting a low, though non-null, amount of goals' correlation in the 2000/2001 Italian Serie A. Of course, in the next package's version, the user will be allowed to specify a more general linear predictor for \(\log(\lambda_{3n})\), as outlined in the BP presentation above, along with prior distributions for the parameters involved in the covariance formulation.

We can also depict the marginal posterior distributions for \(\rho\) (and possibly for the other fixed-effects parameters) using the bayesplot package for Bayesian visualizations:
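For instance, a minimal sketch with mcmc_areas, assuming the draws can be extracted as a matrix from the returned stanfit object and that the constant covariance intercept is stored under the parameter name rho:

## Marginal posteriors of rho and home via bayesplot (sketch)

posterior_1 <- as.matrix(fit1_stan, pars = c("rho", "home"))
mcmc_areas(posterior_1, prob = 0.95)   # shaded 95% credible intervals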

We can also access the original BP Stan code by typing:
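Since the returned object is a stanfit, one possible way (a sketch) is through rstan's get_stancode:

## Inspect the underlying Stan model code (sketch)

cat(rstan::get_stancode(fit1_stan))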

We now fit the same model under the MLE approach with Wald-type confidence intervals. We can then print the MLE estimates, e.g. for the parameters \(\rho\) and \(\text{home}\):
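A sketch of this step, assuming the package's maximum-likelihood routine is called mle_foot, that an interval argument selects Wald-type intervals, and that the returned list stores the estimates under names such as home and corr (all of these names are assumptions):

## MLE fit with Wald-type confidence intervals (sketch; names assumed)

fit1_mle <- mle_foot(data = italy_2000,
                     model = "biv_pois",
                     interval = "Wald")

fit1_mle$home   # hypothetical element: MLE and Wald bounds for the home effect
fit1_mle$corr   # hypothetical element: MLE and Wald bounds for rho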

We obtain an estimate for the home effect very similar to the Bayesian one.

Changing default priors

One of the common practices in Bayesian statistics is to change the priors and perform some sensitivity tests. The default priors for the team-specific abilities and their related team-level standard deviations are:

\[\begin{align} \text{att}_t &\sim \mathcal{N}(\mu_{\text{att}}, \sigma_{\text{att}}),\\ \text{def}_t &\sim \mathcal{N}(\mu_{\text{def}}, \sigma_{\text{def}}),\\ \sigma_{\text{att}}, \sigma_{\text{def}} &\sim \mathsf{Cauchy}^+(0,5), \end{align}\]

where \(\mathsf{Cauchy}^+\) denotes the half-Cauchy distribution with support \([0, +\infty)\). However, the user is free to elicit different priors, choosing among the following distributions: Gaussian (normal), student-\(t\) (student_t), Cauchy (cauchy) and Laplace (laplace). The optional argument prior allows the user to specify the priors for the team-specific parameters \(\text{att}\) and \(\text{def}\), whereas the optional argument prior_sd allows the user to assign a prior to the group-level standard deviations \(\sigma_{\text{att}}, \sigma_{\text{def}}\). For instance, for each team \(t, t=1,\ldots,T\), we could consider:

\[\begin{align} \text{att}_t &\sim t(4, \mu_{\text{att}}, \sigma_{\text{att}}),\\ \text{def}_t &\sim t(4, \mu_{\text{def}}, \sigma_{\text{def}}),\\ \sigma_{\text{att}}, \sigma_{\text{def}} &\sim \mathsf{Laplace}^+(0,1), \end{align}\]

where \(t(\text{df}, \mu, \sigma)\) denotes a student-\(t\) distribution with df degrees of freedom, location \(\mu\) and scale \(\sigma\), whereas \(\mathsf{Laplace}^+\) denotes a half-Laplace distribution.
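A sketch of this alternative fit, assuming the prior helper functions (student_t(), laplace()) follow the rstanarm-style syntax described above, with the scale of the prior argument left to NULL (see the note below):

## Static bivariate Poisson with student-t abilities and half-Laplace sds (sketch)

fit2_stan <- stan_foot(data = italy_2000,
                       model = "biv_pois",
                       prior = student_t(4, 0, NULL),   # scale left to NULL
                       prior_sd = laplace(0, 1))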

Then, we can compare the marginal posteriors from the two models: the first with Gaussian team-specific abilities and the default \(\mathsf{Cauchy}^+(0,5)\) for the team-level standard deviations, and the second specified above, with student-\(t\) distributed team-specific abilities and the \(\mathsf{Laplace}^+(0,1)\) prior for the team-level standard deviations. We depict here the comparison for the attack team-level standard deviations only:
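A minimal sketch of this comparison, assuming the attack standard deviation is stored as sigma_att in both fits and that the two fits share the same number of posterior draws:

## Compare the sigma_att marginal posteriors of the two fits (sketch)

sigma_att_default <- as.matrix(fit1_stan, pars = "sigma_att")[, 1]
sigma_att_student <- as.matrix(fit2_stan, pars = "sigma_att")[, 1]

mcmc_areas(cbind("normal + Cauchy"     = sigma_att_default,
                 "student-t + Laplace" = sigma_att_student))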

The student-\(t\) + Laplace prior choice induces a lower amount of group variability in the \(\sigma_{\text{att}}\) marginal posterior distribution (hence, a larger shrinkage towards the grand mean \(\mu_{\text{att}}\)).

When specifying the prior for the team-specific parameters through the argument prior, you are not allowed to fix the group-level standard deviations \(\sigma_{\text{att}}, \sigma_{\text{def}}\) to numerical values. Rather, they need to be assigned a reasonable prior distribution. For this reason, the most appropriate specification for the prior argument is prior = 'dist'(0, NULL), where the scale argument is set to NULL (otherwise, a warning message occurs).

Dynamic fit

A structural limitation of the previous models is the assumption of static team-specific parameters, namely teams are assumed to have a constant performance across time, as determined by the attack and defence abilities (att, def). However, teams' performances tend to be dynamic and to change across different years, if not different weeks. Many factors contribute to this aspect of football:

  1. teams act during the summer/winter transfer market, possibly changing their rosters dramatically;

  2. some players could be injured in certain periods, affecting the overall quality of the team in some matches;

  3. coaches could be dismissed by their teams due to unsatisfactory results;

  4. some teams could improve/worsen their attitudes due to the so-called turnover;

and many others. We could then assume dynamics in the attack/defence abilities, as in Owen (2011) and Egidi, Pauli, and Torelli (2018), in terms of weeks or seasons. In such a framework, for a given number of times \(1, \ldots, \mathcal{T}\), the models above remain unchanged, but the priors for the ability parameters at each time \(\tau, \tau=2,\ldots, \mathcal{T},\) become auto-regressive of order 1:

\[\begin{align} \text{att}_{t, \tau} & \sim \mathcal{N}({\text{att}}_{t, \tau-1}, \sigma_{\text{att}})\\ \text{def}_{t, \tau} &\sim \mathcal{N}({\text{def}}_{t, \tau-1}, \sigma_{\text{def}}), \end{align}\]

whereas for \(\tau=1\) we have:

\[\begin{align} \text{att}_{t, 1} & \sim \mathcal{N}(\mu_{\text{att}}, \sigma_{\text{att}})\\ \text{def}_{t, 1} &\sim \mathcal{N}(\mu_{\text{def}}, \sigma_{\text{def}}), \end{align}\]

with hyperparameters \(\mu_{\text{att}}, \mu_{\text{def}}, \sigma_{\text{att}}, \sigma_{\text{def}}\), where the standard deviations capture the evolution variation of the teams' skills (here assumed to be constant across time and teams). Of course, the identifiability constraint must be imposed at each time \(\tau\).

We can use the dynamic_type argument in the stan_foot function, with possible options 'seasonal' or 'weekly', in order to consider multiple seasons (no examples are given in this course) or multiple match-weeks within a single season, respectively. Let's fit a weekly-dynamic parameters model on the Serie A 2000/2001 season:
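A sketch of the weekly-dynamic fit (fit3_stan is the name used in the comparisons below; the printed parameter names are assumptions):

## Weekly-dynamic bivariate Poisson fit (sketch)

fit3_stan <- stan_foot(data = italy_2000,
                       model = "biv_pois",
                       dynamic_type = "weekly")
print(fit3_stan, pars = c("rho", "sigma_att", "sigma_def"))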

From the printed summary, we may note that the degree of goals' correlation again seems to be very small here. Moreover, the Gelman-Rubin statistic for \(\sigma_{\text{att}}\) is relatively high, whereas the effective sample sizes for \(\sigma_{\text{att}}\) and \(\sigma_{\text{def}}\) are quite low. This suggests possible inefficiencies during HMC sampling and that a model reparametrization could be appropriate and effective at this stage. Another option is to play a bit with the prior specification for \(\sigma_{\text{att}}\) and \(\sigma_{\text{def}}\), for instance by specifying a prior inducing less shrinkage in the team-specific abilities.

To deal with these issues and broaden the set of candidate models, let's also fit a dynamic double Poisson model with the double_pois option for the argument model:
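A corresponding sketch:

## Weekly-dynamic double Poisson fit (sketch)

fit4_stan <- stan_foot(data = italy_2000,
                       model = "double_pois",
                       dynamic_type = "weekly")
print(fit4_stan, pars = c("sigma_att", "sigma_def"))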

The fitting problems mentioned above remain for the double Poisson model as well. Thus, it's time to play a little with the prior distributions. In the dynamic approach too we can change the default priors for the team-specific abilities and their standard deviations through the optional arguments prior and prior_sd, respectively. The specification follows the static case almost analogously: with the first argument we may specify the prior family for the team-specific abilities, i.e. the priors for \(\text{att}_{t,1}, \text{def}_{t,1}\) along with the hyper-prior location \(\mu_{\text{att}}, \mu_{\text{def}}\), whereas \(\sigma_{\text{att}}\) and \(\sigma_{\text{def}}\) need to be assigned a proper prior distribution. Suppose we fit the same double Poisson model, but now with student-\(t\) distributed team-specific abilities with 4 degrees of freedom, to possibly capture more extreme team-specific abilities (more variability, i.e. less shrinkage), along with a \(\mathsf{Cauchy}^+(0,25)\) prior for their standard deviations (to better capture a possibly larger evolution variability):

\[\begin{align} \text{att}_{t, \tau} &\ \sim t(4, {\text{att}}_{t, \tau-1}, \sigma_{\text{att}})\\ \text{def}_{t, \tau} &\ \sim t(4, {\text{def}}_{t, \tau-1}, \sigma_{\text{def}})\\ \sigma_{\text{att}}, \sigma_{\text{def}} &\ \sim \mathsf{Cauchy}^+(0,25), \end{align}\]
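A sketch of this refit, again assuming rstanarm-style prior helpers:

## Dynamic double Poisson with student-t abilities and Cauchy+(0,25) sds (sketch)

fit5_stan <- stan_foot(data = italy_2000,
                       model = "double_pois",
                       dynamic_type = "weekly",
                       prior = student_t(4, 0, NULL),
                       prior_sd = cauchy(0, 25))
print(fit5_stan, pars = c("sigma_att", "sigma_def"))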

As we may conclude, the situation is only slightly improved.

Model estimates and visualization tools

Plotting and interpreting team-specific abilities

Once the model has been fitted, there are many interesting summaries to explore. The function foot_abilities allows us to depict posterior/confidence intervals for the global attack and defence abilities on the considered data (attack abilities are plotted in red, defence abilities in blue). The higher the attack and the lower the defence values for a given team, the better the overall team's strength.

We can produce the team-specific abilities for the two static fits above, fit1_mle (MLE) and fit1_stan (Bayes), with red bars for the attack and blue bars for the defence, respectively:
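A sketch of these calls (the argument names object and data follow the pattern used by pp_foot later in this tutorial):

## Team-specific abilities for the static fits (sketch)

foot_abilities(object = fit1_stan, data = italy_2000)   # Bayesian fit
foot_abilities(object = fit1_mle, data = italy_2000)    # MLE fit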

AS Roma, the team winning the Serie A 2000/2001, is associated with the highest attack ability and the lowest defence ability according to both the models. In general, the models seem to well capture the static abilities: AS Roma, Lazio Roma and Juventus (1st, 3rd and 2nd at the end of that season, respectively) are rated as the best teams in terms of their abilities, whereas AS Bari, SSC Napoli and Vicenza Calcio (all relegated at the end of the season) have the worst abilities.

We can also depict the team-specific dynamic plots for the dynamic models:
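For instance, for the weekly-dynamic bivariate Poisson fit (a sketch):

## Dynamic team-specific abilities (sketch)

foot_abilities(object = fit3_stan, data = italy_2000)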

As we can see, dynamic abilities naturally evolve over time: better teams (AS Roma, Lazio Roma, Juventus, Parma) are associated with increasing attack abilities and decreasing defence abilities, whereas the worst ones (AS Bari, SSC Napoli, and Hellas Verona) exhibit decreasing attacking skills and increasing defensive skills. The reason for these increasing/decreasing behaviours is straightforward: at the beginning, all the attack/defence parameters are initialized with location equal to 0. The user is free to change the location, and in the final package's version they will also have the possibility to elicit different team-specific hyper-prior locations.

Model checking

Checking the model fit is a vital statistical task. To this purpose, we can evaluate hypothetical replications \(\mathcal{D}^{\text{rep}}\) under the posterior predictive distribution

\[p(\mathcal{D}^{\text{rep}}| \mathcal{D}) = \int p(\mathcal{D}^{\text{rep}}| \boldsymbol{\theta}) \pi(\boldsymbol{\theta}| \mathcal{D}) d\boldsymbol{\theta},\] and check whether these replicated values are somehow close to the observed data \(\mathcal{D}\). These methods comparing hypothetical replications with the observed data are named posterior predictive checks and have great theoretical and applied appeal in Bayesian inference. See Gelman et al. (2014) for an overview.

The function pp_foot allows us to obtain:

## PP checks: aggregated goal differences and ordered goal differences

pp_foot(data = italy_2000, object = fit1_stan, 
        type = "aggregated")

pp_foot(data = italy_2000, object = fit1_stan, 
        type = "matches")

The aggregated goal-difference frequencies seem to be decently captured by the model's replications: in the first plot, blue horizontal lines denote the observed goal-difference frequencies registered in the dataset, whereas yellow jittered points denote the corresponding replications. A goal difference of 0, corresponding to draw occurrences, is only slightly underestimated by the model. However, in general there are no particular signs of model misfit.

In the second plot, the ordered observed goal differences are plotted against their replications (50% and 95% credible intervals), and from this plot too we do not see particular signs of model misfit.

Other useful PP checks, such as overlaying the data density and the replicated data densities to check for possible inconsistencies, can be obtained through the standard use of the bayesplot package, in this case providing an approximation to a continuous distribution using an input kernel bandwidth (bw = 0.5 in the ppc_dens_overlay call used below):

## PPC densities overlay with the bayesplot package

# extracting the replications

sims <- rstan::extract(fit1_stan)
goal_diff <- italy_2000$hgoal - italy_2000$vgoal

# plotting data density vs replications densities

ppc_dens_overlay(goal_diff, sims$y_rep[,,1]-sims$y_rep[,,2], bw = 0.5)

From the plot above we have empirical confirmation that the goal difference is well captured by the static bivariate Poisson model.

Predictions and predictive accuracy

Posterior out-of-sample probabilities

The hottest task in sports analytics is obtaining future predictions. By considering the posterior predictive distribution for future observable data \(\tilde{\mathcal{D}}\), we acknowledge the whole model's prediction uncertainty (which propagates from the posterior uncertainty), and we can then generate observable values \(\tilde{\mathcal{D}}\) conditioned on the posterior estimates of the model's parameters:

\[p(\tilde{\mathcal{D}}| \mathcal{D}) = \int p(\tilde{\mathcal{D}}| \boldsymbol{\theta}) \pi(\boldsymbol{\theta}| \mathcal{D}) d\boldsymbol{\theta}.\]

We may then predict test set matches by using the argument predict of the stan_foot function, for instance considering the last four weeks of the 2000/2001 season as the test set, and then computing posterior-results probabilities using the function foot_prob for two teams belonging to the test set, such as Reggina Calcio and AC Milan:
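A sketch of this workflow, assuming that predict counts the number of held-out matches at the end of the dataset (here 36 = 4 match-weeks × 9 matches with 18 teams) and that team names are spelled as in the dataset:

## Hold out the last four match-weeks and compute match probabilities (sketch)

fit_pred <- stan_foot(data = italy_2000,
                      model = "biv_pois",
                      predict = 36)   # last 36 matches used as test set (assumed meaning)

foot_prob(object = fit_pred, data = italy_2000,
          home_team = "Reggina Calcio",
          away_team = "AC Milan")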

Darker regions are associated with higher posterior probabilities, whereas the red square corresponds to the actual observed result, 2-1 for Reggina Calcio. According to the posterior-results probabilities, this final observed result had in principle about a 5% probability of happening! (Remember, football is about rare events…).

Home-win posterior probabilities

We can also use the out-of-sample posterior-results probabilities to compute some aggregated home/draw/loss probabilities (based then on the \(S\) draws from the MCMC method) for a given match:

\[\begin{align} p_{\text{home}}= &\ \textrm{Pr}(X>Y) = \frac{1}{S}\sum_{s=1}^S \mathbb{1}(\tilde{x}^{(s)}> \tilde{y}^{(s)})\\ p_{\text{draw}} = &\ \textrm{Pr}(X=Y) = \frac{1}{S}\sum_{s=1}^S \mathbb{1}(\tilde{x}^{(s)}= \tilde{y}^{(s)})\\ p_{\text{loss}} = &\ \textrm{Pr} (X<Y)=\frac{1}{S}\sum_{s=1}^S \mathbb{1}(\tilde{x}^{(s)}< \tilde{y}^{(s)}), \end{align}\]

where \((\tilde{x}^{(s)}, \tilde{y}^{(s)})\) represents the \(s\)-th MCMC pair of the future home and away goals for a given match, respectively, and \(\mathbb{1}(\cdot)\) denotes the indicator function. According to this scenario, we can depict the home-win posterior probabilities for a given test set through the function foot_round_robin:
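A sketch of the corresponding call, reusing the fit with held-out matches:

## Home-win posterior probabilities for the test-set matches (sketch)

foot_round_robin(data = italy_2000, object = fit_pred)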

Red cells denote more likely home-wins (close to 0.6), such as: Lazio Roma - Fiorentina (observed result: 3-0, home win), Lazio Roma - Udinese (observed result: 3-1, home win), Juventus - AC Perugia (observed result: 1-0, home win), Brescia Calcio - AS Bari (observed result: 3-1, home win). Conversely, lighter cells denote more likely away wins (close to 0.6), such as: AS Bari - AS Roma (observed result: 1-4, away win), AS Bari - Inter (observed result: 1-2, away win).

Finally, computations of well-known predictive measures on test-set data, such as the Brier score and the Epstein probability score, will be included in the next package's version; however, they can easily be computed by the users.

Rank-league reconstruction

Statisticians and football amateurs are much interested in final rank-league predictions. However, predicting the final rank positions (along with the teams' points) is often treated as an oracle exercise rather than as a coherent statistical procedure.

We can provide here:

Model comparisons: LOOIC and WAIC

Comparing statistical models in terms of predictive information criteria should conclude many analyses and can be carried out using the leave-one-out cross-validation information criterion (LOOIC) and the Watanabe-Akaike information criterion (WAIC), computed with the loo package. For more details about LOOIC and WAIC, read the paper by Vehtari, Gelman, and Gabry (2017).

The general formulation for the predictive information criteria is the following:

\[ \text{crit}=-2 \widehat{\text{elpd}} = -2 (\widehat{\text{lpd}}- \text{parameters penalty})\]

The interpretation is the following: the lower the value of an information criterion, the better the estimated predictive accuracy of the model. Moreover, if two competing models share the same value of the log predictive density, the model with fewer parameters is favoured.

This is the Occam’s Razor occurring in statistics:

“Frustra fit per plura quod potest fieri per pauciora”

We can perform Bayesian model comparisons by using the loo and waic functions of the loo package. We are going to compare the static and the weekly dynamic models on the Italian Serie A 2000/2001:
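A sketch of such a comparison, assuming each Stan fit stores its pointwise log-likelihood under the default name log_lik expected by extract_log_lik:

## LOOIC comparison of static and dynamic fits (sketch)

loo_static  <- loo(extract_log_lik(fit1_stan))   # static bivariate Poisson
loo_dyn_bp  <- loo(extract_log_lik(fit3_stan))   # weekly-dynamic bivariate Poisson
loo_dyn_dp  <- loo(extract_log_lik(fit4_stan))   # weekly-dynamic double Poisson
loo_dyn_dp2 <- loo(extract_log_lik(fit5_stan))   # dynamic double Poisson, student-t priors

loo_compare(loo_static, loo_dyn_bp, loo_dyn_dp, loo_dyn_dp2)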

According to the LOOIC comparisons above, the weekly-dynamic double Poisson models attain the lowest LOOIC values and are therefore the favoured models in terms of predictive accuracy. The final LOOIC of the static model fit1_stan suggests that the assumption of static team-specific parameters is too restrictive and oversimplified to capture teams' skills over time and make reliable predictions. Nevertheless, the model checking above suggests that even the static model provides a reliable goodness of fit and could be used for simplified analyses not requiring complex dynamic patterns.

Extensions in next versions

Extensions and to-do list for the next package’s versions:

  1. Data web-scraping: automatic routines to scrape data from the internet;

  2. More numerical outputs: posterior probabilities, confusion matrices, betting strategies, etc.;

  3. Diagnostics and PP checks designed for football;

  4. Teams' statistics;

  5. More covariates to be included in the model (possibly by users);

  6. More prior choices.

References

Baio, Gianluca, and Marta Blangiardo. 2010. “Bayesian Hierarchical Model for the Prediction of Football Results.” Journal of Applied Statistics 37 (2): 253–64.

Betancourt, Michael. 2017. “A Conceptual Introduction to Hamiltonian Monte Carlo.” arXiv Preprint arXiv:1701.02434.

Egidi, Leonardo, Francesco Pauli, and Nicola Torelli. 2018. “Combining Historical Data and Bookmakers’ Odds in Modelling Football Scores.” Statistical Modelling 18 (5-6): 436–59.

Egidi, Leonardo, and Nicola Torelli. 2020. “Comparing Goal-Based and Result-Based Approaches in Modelling Football Outcomes.” Social Indicators Research, 1–13.

Gelman, Andrew. 2014. “Stan Goes to the World Cup.” Statistical Modeling, Causal Inference, and Social Science Blog. https://statmodeling.stat.columbia.edu/2014/07/13/stan-analyzes-world-cup-data/.

Gelman, Andrew, John B Carlin, Hal S Stern, and Donald B Rubin. 2014. Bayesian Data Analysis. Vol. 2. Chapman & Hall/CRC Boca Raton, FL, USA.

Groll, Andreas, Gunther Schauberger, and Gerhard Tutz. 2015. “Prediction of Major International Soccer Tournaments Based on Team-Specific Regularized Poisson Regression: An Application to the Fifa World Cup 2014.” Journal of Quantitative Analysis in Sports 11 (2): 97–115.

Karlis, Dimitris, and Ioannis Ntzoufras. 2003. “Analysis of Sports Data by Using Bivariate Poisson Models.” Journal of the Royal Statistical Society: Series D (the Statistician) 52 (3): 381–93.

———. 2009. “Bayesian Modelling of Football Outcomes: Using the Skellam’s Distribution for the Goal Difference.” IMA Journal of Management Mathematics 20 (2): 133–45.

Koopman, Siem Jan, and Rutger Lit. 2015. “A Dynamic Bivariate Poisson Model for Analysing and Forecasting Match Results in the English Premier League.” Journal of the Royal Statistical Society: Series A (Statistics in Society) 178 (1): 167–86.

Owen, Alun. 2011. “Dynamic Bayesian Forecasting Models of Football Match Outcomes with Estimation of the Evolution Variance Parameter.” IMA Journal of Management Mathematics 22 (2): 99–113.

Robert, Christian, and George Casella. 2013. Monte Carlo Statistical Methods. Springer Science & Business Media.

Stan Development Team. 2020. “RStan: The R Interface to Stan.” https://mc-stan.org/.

Vehtari, Aki, Andrew Gelman, and Jonah Gabry. 2017. “Practical Bayesian Model Evaluation Using Leave-One-Out Cross-Validation and WAIC.” Statistics and Computing 27 (5): 1413–32.