# Resources

The internet nowadays abounds with helpful resources for learning data science, programming, etc. For classical statistics, however, I find most of the popular online resources fairly cursory. My hunch is that the dearth of accessible, detailed resources exists because, for most professionals, statistics nowadays boils down to A/B testing (i.e., the new name for Stats 101 hypothesis testing) and predictive machine learning models. This leaves people like me, who work more on the inference side, high and dry when we find ourselves wondering why a particular method isn’t working, or trying to improve our inference for a future analysis.

Udemy and the like are helpful starting points when learning about new statistical methods, but I find individual blog posts, forums like Cross Validated, and (unfortunately :P) academic papers are the best resources for insightful discussions by experts on the finer details of statistical methods.

This page is my attempt to compile helpful, lesser-known resources in one place. I’ve tried to select only resources that were approachable enough for me to understand, and therefore actually useful to my work as a statistician.

## General Statistics Stuff

“Basic” hypothesis testing, ordinary Gaussian linear regression and its modifications, variable coding, power calculations, and other general philosophical considerations.

- This blog has a good overview for a lot of general statistical modeling, hypothesis testing, and R programming concepts (mostly in base R, but some common packages like car and MASS are also used)
- Paper going over two common fallacies with power calculations, “observed power” and “detectable effect size”
- Phases of a clinical trial
- A discussion of transformations in linear models
- The discussion leads to another post about the influence histogram binwidth has on how we visualize data
- Blog post discussing different contrast codings
- UCLA also has a tutorial on contrast coding, pretty comprehensive
- A pretty interesting discussion on how treatment coding makes slopes and intercept correlated
- Non-overlapping confidence intervals =/= statistical significance
- Use Anderson-Darling instead of the Kolmogorov-Smirnov test
- An almost unnecessarily thorough answer on figuring out what distribution fits some data the best
- A ridiculously comprehensive map of 800 statistical tests and what categories they belong to
- Adaptive/sequential designs are not the same as p-hacking
- An introduction to GLS/WLS using some R code
- Should you be centering your predictors?
- Why are survival models considered semi-parametric?
- How can you interpret R^2 when there is no intercept?
- Moving from p-values to estimation
- Lord’s paradox
- Conversation on why we don’t always bootstrap
- An insightful article on the importance of effective communication for statisticians
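A few of the entries above come up constantly in practice, contrast coding especially. As a quick base-R illustration (on made-up simulated data), here is how treatment vs. sum coding changes what the intercept means without changing the model's fit:

```r
# Treatment vs. sum contrasts change what the intercept and slopes mean,
# not the fitted values. Simulated three-group data for illustration.
set.seed(1)
d <- data.frame(g = gl(3, 50, labels = c("ctrl", "lo", "hi")))
d$y <- c(10, 12, 15)[d$g] + rnorm(150)

# Treatment coding (R's default): intercept = mean of the reference level
fit_trt <- lm(y ~ g, data = d, contrasts = list(g = "contr.treatment"))

# Sum coding: intercept = mean of the three group means
fit_sum <- lm(y ~ g, data = d, contrasts = list(g = "contr.sum"))

coef(fit_trt)[1]  # sample mean of "ctrl"
coef(fit_sum)[1]  # grand mean of the group means
```

The slopes change meaning accordingly: under treatment coding they are differences from the reference group, under sum coding they are deviations from the grand mean.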

## Analyzing Experimental Data/Post-hoc comparisons

Focusing on estimating marginal means and multiplicity adjustments for experimental data. The Mixed Effects Models section below can also be used to analyze experimental data (the resources from Keith Lohse are a good place to start).

- {emmeans} is a really popular R package for estimating marginal means after you have fit a model (many different models and R packages are supported) and a lot of vignettes revolve around testing experimental data
- Discussion from emmeans author on using “mvt” multivariate t-distribution multiplicity adjustment
- Difference between glht (from the {multcomp} R package) and the lsmeans, now {emmeans}, package
- Do we really need a global F test before doing post-hoc tests?
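To make the {emmeans} idea concrete, here is a base-R sketch (on hypothetical simulated data) of the quantity it estimates: model predictions averaged over a balanced reference grid of the other factors.

```r
# An "estimated marginal mean" is a model prediction averaged over a
# balanced reference grid -- sketched here by hand with predict().
set.seed(2)
d <- expand.grid(trt = factor(c("A", "B")), block = factor(c("b1", "b2", "b3")))
d <- d[rep(1:nrow(d), each = 10), ]
d$y <- ifelse(d$trt == "B", 2, 0) + as.integer(d$block) + rnorm(nrow(d))

fit <- lm(y ~ trt + block, data = d)

# EMM for each treatment: average prediction across all blocks
grid <- expand.grid(trt = factor(c("A", "B")), block = factor(c("b1", "b2", "b3")))
grid$pred <- predict(fit, newdata = grid)
emm <- tapply(grid$pred, grid$trt, mean)
emm["B"] - emm["A"]  # recovers the treatment coefficient in this additive model
```

{emmeans} does this bookkeeping (plus standard errors and multiplicity adjustments) for a huge range of model classes.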

## GLMs

General considerations on GLMs, what they mean, how they are computed, deciding on distributions and link functions, etc.

- Probably the best set of introductory lecture notes to GLMs, quasilikelihood, different link functions, and applications (the only bad thing is a lot of figures are missing, but the R code is there)
- If you prefer textbooks, chapter 15 from John Fox’s book goes over GLMs in a pretty approachable way
- A nice answer on thinking conceptually about GLMs really modeling conditional distributions
- A nice intro by Clay Ford to negative binomial regression using MASS
- Simon Wood has a nice blog post on making CIs for GLMs
- The statistics complement to MASS has an interesting section about the dispersion parameter in Gamma regression
- A not-completely-unrelated SE discussion on gamma regression vs. lognormal regression
- SAS and R give different SE estimates in gamma regression due to a difference in parameterization
- Issues with link functions in gamma GLMs
- Using offsets in Poisson regression
- log-transform vs. log-link?
- Defining your own link function in binomial regression
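The offset trick in particular is worth internalizing; a minimal simulated sketch of why the offset puts the coefficients on the rate scale:

```r
# Offsets turn a Poisson model for counts into a model for rates:
# log E[count] = log(exposure) + X*beta, so beta lives on the rate scale.
# Simulated data with known coefficients (0.5, 0.8) for illustration.
set.seed(3)
exposure <- runif(200, 1, 10)          # e.g. person-years observed
x <- rnorm(200)
mu <- exposure * exp(0.5 + 0.8 * x)    # true rate = exp(0.5 + 0.8 x)
y <- rpois(200, mu)

fit <- glm(y ~ x + offset(log(exposure)), family = poisson)
coef(fit)  # close to (0.5, 0.8)
```

Note the offset enters with a fixed coefficient of 1; estimating a free coefficient on log(exposure) instead is a different (and sometimes useful) model.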

## Mixed Effects Models

LMMs and GLMMs, and their various considerations. This topic can get very hairy, so there are a ton of resources out there. Some are more helpful than others.

- Ben Bolker is the GOAT when it comes to mixed effects models, this lecture is a good place to start
- He has a bunch of textbooks too, none of which I have read. Some of the supplementary material is available online though
- For getting started with mixed effects models in R, the CRAN page is actually a great resource, with brief explanations on the overwhelming number of different packages
- lme4 and nlme are the most commonly used R packages, here is a good answer summarizing their strengths and weaknesses
- A nice conceptual tutorial to LMMs with R examples is provided here (On the same site, the author also has some nice articles on the difference between ML and REML that I found interesting, as well as a comparison of LMMs + bootstrapping and Bayesian hierarchical models)
- Mixed effects model tutorials for planned factorial designs (Keith Lohse is a professor at WashU and has a Youtube channel that walks through a lot of these workshops)
- A workshop by University of North Dakota on power analyses in GLMMs using the simr package, emphasizing tests of fixed effects
- An insightful presentation by Bates on why lme4 doesn’t have p-values/CIs/etc. on variance components
- A nice answer on using bootstrapping to make CIs for lmer models
- A discussion of crossed vs mixed random effects
- Paper comparing conditional models (e.g. GLMMs) and marginal models (e.g. GEEs), arguing that conditional models are the logical superset of models
- An SE answer also talking about conditional models vs. marginal models
- Yet another one, but this one I think is more cogent
- This one gets right to the point in binary model setting, no fluff
- Paper from Gelman on the strengths (prediction) and limitations (causal inference in observational data) of multilevel models
- A paper on Kenward-Roger degrees of freedom when doing hypothesis testing with LMMs
- Satterthwaite vs. Kenward-Roger
- An SE discussion of GLMMs vs. GLS models
- Doug Bates doesn’t like the term BLUP and prefers “conditional mode”
- A discussion of compound symmetry in the covariance matrix, and how it makes certain models equivalent
- Speed test! Fitting mixed effects models in Python, Julia, and R
- Really interesting post on negative intraclass correlations in GLMMs
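For a minimal working example of the models these resources discuss, here is a random-intercept LMM fit with {nlme} (which ships with R) on simulated data; {lme4}'s `lmer(y ~ x + (1 | g))` fits the same model:

```r
# A random-intercept model: y_ij = beta0 + beta1 * x_ij + b_i + e_ij,
# with b_i ~ N(0, sigma_b^2). Simulated data with known parameters.
library(nlme)
set.seed(4)
g <- gl(20, 15)                        # 20 groups, 15 observations each
b <- rnorm(20, sd = 2)[g]              # group-level random intercepts
x <- rnorm(300)
d <- data.frame(g, x, y = 1 + 0.5 * x + b + rnorm(300))

fit <- lme(y ~ x, random = ~ 1 | g, data = d)
VarCorr(fit)   # between-group SD near 2, residual SD near 1
fixef(fit)     # fixed effects near (1, 0.5)
```

The variance components are where most of the subtlety in this section lives (REML vs. ML, CIs on variances, etc.), so it helps to have a toy model where you know the truth.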

## GEEs

Marginal models, when you should use them, and how they stack up conceptually with other models like GLMMs.

- Tbh SAS documentation is pretty good and their GEE overview is a good place to start
- Notes from a UNC course on fitting binary GEE model
- CV answer on sandwich estimators for variance and conditions for robustness
- Similar content but less approachable
- Andrew Heiss combines GEEs and IPW and months of crying into one blog post
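The sandwich/robust variance idea at the heart of GEEs can be sketched in base R for the simplest possible case, an intercept-only model with clustered data (this is the CR0-style estimator; {geepack} and SAS handle the general regression case):

```r
# The "sandwich" idea behind GEE robust SEs, intercept-only case:
# the robust variance of the mean sums squared *cluster-level*
# residual totals instead of assuming independent observations.
set.seed(5)
cl <- gl(30, 10)                          # 30 clusters of 10 observations
y <- rnorm(30, sd = 1)[cl] + rnorm(300)   # strong within-cluster correlation

m <- mean(y)
r <- y - m
naive_var  <- var(y) / length(y)                       # assumes independence
robust_var <- sum(tapply(r, cl, sum)^2) / length(y)^2  # cluster sandwich

c(naive = sqrt(naive_var), robust = sqrt(robust_var))
# robust SE is noticeably larger because observations cluster
```

This is exactly why naive SEs from a GLM on clustered data are anti-conservative, and why the GEE working-correlation choice affects efficiency but not (asymptotic) validity.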

## Bayesian Statistics

Emphasis on McElreath’s Statistical Rethinking course.

- Intro to R resources in Bayesian statistics (rstan, brms), with basic examples
- McElreath’s Statistical Rethinking but using brms
- Paper on Bayesian model evaluation using LOOCV - haven’t read this yet - might eventually get to it
- What does Frank Harrell mean when he says “make the sample size a random variable when possible?”
- Vignette for estimating nonlinear models in brms
- p-values vs. Bayes Factors
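Before reaching for brms/Stan, it helps to see the update they generalize. A conjugate Beta-Binomial toy example in base R (numbers are made up for illustration):

```r
# The "hello world" behind the machinery brms wraps: prior Beta(a, b),
# k successes in n trials -> posterior Beta(a + k, b + n - k).
a <- 2; b <- 2          # weakly informative prior centered at 0.5
k <- 14; n <- 20        # observed: 14 successes out of 20 trials

post_a <- a + k
post_b <- b + n - k
post_mean <- post_a / (post_a + post_b)             # 16/24, about 0.667
cred_int <- qbeta(c(0.025, 0.975), post_a, post_b)  # 95% credible interval
```

Everything in Statistical Rethinking is this same move (prior + likelihood -> posterior), just with MCMC doing the integration when conjugacy runs out.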

## Survival Models

- Good place to start in R
- A deep dive into the intuition of Cox models
- Violation of proportional odds is not fatal
- Related to PO
- Also related
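A minimal Cox model workflow with the {survival} package (which ships with R), including the proportional-hazards check that the posts above spend so much time on:

```r
# Fit a Cox model on the bundled lung dataset and check the
# proportional-hazards assumption via Schoenfeld residuals.
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)$coefficients   # log hazard ratios and their SEs

ph_check <- cox.zph(fit)    # small p-values flag non-proportional hazards
ph_check
```

Plotting `ph_check` shows how each coefficient drifts over time, which is usually more informative than the test p-value alone.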

## Robust Statistics

- A compendium of resources available here
- R package WRS2 included in the above has nice vignette for basic robust statistical tests, like robust ANOVA
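WRS2 covers robust tests; for a self-contained taste of the underlying idea, here is M-estimation with `MASS::rlm` (MASS ships with R) on simulated data contaminated with gross outliers:

```r
# Robust regression via Huber M-estimation: rlm downweights outliers
# that drag ordinary least squares around. Simulated contaminated data.
library(MASS)
set.seed(6)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)
y[1:5] <- y[1:5] + 20          # five gross outliers

fit_ols <- lm(y ~ x)
fit_rob <- rlm(y ~ x)          # Huber M-estimation by default

c(ols = coef(fit_ols)[1], robust = coef(fit_rob)[1])  # true intercept is 1
```

The OLS intercept gets pulled toward the outliers while the robust fit stays near the truth; the same logic powers the robust ANOVA-style tests in WRS2.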

## Nonlinear modeling (nls, GAMS, NLMMs)

- Simon Wood is the person to go to about GAMs, his blog is a good place to start
- M. Clark is also great
- Related to mgcv
- Also Related to mgcv
- Not about GAMs, but still about nonlinear modeling (in this case NLMMs)
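A minimal {mgcv} example (it ships with R) showing the core GAM move: `s(x)` lets penalized regression splines pick the amount of smoothness from the data.

```r
# Recovering a nonlinear signal with a penalized regression spline.
# Simulated sine-shaped data for illustration.
library(mgcv)
set.seed(7)
x <- runif(300)
y <- sin(2 * pi * x) + rnorm(300, sd = 0.3)

fit <- gam(y ~ s(x))
summary(fit)   # edf of s(x) well above 1 => clearly nonlinear
# plot(fit) would show the recovered sine shape
```

Simon Wood's materials go much deeper on basis choice, smoothness selection (GCV vs. REML), and uncertainty for the smooths.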

## R Programming and Shiny

- Transforms in ggplot using scale_ happen before computing the model using stat_smooth
- Adding custom legends to ggplot (I look this up like every other week)
- No more !! and quoting, please
- Using greek symbols in R
- Obscure tidyverse functions
- How to use …?
- You can’t use local packages when deploying to shinyapps.io
- Dealing with long labels in ggplot
- Giving geom_smooth additional model parameters
- Common Shiny errors
- More Shiny errors
- reactive is lazy, observe is eager
- More on reactive
- Setting row.names in a dplyr pipe?
- SuperLearner package for fitting ensemble machine learning models for prediction
- This man hates R but he has a point
- An overview of Freddy Drennan’s ndexr solution for deploying Shiny apps
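On the dots question above, a tiny self-contained example of how `...` lets a wrapper forward options it doesn't need to know about (the `describe` function is made up for illustration):

```r
# `...` forwards arbitrary named arguments to inner calls.
describe <- function(x, ...) {
  c(mean = mean(x, ...), median = median(x, ...))
}

describe(c(1, 5, 9))
describe(c(1, 5, NA, 9), na.rm = TRUE)  # na.rm rides along through ...
```

One caveat: arguments that rely on non-standard evaluation (like `lm`'s `subset`) do not always survive a trip through `...`, which trips up a lot of wrapper authors.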

If you are subscribed to Shiny tags on LinkedIn, you’ve probably seen this guy evangelizing his “truly open source” alternative to Posit Connect and Appsilon. To be fair, he probably knows what he’s doing.

## Miscellaneous Interesting Things

Basically like the general statistics section up above, but this section goes into more advanced “niche” topics.

- Comparing AUCs of different ML classifiers using DeLong’s method
- Don’t use the elbow method in clustering, use BIC or Calinski-Harabasz
- An interesting post explicating orthogonal polynomial contrasts for regression and where they come from
- Dunning-Kruger is just autocorrelation
- Jarque-Bera test instead of Shapiro-Wilk?
- Is Shapiro-Wilk the best?
- Double Robust Estimation for causal inference (TMLE)
- Causal inference visual guide
- The intuition behind IPW in causal inference
- A similar article, a little windier
- Youden’s J statistic
- Biostats consultancy that specializes in adaptive designs
- Building machine learning models from scratch using Python
- scikit-learn uses L2 penalization by default
- What are chunk tests?
- Tweedie Regression
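Youden's J is simple enough to compute by hand; a base-R sketch on simulated classifier scores (the cutoff grid and data are made up for illustration):

```r
# Youden's J = sensitivity + specificity - 1; the threshold maximizing
# J is a common way to pick a classification cutoff from an ROC curve.
set.seed(8)
truth <- rbinom(500, 1, 0.4)
score <- truth + rnorm(500)          # positives score ~1 higher on average

cuts <- seq(-2, 3, by = 0.01)
J <- sapply(cuts, function(cutoff) {
  pred <- as.integer(score > cutoff)
  sens <- mean(pred[truth == 1] == 1)
  spec <- mean(pred[truth == 0] == 0)
  sens + spec - 1
})
best_cut <- cuts[which.max(J)]   # near 0.5, midway between the score means
```

Worth remembering that J weighs sensitivity and specificity equally, which is rarely the right trade-off when misclassification costs differ.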

## Nightmares from MS program

Things that remind me of probability classes in grad school. Surprisingly enough, grad school probability (beyond knowing pbinom and the like) can occasionally be pretty useful in a workplace context, though I prefer to estimate probabilities through simulation when needed.
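In that spirit, here is the simulation-over-derivation approach applied to a classic grad-school probability exercise, the birthday problem (exact answer for 23 people is about 0.507):

```r
# Estimating a probability by Monte Carlo instead of deriving it:
# P(at least two of 23 people share a birthday).
set.seed(9)
shared <- replicate(20000, {
  bdays <- sample(365, 23, replace = TRUE)
  any(duplicated(bdays))
})
mean(shared)   # close to 0.507
```

Twenty thousand replicates put the Monte Carlo error around 0.004, which is plenty for most workplace "what are the odds of..." questions.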