Written by: Stephen Hsu
Primary Source: Information Processing
This is a recent MSU seminar on genomic prediction. Vimeo won’t let me embed the video, so click here to watch the talk.
Results are presented for models ranging from simple linear and linear + dominance to reproducing kernel Hilbert space (RKHS) regressions and neural nets. They are consistent with nonlinear (non-additive) effects being sub-dominant, though interesting GxE (gene-by-environment) interaction effects appear in some plant breeding experiments.
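The contrast between a purely additive model and an RKHS regression can be sketched in a few lines. Below is a toy Python illustration (not the speaker's analysis; all data, kernels, and regularization values are invented): genotypes coded 0/1/2 with a mostly additive phenotype plus one small pairwise interaction, fit by kernel ridge regression with a linear kernel (additive) versus a Gaussian kernel (an RKHS that can absorb non-additive structure).

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: n individuals, p SNP markers coded 0/1/2; phenotype is
# mostly additive plus one small pairwise (non-additive) interaction term.
n, p = 200, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)
beta = rng.normal(0.0, 0.3, size=p)
y = X @ beta + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0.0, 1.0, size=n)

# Center and split into train/test
Xc, yc = X - X.mean(axis=0), y - y.mean()
ntr = 150
Xtr, Xte, ytr, yte = Xc[:ntr], Xc[ntr:], yc[:ntr], yc[ntr:]

def krr_predict(K_tr, K_te, y_tr, lam):
    """Kernel ridge regression: alpha = (K + lam*I)^{-1} y, yhat = K_te @ alpha."""
    alpha = np.linalg.solve(K_tr + lam * np.eye(K_tr.shape[0]), y_tr)
    return K_te @ alpha

# Linear kernel == additive model in the marker effects
K_lin_tr, K_lin_te = Xtr @ Xtr.T, Xte @ Xtr.T

# Gaussian kernel == an RKHS regression that can pick up non-additive effects
def gauss_kernel(A, B, h):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h)

# Simple bandwidth heuristic: median pairwise squared distance in training set
h = np.median(((Xtr[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1))
K_rbf_tr, K_rbf_te = gauss_kernel(Xtr, Xtr, h), gauss_kernel(Xte, Xtr, h)

yhat_lin = krr_predict(K_lin_tr, K_lin_te, ytr, lam=10.0)
yhat_rbf = krr_predict(K_rbf_tr, K_rbf_te, ytr, lam=0.1)

print("linear-kernel test correlation:", np.corrcoef(yte, yhat_lin)[0, 1])
print("RKHS-kernel  test correlation:", np.corrcoef(yte, yhat_rbf)[0, 1])
```

When additive effects dominate, as in the talk's human data, the two fits tend to be close; the RKHS fit only pulls ahead when non-additive structure carries real signal.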
The paper below is by the speaker and MSU professor Gustavo de los Campos.
Abstract: Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis.
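The abstract's key computational ingredient, a Gibbs sampler with scalar updates, can be sketched for the simplest case: a Bayesian ridge regression (the "BRR" model in BGLR's terminology) with p > n. The Python sketch below is illustrative only, not BGLR's compiled C/Fortran implementation; the simulated data, hyperparameters, and prior scales are invented. Each marker effect b_j is drawn from its normal full conditional one at a time (the scalar update), with the running residual adjusted in place so each update costs O(n).

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented large-p-with-small-n problem: p = 200 markers, n = 60 records,
# with only the first 5 true effects nonzero.
n, p = 60, 200
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + rng.normal(scale=0.5, size=n)

xx = (X ** 2).sum(axis=0)  # precompute x_j' x_j for every marker

def gibbs_brr(X, y, iters=500, burn=200, s2e=1.0, s2b=0.01):
    """Scalar-update Gibbs sampler for y = Xb + e, b_j ~ N(0, s2b), e ~ N(0, s2e)."""
    n, p = X.shape
    b = np.zeros(p)
    e = y - X @ b  # running residual, updated in place
    draws = []
    for it in range(iters):
        for j in range(p):
            # Scalar update: the full conditional of b_j is normal.
            e += X[:, j] * b[j]            # remove marker j's contribution
            c = xx[j] / s2e + 1.0 / s2b    # conditional precision
            m = (X[:, j] @ e) / (s2e * c)  # conditional mean
            b[j] = rng.normal(m, 1.0 / np.sqrt(c))
            e -= X[:, j] * b[j]            # restore residual with new draw
        # Variance updates from scaled-inverse-chi-square full conditionals
        # (prior scales and degrees of freedom here are arbitrary choices).
        s2e = (e @ e + 1.0) / rng.chisquare(n + 4)
        s2b = (b @ b + 0.01) / rng.chisquare(p + 4)
        if it >= burn:
            draws.append(b.copy())
    return np.mean(draws, axis=0)  # posterior-mean effect estimates

b_hat = gibbs_brr(X, y)
```

Even with p more than three times n, the prior shrinkage keeps the regression well-posed, and the posterior-mean estimates track the simulated effects; BGLR layers many other priors (BayesA/B/C, Bayesian LASSO, RKHS) on the same scalar-update machinery.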