Keynote and Featured Speakers

Bengt Muthén

Advances in Mixture Modeling

Abstract: After a brief overview of the many uses of finite mixture modeling, applications of new mixture modeling developments are discussed.  One major development goes beyond the conventional mixture of normal distributions to allow mixtures with flexible non-normal distributions.  This has interesting applications to cluster analysis, factor analysis, SEM, and growth modeling.  The talk focuses on applications of Growth Mixture Modeling for continuous outcomes that are skewed.  Examples are drawn from national longitudinal surveys of BMI as well as twin studies.  Extensions of this modeling to the joint study of survival and non-ignorable dropout are also discussed.
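For readers less familiar with finite mixture modeling, the conventional normal-mixture case that these developments generalize can be sketched with a short EM fit to simulated data. This is a hypothetical two-class example, not drawn from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-class population (hypothetical illustration):
# class 0 ~ N(0, 1), class 1 ~ N(4, 1), mixing proportion 0.3 for class 1.
n = 2000
z = rng.random(n) < 0.3
y = np.where(z, rng.normal(4.0, 1.0, n), rng.normal(0.0, 1.0, n))

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# EM algorithm for a two-component normal mixture.
pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability of class 1 for each observation
    p1 = pi * normal_pdf(y, mu[1], sigma[1])
    p0 = (1 - pi) * normal_pdf(y, mu[0], sigma[0])
    w = p1 / (p0 + p1)
    # M-step: update mixing proportion, means, and standard deviations
    pi = w.mean()
    mu = np.array([np.average(y, weights=1 - w), np.average(y, weights=w)])
    sigma = np.sqrt(np.array([np.average((y - mu[0]) ** 2, weights=1 - w),
                              np.average((y - mu[1]) ** 2, weights=w)]))

print(round(pi, 2), mu.round(2))  # recovers a proportion near 0.3 and means near 0 and 4
```

The developments discussed in the talk replace the normal component densities above with flexible skewed distributions, which matters when outcomes such as BMI are far from symmetric.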


Bengt Muthén obtained his Ph.D. in Statistics at the University of Uppsala, Sweden and is Professor Emeritus at UCLA. He was the 1988-89 President of the Psychometric Society and the 2011 recipient of the Psychometric Society’s Lifetime Achievement Award. He has published extensively on latent variable modeling and is one of the developers of the Mplus computer program, which implements many of his statistical procedures.

Dr. Muthén’s research interests focus on the development of applied statistical methodology in areas of education and public health. Education applications concern achievement development while public health applications involve developmental studies in epidemiology and psychology. Methodological areas include latent variable modeling, analysis of individual differences in longitudinal data, preventive intervention studies, analysis of categorical data, multilevel modeling, and the development of statistical software (namely Mplus!).

For more information about Bengt Muthén, check out his website.

Andrew Gelman

Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University. He has received the Outstanding Statistical Application award from the American Statistical Association, the award for best article published in the American Political Science Review, and the Council of Presidents of Statistical Societies award for outstanding contributions by a person under the age of 40. His books include Bayesian Data Analysis (with John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Don Rubin), Teaching Statistics: A Bag of Tricks (with Deb Nolan), Data Analysis Using Regression and Multilevel/Hierarchical Models (with Jennifer Hill), Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do (with David Park, Boris Shor, and Jeronimo Cortina), and A Quantitative Tour of the Social Sciences (co-edited with Jeronimo Cortina).

Andrew has done research on a wide range of topics, including: why it is rational to vote; why campaign polls are so variable when elections are so predictable; why redistricting is good for democracy; reversals of death sentences; police stops in New York City; the statistical challenges of estimating small effects; the probability that your vote will be decisive; seats and votes in Congress; social network structure; arsenic in Bangladesh; radon in your basement; toxicology; medical imaging; and methods in surveys, experimental design, statistical inference, computation, and graphics.

Jeffrey R. Harring

Dr. Harring is Associate Professor of Measurement, Statistics, and Evaluation (EDMS) in the Department of Human Development and Quantitative Methodology at the University of Maryland. Prior to joining the EDMS faculty in the fall of 2006, Dr. Harring received an M.S. degree in Statistics in 2004 and completed his Ph.D. in the Quantitative Methods Program within Educational Psychology in 2005, both from the University of Minnesota. Before that, Dr. Harring taught high school mathematics for 12 years.

Dr. Harring teaches a variety of graduate-level quantitative methods courses, including General Linear Models I & II, Statistical Analysis of Longitudinal Data, Statistical Computing and Monte Carlo Simulation, Multivariate Data Analysis, and Finite Mixture Models in Measurement and Statistics.

Dr. Harring’s research interests focus on applications of (i) statistical models for repeated measures data, (ii) linear and nonlinear structural equation models, (iii) multilevel models and (iv) statistical computing.

Jamie Robins

The principal focus of Dr. Robins’ research has been the development of analytic methods appropriate for drawing causal inferences from complex observational and randomized studies with time-varying exposures or treatments. The new methods are to a large extent based on the estimation of the parameters of a new class of causal models, the structural nested models, using a new class of estimators, the G-estimators. The usual approach to the estimation of the effect of a time-varying treatment or exposure on time to disease is to model the hazard of failure at time t as a function of past treatment history using a time-dependent Cox proportional hazards model. Dr. Robins has shown that the usual approach may be biased, whether or not one further adjusts for past confounder history in the analysis, when:
(A1) there exists a time-dependent risk factor for or predictor of the event of interest that also predicts subsequent treatment, and (A2) past treatment history predicts subsequent risk factor level.
Conditions (A1) and (A2) will be true whenever there are time-dependent covariates that are simultaneously confounders and intermediate variables.
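A toy simulation can make conditions (A1) and (A2) concrete. This is a hypothetical data-generating process, not Dr. Robins' g-estimation; the remedy sketched is inverse-probability weighting for a marginal structural model, a related class of methods Robins also developed, and it uses the known treatment probabilities for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def expit(v):
    return 1.0 / (1.0 + np.exp(-v))

def ols(X, y, w=None):
    """(Weighted) least squares, returning the coefficient vector."""
    if w is not None:
        sw = np.sqrt(w)
        X, y = X * sw[:, None], y * sw
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical setup: the covariate L1 is both a confounder of the later
# treatment A1 and an intermediate variable on the pathway from A0 to Y.
U  = rng.normal(size=n)                        # unmeasured common cause of L1 and Y
A0 = rng.integers(0, 2, size=n).astype(float)  # randomized baseline treatment
L1 = A0 + U + rng.normal(size=n)               # time-varying covariate: (A1) and (A2) both hold
p1 = expit(L1 - 0.5)
A1 = (rng.random(n) < p1).astype(float)        # later treatment, confounded by L1
Y  = A0 + A1 + 2 * U + rng.normal(size=n)      # true direct effects of A0 and A1 are both 1

X = np.column_stack([np.ones(n), A0, A1])

# Standard adjusted analysis: conditioning on L1 distorts the A0 effect.
adj = ols(np.column_stack([X, L1]), Y)
# Unadjusted analysis: leaves A1 confounded by L1.
naive = ols(X, Y)
# Inverse-probability weighting: weight each subject by
# 1 / P(A1 = observed value | L1), using the known treatment probabilities.
w = 1.0 / np.where(A1 == 1, p1, 1 - p1)
ipw = ols(X, Y, w)

print("adjusted A0, A1:", adj[1].round(2), adj[2].round(2))
print("naive    A0, A1:", naive[1].round(2), naive[2].round(2))
print("IPW      A0, A1:", ipw[1].round(2), ipw[2].round(2))
```

Adjusting for L1 distorts the estimated A0 effect, ignoring L1 distorts the estimated A1 effect, while the weighted analysis recovers both effects near their true value of 1.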
In contrast to previously proposed methods, Dr. Robins’ methods can:

  1. be used to estimate the effect of a treatment (e.g., prophylaxis for PCP) or exposure on a disease outcome in the presence of time-varying covariates (e.g., number of episodes of PCP) that are simultaneously confounders and intermediate variables on the causal pathway from exposure to disease;
  2. allow an analyst to adjust appropriately for the effects of concurrent non-randomized treatments or non-random non-compliance in a randomized clinical trial. For example, in the AIDS Clinical Trial Group (ACTG) trial 002 of the effects of high-dose versus low-dose AZT on the survival of AIDS patients, patients in the low-dose arm had improved survival, but they also took more aerosolized pentamidine (a non-randomized concurrent treatment);
  3. allow an analyst to adequately incorporate information on surrogate markers (e.g., CD4 count) in order to stop, at the earliest possible moment, randomized trials of the effect of the treatment (e.g., AZT) on survival.

Dr. Robins teaches at the Harvard School of Public Health.

Edward Vytlacil

Edward Vytlacil received his PhD in Economics from the University of Chicago in 2000. He is currently a Professor of Economics at New York University, having previously been a faculty member at Stanford University, Columbia University, and Yale University. He is a Co-Editor of the Journal of Applied Econometrics, and an Associate Editor for Econometrica and the Journal of Econometrics. Vytlacil’s work has focused on the micro-econometric methodology for treatment effect and policy evaluation using disaggregate data. A theme in his work has been allowing the effects of a treatment to vary across people, and allowing individuals to have some knowledge of their own idiosyncratic treatment effect and to act upon that knowledge. In addition to his work in econometric methodology, he has published empirical work in labor economics and health economics evaluating the returns to schooling, the returns to job training programs, and the effectiveness of medical interventions.

Dr. Vytlacil will be speaking about Accounting for Individual Heterogeneity in Treatment Effect Analysis.

Sophia Rabe-Hesketh


Simple Methods for Handling Non-Randomly Missing Data

Abstract: In multiple linear or logistic regression, multiple imputation has become increasingly popular for handling missing covariate values. The much simpler approach of listwise deletion or complete-case analysis is often dismissed as making overly strong assumptions.  However, I will point out that complete-case analysis is consistent and performs better than multiple imputation for many types of non-random missingness mechanisms.
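The point about complete-case analysis can be illustrated with a small simulation (hypothetical data-generating processes, not taken from the talk): when missingness of the covariate depends on the covariate itself, dropping incomplete cases leaves the regression slope consistent; when missingness depends on the outcome, it does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

def expit(v):
    return 1.0 / (1.0 + np.exp(-v))

def cc_slope(x, y, observed):
    """OLS slope of y on x using complete cases only."""
    X = np.column_stack([np.ones(observed.sum()), x[observed]])
    return np.linalg.lstsq(X, y[observed], rcond=None)[0][1]

x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # true slope is 2

# Scenario A: x is missing with probability depending on x itself
# (non-random missingness in the covariate).
obs_a = rng.random(n) < expit(-x)
# Scenario B: x is missing with probability depending on the outcome y.
obs_b = rng.random(n) < expit(-y)

print(round(cc_slope(x, y, obs_a), 2))   # near 2: complete-case analysis stays consistent
print(round(cc_slope(x, y, obs_b), 2))   # attenuated: missingness depends on the outcome
```

Selecting cases on the covariate does not change the conditional distribution of y given x, so the complete-case regression remains consistent; selecting on the outcome does change it, which is where methods such as multiple imputation (or weighting) are needed.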

In longitudinal data analysis, dropout or intermittently missing responses are typically dealt with by specifying a joint model for the responses, such as a growth-curve/hierarchical/multilevel model, and estimating the parameters by maximum likelihood. This approach is consistent if missingness of a response depends on observed responses for the same individual, but not if it depends on the response itself or on the random effects in the model. One way of handling such non-random missingness is to model missingness jointly with the response variable of interest, but these joint models are complex, require specialized software, and make unverifiable assumptions. I will suggest simple fixed-effects approaches that are consistent if missingness depends on the random effects and, in the case of binary responses, if missingness depends on the response itself or previous (observed or unobserved) responses.
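A toy simulation (a hypothetical setup, not from the talk) shows why a fixed-effects approach can remain consistent when dropout depends on a subject-level random effect: demeaning within subjects removes the random intercept entirely, while a naive pooled analysis is biased because the composition of subjects changes across occasions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 20_000, 6
beta = 0.5                                    # true within-subject growth rate

a = rng.normal(size=n)                        # subject random intercepts
t = np.arange(T, dtype=float)
y = a[:, None] + beta * t + rng.normal(size=(n, T))

# Hypothetical dropout mechanism: subjects with larger random intercepts
# are increasingly likely to be missing at later occasions.
p_obs = 1.0 / (1.0 + np.exp(-(1.5 - 0.5 * t - 1.5 * a[:, None])))
obs = rng.random((n, T)) < p_obs

# Naive pooled OLS of y on t over the observed cells: biased, because
# high-intercept subjects progressively drop out of the later waves.
t_mat = np.broadcast_to(t, (n, T))
X = np.column_stack([np.ones(obs.sum()), t_mat[obs]])
pooled = np.linalg.lstsq(X, y[obs], rcond=None)[0][1]

# Fixed-effects (within-subject) estimator: demean y and t within each
# subject's observed occasions, which removes the random intercept a.
cnt = np.maximum(obs.sum(axis=1, keepdims=True), 1)
y_dm = np.where(obs, y - np.where(obs, y, 0).sum(axis=1, keepdims=True) / cnt, 0.0)
t_dm = np.where(obs, t_mat - np.where(obs, t_mat, 0).sum(axis=1, keepdims=True) / cnt, 0.0)
fe = (t_dm * y_dm).sum() / (t_dm ** 2).sum()

print(round(pooled, 2), round(fe, 2))  # pooled slope is attenuated; within estimate stays near 0.5
```

Because each subject serves as his or her own control, the within estimator only requires that missingness be unrelated to the occasion-level residuals, not to the random effect.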


Sophia Rabe-Hesketh is a Professor of Education and Biostatistics at the University of California, Berkeley. She was previously a Professor of Social Statistics at the University of London. Her research interests include hierarchical/multilevel models, item response theory, structural equation models, and generalized latent variable models. She has developed a general model framework, “Generalized Linear Latent and Mixed Models,” that unifies and extends these models, along with the corresponding software, gllamm, which has been used in articles in over 550 different journals. Some of her recent research is on estimation methods for random-effects and latent variable models and on non-ignorable missing data problems. She has co-authored six books, including “Generalized Latent Variable Modeling” and “Multilevel and Longitudinal Modeling Using Stata” (both with Anders Skrondal). Her books and over 100 peer-reviewed journal articles are highly cited, with a Google Scholar h-index of 52. Rabe-Hesketh is a member of the technical advisory committees for the U.S. National Assessment of Educational Progress (NAEP) and the Programme for International Student Assessment (PISA) and an elected member of the International Statistical Institute. She is the current president-elect of the Psychometric Society.

For more information about Sophia Rabe-Hesketh, check out her website!

Karl Jöreskog


50 Years of SEM in 50 Minutes??


Click here for slides!



Karl G. Jöreskog is Professor Emeritus at Uppsala University in Sweden. He was born in Åmål, Sweden, in 1935 and did his undergraduate studies at Uppsala University in 1955-1957, with a major in Mathematics and Physics. He received a PhD in Statistics at Uppsala University in 1963 with a dissertation entitled Statistical Estimation in Factor Analysis: A New Technique and Its Foundation, a topic suggested to him by Professor Herman Wold. He was a Research Statistician at Educational Testing Service and a Visiting Professor at Princeton University from 1964 to 1971. During these years he published several papers in Psychometrika on the method of maximum likelihood applied to exploratory and confirmatory factor analysis, covariance structure analysis, and multiple group factor analysis. These papers laid the foundation for the LISREL model, which was presented for the first time at the conference Structural Equation Models in the Social Sciences held in Madison, Wisconsin, in November 1970.

In 1971 Jöreskog returned to Sweden to become Professor of Statistics at Uppsala University. In 1984 he was appointed a Research Professor of Multivariate Statistical Analysis, a position he held until his retirement in 2000.

Jöreskog has received three honorary doctorates (Doctor honoris causa): from the Faculty of Economics and Statistics at the University of Padua, Italy, in 1993; from the Norwegian School of Economics, Bergen, Norway, in 1996; and from the Faculty of Psychology at the Friedrich-Schiller-Universität, Jena, Germany, in 2004. He became Honorary Professor at Tianjin University of Finance and Economics, Tianjin, China, in 2006.

Jöreskog is a member of the Royal Swedish Academy of Sciences, a Fellow of the American Statistical Association, and an Honorary Fellow of the Royal Statistical Society. He served as President of the Psychometric Society in 1977-78 and organized the first European Psychometric Society Meeting in Uppsala in 1978.

Jöreskog received the Arnberg Prize from the Royal Swedish Academy of Sciences in 1971, the Ubbo Emmius Medal from the University of Groningen, Netherlands, in 1983, the ETS Award for Distinguished Service to Measurement from Educational Testing Service in 1987, the Sells Award from the Society of Multivariate Experimental Psychology in 2000, and the Olaus Rudbeck Medal from Uppsala University in 2005.

In 2014 he was awarded the title of Jubilee Doctor by Uppsala University on the occasion of the 50th anniversary of his promotion to PhD. Jöreskog has authored several books and numerous journal articles on factor analysis and its extensions and on structural equation modeling. Together with Dag Sörbom he developed the LISREL computer program.

Thomas Cook



Bigger Data and Stronger Causal Inference from Quasi-Experiments



This presentation summarizes an ongoing line of work in which estimates from an experiment are compared to estimates from various quasi-experimental design and analytic practices, where the treatment group is shared with the experiment but the way of forming the non-equivalent comparison group obviously is not. Work of this kind on regression discontinuity (RD) and comparative RD is summarized in terms of bias reduction and precision, both at and away from the RD cutoff. Also summarized is work on interrupted time series (ITS) designs and comparative ITS designs. But most attention is paid to simpler non-equivalent control group designs, to illustrate practices in this area that reproduce experimental estimates. Included here is work on various ways of selecting intact but non-equivalent comparison groups, and on various ways of selecting covariates to control for any selection that remains after the comparison groups have been chosen.

Click here for slides!



Thomas Cook is interested in social science research methodology, program evaluation, school reform, and contextual factors that influence adolescent development, particularly for urban minorities.

Cook has written or edited 10 books and published numerous articles and book chapters. He received the Myrdal Prize for Science from the Evaluation Research Society in 1982, the Donald Campbell Prize for Innovative Methodology from the Policy Sciences Organization in 1988, the Distinguished Scientist Award of Division 5 of the American Psychological Association in 1997, the Sells Award for Lifetime Achievement from the Society of Multivariate Experimental Psychology in 2008, and the Rossi Award from the Association for Public Policy Analysis and Management in 2012. Cook was chair of the board of the Russell Sage Foundation from 2006 to 2008. He was elected to the American Academy of Arts and Sciences in 2000 and was inducted as the Margaret Mead Fellow of the American Academy of Political and Social Science in 2003. He was part of the congressionally appointed committee evaluating Title I (No Child Left Behind) from 2006 to 2008.


For more information about Dr. Cook, please visit his website.

Donald Hedeker


Modeling Between- and Within-Subject Variances Using Mixed-Effects Location Scale Models for Intensive Longitudinal Data



Intensive longitudinal data are increasingly encountered in many research areas.  For example, ecological momentary assessment and/or experience sampling methods are often used to study subjective experiences within changing environmental contexts. In these studies, up to 30 or 40 observations are usually obtained for each subject over a period of a week or so. Because there are so many measurements per subject, one can characterize a subject’s mean and variance and can specify models for both. In this presentation, we focus on an adolescent smoking study using ecological momentary assessment where interest is on characterizing changes in mood variation. We describe how covariates can influence the mood variances and also extend the statistical model by adding a subject-level random effect to the within-subject variance specification. This permits subjects to have influence on the mean, or location, and variability, or (square of the) scale, of their mood responses.  Models for both continuous and ordinal outcomes are described and will be illustrated with examples.  These mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.
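In symbols, one common specification of a mixed-effects location scale model for a continuous outcome $y_{ij}$ (subject $i$, occasion $j$) looks like the following; this is a sketch consistent with the description above, not necessarily the exact form used in the talk:

```latex
% location (mean) model, with a subject-level random location effect v_i
y_{ij} = \mathbf{x}_{ij}'\boldsymbol{\beta} + v_i + \varepsilon_{ij},
  \qquad v_i \sim N(0, \sigma_v^2)

% scale (within-subject variance) model, with covariates w_ij and a
% subject-level random scale effect \omega_i
\varepsilon_{ij} \sim N\bigl(0, \sigma^2_{\varepsilon_{ij}}\bigr),
  \qquad \log \sigma^2_{\varepsilon_{ij}} = \mathbf{w}_{ij}'\boldsymbol{\tau} + \omega_i,
  \qquad \omega_i \sim N(0, \sigma_\omega^2)
```

The log link on the variance keeps $\sigma^2_{\varepsilon_{ij}}$ positive, and the random effect $\omega_i$ lets subjects differ in mood variability even after the covariates in $\mathbf{w}_{ij}$ are accounted for.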

Click here for slides!



Donald Hedeker, PhD, is a Professor of Biostatistics in the Department of Health Studies at The University of Chicago. Previously, from 1993 to 2014, Don was a faculty member of the School of Public Health, University of Illinois at Chicago. He received his Ph.D. in Quantitative Psychology from The University of Chicago.

Don’s main expertise is in the development and use of advanced statistical methods for clustered and longitudinal data, with particular emphasis on mixed-effects models. He is the primary author of several freeware computer programs for mixed-effects analysis: MIXREG for normal-theory models, MIXOR for dichotomous and ordinal outcomes, MIXNO for nominal outcomes, and MIXPREG for counts.  In 2008, these programs were restructured into the Supermix software program distributed by Scientific Software, Inc.

With Robert Gibbons, Don is the author of the text “Longitudinal Data Analysis,” published by Wiley in 2006.  More recently, Don has developed methods for intensive longitudinal data, resulting in the freeware MIXREGLS program.
In 2000, Don was named a Fellow of the American Statistical Association, and he is an Associate Editor for Statistics in Medicine and Journal of Statistical Software.


For more information about Dr. Hedeker, please visit his website.

Judea Pearl

What on earth are we modeling? Data or Reality? Reflections on structural equations, external validity, heterogeneity and missing data


Recent developments in graphical models and the logic of counterfactuals have given rise to major advances in causal inference, including confounding control, policy analysis, misspecification tests, mediation, heterogeneity, selection bias, missing data, and the integration of data from diverse studies. I attribute these developments to two methodological commitments that define the “deductive” or “model-based” approach. First, a commitment to commence the analysis by asking what reality should be like for a solution to exist and, second, a commitment to encode reality in terms of data-generating processes, rather than distributions of observed or counterfactual variables. These two principles have led to a fruitful symbiosis between graphs and counterfactuals that has unified the potential outcome framework of Neyman, Rubin, and Robins with the SEM tradition of Wright, Duncan, and Jöreskog, and the econometric tradition of Haavelmo, Marschak, and Heckman. Recent work further shows that deductive causal analysis is helpful in meta-analysis and missing data applications, two problem areas previously thought to be the sole province of statistical analysis.
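As one concrete example of such model-based reasoning, the back-door adjustment formula from Pearl's Causality identifies the effect of an intervention $\mathrm{do}(x)$ from purely observational data, whenever a set $Z$ of measured covariates blocks all back-door (confounding) paths from $X$ to $Y$ in the causal graph:

```latex
P\bigl(y \mid \mathrm{do}(x)\bigr) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
```

The graph, a representation of the data-generating process, is what certifies that $Z$ is an admissible adjustment set; no such certification is available from the joint distribution of the observed variables alone.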

The talk will focus on the following questions:

1. What mathematics can tell us about “external validity” or “generalizing across populations.”
2. When and how sample-selection bias can be circumvented.
3. What population data can tell us about unsuspected heterogeneity.
4. What relationships are estimable from partially missing data, and how.

Reference: J. Pearl, Causality (Cambridge University Press, 2000; 2nd ed. 2009).




Judea Pearl is a professor of computer science and statistics at UCLA, and a distinguished visiting professor at the Technion, Israel Institute of Technology. He joined the faculty of UCLA in 1970, where he currently directs the Cognitive Systems Laboratory and conducts research in artificial intelligence, human reasoning, and the philosophy of science. Pearl has authored several hundred research papers and three books: Heuristics (1984), Probabilistic Reasoning (1988), and Causality (2000; 2009). He is a member of the National Academy of Engineering, the American Academy of Arts and Sciences, and a Fellow of the IEEE, AAAI, and the Cognitive Science Society. Pearl received the 2008 Benjamin Franklin Medal for Computer and Cognitive Science and the 2011 David Rumelhart Prize from the Cognitive Science Society. In 2012, he received the Technion’s Harvey Prize and the ACM A.M. Turing Award for the development of a calculus for probabilistic and causal reasoning.