Traditional linear regression, at the level taught in most introductory statistics courses, uses 'fixed effects' as predictors of a particular outcome. This treatment of the independent variables is often sufficient. However, as research questions have become more sophisticated and computational power has advanced rapidly, the use of random effects in statistical modeling has become more commonplace. Treating predictors in a model as random effects allows for more general conclusions: a classic example is treating the studies that comprise a meta-analysis as random rather than fixed. In addition, random effects allow for a more accurate representation of data arising from complicated study designs, such as multilevel and longitudinal studies, which in turn permits more accurate inference on the fixed effects that tend to be of primary interest. It is important to understand the distinction between fixed and random effects in the most general of settings, while also knowing the benefits and risks of their simultaneous use in specific yet common situations. © 2011 Wiley Periodicals, Inc.
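The multilevel situation described above can be sketched with a small simulated example. The simulation and the use of Python's statsmodels library are illustrative assumptions, not part of the original article; the point is only to contrast a fixed-effects-only fit with a mixed model that adds a random intercept per group (e.g., per study in a meta-analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative simulation (not from the article): 20 groups, each with
# its own random intercept drawn from a normal distribution, mimicking
# a multilevel design such as studies within a meta-analysis.
rng = np.random.default_rng(0)
n_groups, n_per = 20, 30
group = np.repeat(np.arange(n_groups), n_per)
group_effect = rng.normal(0.0, 1.0, n_groups)[group]  # random intercepts
x = rng.normal(size=n_groups * n_per)
y = 2.0 + 0.5 * x + group_effect + rng.normal(size=n_groups * n_per)
df = pd.DataFrame({"y": y, "x": x, "group": group})

# Fixed-effects-only regression: ignores the grouping structure,
# so the group-level variation is absorbed into the residual error.
ols_fit = smf.ols("y ~ x", data=df).fit()

# Mixed model: x remains a fixed effect of primary interest, while
# each group receives its own random intercept.
mixed_fit = smf.mixedlm("y ~ x", data=df, groups=df["group"]).fit()
print(mixed_fit.summary())
```

Both models estimate the same fixed slope for `x`, but the mixed model additionally estimates the between-group variance, which yields more honest standard errors for the fixed effects when observations are clustered.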