Probability and Conditional Expectation
Rolf Steyer, Institute of Psychology, University of Jena, Germany
Werner Nagel, Institute of Mathematics, University of Jena, Germany
Why another book on probability?
This book has two titles. The subtitle, 'Fundamentals for the Empirical Sciences', reflects the intentions and the motivation of the first author for writing this book. He received his academic training in psychology but considers himself a methodologist. His scientific interest is in explicating fundamental concepts of empirical research (such as causal effects and latent variables) in terms of a language that is precise and at the same time compatible with the statistical models used in the analysis of empirical data. Applying statistical models aims at estimating parameters and testing hypotheses about them, parameters such as expectations, variances, covariances, and so on (or functions of these parameters, such as differences between expectations, ratios of variances, regression coefficients, etc.), all of which are terms of probability theory. Precision is necessary for securing the logical consistency of theories, whereas compatibility of theories about real-world phenomena with statistical models is crucial for probing the empirical validity of theoretical propositions via statistical inference.
Much empirical research uses some kind of regression in order to investigate how the expectation of one random variable depends on the values of one or more other random variables. This is true for analysis of variance, regression analysis, applications of the general linear model and the generalized linear model, factor analysis, structural equation modeling, hierarchical linear modeling, and the analysis of qualitative data. Using these methods, we aim at learning about specific regressions. A regression is a synonym for what, in probability theory, is called a factorization of a conditional expectation, provided that the regressor is numerical. This explains the main title of this book, 'Probability and Conditional Expectation'.
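To make the term concrete, a minimal sketch in notation of our own choosing (the symbols Y, X, and g are illustrative here, not tied to a particular chapter): if the conditional expectation of a random variable Y given a numerical regressor X can be written as a composition g ∘ X, then g is a factorization of that conditional expectation, and g is precisely what a regression describes.

```latex
% Factorization of a conditional expectation (illustrative notation):
E(Y \mid X) = g \circ X = g(X), \qquad \text{where } g(x) = E(Y \mid X = x).
% For instance, in simple linear regression the factorization g is assumed
% to be affine, g(x) = \beta_0 + \beta_1 x, so that
% E(Y \mid X) = \beta_0 + \beta_1 X.
```

The regression coefficients β₀ and β₁ in this sketch are exactly the kind of parameters, in the sense of the previous paragraph, that statistical methods estimate and test.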
What is it about?
Since the seminal book of Kolmogoroff (1933/1977), the fundamental concepts of probability theory have been considered special concepts of measure theory. A probability measure is a special finite measure, random variables are special measurable mappings, and expectations of random variables are integrals of measurable mappings with respect to a probability measure. This motivates Part I of this book, with three chapters on the measure-theoretical foundations of probability theory. Although at first sight this part may seem far removed from practical applications, the contrary is true: it is indispensable for probability theory and for its applications in the empirical sciences. This applies not only to the concepts of a measure and an integral but also, in particular, to the concept of a measurable mapping, although we concede that the full relevance of this concept will become apparent only in the chapters on conditional expectations. The relevance of measurable mappings is also the reason why chapter 2 is more detailed than the corresponding chapters in other books on measure theory.
Part II of the book is fairly conventional. The material covered - probability, random variable, expectation, variance, covariance, and some distributions - is found in many books on probability and statistics.
Part III is not only the longest part; it is also the core of the book and distinguishes it from other books on probability or on probability and statistics. Only a few of those books contain detailed chapters on conditional expectations; exceptions are Billingsley (1995), Fristedt and Gray (1997), and Hoffmann-Jørgensen (1994). Our book does not cover any statistical model. However, we treat in much detail what we are estimating and which hypotheses we test or evaluate in statistical modeling. How we are estimating is important, but what we are estimating is of most interest from the empirical scientist's point of view, and