Ten years ago Bill Gale of AT&T Bell Laboratories was the primary organizer of the first Workshop on Artificial Intelligence and Statistics. In the early days of the Workshop series it seemed clear that researchers in AI and statistics had common interests, though with different emphases, goals, and vocabularies. In learning and model selection, for example, AI's historical goal of building autonomous agents probably contributed to a focus on parameter-free learning systems, which relied little on an external analyst's assumptions about the data. This seemed at odds with statistical strategy, which stemmed from the view that model selection methods were tools to augment, not replace, the abilities of a human analyst. Thus, statisticians have traditionally spent considerably more time exploiting prior information about the environment to model data, and on exploratory data analysis methods tailored to their assumptions. In statistics, special emphasis is placed on model checking, making extensive use of residual analysis, because all models are 'wrong', but some are better than others. It is increasingly recognized that AI researchers and/or AI programs can exploit the same kinds of statistical strategies to good effect. Often AI researchers and statisticians emphasized different aspects of what, in retrospect, we might now regard as the same overriding tasks.