I report on two topics presented at the PHYSTAT 2007 conference at CERN.
1) Event weighting has long been used, but is typically maligned (under the rubric of the "method of moments") as statistically inefficient, producing parameter estimates with larger uncertainties than maximum likelihood fitting. However, event weighting is quite fast, requiring only one pass through the data with no iteration. Further, it has fairly recently been understood that the choice of weight function has a substantial effect on the resulting errors, and by choosing the weight function to minimize the parameter error via the calculus of variations, near-ideal uncertainty can result.
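The idea can be sketched on a toy model (my construction, purely illustrative, not taken from the talk): estimating the slope a of the pdf f(x; a) = (1 + a*x)/2 on [-1, 1]. For any weight function w(x), E[w] is linear in a, so a single weighted sum over the events yields an estimate with no iteration; the classic moment choice w(x) = x is noticeably less efficient than a weight shaped like the likelihood score, x/(1 + a0*x), evaluated at a rough guess a0.

```python
import numpy as np

# Toy model (illustrative, not from the talk): pdf f(x; a) = (1 + a*x)/2
# on [-1, 1].  For any weight w(x), E[w] = c0 + c1*a, with
# c0 = 0.5*integral(w) and c1 = 0.5*integral(x*w), so
#   a_hat = (mean of w(x_i) over events - c0) / c1
# is a one-pass, non-iterative estimate.

rng = np.random.default_rng(7)
a_true, n_events, n_toys = 0.8, 2000, 1000

def sample(n):
    """Draw n events from f(x; a_true) by inverting the CDF."""
    u = rng.uniform(0, 1, n)
    return (-1 + np.sqrt(1 - a_true * (2 - 4 * u - a_true))) / a_true

grid = np.linspace(-1, 1, 20001)
dx = grid[1] - grid[0]

def estimate(x, w):
    """One-pass weighted estimate of a for weight function w."""
    wg = w(grid)
    c0 = 0.5 * np.sum(wg) * dx          # numerical 0.5*integral(w)
    c1 = 0.5 * np.sum(grid * wg) * dx   # numerical 0.5*integral(x*w)
    return (w(x).mean() - c0) / c1

# Weight 1: w(x) = x, the classic method-of-moments choice.
# Weight 2: w(x) = x/(1 + a0*x), the likelihood score at a rough guess a0;
# as a0 approaches a_true the variance approaches the ML (Cramer-Rao) limit.
a0 = 0.7  # crude prior guess, e.g. from an earlier measurement
ests_mom = np.array([estimate(sample(n_events), lambda x: x)
                     for _ in range(n_toys)])
ests_opt = np.array([estimate(sample(n_events), lambda x: x / (1 + a0 * x))
                     for _ in range(n_toys)])

print(f"std, w(x)=x          : {ests_mom.std():.4f}")
print(f"std, w(x)=x/(1+a0 x) : {ests_opt.std():.4f}")  # noticeably smaller
```

Both estimators are unbiased here; the score-like weight simply has a smaller spread over repeated toy experiments, illustrating how the weight choice, not the weighting idea itself, drives the inefficiency.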
2) Evaluation of systematic errors in MC is a tedious fact of life; it's slow. We have for generations done it one variable at a time. However, it turns out that doing so makes us blind to certain systematic effects--even when the systematic errors themselves are uncorrelated. And it also turns out that statisticians have known about this since the 1920s. I'll show what our method blinds us to.
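A minimal sketch of the blind spot (my own toy example, not the speaker's): suppose the measured shift depends on two systematic parameters u and v only through their product. A one-at-a-time scan, holding each parameter at nominal while varying the other, sees nothing at all, while a small factorial design of the kind statisticians introduced in the 1920s exposes the interaction.

```python
# Toy illustration (mine, not from the talk): a shift that is a pure
# two-way interaction of two systematic parameters, with no main effects.
def shift(u, v):
    return u * v

# One-at-a-time (OAT) scan: vary each parameter with the other at nominal (0).
oat_u = [shift(u, 0.0) for u in (-1.0, +1.0)]
oat_v = [shift(0.0, v) for v in (-1.0, +1.0)]
print("OAT shifts:", oat_u, oat_v)   # all zero: the OAT scan is blind here

# 2x2 factorial design: vary both parameters simultaneously.
corners = [(-1, -1), (-1, +1), (+1, -1), (+1, +1)]
factorial = [shift(u, v) for u, v in corners]
# Interaction contrast: ((+,+) + (-,-) - (+,-) - (-,+)) / 4
interaction = (factorial[3] + factorial[0] - factorial[2] - factorial[1]) / 4
print("factorial shifts:", factorial, "-> interaction:", interaction)  # nonzero
```

Note that u and v can be statistically independent (uncorrelated) and the effect still appears: the blindness is a property of the scan design, not of the error sources.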