Updating mean and variance estimates

There is a way to compute a running variance that is more accurate than the usual textbook one-pass formula and is guaranteed never to return a negative result. (See Comparing three methods of computing standard deviation for examples of just how badly the textbook formula can behave.) This better way goes back to a 1962 paper by B. P. Welford and is presented in Donald Knuth's Art of Computer Programming, Vol. 2, 3rd edition, page 232. The method computes the variance as the $x_i$ arrive one at a time, so the data do not need to be saved for a second pass.

In the absence of information to the contrary, I assume you want univariate calculations and the $n-1$-denominator version of the variance. One useful way to maintain these is to update the running mean together with the sum of squared deviations from the mean, which I'll call SSE. This is not the most stable possible calculation, but it is sufficient for almost all purposes; if stability does become an issue for you, there are other things that can be done.
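
Concretely, when a new value $x_n$ arrives, the mean and SSE can be updated as follows (this is the standard Welford-style recurrence; the symbols $\bar{x}_n$ and $\mathrm{SSE}_n$ are my notation):

$$\delta = x_n - \bar{x}_{n-1}, \qquad \bar{x}_n = \bar{x}_{n-1} + \frac{\delta}{n}, \qquad \mathrm{SSE}_n = \mathrm{SSE}_{n-1} + \delta\,(x_n - \bar{x}_n),$$

and the sample variance after $n$ values is $s_n^2 = \mathrm{SSE}_n / (n-1)$.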

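In code, a minimal sketch of this update might look like the following (Python; the class and method names are illustrative, not taken from any particular library):

    class RunningVariance:
        """Running (one-pass) mean and variance via the Welford/Knuth update."""

        def __init__(self):
            self.n = 0        # number of values seen so far
            self.mean = 0.0   # running mean
            self.sse = 0.0    # running sum of squared deviations from the mean

        def push(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.sse += delta * (x - self.mean)   # note: uses the updated mean

        def variance(self):
            # n-1 denominator (sample variance); needs at least two values
            return self.sse / (self.n - 1) if self.n > 1 else 0.0

    rv = RunningVariance()
    for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
        rv.push(x)
    print(rv.mean, rv.variance())   # 5.0 and 32/7, with no second pass over the data
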
A clever solution to this stability problem for streaming mean and variance computation was proposed by West in 1979. In his algorithm the summed quantities are controlled to be, on average, of comparable size. (It is not the only alternative; for a detailed numerical study of the possible options, see the paper linked below.) The West algorithm also supports mean and variance computation for positively weighted samples.
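
For the weighted case, a sketch in the same spirit accumulates the total weight in place of the count (this follows the commonly cited weighted incremental update attributed to West; his paper and the Wikipedia article give the exact formulation and the appropriate bias corrections):

    def weighted_running_variance(pairs):
        """Streaming weighted mean and variance for (value, weight) pairs, weight > 0."""
        w_sum = 0.0   # running total weight
        mean = 0.0    # running weighted mean
        s = 0.0       # running weighted sum of squared deviations
        for x, w in pairs:
            w_sum += w
            delta = x - mean
            mean += (w / w_sum) * delta
            s += w * delta * (x - mean)   # uses the updated mean, as in the unweighted case
        # s / w_sum is the population-style variance; the right "n-1"-like correction
        # depends on whether the weights are frequencies or reliability weights.
        return mean, (s / w_sum if w_sum > 0.0 else 0.0)

With unit weights this reduces to the unweighted update above.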

Alternative algorithms, and variants for higher-order moments, can be found on the excellent Wikipedia page on the topic.

Addendum (October 2015): a recent paper by Meng (2015) gives a variant of the above algorithm for the unweighted case that computes the first four central moments in a numerically stable manner.
