Stepping back

Okay, I need to take a step or two back. I realized, while slogging through the details of \Lambda-poisedness, that, thinking the way I do, I need to stop periodically and refresh my view of the big picture. Otherwise, I'm going to get lost. So, here's the refresh.

All derivative-free optimization (DFO) algorithms, at their core, rely on sampling. You sample the space, pick the best value you find (the current best iterate), and somehow use the values you've already sampled to choose your next set of sampling locations. That's it. All of the rest of the machinery in any DFO algorithm exists to ensure one of a very few things:

  1. Your sampling set adequately represents the region of the space it purports to cover. (This is what all the fuss about poisedness is about.)
  2. Your choice of a new sampling set for the next iteration will improve your current best iterate.
  3. The improvement will be sufficiently large for you to care about it.
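
To keep myself honest about that skeleton, here's a minimal Python sketch of the loop I just described. To be clear, this isn't any real DFO algorithm; the `dfo_skeleton` name and the random-perturbation sampling are placeholders I made up. It just shows where each of the three concerns above would plug in.

```python
import random

def dfo_skeleton(f, initial_points, budget=100, radius=1.0):
    """Bare-bones derivative-free optimization loop (illustrative only).

    Hypothetical sketch: the sample set is refreshed with random points
    around the incumbent, with no poisedness check at all.
    """
    # Evaluate the initial sample set and pick the best iterate.
    samples = [(x, f(x)) for x in initial_points]
    best_x, best_f = min(samples, key=lambda s: s[1])

    for _ in range(budget):
        # Use what we've seen so far to choose new sample locations.
        # A real algorithm builds a model from `samples` here, and a
        # trustworthy model needs the set to be poised (concern 1).
        candidates = [
            tuple(xi + random.uniform(-radius, radius) for xi in best_x)
            for _ in range(len(initial_points))
        ]
        new_samples = [(x, f(x)) for x in candidates]
        samples.extend(new_samples)

        # Concerns 2 and 3: accept a candidate only if it improves the
        # incumbent, and shrink the sampling radius when nothing does,
        # so that any improvement found is one worth caring about.
        cand_x, cand_f = min(new_samples, key=lambda s: s[1])
        if cand_f < best_f:
            best_x, best_f = cand_x, cand_f
        else:
            radius *= 0.5

    return best_x, best_f

# Example: minimize a 2-D quadratic from a few starting points.
if __name__ == "__main__":
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
    start = [(0.0, 0.0), (1.0, 1.0), (-1.0, 0.5)]
    print(dfo_skeleton(f, start))
```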

So, when I'm trying to grok multi-indices, Newton Fundamental Polynomials, and \Lambda-poisedness, I need to keep asking: how does this help me choose a better sample set? Or, how does it let me use a sample set to improve my current best iterate?