Guided Randomness in Optimization
Maurice Clerc


1. Necessary Risk

Il arrive souvent de ne rien obtenir parce que l'on ne tente rien (Often, nothing is gained because nothing is attempted)

Jacques Deval

In using chance to solve a problem, there is always a risk of failure, unless an unlimited number of attempts is permitted: this is rarely possible. The basic idea involved in stochastic optimization is that this risk is necessary, for the simple reason that no other solution is available; however, it may be reduced by carefully controlling the use of random elements. This is generally true, in that a correctly defined optimizer will produce better results than a purely random search for most test cases. However, this is not always the case, and the ability to identify these "anomalous" situations is valuable.
1.1. No better than random search

Let us take a set of permutation tests. A precise definition is given in the Appendices (section 7.1). Here, note simply that, starting from one discrete finite function, all of the other functions in the set can be generated by permuting the values taken at the points of the definition space.

The definition space is E = {0, 1, 2, 3} and the value space is V = {1, 2, 3}. A function is therefore defined by its values at the points of E, for example f1 = (1, 3, 2, 2). One possible permutation of this function is f2 = (1, 2, 3, 2); there are 12 such functions in total, each of which is a permutation of the others, shown in the first column of Table 1.1. Each function has a minimum value of 1 (to simplify our discussion, optimization in this case will always be taken to mean minimization). Now, let us consider three iterative algorithms, and calculate the probability that they will find the minimum of each function. These algorithms are all without repetition, and conserve the best position obtained along with the associated value (the ratchet effect). A brief, informal description of these algorithms is given below. For each, the result is given as a pair (x, f(x)), where x is the proposed solution.
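The full test set can be generated mechanically. The short Python sketch below is an illustration of my own, not taken from the book: it enumerates the distinct permutations of the values of f1, which reproduces the contents of the first column of Table 1.1 up to ordering.

from itertools import permutations

# Values of f1 over the definition space E = {0, 1, 2, 3}.
f1 = (1, 3, 2, 2)

# The distinct permutations of these values form the whole test set:
# 4!/2! = 12 functions, since the value 2 appears twice.
functions = sorted(set(permutations(f1)))
for i, f in enumerate(functions, start=1):
    print(f"f{i} = {f}")

assert len(functions) == 12
assert all(min(f) == 1 for f in functions)  # every function has minimum 1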
1.1.1. Uniform random search

This algorithm, like those which follow, includes an initialization phase, followed by an iteration phase (see Algorithm 1.1). Let us calculate the probability p(t) of having found the solution within t position draws. As only one of the four positions is the solution, p(1) = 1/4, and the probability of not obtaining the solution on the first try is therefore 1 - p(1) = 3/4. In that case, as three unsampled positions remain, the probability of obtaining the solution on the second try is 1/3. Thus, the probability of obtaining the solution on the first or second try is p(2) = p(1) + (1 - p(1)) x 1/3 = 1/4 + 3/4 x 1/3 = 1/2. Similarly, the probability of obtaining the solution on the first, second or third try is p(3) = p(2) + (1 - p(2)) x 1/2 = 1/2 + 1/2 x 1/2 = 3/4. Evidently, as the algorithm is without repetition, the probability of having found the solution by the fourth try is 1, as an exhaustive search will then have been carried out.
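These values can be checked empirically. The sketch below (Python, my own illustration; the function name and run count are arbitrary) repeatedly runs random search without repetition on f1 and estimates p(t) for t = 1, ..., 4.

import random

def estimate_p(f, runs=100_000):
    # Estimate p(t): the probability that random search without
    # repetition finds the minimum of f within the first t draws.
    n = len(f)
    target = min(f)
    found_by = [0] * n  # found_by[t-1] counts runs solved within t draws
    for _ in range(runs):
        order = random.sample(range(n), n)  # one draw order, no repetition
        first_hit = next(t for t, x in enumerate(order) if f[x] == target)
        for t in range(first_hit, n):
            found_by[t] += 1
    return [c / runs for c in found_by]

print(estimate_p((1, 3, 2, 2)))  # close to [0.25, 0.5, 0.75, 1.0]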
Algorithm 1.1. Random search without repetition

Initialization

- Draw a position x* at random, following a uniform distribution (each position has the same selection probability).
Iterations

As long as the STOP criterion (for example a maximum number of iterations) has not been reached:

- draw a position x at random from the as-yet unsampled positions;
- if f(x) < f(x*), then replace x* by x.

1.1.2. Sequential search

This method consists of drawing positions one by one, still without repetition, but in a predefined order rather than at random: position 1, then position 2, and so on. To calculate p
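For the permutation test set above, the corresponding probabilities can be obtained by direct enumeration. The sketch below (my own illustration) scans the positions in a fixed order for each of the 12 functions and records when the minimum is found; averaged over the whole set, the success profile is again 1/4, 1/2, 3/4, 1, the same as for random search, which is precisely the point of this section's title.

from itertools import permutations

functions = sorted(set(permutations((1, 3, 2, 2))))  # the 12 test functions
n = len(functions[0])

solved_by = [0] * n
for f in functions:
    first_hit = f.index(min(f))  # fixed scan order: position 0, 1, 2, ...
    for t in range(first_hit, n):
        solved_by[t] += 1

# Fraction of the 12 functions solved within t evaluations, t = 1..4.
print([c / len(functions) for c in solved_by])  # [0.25, 0.5, 0.75, 1.0]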
