## The Problem
We are given a function $E(s)$, which calculates the potential of the state $s$. We are tasked with finding the state $s_{best}$ at which $E(s)$ is minimized. **SA** is suited for problems where the states are discrete and $E(s)$ has multiple local minima. We'll take the example of the [Travelling Salesman Problem (TSP)](https://en.wikipedia.org/wiki/Travelling_salesman_problem).
### Travelling Salesman Problem (TSP)
You are given a set of nodes in 2 dimensional space. Each node is characterised by its $x$ and $y$ coordinates. Your task is to visit all the nodes exactly once and return to the starting node, such that the total distance travelled is minimized.
The state space is the collection of all possible values that can be taken by the independent variables.

A state is a unique point in the state space of the problem. In the case of TSP, the set of all possible paths we can take to visit all the nodes is the state space, and any one of these paths can be considered a state.
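As a concrete sketch, a TSP state can be stored as the order in which the nodes are visited. The `Point` struct and the plain `std::vector<int>` tour below are illustrative assumptions used for the sketches in this article, not the exact `state` class referenced in the full code:

```cpp
#include <vector>

// Illustrative TSP representation: each node is a point in 2D space,
// and a state is a permutation of node indices giving the visiting
// order. Every distinct permutation is one point in the state space.
struct Point {
    double x, y;
};

// tour[i] is the index of the i-th node visited; after the last node
// the path returns to tour[0].
std::vector<int> tour = {0, 1, 2, 3};
```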
### Neighbouring state
It is a state in the state space which is close to the previous state. This usually means that we can obtain the neighbouring state from the original state using a simple transform. In the case of the Travelling Salesman Problem, a neighbouring state is obtained by randomly choosing 2 nodes and swapping their positions in the current state.
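A minimal sketch of this transform, reusing the tour representation assumed above (the caller supplies the random generator):

```cpp
#include <random>
#include <utility>
#include <vector>

// Sketch of the neighbour transform: choose two positions in the tour
// uniformly at random and swap the nodes found there. The tour is
// taken by value, so the original state is left untouched.
std::vector<int> neighbour(std::vector<int> tour, std::mt19937 &rng) {
    std::uniform_int_distribution<size_t> pick(0, tour.size() - 1);
    std::swap(tour[pick(rng)], tour[pick(rng)]);
    return tour;
}
```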
### The Energy Function E(s)
$E(s)$ is the function which needs to be minimised (or maximised). It maps every state to a real number. In the case of TSP, $E(s)$ returns the distance of travelling one full circle in the order of nodes in the state.
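Under the same assumed representation, a sketch of $E(s)$ for TSP (reusing the `Point` struct from the earlier sketch):

```cpp
#include <cmath>
#include <vector>

// Sketch of E(s) for TSP: the total length of the closed tour that
// visits the nodes in the order given by the state and returns to the
// starting node.
double E(const std::vector<Point> &pts, const std::vector<int> &tour) {
    double total = 0;
    for (size_t i = 0; i < tour.size(); i++) {
        const Point &a = pts[tour[i]];
        const Point &b = pts[tour[(i + 1) % tour.size()]];  // wraps to the start
        total += std::hypot(a.x - b.x, a.y - b.y);
    }
    return total;
}
```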
## The Approach
We start off with a random state $s$. In every step, we choose a neighbouring state $s_{next}$ of the current state $s$. If $E(s_{next}) < E(s)$, then we update $s = s_{next}$. Otherwise, we use a probability acceptance function $P(E(s),E(s_{next}),T)$ which decides whether we should move to $s_{next}$ or stay at $s$. Here $T$ is the temperature, which is initially set to a high value and decays slowly with every step. The higher the temperature, the more likely it is to move to $s_{next}$.
At the same time, we also keep track of the best state $s_{best}$ across all iterations, proceeding until convergence or until time runs out.
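Putting the pieces together, here is a compact sketch of this loop, built on the `E` and `neighbour` sketches above. The starting temperature, decay factor, and stopping condition $T > 1$ are illustrative choices, not the exact values used in the full code:

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Sketch of the search loop: start from a random state, move to
// neighbours according to the acceptance rule, and track the best
// state seen while the temperature decays geometrically.
std::vector<int> anneal(const std::vector<Point> &pts, std::mt19937 &rng) {
    std::vector<int> s(pts.size());
    for (size_t i = 0; i < s.size(); i++) s[i] = i;
    std::shuffle(s.begin(), s.end(), rng);  // random initial state
    std::vector<int> best = s;
    std::uniform_real_distribution<double> unif(0, 1);
    double T = 10000, u = 0.99;  // illustrative values
    while (T > 1) {
        std::vector<int> next = neighbour(s, rng);
        double dE = E(pts, next) - E(pts, s);
        // Always accept an improvement; otherwise accept with
        // probability exp(-dE / T), which shrinks as T cools.
        if (dE < 0 || unif(rng) <= std::exp(-dE / T))
            s = next;
        if (E(pts, s) < E(pts, best))
            best = s;  // best state across all iterations
        T *= u;
    }
    return best;
}
```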
### How does this work?
This algorithm is called simulated annealing because we are simulating the process of annealing, wherein a material is heated up and allowed to cool in order to let its atoms rearrange themselves into an arrangement with minimal internal energy, which in turn gives the material different properties. The state is the arrangement of atoms, and the internal energy is the function being minimised. We can think of the original arrangement of the atoms as a local minimum of the internal energy. To make the material rearrange its atoms, we need to motivate it to cross a region where the internal energy is not minimised in order to reach the global minimum. This motivation is provided by heating the material to a higher temperature.
Simulated annealing literally simulates this process. We start off with some random state (the material) and set a high temperature (heat it up). Now, the algorithm is willing to accept states which have a higher energy than the current state, motivated by the high value of $T$. This prevents the algorithm from getting stuck in local minima and lets it move towards the global minimum. As time progresses, the algorithm cools down and rejects states with higher energy, settling into the closest minimum it has found.
<i>A visual representation of simulated annealing, searching for the maximum of a function with multiple local maxima.</i>
<br>
<i>This <a href="https://upload.wikimedia.org/wikipedia/commons/d/d5/Hill_Climbing_with_Simulated_Annealing.gif">gif</a> by [Kingpin13](https://commons.wikimedia.org/wiki/User:Kingpin13) is distributed under the <a href="https://creativecommons.org/publicdomain/zero/1.0/deed.en">CC0 1.0</a> license.</i>
### Probability Acceptance Function (PAF)

$P(E, E_{next}, T) =
\begin{cases}
\text{true} & \text{if } \mathcal{U}_{[0,1]} \le \exp\left(-\frac{E_{next} - E}{T}\right)\\
\text{false} & \text{otherwise}\\
\end{cases}$

Here $\mathcal{U}_{[0,1]}$ denotes a uniformly random value drawn from $[0, 1]$. This function takes in the current state, the next state, and the temperature, returning a boolean value which tells our search whether it should move to $s_{next}$ or stay at $s$. Note that for $E_{next} < E$, this function will always return true.
```cpp
bool P(double E, double E_next, double T) {
    // Accept a lower-energy state outright (the probability exceeds 1);
    // otherwise accept with probability exp(-(E_next - E) / T).
    double prob = exp(-(E_next - E) / T);
    if (prob > 1) return true;
    double rnd = (double)rand() / RAND_MAX;  // uniform value in [0, 1]
    return rnd <= prob;
}
```
Fill in the state class functions as appropriate. If you are trying to find a global maximum and not a minimum, ensure that the $E()$ function returns the negative of the function you are maximising, and finally print out $-E_{best}$. Set the parameters below as per your need.
### Parameters
- $T$: Temperature. Set it to a higher value if you want the search to run for a longer time.
- $u$: Decay. Decides the rate of cooling. A slower cooling rate (a larger value of $u$) usually gives better results. Ensure $u < 1$.
The number of iterations the loop will run for is given by the expression
$N = \lceil -\log_{u}{T} \rceil$
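For a sense of scale (an illustrative calculation, assuming a starting temperature of $T = 10000$): since $-\log_{u}{T} = \frac{\ln T}{-\ln u}$, a decay of $u = 0.95$ gives $N = \lceil 179.6 \rceil = 180$ iterations, $u = 0.97$ gives $N = 303$, and $u = 0.99$ gives $N = 917$, so slower cooling buys proportionally more search steps.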
To see the effect of the decay rate on the quality of the results, run simulated annealing with decay rates 0.95, 0.97, and 0.99, and compare the outcomes.