
Gradient Descent Optimizations

We’re releasing two simulation behavior libraries for gradient-descent-based optimization:

  • Stochastic Gradient Descent (SGD): A classic optimization technique, stochastic gradient descent optimizes a set of parameters by randomly sampling the solution space and ‘moving’ candidate solutions up or down a gradient toward a local maximum or minimum. Because solution agents are generated at random points across the solution space, SGD is likely (though not guaranteed) to find the global maximum or minimum. A sketch of the idea follows this list. SGD Behavior Library
  • Simulated Annealing: Like SGD, simulated annealing explores the solution space through hill-climbing behavior. However, it adds an explore-exploit trade-off: at each step it moves in a random direction, always accepting the move if it improves fitness and otherwise accepting it only with some probability. Over time that probability of accepting a deleterious move is decreased, so as the simulation runs it becomes more likely to preserve the best move and settle into a local minimum or maximum. A sketch follows this list. Simulated Annealing Behavior Library.

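To make the SGD behavior’s core loop concrete, here’s a minimal Python sketch of the same idea: several solution agents start at random points, each follows the gradient downhill, and the best result is kept. The function names and parameters are illustrative assumptions, not the behavior library’s actual API.

```python
import random

def sgd_with_random_restarts(f, grad, bounds, n_agents=20, lr=0.01, steps=500):
    """Gradient descent from many random starting points ('solution agents');
    keeping the best result makes finding the global minimum likely,
    though not guaranteed."""
    lo, hi = bounds
    best_x, best_f = None, float("inf")
    for _ in range(n_agents):
        x = random.uniform(lo, hi)       # spawn an agent at a random point
        for _ in range(steps):
            x -= lr * grad(x)            # move the agent down the gradient
        if f(x) < best_f:                # keep the best local minimum found
            best_x, best_f = x, f(x)
    return best_x, best_f

# Example: a multimodal objective with more than one local minimum.
f = lambda x: x**4 - 3 * x**2 + x
grad = lambda x: 4 * x**3 - 6 * x + 1
print(sgd_with_random_restarts(f, grad, bounds=(-3, 3)))
```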
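And here’s a corresponding sketch of simulated annealing’s accept/reject rule: improving moves are always taken, while worsening moves are accepted with probability exp(-Δ/T), which shrinks as the temperature T cools. Again, the names and the geometric cooling schedule are illustrative assumptions, not the library’s API.

```python
import math
import random

def simulated_annealing(f, bounds, steps=5000, step_size=0.1,
                        t_start=1.0, t_end=1e-3):
    """Minimize f by random moves: improving moves are always accepted,
    and worsening moves are accepted with a probability that shrinks as
    the temperature cools, so late in the run the best move is preserved."""
    lo, hi = bounds
    x = random.uniform(lo, hi)
    fx = f(x)
    for i in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        # Propose a random move, clamped to the solution space bounds.
        cand = min(hi, max(lo, x + random.uniform(-step_size, step_size)))
        f_cand = f(cand)
        # Accept if fitness-improving; otherwise accept with prob. exp(-delta/t).
        if f_cand < fx or random.random() < math.exp(-(f_cand - fx) / t):
            x, fx = cand, f_cand
    return x, fx

f = lambda x: x**4 - 3 * x**2 + x
print(simulated_annealing(f, bounds=(-3, 3)))
```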
Explore the linked simulations and use the behaviors in your own models.