
Product Updates

What's new at HASH?

Latest changes

Stopping Conditions

We’ve added a new feature to HASH where simulations can be stopped at a specific point by sending a stop message to the engine. This built-in message is useful for stopping a simulation after a given number of steps, or when a particular condition has been reached in the simulation.

state.addMessage("hash", "stop", { status: "success", reason: "completed the initial optimization" })

Additional logging data can be attached to the message to help with debugging. Read more about stop messages in the HASH docs.

Older changes

More Optimization Strategies

To complement our existing optimization libraries, we’ve released two additional optimization simulations: A* Search and Monte Carlo Tree Search.

A* Search

A* search is one of the most popular search algorithms. It combines an optimistic (admissible) heuristic with best-first search to find the best possible route through a graph to a target destination node. Use the A* search library in combination with an agent-based model to have agents navigate graphs efficiently.
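
As a rough illustration of the idea (a generic sketch, not the library’s own behaviors), an A* search over a small weighted graph can be written as follows; the graph, edge costs and heuristic values are invented for the example.

import heapq

def a_star(graph, heuristic, start, goal):
    # graph: node -> list of (neighbour, edge_cost); heuristic: node -> optimistic cost-to-goal
    frontier = [(heuristic[start], 0, start, [start])]   # priority queue ordered by f = g + h
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + heuristic[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")

# Toy graph and an admissible (never overestimating) heuristic, made up for the example.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, heuristic, "A", "D"))   # (['A', 'B', 'C', 'D'], 4)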

Monte Carlo Tree Search

Monte Carlo Tree Search (MCTS) is a modified tree search that uses heuristics to prioritize searching certain branches of a game tree based on the likelihood of finding winning moves. Random (Monte Carlo) playouts score candidate moves, and the search favors branches that have returned the highest scores in previous iterations while balancing that with exploring novel choices. MCTS has had a lot of success in games, most notably serving as the underlying algorithm for AlphaGo. In the simulation, an MCTS behavior powers the search of an agent playing tic-tac-toe.
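
For a feel of the select-expand-simulate-backpropagate loop, here is a self-contained sketch of MCTS with UCB1 selection. To keep it short it plays a toy take-1-to-3-stones game rather than tic-tac-toe, and it is a generic illustration, not the published behavior library.

import math
import random

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones                 # stones remaining in the toy game
        self.player = player                 # player (1 or 2) to move from this state
        self.parent, self.move = parent, move
        self.children = []
        self.untried = [m for m in (1, 2, 3) if m <= stones]
        self.visits, self.wins = 0, 0.0      # wins are counted for the player who just moved

    def ucb1_child(self, c=1.4):
        # Favor branches with high win rates, but keep exploring rarely visited ones.
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    # Random playout: the player who takes the last stone wins.
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return player
        player = 3 - player

def mcts(root_stones, root_player, iterations=2000):
    root = Node(root_stones, root_player)
    for _ in range(iterations):
        node = root
        while not node.untried and node.children:          # 1. selection
            node = node.ucb1_child()
        if node.untried:                                    # 2. expansion
            move = node.untried.pop()
            child = Node(node.stones - move, 3 - node.player, parent=node, move=move)
            node.children.append(child)
            node = child
        if node.stones == 0:                                # 3. simulation
            winner = node.parent.player                     # the move into this state took the last stone
        else:
            winner = rollout(node.stones, node.player)
        while node is not None:                             # 4. backpropagation
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move    # most-visited move from the root

print(mcts(root_stones=5, root_player=1))   # usually 1: leaving a multiple of 4 loses for the opponent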


Gradient Descent Optimizations

We’re releasing two simulation behavior libraries for gradient descent based optimization:

  • Stochastic Gradient Descent (SGD): A classic optimization behavior, stochastic gradient descent optimizes a set of parameters by randomly exploring the solution space and ‘moving’ potential solutions up or down a gradient to find local maxima/minima. By generating solution agents randomly across the solution space, SGD is likely (though not guaranteed) to find the global maximum/minimum. SGD Behavior Library
  • Simulated Annealing: Akin to SGD, simulated annealing explores the solution space through hill-climbing behavior, but adds an explore-exploit technique: it proposes a random move, accepts it if it improves fitness, and otherwise accepts it only with some probability. Over time the probability of accepting a deleterious move decreases, so as the simulation runs it becomes more likely to preserve the best move and settle into a local minimum/maximum (see the sketch after this list). Simulated Annealing Behavior Library.
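
As a rough sketch of that accept/reject rule (a generic illustration, not the behavior libraries themselves), the snippet below minimizes a made-up one-dimensional objective with simulated annealing; never accepting a worsening move would reduce it to plain hill climbing.

import math
import random

def fitness(x):
    # Toy objective with several local minima; lower is better.
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(start, steps=5000, initial_temp=10.0, cooling=0.999):
    current, best, temp = start, start, initial_temp
    for _ in range(steps):
        candidate = current + random.uniform(-0.5, 0.5)     # randomly move in some direction
        delta = fitness(candidate) - fitness(current)
        # Improving moves are always accepted; deleterious moves only with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if fitness(current) < fitness(best):
            best = current
        temp *= cooling
    return best

print(simulated_annealing(start=8.0))   # settles near the lowest valley of the objective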

Explore the simulations and use the behaviors in your own simulations.


Genetic Programming

We’ve released a genetic programming simulation that showcases using a genetic algorithm to evolve solutions to an optimization problem.

The simulation is made up of four key behaviors:

  • fitness.py: Calculates a fitness score for a potential solution.
  • evaluate.py: Compares and determines the best fitness score among the solutions.
  • crossover.py: Creates new solution options from the existing solutions.
  • mutate.py: Randomly introduces changes in the agents.

When added to a pool of agents, these behaviors drive the population to converge towards an optimal solution. Read more about genetic programming in our accompanying blog post.
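
For intuition, here is a generic genetic-algorithm loop on a made-up bit-matching problem. The functions loosely mirror the roles of the four behaviors above, but this is an illustrative sketch rather than the published .py files.

import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]    # made-up pattern the population should evolve towards

def fitness(candidate):                     # role of fitness.py: score a potential solution
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def crossover(parent_a, parent_b):          # role of crossover.py: combine existing solutions
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(candidate, rate=0.05):           # role of mutate.py: randomly introduce changes
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Role of evaluate.py: rank solutions by fitness and keep the fittest half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))   # typically converges to TARGET with a fitness of 10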


Reinforcement Learning Behaviors

We’ve published a library of behaviors and example simulations demonstrating how to implement the popular Q-learning reinforcement learning algorithm in a HASH simulation. The library contains a generic set of Q-learning behaviors that can be added to an agent to train it to take an optimal action in its environment, and it ships with two example simulations.

In both simulations you can see how the agent’s rewards converge to a steady state where it has, over many iterations, learned a policy to execute.
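
As a standalone illustration of the Q-learning update rule (not the library’s behaviors themselves), the sketch below trains a tabular agent on a made-up six-cell corridor in which reaching the rightmost cell pays a reward of 1.

import random
from collections import defaultdict

N_STATES = 6                  # toy corridor of six cells
ACTIONS = [-1, +1]            # move left or right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def greedy(q, state):
    # Pick the highest-valued action, breaking ties at random.
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

q = defaultdict(float)        # Q-table keyed by (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the learned values, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(q, state)
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) towards the reward plus the
        # discounted value of the best action available in the next state.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy should be "move right" (+1) in every non-terminal cell.
print([greedy(q, s) for s in range(N_STATES - 1)])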


Travel back in time

You can now travel back in time to any point in your simulation’s history by clicking on an edit in the activity sidebar to reload the simulation source code at that exact moment. Hovering over an edit will also now show a message explaining which files it affected. When visiting any historical version, you can click ‘Fork’ in the file menu to create a copy of that source code and see where a different path for your digital world leads.

Analysis output improvements

To help inspect the outputs of your simulation, you can now export your analysis metrics along with raw simulation state by right-clicking on any run in the history and clicking ‘Export as JSON’.

We’ve also added options to the plots wizard to hide plot labels (click edit when viewing any plot), and fixed a bug with setting custom labels for multi-line plots.


Custom viewer colors

You can now set custom colors for the stage and grid in the viewer, allowing you to give your simulations extra visual flair. Any combination you choose for each simulation will be saved to your browser.

We’ve also given the viewer settings menu a makeover. If you haven’t explored it before, it offers a range of customisation options, including switching between 3D and 2D view, toggling elements on and off, and more.

Python upgrades

We’ve significantly improved Python support to offer new and upgraded libraries available for use locally in-browser, as well as support for running Python simulations client-side in Safari:

  • New libraries: future, autograd, freesasa, lxml, python-sat, traits, astropy, pillow, scikit-image, imageio, numcodecs, msgpack, asciitree, zarr.
  • Upgraded libraries: numpy 1.15.4, pandas 1.0.5, matplotlib 3.3.3.
  • Safari support: Python behaviors can now be run locally when using Safari (version 14+), building on existing support for experiments run in hCloud, and bringing the browser to parity with Chrome, Firefox and Edge.

Basic Physics Library

HASH has a new physics library, which introduces four behaviors that you can use to create simulated physics environments.

  • forces.rs gives your agents realistic movement through space.
  • gravity.rs will pull agents to the ground.
  • collision.rs causes agents to bounce off one another, conserving momentum and energy.
  • spring.rs mimics springs with various strengths.

Explore a pendulum built with the physics library behaviors >
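
To illustrate the kind of per-step update such behaviors perform (a generic Python sketch, not the library’s actual .rs behaviors), here is a point mass hanging from a spring under gravity, integrated with semi-implicit Euler; all parameters are invented.

MASS = 1.0            # kg
K = 20.0              # spring stiffness, N/m
REST_LENGTH = 1.0     # natural spring length, m
G = 9.81              # gravitational acceleration, m/s^2
DT = 0.01             # timestep, s

y, v = 2.0, 0.0       # position below the anchor (m, downward positive) and velocity (m/s)

for step in range(300):
    spring_force = -K * (y - REST_LENGTH)   # pulls the mass back towards the rest length
    gravity_force = MASS * G                # constant downward pull
    acceleration = (spring_force + gravity_force) / MASS
    v += acceleration * DT                  # semi-implicit Euler: update velocity first...
    y += v * DT                             # ...then position, which keeps the oscillation stable
    if step % 50 == 0:
        print(f"t={step * DT:.2f}s  y={y:.3f}m")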


Discrete Event Library

We’re releasing a behavior library to add discrete event timing features to your simulations. The library provides behaviors to trigger specific agents to take actions when events are generated within the simulation. The events can be created based on attached datasets, or from other agents. Read about discrete event simulations in the HASH wiki or see our accompanying blog post on the new library to learn more.
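
As a minimal illustration of event-driven timing (a generic sketch, not the library’s behaviors), the snippet below keeps a queue of timed events, processes them in order, and lets one event schedule a follow-up event in the future; the event names are made up.

import heapq

events = []                                   # priority queue of (time, description) pairs
heapq.heappush(events, (2.0, "truck arrives at depot"))
heapq.heappush(events, (0.5, "order received"))
heapq.heappush(events, (1.5, "order packed"))

while events:
    clock, event = heapq.heappop(events)      # always handle the earliest pending event
    print(f"t={clock}: {event}")
    if event == "order packed":               # an agent reacting to an event can schedule new ones
        heapq.heappush(events, (clock + 1.0, "order shipped"))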


hCore Messaging API

HASH simulations can communicate with external web applications through the recently released web messaging API for HASH. You can read the state, change the files, and create new runs of an embedded HASH model by sending and receiving messages. Read the HASH docs here.
