Product Updates

What's new at HASH?

Latest changes

Major performance improvements

Ahead of open-sourcing hEngine, we’ve migrated hCloud to a new, optimized architecture. This results in up to 20x faster Python simulations and 10x faster JavaScript simulations.

Behavior keys

In conjunction with our major update to hCloud, you can now statically type the inputs and outputs of behaviors, giving the underlying hEngine more information with which to optimize their execution. We advise adding behavior keys to simulations at the point of creation. Read more about Behavior Keys >
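As an illustration, behavior keys are declared as JSON alongside a behavior file. The sketch below is illustrative only — the field names and schema shown here are assumptions, so see the Behavior Keys docs for the authoritative format:

```json
{
  "keys": {
    "speed": { "type": "number", "nullable": false },
    "destination": { "type": "list", "child": { "type": "number" } }
  }
}
```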

Higher free-tier limits

All users now receive 10hrs of free hCloud compute time each month (a 10x increase over our previous free-tier limit). If you hit your cap, we’ll reach out to discuss your use case. Feel free to contact us ahead of time if you’d like to learn more about how HASH can support production simulation workflows.

Other updates

Built-in 3D models (introduced 2020-11-16) should now load more quickly in the 3D viewer. Previously these were pulled from Google Poly on request; we now serve the most commonly used files from our own CDN, which allows us to deliver them to you much faster.

We’ve also launched a blog. Our older posts are still available on Medium, but we’ll be posting here on the site going forward.

Older changes

3D meshes

Agents in HASH can now make use of custom meshes to give them form in the 3D viewer. Previously agents were constrained to basic geometric shapes.

Built-in meshes

We’ve built a number of optimized meshes directly into HASH, ready for use. These represent a varied range of real-world ‘things’, from birds and fish to planes and cars.

Custom meshes

Custom meshes can also be imported into HASH by referencing a Google Poly model ID, expanding the visualization options available beyond those built into hCore. Via this mechanism, users can import a wide array of community-built meshes into their 3D HASH environments.

Learn more in the docs >

Multi-parameter sweeping

Previously, users were limited to varying one parameter at a time in experiment runs. We now support a wide array of multi-parameter experiments, allowing for speedier optimization and exploration within complex systems. Coupled with upcoming speed improvements, we expect this to dramatically accelerate learning in multi-agent environments.

Learn more about experiments in HASH >

Experiment-level analysis

It is now possible to visualize the results of whole experiments (combining outputs from multiple simulation runs into single graphs). With this, we bring support for mean and error plots, box plots, scatter plots, density plots, and 3D surface plots.

Find out more about creating experiment-level analyses in our docs >

Older changes

Open access

We’ve dropped the login requirement users face when viewing projects in hCore, or modifying simulation globals. This means you can open up any public model in HASH without being logged in, and run an unlimited number of single-run simulations locally before needing to sign up for an account. 🕺

Please note: you’ll still need to be logged in if you want to run experiments, edit behavior logic, import additional behaviors, or create new models from scratch. But HASH accounts are, and will always be, free to create and access. Sign up for a HASH account 🚀


Sharing

We’ve added a “Share” button to the top-right of hCore, so you can quickly generate a link to share your public models with others. Now that the login requirement has been dropped, we hope this will help make simulations far more accessible than they were before. ✨


Issues

If you’re already familiar with GitHub or GitLab, you’ll have a good sense of how Issues in HASH work. We’ve introduced the ability to open a thread on any project in HASH from its hIndex listing page, so you can ask clarifying questions about utilizing a model, suggest extensions to a dataset, or yes… even report issues. 🙏

Finally, we’d like to highlight a particularly cool user-created public model: Snowpiercer by Gareth Morinan. A write-up on its build has been published in Towards Data Science. 🥳

Older changes


Project stars

As promised, anybody can now star a project in HASH, allowing you to bookmark models or datasets of interest and browse popular projects by their number of stars.

Package management

Following our migration to a Git-based filesystem, we’ve started exposing HASH project dependencies to users in a versioned fashion via the dependencies.json file in every project. This file contains a running list of all hIndex behaviors, datasets, and other imports that a project relies on. Previously we stored this information as metadata attached to HASH projects, but in preparation for the open-sourcing of hEngine, and to improve the portability of models, it now lives within the project repo. The next time you open a model in hCore, this file will be automatically generated (if it doesn’t already exist).
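As a rough sketch, dependencies.json is a simple manifest mapping each hIndex import to a pinned version. The paths and version numbers below are hypothetical, for illustration only:

```json
{
  "@hash/age/age.rs": "1.0.0",
  "@hash/random-movement/random_movement.js": "2.1.0",
  "@examples/arrivals/arrivals.csv": "1.0.1"
}
```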

And finally…

We’re sunsetting HASH Drive, our file explorer-like interface for navigating HASH projects, files and data. Instead you’ll now find all of your assets in standalone projects on your HASH profile, more akin to other code sharing communities like GitHub, GitLab and Glitch.

Older changes

We skipped our last update to bring you one really big update with two major pieces of news.

hCloud launch

hCloud is now in public beta, and available to all HASH users. That means that anybody with an account can start taking advantage of free cloud resources to run agent-based simulations, although we are capping usage as we scale the platform up.

Git migration… complete!

We’ve finished our Git migration, so that every HASH project now exists in a standalone repository. A wide array of new features will be coming soon, the first of which will ship later this week. In no particular order, these are:

  • starring of projects to easily bookmark them and show your appreciation
  • project-based issue tracking and discussions
  • proper forking of projects in hIndex
  • pull requests on projects
  • more granular project-based access permissions
  • complete activity histories in hCore (immutable records of all actions taken in a HASH project, providing a complete chain of provenance that allow you to recreate an experiment at any time)

Little things

  • You can now manually flush past simulation and experiment runs from local memory using the right-hand activity history sidebar. Great for users of older machines!
  • Better in-product linking to our docs throughout hCore
  • Much improved Python docs and code examples

Older changes

hCloud Early Access

We’re currently inviting users of HASH to the early access program for hCloud. To date all simulation in HASH has been done client-side, in-browser, meaning models have been constrained to the resources made available to them by users’ browsers. This is great for prototyping, but doesn’t scale well to millions of agents, or extremely long-running simulations. Apply for Early Access to hCloud

More Experiment Options

There are now more ways to set up and run experiments. Check out the updated docs.

In addition, we’re running office hours to help users get started. Book in and swing by to ask us any questions you might have about building models in HASH.

Something to look forward to

Eagle-eyed users may have spotted that their project URLs have changed in hCore. This is part of migration work we’re doing to move all simulations into their own Git repositories. In the next few weeks the fruits of this labor will become more visible, with functionality such as proper forking and PR reviews coming to the platform.

Older changes

Experiment Support

Support for experimentation has landed in hCore! It’s now possible to run multiple simulations simultaneously, unlocking a faster and more powerful workflow for exploring all the possible outcomes of a scenario. We anticipate this will be extremely useful for exploring search spaces, optimizing designs, and making better-informed decisions from data.

To make experimentation easier, we’ve added a handful of utility functions for generating distributions and parameter sweeps. These include:

  • linspace – vary a single parameter within a range
  • arange – vary a parameter based on an increment
  • values – manually enter values for a specific parameter
  • monte-carlo – generate random numbers according to a distribution
  • group – group together multiple experiment types into a single experiment
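For example, a linspace sweep and a monte-carlo sample might be declared in a project’s experiments file along these lines. Treat the exact field names below as illustrative — check the experiments docs for the precise schema:

```json
{
  "Radius sweep": {
    "type": "linspace",
    "field": "radius",
    "start": 0,
    "stop": 10,
    "samples": 11
  },
  "Random arrival rates": {
    "type": "monte-carlo",
    "field": "arrival_rate",
    "distribution": "normal",
    "mean": 1,
    "std": 0.2,
    "samples": 20
  }
}
```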

Learn more about experiments in HASH >

More Massive Simulations

Along with experiment support, we’ve made some tweaks that should enable much larger simulations in the web browser! Check out our ABBA Financial Model, which simulates regulatory capital reserves in the banking system and now easily scales to thousands of agents.

Under the hood

  • Use a lot of shared behaviors? We’ve added dependencies.json, a way of tracking hIndex package use in projects.
  • We’ve fixed the Step Explorer to ensure it properly updates when switching between multiple simulation runs.

Older changes

GPT-3 Initial State Generation

We’re excited to announce an integration with OpenAI’s GPT-3 natural language model to quickly build initial state conditions for simulations. Simply describe your desired initial state in natural language, and GPT-3 will generate a custom init.json with agents and corresponding properties.
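For instance, a prompt like “a few wanderers at random positions” might yield an init.json along these lines. The output shown is illustrative, not an actual GPT-3 response, and the behavior path is hypothetical:

```json
[
  {
    "agent_name": "wanderer",
    "position": [2, 7, 0],
    "behaviors": ["@hash/random-movement/random_movement.js"]
  },
  {
    "agent_name": "wanderer",
    "position": [5, 1, 0],
    "behaviors": ["@hash/random-movement/random_movement.js"]
  }
]
```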


Hotkeys

We’ve introduced hotkey support for starting, stopping, stepping, and creating new simulations.

  • ctrl/cmd + enter starts and stops the simulation
  • alt + enter pauses and resets the simulation
  • ctrl/cmd + shift + enter single-steps the simulation

We expect this to make it easier to iterate faster and simulate with more precision than before.

Simulation Templates

With this new version, it’s now possible to create new simulations from one of two templates:

  • A completely empty template for advanced users
  • A starter template for beginners

The starter template includes everything you need to get up and running with a new simulation, and demonstrates how to use tools like neighbors, configuration, and shared behaviors.

Other improvements

  • We’ve upgraded the simulation engine to be both faster and to scale to larger simulations
  • The 2D viewer has been upgraded and some bugs have been fixed
  • Agents can now be hidden from the 3D and geospatial views by setting their ‘hidden’ field to true
  • We’ve added a “single-step” button to increment the simulation with more precision

Older changes


New simulations

We released two new simulations that showcase the versatility and real-world applicability of the multi-agent simulations possible with HASH Core.

Both address real-world challenges and serve as a solid foundation for solving operational problems with multi-agent modeling.

Editor Improvements

Search and replace across the entire simulation has landed! It’s now possible to mercilessly refactor large models with a search and replace experience that should feel natural for VSCode users. Project-wide search and replace supports wildcards, regex, and even a diff-view for precise refactoring.

New Rust Behavior

We’re happy to bring a new built-in Rust behavior to users interested in modeling kinematics and dynamics in their simulations. The vintegrate behavior is the new tool of choice for modeling moving objects – agents just need the expected properties:

agent: {
   position:  [0, 0, 0],
   velocity:  [0, 0, 0],
   force:     [0, 0, 10],
   behaviors: ["@hash/"]
}

This new behavior is fast, and can bring large kinematics simulations to life very quickly. Check it out in action here.
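For intuition, here’s a minimal plain-JavaScript sketch of the kind of semi-implicit Euler update a kinematics behavior performs each step. This is an illustration only, not the vintegrate implementation (which is built-in Rust); it assumes unit mass and a unit timestep:

```javascript
// Sketch of one kinematics step: integrate force into velocity,
// then velocity into position (semi-implicit Euler, unit mass).
function integrateStep(agent, dt = 1) {
  const velocity = agent.velocity.map((v, i) => v + agent.force[i] * dt);
  const position = agent.position.map((p, i) => p + velocity[i] * dt);
  return { ...agent, velocity, position };
}

const agent = { position: [0, 0, 0], velocity: [0, 0, 0], force: [0, 0, 10] };
const next = integrateStep(agent);
// After one step the agent has picked up velocity [0, 0, 10]
// and moved to position [0, 0, 10] along the z-axis.
```

Run repeatedly, this produces the constant-acceleration trajectories you’d expect from a steady applied force.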

Stdlib updates

We’re very excited to now support the generation of statistical distributions in JavaScript behaviors. It’s now possible to generate nearly any distribution of agents and parameters, enabled by re-exporting the entire jStat library via hash_stdlib.

jStat provides a wide range of distributions, including the Weibull, Cauchy, Poisson, hypergeometric, and beta distributions. For most distributions, jStat provides pdf, cdf, inverse, mean, mode, and variance functions, as well as a sample function, allowing for more complex calculations.

These are accessible via the stats property in the editor:

const { stats } = hash_stdlib;
const { diff, stdev, coeffvar, chisquare } = stats;

Older changes

HASH Stdlib

The beginnings of a HASH standard library have landed! We’ve added three utility functions to make working with vector components in HASH easier:

  • randomPosition: uses the simulation’s topology layout to generate random positions
  • normalizeVector: normalizes any 3-wide vector in HASH (position and direction)
  • distanceBetween: finds the distance between any two agents, with 4 different distance functions available

Read the docs to learn more about how to use these functions in behaviors. Let us know on Slack or in the forum what you’d like to see included!
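For a sense of what two of these helpers compute, here are rough plain-JavaScript equivalents. These are sketches only — the real implementations live in hash_stdlib and also account for things like the simulation’s topology:

```javascript
// Rough equivalent of normalizeVector: scale a 3-component
// vector to unit length (zero vectors are returned unchanged).
function normalizeVector(v) {
  const len = Math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2);
  return len === 0 ? v : v.map((c) => c / len);
}

// Rough equivalent of distanceBetween, using the Euclidean
// distance function (the stdlib version offers three others).
function distanceBetween(a, b) {
  return Math.sqrt(
    a.position.reduce((sum, c, i) => sum + (c - b.position[i]) ** 2, 0)
  );
}

const unit = normalizeVector([3, 4, 0]); // → [0.6, 0.8, 0]
const d = distanceBetween({ position: [0, 0, 0] }, { position: [3, 4, 0] }); // → 5
```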

Multicast Agent Messaging

We’ve improved how message sending works – it’s now possible to send messages to entire groups of agents using agent names. Messages sent with a specified “to” will be delivered to all agents with a matching agent name. The following code delivers a message to all agents named “ants”:

agent.set("messages", {
    to: "ants",
    data: {
        message: "ping"
    }
});
We’re excited to see what new simulations you will build with these features!

Other fixes and improvements

  • Improved ergonomics around analysis: the plots tab is now scrollable
  • File naming modals have been improved