What's new at HASH?
We’ve published a range of packages for folks developing Block Protocol blocks and applications. These include two major utilities:

- `mock-block-dock`: a lightweight environment and mock datastore for developing and testing Block Protocol blocks.
- `create-block-app`: a command-line utility to help you quickly create the scaffolding required for new Block Protocol blocks.
The homepage provides an overview of our projects, and how they fit together, while the HASH Developer Blog will contain our developer-facing news and announcements going forward.
We’ll be adding reference guides and tutorials for engineers working with the Block Protocol and HASH’s other projects to the hash.dev site soon.
A lightweight version of the schema editor from our open-source HASH workspace software is now available via the Block Protocol website. While this was already available to folks self-hosting HASH, it was not previously available as a hosted service.
Using the schema editor, properties of entity types can be individually defined, as well as linked or ‘crosswalked’ to their equivalents in other schemas to make them interpretable as RDF or JSON-LD. For example, a `firstName` field might be linked to its schema.org equivalent, `givenName`.
Schemas created through the Block Protocol Hub will be hosted in perpetuity at an immutable permalink provided at the point of creation, and accessible via your Block Protocol user profile page. To view your schemas as raw JSON instead of as web pages, request the URLs with `application/json` in an `Accept` HTTP header.
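The content negotiation described above can be sketched as follows. This is a minimal illustration using Python's standard library; the schema URL below is a hypothetical placeholder, not a real permalink.

```python
import urllib.request

# Hypothetical schema permalink -- substitute your own schema's URL.
SCHEMA_URL = "https://blockprotocol.org/@example-user/types/person"

# Requesting the same permalink with "Accept: application/json" returns
# the raw JSON schema instead of the rendered HTML page.
request = urllib.request.Request(SCHEMA_URL, headers={"Accept": "application/json"})

# urllib.request.urlopen(request) would then fetch the JSON document.
print(request.get_header("Accept"))
```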
We’ve published the code behind our forthcoming HASH workspace application under the open-source AGPL license.
You can find this within the `hash` package of the HASH public monorepo. It is currently in pre-release and we welcome early feedback.
You can now create an account on the Block Protocol website and generate an API key. If you already have a hosted HASH user account, your namespace on the Block Protocol site has been reserved. Simply enter the same email address you used to sign up for hash.ai to claim this.
API keys are usable by embedding applications that want to search the Block Protocol Hub for blocks (or allow their users to do the same). Further down the line, you’ll also be able to publish and update blocks listed on the Block Protocol Hub using your API key.
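A Hub search request might be constructed roughly as below. Note that the endpoint path, query parameter, and header name here are assumptions for illustration only; consult the Block Protocol API documentation for the real values.

```python
import urllib.parse
import urllib.request

API_KEY = "your-api-key-here"  # placeholder: generate a real key on blockprotocol.org

# Assumed search endpoint and query parameter (illustrative, not confirmed).
query = urllib.parse.urlencode({"q": "table"})
url = f"https://blockprotocol.org/api/blocks?{query}"

# Attach the API key as a request header (header name is an assumption).
request = urllib.request.Request(url, headers={"X-Api-Key": API_KEY})

# With a real key, urllib.request.urlopen(request) would perform the search.
```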
We’ve open-sourced an initial set of blocks under the MIT license. Although basic, these give a sense for the types of things we expect users to build. You can find them in the `blocks` package within the HASH public monorepo on GitHub.
We’ve published the draft specification of the Block Protocol at blockprotocol.org/spec alongside an implementation guide for application maintainers wishing to support the standard, and a developer guide for block creators interested in contributing new blocks to the ecosystem. We’ll be evolving the standard a lot in the coming weeks in response to feedback. To read more about the Block Protocol, and why we’re building it, check out our recent Block Protocol announcement.
We have published the code at the heart of HASH simulations: you can view hEngine, the open computational engine, in this public repository. This is an alpha-stage product under active development, made available to allow developers and interested parties to explore and provide feedback on the code.
By making our simulation engine public we are delivering on several key principles.
We will now be working towards a first formal release, shipping a number of improvements and exciting new features in the weeks to come. We’d love for you to get involved, whether through contributing, raising issues, or getting in touch to share feedback and feature requests.
You can start using your existing projects with the open engine straight away by exporting a project’s files and dependencies from hCore. This allows a project to be downloaded for local use with hEngine, and enables you to back up your simulation code and data offline.
To do this, open the File menu and hit the Export Project button.
Coming soon: We’ll be introducing the ability to upload simulation project files to HASH, enabling seamless transition between developing on your machine and running simulations in-browser (in hCore) and at scale (with hCloud).
Using the new waypoint navigation library, agents will move towards their next waypoint until they are close enough to receive their next destination. Once agents have reached the waypoint closest to their final destination, they navigate directly to the location. This allows for manual or assisted creation of realistic and efficient agent movement paths around obstacles.
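The waypoint-following behaviour above can be sketched as a simple 2D movement loop. This is an illustrative sketch, not the actual library API: the function names, the arrival radius, and the list-of-waypoints representation are all assumptions.

```python
import math

ARRIVAL_RADIUS = 0.5  # "close enough" distance at which the next waypoint is handed out

def step_towards(position, target, speed):
    """Move `position` towards `target` by at most `speed` units."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return list(target)
    return [position[0] + speed * dx / dist, position[1] + speed * dy / dist]

def navigate(position, waypoints, destination, speed=1.0):
    """Advance one simulation step along the waypoint chain."""
    if waypoints:
        target = waypoints[0]
        if math.hypot(target[0] - position[0], target[1] - position[1]) <= ARRIVAL_RADIUS:
            # Close enough: drop this waypoint and receive the next destination.
            waypoints.pop(0)
            return navigate(position, waypoints, destination, speed)
        return step_towards(position, target, speed)
    # Past the final waypoint: navigate directly to the destination.
    return step_towards(position, destination, speed)
```

Repeatedly calling `navigate` each step routes an agent around obstacles via its waypoints before heading straight for its final destination.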
Our first course delves into computational economics and introduces users to five basic agent-based models, explaining step-by-step how to use those models in HASH.
We’re releasing two new simulation templates that make it easy to use your data to generate realistic models.
Modern Warehouse: A simulation template for generating a warehouse with realistic proportions and picking operations. Forklifts move between the racks and crates, picking up orders by following a series of waypoints that determine their routes. Forklifts will orient to the nearest waypoint and make sensible decisions to get to their pickup and delivery locations.
Cloud Infrastructure: A simulation template for creating representations of cloud infrastructure with Terraform. Using either a Terraform file or manual specification, the simulation generates a model of a Kubernetes cluster with requests coming in from real request data or estimated distributions.
By default, hCore retains all the state data from each step in a simulation. This limits how long you can run a sim for, as the state data eventually uses all the available memory.
We’ve introduced an option to retain only the most recent data – for any number of steps you choose – allowing sims to run with a much, much lower memory footprint.
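The "retain only recent data" idea amounts to a bounded buffer that keeps the last N steps and discards older ones. A minimal sketch, assuming a simple buffer class; the name and API are illustrative, not hCore internals:

```python
from collections import deque

class RecentStateBuffer:
    """Keep only the most recent `max_steps` simulation states."""

    def __init__(self, max_steps):
        # deque with maxlen silently evicts the oldest entry when full.
        self._states = deque(maxlen=max_steps)

    def record(self, step, state):
        self._states.append((step, state))

    def retained_steps(self):
        return [step for step, _ in self._states]

buffer = RecentStateBuffer(max_steps=3)
for step in range(10):
    buffer.record(step, {"agents": []})  # state payload elided

print(buffer.retained_steps())  # only the 3 most recent steps remain
```

Because memory use is bounded by `max_steps` rather than total run length, a simulation can run indefinitely without exhausting available memory.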
Analysis can still be computed for the entire run by watching the analysis tab as the sim is computing.
Memory efficiency has also been improved elsewhere, most significantly by plugging a memory leak in the 3D viewer. Overall, this reduces memory use to around a third of its previous level for typical simulations, even when retaining all step data.
Until now, analysis metrics could only be exported as JSON, requiring further transformation in order to use this data in traditional spreadsheet-based software like Excel or Google Sheets.
Metrics can now additionally be downloaded as a structured CSV file for direct easy import into third-party applications, allowing you to complement in-IDE analysis with your other favourite tools.
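The JSON-to-CSV transformation that previously had to be done by hand looks roughly like this. The metrics shape assumed here, a mapping from metric name to a per-step list of values, is an illustration; the actual export format may differ.

```python
import csv
import io
import json

# Hypothetical exported metrics: one list of per-step values per metric.
metrics_json = json.dumps({
    "infected": [1, 3, 9],
    "recovered": [0, 1, 4],
})

metrics = json.loads(metrics_json)
output = io.StringIO()
writer = csv.writer(output)
writer.writerow(["step"] + list(metrics))             # header row
for step, values in enumerate(zip(*metrics.values())):
    writer.writerow([step, *values])                  # one row per simulation step

print(output.getvalue())
```

The resulting CSV has one column per metric and one row per step, ready for direct import into Excel or Google Sheets.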
To download simulation state and analysis data, right click on a run in the activity history, and click ‘Export run data’.
This week we’ve shipped a bunch of smaller changes that improve platform quality of life.
We squashed a ton of bugs, including a couple of notable nasties.
We’ve also moved the HASH glossary into our open-source monorepo, so you can add pages or edit definitions directly. Feel free to check out our contributor guidelines and open up a Pull Request! Finally, we’ve migrated our docs away from GitBook and into a new, unified ‘learn HASH’ website.