What is optimization?
Optimization is the minimization or maximization of one or more quantities within a system. For example, in operations management the goal of optimization may be to minimize waste in the production of goods or services. In industrial engineering the goal may be to improve resiliency and reduce downtime by introducing interchangeable parts, but only stocking them at a level proportional to the system’s needs.
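To make the spare-parts example concrete, here is a minimal sketch (not HASH code) of choosing a stock level that minimizes total expected cost. The cost model and all numbers are hypothetical, chosen only to illustrate what "optimization" means here: balancing the cost of holding inventory against the expected cost of running short.

```python
from math import comb

def expected_cost(stock_level, holding_cost=2.0, stockout_cost=500.0,
                  failure_prob=0.05, max_failures=10):
    # Holding cost grows with stock; shortage risk is the binomial
    # probability that failures exceed the stock on hand.
    shortage_risk = sum(
        comb(max_failures, k) * failure_prob**k * (1 - failure_prob)**(max_failures - k)
        for k in range(stock_level + 1, max_failures + 1)
    )
    return holding_cost * stock_level + stockout_cost * shortage_risk

# Try each candidate stock level and keep the cheapest.
best = min(range(11), key=expected_cost)
```

Exhaustively evaluating eleven candidate stock levels is trivial; the difficulty discussed below arises when many interacting parameters make exhaustive search infeasible.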
Optimization through simulation
Simulations or “digital twins” are often created to model real-world problems or environments in a safe, cost-effective fashion.
Often, the goal of creating these simulations is to find the optimal combination of policies or parameters for achieving some objective: e.g. to maximize revenues, minimize costs, improve the efficacy of an advertising campaign, or boost throughput in a factory.
Whilst configuring real-world environments and systems in different ways takes time, costs money, requires observation over a period to generate data, and is physically limited in its ability to test multiple setups at once, simulation allows many different versions of a system to be tested in parallel.
Simulation-based optimization involves running simulations many times, with different parameters on each occasion. Because HASH simulations can comprise thousands of policies and parameters, or more, this search process is automated. But even utilizing modern cloud infrastructure, trying all possible combinations of parameters (“grid searching”) to find the optimal strategy within a system is both time-consuming and expensive in terms of compute.
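The combinatorial cost of grid searching can be sketched as follows. The parameter names and the simulation stub are illustrative stand-ins, not HASH's actual API; the point is that every additional parameter multiplies the number of runs required.

```python
from itertools import product

# Hypothetical simulation parameters, each with a handful of candidate values.
param_grid = {
    "restock_threshold": [5, 10, 15, 20],
    "num_workers": [1, 2, 4, 8],
    "batch_size": [16, 32, 64, 128],
}

def run_simulation(params):
    # Stand-in for a full simulation run; returns a toy "cost" score
    # that happens to be minimized at (10, 4, 64).
    return (abs(params["restock_threshold"] - 10)
            + abs(params["num_workers"] - 4)
            + abs(params["batch_size"] - 64) / 16)

# Grid search: enumerate and evaluate every combination.
combos = [dict(zip(param_grid, values)) for values in product(*param_grid.values())]
best = min(combos, key=run_simulation)  # 4 * 4 * 4 = 64 simulation runs
```

Even this tiny three-parameter grid demands 64 runs; a realistic model with dozens of parameters quickly pushes exhaustive search out of reach, which is why smarter sampling is needed.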
Optimization experiments in HASH allow for more rapid discovery of results, whilst expending fewer compute resources. HASH does this by combining dozens of algorithms for efficiently sampling and searching the “possibility space” of parameters, and intelligently scheduling simulation runs.
Techniques used include Tree-structured Parzen Estimators (TPE), Bayesian optimization, Hyperband, gradient methods, and early stopping.
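To illustrate the scheduling idea behind Hyperband-style early stopping, here is a sketch of successive halving: give every candidate configuration a small simulation budget, keep the best-scoring half, and double the budget for the survivors. Everything here (the scoring stub, the noise model, the budgets) is illustrative, not HASH's implementation.

```python
import random

def evaluate(candidate, budget):
    # Stand-in for running a simulation for `budget` steps; a larger
    # budget yields a less noisy estimate of the candidate's true score.
    noise = random.gauss(0, 1.0 / budget)
    return candidate["true_score"] + noise

random.seed(0)
candidates = [{"id": i, "true_score": random.uniform(0, 1)} for i in range(16)]

budget = 1
while len(candidates) > 1:
    # Score everyone at the current budget, keep the top half,
    # and give survivors twice the budget next round.
    scored = sorted(candidates, key=lambda c: evaluate(c, budget), reverse=True)
    candidates = scored[: len(scored) // 2]
    budget *= 2

winner = candidates[0]
```

With 16 candidates this spends 16 + 8 + 4 + 2 = 30 evaluations, most of them cheap, rather than giving all 16 candidates the full budget; weak configurations are stopped early while promising ones earn more simulation time.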
At the same time, simulation runs are distributed and parallelized across hCloud, allowing for thousands of simulation runs per second.
Easy to use
HASH makes it possible to run large-scale optimization experiments in seconds, and provides a graphical interface for specifying objectives and constraints. As a result, even non-technical users can run experiments within simulations to inform business decision-making.
All HASH simulations are automatically ready to be run on hCloud, eliminating the need for data scientists and domain experts to engage in DevOps or infrastructure management.
Because it employs generative agent-based simulation, HASH can be used to predict the emergence of unexpected phenomena, both in predefined scenarios and under conditions of uncertainty. The resiliency of systems can be measured both in a general sense (probabilistically, across many runs) and in the face of specific individual scenarios.
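Measuring resiliency probabilistically across many runs can be sketched as a Monte Carlo experiment: run the same (hypothetical) simulation under random disruptions many times, then report the fraction of runs in which the system stays above a service threshold. The capacity model, disruption probability, and threshold below are all illustrative assumptions, not HASH's actual API.

```python
import random

def simulate_run(seed, disruption_prob=0.1, steps=100):
    # Toy model: each step, a disruption degrades capacity by 10%,
    # otherwise the system gradually recovers toward full capacity.
    rng = random.Random(seed)
    capacity = 1.0
    for _ in range(steps):
        if rng.random() < disruption_prob:
            capacity *= 0.9
        else:
            capacity = min(1.0, capacity * 1.01)
    return capacity

# Resiliency in the general, probabilistic sense: the share of many
# independent runs that end above an 80%-capacity service threshold.
runs = [simulate_run(seed) for seed in range(1000)]
resiliency = sum(c >= 0.8 for c in runs) / len(runs)
```

A single seeded run (e.g. `simulate_run(42)`) corresponds to measuring resiliency in the face of one specific scenario, while the aggregate fraction captures resiliency across the distribution of possible futures.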
As part of HASH Enterprise, simulations can be run on a scheduled basis, programmatically triggered via the HASH API, or run automatically in response to observed changes in external data (e.g. sensor data, a database, or remote data warehouse).