Key concepts in simulation and modeling explained
There are two main approaches to building agent-based simulations: object-oriented programming and the actor-based model.
ABMs simulate entities in virtual environments, or digital twins, in order to help better understand both entities and their environments.
Applicant tracking systems help employers manage recruitment and hiring.
Autocorrelation is a measure of the degree of similarity between any time series and a lagged or offset version of itself over successive time intervals.
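As a minimal sketch of the idea, the similarity at a given lag can be computed as the correlation between a series and an offset copy of itself (the helper name `autocorrelation` here is illustrative, not a standard API):

```python
import numpy as np

def autocorrelation(series, lag):
    """Pearson correlation between a series and a lagged copy of itself."""
    x = np.asarray(series, dtype=float)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# A pure sine wave lines up exactly with itself one full period later,
# so its autocorrelation at that lag is 1.
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 50)
print(round(autocorrelation(signal, 50), 2))  # 1.0
```

Periodic signals show strong autocorrelation at lags matching their period; white noise shows almost none at any lag.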
Business Intelligence allows companies to make data-driven decisions.
Business Process Modeling (BPM) helps organizations catalog, understand and improve their processes.
Content management systems allow you to build and manage websites.
Customer relationship management systems track and coordinate interactions between a company and its customers.
If you don’t know your DAGs from your dogs, you can finally get some clarity and sleep easily tonight. Learn what makes a Directed Acyclic Graph a DAG.
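The defining property of a DAG is that following directed edges can never bring you back to where you started. A small sketch (the `is_dag` helper is illustrative) that checks this with a depth-first search:

```python
from collections import defaultdict

def is_dag(edges):
    """Return True if the directed graph contains no cycles (i.e. it is a DAG)."""
    graph = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        graph[src].append(dst)
        nodes.update((src, dst))

    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return True
        if node in visiting:  # back-edge: we returned to a node on the current path
            return False
        visiting.add(node)
        acyclic = all(visit(successor) for successor in graph[node])
        visiting.discard(node)
        done.add(node)
        return acyclic

    return all(visit(node) for node in nodes)

print(is_dag([("a", "b"), ("b", "c")]))  # True: a -> b -> c never loops back
print(is_dag([("a", "b"), ("b", "a")]))  # False: a -> b -> a is a cycle
```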
Data Drift is the phenomenon where the statistical properties of input data change over time, degrading model performance.
Data meshes are decentralized approaches to data architecture, in which individual domain teams own and serve their own data.
Data Mining is a process applied to find unknown patterns, correlations, and anomalies in data. Through mining, meaningful insights can be extracted from data.
Data pipelines are processes that result in the production of data products, including datasets and models.
Deep Reinforcement Learning (DRL) is a subset of Machine Learning in which agents learn to solve tasks on their own, and can thus discover new solutions independent of human intuition.
Diffs are used to track changes between different versions or forks of a project, providing an overview regarding files changed, and the nature of those changes.
Digital twins are detailed simulated analogues of real-world systems.
Discrete Event Simulation (DES) is a modeling approach in which the simulation jumps from one event to the next, with each event occurring separately and instantaneously, rather than advancing along a continuous timescale.
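The core of a DES engine is a priority queue of future events ordered by time; a minimal sketch (event names here are invented for illustration):

```python
import heapq

# The event queue: (time, description) pairs, ordered by time.
# Note events are pushed out of chronological order...
events = []
heapq.heappush(events, (2.0, "machine breaks down"))
heapq.heappush(events, (0.5, "customer arrives"))
heapq.heappush(events, (1.2, "customer served"))

# ...but the loop always pops the earliest event next, so the simulation
# clock jumps straight from one event time to the next.
log = []
while events:
    time, name = heapq.heappop(events)
    log.append((time, name))

print(log)  # events processed at t=0.5, 1.2, 2.0
```

Because nothing is modeled between events, a DES run can skip over long idle stretches instantly, which is what makes the approach efficient for queueing and logistics problems.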
Ego networks are a framework for local analysis of larger graphs.
Enterprise resource planning uses an integrated software system to manage a business's day-to-day activities.
An electronic healthcare standard for data interoperability.
Forking something means to create a copy of it, allowing individual developers or teams to work on their own versions of it, in safe isolation.
Graph Databases are a type of database that emphasizes the relationships between data.
Graph representation learning is a more tailored way of applying machine learning algorithms to graphs and networks.
Knowledge graphs are information-dense inputs to machine learning algorithms, and can capture more human-readable outputs of algorithms.
Knowledge Graphs contextualize data and power insight generation.
Machine Learning is a subfield of Artificial Intelligence where parameters of an algorithm are updated from data inputs or by interacting with an environment.
Merging is the process of reconciling changes between two versions of a project. In HASH, merging projects is handled by submitting, reviewing, and approving “merge requests”.
Metadata is data about data. It’s quite simple, really. Learn more about how it’s used within.
Models tend to become less accurate over time.
There are lots of ways to license simulation models. Here we outline some key considerations and things to be aware of.
There are lots of ways to share simulation models: blackbox, greybox, closed, open, transparent, and output-only. Here we explain what these terms all mean.
Multi-Agent Systems represent real-world systems as collections of intelligent agents.
Artificial Neural Networks are computer models inspired by animal brains. They consist of collections of nodes, arranged in layers, which transfer signals.
The key to finding the best solution to any problem.
Parameters control specific parts of a system's behavior.
Process mining is an application of data mining with the purpose of mapping an organization’s processes. It is used to optimize operations, and identify weaknesses.
Project management software is used to manage teams completing complex projects.
Robotic process automation uses software to perform repeatable business tasks.
Robustness is a measure of a model's accuracy when presented with novel data.
Schemas are descriptions of things: agents in simulations, and the actions they take. They help make simulations interoperable, and data more easily understood.
Simulation Models seek to demonstrate what happens to environments and agents within them, over time, under varying conditions.
Single synthetic environments allow you to build, run, and analyze data-driven models and simulations.
Stochasticity is a measure of randomness. The state of a stochastic system can be modeled but not precisely predicted.
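A small sketch of this distinction, using a random walk (the `stochastic_walk` helper is illustrative): individual runs are unpredictable, yet the process as a whole can still be modeled and reproduced.

```python
import random

def stochastic_walk(steps, seed=None):
    """Take `steps` random +1/-1 steps from zero and return the end position."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += rng.choice((-1, 1))
    return position

# Two unseeded runs of the same model will usually end in different states:
# we can describe the distribution of outcomes, but not predict any one run.
runs = [stochastic_walk(100) for _ in range(5)]
print(runs)

# Fixing the random seed makes a particular run reproducible.
assert stochastic_walk(100, seed=42) == stochastic_walk(100, seed=42)
```

This is why stochastic simulations are typically run many times and analyzed statistically, rather than judged on a single outcome.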
Generating data that mimics real data for use in machine learning.
System Dynamics models represent a system as a set of stocks and the rates of flows between them.
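A minimal stock-and-flow sketch, integrated with Euler steps (the tank scenario and parameter values are invented for illustration): a water tank is the stock, with a constant inflow and an outflow proportional to the current level.

```python
def simulate_tank(level, inflow, outflow_rate, dt, steps):
    """Euler-integrate a single stock with one inflow and one level-dependent outflow."""
    history = [level]
    for _ in range(steps):
        net_flow = inflow - outflow_rate * level  # net rate of change of the stock
        level += net_flow * dt
        history.append(level)
    return history

levels = simulate_tank(level=0.0, inflow=10.0, outflow_rate=0.5, dt=0.1, steps=200)

# The stock settles toward the equilibrium where inflow equals outflow:
# 10 / 0.5 = 20 units.
print(round(levels[-1], 1))  # 20.0
```

Larger System Dynamics models are built the same way: more stocks, more flows connecting them, and feedback loops where a stock's level influences its own flows.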
Time series data is data that has been indexed, listed, or graphed in time order. For example, the daily closing value of the NASDAQ, the price of a cryptocurrency per second, or a single step in a simulation run.
In continuous time, variables may have specific values for only infinitesimally short amounts of time. In discrete time, values are measured once per time interval.