Simulation as Tokenomics Engineering Method

Before we begin, let's define the "why" of it all: what a project team actually needs, and how simulation can meet those needs.

Suppose you are developing an innovative new crypto product, or building a clone of one already on the market with some improvements. It doesn't matter which. You've worked everything out: how the token will be used in the project, how NFTs will be used, how the project will earn, where to charge fees and how much, and so on. According to the preliminary calculations everything looks clean and smooth, and it's time to design the tokenomics.

In most cases, the question comes down to: how many tokens to issue, to whom to distribute them, and on what schedule to unlock them? Then you start reading tea leaves: how to split it all up, what vesting terms to set, how to keep everyone happy. This is an incredibly bad approach to tokenomics design.

What to do?

Any crypto project operates under uncertainty, and when data is scarce, conventional methods of analysis are not suitable, because we have to operate on assumptions. Fortunately, science developed techniques for dealing with data scarcity long ago. Tools such as the Monte Carlo method and Markov chains are widely used in science, economics, games, and many other fields to solve mathematical problems.

The Monte Carlo method is used for the approximate numerical solution of mathematical problems that are difficult or impossible to solve analytically.

Markov chains are a mathematical tool for modeling random processes with discrete states, where the probability of the next state depends only on the current one.
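To make the two tools concrete, here is a minimal Python sketch: a Monte Carlo estimate of π by random sampling, and a tiny Markov chain walking between two arbitrary placeholder states (`idle`, `active`) under an assumed transition matrix. Nothing here is crypto-specific yet; it only illustrates the mechanics.

```python
import random

# Monte Carlo: estimate pi by sampling random points in the unit square
# and counting the share that falls inside the quarter circle.
def estimate_pi(samples: int) -> float:
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

# Markov chain: step through discrete states using transition probabilities.
# The states and probabilities are arbitrary placeholders.
TRANSITIONS = {
    "idle":   {"idle": 0.6, "active": 0.4},
    "active": {"idle": 0.3, "active": 0.7},
}

def walk(start: str, steps: int) -> list[str]:
    state, path = start, [start]
    for _ in range(steps):
        states, probs = zip(*TRANSITIONS[state].items())
        state = random.choices(states, weights=probs)[0]
        path.append(state)
    return path

random.seed(0)
print(estimate_pi(100_000))  # close to pi
print(walk("idle", 5))
```

The same pattern — a transition table plus repeated random sampling — scales to arbitrarily complex user-behavior models.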

How are they used in crypto projects?

Let's imagine that you have developed a protocol, a game, or any other crypto project with different variants of user behavior. The options can be random or drawn from a set of predefined events, and the events can be related to each other or independent. That is, after a certain action the user has a set of possible ways to continue the interaction. For example: the user bought tokens and staked them; his next step is one of three options: he simply waits, he adds more tokens to the stake, or he withdraws tokens from the stake.

There can be many such choice points over the life of a crypto project, at every step. We cannot know in advance how the user will act at any given moment, but with Markov chains we can assign probabilities to the events possible at each specific point, and with the Monte Carlo method we can explore the space of choices by randomly sampling the possible outcomes.
As mentioned above, we are working with a lack of data, and this translates into freedom of choice for the user. If his choice affects the state of the system, and we know how, we can set up an experiment in which we simulate a sequence of user choices and observe how it affects the behavior of the system.
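The staking example above can be sketched as a simple simulation. Everything here is an assumption for illustration: the action probabilities, the fixed top-up size, and the half-position withdrawal.

```python
import random

# Assumed probabilities for the staker's next move; in a real study
# these are the hypotheses you want to test.
ACTIONS = {"wait": 0.5, "add": 0.3, "withdraw": 0.2}

def simulate_staker(initial_stake: float, steps: int) -> float:
    """Run one sequence of choices and return the final staked amount."""
    stake = initial_stake
    for _ in range(steps):
        action = random.choices(list(ACTIONS), weights=list(ACTIONS.values()))[0]
        if action == "add":
            stake += 100       # assumed fixed top-up size
        elif action == "withdraw":
            stake *= 0.5       # assumed: the user pulls half the position
    return stake

random.seed(42)
print(simulate_staker(1000, 10))
```

Each call is one simulated user history; the system-level effect appears when thousands of such histories are aggregated.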

In this way, we can generate data based on assumptions. Consider, for example, a GameFi project where players have several possible paths:

1) Player buys an NFT, farms coins, upgrades ("pumps") the NFT, sells the remaining coins.
2) Player buys an NFT, farms coins, sells all coins.

Using Markov chains, we form a sequence of user actions: in each state, the choice of the next step is limited to the known possible outcomes.
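The two scenarios can be encoded as a small Markov chain. The 60/40 split between upgrading the NFT and selling everything is a made-up probability, exactly the kind of assumption a simulation study would vary:

```python
import random

# Transition table for the two hypothetical GameFi scenarios.
# An empty list marks a terminal state.
TRANSITIONS = {
    "buy_nft":   [("farm", 1.0)],
    "farm":      [("pump_nft", 0.6), ("sell_all", 0.4)],  # assumed split
    "pump_nft":  [("sell_rest", 1.0)],
    "sell_all":  [],
    "sell_rest": [],
}

def play_out(start: str = "buy_nft") -> list[str]:
    """Follow the chain from start until a terminal state is reached."""
    state, path = start, [start]
    while TRANSITIONS[state]:
        states, probs = zip(*TRANSITIONS[state])
        state = random.choices(states, weights=probs)[0]
        path.append(state)
    return path

random.seed(7)
print(play_out())
```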

For example, a player cannot open a chest until he reaches level 10. A player at level 5 therefore cannot perform the chest action, and the probability of that event in this state is 0. A player at level 10, however, can decide whether or not to open the chest.
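A level gate like this is just a state-dependent probability. A minimal sketch, where the 0.5 willingness above level 10 is an assumed figure, not a given:

```python
import random

def chest_probability(level: int) -> float:
    """Probability the player opens the chest in this state.
    Below level 10 the action is locked, so the probability is 0;
    0.5 at level 10+ is an assumed willingness to open it."""
    return 0.0 if level < 10 else 0.5

def decides_to_open(level: int) -> bool:
    return random.random() < chest_probability(level)

print(decides_to_open(5))  # always False: the action is locked at level 5
```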

In addition to this, we apply the Monte Carlo method: we play out the possible choices at random, many times over. We end up with multiple outcomes depending on the randomness of the choices.

The more complex and diverse the protocol, the more outcomes we get. This is the essence of simulation research: the data is generated randomly, but within the probabilities defined at each step.
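Running many random playthroughs and counting where they end up is the Monte Carlo part. A self-contained sketch with placeholder states and, for simplicity, uniform choices at each step:

```python
import random
from collections import Counter

# One playthrough: at each step the simulated user picks uniformly among
# the options allowed in the current state (state names are placeholders).
TRANSITIONS = {
    "start":    ["farm", "quit"],
    "farm":     ["upgrade", "cash_out"],
    "upgrade":  ["cash_out"],
    "quit":     [],
    "cash_out": [],
}

def run_once() -> str:
    state = "start"
    while TRANSITIONS[state]:
        state = random.choice(TRANSITIONS[state])
    return state

random.seed(0)
outcomes = Counter(run_once() for _ in range(10_000))
print(outcomes)  # distribution of terminal outcomes across 10,000 runs
```

The resulting frequency table is exactly the kind of generated dataset the next section analyzes.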

How do we use the data?

This is a good question, because the dataset obtained is just a collection of possible outcomes, and you need to extract value from it somehow. Once the data is generated, the traditional analytics phase begins: we analyze the dataset, identifying dependencies and patterns as if the data came from a real product.

1. First, we filter the data and identify negative behavioral scenarios. Remember the variants we sketched above? In a real protocol there could be hundreds of them, and some behaviors will lead to unsatisfactory results.

2. Once we have that data, we describe it.

3. Next, we choose positive scenarios. We also describe them.

4. The next step is a comparative analysis to identify which behaviors influence the outcome, and how.
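The four steps above can be sketched on a toy simulated dataset. The records, field names, and profit figures here are all invented for illustration:

```python
# Hypothetical simulated dataset: each record is one playthrough with its
# behavioral path and the resulting protocol profit (field names assumed).
runs = [
    {"path": ["buy", "farm", "sell_all"],          "protocol_profit": -20},
    {"path": ["buy", "farm", "pump", "sell_rest"], "protocol_profit": 35},
    {"path": ["buy", "farm", "sell_all"],          "protocol_profit": -15},
    {"path": ["buy", "farm", "pump", "sell_rest"], "protocol_profit": 40},
]

# Steps 1 and 3: filter negative scenarios, select positive ones.
negative = [r for r in runs if r["protocol_profit"] < 0]
positive = [r for r in runs if r["protocol_profit"] >= 0]

# Steps 2 and 4: describe each group, then compare to find which
# behaviors appear only in the bad outcomes.
def describe(group):
    paths = {tuple(r["path"]) for r in group}
    avg = sum(r["protocol_profit"] for r in group) / len(group)
    return paths, avg

neg_paths, neg_avg = describe(negative)
pos_paths, pos_avg = describe(positive)
print("harmful behaviors:", neg_paths - pos_paths)
```

On real simulation output the same comparison would run over thousands of records, but the logic — split by outcome, describe, diff the behaviors — is identical.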

The result of this work is an understanding of how user behavior affects the operation of the protocol. This allows us to constrain that behavior so that negative outcomes are eliminated or minimized; in other words, the user is no longer able to choose the options that lead to problems.

Where's the tokenomics here?

This is a painful question for us, because we do not consider token distribution to be tokenomics. Distribution is just a consequence of tokenomics; tokenomics itself is the set of protocol rules and the effective handling of user behavior.

In other words, tokenomics is the protocol itself!

Developing tokenomics means creating the rules by which tokens are issued, burned, held, or converted. These rules determine how the entire project works. Simulation, in turn, is a research and data-generation tool used to optimize those rules. Conclusions drawn from simulations can force changes to the tokenomics. This may include changes to the token unlock schedule, but more likely it will require deeper changes to the way the project works.
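One way to make "tokenomics as rules" tangible is to express the rules as explicit parameters that a simulation can sweep. All values below are hypothetical placeholders, not recommendations:

```python
from dataclasses import dataclass

# Tokenomics expressed as explicit protocol rules. Every field is a
# hypothetical parameter a simulation could vary between experiments.
@dataclass
class TokenRules:
    emission_per_epoch: float = 10_000  # tokens minted each epoch
    burn_fee: float = 0.02              # share of transfer volume burned
    unstake_lock_epochs: int = 7        # holding period before withdrawal

def circulating_after(rules: TokenRules, epochs: int,
                      volume_per_epoch: float) -> float:
    """Rough supply trajectory under the rules: emission minus burns."""
    supply = 0.0
    for _ in range(epochs):
        supply += rules.emission_per_epoch
        supply -= volume_per_epoch * rules.burn_fee
    return supply

print(circulating_after(TokenRules(), epochs=30, volume_per_epoch=50_000))
```

Feeding simulated user behavior into `volume_per_epoch`, instead of a constant, is where the simulation method and the rule set meet.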

Thus, tokenomics includes any rules the user encounters when interacting with tokens. And the simulation research method helps determine how well the tokenomics are functioning and what rules should be changed to prevent negative consequences.

Simulation is a way to look into the future and see its many possible outcomes. Interpreting the data correctly will allow you to understand what needs to be done, and what behavioral rules need to be designed, to gain control of that future.
