Mint ERC721 tokens in pseudorandom order

Hi,

Due to the specifics of the project I'm working on, I need a way to mint token IDs in a random fashion rather than sequentially. The requirements for randomness are not particularly strict, and block.timestamp + totalSupply() is enough for this particular case.

The problem comes as the number of minted tokens increases. Since we need to ensure a new token ID hasn't been minted yet, a !_exists(tokenId) lookup is required, and eventually the number of these calls grows large enough to exhaust the gas limit.
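Roughly, the mint currently looks like this (a simplified sketch, assuming OpenZeppelin Contracts 4.x where _exists() is available; names are made up):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/extensions/ERC721Enumerable.sol";

// Simplified version of the current approach: pick a pseudorandom starting ID and
// probe upwards until a free one is found. Each _exists() call is a storage read.
contract NaiveRandomMint is ERC721Enumerable {
    uint256 public constant MAX_SUPPLY = 10000;

    constructor() ERC721("Example", "EXM") {}

    function mint() external {
        require(totalSupply() < MAX_SUPPLY, "Sold out");
        uint256 tokenId = (block.timestamp + totalSupply()) % MAX_SUPPLY;
        // As the consecutive ranges of minted IDs grow, this loop is what
        // eventually exhausts the gas limit.
        while (_exists(tokenId)) {
            tokenId = (tokenId + 1) % MAX_SUPPLY;
        }
        _safeMint(msg.sender, tokenId);
    }
}
```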

(I know you might ask why not randomize the assets mapped to these IDs instead, but I wouldn't have to ask this question if that were possible: it is a strict project requirement.)

So the question is: is there a more computationally efficient way to do this, or is a sequential lookup the only option (which isn't going to work)?

Have you considered using a hash? The likelihood of the resulting token ID having been minted already is negligible.
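Something along these lines, for example (just a sketch assuming OpenZeppelin's ERC721; the hash inputs are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/extensions/ERC721Enumerable.sol";

// Sketch: derive the token ID from a hash, so IDs live in the full uint256 space
// and the chance of hitting an already minted ID is negligible.
contract HashIdMint is ERC721Enumerable {
    constructor() ERC721("Example", "EXM") {}

    function mint() external {
        uint256 tokenId = uint256(
            keccak256(abi.encodePacked(msg.sender, block.timestamp, totalSupply()))
        );
        // _safeMint reverts in the astronomically unlikely case of a collision.
        _safeMint(msg.sender, tokenId);
    }
}
```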

Actually, in the setup you propose with block.timestamp + totalSupply(), why would this result in clashes? It sounds relatively unlikely.


Thank you for the response.

Could you please elaborate on this proposal? What exactly has to be hashed?

The ID pool is still narrow: 10000 tokens. The timestamp construct is used as an offset modulo 10000 to pick a starting point for minting. You're correct that the clashes don't pose a problem initially, but as the token pool gets progressively depleted, the consecutive ranges of already-owned IDs grow, and the gas usage for the availability checks grows dramatically as well: a mint call with a single lookup uses 110889 gas, 139580 for 10 lookups, and 414080 for 100, which is already prohibitively expensive for minting a single token.

I see what you mean...

Why do you need this to be random? Would it be OK for the user to submit a seed with the mint function call, so that they can validate that the ID is available?
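Something roughly like this is what I mean (just a sketch; how the seed maps to an ID is up to you):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";

// Sketch: the caller computes the same seed-to-ID mapping off-chain first and only
// submits seeds that resolve to an ID that is still available, so the contract
// needs a single existence check instead of a loop.
contract CallerSeededMint is ERC721 {
    uint256 public constant MAX_SUPPLY = 10000;

    constructor() ERC721("Example", "EXM") {}

    function mint(uint256 seed) external {
        uint256 tokenId = uint256(keccak256(abi.encodePacked(msg.sender, seed))) % MAX_SUPPLY;
        require(!_exists(tokenId), "ID taken, try another seed");
        _safeMint(msg.sender, tokenId);
    }
}
```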

The collectible traits are going to be public from the start, and their IDs are tied to the token IDs of an existing collection. Randomized minting would have been the fairest option for the presale phase because it thwarts rarity snipers.

I've been investigating this problem further, and there seems to be no practical way to achieve the desired result, even by introducing additional storage variables. The easiest solution might have been an EnumerableSet populated with 10000 IDs to draw from, but Solidity can't initialize mapping values at declaration time, and populating the set after deployment would cost over $20k in gas fees alone.
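For the record, this is roughly the shape of what I was considering (a sketch only; the populate step is exactly what makes it impractical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/utils/structs/EnumerableSet.sol";

// Sketch of the EnumerableSet idea: draw a pseudorandom index from the set of
// remaining IDs, then remove the drawn ID.
contract DrawFromSet is ERC721 {
    using EnumerableSet for EnumerableSet.UintSet;

    EnumerableSet.UintSet private _availableIds;

    constructor() ERC721("Example", "EXM") {}

    // The deal-breaker: 10000 storage writes just to seed the set.
    function populate(uint256 from, uint256 to) external {
        for (uint256 id = from; id < to; id++) {
            _availableIds.add(id);
        }
    }

    function mint() external {
        uint256 remaining = _availableIds.length();
        require(remaining > 0, "Sold out");
        uint256 tokenId = _availableIds.at((block.timestamp + remaining) % remaining);
        _availableIds.remove(tokenId);
        _safeMint(msg.sender, tokenId);
    }
}
```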

So we're going to relax the requirements and move to classic sequential ID minting.


Just re-posting my answer from another related thread, as it's relevant here too (I came across both threads while doing my research on these forums):

I would like to share my ideas for solving this problem (I've been thinking a lot lately about fair random minting). Please follow the evolution of my ideas below. The first two assume all metadata is uploaded to IPFS prior to minting, for transparency and fairness; in that case we need to ensure the tokens are assigned randomly:

  1. My first idea was essentially what @swixx showed above with their code snippet (generate a random ID and check it against the list of minted token IDs), but it becomes progressively more expensive as the pool of non-minted tokens depletes, because you have to read a lot from expensive storage, as mentioned by @frangio. Not an option, especially for tokens with an average-to-large supply. Not to mention that on-chain RNG is prone to attacks like the already mentioned Meebits exploit.

  2. Getting the RNG seed off-chain (e.g. Chainlink VRF) solves the RNG problem, but does not solve the gas problem caused by the iterations, so it's still not an option.

Then eventually I came to option 3: sequential token IDs with batched metadata uploads. In this case, the contract creator would upload metadata to IPFS periodically, e.g. every N mints. With the total supply being T = M x N, M would be the number of such batches.

Now you might say this defeats the purpose of on-chain fairness, but I have a solution for that. The contract creator can hard-code an M-length array of MD5 digests, one per metadata batch, and then unveil the batch JSON files on their website as they are uploaded to IPFS. This guarantees that all the metadata was pre-generated in advance and was not tampered with afterwards (anyone can calculate the MD5 digest of a batch JSON and compare it with what's in the contract).
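A minimal sketch of just the commitment part (assuming OpenZeppelin Contracts 4.x; MD5 produces a 128-bit digest, hence bytes16, and the comparison against the published batch JSON happens off-chain; the actual mint/tokenURI logic is left out):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/access/Ownable.sol";

// Sketch of the batched-reveal commitment: the digest of every metadata batch is
// fixed at deployment, and batch URIs are only filled in as they go live on IPFS.
contract BatchedReveal is Ownable {
    bytes16[] public batchDigests;               // one pre-committed digest per metadata batch
    mapping(uint256 => string) public batchURI;  // revealed IPFS URI per batch

    constructor(bytes16[] memory digests) {
        batchDigests = digests;
    }

    // Called every N mints, once the corresponding batch JSON is on IPFS.
    function revealBatch(uint256 batchIndex, string calldata uri) external onlyOwner {
        require(batchIndex < batchDigests.length, "No such batch");
        require(bytes(batchURI[batchIndex]).length == 0, "Already revealed");
        batchURI[batchIndex] = uri;
    }
}
```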

The only risk with this approach is that the contract creator may disappear and never upload the remaining metadata. In that case it would affect some buyers in the last non-revealed batch, and the project would go bust afterwards anyway. But that problem is much bigger than a few buyers not getting their token metadata, so for serious projects this shouldn't be a concern.

I am happy to hear your thoughts on this.
