ERC1155 unbounded loops

I was told that unbounded loops are bad practice and I should omit them as much as possible.

However, while taking a look at how OpenZeppelin handles a similar problem, it turned out that the ERC1155 standard actually has no protection against huge arrays being passed into `safeBatchTransferFrom`, `_mintBatch`, or any of the other "batch" functions.

I believe that "unbounded loops" refers to loops which have no explicit stop condition (typically, `while` loops).

There are no such loops in the ERC1155 contract. In fact, unbounded loops (as well as loops with an expectedly large number of iterations) aren't really a problem by themselves, because the ecosystem embeds a "natural" protection mechanism: the block gas limit.

To put it simply, once a transaction exceeds the block gas limit (in this case, as a result of the loop iterating too many times), the entire transaction reverts.
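To illustrate, here is a minimal sketch (not the actual OpenZeppelin code) of the batch-loop pattern used by ERC1155-style contracts; the contract name and the `_balances` layout are simplified for the example:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BatchSketch {
    // token id => owner => balance (simplified)
    mapping(uint256 => mapping(address => uint256)) private _balances;

    function safeBatchTransferFrom(
        address from,
        address to,
        uint256[] calldata ids,
        uint256[] calldata amounts
    ) external {
        require(ids.length == amounts.length, "length mismatch");
        // The loop is bounded only by the length of the caller-supplied
        // arrays. If they are huge, the transaction simply runs out of gas
        // (hitting the block gas limit at worst) and reverts in full -
        // no partial state change survives.
        for (uint256 i = 0; i < ids.length; ++i) {
            _balances[ids[i]][from] -= amounts[i];
            _balances[ids[i]][to] += amounts[i];
        }
    }
}
```

Note that only the caller pays for an oversized batch; the contract's state is never left half-updated.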

I believe that the "bad practice" which you've been warned about refers to something else altogether, namely: you should avoid iterating over your contract's dynamic data structures without specifying boundaries.

For example, suppose you have an array of values which can grow to any arbitrary size as a result of user actions.

If you intend to expose (as part of your contract's API) any function that iterates over this array - whether for reading it or for updating it - then you should always allow the user to bound the number of iterations that the function executes (for example, by specifying a start index and an end index, or a start index and a length).

Thus, the entire array can still be iterated by the function, no matter how large it grows.
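As a concrete sketch of this pattern, here is a hypothetical contract (the names `values` and `resetRange` are made up for the example) whose array can grow without bound, but whose update function only ever touches a caller-chosen slice:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PaginatedUpdate {
    // Grows arbitrarily large as a result of user actions.
    uint256[] public values;

    function add(uint256 v) external {
        values.push(v);
    }

    // Update only values[begin..end). The caller picks a chunk size
    // small enough to fit in the block gas limit, and calls repeatedly
    // with successive ranges until the whole array is covered.
    function resetRange(uint256 begin, uint256 end) external {
        require(begin <= end && end <= values.length, "out of range");
        for (uint256 i = begin; i < end; ++i) {
            values[i] = 0;
        }
    }
}
```

For example, with 10,000 entries the caller might issue `resetRange(0, 1000)`, `resetRange(1000, 2000)`, and so on, each as a separate transaction.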

It may require more than a single transaction to complete the entire process, but it's better than reaching a point where the function becomes non-executable due to the block gas limit.

The reasoning is slightly different for read (`view`) functions, since no gas is actually spent when they are called off-chain (hence the block gas limit does not apply in this case).

But the basic motivation is the same, i.e., a read function iterating over a large data structure may cause your dapp (or whoever calls the function) to become slow and eventually non-responsive.

In addition to that, there can also be bandwidth restrictions imposed by your web3 provider (Infura, Alchemy, etc.), which would prevent you from calling the function once the returned data grows above a certain size.
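The same pagination pattern applies to `view` functions. A hypothetical getter (again, the names are made up for the example) that returns a caller-bounded slice instead of the whole array might look like:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PaginatedRead {
    uint256[] public values;

    function add(uint256 v) external {
        values.push(v);
    }

    // Return only values[begin..end), so the response size stays bounded
    // no matter how large the underlying array grows.
    function getRange(uint256 begin, uint256 end)
        external
        view
        returns (uint256[] memory slice)
    {
        require(begin <= end && end <= values.length, "out of range");
        slice = new uint256[](end - begin);
        for (uint256 i = begin; i < end; ++i) {
            slice[i - begin] = values[i];
        }
    }
}
```

The dapp can then page through the array off-chain, keeping each response well under any provider-imposed size limit.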