Does that mean Tether should have optimized itself for 112 million runs or more?
Would Tether even want to risk enabling optimization? Is there a point at which a contract like Tether's effectively stops benefiting from it?
What is the least buggy way of launching my token? I want my token to have as few bugs as possible, even if gas costs are higher. Should I leave optimization unchecked in that case?
For simple ERC20 tokens, enabling the optimizer typically saves only a few hundred gas per call. (For a simple benchmark, see https://github.com/ethereum/solidity/pull/11598/files and compare the "gas legacy" and "gas legacy optimized" figures there.)
Note that many OpenZeppelin-based ERC20 tokens have been deployed with the optimizer turned on, so it's unlikely that your token would run into optimizer bugs. From the list of known bugs (https://docs.soliditylang.org/en/latest/bugs.html) one can see that optimizer bugs are rather rare and have only affected rather artificial code.
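For reference, this is roughly how the optimizer is enabled when compiling via solc's standard-JSON input (a minimal sketch; the file name, source content, and runs value here are placeholders, not a recommendation):

```json
{
  "language": "Solidity",
  "sources": {
    "Token.sol": { "content": "..." }
  },
  "settings": {
    "optimizer": {
      "enabled": true,
      "runs": 200
    }
  }
}
```

In Remix, the same two knobs appear as the "Enable optimization" checkbox and the runs field next to it.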
Also, I would argue that Tether is a poor source of inspiration for token design, given its ability to blacklist anyone and burn anyone's funds at any point. AFAIK, they haven't enabled the optimizer in their contract.
On another forum someone told me that there are diminishing returns on the optimizer runs value in Remix, and that 200 would be enough for a token with heavy transaction volume. In other words, a token compiled with a runs value of 1,000,000 would not be 1,000 times more efficient than one compiled with 1,000, and most of the benefit is already captured at 200. Another person said that 1,000 would be good.
What would you recommend?
What would be the downside of using a runs value of 1,000,000,000?
No, definitely not. I don't think there is a big downside to using 1,000,000,000 as the runs value. It may result in larger deployed bytecode (and thus a higher deployment cost), but cheaper runtime costs. You can experiment with your own contract and compare.
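The trade-off the runs value tunes can be sketched with some simple arithmetic. All gas figures below are made-up placeholders for illustration, not measurements of any real contract:

```python
# Hypothetical sketch of the deploy-cost vs runtime-cost trade-off
# that the optimizer "runs" setting tunes: a high runs value tells
# the compiler to favor cheaper calls at the price of larger,
# more expensive-to-deploy bytecode.

def total_gas(deploy_gas: int, gas_per_call: int, calls: int) -> int:
    """Total gas spent over the contract's lifetime."""
    return deploy_gas + gas_per_call * calls

# Low runs: cheaper deployment, slightly pricier calls (assumed numbers).
low_runs = total_gas(deploy_gas=900_000, gas_per_call=52_000, calls=1_000_000)

# High runs: pricier deployment, slightly cheaper calls (assumed numbers).
high_runs = total_gas(deploy_gas=1_100_000, gas_per_call=51_500, calls=1_000_000)

# Over a heavily-used token's lifetime, the per-call savings dwarf
# the one-time deployment overhead.
print(high_runs < low_runs)
```

This is why a token expecting Tether-like call volumes loses almost nothing by picking a very high runs value: the deployment premium is paid once, while the per-call savings recur on every transfer.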