What I am trying to achieve is to account for possible increased future complexity of the storage, not just increased future complexity of the logic.
Apparently, as far as I understand current upgradeability standards, if I want to introduce more complex storage into an already functioning system, I have to either deploy a side storage contract, or put that complexity into a newer version of the implementation contract. Basically, what I have to do is build storage on top of storage. And if there is a need to increase the complexity of the storage even more in the future, it will require either deploying yet another side storage contract, or moving all storage from the older implementation to a newer one. Something like that? But how would you actually operate with all that complexity spread over many contracts? Isn't it just a flawed solution? Isn't it more convenient to have a mapping to a string, which can contain a possibly infinite amount of complexity?
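For what it's worth, my understanding of the "side storage" option is something like the EternalStorage pattern: a separate contract holds generic key-value mappings, and the logic contract reads and writes through it. A minimal sketch (contract and function names are my own, just for illustration, not audited code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Generic key-value side storage; the logic contract computes keys
// (e.g. keccak256(abi.encodePacked("staked", user))) and stores values here.
contract SideStorage {
    address public owner; // in practice this would be the logic/proxy contract

    mapping(bytes32 => uint256) private _uints;
    mapping(bytes32 => bool) private _bools;

    constructor() { owner = msg.sender; }

    modifier onlyOwner() {
        require(msg.sender == owner, "not authorized");
        _;
    }

    function setUint(bytes32 key, uint256 value) external onlyOwner { _uints[key] = value; }
    function getUint(bytes32 key) external view returns (uint256) { return _uints[key]; }

    function setBool(bytes32 key, bool value) external onlyOwner { _bools[key] = value; }
    function getBool(bytes32 key) external view returns (bool) { return _bools[key]; }
}
```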
So, as far as I understand how upgradeability standards are designed, we could have this as an example of storage which could be upgraded:
mapping (address => uint256) private _balances;
uint public reserved;

struct Proposal {
    uint id;
    bool executed;
}
And if we need to upgrade and alter that storage in the future (for example, something gets deprecated, or more complexity is added), then what we will have to do is deploy:
mapping (address => uint256) private _balances;
mapping (address => bool) private _staked;

struct Proposal {
    uint id;
    bool executed;
    bool pending;
}
Correct?
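For reference, my current understanding is that the usual rule with proxy upgrades is append-only: the new version must keep every existing variable in the same order with the same type, and may only add new variables at the end. So a layout-safe upgrade would look something like this (a sketch, not audited code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// V1 storage layout.
contract StorageV1 {
    mapping(address => uint256) private _balances; // slot 0
    uint256 public reserved;                       // slot 1
}

// V2 keeps V1's variables in the same slots and only appends.
// Removing or reordering _balances/reserved would corrupt the
// proxy's storage, so even a deprecated variable stays in place.
contract StorageV2 {
    mapping(address => uint256) private _balances; // slot 0, unchanged
    uint256 public reserved;                       // slot 1, kept even if unused
    mapping(address => bool) private _staked;      // slot 2, new
}
```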
==============================================================================
And what I am thinking is: why bother? Just use strings as a reserve, so the first storage could look like this instead:
mapping (address => uint256) private _balances;
mapping (address => string) private _stringValues; // reserved for future unaccounted values

struct Proposal {
    uint id;
    bool executed;
    string stringValues;
}
An example of how a value could be stored:
_stringValues[msg.sender] = "{bool_authorized=true,bool_staked=true,uint_lended=1.000000001000000000}";
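If I went this route, I suppose using abi.encode instead of a hand-rolled string format would at least avoid parsing strings on-chain. A sketch of what I mean (names and field choices are my own):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BytesReserve {
    // Reserved for future unaccounted values, ABI-encoded.
    mapping(address => bytes) private _reserved;

    // Pack (authorized, staked, lended) into one bytes blob.
    function set(bool authorized, bool staked, uint256 lended) external {
        _reserved[msg.sender] = abi.encode(authorized, staked, lended);
    }

    // Writer and reader must agree on field order and types;
    // decoding with the wrong tuple shape reverts or garbles values.
    function get(address user)
        external
        view
        returns (bool authorized, bool staked, uint256 lended)
    {
        (authorized, staked, lended) = abi.decode(_reserved[user], (bool, bool, uint256));
    }
}
```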
It also could be of type bytes32 instead and look like this:
_bytes32Values[msg.sender] = "{b_a=t,b_s=t,u_l=1.000000001}";
And it could be even better: a dynamic, practically unbounded array of bytes32 values could be cheaper and easier, no?
mapping (address => bytes32[]) private _bytes32Values;
So, surprisingly for me, what I am actually talking about is reserving as many variations of bytes arrays as possible, which could look like this:
mapping (address => bytes32[]) private _bytes32Values;
mapping (address => mapping (address => bytes32[])) private _bytes32ValuesAddyAddy;
mapping (address => mapping (uint => bytes32[])) private _bytes32ValuesAddyUint;
// etc
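Just to spell out what using such a reserve would look like in practice: every value has to be cast to and from bytes32, and the writer and reader have to agree on which index means what (the index constants below are my own invention, an implicit schema):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Bytes32Reserve {
    mapping(address => bytes32[]) private _bytes32Values;

    // Implicit schema: index 0 = staked flag, index 1 = lended amount.
    uint256 private constant IDX_STAKED = 0;
    uint256 private constant IDX_LENDED = 1;

    function init() external {
        // Reserve two slots for this user.
        _bytes32Values[msg.sender] = new bytes32[](2);
    }

    function setStaked(bool staked) external {
        _bytes32Values[msg.sender][IDX_STAKED] = bytes32(uint256(staked ? 1 : 0));
    }

    function getLended(address user) external view returns (uint256) {
        return uint256(_bytes32Values[user][IDX_LENDED]);
    }
}
```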
There would be less need to upgrade storage or add side storage, a lower possibility of storage collisions in general, it would all be in one contract, and it seems more convenient to me.
I could still be terribly wrong; it could be that a side storage contract, or putting part of the storage into a newer implementation, is easier or more effective. What am I missing? Is this actually a viable solution, or should I just try harder to understand the current upgradeability standards?
I am thinking that when OpenZeppelin decided to go with the unstructured storage solution, using byte arrays was maybe on the table and was rejected for some reason. Then the question is: why? Was it because unstructured storage is more comprehensive in some way than using unbounded arrays? I don't want to ruin the possible work of my whole life by accidentally mixing up storage in my complex project.
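For context on "unstructured storage": as I understand OpenZeppelin's approach, it means writing a value to a fixed pseudo-random slot (e.g. the keccak256 hash of a unique label) with inline assembly, so it cannot collide with sequentially allocated state variables. A minimal sketch of the idea (not OpenZeppelin's actual code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract UnstructuredStorageExample {
    // A fixed slot derived from a unique label; normal state variables
    // are laid out at slots 0, 1, 2, ..., so a hashed slot won't collide.
    bytes32 private constant IMPL_SLOT = keccak256("example.proxy.implementation");

    function _setImplementation(address impl) internal {
        bytes32 slot = IMPL_SLOT;
        assembly { sstore(slot, impl) }
    }

    function _getImplementation() internal view returns (address impl) {
        bytes32 slot = IMPL_SLOT;
        assembly { impl := sload(slot) }
    }
}
```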
Also, if anybody has examples of live upgradeable contract repositories on GitHub, please link them. I am really new to all this, and you guys will be doing God's work if you help me understand what's better.