Integrating hardhat-deploy and openzeppelin/hardhat-upgrades

Hey all, creator of the hardhat-deploy plugin here.
I would love to integrate the OpenZeppelin proxy into the tool.

The tool currently supports its own proxy, but I started working on support for external proxy code, with special support for OpenZeppelin transparent proxies, here: https://github.com/wighawag/hardhat-deploy/tree/openzeppelin-transparent-proxy (work in progress).

The main thing that changes is the need for a ProxyAdmin, but I think doing something similar to what you do in the hardhat-upgrades plugin, that is, using one ProxyAdmin per project, should not be hard to add.
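
To make that concrete, here is a rough sketch of the "one ProxyAdmin per project" idea as a hardhat-deploy script (the contract and deployment names are only examples, and the options on the work-in-progress branch may end up looking different):

```ts
import { DeployFunction } from 'hardhat-deploy/types';

const func: DeployFunction = async (hre) => {
  const { deploy } = hre.deployments;
  const { deployer } = await hre.getNamedAccounts();

  // Deployed once per project; later runs reuse the existing deployment.
  const admin = await deploy('DefaultProxyAdmin', {
    contract: 'ProxyAdmin', // assumes the OpenZeppelin artifact is available
    from: deployer,
    log: true,
  });

  const impl = await deploy('Greeter_Implementation', {
    contract: 'Greeter',
    from: deployer,
    log: true,
  });

  // Every transparent proxy in the project points at the same ProxyAdmin.
  // TransparentUpgradeableProxy constructor args: (logic, admin, data).
  await deploy('Greeter_Proxy', {
    contract: 'TransparentUpgradeableProxy',
    from: deployer,
    args: [impl.address, admin.address, '0x'],
    log: true,
  });
};
export default func;
```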

I have not looked into the details yet, but what would be great is a JS library exposing the storage layout verification, if that is not already available on its own, so I can easily reuse it in my plugin.


Hi @wighawag! I would definitely like to see a smooth integration between the two plugins so I think we should work together towards that.

One concern that I have is that I think we should avoid redundancy of features between the two plugins. I don’t think exposing the storage layout verification on its own is the approach we should follow for this integration, because there is another component to storage layout verification when upgrading a proxy: keeping track of the storage layout of previous versions. We do this in our own metadata files (e.g. .openzeppelin/mainnet.json), and for now we really want to have full control of this metadata and its format, in case we have to change it in the future.

What this means is that I think hardhat-deploy should integrate with hardhat-upgrades at a higher level, allowing for this separation of concerns.

The question is how would the user use them together?

In PR https://github.com/OpenZeppelin/openzeppelin-upgrades/pull/273 we’re considering an option to use the upgrades plugin with a custom deployment function, which we might call an “executor”. Could this be one way the two plugins might work together? For example:

await hre.upgrades.deployProxy(Foo, { executor: hre.deployments.executor })
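
To illustrate the idea only, one possible shape for such an executor, i.e. a callback the upgrades plugin would invoke instead of deploying the contract itself. This is not the interface from the PR, just a hypothetical sketch:

```ts
// Hypothetical executor type: receives the prepared deployment and decides
// how to broadcast and record it.
type Executor = (deployment: {
  contractName: string;
  abi: unknown[];
  bytecode: string;
  args: unknown[];
}) => Promise<{ address: string; transactionHash?: string }>;
```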

I’m not yet familiar with hardhat-deploy, so could you let me know if this would make sense?

From my initial look at the readme I don’t think this would work 🙂 but I’d like to hear your thoughts, and any ideas you might have about how the two plugins could work together.


Hi @frangio, thanks for your comment.

As far as I can see, we are talking about two different user flows:

  1. Someone using hardhat-deploy who wants to use the hardhat-upgrades API to perform proxy operations (deployment and upgrade), basically mixing two different APIs for similar tasks.
  2. Someone who wants to use hardhat-deploy and its existing proxy API for OpenZeppelin proxies (with the added validation features), and so use one single API.

For 1, the executor option might work (though I think with some changes, as it seems from the PR that deployment is tied to the ethers factory), but I am not sure what the user would expect. Would they need to save both the .openzeppelin files and the deployments files? I guess so. This feels messy to me. It also makes the hardhat-upgrades API less elegant, as users need to specify an executor.

I personally think that option 2 is more natural for hardhat-deploy users who are already using it for contracts that do not need a proxy. It allows them to continue using what they are already familiar with.
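
For reference, option 2 would look roughly like this in a hardhat-deploy script, based on the work-in-progress branch linked above (so the exact option names may still change):

```ts
import { DeployFunction } from 'hardhat-deploy/types';

const func: DeployFunction = async (hre) => {
  const { deployer } = await hre.getNamedAccounts();

  // One call: hardhat-deploy deploys the implementation and the OpenZeppelin
  // transparent proxy, runs the initializer, and on later runs upgrades the
  // proxy whenever the implementation changes.
  await hre.deployments.deploy('Greeter', {
    from: deployer,
    proxy: {
      proxyContract: 'OpenZeppelinTransparentProxy',
      execute: { methodName: 'initialize', args: ['hello'] },
    },
    log: true,
  });
};
export default func;
```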

Now, as mentioned, this could be achieved by having hardhat-upgrades extract some of its features (most notably validation) as a library.

You mention that this would not be possible because of the requirement for format independence. I don’t think this is a problem: the library can be designed so that API users (here, hardhat-deploy) provide the mechanism for storage and retrieval of the validation information. This could be achieved via a load and save callback mechanism, for example. hardhat-deploy would not even need to parse the data; it could just save it as part of the deployment and give it back to the library when asked.
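
A minimal sketch of what that could look like, where every name is hypothetical and the only point is that the caller stores an opaque blob and hands it back:

```ts
// Hypothetical interface: the caller owns persistence, the library owns the format.
interface ValidationStore {
  load(networkName: string): Promise<string | undefined>; // previously saved blob, if any
  save(networkName: string, data: string): Promise<void>; // updated blob to persist
}

// Hypothetical entry point: throws if the new contract is not a safe upgrade
// according to whatever history the store returns, and saves the updated history.
declare function validateUpgrade(
  solcOutput: unknown,
  contractName: string,
  store: ValidationStore,
): Promise<void>;
```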

hardhat-deploy already saves all previous upgrade info, including bytecode, ABI, metadata, and any solc output (including storageLayout), so it would not be hard to also save hardhat-upgrades-specific info for each upgrade.

You might be thinking that hardhat-deploy could still use hardhat-upgrades directly behind the scenes to offer the same experience as described in option 2, without needing hardhat-upgrades to extract its features as a library. There are a few issues that make this not a great option:

  • It adds some extra files, as mentioned (the .openzeppelin folder).
  • It makes hardhat-deploy depend on hardhat-ethers, which hardhat-upgrades adds as a dependency. (While hardhat-deploy uses ethers behind the scenes, it is agnostic to the library users choose to use.)
  • It could constrain some hardhat-deploy features. For example, hardhat-upgrades currently does not support multiple networks with the same chainId, which is something both hardhat and hardhat-deploy support.

Looking forward to your thoughts, as I agree it would be great if hardhat-deploy and OpenZeppelin proxies could work well together.


@wighawag I understand your points and I think storing our information directly with the hardhat-deploy metadata could work.

However, building the functionality that would allow that kind of deep integration is beyond what we can tackle with our current resources, so I think it will be easier to simply expose the storage layout functionality for you to integrate into hardhat-deploy as you see fit.

I’m still concerned about us eventually modifying the format of the metadata that we expect, but we can figure out how to allow hardhat-deploy to migrate to that new format. (This is just a precaution, there are no concrete plans, and I don’t expect it will happen often at all.)


Here are some pointers to what we have now, and we can settle on requirements for what hardhat-deploy needs from our library.

The storage extraction and comparison logic is in our @openzeppelin/upgrades-core package.

Storage Layout Type

We have a StorageLayout type. It’s similar to what the compiler emits with the storageLayout output selection, but not the same, so you need to use our own layout extraction routine. We do this to support older Solidity versions where the compiler output selection isn’t present. The differences with the layout generated by solc are quite small… but we’re not keeping track of these differences, and things might change.
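
For orientation, the shape is roughly the following (a simplified illustration; the real type in @openzeppelin/upgrades-core has more fields and may have changed since):

```ts
// Simplified sketch, not the exact exported type.
interface StorageItem {
  contract: string; // contract where the variable is declared
  label: string;    // variable name
  type: string;     // key into `types` below
  src: string;      // source location, used in error reports
}

interface StorageLayout {
  storage: StorageItem[];
  types: Record<string, { label: string; members?: unknown }>;
}
```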

Extraction

The storage layout is extracted from the AST. The function extractStorageLayout works on an AST node for a contract definition, and some additional AST helpers that we obtain based on the compiler output.

This function should be wrapped in a higher level function that takes the solc input and output JSON, so that you don’t have to concern yourself with creating these AST helpers.
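
In other words, something with roughly this signature; the name is hypothetical and nothing like it is exported today:

```ts
// Hypothetical wrapper: builds the AST helpers internally from the compiler
// output and returns the extracted layout (`StorageLayout` as sketched above)
// for a single contract.
declare function extractStorageLayoutFromSolc(
  solcInput: unknown,         // standard JSON input given to solc
  solcOutput: unknown,        // standard JSON output, compiled with the AST included
  fullyQualifiedName: string, // e.g. 'contracts/Greeter.sol:Greeter'
): StorageLayout;
```

A plugin like hardhat-deploy could call this once per implementation it deploys and persist the result alongside its deployment record.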

Comparison

Comparison between an original and an upgraded storage layout happens in a class called StorageLayoutComparator, through its compareLayouts function. This function returns an instance of LayoutCompatibilityReport, which can then be printed through its explain method.

We use a higher level wrapper around this class that is called assertStorageUpgradeSafe. The wrapper will throw an exception if the layouts are not compatible, and print the error report.

If you don’t want this to throw an exception, you could use the comparator class directly, but we’d need to adjust a few things because the storage layout has to be “expanded” before it can be passed to the comparator.
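
As a usage sketch, assuming both layouts have already been extracted and saved somewhere (the optional arguments and their exact types have varied between versions, so treat this as illustrative):

```ts
import { assertStorageUpgradeSafe, StorageLayout } from '@openzeppelin/upgrades-core';

// Throws, reporting the incompatibilities, if `proposed` is not a safe
// evolution of `previous`; returns normally otherwise.
function checkStorage(previous: StorageLayout, proposed: StorageLayout): void {
  assertStorageUpgradeSafe(previous, proposed);
}
```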


If you want to take a look at these things, we can define what else we have to add for hardhat-deploy to be able to use this.

From what I can tell, we need:

  1. A higher-level function to extract the storage layout from the solc input and output JSON.
  2. An adjustment to the input of the StorageLayoutComparator so that it accepts the plain storage layout that was extracted previously, without modifications.

Hey @frangio, that’s great! Thanks for the information. I am happy to integrate it myself and push any PR to @openzeppelin/upgrades-core that might be required to make it easier. I do not have a timeline for it yet, but I’ll keep you posted. My first step would be to integrate the proxy deployment and upgrade, and later add in the validation step.


I should’ve also mentioned that there are safety checks other than storage layout compatibility that need to run on any contract that will be deployed as the implementation behind a proxy. Again, this is something that works on the solc output JSON, and we might have to adjust a few things, but it’s probably already close to what you’d need. This is found in the directory called validate.
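
To give an idea of how this could be exposed to a consumer like hardhat-deploy, here is a hypothetical helper over those checks (the names are illustrative; only the input, the solc output JSON, matches what is described above):

```ts
// Hypothetical helper: runs the upgrade-safety checks (constructors,
// selfdestruct, delegatecall, state-variable initializers, ...) on one
// contract and throws with a report if any issue is found.
declare function assertImplementationIsUpgradeSafe(
  solcOutput: unknown,        // standard JSON output containing the contract
  fullyQualifiedName: string, // contract that will sit behind the proxy
): void;
```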


Did anything ever come of this combination? I'm trying to launch fully upgradeable governance contracts, and all documentation points to utilizing { deploy }, but I can't find anything verifying that it supports upgrades. Thank you.

There is no integration between these two plugins yet. hardhat-deploy has rolled its own support for upgradeability, which I can't vouch for.

Just noting that this is also tracked in https://github.com/OpenZeppelin/openzeppelin-upgrades/issues/680

I came across this issue again, so I'm bumping this.

I think the goal is to retain the safety checks of @openzeppelin/hardhat-upgrades while keeping the project working with hardhat-deploy.

One approach I am researching now is:

  • Deploy the contracts with the OpenZeppelin plugin.
  • Run hardhat-deploy with skip: true, so it generates the deployments directory without deploying the contracts.

I still have a variety of local issues in my repo to resolve to make this work, but I hope it can work. It sounds easier to me than the previous approach.
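
In case it helps, here is a sketch of that idea as a hardhat-deploy script (contract names and initializer args are only examples): the OpenZeppelin plugin performs the deployment and the safety checks, and the result is then saved so the deployments directory stays in sync.

```ts
import { DeployFunction } from 'hardhat-deploy/types';

const func: DeployFunction = async (hre) => {
  // `ethers` and `upgrades` come from hardhat-ethers and
  // @openzeppelin/hardhat-upgrades; cast because hardhat-deploy's types
  // don't know about those extensions.
  const { deployments, ethers, upgrades } = hre as any;

  // Deploy (or upgrade) through the OpenZeppelin plugin, keeping its checks.
  const Greeter = await ethers.getContractFactory('Greeter');
  const proxy = await upgrades.deployProxy(Greeter, ['hello']);
  await proxy.deployed();

  // Record the proxy in hardhat-deploy's deployments directory without
  // hardhat-deploy sending any transaction itself.
  const artifact = await deployments.getArtifact('Greeter');
  await deployments.save('Greeter', { address: proxy.address, abi: artifact.abi });
};

// func.skip could gate this so it only runs when the proxy does not exist yet.
export default func;
```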