Hardhat+UUPS: Deployment at address 0x... is not registered

We are facing an issue when testing an upgrade of our live deployed proxies via upgradeProxy; it fails with the error:
Error: Deployment at address 0xDc4db.... is not registered.

The UUPS proxy was deployed via hardhat/hardhat-deploy. Since hardhat-deploy lacks support for UUPS proxies, we are not aware of a way to upgrade a UUPS proxy through it.

An existing issue discussion proposed manually copying the deployment data into a mainnet.json file, but those solutions only seem to work for Truffle.


Hi @Marcel_Jackisch. The solution you described should also work for Hardhat.

I would recommend doing a deployment of a UUPS proxy for your existing contract to Hardhat Network (i.e. locally), and then copying the generated .openzeppelin/unknown-31337.json file to .openzeppelin/mainnet.json, replacing the relevant addresses (impl and proxy) for your addresses on mainnet. Then you should be able to run the upgrade based on that file.
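The address replacement in that workaround can be sketched as a small helper. This is an illustrative sketch, not part of the plugin: patchManifest and all addresses here are hypothetical, and you would apply it to the copied .openzeppelin/mainnet.json text with fs.readFileSync/fs.writeFileSync.

```typescript
// Hypothetical helper: given the copied manifest's JSON text and a map of
// local Hardhat addresses -> live mainnet addresses, return the patched text.
function patchManifest(text: string, replacements: Record<string, string>): string {
  for (const [local, live] of Object.entries(replacements)) {
    // split/join replaces every occurrence of the local address
    text = text.split(local).join(live);
  }
  return text;
}

// Example (placeholder addresses):
const patched = patchManifest(
  '{"proxies":[{"address":"0xAAA"}],"impls":{"abc":{"address":"0xBBB"}}}',
  { '0xAAA': '0xMainnetProxy', '0xBBB': '0xMainnetImpl' },
);
```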


Thank you @frangio

I ran into an issue where the upgrade silently did not happen: there was no error, and logging showed that the "new" implementation was in fact just the old one. This is probably due to legacy artifacts. If someone else comes across this thread: try deleting the artifacts and recompiling with the changes you've made.

@frangio As we are using hardhat-deploy for sharing artifacts, do you see any simple way to make both tools compatible?
As it seems, if the new implementation is deployed via the upgrades plugin, the artifacts won't be shared to hardhat/deployments/.

We currently don't have a way to use both tools together but it's in the roadmap to look into it.


@frangio I still get the error Error: Deployment at address 0x123.... is not registered, which refers to the implementation, even though I pass in the proxy correctly. Both proxy and implementation are correctly set in matic.json (assuming that matic corresponds to a Hardhat network name).

The network files for Polygon are named with their chain IDs: unknown-137.json for Polygon Mainnet or unknown-80001.json for Polygon Mumbai testnet.
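The naming convention can be pictured with a small sketch. The list of named networks below is my assumption based on the behavior described in this thread (well-known Ethereum networks get a name, everything else falls back to unknown-<chainId>.json); check the plugin's documentation for the authoritative list.

```typescript
// Assumed set of chain IDs that get a named manifest file; all others
// fall back to the unknown-<chainId>.json pattern.
const NAMED_NETWORKS: Record<number, string> = {
  1: 'mainnet',
  3: 'ropsten',
  4: 'rinkeby',
  5: 'goerli',
  42: 'kovan',
};

function manifestFileName(chainId: number): string {
  const name = NAMED_NETWORKS[chainId] ?? `unknown-${chainId}`;
  return `.openzeppelin/${name}.json`;
}
```

So for Polygon Mainnet (chain ID 137) the plugin reads and writes .openzeppelin/unknown-137.json, not a file named after the network in your Hardhat config.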

Note that we will soon be adding support for importing existing proxies -- subscribe to https://github.com/OpenZeppelin/openzeppelin-upgrades/issues/175 for updates.

Good to know, so the Upgrades plugin names its network files differently from the network names in the Hardhat config.

So even with the network file named unknown-137.json the error remains, and the entry is removed from the file:

Attempting upgrade on chainId: 137
An unexpected error occurred:

InvalidDeployment [Error]: No contract at address 0x63a...... (Removed from manifest)

And even more strangely, I'm observing that when I replace the correct implementation contract address with an erroneous value like "address": "0x123xyz", I still get the error that the original address 0x63a is not registered. I've seen this before, and it makes me wonder whether there is another place where the OZ plugin caches files, or whether the address is resolved by querying the on-chain contract.

Update: To add to @frangio's proposed solution (which worked with an older OZ plugin), it seems that now you also have to replace the tx hashes.

Now the storage layout is being tested, but unfortunately no transaction is being sent.

Attempting upgrade on chainId: 137
Upgrade considered compatible with existing storage layout. 
New implementation deployed to 0x63a....   (<--- the same address as the original implementation)

This is an issue I've seen before related to artifacts, but even deleting them all and recompiling did not solve it this time.

Can you share what your Hardhat script looks like?

/// --- snippet (reconstructed; task/parameter names are assumptions) --- ///
import chalk from 'chalk';
import { task } from 'hardhat/config';
import { HardhatRuntimeEnvironment } from 'hardhat/types';

task('prepare-upgrade')
  .setAction(async (taskArgs, hre: HardhatRuntimeEnvironment) => {
    await prepareUpgrade(hre, taskArgs.contract, taskArgs.proxy);
  });

export const prepareUpgrade = async (
  hre: HardhatRuntimeEnvironment,
  contractName: string,
  proxyAddress: string,
) => {
  const { upgrades, ethers, getChainId } = hre;
  console.log('Attempting upgrade on chainId:', await getChainId());
  const newImplFactory = await ethers.getContractFactory(contractName);

  // Deploys the new implementation (if needed) without upgrading the proxy
  const newImplementation = await upgrades.prepareUpgrade(
    proxyAddress,
    newImplFactory,
    { unsafeAllowRenames: true },
  );
  console.log(
    chalk.green(`Upgrade considered compatible with existing storage layout. \n`) +
      chalk.bold(`New implementation deployed to ${newImplementation}`),
  );
};

The plugin would not redeploy an implementation if an identical one was already previously deployed (it keeps track of implementation versions in the network file).

To check, can you enable debug output by running the command export DEBUG=@openzeppelin:upgrades:* and then run the script again? This would show whether the plugin is reusing an implementation (e.g. found previous deployment) or if it is trying to deploy a new one (e.g. deployment of implementation _ not found and initiated deployment)

Thanks for that command, that is helpful. Still, I think the plugin should generally output whether a contract has been reused (based on identical bytecode?).

During our tests we discovered a new issue, which seems quite critical: to show my teammates how the plugin behaves when the storage layout is incompatible, I inserted a new state var at the top of the contract and recompiled. Formerly, this showed me a "Not compatible" error, but then all of a sudden it actually sent a transaction to the network.

What role does the file hardhat/cache/validations.json play in the upgrades plugin? We noticed that after recompiling, the new state var foo is part of hardhat/artifacts/contracts/MyContract.sol/MyContract.json but not part of validations.json.

This occurs because you have { unsafeAllowRenames: true } set in your prepareUpgrade options.

We found the issue. For testing the upgrades in unit tests, we used the original contracts from the original deploy commit, and there was a name conflict that must have confused the Solidity compiler: it did not use the actual updated storage containing the clashing variable string foobar.

But the issue that it's not deploying remains:

Attempting upgrade on chainId: 137
  @openzeppelin:upgrades:core fetching deployment of implementation 7479059768e7690c632ebd86c2f0f503aff0ab00a1babdd7a378f55baae29d35 +0ms
  @openzeppelin:upgrades:core found previous deployment 0x4c49821bf4c5fc7df26988bcb62d15aa645ea0899c04a81e6fc422043b198bc8 +3ms
  @openzeppelin:upgrades:core resuming previous deployment 0x4c49821bf4c5fc7df26988bcb62d15aa645ea0899c04a81e6fc422043b198bc8 +181ms
  @openzeppelin:upgrades:core polling timeout 60000 polling interval 5000 +1ms
  @openzeppelin:upgrades:core verifying deployment tx mined 0x4c49821bf4c5fc7df26988bcb62d15aa645ea0899c04a81e6fc422043b198bc8 +0ms
  @openzeppelin:upgrades:core succeeded verifying deployment tx mined 0x4c49821bf4c5fc7df26988bcb62d15aa645ea0899c04a81e6fc422043b198bc8 +163ms
  @openzeppelin:upgrades:core verifying code in target address 0x639dFeA994b139A3d6C3750D4C4E24daEc039BD7 +1ms
  @openzeppelin:upgrades:core code in target address found 0x639dFeA994b139A3d6C3750D4C4E24daEc039BD7 +185ms
Upgrade considered compatible with existing storage layout. 

Is the contract code at 0x639dFeA994b139A3d6C3750D4C4E24daEc039BD7 different than what you are expecting?

Yes, it's not what I expected, because the implementation code changed. But I'm starting to suspect that the OZ plugin does not actually fetch the bytecode from the blockchain to compare it with the contract factory specified as an argument.

It doesn't compare bytecode from the blockchain at the moment, but it uses a hash of the contract factory's bytecode to look up the contract version in the network file (to determine if that implementation was previously deployed).

In your post above, this hashed version number is 7479059768e7690c632ebd86c2f0f503aff0ab00a1babdd7a378f55baae29d35 and you can find a JSON entry with that number in the network file. That entry contains the address and storage layout (the variable names and types) of your contract.

This means the new version of your implementation contract is hashing to that entry, which has the old implementation address (maybe due to a manual error while performing this workaround). Can you check if the storage layout in that JSON entry looks like it belongs to the old or new version of your contract?
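For intuition, the lookup key can be pictured as a content hash of the factory's creation bytecode. This is only an illustrative sketch: which hash function and exact preimage the plugin uses is not specified in this thread, so SHA-256 from Node's crypto module is used here purely as a stand-in.

```typescript
import { createHash } from 'crypto';

// Illustrative stand-in for the plugin's version identifier: hash the
// creation bytecode so that any change to the compiled contract yields a
// different key in the network file. (Hash choice is an assumption.)
function bytecodeVersion(bytecode: string): string {
  const hex = bytecode.replace(/^0x/, '');
  return createHash('sha256').update(hex, 'hex').digest('hex');
}
```

The point is that the key is derived purely from your local compiled artifact, so if two compilations produce identical bytecode, they map to the same network-file entry and the existing deployment is reused.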

If I understand correctly, and the on-chain bytecode is not used, I don't see how prepareUpgrade provides any benefit over unit tests that attempt an upgrade from an existing contract file/factory.

The implementation 7479059768e7690c632ebd86c2f0f503aff0ab00a1babdd7a378f55baae29d35 was based on a "mock" deployment of the original contract, as I've just changed the addresses as suggested by @frangio here, which we had to do since the original deployment was made via hardhat-deploy, so no OZ-style artifacts exist.

It has the storage layout of the old contract, so there must be a glitch somewhere else. If I insert a new variable foo it reports the violation, but once it's compatible it just does not deploy a new version. Yet it did before, so the behavior is quite inconsistent, which makes me wonder what else the tool does under the hood.

I haven't been able to follow the full discussion, but I think it's worth explaining what the plugin does under the hood, because that assumption is certainly not true:

When you run p = deployProxy(ContractV1) (I will use simplified syntax), the plugin will:

  1. Check ContractV1 for errors such as using selfdestruct.
  2. If no errors, deploy ContractV1 as the implementation contract.
  3. (If using transparent proxies, deploy an admin if there isn't one already, otherwise it's reused.)
  4. Deploy the proxy p connected to the implementation ContractV1.
  5. Write information about these deployments in the network file. Mainly it will save information about ContractV1:
    • The address
    • An identifier based on the bytecode
    • The storage layout of the contract

When you later run upgradeProxy(p, ContractV2), the plugin will:

  1. Check ContractV2 for errors independently of the current implementation.
  2. Get the address of the implementation behind p which in this case will be that of ContractV1 and look up the entry for this contract in the network file.
  3. Check ContractV2's storage layout for compatibility with ContractV1's storage layout (which had been stored in step 5 of deployProxy).
  4. If it's compatible, deploy ContractV2, and write its metadata in the network file.
  5. Execute the upgrade of p.

prepareUpgrade(p, ContractV2) will do the same except for the last step. Here you can see the main benefit over simple unit tests: the upgrades plugin will check your storage layout for compatibility and provide strong guarantees that you will not corrupt your contract's state.
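The storage check in step 3 can be illustrated with a toy version. The real check in the plugin is far more thorough (it understands type compatibility, gaps, renames, and more); this sketch only encodes the simplest rule, that existing variables must stay in place and new ones may only be appended.

```typescript
// Simplified view of a storage slot entry as recorded in the network file.
interface Slot {
  label: string; // variable name
  type: string;  // variable type
}

// Toy compatibility rule: every old variable must still exist at the same
// position with the same name and type; appending new variables is fine.
function layoutCompatible(oldLayout: Slot[], newLayout: Slot[]): boolean {
  return oldLayout.every(
    (slot, i) =>
      newLayout[i] !== undefined &&
      newLayout[i].label === slot.label &&
      newLayout[i].type === slot.type,
  );
}
```

This is why inserting a new state variable at the top of the contract (as tried earlier in this thread) is rejected: it shifts every existing variable into a different slot.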

Additionally, if you later deploy another proxy for either ContractV1 or V2, the plugin will first check if there is an existing deployment that matches the bytecode identifier. If an existing deployment is found, the implementation is reused instead of being deployed from scratch.
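That reuse lookup can be pictured like this. The field names and shape below are simplifications for illustration, not the plugin's actual manifest schema.

```typescript
// Simplified network-file shape: implementation entries keyed by the
// bytecode-derived version hash (schema is a simplification).
interface ImplEntry {
  address: string;
  layout: unknown; // recorded storage layout
}
type NetworkFile = Record<string, ImplEntry>;

// Returns the existing deployment for this bytecode version, or signals
// that a fresh deployment is needed.
function lookupImplementation(
  file: NetworkFile,
  versionHash: string,
): ImplEntry | 'deploy-new' {
  return file[versionHash] ?? 'deploy-new';
}
```

This also explains the debug output seen earlier in the thread: "found previous deployment" corresponds to a hit in this lookup, so no new implementation transaction is sent.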

If you're seeing behavior that deviates from what I described here it may be a bug, assuming you didn't manually modify the network file which would otherwise likely be the cause of the error.

You may sometimes see weird behavior when running against a local development network like Hardhat Node, because you may see contracts deploy at the same address even though they're different. The plugin tries to handle this well but there may be some edge cases we don't catch.


Thanks a lot @frangio for this elaborate answer.

I wonder, how is the layout compatibility check in step 3 of upgradeProxy conducted? I assume it checks the layout of ContractV2 against the layout recorded in the network file, and not against on-chain bytecode (not sure whether that is even theoretically possible).