Calculating decimal values and transferring the calculated ETH to an address

I am trying to implement a function that first calculates a payable amount, whose result is most likely to be a decimal value, and then transfers that amount to the address that called the function.

To give an overview, this is what the function looks like:

function _distributeClaim(
    address _claimant,
    uint256 _balances
) internal returns (bool) {
    uint256 payableAmount = tradingProfits * (_balances / totalInvestedTFs);
    require(
        payableAmount < address(this).balance,
        "TF: Insufficient Balance in Contract!"
    );
    isClaimed[_claimant] = !isClaimed[_claimant];
    (bool success, ) = payable(_claimant).call{ value: payableAmount }("");
    return success;
}

In the above function, my main concern is this line:

uint256 payableAmount = tradingProfits * (_balances / totalInvestedTFs);

Here, I am certain that _balances and totalInvestedTFs are going to be uint values. However, tradingProfits is most likely to be a decimal, like 1.5 or 3.2, etc.

When doing the calculations, since Solidity does not have fixed-point math support, I always get a payableAmount value of 0.

To get past this issue, I searched for potential solutions and found libraries like ABDK Math, but I have not been able to use that library in my function.
Can someone help me out with it?
I also found this discussion topic, but right now I am too new to implement the fixed-point value: Designing Fixed Point Math in OpenZeppelin Contracts

The math for the payableAmount may look like this:

_balances = 2000
totalInvestedTFs = 10000
tradingProfits = 1.5
Hence, payableAmount = 1.5 * (2000 / 10000)
                     = 0.3 eth

Now, with the obtained 0.3 eth, can I send this decimal value to the claimant address with this line:

 (bool success, ) = payable(_claimant).call{ value: payableAmount }("");

Please help me get past these roadblocks.

There are two problems here: one in your description, and the other in your code.


Let's start with the problem in your description:

The value of 0.3 eth is not actually a non-integer value (what you refer to in your question as a "decimal value").

The resolution of eth is 18 decimals, which means that 1 eth = "1 followed by 18 zeros" wei.

Hence, the value of 0.3 eth is in fact the value of "3 followed by 17 zeros" wei, which is of course an integer.

And the reason for this resolution of eth is precisely in order to allow calculating very accurate amounts, despite the fact that the underlying architecture does not support non-integer arithmetic.

So, if Trading profit = 1.5 eth, then in the contract it is actually "15" followed by 17 zeros.
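This can be checked with plain integer arithmetic; here is a minimal off-chain sketch in Python (the variable names are just for illustration):

```python
# 1 eth = 10**18 wei, so every "decimal" eth amount has an exact integer wei form
WEI_PER_ETH = 10 ** 18

eth_0_3_in_wei = 3 * 10 ** 17          # 0.3 eth expressed in wei
trading_profits_wei = 15 * 10 ** 17    # 1.5 eth expressed in wei

print(eth_0_3_in_wei)                         # 300000000000000000
print(eth_0_3_in_wei == WEI_PER_ETH * 3 // 10)  # True
```

Both amounts are ordinary integers, which is exactly what the EVM works with.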


As for the problem in your code:

You seem to already be aware of the underlying architecture not supporting non-integer arithmetic.

This means that calculating (_balances / totalInvestedTFs) returns the floor value of that division, which means that it is subjected to precision-loss.

Ideally, in order to reduce precision-loss to a minimum, you should strive to postpone the division as far as you can towards the end of the calculation (and that's generally true for every calculation).

So, instead of this, where the division takes place before the multiplication:

tradingProfits * (_balances / totalInvestedTFs)

You are better off using this, where the division takes place after the multiplication:

tradingProfits * _balances / totalInvestedTFs

Of course, it increases the probability of an arithmetic overflow during the intermediate calculation of tradingProfits * _balances, which would result in the entire transaction being reverted.

So the general approach here is to "offline" assess the probability of a real-world scenario which would cause something like that, and then decide accordingly, whether or not that probability is realistic in any way.

For example:

  • If you know that the value of _balances rarely exceeds 200 bits
  • And you know that the value of tradingProfits never exceeds 56 bits
  • Then you know that the calculation of tradingProfits * _balances will rarely overflow
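The difference between the two orderings can be reproduced off-chain with plain integer arithmetic; here is a Python sketch using the numbers from the question (`//` mimics Solidity's floor division):

```python
trading_profits = 15 * 10 ** 17   # 1.5 eth, expressed in wei
balances = 2000
total_invested_tfs = 10000

# Division first: 2000 // 10000 floors to 0, wiping out the entire result
wrong = trading_profits * (balances // total_invested_tfs)

# Multiplication first: the intermediate product keeps full precision
right = trading_profits * balances // total_invested_tfs

print(wrong)   # 0
print(right)   # 300000000000000000 (0.3 eth in wei)
```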

Thanks for the reply @barakman.

Let me give you some more information about the contract. To set the value of tradingProfits, there is an access-controlled function like this:

function setTradingProfitsInWei(uint256 _tradingProfits) external onlyHasRole(ADMIN_ROLE) {
    tradingProfits = _tradingProfits;
}

So here, the ADMIN_ROLE can pass the value 1.5 x 10^18, which makes the trading profit a uint256.

And then the contract can perform the logic for payableAmount (multiplication first):

uint256 payableAmount = tradingProfits * _balances / totalInvestedTFs;

This should, in most cases, output an unsigned integer.
Let's consider our previous example itself:

payableAmount = 1.5X10^18 * 2000 / 10000
or, payableAmount = 3x10^17 

I am all clear up to this point, and thanks for helping me out so far.

However, I am still a bit confused about how the transfer will work.
When I call this line to transfer the ETH:

(bool success, ) = payable(_claimant).call{ value: payableAmount }("");

How will the contract know that the value in payableAmount is actually in wei and not in eth? Because in this case, the value that is sent is 3x10^17.
Apologies if the question is naive, but I am still in the learning phase, so I am not really sure whether only 0.3 eth worth of ETH will be sent, or how exactly the contract will understand that the stored value is in wei.

@barakman

Meanwhile, I created a dummy contract to gain a better understanding of how ether (ETH) is transferred from a contract to an address. Using that knowledge, I generated dummy transactions and understood the logic behind the call function.

This resolves my previous inquiry regarding how the contract determines whether the value is in ether or in wei.

For those interested in delving into the topic, here is the contract code deployed on sepolia:

Also, here's a snippet of the code itself:

// SPDX-License-Identifier: MIT

pragma solidity ^0.8.20;

contract testTransfer {

    uint256 public tradingProfits;
    uint256 public balance;
    uint256 public total;

    function setTradingProfitsInWei(uint256 _tradingProfits) external {
        tradingProfits = _tradingProfits;
    }

    function setBalance(uint256 _balance) external {
        balance = _balance;
    }

    function setTotal(uint256 _total) external {
        total = _total;
    }

    function claimDividend() external {
        // Multiplication before division, to minimize precision loss
        uint256 payableAmount = tradingProfits * balance / total;
        (bool success, ) = payable(msg.sender).call{ value: payableAmount }("");
        // Revert if the low-level call fails, instead of silently ignoring it
        require(success, "Transfer failed");
    }

    receive() external payable {}
}

Thanks again! :muscle:

This always gives an unsigned integer by definition, since all the operands are of type uint256.


And indeed, the value of 1.5x10^18 * 2000 / 10000 is equal to 3x10^17, which is exactly 0.3 eth expressed in wei.


The contract works with absolute amounts.
It doesn't "know" what units are implied by these amounts (eth units, wei units, apple units, etc).

Hence, whenever an offchain application interacts with a contract, it should:

  1. Convert every amount from eth units to wei units before passing it as input to a contract function
  2. Convert every amount from wei units to eth units after receiving it as output from a contract function

In other words:

  • The user of the application sees everything in eth units, but that's purely for readability purpose (e.g., it's easier to understand what "1.5 eth" means, than it is to understand what "1500000000000000000 wei" means)
  • The application interacts with the contract in wei units, in order to reduce the loss of precision to a bare minimum

Of course, the above refers only to amounts which denote eth, and not to every amount in general.
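As a sketch of what such an off-chain conversion layer might look like (the helper names here are hypothetical; real applications typically use a library utility such as web3's unit-conversion functions):

```python
from decimal import Decimal

WEI_PER_ETH = 10 ** 18

def eth_to_wei(amount_eth: str) -> int:
    # Parse the human-readable eth amount exactly, then scale to integer wei
    return int(Decimal(amount_eth) * WEI_PER_ETH)

def wei_to_eth(amount_wei: int) -> Decimal:
    # Scale the contract's integer result back down for display
    return Decimal(amount_wei) / WEI_PER_ETH

print(eth_to_wei("1.5"))            # 1500000000000000000
print(wei_to_eth(3 * 10 ** 17))     # 0.3
```

Decimal is used instead of float so that the parsing of user input is exact; the contract itself only ever sees the integer wei values.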


@barakman Alright, got it. Thanks for the rescue every time :handshake:
