I've had to deal with this situation a number of times now. Tends to happen when you get carried away with features and ideas, or when you don't understand what contributes to bytecode bloat.
You have three approaches:
- Run the contract through the optimizer.
- Condense and optimize your source code.
- Move some of your desired mechanics into separate external contracts.
Maybe I'm just superstitious, but I don't mess with the optimizer unless I have no other choice.
The way I see it, if the optimizer were 100% safe then it would be enabled by default. For anyone who isn't a Solidity core developer (probably 99.99% of us), the optimizer is a "magic black box" whose inner workings and possible side effects are arcane and mysterious. We don't know how it produces more compact bytecode or how it reduces gas fees, and the people who do might as well live on another planet. It is plausible (though I don't know how likely) that the optimizer could introduce bugs into your contract that result in "unexpected behavior".
So, I'll share some tips on optimizing your code.
Since there aren't any definitive articles or handbooks on managing bytecode size in Solidity, the only way to learn is by experimentation. A good way to approach this is to set up a testbench for split-testing bytecode size differences between contracts:
contract A {
    // Test code A
}

contract B {
    // Test code B
}

contract CodeSizeChecker {
    function getContractSize(address _addr) public view returns (uint) {
        uint length;
        assembly {
            length := extcodesize(_addr)
        }
        return length;
    }
}
With this testbench, you can try different variations of the same code to see which ones produce less bytecode than others. It's the only way to know for certain what works and what doesn't.
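One way to drive it, as a rough sketch (the SizeComparison name is mine, not anything standard): deploy fresh copies of A and B and measure their runtime bytecode with extcodesize, which is the number that counts toward the 24,576-byte limit.

contract SizeComparison is CodeSizeChecker {
    // Deploys fresh instances of A and B, then measures the runtime
    // bytecode actually stored at their addresses.
    function compare() public returns (uint sizeA, uint sizeB) {
        sizeA = getContractSize(address(new A()));
        sizeB = getContractSize(address(new B()));
    }
}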
I've also looked at your contract and I have a few ideas you might consider. It looks like you need to lose about 1500 bytes, at least when I compile it in Remix.
The first thing you can do is move up to compiler version 0.8.21, which brings the bytecode size down to 25479 bytes. That's a quick and easy 500 byte reduction! That leaves around 900 bytes that need to be lost.
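Assuming your contract currently pins an older 0.8.x release, that's just a pragma bump plus selecting 0.8.21 in Remix's compiler dropdown:

pragma solidity 0.8.21; // or ^0.8.21 if you want to allow later 0.8.x compilers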
Your state variable deadAddress is a public constant. Keeping it internal/private instead will save you around 59 bytes. There isn't any reason to make the dead address public, so that's another easy few bytes to lose.
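Something like this (your exact declaration may differ; the value shown here is just the conventional dead address):

// Before: public makes the compiler generate a getter in the bytecode
// address public constant deadAddress = 0x000000000000000000000000000000000000dEaD;

// After: no getter is generated
address private constant deadAddress = 0x000000000000000000000000000000000000dEaD;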
You should go through and shorten your require error strings.
Each string character occupies 1 byte, and every string with at least one character costs you at least 38 bytes of bytecode to store, with another 38 bytes added for every additional 32 characters. Locate all error strings that are over 32 characters long and reword them so they are 32 characters at most. ChatGPT can help with that.
For example, instead of saying "cannot set buyback more often than every 10 minutes" (51 characters == 76 bytes), you could say "Insufficient time elapsed" (25 characters == 38 bytes) or "Must wait 10 minutes" (20 characters == 38 bytes).
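In code, the change is as simple as swapping the string (the condition here is a made-up stand-in for whatever your buyback cooldown check actually looks like):

// Before: 51-character string (the lastBuybackTime name is hypothetical)
require(block.timestamp >= lastBuybackTime + 10 minutes, "cannot set buyback more often than every 10 minutes");

// After: 20-character string, same revert behavior
require(block.timestamp >= lastBuybackTime + 10 minutes, "Must wait 10 minutes");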
You have several setter functions that return bools. While this is generally good practice, each of those return values costs you about 20 bytes of bytecode. They're nice to have, but they aren't always necessary. I think I counted about 5-6 of them, so that's 100-120 bytes (or more) you could get rid of right away, assuming you don't need this feature.
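A hypothetical before/after (setSwapEnabled and swapEnabled are placeholder names, not necessarily what yours are called):

// Before: returns a success flag the caller probably never checks
// function setSwapEnabled(bool enabled) external onlyOwner returns (bool) {
//     swapEnabled = enabled;
//     return true;
// }

// After: same effect, slightly less bytecode
function setSwapEnabled(bool enabled) external onlyOwner {
    swapEnabled = enabled;
}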
If you need these functions to return a bool flag, then leave them alone.
Another improvement is that some of your functions are setter functions for bool values, which can be rewritten as toggle functions to remove the bool input parameter.
For example:
function blacklistAccount(address account, bool isBlacklisted) public onlyOwner {
    _blacklist[account] = isBlacklisted;
}
Can be rewritten as:
function toggleBlacklistAccount(address account) public onlyOwner {
    _blacklist[account] = !_blacklist[account];
}
When I replicated this function in Remix I saw a 76 byte reduction in the bytecode. I don't know if you'll get anything like that, but it's an easy solution to implement, so give it a try. Considering you have a few of these functions, you could potentially see a couple hundred bytes of weight gone right away.
All of your state variables are made public. This creates a getter function behind the scenes for each state variable, which contributes massively to bytecode bloat. Keep in mind that simple (not complex!) private state variables contribute pretty much nothing to bytecode size: no getter gets generated for them, and my testing showed they add essentially nothing when they are private.
While it is important to be able to access these variables, if bytecode size is a problem then we can condense similar-themed state variables into custom getter functions to save on function declaration bytecode.
Here's an example of one such function:
// Turn all similar-themed state variables private
uint256 private buyTotalFees;
uint256 private buyMarketingFee;
uint256 private buyLiquidityFee;
uint256 private buyDevFee;

// Return groups of themed state variables from functions
function getBuyFees()
    public
    view
    returns (
        // Name the return variables for easier implementation on the front-end
        uint256 totalFee,
        uint256 marketingFee,
        uint256 liquidityFee,
        uint256 devFee
    )
{
    return (buyTotalFees, buyMarketingFee, buyLiquidityFee, buyDevFee);
}
With this approach, you can reduce the total number of function declarations by grouping together common state variables into the same getter functions, which helps reduce bytecode bloat--but at the expense of ease of implementation. In my experiment, this particular example yielded 63 bytes in savings. However, my results are fairly inconsistent across experiments, so YMMV.
You have a ton of these state variables too, so this is an easy solution to try. It makes front-end implementation a little more annoying, but not by much.
One thing I think might make a huge difference--but which requires a lot of work--is moving your fee allocation system to an external contract. Specifically, OpenZeppelin's PaymentSplitter, or a contract based on it.
If you aren't familiar with it, the PaymentSplitter is an external contract that uses a clever shares-based system to determine withdrawal amounts across shareholders from its current balance. It works for ETH and ERC20 tokens, and uses the native balance tracking systems of both to operate. It's my favorite OpenZeppelin contract for good reason--it's pure genius.
With PaymentSplitter, you could condense your buying and selling fees to just two state variables that track the entire percentage removed from each transfer, and then _transfer the entire fee to the PaymentSplitter. The PaymentSplitter doesn't need to update its state when it receives a payment, as it tracks balances passively.
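Wiring one up is mostly boilerplate. Here's a rough sketch assuming OpenZeppelin Contracts v4.x (PaymentSplitter lives in finance/PaymentSplitter.sol there; note it was removed in v5). The payee list and share weights are placeholders for whatever your marketing/dev/liquidity split actually is:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/finance/PaymentSplitter.sol";

// Deploy with e.g. payees_ = [marketingWallet, devWallet, liquidityWallet]
// and shares_ = [40, 30, 30] to mirror your current fee split (placeholder numbers).
contract FeeSplitter is PaymentSplitter {
    constructor(address[] memory payees_, uint256[] memory shares_)
        PaymentSplitter(payees_, shares_)
    {}
}

Each payee then pulls their cut whenever they like with release(IERC20(token), payee) for the ERC20 fees, or release(payable(payee)) for ETH.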
For example, this code block:
if (takeFee) {
    // on sell
    if (automatedMarketMakerPairs[to] && sellTotalFees > 0) {
        fees = amount.mul(sellTotalFees).div(100);
        tokensForLiquidity += fees * sellLiquidityFee / sellTotalFees;
        tokensForDev += fees * sellDevFee / sellTotalFees;
        tokensForMarketing += fees * sellMarketingFee / sellTotalFees;
    }
    // on buy
    else if (automatedMarketMakerPairs[from] && buyTotalFees > 0) {
        fees = amount.mul(buyTotalFees).div(100);
        tokensForLiquidity += fees * buyLiquidityFee / buyTotalFees;
        tokensForDev += fees * buyDevFee / buyTotalFees;
        tokensForMarketing += fees * buyMarketingFee / buyTotalFees;
    }

    if (fees > 0) {
        super._transfer(from, address(this), fees);
    }

    amount -= fees;
}
Could be reduced to this:
if (takeFee) {
    uint256 totalFeePercentage;

    // Obtain sell fee percentage
    if (automatedMarketMakerPairs[to]) {
        totalFeePercentage = sellTotalFees;
    }
    // Otherwise, obtain buy fee percentage
    else if (automatedMarketMakerPairs[from]) {
        totalFeePercentage = buyTotalFees;
    }

    // Calculate and send fees to the PaymentSplitter, if any
    if (totalFeePercentage > 0) {
        uint256 totalFees = amount.mul(totalFeePercentage).div(100);
        super._transfer(from, address(paymentSplitter), totalFees);
        amount -= totalFees;
    }
}
Notice how much logic is removed in this one code block. That doesn't include the state variables and any other supporting infrastructure that can also be yeeted out of this contract and into the PaymentSplitter. Offloading this mechanic to an external contract--one created by OpenZeppelin no less--will take a huge chunk out of your bytecode size.
To make things even better, this would also have the twin effect of reducing gas fees for the users by a substantial amount, since the PaymentSplitter system relies on only a single balance mapping being updated every time an amount of tokens is allocated across multiple parties. It's a win-win solution.
Otherwise, just use the optimizer and hope you aren't the lucky winner of the "unexpected optimizer behavior" lottery. It probably won't hurt you, but for a contract with this much logic that is intended for extensive reuse, you might be the lucky one who finds out why the optimizer is not enabled by default.