My experience collaborating with OpenZeppelin

I’m a Professor at the University of Buenos Aires and my main field of research is automated program analysis. Although I’ve been involved in the development of several software analysis tools, I had no previous experience in either cryptocurrencies or the Ethereum ecosystem.

I first heard about OpenZeppelin when one of my former MSc students mentioned smart contract security and the Ethereum community’s growing interest in program analysis. I then had the chance to talk with some colleagues from my university who work at OpenZeppelin and learned a bit about the company. I told them about my interest in understanding the Ethereum ecosystem and smart contract audits, and I offered to share my experience with program analysis techniques. The idea was to help them understand to what extent program analysis tools could improve software auditors’ effectiveness.

Participating in Software Audits

I started my collaboration project by participating in software audits. The idea was to familiarize myself with the Ethereum platform and the Solidity language. I wanted to understand how auditors work: in particular, what kinds of issues they look for and the strategies they use to find them, and to get an initial intuition about how program analysis techniques could assist them.

The experience was really fun and fruitful, and it gave me a first grasp of how software auditors think and work during audits. I learned how they build a mental model of the contracts, how they quickly find common issues or vulnerabilities and, more importantly, how crucial it is to understand functional requirements. I was impressed by how they took a piece of code that looked fishy and built an attacker model that finally led to an exploit. Amazing!

Of course, I didn’t perform as well in my first attempts as a software auditor, but I tried to use my own strategies based on my experience as a program analysis researcher and educator. Essentially, I saw a smart contract as a complex API and tried to understand its underlying protocol. Then I looked carefully into the contract’s storage variables, trying to understand what would constitute a valid contract state. In the programming languages discipline this is known as an object or class invariant: a relation between state variables that ensures the object is in a consistent state every time it is invoked by client code. Invariants can be assumed at the entry point of each public method, but the code must make sure the invariant still holds at the end of the method. In the case of smart contracts the concept of contract invariant is similar, but invariants should be preserved at the transaction level, meaning that internal function invocations may temporarily violate the invariant as long as it is re-established before control returns to the client.
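To make the idea concrete, here is a minimal Python sketch of the pattern (a toy model, not Solidity and not from any audited contract; the `Token` class and its invariant are invented for illustration). The invariant, "the sum of all balances equals the total supply", is assumed at the start of each transaction and re-established before returning:

```python
class Token:
    """Toy token whose contract invariant is: sum(balances) == total_supply."""

    def __init__(self, supply):
        self.total_supply = supply
        self.balances = {"owner": supply}

    def invariant(self):
        return sum(self.balances.values()) == self.total_supply

    def transfer(self, src, dst, amount):
        assert self.invariant()                     # assumed at transaction entry
        assert self.balances.get(src, 0) >= amount  # precondition
        self.balances[src] = self.balances.get(src, 0) - amount
        # Between these two updates the invariant is temporarily broken,
        # which is fine as long as control has not returned to the client.
        self.balances[dst] = self.balances.get(dst, 0) + amount
        assert self.invariant()                     # re-established before returning
```

A transfer that forgot the second balance update would trip the final assertion, exposing the inconsistent state at the transaction boundary.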

As a consequence, it is worth noting that: i) a contract with a broken invariant is a contract in an inconsistent state, which can lead to vulnerabilities such as loss of funds or denial of service; ii) developers usually assume their contracts have invariants even if they do not document them, so invariants remain implicit in preconditions and do not always appear in the code; iii) once made explicit, invariant violations can be uncovered by suitable transaction sequences. One could argue that, in some cases, when software auditors create transaction sequences for exploits, they are implicitly trying to break some contract invariant. Tools like model checkers can do this task automatically if invariants are made explicit.
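Point iii) can be sketched as a tiny bounded search in Python: enumerate transaction sequences up to some depth and report the first one that leaves the invariant broken. This is a toy stand-in for what a model checker does, not a real tool; the buggy `mint` action and the state shape are hypothetical:

```python
from itertools import product

def find_breaking_sequence(make_state, actions, invariant, depth=3):
    """Enumerate action sequences up to `depth`; return the first one that
    leaves the state with a broken invariant, or None if none is found."""
    for n in range(1, depth + 1):
        for seq in product(actions, repeat=n):
            state = make_state()
            try:
                for act in seq:
                    act(state)
            except AssertionError:
                continue  # the transaction reverted; no broken state observed
            if not invariant(state):
                return seq
    return None

def make_state():
    return {"total": 10, "balances": {"a": 10}}

def buggy_mint(state):
    state["balances"]["a"] += 5  # bug: forgets to update state["total"]

def invariant(state):
    return sum(state["balances"].values()) == state["total"]

seq = find_breaking_sequence(make_state, [buggy_mint], invariant, depth=2)
print([f.__name__ for f in seq])  # → ['buggy_mint']
```

Real model checkers explore the state space symbolically rather than by brute force, but the question they answer is the same: is there a transaction sequence that drives the contract into a state violating the invariant?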

Finally, like other auditors, I also looked for intricate or fishy code. I looked for specific program points (e.g., operations affecting storage or balances that are protected by some guarding condition) and tried to find particular inputs that might reach those program points. I intentionally applied that strategy because I suspected that symbolic/concolic execution tools could automatically produce inputs covering those specific statements.
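What such tools automate can be illustrated with a degenerate pure-Python stand-in (the guard `x > 100 and x % 7 == 0` is an invented example, not from any audited contract). A real symbolic execution engine would hand the path condition to an SMT solver instead of enumerating candidates:

```python
def guard(x):
    # Guard protecting a storage-affecting statement, e.g. in pseudocode:
    #     if (x > 100 && x % 7 == 0) { balance += x; }
    return x > 100 and x % 7 == 0

def find_covering_input(guard, search_space):
    """Return an input that reaches the guarded statement, if any.
    A symbolic execution tool solves the path condition instead of
    searching, so it scales far beyond this brute-force sketch."""
    for x in search_space:
        if guard(x):
            return x
    return None

print(find_covering_input(guard, range(200)))  # → 105
```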

Survey on smart contract analysis tools

I was lucky to have @VeraBE as my MSc student. She joined our collaboration project and together we started to research smart contract analysis tools. We recommend that readers take a look at her MSc thesis and her post on software analysis for further information.

We performed the following tasks:

- Analyzed the state of the art and practice in smart contract analysis tools
- Tried to run every available tool

I don’t plan to go into details (at least not in this post), but we analyzed hundreds of papers and tools. Most of those works are referenced in Vera’s thesis: we went through several symbolic/concolic execution tools, fuzzers, bug-pattern detectors (using different encodings of the contracts), abstract interpreters, model checkers, deductive verifiers, tools extracting dependency information and call graphs, and so forth.

We learned that tools cannot in any way replace the work performed by software auditors :-). This is due to unavoidable factors such as the undecidability of program analysis, the need for user-provided specifications (some issues are even hard to specify using classical logics), and other technical factors like scalability and performance. Moreover, no single tool can deal with all the different tasks an auditor needs to perform during an audit.

Nevertheless, tools can be an amazing addition for auditors. Auditors can rely on tools to accelerate program understanding (call graphs, class diagrams, dependency information, etc.), confirm guesses or code smells (by adding assertions, reachability queries, invariants, etc.), produce exploits (symbolic execution, model checkers, program synthesis), and debug programs.

We also found that there is a lot of room for research into tools that help developers and auditors better understand and validate smart contracts. We imagine tools that would help them compare actual implementations against mental models or informal specifications.

Playing with Smart Contracts

After playing with many tools, we wanted to play with real contracts that OpenZeppelin had already audited. We wanted to check whether some of the interesting issues they found could be uncovered by analysis tools. Unfortunately, we failed in our attempt: in many cases the tool crashed, either because of scalability problems or because it did not support some language constructs.

At that precise moment, OpenZeppelin disclosed the MakerDAO vulnerability in the DSChief contract. The report included hints about how to exploit it. We suspected that this issue could be highlighted, and the exploit synthesized, by an automated tool.
With this idea in mind, Vera simplified the contract to a version more amenable to analysis while keeping the same functionality, proposed a partial invariant modeling the property the contract must preserve, and analyzed it with several tools.

We reported our findings here. We are extremely excited by the response to that post: many colleagues working on analysis tools have provided very illuminating explanations and have even fixed their tools!

As a bonus track of this phase we also wrote a small tutorial on how to verify an ERC20 implementation using VeriSol.

Our first prototypes

After playing with tools, we decided to implement our own prototypes. We started with something very simple: a tool that relies on other tools to produce a result. Vera implemented a prototype called VeriMan, which takes a Solidity file with assertions and produces test cases breaking some of the assertions. It combines a symbolic execution tool like Manticore with the model checker VeriSol.

Our second tool, also implemented by Vera and also called VeriMan :man_facepalming:t4:, goes one step further: it takes as input a Solidity file and a set of properties written in Past Linear Temporal Logic (PLTL) and produces a new Solidity file instrumented with the assertions needed to check those properties. One interesting aspect of the tool is that the resulting Solidity file can then be checked with a model checker like VeriSol, a symbolic execution tool like Manticore or Mythril, or a fuzzer like Echidna.
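The instrumentation idea behind past-time operators can be sketched in a few lines of Python: each past operator (e.g., "once") becomes a history variable updated on every transaction, and the property becomes an assertion over those variables. This monitor is an illustrative example of the general technique, not VeriMan's actual output, and the `deposit`/`withdraw` events are hypothetical:

```python
class PLTLMonitor:
    """Runtime monitor for the PLTL property
        historically(withdraw -> once(deposit)):
    every withdraw must be preceded (or accompanied) by some deposit."""

    def __init__(self):
        self.once_deposit = False  # history variable for once(deposit)
        self.holds = True          # history variable for historically(...)

    def step(self, deposit: bool, withdraw: bool) -> bool:
        # Past operators include the current state, so update histories first.
        self.once_deposit = self.once_deposit or deposit
        self.holds = self.holds and (not withdraw or self.once_deposit)
        return self.holds  # this is what an inserted assert would check

m = PLTLMonitor()
m.step(deposit=True, withdraw=False)
print(m.step(deposit=False, withdraw=True))   # True: a deposit happened before
```

In the instrumented Solidity file the history variables become contract storage fields updated at the start of each public function, and `assert(holds)` is what the back-end tools then try to violate.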

Final thoughts

2019 was a very exciting year for me. I am very grateful to OpenZeppelin for introducing me to this amazing world of cryptocurrencies and smart contracts. I had the chance to learn a lot about software audits and tools, got to know extremely intelligent people, and even had the chance to provide some advice :smile:.

Now it is time for a little break, but in the very near future I expect to keep working on these topics. The plan is to work with my colleagues at the university on new techniques and tools for analyzing smart contracts. We are also interested in studying logics and formalisms better suited to expressing properties of smart contracts.

Thank you very much OpenZeppelin! I look forward to further collaboration in the near future!


Thank you @garbervetsky for your work, effort and passion! It was great working side by side with you; I think we both learned a lot from each other. I truly hope we get a chance to collaborate once again :smiley:

Enjoy your break! :beach_umbrella:


It was exciting to see this collaboration as it happened, even though I wasn’t directly involved. This was a great summary. Thank you so much!


Thank you @garbervetsky and @VeraBE! It was great working with you, thanks for your super interesting explorations :smiley: