B005: Audit Scopes
An important step in a project’s formal audit is the scoping process, in which auditors are called to gauge the complexity of the project and estimate how much time a full audit of its code would take.
In this article I attempt to explain my personal methodology for forming sensible audit scopes, as well as some misleading indicators of “simple” code that can lead to an incorrect assessment. Although the methodology is my own, I try to approach the matter impartially and showcase what I believe an unbiased project scope should look like.
Forked Code
The first step I take in assessing the scope of a project is to detect whether it is a fork of another well-known project, which significantly reduces the time necessary to audit it, either because the original has been audited multiple times or because its documentation is expansive enough to be relied upon.
Detecting such code is a matter of experience; however, some indicators can be noted down and used as a loose guide (a detection sketch follows the list):
- Uniswap V2 AMM: contract filename structure (`XERC20.sol`, `XFactory.sol`, `XPair.sol`, `XRouter.sol`) & the notion of `token0` and `token1` etc. within the code.
- Balancer V1 AMM: contract filename structure (`XPool.sol`, `XConst.sol`, `XMath.sol` etc.) & the notion of `denormalizedWeight` and `normalizedWeight` along with `bind`/`unbind`/`rebind` functions.
- Compound Governance: presence of `Timelock.sol`, thoroughly documented code for the `Proposal` struct & `castVote`/`castVoteBySig` functions.
- SushiSwap Staking: `Chef` contract suffix, `UserInfo` & `PoolInfo` structs and `deposit`, `withdraw` & `emergencyWithdraw` functions.
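To make these indicators more concrete, below is a minimal sketch of how they can be scanned for automatically. The indicator table, scoring, and `./contracts` path are illustrative assumptions of mine rather than a definitive detection tool; a hit only suggests going on to compare interfaces manually.

```python
import re
from pathlib import Path

# Illustrative indicator table: each well-known codebase is mapped to
# filename patterns and identifiers that commonly survive in forks.
FORK_INDICATORS = {
    "Uniswap V2 AMM": {
        "filenames": [r".*Factory\.sol$", r".*Pair\.sol$", r".*Router.*\.sol$"],
        "identifiers": ["token0", "token1", "getReserves"],
    },
    "Balancer V1 AMM": {
        "filenames": [r".*Pool\.sol$", r".*Const\.sol$", r".*Math\.sol$"],
        "identifiers": ["denormalizedWeight", "normalizedWeight", "rebind"],
    },
    "Compound Governance": {
        "filenames": [r"Timelock\.sol$"],
        "identifiers": ["castVote", "castVoteBySig", "struct Proposal"],
    },
    "SushiSwap Staking": {
        "filenames": [r".*Chef.*\.sol$"],
        "identifiers": ["UserInfo", "PoolInfo", "emergencyWithdraw"],
    },
}

def scan_for_forks(contracts_dir: str) -> dict:
    """Count indicator hits per known project across all .sol files."""
    hits = {name: 0 for name in FORK_INDICATORS}
    for path in Path(contracts_dir).rglob("*.sol"):
        source = path.read_text(errors="ignore")
        for name, indicators in FORK_INDICATORS.items():
            if any(re.search(pattern, path.name) for pattern in indicators["filenames"]):
                hits[name] += 1
            hits[name] += sum(1 for ident in indicators["identifiers"] if ident in source)
    return hits

if __name__ == "__main__":
    # "./contracts" is a placeholder path for the project under review.
    for project, score in scan_for_forks("./contracts").items():
        if score:
            print(f"Possible {project} ancestry: {score} indicator hit(s)")
```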
Although some projects were noted down, this list is, of course, non-exhaustive given the abundance of projects being forked, forks of forks, and so on. As a general rule of thumb, if a glance over the code does not raise suspicion about the contract’s structure, one should simply look up the contract interfaces and see whether a match is found.
When auditing forked code, one should simply perform a diff check to identify which areas were changed and how relevant they are to the overall logic of the contract. Adjusting the reward rate of a SushiSwap fork, for example, is a minimal change and one that encompasses a small review scope. On the other hand, if entirely new functionality is introduced, the scope is enlarged quite a bit.
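In practice a plain `git diff` or `diff -ru` against the upstream repository does the job; the sketch below shows the same idea in script form, assuming the upstream sources have been checked out locally (all paths are placeholders).

```python
import difflib
from pathlib import Path

def diff_against_upstream(fork_dir: str, upstream_dir: str) -> None:
    """Print a unified diff for every contract present in both trees so the
    changed areas, i.e. the real review scope, stand out immediately."""
    fork_root, upstream_root = Path(fork_dir), Path(upstream_dir)
    for fork_file in fork_root.rglob("*.sol"):
        upstream_file = upstream_root / fork_file.relative_to(fork_root)
        if not upstream_file.exists():
            # Entirely new contracts need a full review rather than a diff review.
            print(f"NEW CONTRACT (full review): {fork_file}")
            continue
        diff = list(difflib.unified_diff(
            upstream_file.read_text(errors="ignore").splitlines(keepends=True),
            fork_file.read_text(errors="ignore").splitlines(keepends=True),
            fromfile=str(upstream_file),
            tofile=str(fork_file),
        ))
        if diff:
            print("".join(diff))

# Example invocation (paths are illustrative):
# diff_against_upstream("./contracts", "./upstream/sushiswap/contracts")
```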
Sometimes identifying forks is not this straightforward, as the project may have renamed APIs or only partially incorporated another project’s code. When I have a hunch that a project may be a fork, I immediately reach out to the team requesting the audit and ask whether there is forked code we should be aware of.
Scope Depth Level
I have historically found myself identifying vulnerabilities during the scoping process, which is usually an indicator of “too much” time spent on a project’s scope.
When a project is being scoped, the time spent should be optimized to strike a balance between a good level of confidence in the quoted effort and the time actually spent looking through the project’s code. After all, a project may not move forward with the audit, and becoming familiar with its notions may ultimately not pay off.
I have found that the best approach to this issue is a quick scroll through each contract coupled with keeping an eye out for the following (a rough metrics sketch follows the list):
- Multiple “import” statements that are not from a well-known library (e.g. OpenZeppelin)
- Total contract length, per-function length, per-function total code paths (if-else chains, for loops etc.), and per-function state mutability
- Naming notions (i.e. “Oracle”, “Staking”, “Pool”, “Strategy” etc.)
- ERC Standards (i.e. “ERC20”, “ERC721” etc.)
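Below is a rough sketch of how these signals can be tallied per contract. The regular expressions are simplistic heuristics of my own rather than a proper Solidity parser, so the resulting numbers are only meant to guide a quick scroll-through, not replace it.

```python
import re
from pathlib import Path

# Simplistic heuristics; a real Solidity parser/AST would be more accurate.
IMPORT_RE = re.compile(r'^\s*import\s+(?:\{[^}]*\}\s+from\s+)?["\']([^"\']+)["\']', re.MULTILINE)
FUNCTION_RE = re.compile(r"\bfunction\s+\w+")
BRANCH_RE = re.compile(r"\b(if|else|for|while)\b")
NAMING_HINTS = ("Oracle", "Staking", "Pool", "Strategy", "ERC20", "ERC721")

def contract_metrics(path: Path) -> dict:
    """Collect rough per-contract complexity signals from a .sol file."""
    source = path.read_text(errors="ignore")
    imports = IMPORT_RE.findall(source)
    return {
        "file": path.name,
        "lines": len(source.splitlines()),
        "functions": len(FUNCTION_RE.findall(source)),
        "branches": len(BRANCH_RE.findall(source)),
        # Imports outside well-known libraries deserve extra attention.
        "external_imports": [i for i in imports if "openzeppelin" not in i.lower()],
        "naming_hints": [h for h in NAMING_HINTS if h in source],
    }

if __name__ == "__main__":
    # "./contracts" is a placeholder for the project under review.
    for sol_file in Path("./contracts").rglob("*.sol"):
        print(contract_metrics(sol_file))
```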
However, the above should not be used as an “automatic” indicator of a project’s complexity. As an example, a contract may be long and contain multiple execution paths yet have no imports, meaning that each function is self-sufficient. This significantly reduces its auditing time compared to an equivalent contract that imports, for example, the interface of Balancer V1, which is a more complex integration to assess than one with Uniswap. The “import” statements should therefore also be weighted by how difficult the imported project is to integrate with.
As an example, integrating with the 0x protocol is more complex than integrating with SushiSwap, and Compound is easier to integrate with than Aave. Generally, each DeFi “building block” brings its own complexity level to the project being audited.
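To illustrate how this weighting could feed into an estimate, the sketch below folds per-integration weights into a rough scope multiplier. The numbers are purely my own assumptions for illustration; only their ordering (0x above SushiSwap, Aave above Compound) reflects the comparison above.

```python
# Purely illustrative relative weights; only the ordering (0x > SushiSwap,
# Aave > Compound) reflects the comparison in the text, the numbers are assumptions.
INTEGRATION_WEIGHTS = {
    "sushiswap": 1.0,
    "compound": 1.2,
    "aave": 1.6,
    "0x": 2.0,
}

def integration_multiplier(detected_integrations: list) -> float:
    """Fold the weight of each detected integration into a scope multiplier."""
    multiplier = 1.0
    for name in detected_integrations:
        # Unknown integrations get a conservative default weight.
        multiplier += 0.1 * INTEGRATION_WEIGHTS.get(name.lower(), 1.0)
    return multiplier

# Example: a project that integrates with both 0x and Compound.
# estimated_days = base_days * integration_multiplier(["0x", "Compound"])
```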
Documentation & Test Coverage
Although some may disagree, this particular aspect of a project’s code is not that decisive in my day-to-day workflow. A project rarely needs to move beyond the stage above during the scoping process, as scrolling through the code tends to give a good level of confidence in the quoted effort.
Nonetheless, there have been cases where a very complex and innovative project has come to request an assessment. If the complexity assessed in the previous step is significant, a sensible next step is to review whether the project has sufficient documentation as well as test coverage.
Good coding practices usually yield code that is self-evident through sensible naming conventions and in-line documentation, although some projects prefer illegible naming conventions in favor of a distinctive, hard-to-imitate codebase and compensate with extensive off-code documentation, MakerDAO being a prime example.
Additionally, investigating test coverage is much better conducted during the actual audit of the project, as it can identify pain points that have not been thoroughly tested yet and may contain logic bugs.
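During scoping, the most I might do is a shallow sanity check along the lines of the sketch below: comparing the volume of test code to the volume of contract code and checking that documentation exists at all. The directory names are assumptions based on common project layouts, and the ratio is a crude proxy, not a substitute for the coverage analysis performed during the audit itself.

```python
from pathlib import Path

def count_lines(directory: Path, suffixes: set) -> int:
    """Sum line counts of all files under `directory` with the given suffixes."""
    if not directory.exists():
        return 0
    return sum(
        len(path.read_text(errors="ignore").splitlines())
        for path in directory.rglob("*")
        if path.suffix in suffixes
    )

def quick_coverage_proxy(project_root: str) -> None:
    # Directory names are assumptions based on common Hardhat/Foundry layouts.
    root = Path(project_root)
    contract_lines = (
        count_lines(root / "contracts", {".sol"}) + count_lines(root / "src", {".sol"})
    )
    test_lines = count_lines(root / "test", {".js", ".ts", ".sol", ".py"})
    has_docs = (root / "docs").exists() or (root / "README.md").exists()
    ratio = test_lines / contract_lines if contract_lines else 0.0
    print(f"test-to-contract line ratio: {ratio:.2f}, documentation present: {has_docs}")

# Example: quick_coverage_proxy("./project-under-review")
```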
Conclusion
Assessing a project’s audit scope requires weighing multiple aspects to form a well-informed estimate. In my personal methodology, I have attempted to optimize the time it takes to form such an estimate by approaching a project with a set of mental guidelines applied to its code.
Even so, audit scopes cannot be considered equivalent across auditors, given that each security engineer has a different level of skill. The purpose of the methodology described in this article is to normalize the “factor” by which estimates vary between security engineers and thus ensure a more consistent output, one that should converge across engineers as their experience matures.