Here’s how to fix them – Cointelegraph Magazine

Blockchain exploits can be extremely costly: Poorly designed smart contracts leave decentralized apps and bridges open to attack time and time again.

For example, the Ronin Network experienced a $625-million breach in March 2022 when a hacker stole private keys, generated fake withdrawals and transferred hundreds of millions of dollars out of the network. In August of the same year, the Nomad Bridge suffered a $190-million breach when hackers exploited a bug in the protocol that allowed them to withdraw more funds than they had deposited.
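
To see why that kind of bug is so devastating, consider the minimal Solidity sketch below. It is a hypothetical contract (not the actual Ronin or Nomad code, which were far more complex) illustrating the general failure mode: a withdrawal path that trusts what the caller claims instead of checking it against deposits the contract has actually recorded.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical illustration of the bug class described above. This is NOT the
// actual Ronin or Nomad code; it is a minimal sketch of a bridge that releases
// funds based on a caller's claim rather than on verified deposits.
contract NaiveBridge {
    mapping(address => uint256) public deposited;

    // Users lock funds on this side of the bridge.
    function deposit() external payable {
        deposited[msg.sender] += msg.value;
    }

    // VULNERABLE: trusts the amount the caller claims, so anyone can ask for
    // more than they ever deposited and drain the contract.
    function withdrawUnchecked(uint256 claimedAmount) external {
        payable(msg.sender).transfer(claimedAmount);
    }

    // Safer pattern: release only what the ledger records, and update state
    // before transferring value.
    function withdraw(uint256 amount) external {
        require(deposited[msg.sender] >= amount, "insufficient deposit");
        deposited[msg.sender] -= amount;
        payable(msg.sender).transfer(amount);
    }
}
```

Real bridge exploits usually hinge on subtler flaws, such as forged validator signatures or broken proof verification, but the lesson is the same: Every path that releases value must be validated against trusted state.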

These vulnerabilities in the underlying smart contract code, coupled with human error and lapses of judgment, create significant risks for Web3 users. But how can crypto projects take proactive steps to identify the issues before they happen?

There are a couple of major strategies. Web3 projects typically hire companies to audit their smart contract code and review the project to provide a stamp of approval.

Another approach, which is often used in conjunction, is to establish a bug bounty program that provides incentives for benign hackers to use their skills to identify vulnerabilities before malicious hackers do.

There are major issues with both approaches as they currently stand. 

Web3 auditing is broken

Audits, or external evaluations, tend to emerge in markets where risk can rapidly scale and create systemic harm. Whether in a publicly traded company, sovereign debt or a smart contract, a single vulnerability can wreak havoc.

But sadly, many audits – even when done by an external organization – are neither credible nor effective because the auditors are not truly independent. That is, their incentives might be aligned toward satisfying the client over delivering bad news.

“Security audits are time-consuming, expensive and, at best, result in an outcome that everything is fine. At worst, they can cause a project to reconsider its entire design, delaying the launch and market success. DeFi project managers are thus tempted to find another, more amenable auditing company that will sweep any concerns under the carpet and rubber-stamp the smart contracts,” explains Keir Finlow-Bates, a blockchain researcher and Solidity developer.

“I have had first-hand experience with this pressure from clients: arguing with developers and project managers that their code or architecture is not up to scratch receives push-back, even when the weaknesses in the system are readily apparent.”

Principled behavior pays off in the long run, but in the short term, it can come at the cost of profitable clients who are eager to get to market with their new tokens. 

“I can’t help noticing that lax auditing companies quickly build up a more significant presence in the auditing market due to their extensive roster of satisfied customers… satisfied, that is, until a hack occurs,” Finlow-Bates continues.

One of the leading companies in Web3 auditing, CertiK, provides “trust scores” to the projects it evaluates. However, critics point out that it has given its stamp of approval to projects that later failed spectacularly. For example, while CertiK was quick to share on Jan. 4, 2022, that a rug pull had occurred on the BNB Smart Chain project Arbix, it “omitted that they had issued an audit to Arbix 46 days earlier,” according to Eloisa Marchesoni, a tokenomics specialist, writing on Medium.

But the most notable incident was CertiK’s full-scope audit of Terra, which later collapsed and brought half the crypto industry down with it. The audit has since been taken down as the firm takes a more reflective approach, but bits and pieces remain online.

Terra as envisaged by Cointelegraph’s art department. They forgot to set the earth and moon on fire, however.

Terra-fied

Zhong Shao, co-founder of CertiK, said in a 2019 press release:

“CertiK was highly impressed by Terra’s clever and highly effective design of economy theory, especially the proper decoupling of controls for currency stabilization and predictable economic growth.”

He added, “CertiK also found Terra’s technical implementation to be of one of the highest qualities it has seen, demonstrating extremely principled engineering practices, mastery command of Cosmos SDK, as well as complete and informative documentations.” 

This certification played a major role in Terra’s increased international recognition and receipt of investment. The recently arrested Do Kwon, co-founder of Terra, said at the time:

“We are pleased to receive a formal stamp of approval from CertiK, who is known within the industry for setting a very high bar for security and reliability. The thorough audit results shared by CertiK’s team of experienced economists and engineers give us more confidence in our protocol, and we are excited to quickly roll out our first payment dApp with eCommerce partners in the coming weeks.”

For its part, CertiK argues its audits were comprehensive and the collapse of Terra was not down to a critical security flaw but human behavior. Hugh Brooks, director of security operations at CertiK, tells Magazine:

“Our Terra audit did not come up with any findings that would be considered critical or major because critical security bugs that could lead a malicious actor to attacking the protocol were not found. Nor did this happen in the Terra incident saga.”

“Audits and code reviews or formal verification can’t prevent actions by individuals with control or whales dumping tokens, which caused the first depeg and subsequent panicked actions.”

CertiK has just released its new security scores, which it says are independent of any commercial relationship. (CertiK)

Giving a stamp of approval to something that later turns out to be dodgy is not confined to the blockchain industry; it has repeated itself throughout history, from Big Five accounting firm Arthur Andersen signing off on Enron’s books (and later destroying parts of the evidence) to rating agency Moody’s paying out $864 million over the dodgy, optimistic bond ratings that helped fuel the housing bubble and contributed to the 2008–2009 Global Financial Crisis.

So, it’s more that Web3 audit companies face similar pressures in a much newer, faster-growing and less regulated industry. (In the past week, CertiK released its new “Security Scores” for 10,000 projects.)

The point here is not to throw CertiK under the bus – it is staffed with well-intentioned and skilled workers – but rather that Web3 audits don’t look at all of the risks to projects and users and that the market may need structural reforms to align incentives.

“Audits only check the validity of a contract, but much of the risk is in the logic of the protocol design. Many exploits are not from broken contracts, but require review of the tokenomics, integration and red-teaming,” says Eric Waisanen, tokenomics lead at Phi Labs.

“While audits are generally very helpful to have, they are unlikely to catch 100% of issues,” says Jay Jog, co-founder of Sei Networks. “The core responsibility is still on developers to employ good development practices to ensure strong security.”

Stylianos Kampakis, CEO of Tesseract Academy and tokenomics expert, says projects should hire multiple auditors to ensure the best possible review.

“I think they probably do a good job overall, but I’ve heard many horror stories of audits that missed significant bugs,” he tells Cointelegraph. “So, it’s not only down to the firm but also the actual people involved in the audit. That’s why I wouldn’t ever personally trust the security of a protocol to a single auditor.” 

zkSync agrees on the need for multiple auditors and tells Magazine that before it launched its EVM-compatible zero-knowledge rollup Era on mainnet on March 24, the protocol was thoroughly tested in seven different audits by Secure3, OpenZeppelin, Halborn and a fourth auditor yet to be announced.

White hat hackers and bug bounties

Rainer Böhme, professor for security and privacy at the University of Innsbruck, wrote that basic audits are “hardly ever useful, and in general, the thoroughness of security audits needs to be carefully tailored to the situation.” 

Instead, bug bounty programs can provide better incentives. “Bug bounties offer an established way to reward those who find bugs… they would be a natural fit for cryptocurrencies, given they have a built-in payment mechanism,” Böhme continued.

White hat hackers are those who leverage their talents to identify vulnerabilities and work with projects to fix them before a malicious (“black hat”) hacker can exploit them.

White hat hackers find bugs before black hat hackers do. (Pexels)

Bug bounty programs have become essential to discovering security threats across the web. They are generally run by project owners who want talented programmers to vet and review their code for vulnerabilities, and they reward hackers for identifying new bugs and for helping maintain the upkeep and integrity of a network. Historically, bugs in open-source smart contract languages, such as Solidity, have been identified and fixed thanks to bug bounty hackers.

“These campaigns began in the ‘90s: there was a vibrant community around the Netscape browser that worked for free or for pennies to fix bugs that were gradually appearing during development,” wrote Marchesoni.

“It soon became clear that such work could not be done in idle time or as a hobby. Companies benefited twice from bug bounty campaigns: in addition to the obvious security issues, the perception of their commitment to security also came by.”

Bug bounty programs have emerged across the Web3 ecosystem. For example, Polygon launched a $2-million bug bounty program in 2021 to root out and eliminate potential security flaws in the audited network. Ava Labs, the team behind Avalanche, operates its own bug bounty program, launched in 2021, via the HackenProof bug bounty platform.

However, there is often tension between how serious white hat hackers believe the security gaps they have found to be and how seriously projects take the issue.

White hat hackers have accused various blockchain projects of gaslighting community members and of withholding bug bounty compensation for white hat services. It should go without saying that actually paying the promised rewards for legitimate work is essential to maintaining incentives.

A team of hackers recently claimed that it was not compensated for its bug bounty services to the Tendermint application layer and Avalanche.

On the other side of the fence, projects have found some white hat hackers are really black hats in disguise.

Tendermint, Avalanche and more

Tendermint is a tool that lets developers focus on higher-level application development without dealing directly with the underlying communication and cryptography. Tendermint Core is the engine that runs the peer-to-peer network via proof-of-stake (PoS) consensus, while the Application BlockChain Interface (ABCI) is the layer through which public blockchains link to the Tendermint Core protocol.

In 2018, a bug bounty program for the Tendermint and Cosmos communities was created. The program was designed to reward community members for discovering vulnerabilities with rewards based on factors such as “impact, risk, likelihood of exploitation, and report quality.” 

Last month, a team of researchers claimed to have found a major Tendermint security exploit: a Remote Procedure Call (RPC) vulnerability that allows services to be crashed via the remote API and that impacts over 70 blockchains. Because those blockchains share similar code, the researchers say the issue could extend to more than 100 peer-to-peer and API vulnerabilities. Ten blockchains in the top 100 of CertiK’s “Security Leaderboard” are based on Tendermint.

Tendermint remote API crash from Padillac’s desktop. (Pad on YouTube)

However, after going through the proper channels to claim the bounty, the hacker group said it was not compensated. Instead, what followed was a string of back-and-forth events, which some claim was a stalling attempt by Tendermint Core while it quickly patched the exploit without paying the bounty hunters their dues.

This vulnerability, along with others the group says it has documented, is known as a zero-day exploit.

“The specific Tendermint denial-of-service (DoS) attack is another unique blockchain attack vector, and its implications aren’t yet fully clear, but we will be evaluating this potential vulnerability going forward, encouraging patches and discussing with current customers who may be vulnerable,” said CertiK’s Brooks.

He adds that the job of security testing is never finished. “Many see audits or bug bounties as a one-and-done scenario, but really, security testing needs to be ongoing in Web3 the same way it is in other traditional areas,” he says.

Are they even white hats?

Bug bounties that rely on white hats are far from perfect, given how easy it is for black hats to put on a disguise. Ad hoc arrangements for the return of funds are a particularly problematic approach.

“Bug bounties in the DeFi space have a severe problem, as over the years, various protocols have allowed black hat hackers to turn ‘white hat’ if they return some or most of the money,” says Finlow-Bates.

White hat and black hat hackers sometimes play the same game. (Pexels)

“Extract a nine-figure sum, and you may end up with tens of millions of dollars in profit without any repercussions.” 

The Mango Markets hack in October 2022 is a perfect example: a $116-million exploit in which only $65 million was returned and the rest kept as a so-called “bounty.” The legality of this is an open question, with the hacker responsible since charged over the incident, which some have likened more to extortion than a legitimate bounty.

The Wormhole Bridge was similarly hacked for $325 million of crypto, with a $10-million bounty offered in a white hat-style agreement. However, this was not large enough to entice the hacker to take the deal.

“Compare this to true white hat hackers and bug bounty programs, where a strict set of rules are in place, full documentation must be provided, and the legal language is threatening, then failure to follow the directions to the letter (even inadvertently) may result in legal action,” Finlow-Bates elaborates. 

Organizations that enlist the support of white hats must realize that not all of them are equally altruistic; some blur the lines between white and black hat activities. Building in accountability, with clear instructions and rewards that are actually paid out, matters.

“Both bug bounties and audits are less profitable than exploits,” Waisanen continues, remarking that attracting white hat hackers in good faith is not easy.

Where do we go from here?

Security audits are not always helpful and depend crucially on their degree of thoroughness and independence. Bug bounties can work, but equally, the white hat might just get greedy and keep the funds. 

Are both strategies just a way of outsourcing and avoiding responsibility for good security practices? Crypto projects may be better off learning how to do things the right way in the first place, argues Maurício Magaldi, global strategy director for 11:FS.

“Web3 BUIDLers are generally unfamiliar with enterprise-grade software development practices, which puts a number of them at risk, even if they have bug bounty programs and code audits,” he says. 

“Relying on code audit to highlight issues in your application that aims to handle millions in transactions is a clear outsourcing of responsibility, and that is not an enterprise practice. The same is true for bug bounty programs. If you outsource your code security to external parties, even if you provide enough monetary incentive, you’re giving away responsibility and power to parties whose incentives might be out of reach. This is not what decentralization is about,” said Magaldi.

An alternative approach is to follow the process of the Ethereum Merge. 

“Maybe because of the DAO hack back in the early days of Ethereum, now every single change is meticulously planned and executed, which gives the whole ecosystem a lot more confidence about the infrastructure. DApp developers could steal a page or two from that book to move the industry forward,” Magaldi says.

Rather than outsource their security, projects need to take full responsibility themselves. (Pexels)

Five lessons for cybersecurity in crypto

Let’s take stock. Here are five broad philosophical lessons we can take away.

First, we need more transparency around the successes and failures of Web3 cybersecurity. There is, unfortunately, a dark subculture that rarely sees the light of day since the audit industry often operates without transparency. This can be countered by people talking – from a constructive point of view – about what works and what does not work. 

When Arthur Andersen failed to flag and correct fraudulent behavior by Enron, it suffered a major reputational and regulatory blow. If the Web3 community cannot at least meet those standards, its ideals are disingenuous.

Second, Web3 projects must be committed to honoring their bug bounty programs if the broader community is to gain legitimacy and reach consumers at scale. Bug bounty programs have been highly effective for software in the Web1 and Web2 eras, but they require credible commitments by projects to actually pay white hat hackers.

Third, we need genuine collaborations among developers, researchers, consultancies and institutions. While profit motives may influence how much certain entities work together, there has to be a shared set of principles that unite the Web3 community – at least around decentralization and security – and lead to meaningful collaborations.

There are already many examples; tools like Ethpector are illustrative because they showcase how researchers can help provide not only careful analysis but also practical tools for blockchains.

Fourth, regulators should work with, rather than against or independently of, developers and entrepreneurs.

“Regulators should provide a set of guiding principles, which would need to be accounted for by developers of DeFi interfaces. Regulators need to think of ways to reward developers of good interfaces and punish designers of poor interfaces, which can be subject to hacking and expose the underlying DeFi services to costly attacks,” says Agostino Capponi, director of the Columbia Center for Digital Finance and Technologies.

By working collaboratively, regulators are not burdened with having to be subject matter experts on every emerging technology; they can outsource that expertise to the Web3 community and play to their own strengths, such as building scalable processes.

Fifth, and most controversially, DeFi projects should work toward a middle-ground where users go through some level of KYC/AML verification to ensure that malicious actors are not leveraging Web3 infrastructure for harmful purposes.

Although the DeFi community has always opposed these requirements, there can be a middle ground: Every community requires some degree of structure, and there should be a process for ensuring that unambiguously malicious users are not exploiting DeFi platforms.

Decentralization is valuable in finance. As we have seen once again with the collapse of Silicon Valley Bank, centralized institutions are vulnerable, and failures create large ripple effects for society.

My research in the Journal of Corporate Finance also highlights how DeFi is recognized as having greater security benefits: Following a well-known data breach on the centralized exchange KuCoin, for example, transactions grew 14% more on decentralized exchanges, relative to centralized exchanges. But more work remains to be done for DeFi to be accessible.

Ultimately, building a thriving ecosystem and market for cybersecurity in the Web3 community is going to require good-faith efforts from every stakeholder. 

Christos A. Makridis

Christos A. Makridis is the Chief Technology Officer and Head of Research at Living Opera. He is also a research affiliate at Stanford University’s Digital Economy Lab and Columbia Business School’s Chazen Institute, and holds dual doctorates in economics and management science and engineering from Stanford University. Follow at @living_opera.


