According to Cointelegraph, Kang Li, the chief security officer at blockchain security firm CertiK, has warned that artificial intelligence (AI) tools such as OpenAI's ChatGPT could introduce more problems, bugs, and attack vectors if used to write smart contracts and build cryptocurrency projects. Li explained that ChatGPT cannot detect logical code bugs the way experienced developers can, and may create more bugs than it identifies, which could be disastrous for first-time or amateur coders looking to build their own projects.

Li believes ChatGPT is best used as an engineer's assistant because it excels at explaining what a line of code actually means. He stressed that it should not be relied on to write code, especially by inexperienced programmers looking to build something monetizable. Li said he will stand by these assertions for at least the next two to three years, while acknowledging that rapid developments in AI may vastly improve ChatGPT's capabilities.

In related news, Richard Ma, the co-founder and CEO of Web3 security firm Quantstamp, told Cointelegraph that AI tools are becoming more successful at social engineering attacks, with many attempts now indistinguishable from those made by humans. Ma said Quantstamp's clients are reporting an alarming number of increasingly sophisticated social engineering attempts, and he believes we are approaching a point where it will no longer be possible to tell whether a malicious message was generated by an AI or a human. Ma added that better anti-phishing software is coming to market to help companies mitigate potential attacks.