
Claude 3.5 Sonnet Struggles Creatively in AI Coding Tests Dominated by ChatGPT

May 3, 2025
FrankWilliams

Testing the Capabilities of Anthropic's New Claude 3.5 Sonnet

Last week, I received an email from Anthropic announcing the release of Claude 3.5 Sonnet. They boasted that it "raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations." They also claimed it was perfect for complex tasks like code generation. Naturally, I had to put these claims to the test.

I've run a series of coding tests on various AIs, and you can too: my article "How I test an AI chatbot's coding ability - and you can too" has all the details. Let's dive into how Claude 3.5 Sonnet performed against my standard tests, and see how it stacks up against other AIs like Microsoft Copilot, Meta AI, Meta Code Llama, Google Gemini Advanced, and ChatGPT.

1. Writing a WordPress Plugin

Initially, Claude 3.5 Sonnet showed a lot of promise. The user interface it generated was impressive, with a clean layout that placed data fields side-by-side for the first time among the AIs I've tested.

Screenshot of WordPress plugin interface created by Claude 3.5 Sonnet. Screenshot by David Gewirtz/ZDNET

What caught my attention was how Claude approached the code generation. Instead of the usual separate files for PHP, JavaScript, and CSS, it provided a single PHP file that auto-generated the JavaScript and CSS files into the plugin's directory. It's an innovative approach, but a risky one: it only works if file permissions allow the plugin to write to its own directory, which is a significant security risk in a production environment.

Unfortunately, despite the creative solution, the plugin didn't work. The "Randomize" button did nothing, which was disappointing given its initial promise.

Here are the aggregate results compared to previous tests:

  • Claude 3.5 Sonnet: Interface: good, functionality: fail
  • ChatGPT GPT-4o: Interface: good, functionality: good
  • Microsoft Copilot: Interface: adequate, functionality: fail
  • Meta AI: Interface: adequate, functionality: fail
  • Meta Code Llama: Complete failure
  • Google Gemini Advanced: Interface: good, functionality: fail
  • ChatGPT 4: Interface: good, functionality: good
  • ChatGPT 3.5: Interface: good, functionality: good

2. Rewriting a String Function

This test evaluates how well an AI can rewrite code to meet specific needs, in this case, for dollar and cent conversions. Claude 3.5 Sonnet did a good job removing leading zeros, handling integers and decimals correctly, and preventing negative values. It also smartly returned "0" for unexpected inputs, which helps avoid errors.

However, it failed to allow entries like ".50" for 50 cents, which was a requirement. This means the revised code wouldn't work in a real-world scenario, so I have to mark it as a fail.
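For reference, the behavior the test asks for can be sketched in a few lines. This is my own JavaScript illustration of the requirements (the original test uses a different function in a different language, and the function name here is mine):

```javascript
// Normalize a dollars-and-cents entry: strip leading zeros, accept
// integers, decimals, and bare-cent entries like ".50", reject
// negatives, and return "0" for anything unexpected.
function normalizeAmount(input) {
  const s = String(input).trim();
  // Accept "5", "5.25", "0.50", or ".50" -- nothing negative or malformed.
  if (!/^(\d+(\.\d{0,2})?|\.\d{1,2})$/.test(s)) return "0";
  const value = parseFloat(s);
  if (!(value >= 0)) return "0";
  // Number-to-string conversion drops leading zeros ("007.25" -> "7.25").
  return String(value);
}
```

The `.\d{1,2}` alternative in the pattern is the piece Claude's revision effectively omitted: without it, ".50" is rejected as malformed instead of being read as 50 cents.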

Here are the aggregate results:

  • Claude 3.5 Sonnet: Failed
  • ChatGPT GPT-4o: Succeeded
  • Microsoft Copilot: Failed
  • Meta AI: Failed
  • Meta Code Llama: Succeeded
  • Google Gemini Advanced: Failed
  • ChatGPT 4: Succeeded
  • ChatGPT 3.5: Succeeded

3. Finding an Annoying Bug

This test is tricky because it requires the AI to find a subtle bug that demands specific WordPress knowledge. It's a bug I missed myself and initially had to turn to ChatGPT to solve.

Claude 3.5 Sonnet not only found and fixed the bug but also noticed an error introduced during the publishing process, which I then corrected. This was a first among the AIs I've tested since publishing the full set of tests.

Here are the aggregate results:

  • Claude 3.5 Sonnet: Succeeded
  • ChatGPT GPT-4o: Succeeded
  • Microsoft Copilot: Failed. Spectacularly. Enthusiastically. Emojically.
  • Meta AI: Succeeded
  • Meta Code Llama: Failed
  • Google Gemini Advanced: Failed
  • ChatGPT 4: Succeeded
  • ChatGPT 3.5: Succeeded

So far, Claude 3.5 Sonnet has failed two out of three tests. Let's see how it does with the last one.

4. Writing a Script

This test checks the AI's knowledge of specialized programming tools like AppleScript and Keyboard Maestro. While ChatGPT had shown proficiency in both, Claude 3.5 Sonnet didn't fare as well. It wrote an AppleScript that attempted to interact with Chrome but completely ignored the Keyboard Maestro component.

Moreover, the AppleScript contained a syntax error. In trying to make the match case-insensitive, Claude generated a line that won't even compile:

if theTab's title contains input ignoring case then

The "contains" operator is already case-insensitive by default in AppleScript, and "ignoring case" is a block construct (ignoring case ... end ignoring), not a modifier that can be tacked onto an if statement, so the line fails.
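The intent behind the broken line is easy to express correctly. As an analogy (the original is AppleScript, where no modifier is needed at all), here is what an explicitly case-insensitive substring check looks like in JavaScript; the function name is my own:

```javascript
// Explicit case-insensitive substring match -- the behavior Claude was
// reaching for. In AppleScript this is unnecessary, since "contains"
// ignores case by default.
function titleMatches(title, input) {
  return title.toLowerCase().includes(input.toLowerCase());
}
```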

Here are the aggregate results:

  • Claude 3.5 Sonnet: Failed
  • ChatGPT GPT-4o: Succeeded but with reservations
  • Microsoft Copilot: Failed
  • Meta AI: Failed
  • Meta Code Llama: Failed
  • Google Gemini Advanced: Succeeded
  • ChatGPT 4: Succeeded
  • ChatGPT 3.5: Failed

Overall Results

Here's how Claude 3.5 Sonnet performed overall compared to other AIs:

  • Claude 3.5 Sonnet: 1 out of 4 succeeded
  • ChatGPT GPT-4o: 4 out of 4 succeeded, but with one weird dual-choice answer
  • Microsoft Copilot: 0 out of 4 succeeded
  • Meta AI: 1 out of 4 succeeded
  • Meta Code Llama: 1 out of 4 succeeded
  • Google Gemini Advanced: 1 out of 4 succeeded
  • ChatGPT 4: 4 out of 4 succeeded
  • ChatGPT 3.5: 3 out of 4 succeeded

I was pretty disappointed with Claude 3.5 Sonnet. Anthropic promised it was suited for programming, but it didn't meet those expectations. It's not that it can't program; it just can't program correctly. I keep hoping to find an AI that can outperform ChatGPT, especially as these models get integrated into programming environments. But for now, I'm sticking with ChatGPT for programming help, and I recommend you do the same.

Have you used an AI for programming? Which one, and how did it go? Share your experiences in the comments below.

Follow my project updates on social media, subscribe to my weekly newsletter, and connect with me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
