Amy Blankenship
2 min read · Mar 15, 2024


We do have good examples (https://scrippsnews.com/stories/ai-discovers-potential-new-cancer-treatment-in-just-30-days/). However, I think the data that LLMs trained on rigorous research are getting is much higher quality than "all of GitHub" is for coding.

The thing is, of the 100 or more developers I've worked directly with in my time, I can think of maybe 5% who could develop a good, solid solution that wasn't spaghetti. And I've worked for some solid firms (but also Change Healthcare, so there's that). Most of the code that's out there is headed in the wrong direction. So how is AI supposed to tell "good" code apart from all of that?

If you want to try to convince me that some of the open source projects are good-quality code: I've read a LOT of open source code. You could say Redux and Redux Toolkit are better than decent, as is testing-library. But React? The whole premise of hooks is an anti-pattern, and every new version of React swallows another animal to go after the original fly. The last time I looked at the implementation, it was a scary mess. And I could go on naming names.
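To make the hooks complaint concrete, here's a minimal sketch (the component and props are hypothetical, just for illustration): React keys hook state by call position rather than by any explicit name, which is why the Rules of Hooks forbid calling a hook conditionally. A linter will flag this, but the deeper point is that the state only makes sense if every render replays the exact same call sequence:

```tsx
import { useState } from "react";

// Illustrative only, not real app code: React identifies each hook by the
// order it's called in during a render, not by a name or key.
function Profile({ userId }: { userId?: string }) {
  if (userId) {
    // ❌ Violates the Rules of Hooks: on renders where userId is undefined,
    // this call disappears and React's positional bookkeeping misaligns,
    // silently handing this slot's state to whatever hook runs next.
    const [user, setUser] = useState<string | null>(null);
  }
  return null;
}
```

That kind of implicit ordering dependency is exactly what we'd flag as coupling in a code review, yet here it's the core mechanism.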

Also, a good solution usually needs deep knowledge of the business domain, and it's not usually going to be something that can be contained in one file. So let's even assume that the AI can cut through the crap and somehow knows how to code well. Then let's assume you could somehow convey enough domain knowledge to the model for it to come up with a solution without spending more time than it would take to just f*ing code it. Even then, how can the LLM convey or implement the full design within the existing system? Because the vast majority of work out there is not greenfield; it's legacy projects.
