by Shelt Garner
@sheltgarner
Apparently, the latest Anthropic LLM, Sonnet 3.7, is kind of mouthy when asked to do “vibe coding” by software developers. I find this very, very annoying because it’s part of a broader issue: programmers are such squeaky wheels that they get to frame every new AI development in terms of how much it helps them be “10X” coders.

Maybe it’s not coding but far more abstract thinking that we should use to determine how smart an LLM is. The latest OpenAI model, ChatGPT 4.5, is apparently astonishingly good when it comes to writing fiction.
That, to me, is a better way to understand how smart a model is.
But I suspect that for the foreseeable future, everything is going to revolve around coding, and “vibe” coding specifically. The coders using LLMs — which may one day take their jobs! — are so adamant that everything in AI should revolve around coding that they totally miss the human touch, which is far more meaningful.