I use AI all the time, but then I use my microwave all the time.
I'm old enough to remember when microwaves became a widespread thing in the early 80s. The prices dropped, and everyone was like "Sell the oven! This new Mi-Cro-Wave deal is way better!"
Like the Instant Pot of recent history, 800 cookbooks came out telling us how to do all of our cooking in our microwave. It's fast! It's clean! It isn't ionizing radiation!
Then we put some bread in there and got a brick out, or burned a casserole in one spot while three inches away it was still frozen.
Turns out that tools are good for some things and not great at others.
I'm selectively lazy. When I'm working on a project and I need a particular tool, I can't help but think... "maybe I could just pound this nail in with the side of this wrench."
Rarely does that work out well. I damage my wrench, and the nail gets bent. Microwaves are great for reheating soup, not so much for roasting a chicken.
I've come to the conclusion that AI is the microwave of typing stuff. Need to make a Kubernetes deployment file? Hey, Claude, do that. Need a Python script to do some stuff to a CSV file? ChatGPT, make a script, and make it snappy!
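For the CSV case, here's the sort of thing I'm asking for, a minimal sketch of a throwaway script (the file name and the price column are made up for illustration):

```python
# Read a CSV, keep the cheap rows, write them back out.
# Exactly the kind of five-minute chore I'd rather not type myself.
import csv

with open("prices.csv", newline="") as infile, \
        open("prices_filtered.csv", "w", newline="") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Keep only rows whose price parses and comes in under 100.
        try:
            if float(row["price"]) < 100:
                writer.writerow(row)
        except (KeyError, TypeError, ValueError):
            continue
```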
I save countless hours a week using LLMs to scratch stuff up for me.
AI's convenient, and it's almost always wrong.
That deployment file it created? It guessed at a bunch of stuff that looks right but won't work. The Python script? It called libraries that don't exist. And it's written in Python 2.7, which hit end-of-life in 2020.
Now, I have to go into every piece of work that these things spit out, read every line, and make sure that it's doing what I want and not doing something "bad". Still, on balance, that's usually faster than trying to scratch the whole thing up from memory.
But is it getting worse? Or is it just that the thrill is gone?
Lately, I've been hyper-productive, dealing with a very full pipeline of work: system upgrades, rewrites, productivity enhancements I've put off for too long. As a result, I'm leaning on AI to help me get more done, faster.
Yet it feels like it's actually moving backwards in terms of accuracy. You know how you read an article online, and you can almost immediately tell that it's AI? "In the ever-changing technological landscape..." I'm starting to see that in code.
I wonder if the AI snake is eating its own tail. Are these models training on their own bad advice the way they seem to train on their own insipid writing, reinforcing their own regurgitated blandness? Only in this case, they output code based on outdated training data, which becomes current training data, which makes it more important for training, which makes them dumber.
I find myself going back to StackOverflow.
I have never liked the trendy move to Discord for support. The same questions get asked over and over and get lost in the chaos. Yes, you can search for them. No one seems to, and even if you do, it's like finding a needle in a haystack. It's just not made for that.
I've always preferred StackOverflow. It was the de facto place to go for technical answers for so long that it was optimized for that function. I didn't need to find yet another Discord server, read their rules, search their mess, ask my question and keep going back there to see if anyone made reference to it. Today? Has it scrolled away? No one? Hello?
For a minute there, I thought ChatGPT was going to fix all that. I'd just ask my question, and having access to the world's knowledge, it'd just give me the best answer. Turned out, it'd give me an answer all right. And it'd sound very confident. But it'd be wrong.
You want to chat with your gamer friends about pretty much nothing, so who cares if it's a mess? Discord's perfect.
You want to scratch up a rough draft of a React component because you know what you're doing, but you don't want to type it all in? Generative AI is a handy tool for that.
You want a vetted correct answer to a specific technical question? I think it's back to StackOverflow.