Already showing signs of spreading.
This guy’s argument is that he’s a 10xer because he’s using AI effectively, i.e. just proofreading its output and deleting the comments. (Also, why hire juniors when you can get the same work for $20/mo?)
I think this is a losing strategy unless senior devs are immortal and never retire. (Or unless AGI happens, in which case the world economy will collapse and who cares about strategy.)
It looks like what’s happening is that way fewer companies are willing to invest in juniors now, leading to falling enrollment in university, leading to a shortage of seniors, leading to very high dev pay, leading to increased enrollment. Eventually.
This guy (a founding engineer of fly.io) isn’t claiming 10x status in the article on the grounds that he uses AI.
Indeed, he’s claiming that he gets value from AI despite being a 10xer without it.
Remember when everybody was making “smart” toasters and fridges and shit with cameras or WiFi for absolutely no reason?
This is that all over again.
Nobody needs “AI” in their water kettles, dryers, or dildos.
I gave you the upvote because it’s a good take, but this really has nothing to do with the article, so I can tell that you and a bunch of your 58 upvoters didn’t read it.
I did read it, and my comment is referencing exactly the author’s attitude, which is “it’s good enough, so you should use it”. I disagree, and say it’s another dumbass shortcut to cash in on a less-than-stellar ecosystem and product. It’s training wheels for failure.
There is a very loud population of AI-haters who don’t hate AI so much as corporate AI, but they don’t know what the difference is; they can be led to water but won’t drink.
If they wanted to stick it to the AI companies, they’d be all in on the open source LLMs. They’re not, though, because they don’t understand it. They’re just angry at this nebulous concept of AI because a few companies pissed in the well. Nobody was upset at AI Dungeon when that came out.
I can’t speak for others, but I simply hate that people keep telling us how amazing AI is, yet not one of them can ever point to a single task completed by AI on its own that is actually of decent quality, never mind enough tasks that I would trust AI to do anything without supervision. I mean actual tasks, e.g. PRs on an open source repository or a video showing some realistic everyday task done from start to finish by AI alone, not hand-wavy “I use it every day” abstract claims.
People like OP seem to be completely oblivious to the fact that reading code takes a lot of time and effort, even when there was an actual human thought process behind it, never mind when it might be totally random garbage. Writing code is also not nearly the bottleneck AI proponents seem to think it is. Reading code to verify it is not total garbage is actually much more effort than writing the same code yourself. It might not seem that way if you are writing in a low-expressiveness language like Go or Java, because you are reading or writing many lines for every actual high-level action the code takes that you need to think about. It becomes more obvious in more expressive languages, where the same action can be expressed closer to 1:1 in lines per high-level action.
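To make the lines-per-action point concrete, here’s a contrived sketch (my own toy example, not from the article): the same high-level action written once in a deliberately desugared, line-by-line style and once expressively, both in Python so the comparison isn’t muddied by syntax differences between languages.

```python
# High-level action: "collect the squares of the even numbers".

# Desugared, line-by-line style — roughly what a low-expressiveness
# language forces on you. Six lines of mechanics for one intent.
def squares_of_evens_verbose(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            square = n * n
            result.append(square)
    return result

# Expressive style — the code reads close to 1:1 with the intent.
def squares_of_evens(numbers):
    return [n * n for n in numbers if n % 2 == 0]

assert squares_of_evens_verbose([1, 2, 3, 4]) == squares_of_evens([1, 2, 3, 4]) == [4, 16]
```

The verbose version isn’t harder to write, but every extra line is one more thing a reviewer has to read and mentally re-assemble back into “squares of the evens” — which is exactly the cost that reviewing machine-generated code multiplies.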
You’re listening to hype bros when you should be listening to developers.
Any code reviewer will tell you code review is harder than writing code. And it gets harder the lower the quality of the code: the more revisions and research the reviewer has to do to get the final product to a high standard.
One must consider how humans will interact with this part of the program (often this throws all kinds of spanners in the works), what happens when data comes in differently than expected, how other parts of the system work with this one, and so on. Code that merely achieves the stated goals of a ticket can easily produce a dozen tickets later if not done right.
Whatever, die mad. I’ll keep being more productive than I’ve ever been.
The issue with AI is not that it’s an unimpressive technology; it’s that it’s built on stolen data and is incredibly wasteful of resources. It’s a lot like cars in that regard: sure, it solves some problems and is more convenient than the alternatives, but its harmful externalities vastly outweigh the benefits.
LLMs are amazing because they steal the amazing work of humans. Encyclopedias, scientific papers, open source projects, fiction, news, etc. Every time the LLM gets something right, it’s because a human figured it out, their work was published, and some company scraped it without permission. Yet it’s the LLM that gets the credit and not the person. Their very existence is unjust because they profit off humanity’s collective labour and give nothing in return.
No matter how good the technology is, if it’s made through unethical means, it doesn’t deserve to exist. You’re not more entitled to AI than content creators are to their intellectual property.