Lately, some of my friends have been asking me things like, “Is it worth getting into the tech industry? Since AI can do all the work, why would anyone hire me?” or “I think AI will take my job, so I should focus on AI prompting rather than learning the fundamentals.” I understand why they think that way, but perhaps the situation is blown out of proportion.
Anecdote
At the moment, students are probably the ones using AI (LLMs, or Large Language Models) most extensively, as it can be a great help with school assignments, especially in fields like literature or programming. While they might not learn much, they will get good grades.
As a Mathematics major (humble flex 😎), I was eager to see how AI could support my studies. It’s good at explaining basic definitions, giving examples, and comparing two seemingly similar concepts. But as soon as you go beyond that, it falls apart quickly.
We have a course on Fortran where we implement various numerical methods. What is Fortran, you ask? Well, it’s a programming language that is apparently still used for scientific computation. Fortran code is not readily available on the internet, and proper documentation and learning materials are also scarce. We previously had a course on Numerical Analysis that covered the theory behind various numerical methods, so converting those concepts to code was a trivial task. Being me, I wanted to save some time writing code for my assignments. AI could tear through that, right? Or so I thought.
It did well on the root-finding methods, but beyond that, it was lackluster. For example, I wasted so much time trying to get it to generate correct code for Weddle’s rule (a high-accuracy numerical integration method). It’s not a difficult algorithm to implement once you understand the core concept. But in the wild, you’ll only find Weddle’s rule worked out over a single set of six subintervals, which I guess is why it couldn’t give me the generalized code. For some other complicated methods, e.g., adaptive Runge-Kutta, the linear shooting method, and Broyden’s method, it kept hallucinating. It was also prone to making syntax errors! I tried ChatGPT, Gemini, and Claude; I got the same result everywhere. I gave up on LLMs and ended up writing the methods myself.
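To show what I mean by “generalized”: the composite version of Weddle’s rule just repeats the same seven-point weight pattern across the whole interval, block by block. Here’s a minimal sketch in Python (my assignments were in Fortran, but the logic is identical):

```python
import math

def weddle(f, a, b, n):
    """Composite Weddle's rule over [a, b] with n subintervals.

    n must be a multiple of 6; each block of six subintervals is
    integrated with the weights (3h/10) * [1, 5, 1, 6, 1, 5, 1].
    """
    if n % 6 != 0:
        raise ValueError("n must be a multiple of 6")
    h = (b - a) / n
    weights = [1, 5, 1, 6, 1, 5, 1]
    total = 0.0
    for i in range(0, n, 6):  # walk the interval one block at a time
        for j, w in enumerate(weights):
            total += w * f(a + (i + j) * h)
    return 3 * h / 10 * total

# Sanity check: the integral of sin(x) over [0, pi] is exactly 2.
print(weddle(math.sin, 0.0, math.pi, 12))
```

That loop is the entire generalization. The LLMs kept handing me the six-subinterval formula hardcoded instead.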
The AI Pause
I still remember the first time I used Copilot; I was amazed! It felt like it could read my thoughts; it wrote stuff before I could even think. When the ‘awe’ factor wore off, I realized it was really good at autocomplete but not at generating entire blocks of code. More often than not, what it generated wasn’t something I could use in production. I was spending more time fixing what Copilot generated than writing code myself. There’s a perceived productivity boost, but in the long run, it will hurt you.
I used it like that for a year. After the license expired, I decided to write code the old-fashioned way for a while. What I noticed was that after writing a function name or a variable identifier, I would pause for a while. I wasn’t thinking anything at the time, just waiting for something. That wait was for Copilot to complete the code for me. It felt like I had lost the very thing that enticed me to become a programmer in the first place: critical thinking. I was no longer thinking; I was just expecting Copilot to do the work for me. And whenever I tried to solve a niche problem, it would keep hallucinating.
I realized Copilot didn’t make me a better programmer, just a faster mediocre one. Since then, I’ve stopped using AI inside my code editor. Will I ever use it inside my editor again? I really don’t know. But I do still use AI in the conventional way.
Where is Devin?
Remember Devin, the first AI software engineer? We haven’t heard much about it in a while. The reason is simple: it never caught on. Devin never reached the point where it could actually replace a software engineer.
Apparently, OpenAI is in talks to buy Windsurf for $3 billion! I wonder why they’re interested in buying a code editor that’s a fork of VSCodium. If AI could really replace engineers, ChatGPT should be able to build one in no time, right? I guess not.
When Apple introduced the iPhone 16, its main differentiator from its predecessors was Apple Intelligence. They promised a bunch of cool AI features that were supposed to launch with iOS 18.1. After the public beta, they launched some of the features they had promised, notably leaving out the new Siri. The rollout didn’t go as planned: the notification summaries were dysfunctional, hallucinating outright misinformation. As you can imagine, news outlets were not happy, since they were sending notifications about one thing and Apple Intelligence was making up something completely different. Apple had to turn the feature off for the time being.
Apple is known for polished software experiences. When even they fall short on this one, that tells you something.
While tech CEOs might gaslight you into believing that AI will replace software developers, that is far from the truth. Don’t buy into the marketing hype; try to understand the real story behind those claims. We still need human software developers to drive innovation.
Companion Model
I’m not going to sit here and pretend that AI is good for nothing. It’s actually useful in some specific scenarios. Instead of marketing AI as a replacement for software developers, market it as their companion. I believe more money can be made that way. Imagine senior devs no longer having to do mundane tasks; they’d have more time for critical thinking and for coming up with better solutions to complex problems.
AI is great for brainstorming ideas or for creating a quick prototype to validate them. It can skim through large documentation and give you the answer you’re looking for, or write documentation for your code.
Maybe you’ve been stuck on a pesky bug for hours; let the AI read your code and suggest a fix. Let it generate boilerplate code, update dependencies, or migrate old code to a new version of a library. Use AI as your rubber-duck debugger, or to explore multiple solutions to a given problem.
Believe me, these are invaluable powers you can have as a senior dev. Maybe we’ll actually have 10x devs that way!
The ‘I’ in AI is Missing
If you know what these LLMs actually do, then what I’ve said so far shouldn’t be news to you. In case you don’t know, here’s an oversimplified explanation:
As the name suggests, it’s a language model, and it understands language very well. When you ask an LLM something, it takes the entire input and chops it up into tokens. These tokens are converted into embeddings, where each token is represented as a collection of numbers (a vector, to be precise). The embeddings go through various transformer layers that figure out the contextual meaning of each token to remove ambiguity. It does so by drawing on its training data, which taught it which words go together and what a word means in different contexts. It knows the relationships between words because a huge chunk of the internet was used as its training dataset, and that’s what lets it assign contextual meaning to words. As you can imagine, it can also inherit the biases of that data.
After it has processed the meaning of each token, it moves on to predicting the next word. Again, that prediction comes from its training: the model produces a probability distribution over possible next tokens and picks a likely one. That gives the most probable next word to put after the initial input. The entire process then repeats with the new input (initial input + predicted word) and continues until it hits a stopping criterion. Keep in mind, deep neural networks are still a ‘gray’ box to us.
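If that sounds abstract, here’s a toy sketch of that predict-append-repeat loop in Python. A hand-written bigram table stands in for the real model (a massive simplification; an actual LLM computes a probability distribution over tens of thousands of tokens at every step):

```python
import random

# Toy "model": for each token, the probabilities of the next token.
# A real LLM learns these distributions from its training data.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_tokens=10):
    tokens = prompt.lower().split()     # 1. chop the input into tokens
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:                # 2. stopping criterion
            break
        words = list(dist)
        probs = list(dist.values())
        # 3. sample the next token from the predicted distribution
        tokens.append(random.choices(words, weights=probs)[0])
        # 4. loop: the prediction becomes part of the next input
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat quietly"
```

The real thing conditions on the whole context rather than just the last word, but the outer loop (predict the next token, append it, repeat) is exactly this.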
Here’s the important thing to understand: it understands the language, not the facts within the language. It can merely tell whether what it’s generating fits grammatically and logically in the given context, but it doesn’t understand meaning or facts the way we humans do. The ‘Intelligence’ in Artificial Intelligence isn’t truly there yet; it would be a great feat if we could achieve it. So, if you’re taking what AI tells you as fact, please cross-reference it.
The Plateau
LLMs have improved a lot since their inception. They have become multimodal models, i.e., they can now deal with images, audio, video, etc. Google’s AlphaEvolve recently found improved solutions to some existing math problems.
But if you look closely, the last few updates to most LLMs have been iterative at best. We are no longer seeing exponential growth.
OpenAI is trying to get its GPT-5 model out, but it’s rumored to be delayed because response generation is slower and it costs significantly more to run than existing models.
Agentic models are good now, but they require constant supervision or a metric to gauge their output.
A big question remains: Is AI technology plateauing? Every novel technology hits a plateau after a certain amount of time. Is AI at that stage? Nobody actually knows; only time will tell.
Find a Niche
AI is here to stay. So it’s only sensible to get acquainted with it and use it in your workflows to be more productive, but not at the cost of your reasoning capability. Treat it like an assistant: delegate some of your work to it, and supervise what it does to make sure it’s actually correct.
If you’re a beginner, don’t become reliant on it from the get-go. Use it as a mentor that can teach you things when you’re stuck, or to get explanations for hard-to-understand topics. Don’t let it write code for you; otherwise, you will miss out on the very skill you’re trying to learn. I would still prefer that you get your answers from search engines, for a couple of reasons:
- It will reinforce your findings, as you potentially have to skim through a few results.
- You will be exposed to different perspectives on solving the same problem.
Whatever you do, don’t compromise your foundational knowledge. It may seem like everyone is moving ahead of you, but believe me, it’s very easy to leapfrog in this industry if your groundwork is in order. Once you’re past that stage, find a niche and go deep. Build yourself up to be the best in that niche. You might not end up the very best, but at least become one of the best. Once you reach that mastery, nobody can replace you.
AI will replace mediocrity, not mastery.