Resist Third Person Programming
November 22, 2025
Disclaimer: I want to start this post by saying that I do not think AI is going anywhere. However, based on what I have seen from those around me, there is a sinister shift underway that people are not acknowledging.
What Is Third Person Programming?
Third Person Programming (TPP) refers to the practice of watching an LLM write code for you while insisting that you are still in control. It is similar to vibe coding, although vibe coding usually implies that you have fully given up the driver's seat. I have noticed an increasingly common trend in which people lean on LLMs as a substitute for genuine computer science understanding. To be clear, I only have a minor in data science and computer science along with a bachelor's degree in statistics, so I do not claim to be an expert. Even so, the stories I have heard and witnessed about TPP are genuinely concerning.

You might wonder what I mean when I say TPP has "taken over" people. What I mean is that I have seen students rely entirely on LLMs for their knowledge and coursework. Recently, a friend in undergrad told me about someone who failed an interview involving list comprehension and basic string manipulation in Python. These are fundamental skills that any third-year student seeking CS internships at Cal Poly should be comfortable with. There is nothing wrong with failing a technical interview, but when my friend asked why the questions were difficult, the student explained, "I haven't done anything with Python in two years." That would make sense on its own, except my friend pointed out that the student was currently taking CS480, an Artificial Intelligence course that uses Python. The student then replied, "We do use Python, but I let Chat do all the coding."

That is the essence of Third Person Programming. You may feel as though you are genuinely learning, but once Copilot or your language model is taken away, you quickly realize how little you actually know. I have experienced this myself. I often relied on an LLM to generate boilerplate code, and when I started a job where LLMs were not allowed, I found that I had forgotten how to begin writing programs from scratch.
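I do not know the exact question from that interview, but here is a hypothetical warm-up in the same spirit, combining a list comprehension with basic string manipulation. This is the level of thing a third-year student should be able to write cold, with no assistant in sight:

```python
def vowel_words(words: list[str]) -> list[str]:
    # Keep only the words that start with a vowel, returned uppercased.
    return [w.upper() for w in words if w and w[0].lower() in "aeiou"]

print(vowel_words(["apple", "banana", "orange", "kiwi"]))
# ['APPLE', 'ORANGE']
```

If a question like this feels impossible after two years of "using" Python, the LLM was doing the learning, not you.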
When Using LLMs Works
I have seen both senior and junior engineers use LLMs with great success, and there is no denying the advancements in AI and its practical usefulness. However, the definition of "success" varies greatly depending on the context. When you give an experienced engineer an LLM, their foundation allows them to read, debug, and understand whatever code is produced. In contrast, junior engineers and students who lack a strong CS foundation are far more likely to be unable to debug the code an LLM generates. As a result, the final product may look functional on the surface but remains one sneaky bug away from complete failure. Without solid fundamentals, relying on LLMs can create more technical debt than real progress. For that reason, I implore myself, and others who are still early in their CS journey, to avoid using LLMs as a primary source of generated code, and instead focus on building the foundational knowledge needed to use these tools responsibly.
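To make "one sneaky bug" concrete, here is a small, contrived sketch of my own invention, not from any real LLM session, showing the kind of code that reads fine and even passes a first test:

```python
def add_tag(tag: str, tags: list[str] = []) -> list[str]:
    # Bug: the default list is created once at definition time and
    # shared across every call that omits the second argument.
    tags.append(tag)
    return tags

print(add_tag("draft"))   # ['draft']
print(add_tag("urgent"))  # ['draft', 'urgent'] -- state leaks between calls
```

An engineer with a solid foundation spots the shared mutable default immediately. Someone who has outsourced that foundation ships it and spends a week wondering why unrelated records keep accumulating tags.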
Intelligence On Tap, Or A Tap On Intelligence?
A while ago, while scrolling through LinkedIn, I came across a post claiming that ChatGPT is intelligence on tap. This statement is not entirely wrong, because ChatGPT can provide excellent insight and analysis, but it is not entirely right either. I would argue that more often than not, ChatGPT and other language models act less like intelligence on tap and more like a tap on intelligence. Instead of strengthening critical thinking skills and foundational knowledge, these tools often replace them. For example, consider this story. A friend of mine joined a Discord call with a group of students. One of them was sharing his screen when he exclaimed, "Shit, I need to do my homework." He proceeded to upload all of his assignments to ChatGPT and submit the generated answers. In about one minute, he had completed a week's worth of work for his classes. I do not know what grade he received, but the outcome is not the point. Rather, it is a glimpse into how an alarming number of students are using LLMs. Reduced to its core, this student is paying his university only to provide training data for OpenAI, while gaining no real knowledge in return. This may sound like an extreme example, but I would argue that this kind of "extreme" is far more common than people are willing to admit.
So, What To Do?
I do not think that avoiding LLMs completely is the solution, although it is an option. I would encourage anyone early in their career not to fall into this trap: trust yourself. Even if doing and turning in all of your assignments without assistance leads to a C in every class, that is far better for your future than earning straight A's by leaning heavily on LLMs. There is a middle ground, and you should not have to memorize every last bit of syntax, but it is a slippery slope that more people need to be aware of. Just remember: pursue CS for the sake of gaining knowledge and having fun, not to min-max your life experience. Trust yourself to debug; persistence is the best debugger. It is okay to bang your head against the wall, because that is where the real learning comes from. Manually typing code and working through the "grunt work" (except regex, I will never do regex) is the part of the process where you learn!
Moral Of The Story
Long story short, don't let yourself become a glorified Chat API. Have faith in yourself and put in the hard work for the love of the game. When you are further along in your career, you will thank yourself tremendously.