
In the age of GenAI dominance, it is easy to use these tools blindly, without thinking about how they come up with their answers.
The main issue with treating these tools as a black box is that people learn by making mistakes while applying what they are trying to learn. GenAI tools give you a complete answer without you exerting any effort to solve the problem on your own. You therefore miss a critical step in the learning process: without a challenge, there is no gain.
I can say this based on my own experience using GenAI tools at work and at home, dozens if not hundreds of times a day. The main issue with using GenAI tools such as M365 Copilot Chat for software development tasks is exactly what I have described above: they may give you a complete implementation that works, but you learn nothing new as a result, because you never try to implement the same thing on your own.
A good example is a situation where you would like a Python script that transforms data from one format into another.
Recently, I needed to transform an OWASP Dependency Check plugin JSON report into a more user-friendly format, specifically a CSV file that opens in Excel, so I could see security vulnerabilities at a glance and sort them by the NVD last-updated date.
Writing such a script in Python would certainly take me some time, for I would need to understand how to parse the JSON file, extract the various fields, and then write the data into a CSV file. I had the bright idea to use GenAI to do it quickly and save time and effort. Indeed, after a few back-and-forth iterations with M365 Copilot Chat, I had a working Python script that delivered what I needed, without me even looking at the implementation. It was exactly the black-box treatment of a GenAI response I mentioned above: it worked, and I didn't care how. But the main point was that I didn't learn anything in the process, and I actually felt as if I were cheating.
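For readers curious what such a transformation involves, here is a minimal sketch of the approach: flatten the nested JSON into one row per vulnerability, sort by the NVD date, and write a CSV with the standard library. The JSON field names ("dependencies", "vulnerabilities", "fileName", "name", "severity", "updatedAt") and the column names are my assumptions about the report schema, not the script Copilot actually produced; adjust them to match your own report.

```python
import csv
from operator import itemgetter

# Output column names are illustrative choices, not part of any spec.
FIELDS = ["dependency", "cve", "severity", "nvd_updated"]

def report_to_rows(report: dict) -> list[dict]:
    """Flatten the report into one row per (dependency, vulnerability) pair."""
    rows = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            rows.append({
                "dependency": dep.get("fileName", ""),
                "cve": vuln.get("name", ""),
                "severity": vuln.get("severity", ""),
                "nvd_updated": vuln.get("updatedAt", ""),
            })
    # Newest NVD update first, so the freshest advisories top the sheet.
    rows.sort(key=itemgetter("nvd_updated"), reverse=True)
    return rows

def write_csv(rows: list[dict], fh) -> None:
    """Write the flattened rows to an open, writable text stream."""
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

In use, you would load the report with `json.load` and hand `write_csv` a file opened with `newline=""`, as the `csv` module recommends.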
Eventually I needed to update the script, so I did look at the implementation, and it took me some time to figure out how it worked. It was well written and modular, but I don't think I would ever have done it the way the GenAI model did.
I was able to understand and adapt the code, but it still didn't work quite right. So I said to myself, well, it's time to save my effort, and asked GenAI to refactor it, which it did correctly on the first try.
So, in the end, I did save time on that script, and I did learn how it worked, since I was curious, but I didn't get the same experience I would have gained by implementing it myself from start to finish.
Here I am not talking about GitHub Copilot, for I don't currently use it. But even free GenAI tools can be harmful in the long run if you treat them as black boxes instead of tools that help you think better. They cannot substitute for thinking, or you become dumber in the process.
Also, with recent improvements in Anthropic's Claude agents, people such as Andrej Karpathy state that we are in a new era of software programming. It sounds like a valid statement, since he is not the only one noticing this change in the performance of GenAI agents. But the crucial point is that people who had no programming experience before starting to use such tools will certainly have a gap in programming skills, for they never faced the challenges that teach you how to really think. Karpathy himself is a good example, since he was a programmer long before GenAI tools were on the horizon.
In the end, GenAI tools can be very helpful and effective in helping you learn whatever topic you want, but they can be just as effective at preventing you from gaining valuable experience while you master the subject.