Writing computer code is like writing in any language. There are syntax (or grammar) rules to be observed, and we want the program to be meaningful and do what we intend, i.e., to respect semantics. In 2022, generative AIs that built code from prompts appeared; ChatGPT allowed for this directly within its interface, putting programming languages such as Python or C on a par with French, Italian or Japanese.
A debate quickly emerged: since AIs were good at producing code, was it still necessary to learn to code? For the many people who couldn't code, there was little doubt; the industry's claims that AI could produce good-quality code were sufficient. Within the industry, some jobs were lost from humans to AI at the end of 2023, but on the whole managers remain hesitant to replace programmers with AI. There is still the issue of hallucinations but, more importantly, it soon became apparent that you could only get good code if you could write the correct prompts, or in other words, specify correctly. Furthermore, as prompting is usually not one-shot and requires some form of dialogue, it is useful to understand the partner's language. This is a skill that usually comes from long hours of practising coding.
The current attitude seems to be that even if humans are not necessarily going to be the ones writing future code, there is still a need for people who know how to code to interact with AI in order to get that code to work.
Code, no code, low code
On the other hand, if high-quality coders are needed to work with AI on complex systems, should everyone reach that level? Probably not. As is often the case, things aren't black or white, and there is probably room for an intermediate level between no code and code, often called low code.