Will AI really replace programmers? Is artificial intelligence a threat to the profession? The question comes up often, and OpenAI recently created a lot of buzz with Codex, its automated code-writing model.

AI-assisted programming is nothing new; Copilot, the earlier auto-complete programming assistant, has been around for a while. Codex is best understood as a thorough upgrade of the model behind Copilot. Both descend from GPT-3, but Codex can translate English requirement descriptions directly into code.

GPT-3 is an AI model trained on a 45 TB corpus, with 175 billion parameters and roughly 700 GB of pre-trained weights, and it has been a focus of attention since its release. It has since been used to write poetry, compose music, and even paint. An early glimpse of what became Codex was Debuild.co, a GPT-3-powered code-generation site: after registering, users only needed to describe their requirements in English, and the relevant front-end code was generated automatically. The site, however, is now largely shut down.

Codex, on the other hand, is a big step forward: it understands requirements described in natural language and generates code with complex logic.
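The pattern is easiest to see with an example. Below is a hypothetical illustration of the kind of prompt Codex handles, not an actual Codex transcript: the user writes an English description as a docstring, and the model completes the function body. The function name, text, and completion here are invented for illustration.

```python
# Hypothetical prompt: an English requirement written as a docstring.
def top_k_frequent_words(text: str, k: int) -> list[str]:
    """Return the k most frequent words in `text`, ignoring case and
    breaking ties alphabetically."""
    # A completion of the kind a Codex-style model produces:
    from collections import Counter

    counts = Counter(text.lower().split())
    ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [word for word, _ in ranked[:k]]


print(top_k_frequent_words("the cat and the dog and the bird", 2))
# -> ['the', 'and']
```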

At its core, though, Codex is a proprietary, code-specialized version of GPT-3. The largest version of Codex has only 12 billion parameters, far fewer than GPT-3. GPT-3 could already generate some simple code from Python comments, so the OpenAI team collected code from GitHub and fine-tuned the model on it to produce Codex. The resulting 12-billion-parameter Codex gave correct answers to 28.8 percent of the benchmark problems on its first attempt. OpenAI then let the model iterate the way a programmer does: by sampling many candidate solutions per problem and keeping any that pass the tests, Codex solved 77.5 percent of the problems with at least one of its samples.
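Those two figures come from the pass@k metric in the Codex paper (Chen et al., 2021): generate n candidate solutions per problem, count how many pass the unit tests, and estimate the probability that at least one of k drawn samples is correct. The sketch below follows the paper's unbiased estimator:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper.
    n: total samples generated, c: samples that passed the tests,
    k: samples the user is allowed to draw."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a stable running product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# With 100 samples per problem, 30 of which pass the tests:
print(pass_at_k(100, 30, 1))    # 0.30 -- one-shot odds, like the 28.8% figure
print(pass_at_k(100, 30, 100))  # 1.00 -- some sample among all 100 passes
```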

In practice, though, Codex has stumbled publicly quite a few times lately, including during an official livestream. This suggests Codex is far from perfect: it does not always understand user intent, and the code it provides can be completely wrong.

Because the accuracy and correctness of the generated code cannot be guaranteed, there are security risks as well. OpenAI itself acknowledges that Codex can produce racist and otherwise harmful content.

In essence, Codex doesn’t create code; it rearranges code it has already seen. The realistic future is AI and human programmers working together, not AI coding alone. Even so, Ctrl+C/Ctrl+V development will one day become obsolete: what AI does best is closely mimic code that has existed before, so a copy-paste-and-bug-fix workflow can be far more efficient in an AI’s hands than in a human programmer’s.

Looking ahead, demand for junior programmers is likely to shrink, while information security will become a more attractive specialty. AI mimicry tends to reference older libraries and software packages, which can introduce security risks, as the sketch below illustrates.
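As a toy illustration of that risk, the following sketch scans the current Python environment for packages older than some "first safe version". The MIN_SAFE advisory table is invented for the example; real tools such as pip-audit check against actual vulnerability databases.

```python
# Toy sketch: flag installed packages that predate a known-fixed version.
# The MIN_SAFE table is invented for illustration, not real advisory data.
from importlib.metadata import distributions

MIN_SAFE = {                 # hypothetical "first safe version" per package
    "requests": (2, 20, 0),
    "pyyaml": (5, 4, 0),
}

def parse(version: str) -> tuple:
    # Keep only the leading numeric components, e.g. "2.25.1" -> (2, 25, 1)
    return tuple(int(p) for p in version.split(".")[:3] if p.isdigit())

for dist in distributions():
    name = (dist.metadata["Name"] or "").lower()
    if name in MIN_SAFE and parse(dist.version) < MIN_SAFE[name]:
        print(f"{name} {dist.version} predates the known fix {MIN_SAFE[name]}")
```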