
Machines Are Getting Better at Writing Their Own Code. But Human-Level Is 'Light Years Away'

  • DeepMind announced on Wednesday that it has created a piece of software called AlphaCode that can code just as well as an average human programmer.
  • The London-headquartered firm tested AlphaCode's abilities in a coding competition on Codeforces.
  • But computer scientist Dzmitry Bahdanau wrote on Twitter that human-level coding is "still light years away."

Computers are getting better at writing their own code, but software engineers may not need to worry about losing their jobs just yet.

DeepMind, a U.K. artificial intelligence lab acquired by Google in 2014, announced Wednesday that it has created a piece of software called AlphaCode that can code just as well as an average human programmer.

The London-headquartered firm tested AlphaCode's abilities in a coding competition on Codeforces — a platform that allows human coders to compete against one another.

"AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions," the DeepMind team behind the tool said in a blogpost.

But computer scientist Dzmitry Bahdanau wrote on Twitter that human-level coding is "still light years away."

"The [AlphaCode] system ranks behind 54.3% participants," he said, adding that many of the participants are high school or college students who are just honing their problem-solving skills.

Bahdanau said most people reading his tweet could "easily train to outperform AlphaCode."

Researchers have been trying to teach computers to write code for decades, but the concept has yet to go mainstream, partly because the AI tools that are meant to write new code have not been versatile enough.

An AI research scientist, who preferred to remain anonymous as they were not authorized to talk publicly on the subject, told CNBC that AlphaCode is an impressive technical achievement, but that a careful analysis is needed of which coding tasks it does well on and which it doesn't.

The scientist said they believe AI coding tools like AlphaCode will likely change the nature of software engineering roles somewhat as they mature, but the complexity of human roles means machines won't be able to do the jobs in their entirety for some time.

"You should think of it as something that could be an assistant to a programmer in the way that a calculator might once have helped an accountant," Gary Marcus, an AI professor at New York University, told CNBC.

"It's not one-stop shopping that would replace an actual human programmer. We are decades away from that."

British artificial intelligence scientist and entrepreneur Demis Hassabis.
OLI SCARFF | AFP | Getty Images

DeepMind is far from the only tech company developing AI tools that can write their own code.

Last June, Microsoft announced an AI system that can recommend code for software developers to use as they work.

The system, called GitHub Copilot, draws on source code uploaded to code-sharing service GitHub, which Microsoft acquired in 2018, as well as other websites.

Microsoft and GitHub developed it with help from OpenAI, an AI research start-up that Microsoft backed in 2019. GitHub Copilot relies on a large volume of code in many programming languages and vast Azure cloud computing power.

Nat Friedman, CEO of GitHub, describes GitHub Copilot as a virtual version of what software creators call a pair programmer — that's when two developers work side-by-side collaboratively on the same project. The tool looks at existing code and comments in the current file, and it offers up one or more lines to add. As programmers accept or reject suggestions, the model learns and becomes more sophisticated over time.
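
Friedman's description can be sketched with a short, hypothetical example. The snippet below is illustrative only, not actual Copilot output: the developer writes a comment and a function signature, and an assistant of this kind proposes the lines that follow, which the developer can accept, edit or reject.

    # A developer types a comment and a function signature; a Copilot-style
    # assistant proposes the body. The suggested lines here are hypothetical,
    # shown for illustration only.

    # compute the average order value from a list of order amounts
    def average_order_value(amounts):
        # lines an assistant of this kind might suggest:
        if not amounts:
            return 0.0
        return sum(amounts) / len(amounts)

    # The developer accepts, edits or rejects the suggestion before it enters the code.
    print(average_order_value([20.0, 30.0]))  # prints 25.0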

The software makes coding faster, Friedman told CNBC. Hundreds of developers at GitHub have been using the Copilot feature all day while coding, and the majority of them are accepting suggestions and not turning the feature off, Friedman said.

In a separate research paper published on Friday, DeepMind said it had tested its software against OpenAI's technology, and that AlphaCode had performed similarly.

Samim Winiger, an AI researcher in Berlin, told CNBC that every good computer programmer knows that it is essentially impossible to create "perfect code."

"All programs are flawed and will eventually fail in unforeseeable ways, due to hacks, bugs or complexity," he said.

"Hence, computer programming in most critical contexts is fundamentally about building 'fail safe' systems that are 'accountable.'"

In 1979, IBM said "computers can never be held accountable" and "therefore a computer must never make a management decision."

Winiger said the question of the accountability of code has been largely ignored despite the hype around AI coders outperforming humans.

"Do we really want hyper-complex, intransparent, non-introspectable, autonomous systems that are essentially incomprehensible to most and uncountable to all to run our critical infrastructure?" he asked, pointing to the finance system, food supply chain, nuclear power plants, weapons systems and space ships.

— Additional reporting by CNBC's Jordan Novet.
