As soon as Tom Smith got his hands on the Codex – a new artificial intelligence technology that writes its own computer programs – he gave it a job interview.
He asked if it could solve the “coding challenges” that programmers often face when interviewing for big-money jobs at Silicon Valley companies like Google and Facebook. Could it write a program that replaces all the spaces in a sentence with dashes? Even trickier, could it identify an invalid ZIP code?
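Both challenges are simple enough to sketch by hand. Here is a minimal Python version of the two tasks – an illustration of what was asked, not the code Codex actually produced:

```python
import re

def dashify(sentence: str) -> str:
    """Replace every space in a sentence with a dash."""
    return sentence.replace(" ", "-")

def is_valid_zip(code: str) -> bool:
    """Check a US ZIP code: five digits, optionally followed
    by a hyphen and four more digits (the ZIP+4 format)."""
    return re.fullmatch(r"\d{5}(-\d{4})?", code) is not None

print(dashify("hello brave new world"))  # hello-brave-new-world
print(is_valid_zip("94103"))             # True
print(is_valid_zip("9410"))              # False
```

A human programmer might take a minute or two on each; Codex, as Mr. Smith found, produced answers in seconds.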
It did both instantly, before completing several other tasks. “These are problems that would be tough for a lot of people to solve, myself included, and it would type out the response in two seconds,” said Mr. Smith, a seasoned programmer who oversees an AI startup called Gado Images. “It was spooky to watch.”
The Codex seemed like a technology that would soon replace human workers. As Mr. Smith continued testing the system, he realized that its skills extended well beyond answering canned interview questions. It could even translate from one programming language to another.
However, after several weeks of working with the new technology, Mr. Smith believes it poses no threat to professional programmers. In fact, like many other experts, he sees it as a tool that will ultimately boost human productivity. It might even help a whole new generation of people learn the art of computer programming, by showing them how to write simple code, almost like a personal tutor.
“This is a tool that can make a programmer’s life a lot easier,” said Mr. Smith.
About four years ago, researchers at labs like OpenAI began designing neural networks that analyze large amounts of prose, including thousands of digital books, Wikipedia articles, and all other types of documents posted on the internet.
By identifying patterns in all that text, the networks learned to predict the next word in a sequence. When someone types a few words into these “universal language models,” they can complete the thought with an entire paragraph. In this way, one system – an OpenAI creation called GPT-3 – can write its own Twitter posts, speeches, poetry, and news articles.
To the surprise of the researchers who built the system, it was even able to write its own computer programs, although they were short and simple. Apparently, it learned from countless programs put on the internet. So OpenAI went a step further, training a new system – the Codex – on a huge array of prose and code.
The result is a system that understands both prose and code – to a point. You can ask, in plain English, for snow falling on a black background, and it will give you code that creates a virtual snowstorm. If you ask for a blue bouncing ball, it will give you that, too.
“You can ask it to do something and it will do it,” said Ania Kubow, another programmer who has used the technology.
The Codex can create programs in 12 computer languages and even translate between them. But it often makes mistakes, and although its skills are impressive, it cannot reason like a human. It can recognize or imitate what it has seen in the past, but it is not agile enough to think for itself.
Sometimes, the programs generated by the Codex do not run. Or they contain security flaws. Or they don’t come close to what you want them to do. OpenAI estimates that the Codex generates the right code 37% of the time.
When Mr. Smith used the system as part of a “beta” test program this summer, the code it generated was impressive. But sometimes it worked only if he made a small change, such as adapting a command to his particular software setup or adding the digital code needed to access the internet service he was trying to query.
In other words, the Codex is only really useful to an experienced programmer.
But it can help programmers do their everyday tasks much faster. It can help them find the basic building blocks they need, or point them toward new ideas. Using the technology, GitHub, a popular online service for programmers, now offers Copilot, a tool that suggests your next line of code, much the way autocomplete tools suggest the next word as you type texts or emails.
“It’s a way to code without having to write a lot of code,” said Jeremy Howard, who founded the artificial intelligence lab Fast.ai and helped create language technology based on OpenAI’s work. “It’s not always accurate, but it’s close enough.”
Mr. Howard and others believe that Codex can also help beginners learn to code. It is especially good at creating simple programs from brief descriptions in English. And it also works in the other direction, by explaining complex code in plain English. Some people, including Joel Hellermark, a businessman in Sweden, are trying to turn the system into a teaching tool.
The rest of the AI landscape looks similar. Robots are getting more and more powerful. So are chatbots designed for online conversation. DeepMind, an AI lab in London, recently built a system that can instantly determine the shape of proteins in the human body, a key part of designing new drugs and vaccines. That task once took scientists days or even years. But these systems replace only a small part of what human experts can do.
In the cases where new machines can immediately replace workers, the jobs tend to be ones the market is slow to fill anyway. Robots, for example, are increasingly useful inside transportation hubs, which are expanding and struggling to find the workers needed to keep pace.
With his start-up, Gado Images, Mr. Smith set out to build a system that could automatically sort through the photo archives of newspapers and libraries, resurfacing forgotten images, automatically writing captions and tags, and sharing the photos with other publications and businesses. But the technology could handle only part of the job.
It can sift through a huge photo archive faster than a human, identify the kinds of images that might be useful, and take a stab at captions. But finding the best and most important photos, and tagging them properly, still takes a seasoned archivist.
“We thought these tools would completely eliminate the need for humans, but what we’ve learned over the years is that this isn’t really possible – you still need a skilled human to review the output,” said Mr. Smith. “The technology gets things wrong. And it can be biased. You still need a person to review what it has done and decide what worked and what didn’t.”
The Codex extends what a machine can do, but it is one more sign that the technology works best with humans at the controls.
“AI is not playing out like anyone expected,” said Greg Brockman, chief technology officer of OpenAI. “It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking away the drudge work from all of them at once.”