Towards artificial intelligence that learns to write code

Researchers combine deep learning and symbolic reasoning for a more flexible way of teaching computers to program.

Researchers have developed a flexible way of combining deep learning and symbolic reasoning to teach computers to write short computer programs. Here, Armando Solar-Lezama (left), a professor at CSAIL, talks to graduate student Maxwell Nye.


Photo: Kim Martineau

Learning to program means mastering both how to structure a program and how to fill in every detail correctly. No wonder it can be so frustrating.

A new program-writing artificial intelligence, SketchAdapt, offers a way out. Trained on tens of thousands of example programs, SketchAdapt learns to compose short, high-level programs while letting a second set of algorithms find the right subroutines to fill in the details. Unlike similar approaches to automatic program writing, SketchAdapt knows when to switch from statistical pattern-matching to less efficient but more versatile symbolic reasoning to fill in the gaps.

"Neural networks are good at getting the right structure, but not the details," says Armando Solar-Lezama, a professor at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "By dividing up the work, letting the neural networks handle the high-level structure and using a search strategy to fill in the blanks, we can write efficient programs that give the right answer."

SketchAdapt is a collaboration between Solar-Lezama and Josh Tenenbaum, a professor at CSAIL and at MIT's Center for Brains, Minds and Machines. The work will be presented at the International Conference on Machine Learning, June 10-15.

Program synthesis, or teaching computers to code, has long been a goal of AI researchers. A computer that can program itself is more likely to learn language faster, converse fluently, and perhaps even model human cognition. All of this drew Solar-Lezama to the field as a graduate student, where he laid the foundations for SketchAdapt.

Solar-Lezama's earlier work, Sketch, is based on the idea that a program's low-level details can be found mechanically if a high-level structure is provided. Among other applications, Sketch inspired spinoffs that automatically grade programming assignments and convert hand-drawn diagrams into code. Later, as neural networks grew in popularity, students from Tenenbaum's computational cognitive science lab suggested a collaboration, out of which SketchAdapt formed.

Rather than relying on experts to define the program's structure, SketchAdapt figures it out using deep learning. The researchers also added a twist: when the neural networks are unsure of what code to place where, SketchAdapt is programmed to leave the spot blank for the search algorithms to fill.

"The system decides for itself what it knows and doesn't know," says the study's lead author, Maxwell Nye, a graduate student in MIT's Department of Brain and Cognitive Sciences. "When it gets stuck and has no familiar patterns to draw on, it leaves placeholders in the code. It then uses a guess-and-check strategy to fill the holes."
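The division of labor Nye describes can be illustrated with a toy sketch-and-fill loop. This is a hypothetical illustration, not SketchAdapt's actual code: the sketch template, the candidate snippets, and the `fill_sketch` helper are all invented here to show the guess-and-check idea.

```python
# Toy illustration of sketch-and-fill program synthesis.
# A "sketch" is a program template with a hole; the search
# enumerates candidate snippets and keeps the first one that
# satisfies all input-output examples.
# (Hypothetical sketch, not SketchAdapt's actual code.)

SKETCH = "lambda s: s.split()[0]{hole}"                # high-level structure
CANDIDATES = ["[:1]", "[:2]", ".upper()", ".lower()"]  # low-level details

examples = [("Maxwell Nye", "M"), ("Armando Solar-Lezama", "A")]

def fill_sketch(sketch, candidates, examples):
    """Guess-and-check: try each candidate snippet in the hole."""
    for cand in candidates:
        program = eval(sketch.format(hole=cand))
        if all(program(inp) == out for inp, out in examples):
            return sketch.format(hole=cand)
    return None

print(fill_sketch(SKETCH, CANDIDATES, examples))
# → lambda s: s.split()[0][:1]
```

Real synthesizers search a vastly larger space of candidate subexpressions, but the shape is the same: the structure is fixed first, and only the holes are searched.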

The researchers compared SketchAdapt's performance with programs modeled on Microsoft's proprietary RobustFill and DeepCoder software, successors to Excel's FlashFill feature, which analyzes adjacent cells to offer suggestions as you type, for example, learning how to transform a column of names into a column of corresponding email addresses. RobustFill uses deep learning to write high-level programs from examples, while DeepCoder specializes in searching for and filling in low-level details.

The researchers found that SketchAdapt outperformed their re-implemented versions of RobustFill and DeepCoder at their respective specialized tasks. SketchAdapt outperformed the RobustFill-like program at string transformations; for example, writing a program to abbreviate Social Security numbers to three digits and first names to their first letter. SketchAdapt also did better than the DeepCoder-like program at writing programs to transform lists of numbers. Trained only on examples of three-line list-processing programs, SketchAdapt was able to transfer its knowledge to a new scenario and write correct four-line programs.
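For intuition, the name-abbreviation transformation described above corresponds to a program like the following. This is a hand-written sketch of the task, not actual SketchAdapt output; the function name `abbreviate` is invented for illustration.

```python
def abbreviate(name: str) -> str:
    """Abbreviate a 'First Last' name to 'F. Last', the style of
    string transformation these systems learn from examples.
    (Hand-written illustration, not actual SketchAdapt output.)"""
    first, last = name.split(" ", 1)
    return f"{first[0]}. {last}"

print(abbreviate("Maxwell Nye"))  # → M. Nye
```

The synthesis task is to recover a program like this from a handful of input-output pairs alone, without the user writing any code.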

In another task, SketchAdapt outperformed both programs at converting math problems from English into code and calculating the answer.

Key to its success is the ability to switch from neural pattern-matching to rule-based symbolic search, says Rishabh Singh, a former graduate student of Solar-Lezama's, now a researcher at Google Brain. "SketchAdapt learns how much pattern recognition is needed to write familiar parts of the program, and how much symbolic reasoning is needed to fill in details that may involve new or complicated concepts."

SketchAdapt is limited to writing very short programs; anything longer requires too much computation. Nevertheless, it is intended more to augment programmers than to replace them, the researchers say. "Our goal is to give programming tools to the people who want them," says Nye. "They can tell the computer what they want to do, and the computer can write the program."

Programming, after all, has always evolved. When Fortran was introduced in the 1950s, it was intended to replace human programmers. "Its full name was the Fortran Automatic Coding System, and its goal was to write programs as well as humans, but without the errors," says Solar-Lezama. "What it really did was automate much of what programmers did before Fortran. It changed the nature of programming."

The study's other co-author is Luke Hewitt. Funding was provided by the U.S. Air Force Office of Scientific Research, the MIT-IBM Watson AI Lab, and the U.S. National Science Foundation.
