Application of Seq2Seq Models on Code Correction.
ABSTRACT: We apply various seq2seq models to programming language correction tasks on the Juliet Test Suites for C/C++ and Java from the Software Assurance Reference Dataset, achieving repair rates of 75% (C/C++) and 56% (Java). We introduce a pyramid encoder into these seq2seq models, which significantly improves computational and memory efficiency while achieving repair rates similar to their non-pyramid counterparts. We successfully carry out an error type classification task on the ITC benchmark (only 685 code instances) using transfer learning with models pretrained on the Juliet Test Suite, pointing to a novel way of processing small programming language datasets.
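The abstract describes a pyramid encoder that reduces computation and memory by shrinking the sequence length at each encoder layer. The sketch below is a minimal illustration of that idea (a GRU-based encoder that concatenates adjacent time steps between layers); the class name, layer choice, and dimensions are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PyramidEncoder(nn.Module):
    """Sketch of a pyramid encoder: between layers, adjacent time steps are
    concatenated, halving the sequence length and reducing downstream cost."""
    def __init__(self, input_dim, hidden_dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # After each merge, the feature dimension doubles (2 * hidden_dim).
            in_dim = input_dim if i == 0 else 2 * hidden_dim
            self.layers.append(nn.GRU(in_dim, hidden_dim, batch_first=True))

    def forward(self, x):                        # x: (batch, seq_len, input_dim)
        for i, rnn in enumerate(self.layers):
            x, _ = rnn(x)                        # (batch, T, hidden_dim)
            if i < len(self.layers) - 1:
                B, T, H = x.shape
                if T % 2:                        # pad to even length before pairing
                    x = torch.cat([x, x.new_zeros(B, 1, H)], dim=1)
                    T += 1
                x = x.reshape(B, T // 2, 2 * H)  # merge adjacent steps: halves seq_len
        return x

# Example: a 100-token input is reduced to 25 encoder states after two merges.
enc = PyramidEncoder(input_dim=64, hidden_dim=128, num_layers=3)
out = enc(torch.randn(8, 100, 64))               # -> shape (8, 25, 128)
```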
SUBMITTER: Huang S
PROVIDER: S-EPMC8017285 | biostudies-literature
REPOSITORIES: biostudies-literature