
Evaluating Large Language Models Trained on Code
About this listen
This July 2021 paper documents the development and evaluation of OpenAI's Codex models: large language models specialized in code generation, particularly synthesizing Python functions from docstrings. The authors introduce HumanEval, a hand-written benchmark that assesses the functional correctness of generated code through unit tests, a more robust metric than traditional match-based scores such as BLEU. The paper compares the performance of several Codex variants, including a supervised fine-tuned version (Codex-S), against other models such as GPT-3, demonstrating significant improvements in pass rates as model size and the number of generated samples increase. It also examines the limitations, broader impacts, and potential hazards of these models, discussing over-reliance, misalignment, economic implications for the labor market, and security concerns around generating vulnerable or biased code. Finally, the paper describes Codex-D, a model for generating docstrings from code, and emphasizes the need for continued research into safe and responsible AI deployment.
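The pass rates discussed above refer to the paper's pass@k metric: for each problem, n >= k samples are generated, c of them pass the unit tests, and pass@k estimates the probability that at least one of k randomly drawn samples is correct, i.e. 1 - C(n-c, k) / C(n, k). The paper provides a numerically stable Python estimator for this quantity; the sketch below follows that calculation. The sample counts in the usage lines are illustrative only, not results from the paper.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper.

    n: total samples generated for a problem
    c: number of samples that passed all unit tests
    k: the k in pass@k

    Computes 1 - C(n-c, k) / C(n, k) as a product of terms
    (1 - k/j) for j in [n-c+1, n], avoiding large factorials.
    """
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative numbers: 200 samples per problem, 31 correct.
print(pass_at_k(n=200, c=31, k=1))    # = c/n = 0.155
print(pass_at_k(n=200, c=31, k=100))  # approaches 1.0
```

Note that pass@1 reduces to the raw fraction of correct samples c/n, while larger k rewards generating many samples, which is why the paper reports large gains from repeated sampling.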
Sources:
https://arxiv.org/pdf/2107.03374
https://github.com/openai/human-eval