Evaluating Large Language Models Trained on Code

About this listen

This July 2021 paper documents the development and evaluation of OpenAI's Codex models, large language models specialized in code generation, particularly producing Python functions from docstrings. The authors introduce HumanEval, a hand-written benchmark of programming problems that assesses the functional correctness of generated code through unit tests, a more robust measure than traditional match-based scores such as BLEU. The paper compares the performance of various Codex iterations, including supervised fine-tuned versions (Codex-S), against models like GPT-3, demonstrating significant improvements in pass rates both with increased model size and with more samples generated per problem. It also explores the limitations, broader impacts, and potential hazards of these models, discussing issues such as over-reliance, misalignment, economic implications for the labor market, and security concerns around generating vulnerable or biased code. Finally, the paper touches on Codex-D, a model for generating docstrings from code, and emphasizes the need for continued research into safe and responsible AI deployment.
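The pass rates discussed in the paper are reported as pass@k: k samples are drawn per problem, and a problem counts as solved if any of them passes its unit tests. The paper describes a numerically stable, unbiased estimator computed from n >= k generated samples of which c are correct. Below is a minimal Python sketch of that estimator (the function name and example numbers are illustrative; the openai/human-eval repository ships its own implementation):

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k from n samples, c of which pass the unit tests."""
    if n - c < k:
        # Fewer than k incorrect samples: any draw of k must contain a correct one.
        return 1.0
    # 1 minus the probability that all k drawn samples are incorrect,
    # computed as a running product to avoid large binomial coefficients.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Hypothetical example: 200 samples per problem, 19 of which pass the tests.
print(round(pass_at_k(200, 19, 1), 4))    # estimated pass@1
print(round(pass_at_k(200, 19, 100), 4))  # estimated pass@100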

Sources:

https://arxiv.org/pdf/2107.03374

https://github.com/openai/human-eval
