Evaluating LLMs for Code Generation in HRI: A Comparative Study of ChatGPT, Gemini, and Claude

Andrei Sobo, Awes Mubarak, Almas Baimagambetov, Nikolaos Polatidis

Research output: Contribution to journal › Article › peer-review

Abstract

This study investigates the effectiveness of Large Language Models (LLMs) in generating code for Human-Robot Interaction (HRI) applications. We present the first direct comparison of ChatGPT 3.5, Gemini 1.5 Pro, and Claude 3.5 Sonnet in the specific context of generating code for HRI applications. Through a series of 20 carefully designed prompts, ranging from simple movement commands to complex object manipulation scenarios, we evaluate the models' ability to generate accurate and context-aware code. Our findings reveal significant variations in performance, with Claude 3.5 Sonnet achieving a 95% success rate, Gemini 1.5 Pro 60%, and ChatGPT 3.5 20%. The study highlights the rapid advancement in LLM capabilities for specialized programming tasks while also identifying persistent challenges in spatial reasoning and adherence to specific constraints. These results suggest promising applications for LLMs in robotics development and education, while emphasizing the continued need for human oversight and specialized training in AI-assisted programming for HRI.
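For illustration only (not taken from the article): a minimal sketch of the kind of output a "simple movement command" prompt in such an evaluation might ask an LLM to produce. The Robot class and its methods below are hypothetical stand-ins for whatever HRI framework the prompts actually target.

# Illustrative sketch only; Robot, move_forward, and rotate are hypothetical
# placeholders, not an API described in the paper.
class Robot:
    def move_forward(self, distance_m: float) -> None:
        # In a real HRI stack this would command the robot base.
        print(f"Moving forward {distance_m} m")

    def rotate(self, degrees: float) -> None:
        print(f"Rotating {degrees} degrees")


def trace_square(robot: Robot, side_m: float = 0.5) -> None:
    # Example response to a prompt such as:
    # "Make the robot trace a square with 0.5 m sides."
    for _ in range(4):
        robot.move_forward(side_m)
        robot.rotate(90)


if __name__ == "__main__":
    trace_square(Robot())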
Original language: English
Article number: 2439610
Number of pages: 22
Journal: Applied Artificial Intelligence
Volume: 39
Issue number: 1
DOIs
Publication status: Published - 19 Dec 2024

Bibliographical note

Publisher Copyright:
© 2024 The Author(s). Published with license by Taylor & Francis Group, LLC.

