Constraint Modelling with LLMs using In-Context Learning

Kostis Michailidis from KU Leuven, Belgium, presents this paper. Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and in the syntactic use of modelling languages. In this paper, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants to transform textual problem descriptions into concrete and executable CP specifications. We investigate different transformation pipelines with explicit intermediate representations, as well as the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on two datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and on a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
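The retrieval-augmented example selection mentioned in the abstract can be sketched in miniature as follows. This is an illustrative stand-in, not the paper's actual method: the bag-of-words cosine similarity, the tiny example pool, and the prompt layout are all assumptions standing in for whatever embedding model and prompt template the authors use.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': lowercase bag-of-words token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(problem, example_pool, k=2):
    """Retrieve the k pool problems most similar to the new description."""
    q = embed(problem)
    ranked = sorted(example_pool,
                    key=lambda ex: cosine(q, embed(ex["description"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(problem, examples):
    """Few-shot prompt: retrieved (description, model) pairs, then the new problem."""
    parts = [f"Description: {ex['description']}\nModel:\n{ex['model']}\n"
             for ex in examples]
    parts.append(f"Description: {problem}\nModel:\n")
    return "\n".join(parts)

# Hypothetical pool of solved (description, CP model) pairs.
pool = [
    {"description": "maximise profit subject to a weight limit",
     "model": "# knapsack-style optimisation model"},
    {"description": "each person owns a different pet, deduce who owns the fish",
     "model": "# logic-grid satisfaction model"},
]

chosen = select_examples("maximise revenue under a budget limit", pool, k=1)
prompt = build_prompt("maximise revenue under a budget limit", chosen)
```

Because the new description shares vocabulary with the knapsack-style example, that example is retrieved and placed in the prompt ahead of the new problem, which is the intuition behind retrieval-augmented in-context learning: show the LLM solved problems that resemble the one at hand.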

7764.mp4 7764.mp3


Other contributions: Universitat de Girona. Departament d’Informàtica, Matemàtica Aplicada i Estadística
Author: Michailidis, Kostis
Tsouros, Dimos
Guns, Tias
Date: 3 September 2024
Abstract: Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and in the syntactic use of modelling languages. In this paper, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants to transform textual problem descriptions into concrete and executable CP specifications. We investigate different transformation pipelines with explicit intermediate representations, as well as the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on two datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and on a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
Format: audio/mpeg
video/mp4
Document access: http://hdl.handle.net/10256.1/7764
Language: eng
Publisher: Universitat de Girona. Departament d’Informàtica, Matemàtica Aplicada i Estadística
Collection: 30th International Conference on Principles and Practice of Constraint Programming
Rights: Attribution-NonCommercial-ShareAlike 4.0 International
Rights URI: http://creativecommons.org/licenses/by-nc-sa/4.0/
Subject: Programació per restriccions (Informàtica) -- Congressos
Constraint programming (Computer science) -- Congresses
Title: Constraint Modelling with LLMs using In-Context Learning
Type: info:eu-repo/semantics/lecture
Repository: DUGiMedia
