KDD 2025 Workshop on Prompt Optimization

Aug 04, 2025. Toronto, ON, Canada. Held in conjunction with KDD'25


Welcome to the KDD 2025 Workshop on Prompt Optimization!

The rapid development of large language models (LLMs) has enabled strong performance across a wide range of tasks, including text generation, summarization, and question answering. However, the effectiveness of these LLMs depends heavily on the quality of the prompts used. Moreover, LLMs are known to exhibit unpredictable sensitivity to input factors such as the task description, the ordering of examples, and the choice of delimiters. Prompt optimization has therefore become a critical step in eliciting the desired responses for various tasks. Despite its importance, AI practitioners often rely on trial-and-error methods, leading to inefficiency and suboptimal performance. These challenges are further compounded by the growing demand for prompt optimization in low-resource settings, multimodal applications, agentic and multi-agent systems, and ethical AI deployment.
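For illustration only: the trial-and-error workflow above amounts to scoring a handful of prompt variants that differ only in surface form (instruction wording, delimiter) and keeping the best one. A minimal Python sketch, where `call_llm` and `accuracy` are hypothetical stand-ins for a real model API and task metric:

```python
from itertools import product

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return prompt.splitlines()[-1]

def accuracy(outputs: list[str], references: list[str]) -> float:
    # Toy metric: exact-match rate; a real setup would use a task metric.
    return sum(o == r for o, r in zip(outputs, references)) / len(references)

# Variants differ only in instruction wording and delimiter, yet such
# surface changes are known to shift downstream performance.
instructions = ["Summarize the text.", "Provide a concise summary."]
delimiters = ["\n\n", "\n###\n"]
dev_set = [("input text 1", "reference 1"), ("input text 2", "reference 2")]

best_template, best_score = None, float("-inf")
for instruction, delimiter in product(instructions, delimiters):
    template = instruction + delimiter + "{input}"
    outputs = [call_llm(template.format(input=x)) for x, _ in dev_set]
    score = accuracy(outputs, [y for _, y in dev_set])
    if score > best_score:
        best_template, best_score = template, score
```

Automatic prompt optimization methods replace this exhaustive loop with search strategies such as LLM-guided rewriting or evolutionary selection.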

This workshop aims to address these gaps by providing a forum for researchers and practitioners to share their latest findings, tools, and methodologies in prompt optimization and engineering. The event will cover optimization of both hard/discrete prompts (human-readable text) and soft prompts (continuous embeddings). Topics of interest include best practices, theoretical understanding, adversarial prompting, robustness and generalization, the interplay between exemplar and instruction optimization, and applications in domains such as healthcare and finance.
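To make the hard/soft distinction concrete: a hard prompt is optimized by editing human-readable tokens, while a soft prompt is a small matrix of trainable vectors prepended to the input embeddings and tuned by gradient descent while the backbone model stays frozen. A minimal PyTorch sketch with illustrative, hypothetical dimensions (no particular model is assumed):

```python
import torch
import torch.nn as nn

vocab_size, d_model, n_soft = 32_000, 768, 20  # illustrative sizes

# Hard/discrete prompt: human-readable text, optimized in token space.
hard_prompt = "Summarize the following document:"

# Soft prompt: trainable vectors with no corresponding vocabulary tokens.
token_embedding = nn.Embedding(vocab_size, d_model)  # frozen in practice
soft_prompt = nn.Parameter(torch.randn(n_soft, d_model) * 0.02)

input_ids = torch.randint(0, vocab_size, (1, 128))   # dummy token ids
input_embeds = token_embedding(input_ids)            # (1, 128, d_model)

# Prepend the soft prompt; the result is fed to a frozen transformer,
# and only `soft_prompt` receives gradient updates during tuning.
prompted = torch.cat(
    [soft_prompt.unsqueeze(0).expand(1, -1, -1), input_embeds], dim=1
)  # shape: (1, n_soft + 128, d_model)
```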

This workshop will highlight the latest advancements in prompt engineering and optimization, provide a platform for discussing challenges and opportunities, and encourage collaboration between researchers and industry professionals. We invite submissions from a diverse range of perspectives, including theoretical insights, empirical studies, and practical applications. By bringing together experts from NLP, ML, and related fields, the workshop will play an important role in shaping the future of prompt engineering and optimization.

Contact: kdd-prompt-optimization@amazon.com

Call for Contributions

  • Link to the submission website: OpenReview portal
  • This workshop will cover a wide range of research topics related to prompt engineering and optimization, focusing on both prompting techniques in real-world applications and the theoretical understanding of these methods. Topics of interest include:

    • Prompt engineering methods
    • Prompt optimization for different applications such as RAG, Agents, and Multi-agent systems (MAS)
    • Automatic discrete prompt optimization
    • Prompt tuning
    • Prompt optimization for multilingual and low-resource settings
    • Prompt optimization methods to mitigate bias, fairness, and ethical concerns
    • Multi-modal prompting
    • Adversarial prompting
    • Theoretical foundations of prompt optimization
    • Framework and tools used for prompt optimization
    NOTE: Accepted papers will be uploaded to this website, which will serve as the workshop proceedings. Authors are also welcome to upload their papers to arXiv or publish them in other proceedings.

    Submission Guidelines

    • A paper should be submitted in PDF format through OpenReview at this link
    • Submissions are limited to 4 pages for short papers and 8 pages for long papers, excluding references. Papers must be in PDF format and use the ACM Conference Proceedings template (two-column format).
    • Additional supplemental material focused on reproducibility may be provided. Proofs, pseudo-code, and code may also be included in the supplement, which has no explicit page limit. The supplement may be in either single-column or double-column format. The paper should be self-contained, since reviewers are not required to read the supplement.
    • The Word template guideline can be found here: [link]
    • The Latex/overleaf template guideline can be found here: [link]
    • The submissions will be judged for quality and relevance through single-blind reviewing.
    • We also welcome papers that have already been submitted to other venues such as ACL, NAACL, EACL, EMNLP, COLING, NeurIPS, ICLR, ICML, LREC, TACL, SIGIR, AAAI, IJCAI, KDD, WWW, CVPR, WSDM, ICCV, or similar conferences; these may be under review or may have completed a full review cycle with a decision. If you would like to reuse reviews from a previous submission, please provide the following details:
      • Clearly state the conference name, venue, and year.
      • Provide an accessible link to the submission; if one is not available, provide the submission number or unique ID from the venue.
      • Share the reviews as-is in PDF format.
      • Optionally, include a cover letter highlighting changes made to address the reviews.
    • We recommend familiarizing yourself with OpenReview's process and moderation policy for newly created profiles:
      • New profiles created without an institutional email go through a moderation process that can take up to two weeks (source).
      • New profiles created with an institutional email are activated automatically.


    Attending the Conference

    We request that authors and interested participants review KDD's resources on attending the conference. As noted on the webpages below, KDD 2025 is strictly in-person, and authors/co-authors must arrange to present their paper at the event. Web conferencing and audio/visual support are not provided for poster presentations.

    Keynote Speakers and Panelists

    Prof. Jundong Li
    University of Virginia

    Sercan O. Arik
    Staff Research Scientist Manager, Google

    Oscar Mañas
    PhD Candidate, Mila; Visiting Researcher, Meta FAIR

    Kaize Ding
    Assistant Professor, Northwestern University



    Accepted Papers

    Plan‑and‑Write: Structure‑Guided Length Control for LLMs without Model Retraining

    Adewale Akinfaderin, Shreyas Subramanian, Akarsha Sehwag

    Prompt Optimization Meets Subspace Representation Learning for Few‑shot Out‑of‑Distribution Detection

    Faizul Rakib Sayem, Shahana Ibrahim

    Experience Retrieval‑Augmentation with Electronic Health Records Enables Accurate Discharge QA

    Justice Ou, Tinglin Huang, Yilun Zhao, Ziyang Yu, Yuchen Kuang, Yan Zeng, Peiqing Lu, Rex Ying

    DeRAG: Black‑box Adversarial Attacks on Retrieval‑Augmented Generation Applications via Prompt Injection

    Jerry Wang, Fang Yu

    Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles

    Devichand Budagam, Ashutosh Kumar, Mahsa Khoshnoodi, Sankalp KJ, Vinija Jain, Aman Chadha

    Tournament of Prompts: Evolving LLM Instructions Through Structured Debates and Elo Ratings

    Anirudh Nair, Adi Banerjee, Laurent Mombaerts, Matthew Hagen, Tarik Borogovac

    Efficient Prompt Optimization for Comparative LLM‑as‑a‑judge through Uncertainty Estimation

    Yassir Fathullah, Mark Gales

    GreenTEA: Gradient Descent with Topic‑modeling and Evolutionary Auto‑prompting

    Zheng Dong, Luming Shang, Gabriela Olinto

    The Prompt is Mightier than the Example

    Shengzhe Xu, Nikhil Muralidhar, Naren Ramakrishnan

    How and Where to Translate? The Impact of Translation Strategies in Cross‑lingual LLM Prompting

    Aman Gupta, Yingying Zhuang, Anurag Beniwal

    The Order Effect: Investigating Prompt Sensitivity to Input Order in LLMs

    Bryan Guan, Mehdi Rezagholizadeh, Tanya G. Roosta, Peyman Passban

    State‑Inference‑Based Prompting for Natural Language Trading with Game NPCs

    Minkyung Kim, Junsik Kim, Hwidong Bae, Woongcheol Yang, Sangdon Park, Sohee Bae

    Prompt Smart, Pay Less: Cost‑Aware APO for Real‑World Applications

    Piyush Singh, Jayesh Choudhari, Snehal Nair, Douglas McIlwraith