Call for Papers
In VerifAI-2: The Second Workshop on AI Verification in the Wild, we invite papers and discussions that explore the intersection of generative artificial intelligence (AI) and the correctness-focused principles of verification. Potential angles include, but are not limited to, the following:
- Generative AI for formal methods: Formal methods offer strong guarantees of desired or undesired properties, but they can be challenging to implement and scale. When faced with non-halting proofs or extensive search spaces, machine learning approaches can help guide those search processes effectively, and LLMs may even write the theorems themselves. How can we further integrate AI to enhance verification practices? How can we ensure that AI-generated test conditions or specifications align with actual desired properties?
- Formal methods for generative AI: Generative AI can benefit from formal methods, which provide assurance and thus build trust. For example, satisfiability solvers can be used as a bottleneck in reasoning domains, code generated by the model can be annotated with specifications for program analysis tools to ensure its correctness, and even simple symbolic methods such as automata simulators can steer AI generations towards more logically consistent behavior. How else can we integrate formal methods into generative AI development, post-training, and deployment?
- AI as verifiers: Hard guarantees can be notoriously rigid and difficult to achieve or specify, especially in learning-based systems. In these cases, probabilistic methods and learning-based verifiers are appealing alternatives that provide “soft assurances” and actionable feedback. How can we develop more robust and trustworthy verifiers from probabilistic methods? How can language models and agents be used as judges, critics, or self-verifiers? In what settings is it appropriate to make verification more flexible using probabilistic methods?
- Datasets and benchmarks: The advancement of research at the intersection of generative AI, verification, and interactive environments relies heavily on the availability of robust datasets and benchmarks. We welcome papers that present new datasets and benchmarks in reasoning, theorem proving, code generation, multi-turn programmatic interaction, and related areas. How can we design benchmarks that accurately reflect the challenges of combining probabilistic models with formal (or informal) verification, particularly in sequential or interactive settings?
- Special Theme: Verifiable tasks and environments for reinforcement learning: Recent progress in large-scale post-training has highlighted the importance of reinforcement learning with verifiable rewards. While much existing work has focused on easily verifiable settings such as isolated code generation, many important domains require richer tasks, multi-step interaction, and more expressive verifiers. This year, our special theme invites researchers to explore how to build verifiable tasks and environments for reinforcement learning, placing verification at the center of environment design, reward construction, and evaluation.
- Tiny and short papers: We have a separate track for tiny papers. The goal is to encourage submissions of budding findings that are modest but interesting, that have not yet had the resources for full-scale experiments, or that offer a fresh perspective on existing ideas.
We welcome novel methodologies, analytic contributions, works in progress, negative results, and position papers that will foster discussion.
Important Dates
Paper submission opens: January 5, 2026
Deadline for paper submission: February 5, 2026
Notification: March 1, 2026
Submission Requirements
Submissions to VerifAI are limited to 8 pages of content for regular submissions, and 2 pages of content for tiny papers.
Tiny papers: Since 2025, ICLR has discontinued the separate “Tiny Papers” track and instead requires each workshop to accept short paper submissions (3–5 pages in ICLR format; the exact page limit is set by each workshop), with an eye towards inclusion; see https://iclr.cc/Conferences/2025/CallForTinyPapers for a history of the ICLR tiny papers initiative. Authors of these papers will be earmarked for potential funding from ICLR, but must submit a separate Financial Assistance application that evaluates their eligibility. The application for Financial Assistance to attend ICLR 2026 will become available on https://iclr.cc/Conferences/2026/ at the beginning of February and close in early March.
Outside of the content page limit, submissions may also contain an unlimited number of pages for references and appendices. These will not necessarily be read by the reviewers. We recommend that authors use the supplementary material only for minor details (e.g., hyperparameter settings, reproducibility information) that do not fit in the main content pages. The review process is double-blind, so please ensure that all papers are appropriately anonymised.
All submissions must be formatted with LaTeX using the ICLR paper format.
All accepted papers will be presented in an in-person poster session, and some will be selected for oral presentation. We also permit papers that have been recently published or are under submission to another venue. Please mark such papers accordingly upon submission. Accepted papers will be displayed on the VerifAI homepage, but are to be considered non-archival.
All submissions must be made through OpenReview.