Text2BIM: Generating Building Models Using a Large Language Model-based Multi-Agent Framework
1 Ph.D. Candidate, Chair of Computational Modeling and Simulation, Technical University of Munich, Munich
2 Postdoctoral Researcher, Chair of Computational Modeling and Simulation, Technical University of Munich, Munich
ABSTRACT
The conventional BIM authoring process typically requires designers to master complex and tedious modeling
commands in order to materialize their design intentions within BIM authoring tools. This additional cognitive burden
complicates the design process and hinders the adoption of BIM and model-based design in the AEC (Architecture,
Engineering, and Construction) industry. To facilitate the expression of design intentions more intuitively, we pro-
pose Text2BIM, an LLM-based multi-agent framework that can generate 3D building models from natural language
instructions. This framework orchestrates multiple LLM agents to collaborate and reason, transforming textual user
input into imperative code that invokes the BIM authoring tool’s APIs, thereby generating editable BIM models with
internal layouts, external envelopes, and semantic information directly in the software. Furthermore, a rule-based
model checker is introduced into the agentic workflow, utilizing predefined domain knowledge to guide the LLM
agents in resolving issues within the generated models and iteratively improving model quality. Extensive experiments
were conducted to compare and analyze the performance of three different LLMs under the proposed framework. The
evaluation results demonstrate that our approach can effectively generate high-quality, structurally rational building
models that are aligned with the abstract concepts specified by user input. Finally, an interactive software prototype
was developed to integrate the framework into the BIM authoring software Vectorworks, showcasing the potential of
modeling by chatting.
1 INTRODUCTION
Throughout the last decades, various digital representations and workflows have continuously emerged to represent
built assets with geometric and semantic information, which can be utilized across the entire life-cycle of a building
and shared across different project stakeholders in dedicated representations (Borrmann et al. 2018). Modern BIM
authoring software encompasses design requirements across multiple disciplines. This integrated approach has led to
a proliferation of functions and tools within the software, making the user interface increasingly complex. Designers
often face a steep learning curve and require extensive training to translate design intentions into complex command sequences.
In recent years, the application of generative Artificial Intelligence (AI) in architectural design has alleviated this
additional cognitive load, enhancing the creative potential and efficiency of the design process. Current research and
industrial applications primarily focus on generating 2D images or simple 3D volumes (Li et al. 2024a), utilizing
Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and diffusion models to create 2D
architectural floor plans (Luo and Huang 2022; Shabani et al. 2023; Wang et al. 2021), building renderings (Stigsen
et al. 2023; Chen et al. 2023; Graphisoft 2024), architectural facade designs (Sun et al. 2022; Zhang et al. 2022), or
preliminary 3D conceptual forms (Zhuang et al. 2023; Pouliou et al. 2023; Tono and Fischer 2022). Recent research
proposed using large language models (LLMs) to automatically generate wall details (Jang et al. 2024), but the generation of complete 3D building models with LLMs has not yet been explored.
In other sectors, such as game development and virtual reality, advanced 3D generative models like DreamFusion
(Poole et al. 2022) and Magic3D (Lin et al. 2023) can generate complex 3D models with rich textures directly from
text descriptions, allowing designers to express design intent in natural language without tedious modeling commands.
However, the outputs from these Text-to-3D methods are typically based on voxels, point clouds, meshes, or implicit
representations like Neural Radiance Fields (NeRFs) (Mildenhall et al. 2021), which only contain geometric data of
the outer surfaces and cannot model possible internal contents of the 3D objects, nor do they include any semantic
information.
The differences between these purely geometric 3D shapes and native BIM models make it challenging to integrate
them into BIM-based architectural design workflows. Designers cannot directly modify and edit the generated contents
in BIM authoring software, and due to the lack of semantic information, these models are also difficult to apply in downstream BIM-based tasks such as analysis and simulation.
To bridge these gaps, we propose Text2BIM, which converts natural language descriptions to 3D building models
with external envelopes, internal layouts, and semantic information. By representing building models as imperative
code scripts invoking the BIM authoring software’s Application Program Interfaces (APIs), we enable multiple Large
Language Model (LLM) agents to collaborate and autonomously generate executable code that ultimately produces the building model directly in the software. Furthermore, a rule-based model checker is introduced into the agentic workflow; combined with carefully designed prompts that guide the LLMs toward architecturally rational outcomes, it automatically evaluates the generated BIM models against domain-specific building rules. This allows the LLMs to iteratively improve the model quality through
multiple feedback loops, incorporating domain knowledge from the rule-based system. A variety of complex test cases
were designed to comprehensively evaluate the framework’s generation capabilities and the quality of the results. We
also implemented an interactive software prototype to integrate the proposed framework into the BIM authoring tool
Vectorworks, demonstrating new possibilities for modeling-by-chatting during the design process.
The scope of this paper is limited to generating regular building models at the early design stage. The generated
models only include essential building components such as interior and exterior walls, slabs, roofs, doors, and windows,
along with representative semantic information like stories, spaces, and material definitions. Our goal is to generate
reasonably laid-out 3D buildings with a certain level of model quality from natural language descriptions, providing
designers with a reference to further refine the designs in BIM authoring software. This approach aims to partially
liberate designers from the tedious and repetitive modeling commands and express design intent more intuitively.
Nevertheless, the user can proceed to modify the resulting models in the BIM authoring tool at any time, ensuring a flexible, human-controlled design process.
2 RELATED WORK
The application of generative AI in the field of 3D building design is gradually becoming a research hotspot.
The key lies in constructing appropriate data representations for existing design data, experiential knowledge, and
physical principles, and then training corresponding algorithms to intelligently generate new designs (Liao et al. 2024).
de Miguel Rodríguez et al. (2020) use connectivity vectors to represent different 3D mesh-like building geometries. By
training a Variational Autoencoder (VAE) using these data, more building shapes can be generated by reconstructing
interpolated positions within the learned distribution. Vitruvio (Tono and Fischer 2022) uses the occupancy field to
describe the building shape by assigning binary values to each point in the 3D space, indicating whether the point is
occupied by an object. They employ a modified occupancy network (Mescheder et al. 2019) to learn this representation,
enabling the reconstruction of a 3D printable building mesh from a single perspective sketch.
Other researchers are employing GANs to generate conceptual 3D buildings for the early design stage. Pouliou
et al. (2023) proposed using CPCGAN (Yang et al. 2021) to generate point cloud representations of building geometries
based on specific site rules. Ennemoser and Mayrhofer-Hufnagl (2023) decoded 3D voxels into 2D images to train
a DCGAN (Radford et al. 2015), and then used signed distance fields (SDF) (Oleynikova et al. 2016) to convert
the generated images back into voxels. Although their method produces voxel models that can partially reconstruct
the interior spaces of buildings, the generated results still suffer from issues such as geometric inconsistency and limited resolution.
Overall, current research utilizing 3D generative algorithms based on pure geometric representations is still limited
to generating conceptual architectural forms and has not yet been able to produce complex 3D building models with a
high level of development (LOD) that exhibit both coherent exterior and interior geometry. Additionally, the results
generated by these purely data-driven methods are difficult to constrain using text-based architectural rules. Better data representations are therefore required to bridge generative algorithms and BIM-based design.
The rapidly advancing generative AI technologies, such as diffusion models and large language models (LLMs),
have shown significant potential in the field of architectural design. The application of diffusion models is still
primarily focused on tasks based on 2D images, such as generating architectural renderings from text (Li et al. 2024b)
and replacing GANs to produce more robust structural designs (He et al. 2023). On the other hand, the application of
LLMs is mainly centered on using natural language to retrieve data from BIM models (Zheng and Fischer 2023) and
enhancing human-machine interaction in BIM authoring software (Du et al. 2024b; Fernandes et al. 2024). However,
the concept of using LLMs to generate 3D building models has not yet been explored. One of the key challenges is
representing 3D models as one-dimensional text data that LLMs can use (Liao et al. 2024). This textual representation
must concisely capture the features of the model, avoiding the verbosity caused by the overly detailed granularity typical
of conventional model serialization files. A recent study (Jang et al. 2024) proposed converting BIM models into XML
format and then using LLM to process this structured text to add wall details. Finally, the modified XML is converted
back into the BIM model to achieve automatic wall detailing. Unlike their approach, we propose representing the BIM
model as imperative code. By constructing and invoking high-level modeling APIs in the BIM authoring tool, we aim
to express the geometric and semantic features of the model using the minimal and most flexible text format possible
while also maximizing the benefits from the powerful code generation capabilities of LLMs.
A Large Language Model-based agent refers to an autonomous system that utilizes an advanced language model to
perform tasks involving perception, decision-making, and action (Wang et al. 2024). These systems typically equip the
LLM with tools to interact with the external environment, as well as memory modules to retain the thought processes,
observations, and action records. The LLM-based agent uses the LLM as the "brain", leveraging its powerful in-context
learning (Dong et al. 2024) capabilities to synthesize information from various sources and deploy appropriate tools
for different scenarios through linguistic reasoning, thereby enabling the system to behave, plan, and execute tasks
like humans (Du et al. 2024b). The agents can operate individually or in multi-agent systems where they collaborate,
communicate, and specialize in distinct roles to solve more dynamic and complex problems using collective intelligence (Guo et al. 2024). LLM-based multi-agent systems have growing applications in software development (Hong et al. 2023), gaming (Xu et al. 2023), and beyond; several recent studies also apply them to 3D content creation at the early design stage. The 3D-GPT framework (Sun et al. 2024) breaks down 3D modeling tasks into multiple steps, coordinating three
LLM agents in a manner akin to a human team to create 3D assets in Blender that match textual descriptions. SceneCraft
(Hu et al. 2024) utilizes multiple LLM agents to convert textual descriptions into Python scripts executable in Blender,
automatically rendering 3D scenes suitable for use in games and films. This system employs scene graphs to simulate
spatial relationships between assets and iteratively refines scenes using visual language models. In the architecture
domain, similar to the aforementioned research, Çelen et al. (2024) enables multiple LLM agents to create scene graphs
based on user instructions, and then uses a backtracking algorithm to place furniture, ultimately generating interior
design scenes. Mehta et al. (2024) propose an interactive framework allowing human architects to collaborate with an
LLM agent using natural language instructions to construct structures. The agent can place blocks, seek clarifications,
and integrate human feedback within a Minecraft-like 3D simulation environment. Du et al. (2024b) integrate an LLM
agent into BIM authoring software to answer software usage questions via Retrieval Augmentation Generation (RAG)
(Lewis et al. 2020) and perform simple modeling tasks based on natural language instructions. In conclusion, the
existing literature within our field has only explored the application of LLM-based agents in straightforward modeling
contexts. There is currently a dearth of studies exploring the potential of multi-agent systems to generate sophisticated
3D BIM models.
With the continuous adoption of BIM concepts in the design phase of built assets, the rich information base provides
a sophisticated foundation for several downstream applications. The Industry Foundation Classes (IFC) (ISO 2024)
data model is well established to exchange digital representations of built assets comprising geometric and semantic
information. These representations are perfectly tailored to automatically perform checks regarding the compliance
of the envisioned design against various rules and guidelines. Such approaches have gained increasing interest from
different stakeholders in the industry throughout the last few years. A comprehensive overview of opportunities and
related challenges has been described by Preidel and Borrmann (2018). Eastman et al. (2009) have introduced an
overall approach towards automated code compliance checking based on BIM models. They divide the overall checking
process into four stages: rule interpretation to create machine-readable rules, building model preparation with advanced
analysis, rule checking execution, and reporting of detailed defects and issues.
The rules a model should be compliant with can vary in their complexity. To account for this challenge, Solihin
and Eastman (2015) have introduced a classification system for rules, which comprises four different levels. Rules
assigned to class 1 require a single or a small number of explicit data to be available. A typical example of such a rule
is the inspection of a dedicated property assigned for each element in the model. Class 2 rules are characterized by the
derivation of simple attribute values. Such calculations can comprise simple arithmetic or trigonometric operations. Class 3 rules, in turn, demand an extended data structure and processing. Such rules require a comprehensive processing of semantic and geometric
data and the evaluation of intermediate calculations. Prominent examples of such rules can be found in the area
of evaluations related to fire safety regulations. These rules often involve the assessment of material parameters of
different components, geometric features, and path search algorithms to identify relevant spaces and corridors to be
checked. Class 4 considers rules that cannot be evaluated by prescribed features but rather require a holistic evaluation
of extracted information. In most cases, these rules consider multiple objectives, which are difficult (or even impossible)
to formulate in a sequential workflow. Hence, it is expected that software applications support users in extracting and identifying relevant model information but ultimately do not provide a simple pass/fail statement at the end of a checking run.
Besides investigations regarding the varying complexity of rules, other researchers have focused on the translation
of human-readable guidelines and regulations (Zhang and El-Gohary 2017; Zhou et al. 2022; Fuchs et al. 2022) and
presented different methods to formulate the rules in machine-readable representations (Sydora and Stroulia 2020;
Häußler et al. 2021). As a recent development, the Information Delivery Specification (IDS) standard developed
and maintained by buildingSMART International enables the specification of rules targeting basic property checks
in a unified and vendor-independent manner (Tomczak et al. 2022). In its current development stage, however, it
merely supports semantic information but lacks options to specify comprehensive geometric conditions. Nuyts et al.
(2024) have investigated different approaches to current compliance checking techniques and discussed advantages and
downsides. As an extension to the approaches already mentioned, they also considered techniques related to linked data.
Numerous studies have explored the use of generative AI to create geometric representations of conceptual buildings.
However, these advancements have not been integrated into the field of BIM-driven building design. Based on the
conducted literature review, our approach appears to be the first that utilizes collaborative LLM agents to generate
BIM models with relatively high LOD based on natural language instructions, while ensuring compliance and consistency with predefined domain rules.
3 METHODOLOGY
We propose Text2BIM, an LLM-based multi-agent framework, where four LLM agents assume different roles and
collaborate to convert natural language instructions into imperative code, thereby generating building models in BIM
authoring software. The core idea is to encapsulate the underlying modeling APIs of software using a series of custom
high-level tool functions. By using prompt engineering techniques to guide LLMs in calling these functions within the
generated code, we can construct native BIM models through a concise and efficient textual representation.
The overall framework with a sample user input is shown in Fig. 1. To realize the core concept outlined above, we
make use of four LLM-based agents with dedicated tasks and skills that interact with each other via text:
• Product Owner: Refines and enhances user instructions and generates detailed requirement documents.
• Architect: Produces structured building plans with coordinates and dimensions according to basic architectural rules.
• Programmer: Writes executable Python code that invokes the tool functions to construct the model in the BIM authoring software.
• Reviewer: Provides code optimization suggestions to address issues identified in the model.
Due to the typically brief and open-ended nature of user inputs, we first designed an LLM agent acting as a
Product Owner to expand and refine user instruction. This ensures the instruction contains sufficient information to
guide the downstream Programmer agent to invoke suitable tool functions in its code. The Product Owner agent’s
elaboration and detailing of the original instructions reference multiple sources. Firstly, it reads the tool function descriptions to understand whether the input parameters required for calling the corresponding functions are available within the
user’s instructions or if additional information is needed. Secondly, it draws on the knowledge of the Architect agent.
When the Product Owner deems more architectural context or building plans necessary, especially in cases where
the building requirements are complex, it can opt to consult with the Architect. The Architect agent is designed to
generate building plans in a structured text format with coordinates and dimensions according to certain architectural
rules, combined with the user instruction relayed by the Product Owner.
The original user instruction, after being enhanced by the Product Owner, becomes a detailed requirement
document to guide the Programmer agent in combining and invoking appropriate functions from the toolset to
construct the building model expected by the user. The generated code is evaluated by a custom Python interpreter with
syntax checking. If exceptions are raised during code execution, the Programmer will be automatically prompted to
self-reflect and iteratively improve the code until errors are resolved.
The successfully generated building model is automatically exported into an IFC-based representation and then
sent to a downstream model checker for automatic quality assessment. We customized a series of domain-specific
rules in the checker to comprehensively evaluate model quality from various perspectives, including geometric analysis,
collision detection, information verification, etc. The results of the checks are ultimately exported in BIM Collaboration
Format (BCF), containing descriptions of the issues found in the model along with the associated building component
GUIDs.
At this stage, a Reviewer agent is introduced to interpret the BCF files and provide suggestions for optimizing the
model. The Reviewer is designed to understand the current issues in the model by reading the information recorded in
the BCF files. It then proposes solutions by combining this information with the Programmer’s previously generated
code and tool function documentation. This involves guiding and prompting the Programmer agent to use the
appropriate tool functions to fix the code in order to resolve the issues present in the model. This model quality
optimization loop involving the Reviewer, Programmer, and model checker will iterate multiple times until the
checker reports no errors or the agents are unable to resolve the issues autonomously.
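The control flow of this optimization loop can be sketched as follows. This is a minimal illustration, assuming placeholder callables (`execute`, `run_checker`, `reviewer`, `programmer`) for the components described above; it is not the actual implementation:

```python
MAX_ROUNDS = 3  # per the text, the loop is interrupted for human feedback after 3 attempts

def quality_optimization_loop(code, execute, run_checker, reviewer, programmer):
    """Iterate Reviewer -> Programmer -> checker until no issues remain.

    All callables are illustrative placeholders: `execute` runs code in the
    BIM tool, `run_checker` returns issues extracted from the exported
    IFC/BCF, `reviewer` and `programmer` wrap the respective LLM agents.
    """
    loop_memory = []  # loop-internal memory of Reviewer/Programmer exchanges
    for _ in range(MAX_ROUNDS):
        issues = run_checker()
        if not issues:
            return code  # the checker reports no errors
        suggestions = reviewer(issues, code, loop_memory)
        code = programmer(suggestions, code, loop_memory)
        execute(code)
        loop_memory.append((issues, suggestions, code))
    raise RuntimeError("Issues remain after 3 rounds; human feedback required")
```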
To ensure that the agents can perceive comprehensive contextual information within the loop, we implement a local
loop-internal memory module to store the historical interaction between the Programmer and Reviewer during the
optimization process. In addition, a global memory module shared by the Programmer and the Product Owner is
designed to store the user’s historical chat records and the corresponding code information. This allows the agents
to have continuous contextual information during the conversation with the human, enabling the entire framework to
improve and refine responses based on human feedback. Through these multiple optimization loops, we can iteratively
guide the LLM agents using domain knowledge to generate BIM models that meet certain design quality standards, user intentions, and architectural common sense.
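A minimal sketch of how the two memory scopes could be structured is shown below; the class and field names are illustrative assumptions rather than the actual data structures:

```python
from dataclasses import dataclass, field

@dataclass
class GlobalMemory:
    """Shared by the Product Owner and Programmer for the whole session."""
    records: list = field(default_factory=list)

    def add(self, user_message: str, code: str) -> None:
        # Stores the user's chat history together with the generated code.
        self.records.append((user_message, code))

@dataclass
class LoopMemory:
    """Local memory of one quality optimization loop, discarded afterwards."""
    exchanges: list = field(default_factory=list)

    def add(self, reviewer_suggestion: str, programmer_code: str) -> None:
        # Stores the Programmer/Reviewer interaction within the loop.
        self.exchanges.append((reviewer_suggestion, programmer_code))
```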
In the following subsections, we will describe several key modules of the proposed framework in detail.
3.1 Toolset
The manually defined tool functions can essentially be viewed as high-level, concise API interfaces exposed to the
LLMs. Since the native APIs of BIM authoring software are usually fine-grained and low-level, each tool inherently
encapsulates the logic of combining different callable API functions to achieve its functionality. This not only avoids
the tediousness and complexity of low-level API calls but also incorporates specific design rules and engineering logic,
ensuring the precision of the modeling tasks handled by the tool (Du et al. 2024b). However, designing tool functions that are general enough to cover diverse modeling scenarios is non-trivial.
To address this difficulty, we employed both quantitative and qualitative analysis methods to determine what
tool functions to implement. We began our investigation by examining user log files from BIM authoring software
to understand which commands (tools) human designers most frequently use while interacting with the software.
We utilized one day’s log data collected from 1,000 anonymous global users of the design software Vectorworks,
encompassing approximately 25 million records in 7 languages. After cleaning and filtering the raw data (Du et al.
2024a), the top 50 most frequently used commands were extracted, as shown in Fig. 2. As the chart shows, besides commands that can be triggered via code, commands directly triggered by mouse operations, such as drag, nudge, and resize, occupy a large proportion
of the log data. Additionally, since the data is completely anonymous, we cannot determine the users’ disciplines or
usage scenarios. However, some common steps in the modeling process can still be observed from this vast dataset,
such as delete, move, duplicate, set working layer, etc. We excluded mouse-dominated commands and highlighted
in orange the general modeling commands that can be implemented via APIs in the chart, to serve as a reference for the tool function design.
On the other hand, we analyzed the built-in graphical programming tool Marionette (similar to Dynamo/Grasshopper) in Vectorworks. In fact, the nodes provided by these visual scripting platforms are typically encapsulated
versions of the underlying APIs tailored to different scenarios, serving as a higher-level and more intuitive programming
interface for designers. Software vendors categorize the default nodes based on their functionalities, making it easier
for designers to understand and use them. These motivations are similar to ours. Given our use case of creating regular
BIM models, we mainly refer to the nodes under the "BIM" category.
Finally, we also considered the essential components and steps required in the typical building modeling process
of architects, such as creating floors and walls, setting materials and elevation, etc. By comprehensively synthesizing
information from these three perspectives, we designed a set of 26 tool functions for the agents, as shown in Tables 4-6 in Appendix I, covering important aspects of BIM authoring such as geometric modeling and semantic
enrichment. Since the LLM agents primarily understand and use the tools through their descriptions, we clearly
defined the functionality, usage scenarios, input parameters, and return parameters of each function using a structured
text format.
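To illustrate, a toolset entry might look like the following sketch; `create_wall` and its signature are hypothetical examples, not the actual Vectorworks tool functions:

```python
def create_wall(start_pt, end_pt, height, thickness=0.3, material="Concrete"):
    """Create a straight wall on the active story.

    Usage scenario: call for each straight wall segment in the building plan.
    Input parameters:
        start_pt (tuple[float, float]): (x, y) start of the wall axis, in meters.
        end_pt (tuple[float, float]): (x, y) end of the wall axis, in meters.
        height (float): wall height in meters.
        thickness (float): wall thickness in meters.
        material (str): material name assigned to the wall.
    Return parameters:
        str: GUID of the created wall element.
    """
    # Internally, several fine-grained native API calls (geometry creation,
    # style and material assignment, story association) would be combined here.
    ...
```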
3.2 Instruction Enhancement
The original user prompt $\alpha$ needs to be expanded and enhanced into more specific representations aligned with architectural design. Therefore, the tasks of the Product Owner and Architect at this stage are crucial, as they determine the basic layout and quality baseline of the final generated model. The prompt template $P_{Arch}$ designed for
the Architect is shown in Fig. 3. In this template, we first define the role and tasks of the agent and strictly specify the
output format and content to meet downstream programming requirements. Then, we incorporate 9 basic architectural
rules and principles into the prompt (e.g., component configuration, interior partitioning, building opening layout), guiding the LLM to produce architecturally reasonable and structurally integrated building plans.
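A stripped-down sketch of such a template is given below; the wording and placeholder names are illustrative and differ from the actual template shown in Fig. 3:

```python
# Hypothetical, simplified version of the Architect prompt template P_Arch.
ARCHITECT_PROMPT = """You are an experienced architect. Based on the user's
requirements, produce a building plan as structured plain text, listing each
story, space, wall axis, and opening with coordinates and dimensions in meters.

Follow these architectural rules:
{architectural_rules}

Strictly use this output format:
{output_specification}

User requirements:
{user_requirements}
"""

prompt = ARCHITECT_PROMPT.format(
    architectural_rules="1. Every room must have at least one door. ...",
    output_specification="Story <n>: wall from (x1, y1) to (x2, y2), height h; ...",
    user_requirements="Create a single-story house of 10 m x 6 m ...",
)
```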
Additionally, we employ the few-shot learning approach (Brown et al. 2020) to provide the LLM with an example conversation to better lead it in producing robust outputs. The user input $\alpha'$ in the Architect's prompt template is a paraphrase of the original user instruction $\alpha$ provided by the Product Owner. This is because we use the function-calling mechanism to connect the two agents: the Architect is wrapped in a function layer $F$ whose parameter is the paraphrased user needs and whose description summarizes the Architect's responsibilities, and this function is the only tool the Product Owner can invoke. It is important to note that the tool here for function-calling is not related to our toolset designed for code generation tasks. The generation of the textual building plan can thus be expressed as:

$plan = F(\alpha') = \mathrm{LLM}(P_{Arch}, \alpha') \quad (1)$
Most mainstream LLMs have been fine-tuned to support function-calling, enabling them to return structured JSON-
formatted responses containing the function name and parameter values to be executed (OpenAI 2024b; Mistral 2024a;
Gemini 2024). The basic workflow of the function-calling is as follows. When the Product Owner receives user
instructions, it infers from the prompt context whether an external function needs to be called to complete the task. If
so, it generates a JSON object, explicitly indicating the function name (Architect) and parameters (paraphrased user
needs). We automatically interpret the JSON locally and invoke the Architect function. Upon receiving the task, the
Architect generates the corresponding building plan. This information is then concatenated as additional context to
the Product Owner’s prompt, guiding it to regenerate the final standard text response. The core goal of this design
is to enable the Product Owner to flexibly decide whether to consult the Architect based on the complexity of the
task, thereby making the interaction between the two agents more intelligent and efficient. This way, the Product
Owner can work independently on simple tasks and leverage the Architect's expertise for complex tasks, optimizing both response quality and efficiency.
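The sketch below illustrates this bridge, assuming an OpenAI-style tools schema; the function and parameter names are our illustrative choices, not the actual implementation:

```python
import json

# Hypothetical tool schema exposing the Architect to the Product Owner.
architect_tool = {
    "type": "function",
    "function": {
        "name": "Architect",
        "description": "Consult the architect for a building plan when the task "
                       "is complex or architectural context is missing.",
        "parameters": {
            "type": "object",
            "properties": {
                "user_needs": {
                    "type": "string",
                    "description": "Paraphrased user requirements (alpha').",
                }
            },
            "required": ["user_needs"],
        },
    },
}

# A tool call returned by the LLM is interpreted locally and dispatched:
tool_call = '{"name": "Architect", "arguments": {"user_needs": "..."}}'
call = json.loads(tool_call)
if call["name"] == "Architect":
    # plan = architect_agent(call["arguments"]["user_needs"])
    pass  # the returned plan is concatenated to the Product Owner's context
```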
The prompt template $P_{PO}$ for the Product Owner is shown in Fig. 4. The Product Owner's role is to synthesize various contextual information, including the user's original instructions $\alpha$, the building plan $plan$ from the Architect, descriptions of function usage $T$ in the toolset, the previous chat history with the user $\Phi_{global}$ from the global memory module, and the output specifications in the prompt template, to create a detailed and comprehensive requirements document $\alpha_{enhanced}$:

$\alpha_{enhanced} = \mathrm{LLM}(P_{PO}, \alpha, plan, T, \Phi_{global}) \quad (2)$

This document serves as a guide for downstream code generation tasks. We utilize the Chain-of-Thought (CoT) technique (Wei et al. 2024) in the prompt, which requires the LLM to reason step-by-step, breaking down the building modeling task into sub-tasks and sub-logics that utilize different tool functions. Therefore, the whole modeling task is decomposed into a transparent, traceable sequence of steps.
3.3 Code Generation and Execution
The enhanced user requirements $\alpha_{enhanced}$ will be input into the Programmer's prompt template $P_{Co}$, as illustrated
in Fig. 5. The Programmer agent is required to write concise Python code utilizing the functions solely from the provided toolset.
Fig. 5. Prompt template of the Programmer agent. Placeholders with “«»” indicate the dynamic content that can be inserted into the template.
One advantage of generating Python code is that the LLM can flexibly combine and call different tool functions using various
algorithmic logic. This is more powerful than JSON-based function-calling in a recent work (Fernandes et al. 2024),
which is constrained to executing single functions sequentially and cannot meet the complex logical demands of tasks
such as building modeling. Given that the enhanced requirements contain rich contextual information, we choose to
leverage the zero-shot learning (Wei et al. 2022) ability of the agent without the provision of exemplars. This approach
is intended to allow the LLM to flexibly explore different code logic tailored to various task requirements, rather than
being constrained by rigid examples, thereby maximizing the utilization of its pre-trained knowledge. Eq. 3 illustrates
the process of code generation, where we denote tool information as 𝑇 and the historical chat records with the Product
Owner as Φ𝑔𝑙𝑜𝑏𝑎𝑙 :
A custom Python interpreter will execute the generated code within a controlled environment. We use an abstract
syntax tree (AST) to represent the code, traversing the tree nodes to evaluate each Python expression. This approach
allows us to customize the usable syntax and callable functions while enabling more precise error handling. The
interpreter uses a state dictionary to store and track the results of code execution, including imported packages, defined
function objects, and variable names and their values. This lays the technical foundation for the Programmer agent’s
memory capability — the LLM can utilize and access variables and functions defined in previous dialogues or directly
continue previous code. The interpreter can still execute code correctly by retrieving the state dictionary, ensuring
comprehensive context is maintained at the code level throughout the entire session. We extended previous work (Du
et al. 2024b) to allow the interpreter to support more data types and advanced syntax while restricting potentially
problematic syntax like the while statement. Additionally, the interpreter can only evaluate functions from the toolset and the Python built-in library (except for custom-defined functions within the generated code), preventing the invocation of arbitrary or potentially unsafe external code.
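The following is a heavily simplified sketch of such an AST-walking interpreter with a state dictionary and a name whitelist; the real implementation supports far more syntax and finer-grained error handling:

```python
import ast

class SafeInterpreter:
    def __init__(self, toolset):
        # State dictionary: persists toolset functions, variables, and results
        # across calls, giving the Programmer agent code-level memory.
        self.state = dict(toolset)

    def run(self, source):
        tree = ast.parse(source)  # raises SyntaxError on invalid code
        for node in tree.body:
            self._exec(node)
        return self.state

    def _exec(self, node):
        if isinstance(node, ast.Assign):
            value = self._eval(node.value)
            for target in node.targets:  # only simple name targets supported here
                self.state[target.id] = value
        elif isinstance(node, ast.Expr):
            self._eval(node.value)
        elif isinstance(node, ast.While):
            raise SyntaxError("'while' statements are not allowed")
        else:
            raise SyntaxError(f"unsupported statement: {type(node).__name__}")

    def _eval(self, node):
        if isinstance(node, ast.Call):
            func = self._eval(node.func)
            return func(*[self._eval(arg) for arg in node.args])
        if isinstance(node, ast.Name):
            if node.id not in self.state:
                raise NameError(f"'{node.id}' is not a whitelisted name")
            return self.state[node.id]
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Tuple):
            return tuple(self._eval(elt) for elt in node.elts)
        raise SyntaxError(f"unsupported expression: {type(node).__name__}")

# Usage with a stub tool function standing in for a real modeling API:
interp = SafeInterpreter({"create_wall": lambda s, e, h: f"wall {s}->{e} h={h}"})
interp.run("w = create_wall((0, 0), (5, 0), 3.0)")
print(interp.state["w"])  # wall (0, 0)->(5, 0) h=3.0
```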
If an exception $E$ is thrown by the interpreter during code evaluation, resolving this error will be treated as a new task input for the Programmer. The code that caused the exception will be used as the new history information. Both the error and the code will be re-input into the prompt template $P_{Co}$ for the Programmer to regenerate new code. As shown in Eq. 3, this self-reflective modification loop will iterate up to $n$ times until the newly generated $code_{n+1}$ is free of exceptions $E_{n+1}$. If the issue is not resolved within 3 attempts, we will interrupt the process to seek human feedback.
It should be noted that within this self-reflection loop, no updates will be made to any memory modules; only the
state dictionary will be updated. In each iteration, the only historical information the Programmer can see is the code
that failed in the previous step. We only store responses from the Programmer that are free of errors in the memory
modules.
3.4 Model Checking and Quality Optimization
The successfully executed code will generate a building model within the BIM authoring software. Although
we used extensive prompt engineering in earlier stages to guide the LLMs in producing spatially and geometrically
reasonable results, the inherently stochastic nature of the process may still lead to flaws in the generated building.
Therefore, deterministic domain-specific rules are used to verify and refine the generated BIM model. We employ a
rule-based model checker to evaluate the model quality. According to (Solihin and Eastman 2015), we defined a series
of rules covering classes 1 to 3, primarily checking for geometric conflicts between components (e.g., whether doors
and windows overlap), correct semantic attribute definitions (e.g., whether each component has a unique GUID), and
compliance of spatial layouts with architectural common sense (e.g., whether the roof is supported by walls and not
floating). Detailed documentation of all rules can be found in the Appendix II. The issues identified in the model will
be exported to BCF files. A script is used to automatically extract useful information from the BCF, i.e., the name and
description of the issues and corresponding rules, as well as the GUIDs of the associated components. We denote this
information as $I$ and input it, along with the $code_0$ that generated the checked model and the toolset information $T$, into the Reviewer agent's prompt template $P_{Rev}$ (see Fig. 6), asking the Reviewer to provide suggestions $\beta$ on solving the issues. The suggestions $\beta$ are then fed to the Programmer as an input task to guide it in generating code to resolve conflicts. This quality
optimization loop will iterate multiple times until the checker no longer finds any errors. Similar to the self-reflection
loop, if issues persist after 3 attempts, we will interrupt the loop and have a human designer resolve the issues. In
summary, our framework employs three nested loops to generate the BIM model and iteratively improve its quality, as summarized in Alg. 1.
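The BCF extraction step can be sketched as follows, assuming the standard BCF 2.1 zip layout (one folder per topic GUID containing markup.bcf and viewpoint .bcfv files); tag names may differ in other BCF versions, and this is not the actual extraction script:

```python
import zipfile
import xml.etree.ElementTree as ET

def extract_issues(bcf_path):
    """Collect issue title, description, and affected IFC GUIDs per topic."""
    topics = {}
    with zipfile.ZipFile(bcf_path) as bcf:
        for name in bcf.namelist():
            if "/" not in name:          # skip top-level files such as bcf.version
                continue
            folder = name.split("/")[0]  # one folder per topic GUID
            entry = topics.setdefault(folder, {"components": []})
            if name.endswith("markup.bcf"):
                topic = ET.fromstring(bcf.read(name)).find("Topic")
                if topic is not None:
                    entry["guid"] = topic.get("Guid")
                    entry["title"] = topic.findtext("Title", default="")
                    entry["description"] = topic.findtext("Description", default="")
            elif name.endswith(".bcfv"):  # viewpoint referencing model components
                root = ET.fromstring(bcf.read(name))
                entry["components"] += [
                    c.get("IfcGuid") for c in root.iter("Component") if c.get("IfcGuid")
                ]
    return list(topics.values())
```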
4 IMPLEMENTATION
We developed an interactive software prototype using the architecture shown in Fig. 7, integrating the proposed
framework into the BIM authoring software Vectorworks. Our implementation is based on Vectorworks’ open-source
web palette plugin template (Vectorworks 2024) and significantly extends previous work (Du et al. 2024b) to support
multi-agent workflows. The frontend of the web palette is implemented with Vue.js and runs in a web environment
built on Chromium Embedded Framework (CEF), allowing us to embed dynamic web interfaces in Vectorworks using
modern frontend technologies. The backend of the web palette is a C++ application, enabling the definition and
exposure of asynchronous JavaScript functions within a web frame, while the actual logic is implemented using C++
functions.
Fig. 7. Software architecture of the implemented prototype in Vectorworks based on web palette plugin template
Since our multi-agent framework is entirely based on Python, we invoke Vectorworks’ built-in Python engine
within the corresponding C++ functions to execute our code, thereby delegating the JavaScript calls to the Python implementation. Our
framework supports various mainstream LLMs, such as GPT-4o (OpenAI 2024a), Gemini-1.5-pro (Google 2024),
and the state-of-the-art open-source model Mistral-Large-2 (Mistral 2024b). Within the framework, we maintain a
state dictionary and a local memory module to store the state between Python calls and the interaction records of the
agents in the quality optimization loop, respectively. We use Vectorworks’ API to export the generated models as IFC
files, and then use the Solibri Autorun commands to automatically launch the Solibri Model Checker, perform the checks, and export the results as BCF. In the Solibri Model Checker, 30 rules are implemented according to the rule documentation in Appendix II, as shown in Fig. 8.
On the frontend, a session begins with a new user input. If the input is audio, we employ the Whisper model
(Radford et al. 2022) to convert the speech to text and populate the input message box. We use page session storage
to store user input text and the chat records of the agents, displaying them sequentially in the dialog box with different
colors. Users can easily start a new session by reloading the page. Fig. 9 shows the developed prototype. Users
can chat directly with the agents by clicking the microphone button, generating editable building models directly in the software.
5 EXPERIMENTS
This section is devoted to the experimental evaluation of the presented methodology. The evaluation is performed
via the employment of test user prompts (instructions) to the proposed framework and comparing the generated
outcome of various LLMs including GPT-4o (OpenAI 2024a), Mistral-Large-2 (Mistral 2024b) and Gemini-1.5-Pro
(Google 2024). Ten user prompts were conceptualized to comprehensively test the generative capabilities and output quality
of the proposed framework from various perspectives, as illustrated in Table 1. Table 2 summarizes the different
architectural scenarios/requirements covered by each test prompt, including aspects such as shape, dimensions, spatial
features, room layouts, construction materials, etc., focusing on the functional and aesthetic elements essential to
their respective purposes (whether residential or commercial). Additionally, by deliberately leaving some building
requirements unspecified in the test prompts, we aim to experiment with the framework’s ability to generate designs in
open-ended settings. Given the inherent stochasticity of generative models, each test prompt was input into each LLM
five times, resulting in a total of 391 IFC models (including intermediate results from the optimization process). The
experiments are based on this dataset and aim to report statistically meaningful results.
Table 3 presents the pass rates of final models generated by different test prompts during model checking. These
rates are calculated based on a total of 30 domain-specific checking rules implemented in Solibri and reflect the quality
of the generated model. "n" indicates the number of times this prompt was input.
TABLE 1. Content of the test prompts
Nr. Content
1 I want to build a two-story hotel with eight rooms on each floor. The rooms are arranged in groups of four on each side, separated by a
4-meter-wide corridor in the middle. Each room has a door and a window. The doors of the rooms are on the corridor side of the wall,
and the windows are on the outside wall of the building. The building should have a wooden pitched roof and brick walls.
2 Create a basic 3D model of a four-story residential house with dimensions of 10 by 6 meters.
3 Create a single-story residential house with a total floor area of 120 square meters. Includes three bedrooms, two bathrooms, a kitchen,
and a living room. The house should have a wooden pitched roof, and incorporate at least four windows and one main entrance door.
Use the concrete walls.
4 Create a building with three connected sections. Each section should be a rectangle (10m x 5m) with walls of 3 meters in height. The
sections should be connected by 5-meter-long walls. Add slabs and a continuous, concrete-pitched roof that covers all sections. Add
doors and windows to each section. Choose the right material for the wall.
5 Create a 3-story L-shaped house with each leg of the L being 8 meters long and 4 meters wide. Place a door at the corner of the L and
a window on each side of the L. I want the whole building to be made of wood.
6 Design a building with a complex polygonal footprint (e.g., hexagonal). Each side of the hexagon should be 5 meters. Add a slab for
the floor and a pitched roof. Include a door on one side and a window on each of the other sides.
7 Construct a residential building with a rectangular footprint (15m x 10m), a pitched roof and two floors. Create balconies by extending
the floor slab outwards from the exterior walls on the first floor. Add doors and windows to each floor. Make sure that the balconies
are accessible from the inside.
8 Construct a modern office building with a rectangular base of 20 meters by 20 meters. Set the wall height to 3 meters. Include four
rooms (5x5 meters each) along the perimeter, with a central open space. Add doors and windows to each room and a main entrance
door to the building.
9 Design a two-story apartment building with an H-shaped base. Each floor consists of two apartments with two rooms each.
10 Create a T-shaped, single-story building with a horizontal section of 10 meters x 30 meters and a vertical section of 10 meters x 20
meters. Connect the two sections by placing a door at their junction. Each section has three windows. The entire building is made of
concrete.
TABLE 2. Categorization of user requirements in test prompts ("-" for not specified)
Nr. | Type | Floors | Base Shape | Roof | Materials | Rooms | Spatial Features | Building Openings
1 | Hotel | 2 | Rectangular | Pitched | Wood, brick | 16 | 4m wide corridor | Doors on corridor side, windows on outside wall
2 | House | 4 | Rectangular (10m x 6m) | - | - | - | - | -
3 | House | 1 | 120 m², shape not specified | Pitched | Wood, concrete | 7 | - | Main entrance door, at least four windows
4 | Building | - | Connected rectangles (10m x 5m) | Pitched | - | - | Connecting sections with 5m long walls | Doors and windows in each section
5 | House | 3 | L-shaped | - | Wood | - | Each leg of the L being 8m x 4m | Door at corner, window on each side
6 | Building | - | Hexagon | Pitched | - | - | Each side of the hexagon is 5m | Door on one side, window on each other side
7 | Residential building | 2 | Rectangular (15m x 10m) | Pitched | - | - | Balconies on first floor | Doors and windows on each floor, accessible balconies
8 | Office building | - | Rectangular (20m x 20m) | - | - | 4 | Central open space, rooms along the perimeter | Doors and windows in each room, main entrance door
9 | Apartment building | 2 | H-shaped | - | - | 8 | 2 apartments with 2 rooms each | -
10 | Building | 1 | T-shaped | - | Concrete | - | Horizontal section: 10m x 30m, vertical section: 10m x 20m | Door at junction, three windows per section
The "Mean pass rate" rows record the average pass rate of the corresponding LLM under each test prompt, as well as the overall average pass rate across all
test cases. The three LLMs under the proposed framework were generally able to generate high-quality BIM models.
Among them, GPT-4o and Mistral-Large-2 achieved average pass rates of 99.4% and 99.2% respectively across all test
cases, while Gemini-1.5-Pro only reached a pass rate of 94.05%. When examining each test prompt individually, it becomes evident that Gemini-1.5-Pro performed the least consistently, with its metrics fluctuating between 66.67% and 100%, indicating a high variance. Prompts 3, 4, and 5 could be considered "commonly recognized challenges", as none of the LLMs managed to avoid errors in all five runs. Additionally, despite Mistral being the smallest model in terms of size among the three, it demonstrated remarkable stability and reasoning capability.
TABLE 3. Rule pass rates of final models generated by different backbone LLMs under test prompts
Backbone LLMs | n | Prompt 1 | Prompt 2 | Prompt 3 | Prompt 4 | Prompt 5 | Prompt 6 | Prompt 7 | Prompt 8 | Prompt 9 | Prompt 10
GPT-4o | 1 | 100.00% | 100.00% | 96.67% | 100.00% | 100.00% | 100.00% | 96.67% | 100.00% | 96.67% | 100.00%
GPT-4o | 2 | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 96.67%
GPT-4o | 3 | 100.00% | 100.00% | 100.00% | 96.67% | 100.00% | 100.00% | 96.67% | 100.00% | 100.00% | 100.00%
GPT-4o | 4 | 100.00% | 100.00% | 100.00% | 100.00% | 96.67% | 100.00% | 96.67% | 100.00% | 100.00% | 100.00%
GPT-4o | 5 | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 96.67% | 100.00%
GPT-4o | Mean (overall: 99.40%) | 100.00% | 100.00% | 99.33% | 99.33% | 99.33% | 100.00% | 98.00% | 100.00% | 98.67% | 99.33%
Gemini-1.5-Pro | 1 | 73.33% | 80.00% | 96.00% | 100.00% | 80.00% | 96.67% | 100.00% | 100.00% | 96.67% | 100.00%
Gemini-1.5-Pro | 2 | 96.67% | 100.00% | 86.67% | 96.67% | 93.33% | 100.00% | 66.67% | 100.00% | 100.00% | 96.67%
Gemini-1.5-Pro | 3 | 100.00% | 76.67% | 100.00% | 73.33% | 90.00% | 100.00% | 96.67% | 100.00% | 96.67% | 100.00%
Gemini-1.5-Pro | 4 | 76.67% | 96.67% | 100.00% | 100.00% | 100.00% | 100.00% | 96.67% | 100.00% | 100.00% | 100.00%
Gemini-1.5-Pro | 5 | 93.33% | 76.67% | 86.67% | 100.00% | 96.67% | 100.00% | 100.00% | 96.67% | 100.00% | 90.00%
Gemini-1.5-Pro | Mean (overall: 94.05%) | 88.00% | 86.00% | 93.87% | 94.00% | 92.00% | 99.33% | 92.00% | 99.33% | 98.67% | 97.33%
Mistral-Large-2 | 1 | 100.00% | 93.33% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00%
Mistral-Large-2 | 2 | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00% | 96.67%
Mistral-Large-2 | 3 | 100.00% | 100.00% | 96.67% | 100.00% | 96.67% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00%
Mistral-Large-2 | 4 | 100.00% | 100.00% | 100.00% | 90.00% | 96.67% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00%
Mistral-Large-2 | 5 | 100.00% | 100.00% | 96.67% | 96.67% | 96.67% | 100.00% | 100.00% | 100.00% | 100.00% | 100.00%
Mistral-Large-2 | Mean (overall: 99.20%) | 100.00% | 98.67% | 98.67% | 97.33% | 98.00% | 100.00% | 100.00% | 100.00% | 100.00% | 99.33%
We further evaluated the effectiveness of the model quality optimization method proposed in the framework. Unlike
the pass rate, which is a category-related metric, we selected the issue amount as a more fine-grained, instance-level
metric. Specifically, while the pass rate provides a broad overview of the types of issues present in the model (e.g.,
if the model fails the IfcSlab-IfcSlab intersection rule but passes the other 29 rules, it indicates a type of issue related
to slab position conflicts), the issue amount offers a deeper insight into the quantity of affected component pairs (e.g.,
how many pairs of slab instances have positioning conflicts in the model).
Fig. 10 presents three line charts illustrating the average issue amount that exists in BIM models at the initial
generation stage, as well as after the first, second, and third rounds of quality optimization iteration. It can be observed
that GPT-4o and Mistral-Large-2, when acting as Reviewer agents, are effective in iteratively resolving issues within
the model. The average issue amount tends to decrease overall with each iteration step and converges eventually to a
smaller value, consistent with the results reported in Table 3. In contrast, Gemini-1.5-Pro exhibits an upward trend
in some test prompts, indicating an increase in the number of issues. Upon closely reviewing the logs, we found that the Gemini model tends to rewrite the code and regenerate the model from scratch in an attempt to solve the issues, rather than following the instructions in the prompt template to create targeted code fixes. This approach can cause duplication and conflict between components of the new and old models, leading to a sharp increase in issue amount. Furthermore, the excessive number of issues negatively impacts the judgment of the Reviewer agent in subsequent iterations.
Fig. 11 visualizes part of the building models generated by the proposed framework. Due to space constraints, the
full list of generated building models can be found in Appendix III. By carefully reviewing these generated models,
we systematically evaluated whether the generated buildings meet the user requirements and intentions described in the test prompts; the results are summarized in Fig. 12. The three LLMs under the framework were generally able to effectively fulfill the user's intentions. Even for open-ended instructions that
do not explicitly specify some requirements, our framework can augment and complete the user’s original input. It
leverages the pre-trained architectural knowledge of the LLMs and feedback from the domain-rule-based model checker
to ultimately produce buildings that are rational both in engineering and architectural terms. Additionally, it can be
observed that the reasonable arrangement of building openings, a task requiring advanced spatial understanding, poses a common challenge for all three LLMs.
Fig. 12. Evaluation of the generated models against the requirements specified by users in the test prompts. 0 indicates that
the requirement was not specified in the instructions, but LLM agents generated relevant results. -0.5 indicates that the
agents did not generate any relevant results for the unspecified requirement.
6 DISCUSSION
The implemented framework is currently capable of generating regular, non-curved building models in the early
design stage. To generalize this approach to irregularly shaped buildings or more detailed engineering models,
the development of more complex tools for the agents is required to significantly expand the existing limited toolset.
However, a challenge arises in organizing and managing the vast amount of tool information and their interdependencies,
so that the LLM can efficiently retrieve useful functions. Knowledge graphs and graph-based Retrieval Augmented Generation (RAG) could be promising directions for this purpose.
Additionally, in the current framework, the Architect agent generates structured text-based building plans that
include numerical information such as coordinates and dimensions. This content format is designed to be aligned
with the input requirements of the tool functions, allowing downstream agents to better understand and utilize specific
architectural parameters in the code. It is observed in the experiments that this approach is more robust and accurate
than using formats like SVG (XML) or images for representing floor plans. Moreover, the Architect agent currently designs
the building’s interior layout based solely on its pre-trained knowledge and the examples and information provided
in prompt templates. Although the generated interior partitions appear visually reasonable to some extent (as shown
in Fig. 17), they lack comprehensive consideration of complex architectural conditions (e.g., lighting, functionality, and circulation).
Our experiment demonstrates that LLM agents can automatically resolve clashes within the model to a limited
extent through the designed quality optimization loop. Although this module is not the main focus of this study,
our preliminary exploration in this direction presents a new technical approach for research in related fields. This
is particularly significant considering that current research on automatic clash resolution mainly focuses on using
optimization algorithms (Wu et al. 2023), classical machine learning (Harode et al. 2024), or reinforcement learning
(Harode et al. 2022). Despite these advancements, the conflict resolution method based on LLM agents still has
significant limitations. Fig. 13 summarizes some representative scenarios encountered during the quality optimization
loop. The first common failure (a) involves the agent attempting to rewrite code to create a new model, leading to an
increase in issue amount due to conflicts between the new and existing model components. In scenario (b), the upper two
floors of the initial model have overlapping and nested walls, doors, and windows. In such highly complex situations,
LLM agents, which rely solely on code and checker feedback (rule/issue descriptions) for contextual information,
cannot resolve all the issues and are prone to hallucinations. The strategy the agent adopts here involves deleting parts
of the walls on the relevant floors. While this action can reduce the overall issue amount in the model, it compromises
the structural integrity of the building. A human designer would instead inspect the model visually and reposition the components; however, current agents can only perceive information from one-dimensional text and are not yet capable of understanding 3D space in this manner. Scenario (c) illustrates a successful case where the agent
correctly adjusts the height of a floating roof to align with the top floor’s wall elevation. Overall, LLMs perform well
for intuitive issues with deterministic solutions (typically Class 1 rules, such as "no space defined in model -> create
space"). However, they often fail on complex issues that require higher-level spatial understanding and have open-ended
solutions (usually Class 3 rules, such as "two partition walls intersect -> which wall is to be moved, and in which
direction?"). Although our framework allows users to guide the LLM to perform the appropriate issue-solving actions
via dialogue or manually continue editing the BIM model generated in the software, future research will prioritize
enhancing the LLM’s spatial understanding capabilities to advance toward an autonomous conflict resolution system.
Given that our approach generates code representations of 3D models based on prompt engineering techniques, it
does not require fine-tuning of LLMs. This is fundamentally different from conventional Text-to-3D methods, which
typically require constructing a 3D dataset for training. Commonly used metrics such as Chamfer Distance (CD) and
Intersection over Union (IoU) mainly focus on evaluating the geometric accuracy of point cloud/voxel models. As
these metrics are not applicable to our data representation approach, we propose using the pass rate of domain-specific
rule checks as a quantitative metric to evaluate the generated BIM models. While this method can verify whether the
generated models are structurally complete and reasonable in architectural terms, its limitation lies in the fact that the
rules provided by model checkers cannot assess whether the generated buildings align with the abstract and dynamic
user intentions expressed in natural language instructions (e.g., "H-shaped house", "arrange rooms along the building
perimeter", etc.). Currently, we still rely on manual review to determine whether the models align with the intended
instructions. Future research could leverage the data generated by this work to develop new benchmark datasets and automated evaluation metrics.
7 CONCLUSIONS
We introduce Text2BIM, an LLM-based multi-agent collaborative framework that generates building models in
BIM authoring software from natural language descriptions. The main findings and contributions of this study are as
follows:
• Unlike previous studies that focused on generating the 3D geometric representation of buildings, our framework
is capable of producing native BIM models with internal layouts, external envelopes, and semantic information.
• We propose representing 3D building models using imperative code that interacts with BIM authoring software APIs. By employing prompt engineering techniques, multiple LLM agents collaborate to develop the code.
• Innovatively, a domain-specific rule-based model checker is integrated into the framework to guide LLMs in generating architecturally and structurally rational outcomes. The proposed quality optimization loop demonstrates that the LLM agents can iteratively resolve conflicts within the BIM model based on textual feedback.
• Extensive experiments were conducted within the proposed framework, including a comparative analysis of three open/closed-source LLMs in the multi-agent setting.
• An interactive software prototype is developed that integrates the proposed framework into the BIM authoring
tool Vectorworks, showcasing innovative possibilities for modeling-by-chatting during the design process.
We believe that the proposed methodology can be extended to a broader range of use cases beyond just model
generation, especially if more specialized tools are developed for LLM agents to utilize. We hope that readers will find
inspiration from this and explore using LLMs to address more challenges within our field.
8 DATA AVAILABILITY STATEMENT
Some data and models that support the findings of this study are available from the corresponding author upon
reasonable request.
9 ACKNOWLEDGMENTS
This work is funded by the Nemetschek Group, which is gratefully acknowledged. We sincerely appreciate the data support provided by Vectorworks.
10 SUPPLEMENTAL MATERIALS
1. Demo video
TABLE 7: Class 1 rules
Scope: The rule checks if each building component has a globally unique GUID.
Desired resolution: Refine GUIDs of those components which have not passed this rule.
Scope: The rule checks if the model has a spatial breakdown structure comprising IfcSite, IfcBuilding, and IfcBuildingStorey.
Desired resolution: Create appropriate spatial containers and assign components accordingly.
Scope: The rule checks if all doors and windows are on the same floor as the containing wall.
Desired resolution: Re-assign spatial associations for each affected door or window.
Scope: The rule checks if each component has layer information attached to it.
Scope: The rule checks if certain components are present in the model (e.g., walls, doors, windows, slabs, and roofs).
TABLE 8: Class 2 rules
Scope: The rule checks if the description value of all building components is set and the value complies with the expected format.
Desired resolution: Set the Vectorworks-internal ID into the description field of those components which have not passed this rule.
TABLE 9: Class 3 rules
Scope: The rule checks that roofs/slabs rest on supporting walls and are not floating.
Desired resolution: Move the roofs/slabs to the top of the supporting walls.
Scope: The rule checks that the model doesn't contain any orphan doors or windows (a door or a window that is not hosted by any wall).
Fig. 14. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.1).
Fig. 15. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.4).
Fig. 17. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.3).
Fig. 19. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.8).
Fig. 21. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.10).
REFERENCES
Borrmann, A., König, M., Koch, C., and Beetz, J. (2018). “Building information modeling: Why? what? how?.”
Building Information Modeling - Technology Foundations and Industry Practice, A. Borrmann, M. König, C. Koch,
and J. Beetz, eds., Springer, Cham.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell,
A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter,
C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A.,
Sutskever, I., and Amodei, D. (2020). “Language models are few-shot learners, <https://arxiv.org/abs/2005.14165>.
Chen, J., Shao, Z., and Hu, B. (2023). “Generating interior design from text: A new diffusion model-based method for
de Miguel Rodríguez, J., Villafañe, M. E., Piškorec, L., and Sancho Caparrini, F. (2020). “Generation of geometric
interpolations of building types with deep variational autoencoders.” Design Science, 6, e34.
Dong, Q., Li, L., Dai, D., Zheng, C., Ma, J., Li, R., Xia, H., Xu, J., Wu, Z., Chang, B., Sun, X., Li, L., and Sui, Z.
(2023). “A survey on in-context learning, <https://arxiv.org/abs/2301.00234>.
Du, C., Deng, Z., Nousias, S., and Borrmann, A. (2024a). “Towards commands recommender system in bim authoring
tool using transformers.” Proc. of the 31st Int. Conference on Intelligent Computing in Engineering (EG-ICE) (Jul).
Du, C., Nousias, S., and Borrmann, A. (2024b). “Towards a copilot in BIM authoring tool using large language model
based agent for intelligent human-machine interaction.” Proc. of the 31st Int. Conference on Intelligent Computing
in Engineering (EG-ICE) (Jul).
Eastman, C., Lee, J.-m., Jeong, Y.-s., and Lee, J.-k. (2009). “Automatic rule-based checking of building designs.”
Automation in Construction, 18, 1011–1033.
Ennemoser, B. and Mayrhofer-Hufnagl, I. (2023). “Design across multi-scale datasets by developing a novel approach
Fernandes, D., Garg, S., Nikkel, M., and Guven, G. (2024). “A gpt-powered assistant for real-time interaction with
Fuchs, S., Witbrock, M., Dimyadi, J., and Amor, R. (2022). “Neural semantic parsing of building regulations for
compliance checking.” IOP Conference Series: Earth and Environmental Science, 1101, 092022.
Gemini Team, Google (2024). “Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context,
<https://arxiv.org/abs/2403.05530>.
Guo, T., Chen, X., Wang, Y., Chang, R., Pei, S., Chawla, N. V., Wiest, O., and Zhang, X. (2024). “Large language
model based multi-agents: A survey of progress and challenges, <https://arxiv.org/abs/2402.01680>.
Harode, A., Thabet, W., and Gao, X. (2022). An Integrated Supervised Reinforcement Machine Learning Approach for
Harode, A., Thabet, W., and Gao, X. (2024). “Developing a machine-learning model to predict clash resolution
He, Z., Wang, Y.-H., and Zhang, J. (2023). “Generative structural design integrating bim and diffusion model,
<https://synthical.com/article/bb21e837-1ed0-4489-8a33-768e6d0882fb> (Oct).
Hong, S., Zhuge, M., Chen, J., Zheng, X., Cheng, Y., Zhang, C., Wang, J., Wang, Z., Yau, S. K. S., Lin, Z., Zhou, L.,
Ran, C., Xiao, L., Wu, C., and Schmidhuber, J. (2023). “Metagpt: Meta programming for a multi-agent collaborative
framework, <https://arxiv.org/abs/2308.00352>.
Hu, Z., Iscen, A., Jain, A., Kipf, T., Yue, Y., Ross, D. A., Schmid, C., and Fathi, A. (2024). “Scenecraft: An llm agent
for synthesizing 3d scenes as blender code.”
Häußler, M., Esser, S., and Borrmann, A. (2021). “Code compliance checking of railway designs by integrating BIM,
BPMN and DMN.” Automation in Construction, 121, 103427.
ISO (2024). “ISO 16739-1:2024: Industry Foundation Classes (IFC) for data sharing in the construction and facility
management industries - Part 1: Data schema.”
Jang, S., Lee, G., Oh, J., Lee, J., and Koo, B. (2024). “Automated detailing of exterior walls using NADIA: Natural-
language-based architectural detailing through interaction with AI.” Advanced Engineering Informatics, 61, 102532.
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T.,
Riedel, S., and Kiela, D. (2020). “Retrieval-augmented generation for knowledge-intensive NLP tasks.” Proceedings
of the 34th International Conference on Neural Information Processing Systems, NIPS ’20, Red Hook, NY, USA,
Curran Associates Inc.
Li, C., Zhang, T., Du, X., Zhang, Y., and Xie, H. (2024a). “Generative ai for architectural design: A literature review,
<https://arxiv.org/abs/2404.01335>.
Li, P., Li, B., and Li, Z. (2024b). “Sketch-to-architecture: Generative ai-aided architectural design,
<https://arxiv.org/abs/2403.20186>.
Liao, W., Lu, X., Fei, Y., Gu, Y., and Huang, Y. (2024). “Generative ai design for building structures.” Automation in
Construction, 157, 105187.
Lin, C.-H., Gao, J., Tang, L., Takikawa, T., Zeng, X., Huang, X., Kreis, K., Fidler, S., Liu, M.-Y., and Lin, T.-Y.
(2023). “Magic3d: High-resolution text-to-3d content creation.” IEEE Conference on Computer Vision and Pattern
Recognition (CVPR).
Luo, Z. and Huang, W. (2022). “Floorplangan: Vector residential floorplan adversarial generation.” Automation in
Construction, 142, 104470.
Mehta, N., Teruel, M., Deng, X., Figueroa Sanz, S., Awadallah, A., and Kiseleva, J. (2024). “Improving grounded
language understanding in a collaborative environment by interacting with agents through help feedback.” Findings
of the Association for Computational Linguistics: EACL 2024, Y. Graham and M. Purver, eds., St. Julian’s, Malta,
Association for Computational Linguistics.
Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., and Geiger, A. (2019). “Occupancy networks: Learning
3d reconstruction in function space.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), 4455–4465.
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. (2021). “Nerf: representing
scenes as neural radiance fields for view synthesis.” Commun. ACM, 65(1), 99–106.
Nuyts, E., Bonduel, M., and Verstraeten, R. (2024). “Comparative analysis of approaches for automated compliance
checking.”
Oleynikova, H., Millane, A., Taylor, Z., Galceran, E., Nieto, J. I., and Siegwart, R. Y. (2016). “Signed distance fields:
A natural representation for both mapping and planning.”
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., and Bernstein, M. S. (2023). “Generative agents:
Interactive simulacra of human behavior.” Proceedings of the 36th Annual ACM Symposium on User Interface
Software and Technology, UIST ’23.
Pauwels, P., Deursen, D. V., Verstraeten, R., Roo, J. D., Meyer, R. D., de Walle, R. V., and Campenhout, J. V. (2011). “A
semantic rule checking environment for building performance checking.” Automation in Construction, 20, 506–518.
Poole, B., Jain, A., Barron, J. T., and Mildenhall, B. (2022). “Dreamfusion: Text-to-3d using 2d diffusion.” ArXiv,
abs/2209.14988.
Pouliou, P., Horvath, A.-S., and Palamas, G. (2023). “Speculative hybrids: Investigating the generation of conceptual
architectural forms through the use of 3d generative adversarial networks.” International Journal of Architectural
Computing.
Preidel, C. and Borrmann, A. (2018). “BIM-Based Code Compliance Checking.” Building Information Modeling -
Technology Foundations and Industry Practice, Springer, Cham.
Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., and Sutskever, I. (2022). “Robust speech recognition
via large-scale weak supervision, <https://arxiv.org/abs/2212.04356>.
Radford, A., Metz, L., and Chintala, S. (2015). “Unsupervised representation learning with deep convolutional
generative adversarial networks, <https://arxiv.org/abs/1511.06434>.
Shabani, M. A., Hosseini, S., and Furukawa, Y. (2023). “Housediffusion: Vector floorplan generation via a diffusion
model with discrete and continuous denoising.” 2023 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR).
Solihin, W. and Eastman, C. (2015). “Classification of rules for automated BIM rule checking development.” Automation
in Construction, 53, 69–82.
Stigsen, M., Moisi, A., Rasoulzadeh, S., Schinegger, K., and Rutzinger, S. (2023). “Ai diffusion as design vocabulary
- investigating the use of ai image generation in early architectural design and education.” 587–596 (01).
Sun, C., Han, J., Deng, W., Wang, X., Qin, Z., and Gould, S. (2024). “3d-gpt: Procedural 3d modeling with large
language models.”
Sun, C., Zhou, Y., and Han, Y. (2022). “Automatic generation of architecture facade for historical urban renovation
using generative adversarial network.”
Sydora, C. and Stroulia, E. (2020). “Rule-based compliance checking and generative design for building interiors using
BIM.” Automation in Construction, 120, 103368.
Tomczak, A., v Berlo, L., Krijnen, T., Borrmann, A., and Bolpagni, M. (2022). “A review of methods to specify
information requirements in digital construction projects.” IOP Conference Series: Earth and Environmental
Science, 1101.
Tono, A. and Fischer, M. (2022). “Vitruvio: 3d building meshes via single perspective sketches,
<https://arxiv.org/abs/2210.13634>.
Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., Zhao, W. X., Wei, Z.,
and Wen, J. (2024). “A survey on large language model based autonomous agents.” Frontiers of Computer Science,
18(6).
Wang, S., Zeng, W., Chen, X., Ye, Y., Qiao, Y., and Fu, C.-W. (2021). “Actfloor-gan: Activity-guided adversarial
networks for human-centric floorplan design.” IEEE Transactions on Visualization and Computer Graphics, PP,
1–1.
Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., and Le, Q. V. (2022). “Finetuned
language models are zero-shot learners, <https://arxiv.org/abs/2109.01652>.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. (2024).
“Chain-of-thought prompting elicits reasoning in large language models.” Proceedings of the 36th International
Conference on Neural Information Processing Systems, NIPS ’22.
Wu, J., Nousias, S., and Borrmann, A. (2023). “Parametrization-based solution space exploration for model healing.”
Proc. of the 30th Int. Conference on Intelligent Computing in Engineering (EG-ICE) (Jul).
Xu, X., Wang, Y., Xu, C., Ding, Z., Jiang, J., Ding, Z., and Karlsson, B. F. (2024). “A survey on game playing agents
and large models: Methods, applications, and challenges.”
Yang, X., Wu, Y., Zhang, K., and Jin, C. (2021). “Cpcgan: A controllable 3d point cloud generative adversarial network
with semantic label generating.” Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3154–3162.
Zhang, J. and El-Gohary, N. M. (2017). “Integrating semantic nlp and logic reasoning into a unified system for
fully automated code checking.” Automation in Construction, 73, 45–57.
Zhang, L., Zheng, L., Chen, Y., Huang, L., and Zhou, S. (2022). “Cgan-assisted renovation of the styles and features
of street facades—a case study of the wuyi area in fujian, china.” Sustainability, 14, 16575.
Zheng, J. and Fischer, M. (2023). “Dynamic prompt-based virtual assistant framework for bim information search.”
Automation in Construction, 150, 104980.
Zhou, Y. C., Zheng, Z., Lin, J. R., and Lu, X. Z. (2022). “Integrating NLP and context-free grammar for complex rule
interpretation towards automated compliance checking.” Computers in Industry, 142, 103746.
Zhuang, X., Ju, Y., Yang, A., and Caldas, L. (2023). “Synthesis and generation for 3d architecture volume with
generative adversarial networks.”
Çelen, A., Han, G., Schindler, K., Gool, L. V., Armeni, I., Obukhov, A., and Wang, X. (2024). “I-Design: Personalized
llm interior designer.”