
Text2BIM: Generating Building Models Using a Large Language Model-based Multi-Agent

Framework

Changyu Du1 , Sebastian Esser2 , Stavros Nousias3 , and André Borrmann4

1 Ph.D. Candidate, Chair of Computational Modeling and Simulation, Technical University of Munich, Munich

80333, Germany. Email: [email protected]



2 Postdoctoral Researcher, Chair of Computational Modeling and Simulation, Technical University of Munich,

Munich 80333, Germany. Email: [email protected]


3 Postdoctoral Researcher, Chair of Computational Modeling and Simulation, Technical University of Munich,

Munich 80333, Germany. Email: [email protected]


4 Professor, Chair of Computational Modeling and Simulation, Technical University of Munich, Munich 80333,

Germany. Email: [email protected]

ABSTRACT

The conventional BIM authoring process typically requires designers to master complex and tedious modeling

commands in order to materialize their design intentions within BIM authoring tools. This additional cognitive burden

complicates the design process and hinders the adoption of BIM and model-based design in the AEC (Architecture,

Engineering, and Construction) industry. To facilitate the expression of design intentions more intuitively, we propose

Text2BIM, an LLM-based multi-agent framework that can generate 3D building models from natural language

instructions. This framework orchestrates multiple LLM agents to collaborate and reason, transforming textual user

input into imperative code that invokes the BIM authoring tool’s APIs, thereby generating editable BIM models with

internal layouts, external envelopes, and semantic information directly in the software. Furthermore, a rule-based

model checker is introduced into the agentic workflow, utilizing predefined domain knowledge to guide the LLM

agents in resolving issues within the generated models and iteratively improving model quality. Extensive experiments

were conducted to compare and analyze the performance of three different LLMs under the proposed framework. The

evaluation results demonstrate that our approach can effectively generate high-quality, structurally rational building

models that are aligned with the abstract concepts specified by user input. Finally, an interactive software prototype

was developed to integrate the framework into the BIM authoring software Vectorworks, showcasing the potential of

modeling by chatting.

1 INTRODUCTION

Throughout the last decades, various digital representations and workflows have continuously emerged to represent



the built environment. The notion of Building Information Modeling (BIM) comprises a holistic approach to reflect

built assets with geometric and semantic information, which can be utilized across the entire life-cycle of a building

and shared across different project stakeholders in dedicated representations (Borrmann et al. 2018). Modern BIM

authoring software encompasses design requirements across multiple disciplines. This integrated approach has led to

a proliferation of functions and tools within the software, making the user interface increasingly complex. Designers

often face a steep learning curve and require extensive training to translate design intentions into complex command

flows to create building models in the software (Du et al. 2024a).

In recent years, the application of generative Artificial Intelligence (AI) in architectural design has alleviated this

additional cognitive load, enhancing the creative potential and efficiency of the design process. Current research and

industrial applications primarily focus on generating 2D images or simple 3D volumes (Li et al. 2024a), utilizing

Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and diffusion models to create 2D

architectural floor plans (Luo and Huang 2022; Shabani et al. 2023; Wang et al. 2021), building renderings (Stigsen

et al. 2023; Chen et al. 2023; Graphisoft 2024), architectural facade designs (Sun et al. 2022; Zhang et al. 2022), or

preliminary 3D conceptual forms (Zhuang et al. 2023; Pouliou et al. 2023; Tono and Fischer 2022). Recent research

proposed using large language models (LLMs) to automatically generate wall details (Jang et al. 2024), but more

complex 3D building model generation remains largely unexplored.

In other sectors, such as game development and virtual reality, advanced 3D generative models like DreamFusion

(Poole et al. 2022) and Magic3D (Lin et al. 2023) can generate complex 3D models with rich textures directly from

text descriptions, allowing designers to express design intent in natural language without tedious modeling commands.

However, the outputs from these Text-to-3D methods are typically based on voxels, point clouds, meshes, or implicit

representations like Neural Radiance Fields (NeRFs) (Mildenhall et al. 2021), which only contain geometric data of

the outer surfaces and cannot model possible internal contents of the 3D objects, nor do they include any semantic

information.

The differences between these purely geometric 3D shapes and native BIM models make it challenging to integrate

them into BIM-based architectural design workflows. Designers cannot directly modify and edit the generated contents

in BIM authoring software, and due to the lack of semantic information, these models are also difficult to apply in

downstream building simulation, analysis, and maintenance tasks.

1.1 Novelty and contributions

To bridge these gaps, we propose Text2BIM, which converts natural language descriptions to 3D building models

with external envelopes, internal layouts, and semantic information. By representing building models as imperative

code scripts invoking the BIM authoring software's Application Programming Interfaces (APIs), we enable multiple Large

Language Model (LLM) agents to collaborate and autonomously generate executable code that ultimately produces



native BIM models capable of further editing directly in the software. The proposed framework utilizes specific

prompts to guide LLMs in generating architecturally rational outcomes and automatically evaluates the generated

BIM models against domain-specific building rules. It allows LLMs to iteratively improve the model quality through

multiple feedback loops, incorporating domain knowledge from the rule-based system. A variety of complex test cases

were designed to comprehensively evaluate the framework’s generation capabilities and the quality of the results. We

also implemented an interactive software prototype to integrate the proposed framework into the BIM authoring tool

Vectorworks, demonstrating new possibilities for modeling-by-chatting during the design process.

The scope of this paper is limited to generating regular building models at the early design stage. The generated

models only include essential building components such as interior and exterior walls, slabs, roofs, doors, and windows,

along with representative semantic information like stories, spaces, and material definitions. Our goal is to generate

reasonably laid-out 3D buildings with a certain level of model quality from natural language descriptions, providing

designers with a reference to further refine the designs in BIM authoring software. This approach aims to partially

liberate designers from the tedious and repetitive modeling commands and express design intent more intuitively.

Nevertheless, the user can proceed to modify the resulting models in the BIM authoring tool at any time, ensuring a

balanced level of automation and engineering autonomy.

2 RELATED WORK

2.1 Generative AI in 3D building design

The application of generative AI in the field of 3D building design is gradually becoming a research hotspot.

The key lies in constructing appropriate data representations for existing design data, experiential knowledge, and

physical principles, and then training corresponding algorithms to intelligently generate new designs (Liao et al. 2024).

de Miguel Rodríguez et al. (2020) use connectivity vectors to represent different 3D mesh-like building geometries. By

training a Variational Autoencoder (VAE) using these data, more building shapes can be generated by reconstructing

interpolated positions within the learned distribution. Vitruvio (Tono and Fischer 2022) uses the occupancy field to

describe the building shape by assigning binary values to each point in the 3D space, indicating whether the point is

occupied by an object. They employ a modified occupancy network (Mescheder et al. 2019) to learn this representation,

enabling the reconstruction of a 3D printable building mesh from a single perspective sketch.

Other researchers are employing GANs to generate conceptual 3D buildings for the early design stage. Pouliou

et al. (2023) proposed using CPCGAN (Yang et al. 2021) to generate point cloud representations of building geometries

based on specific site rules. Ennemoser and Mayrhofer-Hufnagl (2023) decoded 3D voxels into 2D images to train

a DCGAN (Radford et al. 2015), and then used signed distance fields (SDF) (Oleynikova et al. 2016) to convert

the generated images back into voxels. Although their method produces voxel models that can partially reconstruct

the interior spaces of buildings, the generated results still suffer from issues such as geometric inconsistency and



inaccuracy, making them far from practical architectural models.

Overall, current research utilizing 3D generative algorithms based on pure geometric representations is still limited

to generating conceptual architectural forms and has not yet been able to produce complex 3D building models with a

high level of development (LOD) that exhibit both coherent exterior and interior geometry. Additionally, the results

generated by these purely data-driven methods are difficult to constrain using text-based architectural rules. Better data

representation methods need to be explored.

The rapidly advancing generative AI technologies, such as diffusion models and large language models (LLMs),

have shown significant potential in the field of architectural design. The application of diffusion models is still

primarily focused on tasks based on 2D images, such as generating architectural renderings from text (Li et al. 2024b)

and replacing GANs to produce more robust structural designs (He et al. 2023). On the other hand, the application of

LLMs is mainly centered on using natural language to retrieve data from BIM models (Zheng and Fischer 2023) and

enhancing human-machine interaction in BIM authoring software (Du et al. 2024b; Fernandes et al. 2024). However,

the concept of using LLMs to generate 3D building models has not yet been explored. One of the key challenges is

representing 3D models as one-dimensional text data that LLMs can use (Liao et al. 2024). This textual representation

must concisely capture the features of the model, avoiding the verbosity caused by the overly detailed granularity typical

of conventional model serialization files. A recent study (Jang et al. 2024) proposed converting BIM models into XML

format and then using LLM to process this structured text to add wall details. Finally, the modified XML is converted

back into the BIM model to achieve automatic wall detailing. Unlike their approach, we propose representing the BIM

model as imperative code. By constructing and invoking high-level modeling APIs in the BIM authoring tool, we aim

to express the geometric and semantic features of the model using the minimal and most flexible text format possible

while also maximizing the benefits from the powerful code generation capabilities of LLMs.

2.2 LLM-based agents

A Large Language Model-based agent refers to an autonomous system that utilizes an advanced language model to

perform tasks involving perception, decision-making, and action (Wang et al. 2024). These systems typically equip the

LLM with tools to interact with the external environment, as well as memory modules to retain the thought processes,

observations, and action records. The LLM-based agent uses the LLM as the "brain", leveraging its powerful in-context

learning (Dong et al. 2024) capabilities to synthesize information from various sources and deploy appropriate tools

for different scenarios through linguistic reasoning, thereby enabling the system to behave, plan, and execute tasks

like humans (Du et al. 2024b). The agents can operate individually or in multi-agent systems where they collaborate,

communicate, and specialize in distinct roles to solve more dynamic and complex problems using collective intelligence

(Guo et al. 2024).

LLM-based multi-agent systems have growing applications in software development (Hong et al. 2023), gaming (Xu



et al. 2024), and societal simulations (Park et al. 2023), etc. However, their use in 3D design is still in the exploratory

stage. The 3D-GPT framework (Sun et al. 2024) breaks down 3D modeling tasks into multiple steps, coordinating three

LLM agents in a manner akin to a human team to create 3D assets in Blender that match textual descriptions. SceneCraft

(Hu et al. 2024) utilizes multiple LLM agents to convert textual descriptions into Python scripts executable in Blender,

automatically rendering 3D scenes suitable for use in games and films. This system employs scene graphs to simulate

spatial relationships between assets and iteratively refines scenes using visual language models. In the architecture

domain, similar to the aforementioned research, Çelen et al. (2024) enable multiple LLM agents to create scene graphs

based on user instructions, and then uses a backtracking algorithm to place furniture, ultimately generating interior

design scenes. Mehta et al. (2024) propose an interactive framework allowing human architects to collaborate with an

LLM agent using natural language instructions to construct structures. The agent can place blocks, seek clarifications,

and integrate human feedback within a Minecraft-like 3D simulation environment. Du et al. (2024b) integrate an LLM

agent into BIM authoring software to answer software usage questions via Retrieval-Augmented Generation (RAG)

(Lewis et al. 2020) and perform simple modeling tasks based on natural language instructions. In conclusion, the

existing literature within our field has only explored the application of LLM-based agents in straightforward modeling

contexts. There is currently a dearth of studies exploring the potential of multi-agent systems to generate sophisticated

3D BIM models.

2.3 BIM-based model checking

With the continuous adoption of BIM concepts in the design phase of built assets, the rich information base provides

a sophisticated foundation for several downstream applications. The Industry Foundation Classes (IFC) (ISO 2024)

data model is well established to exchange digital representations of built assets comprising geometric and semantic

information. These representations are perfectly tailored to automatically perform checks regarding the compliance

of the envisioned design against various rules and guidelines. Such approaches have gained increasing interest from

different stakeholders in the industry throughout the last few years. A comprehensive overview of opportunities and

related challenges has been described by Preidel and Borrmann (2018). Eastman et al. (2009) have introduced an

overall approach towards automated code compliance checking based on BIM models. They divide the overall checking

process into four stages: rule interpretation to create machine-readable rules, building model preparation with advanced

analysis, rule checking execution, and reporting of detailed defects and issues.

The rules a model should be compliant with can vary in their complexity. To account for this challenge, Solihin

and Eastman (2015) have introduced a classification system for rules, which comprises four different levels. Rules

assigned to class 1 require a single or a small number of explicit data to be available. A typical example of such a rule

is the inspection of a dedicated property assigned for each element in the model. Class 2 rules are characterized by the

derivation of simple attribute values. Such calculations can comprise simple arithmetic or trigonometric calculations



based on geometric representations or the aggregation of semantic information. Rules subsumed under class 3 require

an extended data structure and processing. Such rules require a comprehensive processing of semantic and geometric

data and the evaluation of intermediate calculations. Prominent examples of such rules can be found in the area

of evaluations related to fire safety regulations. These rules often involve the assessment of material parameters of

different components, geometric features, and path search algorithms to identify relevant spaces and corridors to be

checked. Class 4 considers rules that cannot be evaluated by prescribed features but rather require a holistic evaluation

of extracted information. In most cases, these rules consider multiple objectives, which are difficult (or even impossible)

to formulate in a sequential workflow. Hence, it is expected that software applications support users to extract and

identify relevant model information but ultimately don’t provide a simple pass/fail statement at the end of a checking

run.

Besides investigations regarding the varying complexity of rules, other researchers have focused on the translation

of human-readable guidelines and regulations (Zhang and El-Gohary 2017; Zhou et al. 2022; Fuchs et al. 2022) and

presented different methods to formulate the rules in machine-readable representations (Sydora and Stroulia 2020;

Häußler et al. 2021). As a recent development, the Information Delivery Specification (IDS) standard developed

and maintained by buildingSMART International enables the specification of rules targeting basic property checks

in a unified and vendor-independent manner (Tomczak et al. 2022). In its current development stage, however, it

merely supports semantic information but lacks options to specify comprehensive geometric conditions. Nuyts et al.

(2024) have investigated different approaches to current compliance checking techniques and discussed advantages and

downsides. As an extension to the approaches already mentioned, they also considered techniques related to linked

data, which were raised earlier by Pauwels et al. (2011).

2.4 Summary and identified research gaps

Numerous studies have explored the use of generative AI to create geometric representations of conceptual buildings.

However, these advancements have not been integrated into the field of BIM-driven building design. Based on the

conducted literature review, our approach appears to be the first that utilizes collaborative LLM agents to generate

BIM models with relatively high LOD based on natural language instructions, ensuring compliance and consistency

by employing rule-driven model checking.



Fig. 1. The proposed LLM-based multi-agent framework with a sample user instruction

3 METHODOLOGY

We propose Text2BIM, an LLM-based multi-agent framework, where four LLM agents assume different roles and

collaborate to convert natural language instructions into imperative code, thereby generating building models in BIM

authoring software. The core idea is to encapsulate the underlying modeling APIs of software using a series of custom

high-level tool functions. By using prompt engineering techniques to guide LLMs in calling these functions within the

generated code, we can construct native BIM models through a concise and efficient textual representation.

The overall framework with a sample user input is shown in Fig. 1. To realize the core concept outlined above, we

make use of four LLM-based agents with dedicated tasks and skills that interact with each other via text:

• Product Owner: Refines and enhances user instructions and generates detailed requirement documents.

• Architect: Develops textual building plans based on architectural knowledge.

• Programmer: Analyzes the requirements and writes code for modeling.

• Reviewer: Provides code optimization suggestions to address issues identified in the model.

Due to the typically brief and open-ended nature of user inputs, we first designed an LLM agent acting as a

Product Owner to expand and refine the user instruction. This ensures the instruction contains sufficient information to

guide the downstream Programmer agent to invoke suitable tool functions in its code. The Product Owner agent’s

elaboration and detailing of the original instructions reference multiple sources. Firstly, it reads information from the



custom toolset, which contains the modeling functions and the corresponding text descriptions. This allows the agent

to understand whether the input parameters required for calling the corresponding functions are available within the

user’s instructions or if additional information is needed. Secondly, it draws on the knowledge of the Architect agent.

When the Product Owner deems more architectural context or building plans necessary, especially in cases where

the building requirements are complex, it can opt to consult with the Architect. The Architect agent is designed to

generate building plans in a structured text format with coordinates and dimensions according to certain architectural

rules, combined with the user instruction relayed by the Product Owner.

The original user instruction, after being enhanced by the Product Owner, becomes a detailed requirement

document to guide the Programmer agent in combining and invoking appropriate functions from the toolset to

construct the building model expected by the user. The generated code is evaluated by a custom Python interpreter with

syntax checking. If exceptions are raised during code execution, the Programmer will be automatically prompted to

self-reflect and iteratively improve the code until errors are resolved.

The successfully generated building model is automatically exported into an IFC-based representation and then

sent to a downstream model checker for automatic quality assessment. We customized a series of domain-specific

rules in the checker to comprehensively evaluate model quality from various perspectives, including geometric analysis,

collision detection, information verification, etc. The results of the checks are ultimately exported in BIM Collaboration

Format (BCF), containing descriptions of the issues found in the model along with the associated building component

GUIDs.

At this stage, a Reviewer agent is introduced to interpret the BCF files and provide suggestions for optimizing the

model. The Reviewer is designed to understand the current issues in the model by reading the information recorded in

the BCF files. It then proposes solutions by combining this information with the Programmer’s previously generated

code and tool function documentation. This involves guiding and prompting the Programmer agent to use the

appropriate tool functions to fix the code in order to resolve the issues present in the model. This model quality

optimization loop involving the Reviewer, Programmer, and model checker will iterate multiple times until the

checker reports no errors or the agents are unable to resolve the issues autonomously.

To ensure that the agents can perceive comprehensive contextual information within the loop, we implement a local

loop-internal memory module to store the historical interaction between the Programmer and Reviewer during the

optimization process. In addition, a global memory module shared by the Programmer and the Product Owner is

designed to store the user’s historical chat records and the corresponding code information. This allows the agents

to have continuous contextual information during the conversation with the human, enabling the entire framework to

improve and refine responses based on human feedback. Through these multiple optimization loops, we can iteratively

guide the LLM agents using domain knowledge to generate BIM models that meet certain design quality, user intentions,

and engineering requirements.
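The paper does not publish the implementation of these memory modules; the following is a minimal sketch of how such append-only stores might be structured, with all names hypothetical:

from dataclasses import dataclass, field

@dataclass
class MemoryModule:
    """Append-only store of agent interactions (hypothetical sketch)."""
    records: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.records.append({"role": role, "content": content})

    def as_context(self) -> str:
        # Serialize the history so it can be inserted into a prompt template
        return "\n".join(f"{r['role']}: {r['content']}" for r in self.records)

    def purge(self) -> None:
        self.records.clear()

# Global memory: shared by Product Owner and Programmer across the whole session
global_memory = MemoryModule()
# Local memory: scoped to one quality-optimization loop and purged afterwards
local_memory = MemoryModule()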



Fig. 2. Top 50 most used commands in BIM authoring software Vectorworks (based on collected large-scale log
datasets)

In the following subsections, we will describe several key modules of the proposed framework in detail.

3.1 Toolset

The manually defined tool functions can essentially be viewed as high-level, concise API interfaces exposed to the

LLMs. Since the native APIs of BIM authoring software are usually fine-grained and low-level, each tool inherently

encapsulates the logic of combining different callable API functions to achieve its functionality. This not only avoids

the tediousness and complexity of low-level API calls but also incorporates specific design rules and engineering logic,

ensuring the precision of the modeling tasks handled by the tool (Du et al. 2024b). However, designing general tool

functions to effectively address various building scenarios presents a significant challenge.

To address this difficulty, we employed both quantitative and qualitative analysis methods to determine what

tool functions to implement. We began our investigation by examining user log files from BIM authoring software

to understand which commands (tools) human designers most frequently use while interacting with the software.

We utilized one day’s log data collected from 1,000 anonymous global users of the design software Vectorworks,

encompassing approximately 25 million records in 7 languages. After cleaning and filtering the raw data (Du et al.

2024a), the top 50 most frequently used commands are extracted, as shown in Fig. 2.



Due to the significant differences between how humans interact with software and how LLM agents interact through

code, commands directly triggered by mouse operations, such as drag, nudge, and resize, occupy a large proportion

of the log data. Additionally, since the data is completely anonymous, we cannot determine the users’ disciplines or

usage scenarios. However, some common steps in the modeling process can still be observed from this vast dataset,

such as delete, move, duplicate, set working layer, etc. We excluded mouse-dominated commands and highlighted

in orange the general modeling commands that can be implemented via APIs in the chart, to serve as a reference for

building tool functions for the agents.

On the other hand, we analyzed the built-in graphical programming tool Marionette (similar to Dynamo/Grasshopper)

in Vectorworks. In fact, the nodes (batteries) provided by these visual scripting platforms are typically encapsulated

versions of the underlying APIs tailored to different scenarios, serving as a higher-level and more intuitive programming

interface for designers. Software vendors categorize the default nodes based on their functionalities, making it easier

for designers to understand and use them. These motivations are similar to ours. Given our use case of creating regular

BIM models, we mainly refer to the nodes under the "BIM" category.

Finally, we also considered the essential components and steps required in the typical building modeling process

of architects, such as creating floors and walls, setting materials and elevation, etc. By comprehensively synthesizing

information from these three perspectives, we designed a set of 26 tool functions for the agents, as shown in Tables 4-6

in Appendix I, covering important aspects of BIM authoring such as geometric modeling and semantic

enrichment. Since the LLM agents primarily understand and use the tools through their descriptions, we clearly

defined the functionality, usage scenarios, input parameters, and return parameters of each function using a structured

text format.
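The concrete tool functions are listed only in the Appendix; the hypothetical example below merely illustrates the pattern of a high-level tool whose structured description (functionality, usage scenario, parameters, return value) is what the agents actually read:

def create_wall(start: tuple, end: tuple, height: float,
                thickness: float = 0.3, material: str = "concrete") -> str:
    """Create a straight wall on the active story.

    Use when: a wall segment between two points is needed.
    Args:
        start: (x, y) coordinates of the wall start point, in meters.
        end: (x, y) coordinates of the wall end point, in meters.
        height: wall height in meters.
        thickness: wall thickness in meters.
        material: material name, e.g. "concrete", "brick", "wood".
    Returns:
        The GUID of the created wall element.
    """
    # In the real toolset this body would combine several low-level
    # Vectorworks API calls; here it is only a placeholder.
    ...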

3.2 Prompt enhancement and building plan generation

The original user prompt 𝛼 needs to be expanded and enhanced into more specific representations aligned with

architectural design. Therefore, the tasks of the Product Owner and Architect at this stage are crucial, as they

determine the basic layout and quality baseline of the final generated model. The prompt template P_Arch designed for

the Architect is shown in Fig. 3. In this template, we first define the role and tasks of the agent and strictly specify the

output format and content to meet downstream programming requirements. Then, we incorporate 9 basic architectural

rules and principles into the prompt (e.g., components configuration, interior partition, building opening layout, etc.),

guiding the LLM to produce architecturally reasonable and structurally integrated building plans.
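The full template wording is given in Fig. 3; purely as an illustration of the structure just described, such a template might be assembled as follows (rules and output format abridged and hypothetical):

ARCHITECT_PROMPT = """
You are an experienced architect. Given the user task below, produce a
building plan as structured text with explicit coordinates and dimensions.

Rules (excerpt):
1. Every room must be enclosed by walls and have at least one door.
2. Windows may only be placed on exterior walls.

Output format:
Story <n>:
  Wall: (x1, y1) -> (x2, y2), height <h> m, material <m>
  Door: on wall <id>, offset <d> m from the wall start point

User task: <<user_task>>
"""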

Additionally, we employ the few-shot learning approach (Brown et al. 2020) to provide the LLM with an example

conversation to better lead it in producing robust outputs. The user input α′ in the Architect's prompt template

is a paraphrase of the original user instruction α provided by the Product Owner. This is because we use the

function-calling mechanism to connect two agents: the Architect is wrapped in a function layer F, with the user



Fig. 3. Prompt template of the Architect agent. Placeholders with “«»” indicate the dynamic content that can be
inserted into the template.
task defined as the input parameter and the building plan as the output. The description documents the Architect’s

responsibilities, and this function is the only tool the Product Owner can invoke. It is important to note that the tool

here for function-calling is not related to our toolset designed for code generation tasks. The generation of the textual

building plan can be summarized by Eq. 1:

Building_Plan ← F(LLM_Architect(P_Arch(α′)))   (1)

Most mainstream LLMs have been fine-tuned to support function-calling, enabling them to return structured

JSON-formatted responses containing the function name and parameter values to be executed (OpenAI 2024b; Mistral 2024a;

Gemini 2024). The basic workflow of the function-calling is as follows. When the Product Owner receives user

instructions, it infers from the prompt context whether an external function needs to be called to complete the task. If

so, it generates a JSON object, explicitly indicating the function name (Architect) and parameters (paraphrased user

needs). We automatically interpret the JSON locally and invoke the Architect function. Upon receiving the task, the

Architect generates the corresponding building plan. This information is then concatenated as additional context to

the Product Owner’s prompt, guiding it to regenerate the final standard text response. The core goal of this design

is to enable the Product Owner to flexibly decide whether to consult the Architect based on the complexity of the

task, thereby making the interaction between the two agents more intelligent and efficient. This way, the Product

Owner can work independently on simple tasks and leverage the Architect’s expertise for complex tasks, optimizing

the overall workflow.
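Using the OpenAI-style function-calling convention as one example (the framework supports several LLM vendors), the Architect function exposed to the Product Owner could be declared roughly as follows; the schema is abridged and its field contents are hypothetical:

architect_tool = {
    "type": "function",
    "function": {
        "name": "Architect",
        "description": "Generates a structured textual building plan with "
                       "coordinates and dimensions for the given design task.",
        "parameters": {
            "type": "object",
            "properties": {
                "user_task": {
                    "type": "string",
                    "description": "Paraphrased user requirements",
                }
            },
            "required": ["user_task"],
        },
    },
}

# A tool call returned by the Product Owner's LLM would then look like
# {"name": "Architect", "arguments": "{\"user_task\": \"Design a two-story ...\"}"}
# and is executed locally, with the resulting building plan appended to the
# Product Owner's context before its final response is regenerated.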

The prompt template P_PO for the Product Owner is shown in Fig. 4. The Product Owner's role is to synthesize

various contextual information, including the user's original instructions α, the building plans from the Architect,

descriptions of function usage T in the toolset, previous chat history with the user Φ_global from the global memory

module, and the output specifications in the prompt template, to create a detailed and comprehensive requirements

document α_enhanced. This document serves as a guide for downstream code generation tasks. We utilize the Chain of

Thought (CoT) technique (Wei et al. 2024) in the prompt, which requires the LLM to reason step-by-step, breaking

down the building modeling task into sub-tasks and sub-logics that utilize different tool functions. Therefore, the whole

process of user prompt enhancement can be expressed by the following Eq. 2:

α_enhanced ← LLM_ProductOwner(P_PO(α, T, Φ_global) | Building_Plan)   (2)

3.3 Coding for BIM model generation

The enhanced user requirements α_enhanced will be input into the Programmer's prompt template P_Co, as illustrated

in Fig. 5. The Programmer agent is required to write concise Python code utilizing the functions solely from the



Fig. 4. Prompt template of the Product Owner agent. Placeholders with “«»” indicate the dynamic content that can
be inserted into the template.

Fig. 5. Prompt template of the Programmer agent. Placeholders with “«»” indicate the dynamic content that can be
inserted into the template.



toolset and the built-in standard Python libraries to accomplish the tasks specified by the Product Owner. The

advantage of generating Python code is that the LLM can flexibly combine and call different tool functions using various

algorithmic logic. This is more powerful than JSON-based function-calling in a recent work (Fernandes et al. 2024),

which is constrained to executing single functions sequentially and cannot meet the complex logical demands of tasks

such as building modeling. Given that the enhanced requirements contain rich contextual information, we choose to

leverage the zero-shot learning (Wei et al. 2022) ability of the agent without the provision of exemplars. This approach

is intended to allow the LLM to flexibly explore different code logic tailored to various task requirements, rather than

being constrained by rigid examples, thereby maximizing the utilization of its pre-trained knowledge. Eq. 3 illustrates

the process of code generation, where we denote tool information as T and the historical chat records with the Product

Owner as Φ_global:

code ← LLM_Coder(P_Co(α_enhanced, T, Φ_global))   (3)

A custom Python interpreter will execute the generated code within a controlled environment. We use an abstract

syntax tree (AST) to represent the code, traversing the tree nodes to evaluate each Python expression. This approach

allows us to customize the usable syntax and callable functions while enabling more precise error handling. The

interpreter uses a state dictionary to store and track the results of code execution, including imported packages, defined

function objects, and variable names and their values. This lays the technical foundation for the Programmer agent’s

memory capability — the LLM can utilize and access variables and functions defined in previous dialogues or directly

continue previous code. The interpreter can still execute code correctly by retrieving the state dictionary, ensuring

comprehensive context is maintained at the code level throughout the entire session. We extended previous work (Du

et al. 2024b) to allow the interpreter to support more data types and advanced syntax while restricting potentially

problematic syntax like the while statement. Additionally, the interpreter can only evaluate functions from the toolset

and the Python built-in library (plus custom functions defined within the generated code), preventing the invocation

of arbitrary third-party packages.
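The custom interpreter itself is not published; the sketch below is a greatly simplified approximation of the described mechanism, combining static AST checks (banning while loops, imports, and unknown function calls) with a persistent state dictionary. The toolset contents are hypothetical:

import ast

TOOLSET = {"create_wall", "create_slab", "print", "range", "len"}  # hypothetical

class SafeChecker(ast.NodeVisitor):
    """Static checks: ban while loops and imports, restrict callable names."""

    def __init__(self):
        self.allowed = set(TOOLSET)

    def visit_While(self, node):
        raise SyntaxError("'while' statements are not allowed")

    def visit_Import(self, node):
        raise SyntaxError("importing third-party packages is not allowed")

    visit_ImportFrom = visit_Import

    def visit_FunctionDef(self, node):
        self.allowed.add(node.name)  # functions defined in the snippet are callable
        self.generic_visit(node)

    def visit_Call(self, node):
        if isinstance(node.func, ast.Name) and node.func.id not in self.allowed:
            raise NameError(f"'{node.func.id}' is not in the toolset")
        self.generic_visit(node)

def run(code: str, state: dict) -> dict:
    tree = ast.parse(code)
    SafeChecker().visit(tree)
    # The state dict persists across calls, so later snippets can reuse
    # variables and functions defined in earlier dialogue turns.
    exec(compile(tree, "<agent_code>", "exec"), state)
    return state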

If an exception E is thrown by the interpreter during code evaluation, resolving this error will be treated as a new

task input for the Programmer. The code that caused the exception will be used as the new history information. Both

the error and the code will be re-input into the prompt template P_Co for the Programmer to regenerate new code. As

shown in Eq. 4, this self-reflective modification loop will iterate up to n times until the newly generated code_{n+1} is free

of exceptions E_{n+1}. If the issue is not resolved within 3 attempts, we will interrupt the process to seek human feedback.

code_{n+1} ← LLM_Coder(P_Co(E_n, T, code_n))  where n ∈ [0, 3)   (4)

It should be noted that within this self-reflection loop, no updates will be made to any memory modules; only the



Fig. 6. Prompt template of the Reviewer agent. Placeholders with “«»” indicate the dynamic content that can be
inserted into the template.

state dictionary will be updated. In each iteration, the only historical information the Programmer can see is the code

that failed in the previous step. We only store responses from the Programmer that are free of errors in the memory

modules.

3.4 Model quality assessment and iterative improvements

The successfully executed code will generate a building model within the BIM authoring software. Although

we used extensive prompt engineering in earlier stages to guide the LLMs in producing spatially and geometrically

reasonable results, the inherently stochastic nature of the process may still lead to flaws in the generated building.

Therefore, deterministic domain-specific rules are used to verify and refine the generated BIM model. We employ a

rule-based model checker to evaluate the model quality. Following Solihin and Eastman (2015), we defined a series

of rules covering classes 1 to 3, primarily checking for geometric conflicts between components (e.g., whether doors

and windows overlap), correct semantic attribute definitions (e.g., whether each component has a unique GUID), and

compliance of spatial layouts with architectural common sense (e.g., whether the roof is supported by walls and not

floating). Detailed documentation of all rules can be found in the Appendix II. The issues identified in the model will

be exported to BCF files. A script is used to automatically extract useful information from the BCF, i.e., the name and

description of the issues and corresponding rules, as well as the GUIDs of the associated components. We denote this

information as I and input it along with the code_0 that generated the checked model and the toolset information T into

the Reviewer agent's prompt template P_Rev (see Fig. 6), asking the Reviewer to provide suggestions β on solving the



issues. This process can be represented by Eq. 5:

β ← LLM_Reviewer(P_Rev(I, T, code))   (5)
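The BCF extraction script mentioned above is not published; a minimal sketch, assuming the standard BCF 2.x zip layout (one folder per topic containing a markup.bcf file, with component GUIDs held in viewpoint .bcfv files), might look like this:

import zipfile
import xml.etree.ElementTree as ET

def extract_issues(bcf_path: str) -> list:
    """Pull issue titles, descriptions and component GUIDs out of a .bcfzip."""
    issues = []
    with zipfile.ZipFile(bcf_path) as bcf:
        for name in bcf.namelist():
            if not name.endswith("markup.bcf"):
                continue
            topic = ET.fromstring(bcf.read(name)).find("Topic")
            if topic is None:
                continue
            folder = name.rsplit("/", 1)[0]
            guids = [c.get("IfcGuid")
                     for vp in bcf.namelist()
                     if vp.startswith(folder) and vp.endswith(".bcfv")
                     for c in ET.fromstring(bcf.read(vp)).iter("Component")]
            issues.append({"title": topic.findtext("Title", default=""),
                           "description": topic.findtext("Description", default=""),
                           "guids": guids})
    return issues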

The suggestions β are then fed to the Programmer as an input task to guide it in generating code to resolve conflicts. This quality

optimization loop will iterate multiple times until the checker no longer finds any errors. Similar to the self-reflection

loop, if issues persist after 3 attempts, we will interrupt the loop and have a human designer resolve the issues. In

summary, our framework employs triple loops to generate a BIM model and iteratively improve its quality, which can

be summarized by Alg. 1.

Algorithm 1 Triple loops for model generation and optimization


1:  repeat when user gives instruction α                                      // Human-feedback loop
2:      Condition for calling Architect: I_condition ← LLM_ProductOwner(P_PO(α, T, Φ_global))
3:      Building_Plan ← I_condition · F(LLM_Architect(P_Arch(α′)))
4:      α_enhanced ← LLM_ProductOwner(P_PO(α, T, Φ_global) | Building_Plan)
5:      Code generation: code_0 ← LLM_Coder(P_Co(α_enhanced, T, Φ_global))
6:      for n = 0 to 3 do                                                     // Self-reflection loop
7:          Code execution: Model ← Interpreter(code_n)
8:          if Exception E_n in code_n then
9:              if n = 3 then
10:                 Stop and require new user input α
11:             else
12:                 Self-correction: code_{n+1} ← LLM_Coder(P_Co(E_n, T, code_n))
13:             end if
14:         else
15:             Break
16:         end if
17:     end for
18:     Update Φ_global with α, α_enhanced, and code_n
19:     Update Φ_local with code_n
20:     for t = 0 to 3 do                                                     // Quality-optimization loop
21:         Model checking: I_t ← Checker(Model_t, Rules)
22:         if Issues I_t exist in Model_t then
23:             if t = 3 then
24:                 Stop, purge Φ_local, and require human intervention
25:             else
26:                 Issue resolution proposal: β_t ← LLM_Reviewer(P_Rev(I_t, T, Φ_local))
27:                 Code revision: code_{t+1} ← LLM_Coder(P_Co(β_t, T, code_t))
28:                 Re-enter self-reflection loop, executing code_{t+1} to generate Model_{t+1}
29:                 Store the error-free code_{t+1} in Φ_local
30:             end if
31:         else
32:             Break and purge Φ_local
33:         end if
34:     end for
35: until user closes the loop
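Rendered as Python, the control flow of Alg. 1 might be condensed as follows; the helper functions standing in for the agents, interpreter, and checker are hypothetical stubs, not the paper's actual implementation:

MAX_RETRIES = 3

# Hypothetical stubs for the agents and components of Fig. 1; each would
# wrap an LLM call or, for interpreter/checker, the respective module.
def consult_architect(task): ...
def product_owner(task, plan=None): ...
def programmer(task, history=None): ...
def reviewer(issues, code): ...
def interpreter(code): ...
def checker(model): ...

def run_with_self_reflection(code):
    for n in range(MAX_RETRIES + 1):  # self-reflection loop (Eq. 4)
        try:
            return code, interpreter(code)
        except Exception as err:
            if n == MAX_RETRIES:
                raise  # hand over to the human user
            code = programmer(str(err), history=code)

def handle_instruction(alpha):
    """One pass through the human-feedback loop of Alg. 1."""
    plan = consult_architect(alpha)  # optional, decided by the Product Owner
    code = programmer(product_owner(alpha, plan))
    for t in range(MAX_RETRIES + 1):  # quality-optimization loop
        code, model = run_with_self_reflection(code)
        issues = checker(model)
        if not issues:
            break  # checker reports no errors
        if t == MAX_RETRIES:
            raise RuntimeError("unresolved issues, human intervention required")
        code = programmer(reviewer(issues, code), history=code)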



4 PROTOTYPE IMPLEMENTATION

We developed an interactive software prototype using the architecture shown in Fig. 7, integrating the proposed

framework into the BIM authoring software Vectorworks. Our implementation is based on Vectorworks’ open-source

web palette plugin template (Vectorworks 2024) and significantly extends previous work (Du et al. 2024b) to support

multi-agent workflows. The frontend of the web palette is implemented with Vue.js and runs in a web environment

built on Chromium Embedded Framework (CEF), allowing us to embed dynamic web interfaces in Vectorworks using

modern frontend technologies. The backend of the web palette is a C++ application, enabling the definition and

exposure of asynchronous JavaScript functions within a web frame, while the actual logic is implemented using C++

functions.

Fig. 7. Software architecture of the implemented prototype in Vectorworks based on web palette plugin template

Since our multi-agent framework is entirely based on Python, we invoke Vectorworks’ built-in Python engine

within the corresponding C++ functions to execute our code, to which the exposed JavaScript functions delegate. Our

framework supports various mainstream LLMs, such as GPT-4o (OpenAI 2024a), Gemini-1.5-Pro (Google 2024),

and the state-of-the-art open-source model Mistral-Large-2 (Mistral 2024b). Within the framework, we maintain a

state dictionary and a local memory module to store the state between Python calls and the interaction records of the

agents in the quality optimization loop, respectively. We use Vectorworks’ API to export the generated models as IFC

files, and then use the Solibri Autorun commands to automatically launch the Solibri Model Checker, perform checks,



and output BCFs. This automation enables the entire workflow to operate fully autonomously. In the Solibri Model

Checker, 30 rules are implemented according to the rule documentation in Appendix II, as shown in Fig. 8.

Fig. 8. Implemented rules in Solibri Model Checker

On the frontend, a session begins with a new user input. If the input is audio, we employ the Whisper model

(Radford et al. 2022) to convert the speech to text and populate the input message box. We use page session storage

to store user input text and the chat records of the agents, displaying them sequentially in the dialog box with different

colors. Users can easily start a new session by reloading the page. Fig. 9 shows the developed prototype. Users

can chat directly with the agents by clicking the microphone button, generating editable building models directly in

Vectorworks. A demo video can be found in supplemental materials.
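A minimal sketch of this speech-to-text step using the open-source whisper package; the model size and file name are placeholders:

import whisper  # pip install openai-whisper

model = whisper.load_model("base")            # model size is a placeholder
result = model.transcribe("instruction.wav")  # audio file name is a placeholder
user_text = result["text"]                    # populates the chat input box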



Fig. 9. Seamless integration in Vectorworks. Users can give instructions via audio or text in the built-in chat window.
The responses of the individual agents are displayed in different colors in the dialog box. The generated building model
contains rich semantic information and can be further edited directly in the BIM software. A demo video can be found
in supplemental materials.

5 EXPERIMENTS AND EVALUATION

This section is devoted to the experimental evaluation of the presented methodology. The evaluation is performed

via the employment of test user prompts (instructions) to the proposed framework and comparing the generated

outcome of various LLMs including GPT-4o (OpenAI 2024a), Mistral-Large-2 (Mistral 2024b) and Gemini-1.5-Pro

(Google 2024). Ten user prompts were conceptualized to comprehensively test the generative capabilities and qualities

of the proposed framework from various perspectives, as illustrated in Table 1. Table 2 summarizes the different

architectural scenarios/requirements covered by each test prompt, including aspects such as shape, dimensions, spatial

features, room layouts, construction materials, etc., focusing on the functional and aesthetic elements essential to

their respective purposes (whether residential or commercial). Additionally, by deliberately leaving some building

requirements unspecified in the test prompts, we aim to experiment with the framework’s ability to generate designs in

open-ended settings. Given the inherent stochasticity of generative models, each test prompt was input into each LLM

five times, resulting in a total of 391 IFC models (including intermediate results from the optimization process). The

experiments are based on this dataset and aim to report statistically significant results.

Table 3 presents the pass rates of final models generated by different test prompts during model checking. These

rates are calculated based on a total of 30 domain-specific checking rules implemented in Solibri and reflect the quality

of the generated model. "n" indicates the number of times this prompt was input. The "Mean pass rate" records the



TABLE 1. Test user prompts

Nr. Content
1 I want to build a two-story hotel with eight rooms on each floor. The rooms are arranged in groups of four on each side, separated by a
4-meter-wide corridor in the middle. Each room has a door and a window. The doors of the rooms are on the corridor side of the wall,
and the windows are on the outside wall of the building. The building should have a wooden pitched roof and brick walls.
2 Create a basic 3D model of a four-story residential house with dimensions of 10 by 6 meters.
3 Create a single-story residential house with a total floor area of 120 square meters. Includes three bedrooms, two bathrooms, a kitchen,
and a living room. The house should have a wooden pitched roof, and incorporate at least four windows and one main entrance door.
Use the concrete walls.
4 Create a building with three connected sections. Each section should be a rectangle (10m x 5m) with walls of 3 meters in height. The
sections should be connected by 5-meter-long walls. Add slabs and a continuous, concrete-pitched roof that covers all sections. Add
doors and windows to each section. Choose the right material for the wall.
5 Create a 3-story L-shaped house with each leg of the L being 8 meters long and 4 meters wide. Place a door at the corner of the L and
a window on each side of the L. I want the whole building to be made of wood.
6 Design a building with a complex polygonal footprint (e.g., hexagonal). Each side of the hexagon should be 5 meters. Add a slab for
the floor and a pitched roof. Include a door on one side and a window on each of the other sides.
7 Construct a residential building with a rectangular footprint (15m x 10m), a pitched roof and two floors. Create balconies by extending
the floor slab outwards from the exterior walls on the first floor. Add doors and windows to each floor. Make sure that the balconies
are accessible from the inside.
8 Construct a modern office building with a rectangular base of 20 meters by 20 meters. Set the wall height to 3 meters. Include four
rooms (5x5 meters each) along the perimeter, with a central open space. Add doors and windows to each room and a main entrance
door to the building.
9 Design a two-story apartment building with an H-shaped base. Each floor consists of two apartments with two rooms each.
10 Create a T-shaped, single-story building with a horizontal section of 10 meters x 30 meters and a vertical section of 10 meters x 20
meters. Connect the two sections by placing a door at their junction. Each section has three windows. The entire building is made of
concrete.

TABLE 2. Categorization of user requirements in test prompts ("-" for not specified)

Nr. | Type | Floors | Base Shape | Roof | Materials | Rooms | Spatial Features | Building Openings
1 | Hotel | 2 | Rectangular | Pitched | Wood, brick | 16 | 4m wide corridor | Doors on corridor side, windows on outside wall
2 | House | 4 | Rectangular (10m x 6m) | - | - | - | - | -
3 | House | 1 | 120 m², shape not specified | Pitched | Wood, concrete | 7 | - | Main entrance door, at least four windows
4 | Building | - | Connected rectangles (10m x 5m) | Pitched | - | - | Connecting sections with 5m long walls | Doors and windows in each section
5 | House | 3 | L-shaped | - | Wood | - | Each leg of the L being 8m x 4m | Door at corner, window on each side
6 | Building | - | Hexagon | Pitched | - | - | Each side of the hexagon is 5m | Door on one side, window on each other side
7 | Residential building | 2 | Rectangular (15m x 10m) | Pitched | - | - | Balconies on first floor | Doors and windows on each floor, accessible balconies
8 | Office building | - | Rectangular (20m x 20m) | - | - | 4 | Central open space, rooms along the perimeter | Doors and windows in each room, main entrance door
9 | Apartment building | 2 | H-shaped | - | - | 8 | 2 apartments with 2 rooms each | -
10 | Building | 1 | T-shaped | - | Concrete | - | Horizontal section: 10m x 30m, vertical section: 10m x 20m | Door at junction, three windows per section

average pass rate of the corresponding LLM under each test prompt, as well as the overall average pass rate across all

test cases. The three LLMs under the proposed framework were generally able to generate high-quality BIM models.

Among them, GPT-4o and Mistral-Large-2 achieved average pass rates of 99.4% and 99.2% respectively across all test

cases, while Gemini-1.5-Pro only reached a pass rate of 94.05%. When examining each test prompt individually, it



becomes evident that Gemini’s performance is significantly more inconsistent compared to the other two models, with

its metrics fluctuating between 66.67% and 100%, indicating a high variance. Prompts 3, 4, and 5 could be considered

"commonly recognized challenges" as none of the LLMs managed to avoid errors in all five runs. Additionally, despite

Mistral being the smallest model in terms of size among the three, its demonstrated stability and exceptional reasoning

ability are particularly impressive.

TABLE 3. Rule pass rates of final models generated by different backbone LLMs under test prompts

Backbone LLMs n Prompt 1 Prompt 2 Prompt 3 Prompt 4 Prompt 5 Prompt 6 Prompt 7 Prompt 8 Prompt 9 Prompt 10
1 100.00% 100.00% 96.67% 100.00% 100.00% 100.00% 96.67% 100.00% 96.67% 100.00%
2 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 96.67%
GPT-4o 3 100.00% 100.00% 100.00% 96.67% 100.00% 100.00% 96.67% 100.00% 100.00% 100.00%
4 100.00% 100.00% 100.00% 100.00% 96.67% 100.00% 96.67% 100.00% 100.00% 100.00%
5 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 96.67% 100.00%
Mean pass rate (overall: 99.40%) 100.00% 100.00% 99.33% 99.33% 99.33% 100.00% 98.00% 100.00% 98.67% 99.33%

1 73.33% 80.00% 96.00% 100.00% 80.00% 96.67% 100.00% 100.00% 96.67% 100.00%
2 96.67% 100.00% 86.67% 96.67% 93.33% 100.00% 66.67% 100.00% 100.00% 96.67%
Gemini-1.5-Pro 3 100.00% 76.67% 100.00% 73.33% 90.00% 100.00% 96.67% 100.00% 96.67% 100.00%
4 76.67% 96.67% 100.00% 100.00% 100.00% 100.00% 96.67% 100.00% 100.00% 100.00%
5 93.33% 76.67% 86.67% 100.00% 96.67% 100.00% 100.00% 96.67% 100.00% 90.00%
Mean pass rate (overall: 94.05%) 88.00% 86.00% 93.87% 94.00% 92.00% 99.33% 92.00% 99.33% 98.67% 97.33%

1 100.00% 93.33% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00%
2 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 100.00% 96.67%
Mistral-Large-2 3 100.00% 100.00% 96.67% 100.00% 96.67% 100.00% 100.00% 100.00% 100.00% 100.00%
4 100.00% 100.00% 100.00% 90.00% 96.67% 100.00% 100.00% 100.00% 100.00% 100.00%
5 100.00% 100.00% 96.67% 96.67% 96.67% 100.00% 100.00% 100.00% 100.00% 100.00%
Mean pass rate (overall: 99.20%) 100.00% 98.67% 98.67% 97.33% 98.00% 100.00% 100.00% 100.00% 100.00% 99.33%

We further evaluated the effectiveness of the model quality optimization method proposed in the framework. Unlike

the pass rate, which is a category-related metric, we selected the issue amount as a more fine-grained, instance-level

metric. Specifically, while the pass rate provides a broad overview of the types of issues present in the model (e.g.,

if the model fails the IfcSlab-IfcSlab intersection rule but passes the other 29 rules, it indicates a type of issue related

to slab position conflicts), the issue amount offers a deeper insight into the quantity of affected component pairs (e.g.,

how many pairs of slab instances have positioning conflicts in the model).
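To make the distinction concrete, here is a small sketch of how the two metrics could be computed from checker output; the data structures are hypothetical:

def pass_rate(failed_rules: set, total_rules: int = 30) -> float:
    """Category-level metric: share of the 30 rules that report no issue."""
    return 1 - len(failed_rules) / total_rules

def issue_amount(issues: list) -> int:
    """Instance-level metric: number of flagged component pairs, over all rules."""
    return len(issues)

# Example: a model failing only the IfcSlab-IfcSlab intersection rule,
# with four conflicting slab pairs reported under that single rule:
print(pass_rate({"IfcSlab-IfcSlab intersection"}))        # 0.966..., i.e. 96.67%
print(issue_amount([("slab1", "slab2"), ("slab2", "slab3"),
                    ("slab3", "slab4"), ("slab1", "slab4")]))  # 4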

Fig. 10 presents three line charts illustrating the average issue amount that exists in BIM models at the initial

generation stage, as well as after the first, second, and third rounds of quality optimization iteration. It can be observed

that GPT-4o and Mistral-Large-2, when acting as Reviewer agents, are effective in iteratively resolving issues within

the model. The average issue amount tends to decrease overall with each iteration step and converges eventually to a

smaller value, consistent with the results reported in Table 3. In contrast, Gemini-1.5-Pro exhibits an upward trend

in some test prompts, indicating an increase in the number of issues. Upon closely reviewing the logs of the Gemini



agent, we found that Gemini often tends to completely restructure the previous Programmer’s code to generate a new

model in an attempt to solve the issues, rather than following the instructions in the prompt template to create code

fixes. This approach can cause duplication and conflict between components of the new and old models, leading to a

sharp increase in issue amount. Furthermore, the excessive number of issues negatively impacts the judgment of the

LLM as a Reviewer, causing hallucinations to accumulate during the iterative process.

Fig. 10. Effectiveness of quality optimization loop with different LLMs



Fig. 11. Qualitative results generated by the Text2BIM framework based on test prompts 5 and 7. Visualizations of
the models generated by other prompts can be found in Appendix III

Fig. 11 visualizes part of the building models generated by the proposed framework. Due to space constraints, the

full list of generated building models can be found in Appendix III. By carefully reviewing these generated models,

we systematically evaluated whether the generated buildings meet the user requirements and intentions described in



Table 2. The evaluation results are visualized in Fig. 12. Regardless of the LLMs used, the models generated by

the framework were generally able to effectively fulfill the user’s intentions. Even for open-ended instructions that

do not explicitly specify some requirements, our framework can augment and complete the user’s original input. It

leverages the pre-trained architectural knowledge of the LLMs and feedback from the domain-rule-based model checker

to ultimately produce buildings that are rational both in engineering and architectural terms. Additionally, it can be

observed that the reasonable arrangement of building openings, a task requiring advanced spatial understanding, poses

a challenge for all LLMs.

Fig. 12. Evaluation of the generated models against the requirements specified by users in the test prompts. A score of 0 indicates that the requirement was not specified in the instructions but the LLM agents generated relevant results; -0.5 indicates that the agents did not generate any relevant results for an unspecified requirement.

6 DISCUSSION AND LIMITATIONS

The implemented framework is currently capable of generating regular, non-curved building models in the early design stage. To generalize this approach to irregularly shaped buildings or more detailed engineering models, more sophisticated tools must be developed for the agents, significantly expanding the currently limited toolset.

However, a challenge then arises in organizing and managing the vast amount of tool information and the interdependencies among tools, so that the LLM can efficiently retrieve useful functions. Knowledge graphs and graph-based Retrieval Augmented

Generation (RAG) techniques might offer potential solutions.
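
As a first step toward such retrieval, the tool descriptions could be embedded and matched against the task at hand. Below is a minimal sketch using sentence embeddings (the embedding model, the example descriptions, and the function name are illustrative assumptions; a graph-based RAG variant would additionally encode the dependencies between tools):

from sentence_transformers import SentenceTransformer, util

# Tool descriptions condensed from Appendix I (illustrative subset).
TOOL_DOCS = {
    "create_wall": "Create a wall on a story layer between two 2D points.",
    "create_slab": "Create a slab from a polygon profile on a story layer.",
    "add_door_to_wall": "Add a door to an existing wall at a given offset.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
tool_embeddings = model.encode(list(TOOL_DOCS.values()))

def retrieve_tools(task: str, k: int = 2) -> list[str]:
    """Return the k tool names whose descriptions best match the task."""
    scores = util.cos_sim(model.encode(task), tool_embeddings)[0]
    top = scores.argsort(descending=True)[:k]
    return [list(TOOL_DOCS)[int(i)] for i in top]

print(retrieve_tools("place an entrance door in the south wall"))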

Additionally, in the current framework, the Architect agent generates structured text-based building plans that

include numerical information such as coordinates and dimensions. This content format is designed to be aligned

with the input requirements of the tool functions, allowing downstream agents to better understand and utilize specific

architectural parameters in the code. In our experiments, this approach proved more robust and accurate than using formats such as SVG (XML) or images to represent floor plans. Moreover, the Architect agent currently designs

the building’s interior layout based solely on its pre-trained knowledge and the examples and information provided

in prompt templates. Although the generated interior partitions appear visually reasonable to some extent (as shown

in Fig. 17), they lack comprehensive consideration of complex architectural conditions (e.g., lighting, functionality,



accessibility, etc.) and regulations (e.g., fire safety, area requirements, etc.). Future work could explore how to

effectively integrate this complex architectural knowledge into LLMs.
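
To make the plan format discussed above concrete, the following is an illustrative excerpt of what an Architect-generated plan might contain (the exact schema is an assumption and is not reproduced from the framework; the field names deliberately mirror the tool-function parameters documented in Appendix I so that the Programmer agent can translate entries into API calls):

# Illustrative excerpt of a structured building plan (assumed schema).
# Coordinates and dimensions are in millimeters.
floor_plan = {
    "story": {"layer_name": "Ground Floor", "elevation": 0.0, "floor_index": 1},
    "functional_areas": [
        {"name": "Living Room", "vertices": [(0, 0), (6000, 0), (6000, 5000), (0, 5000)]},
        {"name": "Kitchen", "vertices": [(6000, 0), (9000, 0), (9000, 5000), (6000, 5000)]},
    ],
    "walls": [
        {"st_pt": (0, 0), "ed_pt": (9000, 0), "style": "Exterior Brick Wall"},
        {"st_pt": (6000, 0), "ed_pt": (6000, 5000), "style": "Interior Wood Wall"},
    ],
}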

Our experiment demonstrates that LLM agents can automatically resolve clashes within the model to a limited

extent through the designed quality optimization loop. Although this module is not the main focus of this study,

our preliminary exploration in this direction presents a new technical approach for research in related fields. This

is particularly significant considering that current research on automatic clash resolution mainly focuses on using

optimization algorithms (Wu et al. 2023), classical machine learning (Harode et al. 2024), or reinforcement learning

(Harode et al. 2022). Despite these advancements, the conflict resolution method based on LLM agents still has

significant limitations. Fig. 13 summarizes some representative scenarios encountered during the quality optimization

loop. The first common failure (a) involves the agent attempting to rewrite code to create a new model, leading to an

increase in issue amount due to conflicts between the new and existing model components. In scenario (b), the upper two

floors of the initial model have overlapping and nested walls, doors, and windows. In such highly complex situations,

LLM agents, which rely solely on code and checker feedback (rule/issue descriptions) for contextual information,

cannot resolve all the issues and are prone to hallucinations. The strategy the agent adopts here involves deleting parts

of the walls on the relevant floors. While this action can reduce the overall issue amount in the model, it compromises the structural integrity of the building, a flaw that a human reviewer would immediately notice in a 3D view. However, current agents can only perceive information from one-dimensional text and are not yet capable of understanding 3D space in that way. Scenario (c) illustrates a successful case in which the agent

correctly adjusts the height of a floating roof to align with the top floor’s wall elevation. Overall, LLMs perform well

for intuitive issues with deterministic solutions (typically Class 1 rules, such as "no space defined in model -> create

space"). However, they often fail on complex issues that require higher-level spatial understanding and have open-ended

solutions (usually Class 3 rules, such as "two partition walls intersect -> which wall is to be moved, and in which

direction?"). Although our framework allows users to guide the LLM to perform the appropriate issue-solving actions

via dialogue, or to continue editing the generated BIM model manually in the software, future research will prioritize

enhancing the LLM’s spatial understanding capabilities to advance toward an autonomous conflict resolution system.
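
A pragmatic interim strategy suggested by this Class 1/Class 3 split is to route checker issues by rule class, letting the agent resolve deterministic issues autonomously while escalating open-ended spatial conflicts to the user. A minimal sketch (the data structure and class tags are illustrative assumptions):

from typing import List, NamedTuple, Tuple

class Issue(NamedTuple):
    rule_class: int  # 1 = deterministic resolution, 3 = open-ended spatial conflict
    description: str

def route_issues(issues: List[Issue]) -> Tuple[List[Issue], List[Issue]]:
    """Split issues into autonomously fixable ones and ones needing user guidance."""
    auto = [i for i in issues if i.rule_class == 1]
    escalate = [i for i in issues if i.rule_class != 1]
    return auto, escalate

auto, escalate = route_issues([
    Issue(1, "No space defined in model -> create space"),
    Issue(3, "Two partition walls intersect -> which wall should move?"),
])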

Given that our approach generates code representations of 3D models based on prompt engineering techniques, it

does not require fine-tuning of LLMs. This is fundamentally different from conventional Text-to-3D methods, which

typically require constructing a 3D dataset for training. Commonly used metrics such as Chamfer Distance (CD) and

Intersection over Union (IoU) mainly focus on evaluating the geometric accuracy of point cloud/voxel models. As

these metrics are not applicable to our data representation approach, we propose using the pass rate of domain-specific

rule checks as a quantitative metric to evaluate the generated BIM models. While this method can verify whether the

generated models are structurally complete and reasonable in architectural terms, its limitation lies in the fact that the

rules provided by model checkers cannot assess whether the generated buildings align with the abstract and dynamic

user intentions expressed in natural language instructions (e.g., "H-shaped house", "arrange rooms along the building

perimeter”, etc.). Currently, we still rely on manual review to determine whether the models align with the intended

instructions. Future research could leverage the data generated by this work to develop new benchmark datasets and

metrics, enabling the automated, data-driven evaluation of user intent.

Fig. 13. Several representative cases encountered during the quality optimization loop

7 CONCLUSIONS

We introduce Text2BIM, an LLM-based multi-agent collaborative framework that generates building models in

BIM authoring software from natural language descriptions. The main findings and contributions of this study are as

follows:

• Unlike previous studies that focused on generating the 3D geometric representation of buildings, our framework

is capable of producing native BIM models with internal layouts, external envelopes, and semantic information.

• We propose representing 3D building models using imperative code that interacts with BIM authoring software

APIs. By employing prompt engineering techniques, multiple LLM agents collaborate to develop the code

without the need for fine-tuning, thereby conserving computational resources.

• Innovatively, a domain-specific rule-based model checker is integrated into the framework to guide LLMs

in generating architecturally and structurally rational outcomes. The proposed quality optimization loop

demonstrates that the LLM agents can iteratively resolve conflicts within the BIM model based on textual

feedback from the checker.



• Extensive experiments have been conducted to comprehensively evaluate the performance of different modules

within the proposed framework, including a comparative analysis of three open/closed-source LLMs within the

framework, which validates the generalizability and effectiveness of our approach.

• An interactive software prototype is developed that integrates the proposed framework into the BIM authoring

tool Vectorworks, showcasing innovative possibilities for modeling-by-chatting during the design process.

We believe that the proposed methodology can be extended to a broader range of use cases beyond just model

generation, especially if more specialized tools are developed for LLM agents to utilize. We hope that readers will draw inspiration from this work and explore using LLMs to address further challenges within our field.

8 DATA AVAILABILITY STATEMENT

Some data and models that support the findings of this study are available from the corresponding author upon

reasonable request.

9 ACKNOWLEDGMENTS

This work is funded by Nemetschek Group, which is gratefully acknowledged. We sincerely appreciate the data

and licensing support provided by Vectorworks, Inc.

10 SUPPLEMENTAL MATERIALS

1. Demo video



APPENDIX I. TOOLSET DOCUMENTATION

TABLE 4. Toolset (a)

Tool function name Description


create_story_layer This tool is used to create a new story layer. The new layer is created at the given elevation. Once a new story layer is created,
it becomes the active layer. All new building elements will be created on the current active story.
Input:
- layer_name: str, the unique name of the new story.
- elevation: float, the elevation of the new story relative to the ground.
- floor_index: int, the index of the new floor. Should start from 1.
Return:
- str, the layer_uuid of the new story layer.
set_active_story_layer This tool is used to set the story layer with the given name to active. The active story layer is the layer in which new elements
are created.
Input:
- layer_name: str, the name of the layer to set as active.
Return:
- str, the layer_uuid of the active layer.
create_functional_area This tool is used to create a conceptual functional area on a specified layer. The area is created from a list of vertices that
define the room boundary. Usually, functional areas are created first to define the interior layout of the building, and then the
rooms are separated by placing walls at the boundaries.
Input:
- vertices: list of tuples, each tuple represents the 2D coordinate of a vertex that defines the boundary of the room.
- name: str, the name of the room/functional area.
- layer_uuid: str, the uuid of the story layer where the space will be created.
Return:
- str, the uuid of the created room/functional area.
create_wall This tool is used to create a wall on a specified layer. By default, the wall is created with a bottom_elevation of 0 and a
top_elevation of 3000 relative to this layer.
Input:
- st_pt: tuple, the 2D coordinate of the starting point of the wall.
- ed_pt: tuple, the 2D coordinate of the end point of the wall.
- layer_uuid: str, the uuid of the story layer where the wall will be created.
Return:
- str, the uuid of the newly created wall.
set_wall_thickness This tool is used to set the thickness of a wall.
Input:
- uuid: str, the uuid of the wall object.
- thickness: float, the new thickness of the wall.
Return:
- str, the uuid of the wall object that has been modified.
set_wall_elevation This tool is used to set the top/bottom elevation of a wall. Subtracting these two is the height of the wall itself.
Input:
- uuid: str, the uuid of the wall object.
- top_elevation: float, the vertical distance from the top of the wall to the story layer where the wall was originally created.
- bottom_elevation: float, the vertical distance from the bottom of the wall to the story layer where the wall was originally
created.
Return:
- str, the uuid of the wall object that has been modified.
get_wall_elevation This tool is used to get the top and bottom elevation of a wall. Subtracting these two is the height of the wall itself.
Input:
- uuid: str, the uuid of the wall object.
Return:
- top_elevation: float, the vertical distance from the top of the wall to the story layer where the wall was originally created.
- bottom_elevation: float, the vertical distance from the bottom of the wall to the story layer where the wall was originally
created.
get_wall_thickness This tool is used to get the thickness of a wall.
Input:
- uuid: str, the uuid of the wall object.
Return:
- thickness: float, the thickness of the wall.



TABLE 5. Toolset (b)

Tool function name Description


set_wall_style This tool is used to set the style of a wall.
Input:
- uuid: str, the uuid of the wall object.
- style_name: str, the name of the style. Following wall style names are available: ["Exterior Concrete Wall", "Exterior Wood
Wall", "Exterior Brick Wall", "Interior Concrete Wall", "Interior Wood Wall", "Interior Brick Wall"]
Return:
- str, the uuid of the wall object that has been modified.
add_window_to_wall This tool is used to add a window to a wall. Once a window is added to a wall, it is part of the wall and will be
moved/duplicated/rotated with the wall.
Input:
- wall_uuid: str, the uuid of the wall object to which the window will be added.
- window_elevation: float, the elevation of the window from the bottom of the wall.
- window_offset: float, the offset of the window from the starting point of the wall.
- window_name: str, the name of the window object to be added.
Return:
- str, the uuid of the window object that has been added to the wall.
add_door_to_wall This tool is used to add a door to a wall. Once a door is added to a wall, it is part of the wall and will be moved/duplicated/rotated
with the wall.
Input:
- wall_uuid: str, the uuid of the wall object to which the door will be added.
- door_elevation: float, the elevation of the door from the bottom of the wall.
- door_offset: float, the offset of the door from the starting point of the wall.
- door_name: str, the name of the door object to be added.
Return:
- str, the uuid of the door object that has been added to the wall.
move_obj This tool is used to move an element. It can only move the given element within the layer where it is placed but not duplicate
it.
Input:
- uuid: str, the unique uuid of the element to move.
- xDistance: float, moving distance in x direction.
- yDistance: float, moving distance in y direction.
- zDistance: float, moving distance in z direction.
Return:
- None
delete_element This tool is used to delete an element or a list of elements. Story layers cannot be deleted.
Input:
- uuid: str or a list of string, the unique uuids of the elements to delete.
Return:
- None
find_selected_element This tool is used to find the selected element in the current active story layer. If no selected elements are found, it will return
an empty list.
Input:
- None
Return:
- list of str, the uuids of the selected elements.
create_polygon This tool is used to create a polygon on a specified story layer using its vertices.
Input:
- vertices: list of tuples, each tuple represents the 2D coordinate of a vertex of the polygon.
- layer_uuid: str, the uuid of the story layer where the polygon will be created.
Return:
- str, the uuid of the created polygon.
get_polygon_vertex This tool is used to get a desired vertex at the given index in the polygon’s vertex array.
Input:
- uuid: str, the uuid of the polygon object.
- at: int, the index of the desired vertex.
Return:
- tuple, the 2D coordinate of the desired vertex of the polygon.
get_vertex_count This tool is used to get the number of vertices in a polygon.
Input:
- uuid: str, the uuid of the polygon object.
Return:
- int, the number of vertices in the input polygon.



TABLE 6. Toolset (c)

Tool function name Description


create_slab This tool is used to create a slab from a polygon profile on a specified layer.
Input:
- profile_id: str, the uuid of a polygon object that determines the profile of the slab.
- layer_uuid: str, the uuid of the story layer where the slab will be created.
Return:
- str, the uuid of the created slab.
set_slab_height This tool is used to set the height (elevation) of a slab.
Input:
- slab_id: str, the uuid of the slab object.
- height: float, the height of the slab relative to the story layer where the slab was originally created.
Return:
- str, the uuid of the modified slab.
get_slab_height This tool is used to get the height (elevation) of a slab.
Input:
- slab_id: str, the uuid of the slab object.
Return:
- float, the height of the slab relative to the story layer where the slab was originally created.
set_slab_style This tool is used to set the style of a slab.
Input:
- slab_id: str, the uuid of the slab object.
- style_name: str, the name of the style.
Return:
- str, the uuid of the modified slab.
duplicate_obj This tool is used to duplicate an element to a specified layer. Note that when duplicating a wall that includes doors and
windows, the doors and windows within it will also be duplicated. The story layer cannot be duplicated.
Input:
- element_uuid: str, the unique uuid of an element to duplicate.
- layer_uuid: str, the uuid of the story layer where the copies will be placed.
- n: int, the number of copies to make.
Return:
- list of str, the list of uuids of the copies. It is recommended to use this list to further manipulate the copies.
rotate_obj This tool is used to rotate an element.
Input:
- uuid: str, the unique uuid of the element to rotate.
- angle: float, the angle in degrees to rotate the element.
- center: tuple, the 2D coordinate of the center of rotation. By default, it is the center of the element. (optional)
Return:
- str, the uuid of the rotated element.
create_pitched_roof This tool is used to create a pitched roof from a polygon profile on a specified layer.
Input:
- profile_id: str, the uuid of a polygon object that determines the profile(base) of the roof.
- layer_uuid: str, the uuid of the story layer where the roof will be created.
- slope: float, the slope of the roof in degrees. It cannot be less than 5.
- eave_overhang: float, the eave overhang of the roof.
- eave_height: float, the elevation of the roof relative to the specified layer. Usually the height of the wall on this floor.
- roof_thickness: float, the thickness of the roof.
Return:
- str, the uuid of the created roof.
set_pitched_roof_attributes This tool is used to set the new attributes of a pitched roof. Attributes that need to be changed can be optionally entered.
Input:
- roof_id: str, the uuid of the roof object.
- slope: float, the slope of the roof in degrees (optional).
- eave_overhang: float, the eave overhang of the roof (optional).
- eave_height: float, the height(elevation) of the roof from the story layer where the roof was originally created (optional).
- roof_thickness: float, the thickness of the roof (optional).
Return:
- str, the uuid of the modified roof.
set_pitched_roof_style This tool is used to set the style of a pitched roof.
Input:
- roof_id: str, the uuid of the roof object.
- style_name: str, the name of the style. Available: ["Low Slope Concrete w/ Rigid Insulation", "Sloped Wood Struct Insul
Flat Clay Tile"]
Return:
- str, the uuid of the modified roof.
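
To illustrate how the documented tools compose into agent-generated code, the following is a minimal sketch that models a simple single-story building (all parameter values are illustrative, and the import path of the toolset module is an assumption; the function signatures follow Tables 4-6):

# Minimal sketch of agent-generated code composing the tools above.
# Import path and values are assumptions for illustration only.
from toolset import (add_door_to_wall, add_window_to_wall, create_pitched_roof,
                     create_polygon, create_slab, create_story_layer,
                     create_wall, set_wall_style)

# Ground floor at elevation 0; newly created elements land on the active story.
ground = create_story_layer("Ground Floor", elevation=0.0, floor_index=1)

# Rectangular 9 m x 6 m envelope (coordinates in millimeters).
corners = [(0, 0), (9000, 0), (9000, 6000), (0, 6000)]
walls = []
for st_pt, ed_pt in zip(corners, corners[1:] + corners[:1]):
    wall = create_wall(st_pt, ed_pt, layer_uuid=ground)
    set_wall_style(wall, "Exterior Brick Wall")
    walls.append(wall)

# An entrance door in the south wall and a window in the north wall.
add_door_to_wall(walls[0], door_elevation=0.0, door_offset=4000.0, door_name="Entrance Door")
add_window_to_wall(walls[2], window_elevation=1000.0, window_offset=3000.0, window_name="Window")

# Floor slab from the envelope profile; pitched roof at the default wall top (3000 mm).
create_slab(profile_id=create_polygon(corners, layer_uuid=ground), layer_uuid=ground)
create_pitched_roof(profile_id=create_polygon(corners, layer_uuid=ground), layer_uuid=ground,
                    slope=30.0, eave_overhang=300.0, eave_height=3000.0, roof_thickness=200.0)
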
APPENDIX II. RULESET DOCUMENTATION

TABLE 7: Class 1 rules

Scope The rule checks if all components have a unique GUID.

Solibri RuleId SOL/176/2.2

Components any component

Desired resolution Refine GUIDs of those components, which have not passed this rule.

Scope The rule checks if the model has a spatial breakdown structure comprising IfcSite, IfcBuilding,

and IfcBuildingStorey instances.

Solibri RuleId SOL/176/2.2

Components Spatial breakdown elements

Desired resolution Create appropriate spatial containers and assign components accordingly.

Scope The rule checks if all doors and windows are on the same floor as the containing wall.

Solibri RuleId SOL/176/2.2

Components Doors and windows

Desired resolution Re-assign spatial associations for each affected door or window.

Scope The rule checks if each component has a layer information attached to it.

Solibri RuleId SOL/230/1.1

Components Any component excluding openings

Desired resolution Add layer information to affected components.

Scope The rule checks if certain components are present in the model (e.g., walls, doors, windows,

slabs, roofs, spaces).

Solibri RuleId SOL/11/4.2

Components any component

Desired resolution Create missing components based on the user’s input.



TABLE 8: Class 2 rules

Scope The rule checks if the description value of all building components is set and the value complies

with the pattern of a UUID.

Solibri RuleId SOL/244/1.0

Components any physical component

Desired resolution Set Vectorworks-internal ID into the description field of those components, which have not

passed this rule.

TABLE 9: Class 3 rules

Scope The rule checks if a component intersects with another component.

Solibri RuleId SOL/1/5.0

Components Any physical component

Desired resolution Reposition the components to avoid intersection

Scope The rule checks if two components duplicate each other.

Solibri RuleId SOL/1/5.0

Components any physical component

Desired resolution Remove one of the components

Scope The rule checks the connection between two components:

Roofs must be connected to the walls on the uppermost floor.

Slabs must be connected to supporting walls.

Solibri RuleId SOL/23/5.2

Components Roofs, slabs and walls

Desired resolution Move the roofs/slabs to the top of the support walls

Scope The rule checks that the model doesn’t contain any orphan doors or windows (a door or a

window, which doesn’t have a relation to any wall).


Solibri RuleId SOL/176/2.2

Components doors and windows

Desired resolution Remove the orphan doors or windows



APPENDIX III. VISUALIZATION OF GENERATED MODELS

Fig. 14. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.1).

Fig. 15. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.4).



Fig. 16. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.2).

Fig. 17. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.3).



Fig. 18. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.6).

Fig. 19. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.8).



Fig. 20. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.9).

Fig. 21. The building model in wireframe/rendering mode generated by different LLMs through the proposed
framework according to the corresponding text description (Prompt Nr.10).



REFERENCES

Borrmann, A., König, M., Koch, C., and Beetz, J. (2018). “Building information modeling: Why? what? how?.”

Building Information Modeling - Technology foundations and industry practice, A. Borrmann, M. König, C. Koch,

and J. Beetz, eds., Vol. 1, Springer, 1–24.

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell,

A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter,

C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A.,

Sutskever, I., and Amodei, D. (2020). “Language models are few-shot learners, <https://arxiv.org/abs/2005.14165>.

Chen, J., Shao, Z., and Hu, B. (2023). “Generating interior design from text: A new diffusion model-based method for

efficient creative design.” Buildings, 13(7).

de Miguel Rodríguez, J., Villafañe, M. E., Piškorec, L., and Sancho Caparrini, F. (2020). “Generation of geometric

interpolations of building types with deep variational autoencoders.” Design Science, 6, e34.

Dong, Q., Li, L., Dai, D., Zheng, C., Ma, J., Li, R., Xia, H., Xu, J., Wu, Z., Chang, B., Sun, X., Li, L., and Sui, Z.

(2024). “A survey on in-context learning, <https://arxiv.org/abs/2301.00234>.

Du, C., Deng, Z., Nousias, S., and Borrmann, A. (2024a). “Towards commands recommender system in bim authoring

tool using transformers.” Proc. of the 31th Int. Conference on Intelligent Computing in Engineering (EG-ICE) (Jul).

Du, C., Nousias, S., and Borrmann, A. (2024b). “Towards a copilot in BIM authoring tool using large language model

based agent for intelligent human-machine interaction.” Proc. of the 31th Int. Conference on Intelligent Computing

in Engineering (EG-ICE) (Jul).

Eastman, C., min Lee, J., suk Jeong, Y., and kook Lee, J. (2009). “Automatic rule-based checking of building designs.”

Automation in Construction, 18, 1011–1033.

Ennemoser, B. and Mayrhofer-Hufnagl, I. (2023). “Design across multi-scale datasets by developing a novel approach

to 3dgans.” International Journal of Architectural Computing, 21(2), 358–373.

Fernandes, D., Garg, S., Nikkel, M., and Guven, G. (2024). “A gpt-powered assistant for real-time interaction with

building information models.” Buildings, 14(8).

Fuchs, S., Witbrock, M., Dimyadi, J., and Amor, R. (2022). “Neural semantic parsing of building regulations for

compliance checking.” IOP Conference Series: Earth and Environmental Science, 1101, 092022.

Gemini (2024). “Intro to function calling with gemini api, <https://ai.google.dev/gemini-api/docs/function-calling>.

Accessed: 2024-07-16.

Google, G. T. (2024). “Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context,

<https://arxiv.org/abs/2403.05530>.

Graphisoft (2024). “Archicad ai visualizer, <https://graphisoft.com/solutions/innovation/archicad-ai-visualizer>. Accessed: 2024-06-27.

Guo, T., Chen, X., Wang, Y., Chang, R., Pei, S., Chawla, N. V., Wiest, O., and Zhang, X. (2024). “Large language

model based multi-agents: A survey of progress and challenges, <https://arxiv.org/abs/2402.01680>.

Harode, A., Thabet, W., and Gao, X. (2022). “An integrated supervised reinforcement machine learning approach for automated clash resolution.” 679–688.

Harode, A., Thabet, W., and Gao, X. (2024). “Developing a machine-learning model to predict clash resolution

options.” Journal of Computing in Civil Engineering, 38(2), 04024005.

He, Z., Wang, Y.-H., and Zhang, J. (2023). “Generative structural design integrating bim and diffusion model,

<https://synthical.com/article/bb21e837-1ed0-4489-8a33-768e6d0882fb> (10).

Hong, S., Zhuge, M., Chen, J., Zheng, X., Cheng, Y., Zhang, C., Wang, J., Wang, Z., Yau, S. K. S., Lin, Z., Zhou, L.,

Ran, C., Xiao, L., Wu, C., and Schmidhuber, J. (2023). “Metagpt: Meta programming for a multi-agent collaborative

framework, <https://arxiv.org/abs/2308.00352>.

Hu, Z., Iscen, A., Jain, A., Kipf, T., Yue, Y., Ross, D. A., Schmid, C., and Fathi, A. (2024). “Scenecraft: An llm agent

for synthesizing 3d scene as blender code, <https://arxiv.org/abs/2403.01248>.

Häußler, M., Esser, S., and Borrmann, A. (2021). “Code compliance checking of railway designs by integrating BIM,

BPMN and DMN.” Automation in Construction, 121, 103427.

ISO (2024). “ISO 16739-1:2024: Industry Foundation Classes (IFC) for data sharing in the construction and facility management industries - Part 1: Data schema, <https://www.iso.org/standard/84123.html>. Accessed: 2024-08-08.

Jang, S., Lee, G., Oh, J., Lee, J., and Koo, B. (2024). “Automated detailing of exterior walls using NADIA: Natural-

language-based architectural detailing through interaction with AI.” Advanced Engineering Informatics, 61, 102532.

Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T.,

Riedel, S., and Kiela, D. (2020). “Retrieval-augmented generation for knowledge-intensive NLP tasks.” Proceedings

of the 34th International Conference on Neural Information Processing Systems, NIPS ’20, Red Hook, NY, USA,

Curran Associates Inc.

Li, C., Zhang, T., Du, X., Zhang, Y., and Xie, H. (2024a). “Generative ai for architectural design: A literature review,

<https://arxiv.org/abs/2404.01335>.

Li, P., Li, B., and Li, Z. (2024b). “Sketch-to-architecture: Generative ai-aided architectural design,

<https://arxiv.org/abs/2403.20186>.

Liao, W., Lu, X., Fei, Y., Gu, Y., and Huang, Y. (2024). “Generative ai design for building structures.” Automation in

Construction, 157, 105187.

Lin, C.-H., Gao, J., Tang, L., Takikawa, T., Zeng, X., Huang, X., Kreis, K., Fidler, S., Liu, M.-Y., and Lin, T.-Y.

(2023). “Magic3d: High-resolution text-to-3d content creation.” IEEE Conference on Computer Vision and Pattern



Recognition (CVPR).

Luo, Z. and Huang, W. (2022). “Floorplangan: Vector residential floorplan adversarial generation.” Automation in

Construction, 142, 104470.

Mehta, N., Teruel, M., Deng, X., Figueroa Sanz, S., Awadallah, A., and Kiseleva, J. (2024). “Improving grounded

language understanding in a collaborative environment by interacting with agents through help feedback.” Findings

of the Association for Computational Linguistics: EACL 2024, Y. Graham and M. Purver, eds., St. Julian’s, Malta,

Association for Computational Linguistics, 1306–1321, <https://aclanthology.org/2024.findings-eacl.87> (March).

Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., and Geiger, A. (2019). “Occupancy networks: Learning

3d reconstruction in function space.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition

(CVPR), 4455–4465.

Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. (2021). “Nerf: representing

scenes as neural radiance fields for view synthesis.” Commun. ACM, 65(1), 99–106.

Mistral (2024a). “Function calling, <https://docs.mistral.ai/capabilities/function_calling>. Accessed: 2024-07-16.

Mistral, A. T. (2024b). “Mistral large 2, <https://mistral.ai/news/mistral-large-2407/>. Accessed: 2024-08-03.

Nuyts, E., Bonduel, M., and Verstraeten, R. (2024). “Comparative analysis of approaches for automated compliance

checking of construction data.” Advanced Engineering Informatics, 60, 102443.

Oleynikova, H., Millane, A., Taylor, Z., Galceran, E., Nieto, J. I., and Siegwart, R. Y. (2016). “Signed distance fields:

A natural representation for both mapping and planning, <https://api.semanticscholar.org/CorpusID:28083959>.

OpenAI (2024a). “Hello gpt-4o, <https://openai.com/index/hello-gpt-4o/>. Accessed: 2024-07-16.

OpenAI (2024b). “Openai function calling guide, <https://platform.openai.com/docs/guides/function-calling>. Ac-

cessed: 2024-07-16.

Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., and Bernstein, M. S. (2023). “Generative agents:

Interactive simulacra of human behavior, <https://arxiv.org/abs/2304.03442>.

Pauwels, P., Deursen, D. V., Verstraeten, R., Roo, J. D., Meyer, R. D., de Walle, R. V., and Campenhout, J. V. (2011). “A

semantic rule checking environment for building performance checking.” Automation in Construction, 20, 506–518.

Poole, B., Jain, A., Barron, J. T., and Mildenhall, B. (2022). “Dreamfusion: Text-to-3d using 2d diffusion.” ArXiv,

abs/2209.14988.

Pouliou, P., Horvath, A.-S., and Palamas, G. (2023). “Speculative hybrids: Investigating the generation of conceptual

architectural forms through the use of 3d generative adversarial networks.” International Journal of Architectural

Computing, 21(2), 315–336.

Preidel, C. and Borrmann, A. (2018). “BIM-Based Code Compliance Checking.” Building Information Modeling,

Springer International Publishing, 367–381.

Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., and Sutskever, I. (2022). “Robust speech recognition



via large-scale weak supervision, <https://arxiv.org/abs/2212.04356>.

Radford, A., Metz, L., and Chintala, S. (2015). “Unsupervised representation learning with deep convolutional

generative adversarial networks.” CoRR, abs/1511.06434.

Shabani, M. A., Hosseini, S., and Furukawa, Y. (2023). “Housediffusion: Vector floorplan generation via a diffusion

model with discrete and continuous denoising.” 2023 IEEE/CVF Conference on Computer Vision and Pattern

Recognition (CVPR), 5466–5475.

Solihin, W. and Eastman, C. (2015). “Classification of rules for automated BIM rule checking development.” Automation

in Construction, 53, 69–82.

Stigsen, M., Moisi, A., Rasoulzadeh, S., Schinegger, K., and Rutzinger, S. (2023). “Ai diffusion as design vocabulary

- investigating the use of ai image generation in early architectural design and education.” 587–596 (01).

Sun, C., Han, J., Deng, W., Wang, X., Qin, Z., and Gould, S. (2024). “3d-gpt: Procedural 3d modeling with large

language models, <https://arxiv.org/abs/2310.12945>.

Sun, C., Zhou, Y., and Han, Y. (2022). “Automatic generation of architecture facade for historical urban renovation

using generative adversarial network.” Building and Environment, 212, 108781.

Sydora, C. and Stroulia, E. (2020). “Rule-based compliance checking and generative design for building interiors using

BIM.” Automation in Construction, 120, 103368.

Tomczak, A., van Berlo, L., Krijnen, T., Borrmann, A., and Bolpagni, M. (2022). “A review of methods to specify

information requirements in digital construction projects.” IOP Conference Series: Earth and Environmental

Science, 1101, 092024.

Tono, A. and Fischer, M. (2022). “Vitruvio: 3d building meshes via single perspective sketches,

<https://arxiv.org/abs/2210.13634>.

Vectorworks, D. (2024). “Sdk examples, <https://github.com/VectorworksDeveloper/SDKExamples>. Accessed:

2024-07-16.

Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., Zhao, W. X., Wei, Z.,

and Wen, J. (2024). “A survey on large language model based autonomous agents.” Frontiers of Computer Science,

18(6).

Wang, S., Zeng, W., Chen, X., Ye, Y., Qiao, Y., and Fu, C.-W. (2021). “Actfloor-gan: Activity-guided adversarial

networks for human-centric floorplan design.” IEEE Transactions on Visualization and Computer Graphics, PP,

1–1.

Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., and Le, Q. V. (2022). “Finetuned

language models are zero-shot learners, <https://arxiv.org/abs/2109.01652>.

Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. (2024).

“Chain-of-thought prompting elicits reasoning in large language models.” Proceedings of the 36th International



Conference on Neural Information Processing Systems, NIPS ’22, Red Hook, NY, USA, Curran Associates Inc.

Wu, J., Nousias, S., and Borrmann, A. (2023). “Parametrization-based solution space exploration for model healing.”

Proc. of the 30th Int. Conference on Intelligent Computing in Engineering (EG-ICE) (Jul).

Xu, X., Wang, Y., Xu, C., Ding, Z., Jiang, J., Ding, Z., and Karlsson, B. F. (2024). “A survey on game playing agents

and large models: Methods, applications, and challenges, <https://arxiv.org/abs/2403.10249>.

Yang, X., Wu, Y., Zhang, K., and Jin, C. (2021). “Cpcgan: A controllable 3d point cloud generative adversarial network

with semantic label generating.” Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3154–3162.

Zhang, J. and El-Gohary, N. M. (2017). “Integrating semantic nlp and logic reasoning into a unified system for

fully-automated code checking.” Automation in Construction, 73, 45–57.

Zhang, L., Zheng, L., Chen, Y., Huang, L., and Zhou, S. (2022). “Cgan-assisted renovation of the styles and features

of street facades—a case study of the wuyi area in fujian, china.” Sustainability, 14, 16575.

Zheng, J. and Fischer, M. (2023). “Dynamic prompt-based virtual assistant framework for bim information search.”

Automation in Construction, 155, 105067.

Zhou, Y. C., Zheng, Z., Lin, J. R., and Lu, X. Z. (2022). “Integrating NLP and context-free grammar for complex rule

interpretation towards automated compliance checking.” Computers in Industry, 142.

Zhuang, X., Ju, Y., Yang, A., and Caldas, L. (2023). “Synthesis and generation for 3d architecture volume with

generative modeling.” International Journal of Architectural Computing, 21(2), 297–314.

Çelen, A., Han, G., Schindler, K., Gool, L. V., Armeni, I., Obukhov, A., and Wang, X. (2024). “I-Design: Personalized

LLM Interior Designer, <https://arxiv.org/abs/2404.02838>.
