
2023 19th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD) | 979-8-3503-0439-8/23/$31.00 ©2023 IEEE | DOI: 10.1109/ICNC-FSKD59587.2023.10281186

LLVM Framework: Research and Applications

Chenyang Li, Jiye Jiao
School of Computing, Xi’an University of Posts and Telecommunications, Xi’an, China

Abstract—LLVM is an open-source compiler framework that provides flexible, modular and reusable compiler components for building compilers, static analysis tools, code optimizers and more. It is based on a modern modular architecture and supports multiple programming languages. LLVM also provides a rich toolchain and libraries for building high-performance compilers and optimizers. Because LLVM is also scalable and portable, it is widely used in the field of compilers. This paper introduces the architecture, features, research hotspots and applications of LLVM.

Keywords—LLVM; LLVM Backend; LLVM Toolchain

I. INTRODUCTION

A compiler is a computer program that translates a program written in one programming language into an equivalent program in another programming language; it is also a large software system with many internal components and algorithms [1]. Low Level Virtual Machine (LLVM) is an open-source compiler infrastructure that is highly flexible and scalable and is widely used in compilers, dynamic languages, GPU programming, and other areas. Compiler performance reflects the full advantages of a computer system architecture [2]. Unlike traditional compiler architectures, LLVM uses a common intermediate representation (LLVM IR) for all code during compilation, which allows it to perform multi-level optimization and analysis and to support multiple target platforms. LLVM has a modular design that lets users easily add custom front-ends and back-ends, or combine existing ones to build their own compiler tools. LLVM also provides a number of optimizers, parsers, and code generation tools that help improve the performance and maintainability of code.

In LLVM, the compiler front-end converts source code to LLVM IR, and the back-end converts LLVM IR to assembly code or machine code for the target platform. This separation allows users to write custom front-ends to support new programming languages, or custom back-ends to support new target platforms. LLVM supports front-ends for a variety of programming languages, such as C, C++ and Objective-C, and back-ends for a variety of target platforms, such as x86, ARM and MIPS.

In addition to the compiler itself, LLVM includes a number of compiler-related tools, such as a static analyzer, code coverage tools and debuggers. These tools help users understand and optimize their code and improve its quality and maintainability. LLVM also supports extensions for various languages and platforms, such as the NVPTX backend for CUDA support and front-end and back-end support for WebAssembly.

Due to its flexibility and extensibility, LLVM has been widely adopted in open-source and commercial projects. For example, LLVM underlies compilers for various programming languages, such as Clang, Rust and Swift; it is widely used in GPU programming, for example in AMD ROCm and NVIDIA CUDA; and it is used in compiler-related research and education, such as the teaching of compiler-construction courses.

II. LLVM FRAMEWORK

The LLVM framework is divided into a front-end, the intermediate code LLVM IR, and a back-end.

A. LLVM Front-End

In LLVM, the front-end is the module that converts source code into LLVM IR. It consists of three main parts: the Lexer, the Parser and the Semantic Analyzer. The Lexer decomposes the input source code into Tokens, the basic units of source code, which include keywords, identifiers, constants and operators. The Parser converts the Token sequence into an abstract syntax tree (AST) according to the grammar rules; the AST is a hierarchical representation of the source code that shows its syntactic structures, such as variables, functions and control flow, in tree form. The Semantic Analyzer then checks the AST for semantic errors and generates the LLVM IR code.

B. Intermediate Code LLVM IR

LLVM IR is LLVM's intermediate representation: a low-level, machine-oriented representation similar to assembly language. LLVM IR code consists of a series of basic blocks and instructions. A basic block is a sequence of instructions containing no branches except at its end, and it is the basic unit of control flow. Instructions are abstract representations of machine instructions, including arithmetic, logic and memory instructions. Because the intermediate code LLVM IR strongly decouples front-ends from back-ends, all front-ends generate the same intermediate code, as shown in Fig. 1.

Figure 1. LLVM framework (C/C++, Fortran and other front-ends all emit LLVM IR, which feeds x86, ARM and other back-ends)

Therefore, all front-ends can share LLVM IR, and all back-ends can share it as well. Compared with the traditional compiler architecture, assuming there are n front-ends and m back-ends, the traditional architecture needs to implement n × m compilers, while the LLVM framework only needs to implement n + m components, which greatly reduces the workload of developers.

C. LLVM Backend

The LLVM backend converts the intermediate representation LLVM IR to machine code. Different code must usually be generated for different processors, because the assembly and machine languages of different processors are incompatible; even when compatibility is considered at the instruction-set level, identical generated code may produce different behavior on each architecture [3]. The LLVM backend can generate machine code for a variety of architectures, including x86, ARM and MIPS, which differ in their instruction sets, registers and features. The back-end flow is shown in Fig. 2.

Figure 2. LLVM back-end process

The LLVM backend plays a very important role in the compilation process, since it converts code written in a high-level language into machine language that the target machine can understand and execute. During this process, the backend performs many different tasks, including code optimization, register allocation and instruction selection.

During code generation, the LLVM backend first converts the LLVM IR into the assembly language of the target architecture. This step, called instruction selection, translates the IR into a sequence of instructions for the target; the instruction selector picks appropriate instructions based on the instruction set and registers of the target architecture, and the process is highly optimized so that the generated code is compact and efficient.

After instruction selection, the LLVM backend converts the assembly code into machine code. This involves two main steps: register allocation and code optimization. In the register allocation phase, the backend assigns variables to registers so that they can be accessed faster. In the code optimization phase, it optimizes the generated machine code to reduce code size and runtime.

The LLVM backend also supports several optimization levels that control how aggressively code is optimized during generation. By default it uses a medium optimization level, which yields efficient code while keeping compilation relatively fast; applications that need to maximize code performance can use higher optimization levels.

In addition to basic code generation, the LLVM backend provides a number of other important features, including debug information generation, exception handling and basic block placement. These features help developers better understand the generated code and provide additional functionality and protection for applications.

III. LLVM TOOLCHAIN

The LLVM framework contains a set of tools, libraries and common interfaces for building compilers and optimizers, each with its own functionality. The LLVM toolset provides a complete infrastructure of compiler utilities for arbitrary programming languages [4], and offers developers and compiler researchers rich functionality for language front-ends, code generation, code optimization and debugging.

The toolset includes the LLVM core libraries, the Clang C/C++ compiler, the opt optimizer, the LLVM backend code generator llc, and others, as shown in Fig. 3, covering front-end, intermediate-code and back-end tools from left to right.

Figure 3. LLVM tools (Clang; opt; llc, llvm-as, llvm-dis, llvm-link, ...)

Clang is a C, C++ and Objective-C compiler front-end built on the LLVM architecture, designed to provide efficient code generation and a good user experience. Unlike other traditional C and C++ compilers, Clang is modular in design, applies modern compiler technology, can compile large code bases quickly, and offers better error diagnostics and language support. Clang is designed to be easy to use and extensible, and can be used as a standalone tool or integrated with the rest of the LLVM toolchain. It also supports many compiler plug-ins and extensions, such as static analysis and code coverage, which can further improve code quality and maintainability. Clang also acts as the compiler driver, integrating all the necessary libraries and tools and freeing the user from the separate tools involved in the various stages of compilation [5]. Due to its good performance and flexible architecture, Clang has become the default compiler of many important open-source projects, such as LLVM, FreeBSD and macOS.

Opt is the optimizer of the LLVM framework: it accepts LLVM IR as input, produces optimized LLVM IR as output, and can perform various optimization operations on LLVM IR.
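A minimal opt session along these lines might look as follows (a sketch assuming an LLVM installation; `-Xclang -disable-O0-optnone` is needed because Clang otherwise attaches the `optnone` attribute at -O0, which makes opt skip the function):

```shell
cat > sum.c <<'EOF'
int sum(int a, int b) { return a + b; }
EOF

# Unoptimized IR keeps the parameters in stack slots (alloca).
clang -S -emit-llvm -O0 -Xclang -disable-O0-optnone sum.c -o sum.ll
grep -c alloca sum.ll

# The mem2reg pass promotes those stack slots to SSA values;
# opt reads LLVM IR and writes optimized LLVM IR.
opt -S -passes=mem2reg sum.ll -o sum_opt.ll
grep -c alloca sum_opt.ll || true   # 0: no allocas remain
```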
LLVM IR is a low-level intermediate representation that LLVM compilers and other tools use to generate target code and binaries. The opt tool can analyze and optimize LLVM IR, applying various optimization techniques. It supports multiple optimization levels and optimization targets, such as instruction-level, function-level and module-level optimization. Users can specify the desired optimization level and target via command-line options, and can customize optimization policies via a plug-in mechanism. Beyond optimization, opt also supports LLVM IR conversion, analysis and presentation, such as converting LLVM IR to a graphical representation in dot format for easy visualization and analysis. The opt tool is an important part of the LLVM toolchain and is widely used in compiler and code-optimization work.

The llc tool serves the back-end of the LLVM framework: it accepts LLVM IR as input, produces assembly code as output, and can compile generic LLVM IR into assembly code for a target machine of a specific architecture. llc supports multiple target platforms, such as x86, ARM and MIPS, and generates the corresponding target code according to user requirements. It can apply various optimization techniques to improve program performance and efficiency, such as instruction-level, function-level and module-level optimization; the user can specify the desired optimization level and target through command-line options and customize the optimization strategy through a plug-in mechanism. Besides compilation, llc also supports LLVM IR conversion, analysis and presentation, such as converting LLVM IR into a graphical dot representation for visual analysis. The llc tool is an important part of the LLVM toolchain and is widely used in compiler and code-generation work.

IV. MAIN APPLICATIONS OF LLVM

LLVM is a widely used compiler framework that, thanks to its modular, portable nature, currently underpins a large share of compilers, including many vendor compilers [6]. It is used in many different areas, including compiler back-end porting, static analyzers, virtual machines, debuggers and code generation tools.

A. Compiler Back-End Porting

LLVM was originally developed as a compiler framework, so one of its main applications is building compilers. LLVM provides an IR-based compiler architecture that allows it to be used to build different kinds of compilers. LLVM back-end porting is the process of applying the LLVM compiler framework to different CPU architectures and operating systems; LLVM's modular back-end architecture allows it to be ported to new CPUs and operating systems with relative ease. Back-end porting with LLVM has been studied and applied by many scholars.

In [7], researchers advocated integrating a nanoMIPS assembler into the LLVM infrastructure, with a specific focus on variable-length instructions. The nanoMIPS instruction set architecture supports instructions of varying lengths, from 16 to 48 bits. The primary objective is to compress frequently used instructions into 16 bits, to encode 32-bit immediate values efficiently using 48-bit instructions, and to combine specific instructions into new 32-bit instructions. To facilitate the integration, the application binary interface (ABI) was modified, yielding an average code-size reduction of approximately 20% compared with the previous microMIPS R6 architecture. nanoMIPS uses many of the same operand types as MIPS and microMIPS, while also introducing some new ones. Each nanoMIPS instruction includes six bits, the opcode, which uniquely identify the instruction; the remaining bits encode information such as registers, memory locations or immediate values.

In [8], academics developed PERCIVAL, a posit-capable RISC-V core based on CVA6 that leverages the LLVM framework. The core can execute all posit instructions, including quire fusion operations, and integrates Xposit, a RISC-V extension designed specifically for posit instructions. PERCIVAL and the Xposit extension are a groundbreaking achievement in that they combine standard posit addition, subtraction and multiplication with the necessary fused operations. The implementation also includes dedicated hardware for posit logarithmic approximation in division and square-root operations, as well as comprehensive support for comparison operations and conversions between integers and posits. The experimental results demonstrate the remarkable accuracy improvement achieved by the quire fusion instructions: in general matrix multiplication, these instructions reduce accuracy errors by up to four orders of magnitude, significantly enhancing the precision of dot-product calculations. Furthermore, performance comparisons reveal that posits are on par with single-precision floating-point numbers in speed while exhibiting better timing characteristics than double-precision floating-point numbers.

In [9], experts introduced a novel approach to incorporating the OpenMP 5.0 metadirective specification into the LLVM/Clang framework. The approach enables dynamic extensions that support user-defined conditions; by evaluating these conditions dynamically, a collection of adaptive algorithms becomes available, offering potential performance enhancements for various applications. To extend Clang's functionality, specific directives were implemented within the LLVM framework to support OpenMP metadirectives, enabling compile-time selection and compiler adaptability and making code portable across diverse architectures. Additionally, a new semantic implementation was introduced to enable runtime directive selection using LLVM, particularly for user-defined contexts. To assess the impact of these directives, the Rodinia benchmarks were modified and run; the results offer end users valuable insights and guidance for applying these features to real-world applications.

In [10], researchers conducted a comprehensive analysis of the back-end migration mechanism of the FT_MX architecture within the LLVM framework. They examined the fundamental
architecture and functions of LLVM and provided a step-by-step procedure for implementing the back-end migration for FT_MX. Using the backend migration mechanism of the LLVM compiler system, the researchers completed the description of each attribute of the FT_MX architecture and integrated FT_MX into the LLVM compiler's back-end, enabling the generation of accurate assembly code tailored to FT_MX. FT_MX is a DSP chip developed independently by the National University of Defense Technology. It features an 11-issue very long instruction word (VLIW) structure, with 5 scalar and 6 vector issue slots, and supports instructions with a width of 40 or 80 bits. Its functional components include an instruction fetch unit, a vector processing unit (VPU), a scalar processing unit (SPU), a vector memory unit (AM), and a direct memory access unit (DMA). The accompanying FT_MX instruction set provides scalar and vector instructions to facilitate efficient computation on the processor.

In [11], academics introduced the design of a simplified 32-bit reduced instruction set computer (RISC) CPU, written in Verilog. Unlike many embedded RISC architectures, which rely on GCC ports, this CPU has a compiler back-end written for the LLVM compiler infrastructure project. Code generated by the LLVM back-end for the custom CPU was simulated successfully using Cadence Incisive, and the CPU was synthesized using Synopsys Design Compiler.

B. LLVM Tool Usage and Optimization

LLVM can be used to build static analyzers that check code for potential problems and errors. LLVM provides static analysis tools, such as the Clang Static Analyzer, that can help developers find problems while writing code.

LLVM can also be used to build virtual machines that interpret and execute code at runtime. LLVM provides a just-in-time compiler that can compile IR to machine code at runtime, which allows LLVM to be used to build efficient virtual machines, such as JIT compilers and script interpreters.

LLVM can be used to build debuggers that help developers locate and fix problems when debugging applications. LLVM provides many tools and APIs for analyzing and debugging code, such as the LLVM-based LLDB debugger.

LLVM can also be used to build code generation tools that automate the code generation process, for example generating code for certain data structures and algorithms with LLVM. This makes code generation more efficient and reliable, while also reducing code errors and redundancy.

The LLVM toolchain and its optimizations have been studied and applied by many scholars.

In [12], academics proposed a methodology to optimize the dynamical cluster approximation application DCA++ for the ARM A64FX processor using LLVM-based tools. The study focused on adapting the code to the new architecture and generating efficient single instruction/multiple data (SIMD) instructions for the Scalable Vector Extension instruction set. To enhance performance, manual tuning was performed using LLVM tools: code parallelization was optimized through OpenMP SIMD, the code was refactored, transformations were applied to enable SIMD optimization, and appropriate libraries were selected to ensure optimal performance on the A64FX processor. With these code modifications, execution speed improved by 98%, reaching a remarkable 78 GFlops on the A64FX processor. This work demonstrates the effectiveness of LLVM-based tools for tuning DCA++ and harnessing the capabilities of the ARM A64FX processor.

In [13], researchers designed an efficient graph-coloring algorithm for interference-graph coloring in the register allocation phase. The algorithm was run against several SPEC CPU 2017 benchmarks and compared with LLVM's default register allocator (the Greedy register allocator); for most benchmarks, the approximate algorithm performed as well as or better than the Greedy allocator.

In [14], experts created PRSafe, a domain-specific, C-like language for eBPF tools based on the LLVM framework. General-purpose programming languages such as C++ and Python suffer resource-management and input errors stemming from Turing completeness. For the language implementation, they replaced the Clang tooling, writing their own lexer and parser with Lex and Bison, while using the LLVM compiler infrastructure for code generation and optimization. PRSafe is designed to prohibit, both syntactically and semantically, the creation of Turing-complete programs.

In [15], academics introduced an innovative method that uses deep reinforcement learning to optimize static code. The approach eliminates the need for manual feature engineering and instead learns through observation and interaction with the environment. It leverages the LLVM IR generated from the source code to evaluate and rank different optimization strategies; by training a predictive model, the system can determine the effectiveness of various optimizations and generate optimization strategies tailored to the LLVM IR. Unlike traditional approaches that require human expertise for feature engineering, this approach autonomously learns the effects of optimizations and their corresponding rewards, providing an automated, data-driven solution for static code optimization.

In [16], academics proposed an application-driven, use-case-based binary specialization approach that leverages explicit guidance from the application or user for targeted optimizations and changes. By converting machine-code functions directly to LLVM IR and optimizing them in LLVM, their BinOpt library uses information specified by the application to generate new specialized code for the functions and integrate it back into the original code. Applying this technique to already-optimized code, the paper shows that significant performance improvements can be achieved with very little optimization-time overhead. The focus of the research was to develop a novel and robust library to perform application-driven optimization and
specialization at the binary level and to demonstrate significant performance improvements.

In [17], experts studied the errors of the LLVM compiler when processing low-level language features. In particular, it is error-prone when dealing with low-level operations such as raw pointers and integer-pointer conversions in C, C++ and Rust. The root of the problem is the compiler's difficulty in preserving the semantics of low-level programs while trying to implement advanced memory optimizations. In addition, the memory model of LLVM IR lacks a clear specification, leading to differences in how different compiler developers understand edge-case semantics. To solve these problems, a new memory model is proposed and formally described; its goal is to fix known compilation errors associated with the memory model and to enable new optimizations that were previously unavailable. By implementing the new model, the researchers demonstrated that the LLVM compiler's problems in handling low-level language features can be solved without compromising the quality of the generated code.

In [18], researchers introduced Mull, an open-source mutation testing tool built on the LLVM framework. Mull leverages LLVM IR as a low-level intermediate representation and uses the LLVM JIT for dynamic compilation. The use of LLVM IR and JIT compilation gives Mull two important advantages: language independence and precise control over the compilation and execution of test programs and their mutants. Mull can work with code in any programming language that compiles to LLVM IR, including C, C++, Rust and Swift. By manipulating LLVM IR directly, Mull significantly reduces the effort required for mutation generation: only the modified segments of IR need to be recompiled, so mutated programs are processed faster. According to the paper, Mull is currently the only mutation testing tool offering these capabilities for compiled programming languages. The paper explains the Mull algorithm and implementation in detail, highlights the current limitations of the tool, and presents evaluation results from real-world projects such as RODOS, OpenSSL and LLVM. The main objective of the study is to present Mull as a practical and efficient mutation testing tool and to investigate its application and effectiveness across diverse real-world projects.

In [19], researchers studied the role of undefined behavior (UB) in the design of optimizing compiler IRs. The IR is at the heart of an optimizing compiler and should make transformations easy while supporting efficient and accurate static analysis. The paper explores how the IRs of optimizing compilers such as those from GCC, LLVM, Intel and Microsoft support different forms of UB, and explains their role in reflecting the semantics of UB-heavy programming languages and in modeling low-level operations such as memory stores. The current semantics of LLVM IR do not justify some important optimizations, which leads to persistent errors. The paper presents solutions that address the problems found in LLVM's IR and shows that most optimizations remain sound while some desirable new transformations become possible. The solution does not degrade compile time or the performance of the generated code.

In [20], academics introduced Alive-FP, a framework for the automatic verification of peephole optimizations based on floating-point numbers. Peephole optimization is a technique for optimizing and normalizing code, but it is prone to errors. Previous research proposed Alive to verify integer-based peephole optimizations and generate C++ code for LLVM; this paper extends that framework to Alive-FP, which validates floating-point peephole optimizations in LLVM. Alive-FP handles floating-point optimizations involving signed zeros, NaNs and infinities while keeping accuracy intact. The paper provides a variety of encodings to address the undefined behaviors and inadequate specifications in the LLVM language reference manual, and translates all optimizations in this category into Alive-FP; in the process, the authors found seven incorrect optimizations in LLVM. The goal of the study was to provide a reliable verification method for ensuring the correctness of floating-point-based peephole optimizations and to improve LLVM's optimization algorithms.

V. CONCLUSION

LLVM is a modular compiler framework that supports multiple languages and can be used in a variety of settings such as operating system kernels, embedded devices, and GPUs. The most common application scenario for LLVM is compiler development, where it can be used to build compilers and interpreters for a variety of programming languages; the LLVM framework makes compiler development easier and more efficient. LLVM can also be applied to code optimization: during the optimization phase of the compiler, LLVM can use a range of optimization algorithms and techniques to improve the performance and execution efficiency of programs. It is worth mentioning that the use of LLVM in machine learning is gradually increasing: LLVM is used as a just-in-time (JIT) compiler to help speed up the execution of machine learning applications, and can also be used to accelerate the development of libraries or to optimize machine learning models.

REFERENCES

[1] Cooper, Keith D., and Linda Torczon. Engineering a Compiler. Elsevier, 2011.
[2] Lai, Q. K., et al. “A cross-architectural compilation analysis approach for the ideal performance spaces.” Computer Research and Development 58.03 (2021): 668-680.
[3] Islam, Samra, et al. “Analysis on State of Art Relationship between Compilers and Multi-Core Processor.” 2020 IEEE 3rd International Conference on Computer and Communication Engineering Technology (CCET). IEEE, 2020.
[4] Xie, Xiaoyuan, et al. “Towards Understanding Tool-chain Bugs in the LLVM Compiler Infrastructure.” 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 2021.
[5] LLVM Project. Clang: a C language family frontend for LLVM. Available at: http://clang.llvm.org, 2019.
[6] Clement, Valentin, and Jeffrey S. Vetter. “Flacc: Towards OpenACC support for Fortran in the LLVM Ecosystem.” 2021 IEEE/ACM 7th Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC). IEEE, 2021.
[7] Rakovic, Nemanja, Dragan Mladjenovic, and Miodrag Djukic. “Adding support for integrated nanoMIPS assembler to LLVM.” 2022 IEEE Zooming Innovation in Consumer Technologies Conference (ZINC). IEEE, 2022.
[8] Mallasén, David, et al. “PERCIVAL: Open-source posit RISC-V core
with quire capability.” IEEE Transactions on Emerging Topics in
Computing 10.3 (2022): 1241-1252.
[9] Mishra, Alok, Abid M. Malik, and Barbara Chapman. “Extending the
LLVM/Clang Framework for OpenMP Metadirective Support.” 2020
IEEE/ACM 6th Workshop on the LLVM Compiler Infrastructure in HPC
(LLVM-HPC) and Workshop on Hierarchical Parallelism for Exascale
Computing (HiPar). IEEE, 2020.
[10] Deng, Ping, et al. “Back-end porting of FT_MX based on LLVM
compilation architecture.” MATEC Web of Conferences. Vol. 336. EDP
Sciences, 2021.
[11] Goldberg, Connor Jan. “The Design of a Custom 32-bit RISC CPU and
LLVM Compiler Backend.” (2017).
[12] Huber, Joseph, et al. “A case study of LLVM-based analysis for
optimizing SIMD code generation.” OpenMP: Enabling Massive Node-
Level Parallelism: 17th International Workshop on OpenMP, IWOMP
2021, Bristol, UK, September 14–16, 2021, Proceedings 17. Springer
International Publishing, 2021.
[13] Das, Dibyendu, Shahid Asghar Ahmad, and Venkataramanan Kumar.
“Deep learning-based approximate graph-coloring algorithm for register
allocation.” 2020 IEEE/ACM 6th Workshop on the LLVM Compiler
Infrastructure in HPC (LLVM-HPC) and Workshop on Hierarchical
Parallelism for Exascale Computing (HiPar). IEEE, 2020.
[14] Mahadevan, Sai Veerya, Yuuki Takano, and Atsuko Miyaji. “PRSafe:
Primitive Recursive Function based Domain Specific Language using
LLVM.” 2021 International Conference on Electronics, Information, and
Communication (ICEIC). IEEE, 2021.
[15] Mammadli, Rahim, Ali Jannesari, and Felix Wolf. “Static neural compiler
optimization via deep reinforcement learning.” 2020 IEEE/ACM 6th
Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC)
and Workshop on Hierarchical Parallelism for Exascale Computing
(HiPar). IEEE, 2020.
[16] Engelke, Alexis, and Martin Schulz. “Robust Practical Binary
Optimization at Run-time using LLVM.” 2020 IEEE/ACM 6th
Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC)
and Workshop on Hierarchical Parallelism for Exascale Computing
(HiPar). IEEE, 2020.
[17] Lee, Juneyoung, et al. “Reconciling high-level optimizations and low-
level code in LLVM.” Proceedings of the ACM on Programming
Languages 2.OOPSLA (2018): 1-28.
[18] Denisov, Alex, and Stanislav Pankevich. “Mull it over: mutation testing
based on LLVM.” 2018 IEEE international conference on software
testing, verification and validation workshops (ICSTW). IEEE, 2018.
[19] Lee, Juneyoung, et al. “Taming undefined behavior in LLVM.” ACM
SIGPLAN Notices 52.6 (2017): 633-647.
[20] Menendez, David, Santosh Nagarakatte, and Aarti Gupta. “Alive-FP:
Automated verification of floating point based peephole optimizations in
LLVM.” Static Analysis: 23rd International Symposium, SAS 2016,
Edinburgh, UK, September 8-10, 2016, Proceedings 23. Springer Berlin
Heidelberg, 2016.
