Functional programming

Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme,[3][4][5][6] Clojure, Wolfram Language,[7][8] Racket,[9] Erlang,[10][11][12] Elixir,[13] OCaml,[14][15] Haskell,[16][17] and F#.[18][19] Lean is a functional programming language commonly used for verifying mathematical theorems.[20] Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web,[21] R in statistics,[22][23] J, K and Q in financial analysis, and XQuery/XSLT for XML.[24][25] Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values.[26] In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#,[27] Kotlin,[28] Perl,[29] PHP,[30] Python,[31] Go,[32] Rust,[33] Raku,[34] Scala,[35] and Java (since Java 8).[36]
History

The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation,[37] showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.[38]
Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms.[39] This forms the basis for statically typed functional programming.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at the Massachusetts Institute of Technology (MIT).[40] Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions.[41] Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.[42]
Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language.[43] It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on mutating list structure and similar imperative features.
Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A
Programming Language (ISBN 9780471430148). APL was the primary influence on John
Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s,
Arthur Whitney, who had previously worked with Iverson, created K, which is used
commercially in financial industries along with its descendant Q.
In the mid-1960s, Peter Landin invented the SECD machine,[44] the first abstract machine for a functional programming language,[45] described a correspondence between ALGOL 60 and the lambda calculus,[46][47] and proposed the ISWIM programming language.[48]
John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs".[49] He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality.[citation needed] Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.
The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL.[50] NPL was based on Kleene recursion equations and was first introduced in their work on program transformation.[51] Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope.[52] ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.
In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.[citation needed]
The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. Because Miranda was proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990.
More recently, functional programming has found use in niches such as parametric CAD via the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.[53]
Functional programming continues to be used in commercial settings.[54][55][56]
Concepts

A number of concepts and paradigms are specific to functional programming,[57] and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.[58]
Higher-order functions

Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which maps a function to another function, its derivative.
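A minimal sketch of higher-order functions in Haskell (illustrative; the function names are assumptions, not from the article):

-- 'twice' takes a function and returns a new function that applies it two times.
twice :: (a -> a) -> (a -> a)
twice f = f . f

main :: IO ()
main = do
  print (map (* 10) [1, 2, 3])   -- map takes a function as an argument: [10,20,30]
  print (twice (+ 3) 1)          -- a function built from another function: 7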
Pure functions

Main article: Pure function

Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code.
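A minimal sketch in Haskell of the distinction (illustrative, not from the article): a pure function depends only on its arguments, while an impure action reads and writes shared state.

import Data.IORef

-- Pure: always returns the same result for the same argument, so calls can be
-- cached, reordered, or eliminated by a compiler.
square :: Int -> Int
square x = x * x

-- Impure: the result depends on, and changes, external state (a side effect).
impureIncrement :: IORef Int -> IO Int
impureIncrement ref = do
  modifyIORef ref (+ 1)
  readIORef ref

main :: IO ()
main = do
  print (square 4)                     -- always 16
  counter <- newIORef 0
  impureIncrement counter >>= print    -- 1
  impureIncrement counter >>= print    -- 2: same call, different result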
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations.[59] Fortran 95 also lets functions be designated pure. C++11 added the constexpr keyword with similar semantics.
Strict versus non-strict evaluation

Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, consider the following expression.
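A plausible sketch in Haskell of such an expression (an assumption, not necessarily the article's original): a four-element list whose third element divides by zero if evaluated, passed to the length function.

-- Under lazy evaluation, length never forces the list's elements, so this
-- prints 4; a strict language would evaluate 1 `div` 0 first and fail.
main :: IO ()
main = print (length [2 + 1, 3 * 2, 1 `div` 0, 5 - 4])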
This expression fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
The usual implementation strategy for lazy evaluation in functional languages is graph reduction.[65] Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.
Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams.[2] Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis.[66] Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.[67]
Type systems

Main article: Type system

Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with.[68][69][70][71] But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. CompCert is a compiler for a subset of the C programming language that is written in Coq and formally verified.[72]
A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience.[73] GADTs are available in the Glasgow Haskell Compiler, in OCaml[74] and in Scala,[75] and have been proposed as additions to other languages including Java and C#.[76]
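A minimal GADT sketch in Haskell (illustrative; the type and function names are assumptions): each constructor fixes the type index, so a well-typed evaluator needs no runtime checks on the kind of expression it receives.

{-# LANGUAGE GADTs #-}

-- Each constructor refines the result type of the expression it builds.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The evaluator is total: the types guarantee Add only ever sees integers.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))  -- 3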
Referential transparency

Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chance of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.[77]

Consider the C assignment statement x = x * 10: this changes the value assigned to the variable x. Let us say that the initial value of x was 1; then two consecutive evaluations of the variable x yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives the program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent.
Data structures

Main article: Purely functional data structure

Purely functional data structures are often represented differently from their imperative counterparts.[78] For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random-access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created.[79]
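A minimal Haskell sketch of persistence (illustrative), using Data.Map from the containers package: inserting into a persistent map returns a new version that shares structure with the old one, and the old version remains usable unchanged.

import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  let v1 = Map.fromList [(1, "a"), (2, "b")]
      v2 = Map.insert 3 "c" v1   -- structurally shares most of v1's nodes
  print (Map.toList v1)          -- [(1,"a"),(2,"b")]: the old version persists
  print (Map.toList v2)          -- [(1,"a"),(2,"b"),(3,"c")]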
Comparison to imperative programming

Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side effects and provides referential transparency.

The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable "result".
Simulating state

There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.

The pure functional programming language Haskell implements them using monads, derived from category theory.[80] Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).[81]
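A minimal sketch of the idea in Haskell (illustrative; it uses the State monad from the transformers package, and the account functions are assumptions drawn from the bank-balance example above): the code reads like an imperative sequence, yet every function involved is pure.

import Control.Monad.Trans.State (State, get, put, runState)

-- Deposit adds to the threaded balance.
deposit :: Int -> State Int ()
deposit amount = do
  balance <- get
  put (balance + amount)

-- Withdraw succeeds only if funds suffice, updating the balance accordingly.
withdraw :: Int -> State Int Bool
withdraw amount = do
  balance <- get
  if amount <= balance
    then put (balance - amount) >> return True
    else return False

main :: IO ()
main = print (runState (deposit 50 >> withdraw 120) 100)   -- (True,30)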
Functional languages also simulate state by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.[82]
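A minimal Haskell sketch of explicit state passing (illustrative; the names are assumptions): each call takes the current state and returns a result together with a new state, and earlier states remain available unchanged.

-- The "state" is an explicit argument; a new state is returned alongside the result.
nextLabel :: Int -> (String, Int)
nextLabel counter = ("item-" ++ show counter, counter + 1)

main :: IO ()
main = do
  let s0 = 0
      (a, s1) = nextLabel s0
      (b, s2) = nextLabel s1
  print (a, b, s2)   -- ("item-0","item-1",2)
  print s0           -- 0: the earlier state was never mutated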
Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.[citation needed]
Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.[citation needed]
Efficiency issues

Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal.[83] This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree).[84] However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game.[85] For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.[86] Even though the copying implied by persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data.[87] Rust distinguishes itself by its approach to data immutability, which involves immutable references[88] and a concept called lifetimes.[89]
Immutable data with separation of identity and state, together with shared-nothing schemes, can also potentially be better suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usually atomic and this allows eliminating the need for locks. This is how, for example, java.util.concurrent classes are implemented, where some of them are immutable variants of the corresponding classes that are not suitable for concurrent use.[90] Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message-passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue).[91][92] This approach is common in Erlang/Elixir and Akka.
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993[66] discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008[93] give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation making extensive use of dereferenced code and data perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles).[citation needed]
Abstraction cost

Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure:

● (even? 5)
● (.equals (mod 5 2) 0)

When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as:

(defn even?