Time Complexity Computation of An Algorithm Using Asymptotic Notation
Almeda, Dexter
Bercasio, Charlene
Loto, Renz Ramzel
BSCS 2-1D
Group 1
INTRODUCTION
Normally, the process of computing computational complexity is as follows: given two functions (running times) and the formulas for the different asymptotic notation types (in this case, three of them, namely big O, big omega, and big theta), we compute the positive values of the function variable, say n, and of the constant C.
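The check described above can be sketched in Java. The sketch below is illustrative and not part of the project's code: it spot-checks the big-O condition f(n) <= C * g(n) over a finite range of n starting at a chosen n0 (a numerical check, not a proof; the class and method names are our own assumptions).

```java
// Numerically spot-checks the big-O condition f(n) <= C * g(n)
// for every n in [n0, limit]. Illustrative sketch only.
public class BigOCheck {
    public static boolean holds(java.util.function.LongUnaryOperator f,
                                java.util.function.LongUnaryOperator g,
                                long c, long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            if (f.applyAsLong(n) > c * g.applyAsLong(n)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Example: f(n) = 3n + 10 is O(n), since with C = 4 and n0 = 10
        // we have 3n + 10 <= 4n for all n >= 10.
        boolean bigO = holds(n -> 3 * n + 10, n -> n, 4, 10, 1_000);
        System.out.println("3n + 10 is O(n) up to n = 1000: " + bigO);
    }
}
```

Big omega and big theta can be checked the same way by reversing the inequality, or by requiring both bounds at once.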
With the application Time Complexity Computation of an Algorithm using Asymptotic Notation, we aim to implement this process in software for better output and a lower cost in time and effort.
PROJECT OBJECTIVES
General Objective:
This project aims to apply the theories we learned from our assigned topic, Asymptotic Notations, through the creation of a program that performs the computational-complexity computation for the running time.
Specific Objectives:
To lessen the effort and time spent on computational-complexity computation through the use of an application.
ALGORITHM USED
In this project, we will test the suitability of our algorithmic approach to the solution of the Time Complexity Computation of an Algorithm using Asymptotic Notation. To do this, we will use the Java programming language and integrate a graphical user interface (GUI) and file handling to provide a friendlier and better environment for the user. Brute force is the algorithm that the proponents implemented to complete this project: it gathers what we need for the computation, particularly the code and the number of inputs. We will then use string matching to identify the code, and apply the formulas to compute the information the user needs, specifically the running time. By performing these algorithms, we hope to benefit the users, namely students and faculty, through a user-friendly and helpful program.
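The string matching step mentioned above can be sketched as the classic brute-force matcher: try every possible starting position in the text and compare character by character. The class and method names below are illustrative, not taken from the project's source.

```java
// A minimal sketch of brute-force string matching, the technique the
// proponents describe for locating code fragments in the user's input.
public class NaiveMatch {
    public static int indexOf(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        for (int i = 0; i + m <= n; i++) {          // try every shift
            int j = 0;
            while (j < m && text.charAt(i + j) == pattern.charAt(j)) j++;
            if (j == m) return i;                   // full match found
        }
        return -1;                                  // no occurrence
    }

    public static void main(String[] args) {
        // Locate a loop condition inside a line of code.
        System.out.println(indexOf("for (int i = 0; i < n; i++)", "i < n"));
    }
}
```

This matcher itself runs in O(n * m) time in the worst case, which is acceptable for the short code fragments the application handles.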
REVIEW OF RELATED LITERATURE
The running time of an algorithm depends on the speed of the computer, the programming
language, and the compiler that translates the program from the programming language into
code that runs directly on the computer, as well as other factors. First, we determine how long
the algorithm takes in terms of the size of its input. The second idea is that we focus on how
fast this function grows with the input size; we call that the rate of growth of the running time.
To keep things manageable, we simplify the function to distill the most important part and cast
aside the less important parts. [2]
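The simplification described above can be made concrete with a small numerical experiment. The polynomial 6n^2 + 100n + 300 is our own illustrative pick: the 6n^2 term dominates once n is large enough (already at n >= 20, 6n^2 exceeds 100n + 300), so the function grows like n^2.

```java
// Demonstrates why lower-order terms are cast aside: for
// f(n) = 6n^2 + 100n + 300, the ratio f(n) / 6n^2 approaches 1
// as n grows, so only the n^2 term matters asymptotically.
public class GrowthDemo {
    public static long f(long n) { return 6 * n * n + 100 * n + 300; }

    public static void main(String[] args) {
        for (long n : new long[]{10, 100, 1000}) {
            double ratio = (double) f(n) / (6.0 * n * n);
            System.out.printf("n = %4d  f(n)/6n^2 = %.3f%n", n, ratio);
        }
    }
}
```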
Improving the Automatic Evaluation of Problem Solutions in Programming
Determining a problem's asymptotic complexity is important in theoretical computer
science; there are many problems (such as matrix multiplication) whose complexities remain
unknown. Often, a computer scientist will discover an algorithm for a problem and then show
that it is asymptotically optimal; this is usually considered a landmark result in the study of that
problem. Asymptotic analysis is the most frequently used technique to quantify the
performance of an algorithm. Its name refers to the fact that this form of analysis neglects the
exact amount of time or memory that the algorithm uses on specific cases, but is concerned
only with the algorithm's asymptotic behaviour, that is, how the algorithm performs in the limit of
infinite problem size. [3]
Formalization of Asymptotic Notations in Higher-Order-Logic
The study is the first set-theoretic formalization of asymptotic notations in the open
literature and thus allows us to utilize interactive theorem proving in a very interesting new
direction.
The main idea is to model the complexity of the algorithm in higher-order logic and then
reason about its asymptotic bounds using the formalization of asymptotic notations; in this way,
the accuracy of the analysis can be guaranteed. For illustration purposes, the thesis presents the
analyses of two algorithms: insertion sort and uniqueness of array elements. [4]
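Insertion sort, one of the two algorithms analysed in the cited thesis, is a convenient running example for asymptotic analysis: it performs Theta(n^2) comparisons in the worst case and Theta(n) on already-sorted input. A standard Java version (not taken from the thesis) is:

```java
// Standard insertion sort: worst case Theta(n^2), best case Theta(n).
public class InsertionSort {
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {   // shift larger elements right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                  // insert key into place
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
    }
}
```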
Lexical Analysis
Lexical analysis is the process of analysing a stream of individual characters, normally
arranged as lines, into a sequence of lexical tokens to feed into the parser. The theory of lexers
and parsers is most often used in compilers. The lexical syntax is usually a regular language,
with rules consisting of regular expressions that define the set of possible character
sequences used to form individual tokens or lexemes. A lexer recognizes strings, and
for each kind of string found the lexical program takes an action, most simply producing a token.
[5]
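The process described above can be sketched with Java's built-in regular-expression support. The token classes below (NUMBER, IDENT, OP) are illustrative choices of ours, not the token set of any particular compiler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A minimal lexer sketch: regular expressions define each token class,
// and the matcher turns a character stream into a token sequence.
public class TinyLexer {
    private static final Pattern TOKEN = Pattern.compile(
        "(?<NUMBER>\\d+)|(?<IDENT>[A-Za-z_]\\w*)|(?<OP>[+\\-*/=<>;])|(?<WS>\\s+)");

    public static List<String> lex(String input) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(input);
        while (m.find()) {
            if (m.group("WS") != null) continue;       // skip whitespace
            if (m.group("NUMBER") != null) tokens.add("NUMBER:" + m.group());
            else if (m.group("IDENT") != null) tokens.add("IDENT:" + m.group());
            else tokens.add("OP:" + m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(lex("count = count + 1;"));
    }
}
```

For the input "count = count + 1;" this produces the tokens IDENT:count, OP:=, IDENT:count, OP:+, NUMBER:1, OP:;.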
SCREENSHOTS
BIBLIOGRAPHY
[1] Dept. of Computing and Information Sciences, Kansas State University, Manhattan, KS 66506, USA, January 2008.
[2]
[5] 2012