IR Data Collection
Research Question:
Hypothesis:
If the debugging methods of programming applications are evaluated, then the most user-friendly
applications will provide the user with the best experience.
Variables:
The dependent variable is measured by a scale that evaluates each program's debugging methods by
examining how it reacts to logical errors and semantic errors, and how long its run time is. The
independent variable is the application that runs the code. The code remains constant throughout
the experiment.
Procedures:
1. Download each of the programming applications being evaluated.
2. Once they have fully downloaded, type up code that simulates rolling dice, a tic-tac-toe
game, and rock-paper-scissors (if not familiar with code, use an online tutorial).
3. Build semantic and logical errors into each of these simply coded games.
4. Run each program 3 times and write down observations such as how fast the application
picked up the errors, how it helped fix them, and how fast its run time was.
5. Give each trial a rating on a scale from 1 to 5 using the debugging scale below.
6. Repeat these steps until the debugging methods are evaluated and rated for each
programming application.
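As a sketch of the kind of test program described in step 2, the dice-rolling excerpt below is a
hypothetical reconstruction (not the exact code used in the trials) with a deliberate logical
error of the sort each application was asked to catch:

```python
import random

def roll_dice(sides=6):
    """Intended to return a roll from 1 to `sides`."""
    # Deliberate logical error for the debugger to surface:
    # randint(0, sides - 1) can return 0 and can never return `sides`.
    return random.randint(0, sides - 1)

if __name__ == "__main__":
    rolls = [roll_dice() for _ in range(1000)]
    # A correct implementation would report min 1 and max 6.
    print("min:", min(rolls), "max:", max(rolls))
```

Because the program still runs without crashing, only its output is wrong, an application's
feedback quality determines how quickly a beginner finds this kind of bug.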
Data:
C++ applications:

Program             | Trial | Code::Blocks | GNAT Programming Studio | CodeLite
Rolling Dice        | 1     | 4            | 2                       | 4
Rolling Dice        | 2     | 3            | 1                       | 3
Rolling Dice        | 3     | 3            | 1                       | 4
Tic-Tac-Toe         | 1     | 3            | 5                       | 5
Tic-Tac-Toe         | 2     | 2            | 3                       | 5
Tic-Tac-Toe         | 3     | 3            | 3                       | 5
Rock-Paper-Scissors | 1     | 1            | 4                       | 3
Rock-Paper-Scissors | 2     | 3            | 5                       | 4
Rock-Paper-Scissors | 3     | 2            | 4                       | 5
Java applications:

Program             | Trial | JDK (Java Development Kit) | Eclipse | JUnit
Rolling Dice        | 1     | 3                          | 3       | 2
Rolling Dice        | 2     | 2                          | 3       | 1
Rolling Dice        | 3     | 3                          | 3       | 3
Tic-Tac-Toe         | 1     | 2                          | 5       | 4
Tic-Tac-Toe         | 2     | 2                          | 3       | 4
Tic-Tac-Toe         | 3     | 2                          | 2       | 3
Rock-Paper-Scissors | 1     | 4                          | 1       | 2
Rock-Paper-Scissors | 2     | 1                          | 5       | 5
Rock-Paper-Scissors | 3     | 3                          | 4       | 5
Python applications:

Program             | Trial | PyCharm | Eric | Eclipse with PyDev
Rolling Dice        | 1     | 4       | 3    | 5
Rolling Dice        | 2     | 4       | 5    | 4
Rolling Dice        | 3     | 3       | 5    | 3
Tic-Tac-Toe         | 1     | 4       | 4    | 5
Tic-Tac-Toe         | 2     | 4       | 4    | 5
Tic-Tac-Toe         | 3     | 4       | 4    | 4
Rock-Paper-Scissors | 1     | 5       | 4    | 5
Rock-Paper-Scissors | 2     | 4       | 3    | 4
Rock-Paper-Scissors | 3     | 4       | 2    | 4
Scale Used to Evaluate the Applications:
Rating 5:
- Gave very descriptive feedback and showed the user how to fix the error
Rating 4:
Rating 3:
- Run time was 20 to 30 seconds
Rating 2:
Rating 1:
- Errors were present, but the program didn’t explain them or highlight them
Rating 0:
The data shows the relations and trends throughout the evaluation process. Each excerpt
of code has logical and semantic errors that the application should be able to identify, and the
scale created measured all aspects needed to debug a program well. Overall, the language that
proved to be most user-friendly and debugged the program excerpts best was Python. Python is
usually referred to as one of the few languages that is easy for beginners to get started with,
yet incredibly powerful once those beginners improve and begin working on real-world projects.
As a whole, the Python applications earned ratings of mostly 4s and 5s, with some 3s. In terms of
consistency, Python maintained roughly the same rating no matter the complexity of the program.
That means that even when writing complicated programs later on, Python would still offer its
user-friendly, extensively detailed debugging methods to help the programmer. The Python
applications were all on roughly the same level as each other.
Eclipse with PyDev had a slight edge in rating over the other applications, mainly
because of its extensive features, such as code completion, integrated Python debugging, a
token browser, refactoring tools, and much more. Those features usually help more advanced
Python programmers. Eric was also a helpful application; it had a longer run time than the other
applications, but in that time it caught more errors. One criticism of Eric is that its long run
time becomes a problem when running large excerpts of code. PyCharm is a mixture of the other
two Python applications: its relatively fast run time makes up for the errors it misses, and the
feedback it gives is extremely helpful because it highlights the error, explains what went
wrong, and tells the user how to fix it.
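The ranking above can be checked by averaging the ratings in the Python data table. The snippet
below simply recomputes those averages (ratings copied from the table, assuming the column order
shown there):

```python
# Ratings copied from the Python data table: rows are Rolling Dice,
# Tic-Tac-Toe, and Rock-Paper-Scissors, three trials each.
ratings = {
    "PyCharm":            [4, 4, 3, 4, 4, 4, 5, 4, 4],
    "Eric":               [3, 5, 5, 4, 4, 4, 4, 3, 2],
    "Eclipse with PyDev": [5, 4, 3, 5, 5, 4, 5, 4, 4],
}

# Average each application's nine trial ratings.
averages = {app: sum(r) / len(r) for app, r in ratings.items()}
for app, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{app}: {avg:.2f}")
```

Under these numbers, Eclipse with PyDev averages about 4.33, PyCharm 4.00, and Eric about 3.78,
which matches the "slight edge" described above.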
The second most helpful applications are the Java ones. Java may seem
complicated at first because of its error messages, but with more coding they become easier to
understand. JDK and JUnit earned higher ratings than Eclipse overall, but Eclipse remained
consistent even as the excerpts of code became more complicated. JUnit earned higher ratings as
the code became more complex, which shows that JUnit is the best choice when programming
something advanced and larger. JUnit can test one block of code at a time rather than waiting
for the entire program to be completed before running a test. You can test and then code,
meaning there is very little doubt about the final functionality of a program. JDK's rating
decreased as the complexity increased, so starting out with JDK would probably give the best
experience because it includes the necessary Java compiler, the Java Runtime Environment, and
the Java APIs, basically meaning everything is up to date all the time and gives the user a
proper, user-friendly experience. However, the consistency of Eclipse puts it ahead of the other
Java applications: it earned close to the same rating no matter the complexity. Consistency is
most important when learning programming and eventually coding more advanced programs. Eclipse
provides much-needed assistance with code completion, refactoring, and syntax checking. It also
offers the Java Development Tools project, which provides a range of tools.
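The "test and then code" workflow that JUnit enables can be sketched as follows. JUnit itself is
a Java framework; to keep all the examples in one language, the same xUnit-style idea is shown
here with Python's unittest module, and the rps_winner helper is a hypothetical piece of the
rock-paper-scissors game, not code from the trials:

```python
import unittest

def rps_winner(a, b):
    # Hypothetical helper for the rock-paper-scissors game:
    # returns 0 for a tie, 1 if player `a` wins, 2 if player `b` wins.
    beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
    if a == b:
        return 0
    return 1 if beats[a] == b else 2

class RpsWinnerTest(unittest.TestCase):
    # One block of code tested on its own, before the full game loop
    # exists -- the test-then-code workflow described above.
    def test_tie(self):
        self.assertEqual(rps_winner("rock", "rock"), 0)

    def test_rock_beats_scissors(self):
        self.assertEqual(rps_winner("rock", "scissors"), 1)

    def test_paper_loses_to_scissors(self):
        self.assertEqual(rps_winner("paper", "scissors"), 2)

if __name__ == "__main__":
    unittest.main()
```

Each test exercises a single unit in isolation, so a failure points directly at the block that
broke rather than at the whole program.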
The worst language to begin programming in is C++. Unless programming is a profession, C++
remains one of the hardest programming languages to fully understand. That is why all the
C++ applications had varying ratings as the complexity increased. GNAT Programming Studio and
Code::Blocks remained roughly equal, with similar ratings at each complexity. GNAT highlights
syntax and has an incredibly fast run time, but not all of the errors are caught. Code::Blocks
is the opposite of GNAT Programming Studio, but its error messages have little to no description
of the error itself or how to fix it. CodeLite was the best because it performs well when
iterating complex programs: it displays errors in an organized, descriptive glossary and offers
code navigation, which helps beginners understand where to place what and why. Across every
application in every programming language, consistency has proven to be the main key when
learning new programming skills. Finding errors, giving feedback on errors, and measuring run
time are all characteristics of debugging programs. Beginners should remain persistent when
learning how to program because of the various obstacles and rules that must be learned.
Overall, the experiment was the best way for the researcher to answer the question. Access
to all the programming tools, along with background knowledge in computational skills, allowed
this to occur. This project taught programmers the main characteristics and ideas to understand
in order to begin computer programming. It gives good examples of where to begin, such as
Eclipse, which is well known for its consistency when iterating beginners' code. Bad examples
of a starting point were also shared, such as C++, because the complexity and perception of the
language itself often confuse beginners and cause computer programming to be viewed as boring
and difficult. The impact to society gained from this experiment is a place to start for people
who have an interest in computational skills and want to pursue it. This paper provides them
with the traits and qualities of each programming tool, and they themselves can use the scale
given to evaluate their own experience.