Homework 2: Unit Testing

In this assignment, you will test the FlashCards system that you expanded in the previous
homework.

This assignment will give you experience creating unit tests against a specification and covering
code with tests as you write it. You will practice dedicated testing strategies and best
practices for unit testing. You will automate your tests with continuous integration and use test
coverage tools.

Starter code
Continue to work with the code from Homework 1. If you have made any changes to file names
or interfaces in the provided code in the directories cards, data, or ordering, please restore
them to the original names and interfaces, and do not change them in this assignment.

If you want, you can start over by creating a new branch off the first commit in the branch (find the id
of the first commit in the branch and follow instructions) and then copying over your implementation of
RecentMistakesFirstSorter.

Tasks

Part 1: Infrastructure setup


Set up the project so that you can write and execute unit tests. We explain JUnit and ts-jest in both
class and the recitation and provide an example setup in the latter, but you are welcome to use
other test frameworks. Make sure that tests are automatically executed with mvn test or npm
test. Make sure the tests are executed in Travis-CI, if necessary by modifying the .travis.yml
file. We also recommend identifying how you can view test coverage in your IDE or in a generated
report.
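If your repository does not yet run tests on Travis-CI, a minimal .travis.yml for the TypeScript variant might look like the following sketch (the node version is an assumption; adapt it and the script to your project, or use the recitation template):

```yaml
language: node_js
node_js:
  - 14
script:
  - npm test
```

For the Java variant, the analogous configuration would use language: java and mvn test.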

Part 2: Specification-based testing of the existing implementation


Test existing code. Read the public interface specification of every method in the FlashCard
source code, except the exclusions below (don't forget your newly added
RecentMistakesFirstSorter!). Write test cases that check whether the implementation covers
this specification completely and precisely -- even if the implementation is
wrong! You may find implementation issues in your newly added
RecentMistakesFirstSorter or other code;* make sure to fix such bugs so that your tests
all pass.

That means:

- Consider the input space and the specification to intentionally select valid and invalid inputs
as tests.
- Write tests that confirm that everything that is stated works as expected. Guard against
"typical" mistakes, such as off-by-one errors or forgetting to implement one part of a multi-
faceted specification.
- Do not test what is not specified. Many specifications allow some flexibility, and tests should
not reject valid implementations of a specification. For example, if the potential nullity of a
parameter is not stated, do not write a test that fails if the parameter is null. That might be
intuitively (and realistically) undesirable behavior, but if it is not in the specification, it is not
tested. (Specifications are often a bit incomplete like that; writing full specifications is very
tedious.)
- Ideally, follow a test design strategy discussed in class, such as boundary value analysis.

The goal is to achieve 100% specification coverage and to write tests that are good at detecting
defects. Note that specification coverage is not automatically measurable.

We will evaluate the quality of your tests by injecting bugs into the implementations to see
whether your tests catch them. We will also inject allowed changes to the implementation that do
not violate the specification to see whether your tests still pass as expected. You can perform the
same kind of experiments yourself to evaluate the quality of your tests by injecting some
mistakes yourself (e.g., replacing < by > or <=, or injecting off-by-one errors by adding +1 to
expressions).

When writing test cases, make sure you follow good practices for test design, as discussed in
class and the reading.

What to test. You do not need to test:

- toString and get* methods: these contain trivial or non-essential functionality. Your tests
may, of course, use getter methods.
- Main.java / index.ts and any code written for parsing command line options: these
depend heavily on your implementation in Homework 1, which specified no interface.
- UI.*, CardLoader.java (Java), loadCards in store.ts (TypeScript): these deal with I/O.
- CardShuffler.*: randomness is hard to test.

Please note that the latter two exclusions would not be industry-standard. They are nontrivial to
test, though. We will show you later in the course how you can test such functionality.

Do test everything else, including your own implementation of RecentMistakesFirstSorter
from Homework 1.

In Java: Don't forget reorganize in CardRepeater: although it is declared in an interface, it has an
implementation that is used by classes that implement it, so you can test it via those.

Our reference implementation has about 40 tests.

Part 3: Structural testing of new code


Implement an extension. Let's extend the flash card application with a bit of gamification. Users
of the flash card application can receive achievements when they meet certain criteria. Extend the
program with at least 3 different achievements of your choice. Which achievements you
implement is up to you; example achievements are (1) getting everything correct in the last
round, (2) answering everything in under 3 seconds on average, (3) answering a card more
than 5 times, or (4) taking more than 1h to answer all cards in one round. The implemented
achievements should be reasonably distinct; that is, don't just copy one implementation and
change a parameter (e.g., not answering under 3 seconds, under 4 seconds, and under 5
seconds). When a user meets an achievement for the first time in the program's execution,
simply print the achievement on the console.

We suggest integrating the achievement mechanism in the UI code as follows: Create an object
to handle all achievements logic. In the main loop of the studyCards method, call
beginRound() before and getNewAchievements() after cueAllCards in each iteration, for example:

```typescript
while (!producer.isComplete()) {
  console.log(`${producer.countCards()} cards to go...`)
  achievements.beginRound()
  cueAllCards(producer)
  const newAchievements: string[] = achievements.getNewAchievements(producer)
  for (const newAchievement of newAchievements)
    console.log("** New achievement unlocked: " + newAchievement)
  console.log("Reached the end of the card deck, reorganizing...")
  producer.reorganize()
}
```

We leave the implementation of this largely up to you, but you will have an easier time testing
your code if it is structured into multiple testable methods. Write a textual specification for all
your achievements as part of the methods that implement them.

Test the extension. Write tests for your new code (excluding the UI code) to achieve 100%
branch coverage. You do not need to achieve a branch coverage goal for any other code and we
will not evaluate your tests for the achievement part with injected bugs.

You can write tests after you implement the extension, before, or during. Usually code is easier to
test if it was written to be testable, for example by splitting it into smaller units.

As in Part 2, follow the best practices for unit testing.

Report coverage. Extend the project's build system (Maven's pom.xml or npm's package.json) so
that it creates a coverage report with mvn site or npm run coverage . The third recitation
provides a template.
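If you use ts-jest, a coverage script in package.json might look like the following sketch (your existing jest configuration may already be sufficient; the recitation template is authoritative):

```json
{
  "scripts": {
    "test": "jest",
    "coverage": "jest --coverage"
  }
}
```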

Part 4: Documentation and Reflection


Add three new sections to the README.md file in your branch:

In section Achievements briefly describe the achievements you implemented.

In section Testing strategy briefly describe whether you followed any specific test case design
strategy, such as boundary value analysis, when creating tests. Briefly explain why you did or did
not use these techniques. (about 1 paragraph)

In section Specification vs structure testing briefly reflect on your experience with the two
different testing approaches. Was one harder or better than the other for you? (about 1 or 2
paragraphs)

Submitting your work


Always submit all code to GitHub. Once you have pushed your final code there, submit a link to
your final commit on Canvas. A link will look like
https://github.com/CMU-17-214/<reponame>/commit/<commitid>. You can get to this link
easily by clicking on the last commit (above the list of files) in the GitHub web interface.

Evaluation
The assignment is worth 100 points. We expect to grade the assignment approximately with this
rubric:

Specification Tests (25pt):


5: No tests fail on a correct implementation. We will swap out the code in cards, data, and
ordering for five different correct implementations of the provided specification. Points
will be awarded proportional to the number of implementations that pass your test suite.

20: At least one test fails for each of the approximately 20 bugs we introduce in an otherwise
correct implementation. We will run your test suite against multiple new implementations in
cards, data, and ordering that are entirely correct except for a single bug. One or more of your
tests should fail because of that bug. Points will be awarded proportional to the number of
bugs discovered by your test suite.

Structural Tests (25pt):

5: The implemented achievements are listed in the README.md file

5: Achievements are reasonably distinct and their implementations are functionally correct,
matching their documented specification.
15: Test cases achieve perfect branch coverage on the implementation of the achievements.

Test Quality (20pt):

15: Tests are independent, cohesive, small, and avoid excessive redundancy.

5: Tests are readable (e.g., meaningful names, comments).

Infrastructure and style (20pt):

5: Travis-CI is set up to automatically execute tests

5: The build passes on Travis-CI


5: The project is set up to generate coverage reports (on the command line or in a file) with
mvn site or npm run coverage

5: Commits are reasonably cohesive; commit messages are reasonable

Reflection (10pt):

5: Your writing on your testing strategy demonstrates an understanding of one or more
relevant test case design strategies and supports your decision whether or not to adopt
these.

5: Your reflection on the two forms of testing shows that you have given thought to the
strengths and weaknesses of each based on your experience. The reflection is grounded in
concrete experiences, not generic statements.

Footnote

*As has been noted, the TypeScript branch initially contained one mistake in mostmistakes.ts ,
line 18, which should read:

```typescript
c.sort((a, b) => numberOfFailures(a) > numberOfFailures(b) ? -1 :
  (numberOfFailures(a) < numberOfFailures(b) ? 1 : 0))
```

If you cloned the repository early, please fix this now.
