CSP Student Digital Notebook Activity 323

The document discusses analyzing collision data involving cyclists in Manhattan to determine the most common causes of accidents. It finds that the most common cause is driver inattention or distraction. It also finds that the most common location for collisions is Greene Street, with 3.5% of accidents occurring there. The document then examines insertion sort and merge sort algorithms, comparing the number of comparisons each makes on different sized datasets.


Name: Sebastien Muniz Date: 4/17/24

Lesson 2 – Little Data to Big Data

Instructions: Please change the text color of your responses to red text. Please organize the endings of each page.

ACTIVITY 3.2.3 Pirates are the Problem

GOALS:

● Examine the efficiency of an algorithm.
● Separate correlation from causality.
● Analyze an algorithm.

You will be using Google Sheets for this assignment. You will use four different algorithms to analyze data. The code is
provided for you. Complete the following:

Sometimes sorting the data in different ways will allow different patterns to emerge.

The data sheet contains only data about collisions in Manhattan that involved a cyclist injury or fatality. Use this
data to determine the most common cause of these accidents.

What is the last row number that has an entry?

Row 9254

In what row number did you enter the formula?

Row 9255

According to your data, what was the most common cause of collision?

The most common cause of collision is Driver Inattention/Distraction.
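The spreadsheet tally can be sketched in Python as well. This is a minimal illustration of the same counting idea, not the course's provided code; the cause strings below are a hypothetical five-row sample, not the full 9,254-row sheet.

```python
from collections import Counter

# Hypothetical sample of the contributing-factor column from the
# Manhattan cyclist-collision sheet (the real sheet has ~9,254 rows).
causes = [
    "Driver Inattention/Distraction",
    "Failure to Yield Right-of-Way",
    "Driver Inattention/Distraction",
    "Unsafe Speed",
    "Driver Inattention/Distraction",
]

# Counter tallies each distinct cause, like COUNTIF over the column.
counts = Counter(causes)
most_common_cause, n = counts.most_common(1)[0]
print(most_common_cause, n)  # → Driver Inattention/Distraction 3
```

In the sample above, `Counter.most_common(1)` returns the single most frequent cause and its count, which is the same question the spreadsheet formula answers.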


The insight that you gained from processing the data in this spreadsheet is information. Information is
the collection of facts and patterns extracted from data. Use the same techniques to gain information
about the most dangerous cross streets for cyclists in Manhattan.
The most common collision location is Greene Street, with 3.5 percent of the accidents occurring on that street.
The page linked below has several sort algorithms along with animations illustrating their execution.
You will examine insertion sort and merge sort; your teacher may have you examine additional
algorithms if time permits. Use the code, VS Code, and the animations to help you answer the following
questions.
How many times was a comparison made?
Insertion Sort has 53 comparisons and the Merge Sort has 11 comparisons.

How many times does the algorithm compare two numbers now?

Insertion Sort has 139 comparisons and the Merge Sort has 23 comparisons.

How many times does the algorithm compare two numbers now?

Insertion Sort has 554 comparisons and the Merge Sort has 47 comparisons.
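One way to check counts like these is to instrument the sorts yourself. The following is a sketch, not the course's provided code: both functions count element-to-element comparisons on the same random 20-item list, so the exact counts will differ from the answers above, but the gap between the two algorithms should show the same pattern.

```python
import random

def insertion_sort(a):
    """Insertion sort that counts element comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison per shift test
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

def merge_sort(a):
    """Merge sort that counts element comparisons."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, left_count = merge_sort(a[:mid])
    right, right_count = merge_sort(a[mid:])
    merged, comparisons = [], left_count + right_count
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1              # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

data = random.sample(range(100), 20)
_, insertion_count = insertion_sort(data)
_, merge_count = merge_sort(data)
print(insertion_count, merge_count)
```

Running this on progressively larger lists reproduces the experiment in this activity: insertion sort's count grows much faster than merge sort's as the dataset size increases.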

Use a spreadsheet to create a line graph and predict how many comparisons would be made by the
algorithm if the dataset contained 10,000 items.

There would be 1727705.5 comparisons for the insertion sort. There would be 1608610.5 comparisons for the
bubble sort. There would be 119999 comparisons for the merge sort. There would be 499265.5 comparisons
for the quick sort.
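Besides extrapolating from a line graph, the prediction can be sketched with the standard textbook growth models: insertion sort averages roughly n(n-1)/4 comparisons on random data, while merge sort needs roughly n·log2(n). These formulas are an assumption about how the counts grow, not values taken from the spreadsheet, so their estimates will not match the graph-based predictions above exactly.

```python
import math

def insertion_estimate(n):
    # Average-case model: about one quarter of all n*(n-1)/2 pairs.
    return n * (n - 1) / 4

def merge_estimate(n):
    # Divide-and-conquer model: about n * log2(n) comparisons.
    return n * math.log2(n)

n = 10_000
print(round(insertion_estimate(n)))  # on the order of 25 million
print(round(merge_estimate(n)))      # on the order of 130 thousand
```

The huge gap between the two estimates is the same lesson the graphs teach: quadratic growth overwhelms n·log(n) growth long before 10,000 items.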

Repeat the above step for the merge sort algorithm. How does the graph of insertion sort compare to the
graph of merge sort?
The graph of insertion sort has a much steeper slope than the graph of merge sort, because the two
algorithms make extremely different numbers of comparisons as the dataset grows.

Is it true that efficiency is the most important factor when writing code?
I feel that efficiency is not the most important factor when writing code, but it does let you improve your
code by avoiding repetition and confusion, so the code ends up shorter and neater.
