ADDIS ABABA SCIENCE AND TECHNOLOGY UNIVERSITY
College of Social and Natural Sciences

Emerging Technology
INDIVIDUAL ASSIGNMENT

SECTION: 3
By Ananiya Ameha ETS0174/16

DATE: May 2024
Submitted To: Dr. Habib
Chapter one questions
1. What is Emerging Technology?
Answer:
"Emerging technology" typically denotes a novel technological advancement, but it can also
encompass the ongoing evolution of established technologies. It commonly pertains to technologies
currently in development or anticipated to become accessible within the upcoming five to ten years.
It includes areas such as quantum computing, blockchain technology, augmented reality, virtual reality,
Internet of Things (IoT), 5G networks, autonomous vehicles, advanced materials science, and
sustainable energy solutions. These technologies hold the promise of transforming industries,
revolutionizing communication, improving healthcare, enhancing transportation systems, and
addressing global challenges such as climate change and resource depletion. They represent the
forefront of human ingenuity and have the potential to redefine the way we live, work, and interact
with the world around us.

2. Give an example of currently emerged and future trends of emerging technologies.
Answer:
Currently Emerged Technology: Artificial Intelligence (AI) - AI has already
emerged as a transformative technology across various industries. It involves the
development of computer systems capable of performing tasks that typically require
human intelligence, such as visual perception, speech recognition, decision-making,
and language translation. AI applications range from virtual assistants like Siri and
Alexa to advanced machine learning algorithms used in healthcare diagnostics,
finance, autonomous vehicles, and more.
Future Trend in Emerging Technologies: Quantum Computing - Quantum
computing represents a future trend in emerging technologies that has the potential to
revolutionize computation. Unlike classical computers, which use bits to process
information in binary (0s and 1s), quantum computers leverage quantum bits or
qubits. These qubits can exist in multiple states simultaneously, enabling quantum
computers to perform certain classes of calculations exponentially faster than
classical computers. Quantum computing holds promise for solving optimization
problems, cryptography, drug discovery, materials science, and simulating complex
quantum phenomena, opening up new frontiers in scientific research and
technological innovation. While still in the early stages of development, quantum
computing is expected to have profound implications across various fields in the
coming decades.
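As a rough illustration of superposition, a single qubit can be simulated classically as a two-element state vector. The Python sketch below (using numpy, with the textbook definitions of the |0> state and the Hadamard gate) puts a qubit into an equal superposition:

    import numpy as np

    ket0 = np.array([1.0, 0.0])                    # |0> state vector
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    state = H @ ket0                 # put the qubit into superposition
    probs = np.abs(state) ** 2       # Born rule: measurement probabilities

    print(probs)  # [0.5 0.5] -- equal chance of measuring 0 or 1

Simulating n qubits this way requires a vector of 2^n amplitudes, which is precisely why classical simulation breaks down at scale and real quantum hardware becomes interesting.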
3. Mention the most important inventions of the industrial revolutions.
Answer:
Transportation: The Steam Engine, The Railroad, The Diesel Engine, The Airplane
Communication: Telegraph, Transatlantic Cable, Phonograph, Telephone
Industry: Cotton Gin, Sewing Machine, Electric Lights

4. List and discuss the four industrial revolutions.
Answer:
First Industrial Revolution: This period, which occurred in the late 18th to early
19th century, marked the transition from agrarian economies to industrialized ones.
Key innovations included the steam engine, mechanized textile production, and the
development of iron and coal industries.
Second Industrial Revolution: Taking place roughly from the late 19th to early 20th
century, this period was characterized by advancements in manufacturing,
transportation, and communication technologies. Key inventions included electricity,
the telephone, the internal combustion engine, and the assembly line.
Third Industrial Revolution: Also known as the Digital Revolution, this era began
in the late 20th century with the advent of digital electronics, telecommunications, and
the internet. It facilitated the automation of production processes, the rise of
information technology, and the globalization of economies.
Fourth Industrial Revolution: This ongoing period, often referred to as Industry 4.0,
is marked by the convergence of digital, biological, and physical technologies. It
includes advancements in artificial intelligence, robotics, 3D printing, biotechnology,
and the Internet of Things (IoT), leading to increased automation, interconnectedness,
and data-driven decision-making.
5. What is the Role of Data for Emerging Technologies?
Answer:
Learning and Improving: New technologies need data to learn and get better.
Checking and Testing: Data helps make sure new stuff works right before it's used a
lot.
Making Things Just Right for You: Companies use data to make products fit your
likes and needs.
Making Better Choices: Data helps businesses and groups do things smarter.
Adapting to Change: New tech uses data to adjust to new situations and learn from
mistakes.
Being Fair and Right: Data helps us make sure new tech treats everyone fairly and
doesn't have unfair biases.
6. List programmable devices and discuss their features.

Answer:
Simple programmable logic devices
Customizable: You can teach them to do different jobs.
Basic Logic: They understand simple rules like "and", "or", and "not".
Connect to Stuff: They have plugs to connect to other things like buttons, sensors, or
lights.
Quick to Respond: They can react fast to what's happening around them.
Easy to Use: They're not too hard to understand or work with.
Not Too Expensive: They're affordable compared to fancier versions.
Field programmable gate array
Like a Digital Playground: FPGAs are like playgrounds where you can design and
build your own digital toys or tools.
Do Many Things at Once: They're good at doing lots of tasks simultaneously, which
is handy for quick processing.
Can Change Their Minds: FPGAs can change what they do, so they're flexible and
can adapt to different needs.
Not Power-Hungry: They don't need a lot of power to work, which can be helpful for
saving energy.
Really Fast: They're speedy, which is great for tasks that need to be done quickly.
Boost Performance: They can help speed up certain tasks by working alongside other
parts of a system.
Complex programmable logic devices
Make Things Work: Complex programmable logic devices (CPLDs) can be taught to
do different jobs, from simple to complex.
All-in-One: They have many little parts inside them that can work together to solve
problems.
Really Fast: They can work quickly, which is great for tasks that need speed.
Not Power-Hungry: They don't need a lot of power to do their jobs, which can help
save energy.
Can Learn New Tricks: You can change what they do, so they're flexible and can
adapt to different needs.
Affordable Solutions: They're cost-effective options for making electronic things
work smarter without breaking the bank.
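The devices above are configured in hardware, but the sum-of-products structure behind simple programmable logic devices can be sketched in software. The Python model below is purely illustrative (real devices are programmed through an HDL toolchain, not Python):

    def make_pld(product_terms):
        """'Program' a device by choosing which AND terms feed the OR plane.

        Each product term maps an input name to the level (True/False) it
        must have for that AND gate to fire.
        """
        def logic(**inputs):
            # Output is the OR of all programmed AND terms.
            return any(
                all(inputs[name] == level for name, level in term.items())
                for term in product_terms
            )
        return logic

    # Program the PLD to implement: out = (a AND b) OR (NOT c)
    out = make_pld([{"a": True, "b": True}, {"c": False}])
    print(out(a=True, b=True, c=True))    # True  (first term fires)
    print(out(a=False, b=True, c=False))  # True  (second term fires)
    print(out(a=False, b=True, c=True))   # False (no term fires)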
7. What is HCI (Human-Computer Interaction)?

Answer:
HMI, or Human-Machine Interaction, is all about how people and machines
communicate with each other through a user interface. This interface includes both
the input (like buttons or touchscreens) and output (like displays).
HCI, or Human-Computer Interaction, is a branch of HMI that focuses on studying
how people use computers and how computers can better understand and respond to
human needs. It's all about making computers easier to use and more responsive to
what people want.
8. List the disciplines that contribute to Human-Computer Interaction.

Answer:
Psychology: Helps understand how users think and act when using computers.
Design: Focuses on making interfaces easy to use and attractive.
Computer Science: Provides the technical skills to build software and hardware for
interfaces.
Human Factors Engineering: Makes sure interfaces match how people naturally
work.
Information Science: Helps organize information so users can find what they need
easily.
Anthropology: Considers cultural differences in how people use technology.
Interaction Design: Creates enjoyable and easy-to-use interactions.
User Experience (UX) Design: Makes sure users have a great overall experience
with a product or service.
Industrial Design: Focuses on making physical products comfortable and appealing
to use.
Chapter two questions

1. Discuss the difference between big data and data science.

Answer:

Big data and data science are intertwined but distinct concepts within the realm of
handling and deriving value from data.

Nature and Scope:

Big Data: Refers to the massive volume of structured, semi-structured, and
unstructured data that inundates businesses on a day-to-day basis. This data is
characterized by its volume, velocity, and variety (the "three Vs"), and it often
exceeds the processing capabilities of traditional databases and software tools.
Data Science: Is an interdisciplinary field that combines expertise in statistics,
computer science, domain knowledge, and problem-solving skills to extract insights
and knowledge from data. Data science focuses on analyzing and interpreting data to
uncover patterns, trends, and correlations that can be used to inform decision-making
and solve complex problems.

Focus and Objectives:

Big Data: Primarily focuses on the infrastructure and technologies required to collect,
store, and process large volumes of data. The goal of big data initiatives is to
efficiently handle data at scale, often leveraging technologies like Hadoop, Spark, and
NoSQL databases.
Data Science: Concentrates on the methodologies and techniques used to extract
insights and value from data. Data scientists utilize statistical analysis, machine
learning, data mining, and visualization tools to uncover patterns, make predictions,
and derive actionable insights from the data.
Tools and Techniques:

Big Data: Involves technologies and platforms designed to manage and process
massive datasets. This includes distributed storage systems (e.g., Hadoop Distributed
File System), distributed processing frameworks (e.g., Apache Spark), and stream
processing systems (e.g., Apache Kafka).
Data Science: Employs a wide range of tools and techniques for data analysis and
modeling. These may include programming languages like Python and R, libraries
and frameworks such as TensorFlow and scikit-learn for machine learning, and
visualization tools like Tableau or matplotlib.
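As a hedged illustration of that toolchain, the minimal Python example below uses scikit-learn (one of the libraries named above) on its built-in iris toy dataset, so it is self-contained:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Load a small built-in dataset and hold out a test split.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit a simple classifier; max_iter raised so the solver converges.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))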

Application:

Big Data: Finds applications in various industries and domains where large volumes
of data need to be processed and analyzed, such as finance, healthcare, retail, and
manufacturing. Big data is often used for tasks like real-time analytics, customer
behavior analysis, fraud detection, and risk management.
Data Science: Is applied across industries to solve specific business problems and
extract value from data. Applications of data science include predictive analytics,
recommendation systems, natural language processing, image recognition, and
personalized marketing.

2. Briefly discuss the big data life cycle.

Answer:
The big data lifecycle typically involves several stages:

Data Acquisition: This stage involves collecting data from various sources such as
sensors, social media, transactions, and more. It's about gathering raw data and
bringing it into the system for further processing.

Data Storage: Once acquired, the data needs to be stored in a way that allows for easy
access and retrieval. This may involve using distributed storage systems like Hadoop
Distributed File System (HDFS) or cloud-based storage solutions.
Data Processing: In this stage, the raw data undergoes processing to transform it into a
usable format. This may include cleaning the data to remove errors or inconsistencies,
integrating data from different sources, and performing transformations or
aggregations as needed.

Data Analysis: Once processed, the data is ready for analysis. This involves applying
various analytical techniques such as statistical analysis, machine learning, or data
mining to uncover patterns, trends, and insights within the data.

Data Visualization: The insights gained from analysis are often visualized using
charts, graphs, or dashboards to make them easier to understand and interpret.
Visualization helps stakeholders gain a clear understanding of the data and its
implications.

Decision Making: The final stage of the lifecycle involves using the insights derived
from the data to make informed decisions. This may involve taking action based on
the findings, adjusting strategies, or implementing changes to improve performance or
outcomes.
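A single-machine sketch of these stages, using pandas and matplotlib (the file name transactions.csv and its columns are hypothetical; a real big data pipeline would use distributed tools for the same steps):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Acquisition / storage: read previously collected data from disk.
    df = pd.read_csv("transactions.csv")

    # Processing: clean obvious errors and inconsistencies.
    df = df.dropna(subset=["amount"])
    df = df[df["amount"] > 0]

    # Analysis: aggregate to find a pattern (spend per customer).
    spend = df.groupby("customer_id")["amount"].sum().sort_values(ascending=False)

    # Visualization: chart the top customers to support a decision.
    spend.head(10).plot(kind="bar", title="Top 10 customers by total spend")
    plt.tight_layout()
    plt.show()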

3. List and explain big data application domains with examples.

Answer:

Healthcare:

Using big data to analyze patient records and medical images for better diagnosis and
treatment decisions.
Example: Analyzing patient data to predict disease outbreaks or using machine
learning to detect anomalies in medical images.
Finance:

Employing big data to detect and prevent fraudulent transactions and make
data-driven investment decisions.
Example: Using algorithms to analyze transaction patterns for detecting credit card
fraud (sketched in code after this list) or analyzing market data to automate trading
decisions.

Retail:

Leveraging big data to personalize marketing campaigns and optimize inventory
management.
Example: Recommending products based on past purchases or analyzing sales data to
improve inventory stocking.

Manufacturing:

Using big data to predict equipment failures and optimize production processes.
Example: Analyzing sensor data to schedule maintenance before equipment
breakdowns or detecting defects in real-time during production.

Transportation and Logistics:

Employing big data to optimize transportation routes and manage fleets efficiently.
Example: Optimizing delivery routes to reduce fuel consumption and delivery times
or tracking vehicle performance to improve maintenance schedules.
In each domain, big data is used to gather insights from large volumes of data, leading
to improved decision-making, efficiency, and innovation.
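As noted in the finance example above, here is a hedged sketch of one common technique behind fraud detection: unsupervised anomaly detection with scikit-learn's IsolationForest, run on synthetic transaction amounts (illustrative only, not a production system):

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic data: mostly typical amounts plus a few unusually large ones.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=50, scale=10, size=(500, 1))
    fraud = rng.normal(loc=900, scale=50, size=(5, 1))
    amounts = np.vstack([normal, fraud])

    # Fit the detector; contamination is the expected anomaly fraction.
    model = IsolationForest(contamination=0.01, random_state=0).fit(amounts)
    flags = model.predict(amounts)  # -1 marks suspected anomalies
    print("flagged transactions:", int((flags == -1).sum()))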
4. What is Clustered Computing? Explain its advantages.
Answer:
Clustered computing is like teamwork for computers. Instead of one computer doing
all the work, multiple computers work together as a team. This has a few advantages:

Better Performance: Just like more people can get a job done faster, more computers
working together can handle tasks quicker.
Reliability: If one computer has a problem, the others can step in to keep things
running smoothly, kind of like having a backup plan.

Cost-Effectiveness: It's often cheaper to use a bunch of smaller, standard computers
than one big supercomputer, so clustered computing can be more budget-friendly.

Scalability: Need more computing power? Just add more computers to the cluster. It's
like adding more people to a team when you need to get more work done.
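A loose single-machine analogy for this teamwork idea: Python's multiprocessing pool splits one job across several worker processes, much as a cluster splits it across machines (a real cluster spans many computers; this sketch only spans CPU cores):

    from multiprocessing import Pool

    def count_words(chunk):
        return len(chunk.split())

    if __name__ == "__main__":
        chunks = ["some text here", "more text", "yet more words to count"]
        with Pool(processes=3) as pool:
            counts = pool.map(count_words, chunks)  # workers run in parallel
        print("total words:", sum(counts))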
5. Briefly discuss the following big data platforms and compare them to Hadoop.
a. Apache Spark
b. Apache Storm
c. Ceph
d. Hydra
e. Google BigQuery

Answer:
a. Apache Spark:
Overview: Apache Spark is a high-speed data processing engine capable of handling
both batch and real-time tasks.
Comparison with Hadoop: Spark outperforms Hadoop in speed due to its in-memory
processing capabilities and offers a broader range of functionalities, including
machine learning and graph processing.
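A minimal PySpark sketch of the kind of batch analysis described above (assumes a local Spark installation; events.csv and its columns are hypothetical):

    from pyspark.sql import SparkSession

    # Start a local Spark session.
    spark = SparkSession.builder.appName("demo").getOrCreate()

    # Read a CSV and compute the most active users.
    df = spark.read.csv("events.csv", header=True, inferSchema=True)
    df.groupBy("user_id").count().orderBy("count", ascending=False).show(10)

    spark.stop()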
b. Apache Storm:
Overview: Apache Storm is a real-time stream processing system designed for
continuous data streams.
Comparison with Hadoop: Storm specializes in real-time processing, making it more
suitable for low-latency applications like real-time analytics compared to Hadoop,
which is primarily geared towards batch processing.
c. Ceph:
Overview: Ceph is a distributed storage system providing scalable and fault-tolerant
storage for large datasets.
Comparison with Hadoop: While Hadoop relies on HDFS for storage, Ceph offers
more flexible storage (object, block, and file interfaces) that can handle diverse
data types and workloads with greater scalability and fault tolerance.
d. Hydra:
Overview: Hydra is a distributed data processing platform optimized for graph
analytics tasks.
Comparison with Hadoop: Unlike Hadoop's generic graph processing libraries, Hydra
is specifically designed to efficiently analyze large-scale graph data.
e. Google BigQuery:
Overview: Google BigQuery is a managed data warehouse service that enables
SQL-based analysis of large datasets.
Comparison with Hadoop: BigQuery differs from Hadoop by providing a fully
managed, serverless solution for data analytics, abstracting away the complexities of
cluster management and infrastructure setup.
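A hedged sketch of that serverless model, using the google-cloud-bigquery Python client (requires Google Cloud credentials; the project, dataset, and table names are hypothetical):

    from google.cloud import bigquery

    client = bigquery.Client()  # no cluster to provision or manage

    sql = """
        SELECT customer_id, SUM(amount) AS total
        FROM `my_project.sales.transactions`
        GROUP BY customer_id
        ORDER BY total DESC
        LIMIT 10
    """
    for row in client.query(sql).result():
        print(row.customer_id, row.total)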
