Ananiya Ameha Emerging Technology Assignment
Emerging technology
INDIVIDUAL ASSIGNMENT
SECTION: 3
By Ananiya Ameha ETS0174/16
DATE: May 2024
Submitted To: Dr. Habib
Chapter one questions
1. What is Emerging Technology?
Answer:
"Emerging technology" typically denotes a novel technological advancement, but it can also
encompass the ongoing evolution of established technologies. It commonly pertains to technologies
currently in development or anticipated to become accessible within the upcoming five to ten years.
It includes areas such as quantum computing, blockchain technology, augmented reality, virtual reality,
Internet of Things (IoT), 5G networks, autonomous vehicles, advanced materials science, and
sustainable energy solutions. These technologies hold the promise of transforming industries,
revolutionizing communication, improving healthcare, enhancing transportation systems, and
addressing global challenges such as climate change and resource depletion. They represent the
forefront of human ingenuity and have the potential to redefine the way we live, work, and interact
with the world around us.
Answer:
Simple Programmable Logic Devices (SPLDs)
Customizable: You can teach them to do different jobs.
Basic Logic: They understand simple rules like "and", "or", and "not".
Connect to Stuff: They have plugs to connect to other things like buttons, sensors, or
lights.
Quick to Respond: They can react fast to what's happening around them.
Easy to Use: They're not too hard to understand or work with.
Not Too Expensive: They're affordable compared to more capable devices such as CPLDs and FPGAs.
Field-Programmable Gate Arrays (FPGAs)
Like a Digital Playground: FPGAs are like playgrounds where you can design and
build your own digital toys or tools.
Do Many Things at Once: They're good at doing lots of tasks simultaneously, which
is handy for quick processing.
Can Change Their Minds: FPGAs can be reprogrammed even after they are deployed, so they're flexible and
can adapt to different needs.
Not Power-Hungry: For a given task they can use less power than a general-purpose processor, which can be helpful for
saving energy.
Really Fast: They're speedy, which is great for tasks that need to be done quickly.
Boost Performance: They can help speed up certain tasks by working alongside other
parts of a system.
Complex Programmable Logic Devices (CPLDs)
Make Things Work: Complex programmable logic devices (CPLDs) can be taught to
do different jobs, from simple to complex.
All-in-One: They have many small logic blocks on a single chip that can work together to solve
problems.
Really Fast: They can work quickly, which is great for tasks that need speed.
Not Power-Hungry: They don't need a lot of power to do their jobs, which can help
save energy.
Can Learn New Tricks: You can change what they do, so they're flexible and can
adapt to different needs.
Affordable Solutions: They're cost-effective options for making electronic things
work smarter without breaking the bank. (A short code sketch of the "programmable" idea follows this list.)
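To make the "programmable" idea concrete, here is a minimal, purely illustrative Python sketch. Real SPLDs, CPLDs and FPGAs are configured with hardware description languages and vendor toolchains, not Python; the class name and the tiny set of "programs" below are invented for illustration only.

# Illustrative sketch only: a software stand-in for a "programmable" logic block.
# Real SPLDs/CPLDs/FPGAs are configured with hardware description languages and
# vendor tools; this just shows the idea of choosing which logic function a part performs.

PROGRAMS = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "NAND": lambda a, b: not (a and b),
    "XOR":  lambda a, b: a != b,
}

class ProgrammableBlock:
    """A tiny "device" whose behaviour is chosen by programming it."""

    def __init__(self, program_name: str):
        # "Programming" the block = selecting which logic function it implements.
        self.logic = PROGRAMS[program_name]

    def output(self, a: bool, b: bool) -> bool:
        return self.logic(a, b)

if __name__ == "__main__":
    block = ProgrammableBlock("NAND")    # teach it one job...
    print(block.output(True, True))      # False
    block = ProgrammableBlock("XOR")     # ...then reprogram it for another
    print(block.output(True, False))     # True

The point of the sketch is only the last four lines: the same "device" does different jobs depending on how it is programmed, which is the common thread across SPLDs, CPLDs, and FPGAs.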
7. What is HCI (Human-Computer Interaction)?
Answer:
HMI, or Human-Machine Interaction, is all about how people and machines
communicate with each other through a user interface. This interface includes both
the input (like buttons or touchscreens) and output (like displays).
HCI, or Human-Computer Interaction, is a branch of HMI that focuses on studying
how people use computers and how computers can better understand and respond to
human needs. It's all about making computers easier to use and more responsive to
what people want.
8. Write disciplines that contribute to Human-Computer Interaction
Answer:
Psychology: Helps understand how users think and act when using computers.
Design: Focuses on making interfaces easy to use and attractive.
Computer Science: Provides the technical skills to build software and hardware for
interfaces.
Human Factors Engineering: Makes sure interfaces match how people naturally
work.
Information Science: Helps organize information so users can find what they need
easily.
Anthropology: Considers cultural differences in how people use technology.
Interaction Design: Creates enjoyable and easy-to-use interactions.
User Experience (UX) Design: Makes sure users have a great overall experience
with a product or service.
Industrial Design: Focuses on making physical products comfortable and appealing
to use.
Chapter two questions
1. What is the difference between Big Data and Data Science?
Answer:
Big data and data science are intertwined but distinct concepts within the realm of
handling and deriving value from data.
Big Data: Primarily focuses on the infrastructure and technologies required to collect,
store, and process large volumes of data. The goal of big data initiatives is to
efficiently handle data at scale, often leveraging technologies like Hadoop, Spark, and
NoSQL databases.
Data Science: Concentrates on the methodologies and techniques used to extract
insights and value from data. Data scientists utilize statistical analysis, machine
learning, data mining, and visualization tools to uncover patterns, make predictions,
and derive actionable insights from the data.
Tools and Techniques:
Big Data: Involves technologies and platforms designed to manage and process
massive datasets. This includes distributed storage systems (e.g., Hadoop Distributed
File System), distributed processing frameworks (e.g., Apache Spark), and stream
processing systems (e.g., Apache Kafka).
Data Science: Employs a wide range of tools and techniques for data analysis and
modeling. These may include programming languages like Python and R, libraries
and frameworks such as TensorFlow and scikit-learn for machine learning, and
visualization tools like Tableau or matplotlib (a minimal sketch of such a workflow follows this answer).
Application:
Big Data: Finds applications in various industries and domains where large volumes
of data need to be processed and analyzed, such as finance, healthcare, retail, and
manufacturing. Big data is often used for tasks like real-time analytics, customer
behavior analysis, fraud detection, and risk management.
Data Science: Is applied across industries to solve specific business problems and
extract value from data. Applications of data science include predictive analytics,
recommendation systems, natural language processing, image recognition, and
personalized marketing.
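As a concrete illustration of the data-science side, the following minimal Python sketch uses two of the tools named above (pandas and scikit-learn) to learn a pattern from data. The toy customer table and its column names are invented for the example; it is a sketch of the kind of workflow meant, not a real analysis.

# Minimal illustrative data-science workflow (invented toy data):
# load data, train a model, evaluate it. Requires pandas and scikit-learn.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy dataset: whether a customer made a purchase, given age and number of site visits.
df = pd.DataFrame({
    "age":    [22, 35, 47, 52, 23, 44, 36, 29, 58, 31],
    "visits": [1, 4, 8, 10, 2, 7, 5, 3, 12, 4],
    "bought": [0, 0, 1, 1, 0, 1, 1, 0, 1, 0],
})

# Hold out part of the data to check how well the learned pattern generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    df[["age", "visits"]], df["bought"], test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)   # learn a pattern from the data
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))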
2. Describe the stages of the big data life cycle.
Answer:
Data Acquisition: This stage involves collecting data from various sources such as
sensors, social media, transactions, and more. It's about gathering raw data and
bringing it into the system for further processing.
Data Storage: Once acquired, the data needs to be stored in a way that allows for easy
access and retrieval. This may involve using distributed storage systems like Hadoop
Distributed File System (HDFS) or cloud-based storage solutions.
Data Processing: In this stage, the raw data undergoes processing to transform it into a
usable format. This may include cleaning the data to remove errors or inconsistencies,
integrating data from different sources, and performing transformations or
aggregations as needed.
Data Analysis: Once processed, the data is ready for analysis. This involves applying
various analytical techniques such as statistical analysis, machine learning, or data
mining to uncover patterns, trends, and insights within the data.
Data Visualization: The insights gained from analysis are often visualized using
charts, graphs, or dashboards to make them easier to understand and interpret.
Visualization helps stakeholders gain a clear understanding of the data and its
implications.
Decision Making: The final stage of the lifecycle involves using the insights derived
from the data to make informed decisions. This may involve taking action based on
the findings, adjusting strategies, or implementing changes to improve performance or
outcomes. A small code sketch after this list walks through these stages on a toy dataset.
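The toy Python sketch below maps these stages onto a few lines of pandas and matplotlib. The sales figures, file names, and the "decision" at the end are invented purely for illustration; real pipelines replace each step with much heavier machinery.

# Illustrative walk through the life-cycle stages on a toy dataset (invented data).
import pandas as pd
import matplotlib.pyplot as plt

# 1. Acquisition: in practice data comes from sensors, logs, APIs, etc.;
#    here we just create a small in-memory table with one missing value.
raw = pd.DataFrame({
    "store": ["A", "A", "B", "B", "B", "A"],
    "sales": [120, None, 95, 110, 87, 140],
})

# 2. Storage: real systems would write this to HDFS, a database, or object storage.
raw.to_csv("sales_raw.csv", index=False)

# 3. Processing: clean the data and aggregate it into a usable form.
clean = raw.dropna()
per_store = clean.groupby("store")["sales"].mean()

# 4. Analysis: a simple statistic standing in for heavier analytics.
best = per_store.idxmax()

# 5. Visualization: a chart for stakeholders.
per_store.plot(kind="bar", title="Average sales per store")
plt.savefig("sales_per_store.png")

# 6. Decision making: act on the insight.
print(f"Store {best} is performing best; consider replicating its approach.")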
3. Discuss the application of big data in different domains with examples.
Answer:
Healthcare:
Using big data to analyze patient records and medical images for better diagnosis and
treatment decisions.
Example: Analyzing patient data to predict disease outbreaks or using machine
learning to detect anomalies in medical images.
Finance:
Employing big data to detect and prevent fraudulent transactions and make data-
driven investment decisions.
Example: Using algorithms to analyze transaction patterns for detecting credit card
fraud or analyzing market data to automate trading decisions (the fraud-detection idea is sketched in code after this list).
Retail:
Using big data to personalize recommendations and manage inventory.
Example: Analyzing purchase histories to recommend products or forecasting demand
to avoid stock-outs.
Manufacturing:
Using big data to predict equipment failures and optimize production processes.
Example: Analyzing sensor data to schedule maintenance before equipment
breakdowns or detecting defects in real-time during production.
Transportation:
Employing big data to optimize transportation routes and manage fleets efficiently.
Example: Optimizing delivery routes to reduce fuel consumption and delivery times
or tracking vehicle performance to improve maintenance schedules.
In each domain, big data is used to gather insights from large volumes of data, leading
to improved decision-making, efficiency, and innovation.
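As one concrete example, the sketch below flags unusually large transactions as anomalies using scikit-learn's IsolationForest. The amounts and the contamination setting are invented for illustration; a real fraud system would use far more data and many more features than the transaction amount alone.

# Toy fraud-detection sketch (invented data): flag unusual transaction amounts
# with an anomaly detector.
from sklearn.ensemble import IsolationForest
import numpy as np

# Mostly ordinary transaction amounts, plus a couple of suspicious outliers.
amounts = np.array([[12.5], [8.0], [23.1], [15.7], [9.9], [1850.0], [14.2], [2400.0]])

detector = IsolationForest(contamination=0.25, random_state=0).fit(amounts)
labels = detector.predict(amounts)            # -1 = anomaly, 1 = normal

for amount, label in zip(amounts.ravel(), labels):
    if label == -1:
        print(f"Flag for review: transaction of {amount:.2f}")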
4. What is Clustered Computing? Explain its advantages.
Answer:
Clustered computing is like teamwork for computers. Instead of one computer doing
all the work, multiple computers work together as a team. This has a few advantages:
Better Performance: Just like more people can get a job done faster, more computers
working together can handle tasks quicker.
Reliability: If one computer has a problem, the others can step in to keep things
running smoothly, kind of like having a backup plan.
Scalability: Need more computing power? Just add more computers to the cluster. It's
like adding more people to a team when you need to get more work done. (A toy sketch of this divide-the-work idea follows.)
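The "teamwork" idea can be illustrated locally. The sketch below splits one job among several worker processes on a single machine; a real cluster spreads the work across separate computers (for example with Spark or MPI), but the divide-and-combine principle is the same. The prime-counting task is just a stand-in workload chosen for illustration.

# Illustrative only: dividing work among several workers, the core idea behind a cluster.
# A real cluster coordinates separate machines over a network; here a local process
# pool stands in for the "team".
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    # Count the primes in a half-open range; each worker handles one chunk.
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split one big job into four chunks and let four workers handle them in parallel.
    chunks = [(1, 50_000), (50_000, 100_000), (100_000, 150_000), (150_000, 200_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print("primes below 200000:", total)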
5. Briefly discuss the following big data platforms and compare them to Hadoop.
a. Apache Spark
b. Apache Storm
c. Ceph
d. Hydra
e. Google BigQuery
Answer:
a. Apache Spark:
Overview: Apache Spark is a high-speed data processing engine capable of handling
both batch and real-time tasks.
Comparison with Hadoop: Spark outperforms Hadoop in speed due to its in-memory
processing capabilities and offers a broader range of functionalities, including
machine learning and graph processing (a short PySpark sketch at the end of this answer shows what Spark code looks like).
b. Apache Storm:
Overview: Apache Storm is a real-time stream processing system designed for
continuous data streams.
Comparison with Hadoop: Storm specializes in real-time processing, making it more
suitable for low-latency applications like real-time analytics compared to Hadoop,
which is primarily geared towards batch processing.
c. Ceph:
Overview: Ceph is a distributed storage system providing scalable and fault-tolerant
storage for large datasets.
Comparison with Hadoop: While Hadoop relies on HDFS for storage, Ceph offers
more flexible storage options capable of handling diverse data types and workloads
while offering improved scalability and fault tolerance.
d. Hydra:
Overview: Hydra is a distributed data processing platform that handles both streaming
and batch workloads, storing and processing data as trees.
Comparison with Hadoop: Hydra can process live streams as well as batch jobs,
whereas Hadoop MapReduce is oriented towards batch processing of data at rest.
e. Google BigQuery:
Overview: Google BigQuery is a managed data warehouse service that enables SQL-
based analysis of large datasets.
Comparison with Hadoop: BigQuery differs from Hadoop by providing a fully
managed, serverless solution for data analytics, abstracting away the complexities of
cluster management and infrastructure setup.
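To give a feel for what Spark code looks like, here is a minimal PySpark word-count sketch. It assumes a Spark installation is available, and "input.txt" is only a placeholder path; the same code runs unchanged on a single laptop or on a cluster, which is part of Spark's appeal compared with writing MapReduce jobs by hand.

# Minimal PySpark sketch (requires Spark; "input.txt" is a placeholder input file).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-count-example").getOrCreate()

# Read the text file and work with it as an RDD of lines.
lines = spark.read.text("input.txt").rdd.map(lambda row: row[0])

counts = (
    lines.flatMap(lambda line: line.split())   # split lines into words
         .map(lambda word: (word, 1))          # pair each word with a count of 1
         .reduceByKey(lambda a, b: a + b)      # sum counts per word, in parallel
)

for word, count in counts.take(10):            # show a few results
    print(word, count)

spark.stop()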