Role of Data for Emerging Technologies

Data is regarded as the new oil and a strategic asset, since we are living in the age of big data; it drives, or even determines, the future of science, technology, the economy, and possibly everything in our world today and tomorrow. Data has not only triggered tremendous hype and buzz but, more importantly, presents enormous challenges that in turn bring incredible innovation and economic opportunities. This reshaping and paradigm shift is driven not just by data itself but by all the other aspects that can be created, transformed, and/or adjusted by understanding, exploring, and utilizing data. This trend and its potential have triggered new debate about data-intensive scientific discovery as an emerging technology, the so-called "fourth industrial revolution." There is no doubt, nevertheless, that the potential of data science and analytics to enable data-driven theory, economy, and professional development is increasingly being recognized. This involves not only the core disciplines of computing, informatics, and statistics, but also the broad-based fields of business, social science, and health/medical science.

Enabling devices and networks (Programmable devices)

In the world of digital electronic systems, there are four basic kinds of devices:

 Memory
 Microprocessors
 Logic and
 Networks.
Memory devices store random information such as the contents of a spreadsheet or database.
Microprocessors execute software instructions to perform a wide variety of tasks such as running a word
processing program or video game.

Logic devices provide specific functions, including device-to-device interfacing, data communication,
signal processing, data display, timing and control operations, and almost every other function a system
must perform.

A network is a collection of computers, servers, mainframes, network devices, peripherals, or other devices connected to one another to allow the sharing of data. An excellent example of a network is the Internet, which connects millions of people all over the world.

Programmable devices (see Figure 1.5) usually refer to chips that incorporate field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and programmable logic devices (PLDs). There are also devices that are the analog equivalent of these, called field-programmable analog arrays.

Figure 1.5 Programmable device

Why is a computer referred to as a programmable device?

A computer is called a programmable device because it follows a set of instructions. Many electronic devices are computers that perform only one operation, but they are still following instructions that reside permanently in the unit.

List of some Programmable devices

 Achronix Speedster SPD60
 Actel’s
 Altera Stratix IV GT and Arria II GX
 Atmel’s AT91CAP7L
 Cypress Semiconductor’s programmable system-on-chip (PSoC) family
 Lattice Semiconductor’s ECP3
 Lime Microsystems’ LMS6002
 Silicon Blue Technologies
 Xilinx Virtex 6 and Spartan 6
 Xmos Semiconductor L series

A full range of network-related equipment is referred to as Service Enabling Devices (SEDs), which can include:

 Traditional channel service unit (CSU) and data service unit (DSU)
 Modems
 Routers
 Switches
 Conferencing equipment
 Network appliances (NIDs and SIDs)
 Hosting equipment and servers

Human to Machine Interaction


Human-machine interaction (HMI) refers to the communication and interaction between a human
and a machine via a user interface. Nowadays, natural user interfaces such as gestures have gained
increasing attention as they allow humans to control machines through natural and intuitive
behaviors.
What is interaction in human-computer interaction?
HCI (human-computer interaction) is the study of how people interact with computers and to what
extent computers are or are not developed for successful interaction with human beings. As its
name implies, HCI consists of three parts: the user, the computer itself, and the ways they work
together.
How do users interact with computers?
The user interacts directly with the hardware for human input and output, such as displays, e.g. through a graphical user interface. The user interacts with the computer over this software interface using the given input and output (I/O) hardware.
How important is human-computer interaction?
The goal of HCI is to improve the interaction between users and computers by making computers more
user-friendly and receptive to the user's needs.
The main advantages of HCI are:
 Simplicity,
 Ease of deployment and operations,
 Cost savings for smaller set-ups, and
 Reduced solution design time and integration complexity.
Disciplines Contributing to Human-Computer Interaction (HCI)
Cognitive psychology: Limitations, information processing, performance prediction, cooperative
working, and capabilities.
Computer science: Including graphics, technology, prototyping tools, user interface management
systems.
Linguistics.
Engineering and design.
Artificial intelligence.
Human factors.
Future Trends in Emerging Technologies
a. Emerging technology trends in 2019
 5G Networks
 Artificial Intelligence (AI)
 Autonomous Devices
 Blockchain
 Augmented Analytics
 Digital Twins
 Enhanced Edge Computing and
 Immersive Experiences in Smart Spaces
b. Some emerging technologies that will shape the future of you and your business
The future is now, or so they say. So-called emerging technologies are taking over our minds more and more each day. These may sound like very high-level technologies, tools that will only affect the top tier of technology companies employing the world's top 1% of geniuses. This is totally wrong. Chatbots, virtual/augmented reality, blockchain, ephemeral apps, and artificial intelligence are already shaping your life whether you like it or not. At the end of the day, you can either adapt or die.
Chapter 2: Data Science
Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems
to extract knowledge and insights from structured, semi-structured and unstructured data.
Data science is much more than simply analyzing data. It offers a range of roles and requires a range of
skills.
What are data and information?
Data can be defined as a representation of facts, concepts, or instructions in a formalized manner, which
should be suitable for communication, interpretation, or processing, by human or electronic machines.
It can be described as unprocessed facts and figures. It is represented with the help of characters such as alphabets (A-Z, a-z), digits (0-9) or special characters (+, -, /, *, <, >, =, etc.).
Information is the processed data on which decisions and actions are based. It is data that has been processed into a form that is meaningful to the recipient and is of real or perceived value in the current or prospective action or decision of the recipient. Furthermore, information is interpreted data: data created from organized, structured, and processed data in a particular context.
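As a small added illustration (not from the original text), raw figures become information once they are processed into a meaningful form; the sales values below are invented for the example:

```python
# Data: unprocessed facts and figures, here daily sales recorded in birr.
daily_sales = [1200, 950, 1830, 760, 1410]

# Information: the same data processed into a form that supports a decision,
# such as the total and the average daily sales for the week.
total = sum(daily_sales)
average = total / len(daily_sales)
print(f"Weekly sales: {total} birr, daily average: {average:.0f} birr")
```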
Data Processing Cycle
Data processing is the re-structuring or re-ordering of data by people or machines to increase its usefulness and add value for a particular purpose. Data processing consists of three basic steps: input, processing, and output. These three steps constitute the data processing cycle.

Input − in this step, the input data is prepared in some convenient form for processing. The form will depend on the processing machine. For example, when electronic computers are used, the input data can be recorded on any one of several types of storage medium, such as a hard disk, CD, flash disk, and so on.
Processing − in this step, the input data is changed to produce data in a more useful form. For example, interest can be calculated on a deposit to a bank, or a summary of sales for the month can be calculated from the sales orders.
Output − at this stage, the result of the preceding processing step is collected. The particular form of the output data depends on the use of the data. For example, output data may be payroll for employees.
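To make the cycle concrete, here is a minimal added Python sketch (not part of the original text) that follows the bank-interest example: it takes a deposit amount as input, processes it by computing interest, and outputs the result. The function names and the 7% rate are illustrative assumptions.

```python
# Minimal sketch of the input -> processing -> output cycle,
# using the bank-interest example above. Names and the 7% rate
# are illustrative assumptions.

def read_input() -> float:
    """Input step: accept a deposit amount in some convenient form."""
    return float(input("Enter deposit amount: "))

def compute_interest(deposit: float, annual_rate: float = 0.07) -> float:
    """Processing step: transform the raw figure into a more useful one."""
    return deposit * annual_rate

def write_output(interest: float) -> None:
    """Output step: collect and present the result of processing."""
    print(f"Interest earned this year: {interest:.2f}")

if __name__ == "__main__":
    write_output(compute_interest(read_input()))
```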
Data types and their representation
Data types can be described from diverse perspectives. In computer science and computer
programming, for instance, a data type is simply an attribute of data that tells the compiler or
interpreter how the programmer intends to use the data.
Data types from Computer programming perspective
Almost all programming languages explicitly include the notion of data type, though different languages
may use different terminology.
Common data types include:
• Integers (int) − used to store whole numbers, mathematically known as integers
• Booleans (bool) − used to represent values restricted to one of two values: true or false
• Characters (char) − used to store a single character
• Floating-point numbers (float) − used to store real numbers
• Alphanumeric strings (string) − used to store a combination of characters and numbers
A data type constrains the values that an expression, such as a variable or a function, might take. This data type defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored.
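As a brief added illustration (using Python as the example language), the common data types listed above look like this in code; the variable names and values are assumptions chosen for the example:

```python
# Illustrative Python values for the common data types listed above.
age = 25                    # integer (int): a whole number
is_student = True           # Boolean (bool): restricted to True or False
grade = "A"                 # character: Python represents it as a one-character string
gpa = 3.74                  # floating-point number (float): a real number
student_id = "ETS0912/11"   # alphanumeric string: characters and digits combined

# type() reports the data type the interpreter has inferred for each value.
for value in (age, is_student, grade, gpa, student_id):
    print(value, type(value).__name__)
```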
Data types from Data Analytics perspective
From a data analytics point of view, it is important to understand that there are three common types of
data types or structures:
 Structured,
 Semi-structured, and
 Unstructured data types.
Fig. 2.2 below describes the three types of data and metadata.

Structured Data
Structured data is data that adheres to a pre-defined data model and is therefore straightforward to
analyze. Structured data conforms to a tabular format with a relationship between the different
rows and columns. Common examples of structured data are Excel files or SQL databases. Each of
these has structured rows and columns that can be sorted.
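For illustration (an addition to the text), a small structured dataset can be created and queried with Python's built-in sqlite3 module; the table and column names here are assumptions made up for the example:

```python
import sqlite3

# A tiny structured dataset: fixed columns, one record per row.
# Table and column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("Addis Ababa", "2019-01", 1200.0), ("Adama", "2019-01", 800.0)],
)

# Because the data conforms to a tabular model, it is straightforward
# to sort and aggregate.
for row in conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(row)
```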
Semi-structured Data
Semi-structured data is a form of structured data that does not conform with the formal structure
of data models associated with relational databases or other forms of data tables, but nonetheless,
contains tags or other markers to separate semantic elements and enforce hierarchies of records
and fields within the data. Therefore, it is also known as a self-describing structure. JSON and XML are common examples of semi-structured data.
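As a short added example (not from the original), the JSON record below is semi-structured: its tags describe each field, but other records need not share exactly the same fields. The field names and values are invented for illustration.

```python
import json

# A semi-structured record: the tags ("name", "courses", ...) describe the
# data, but a second record could legally omit or add fields.
record = json.loads("""
{
  "name": "Abebe",
  "courses": ["Emerging Technologies", "Data Science"],
  "contact": {"email": "abebe@example.com"}
}
""")

print(record["name"], "is enrolled in", len(record["courses"]), "courses")
```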
Unstructured Data
Unstructured data is information that either does not have a predefined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in structured databases. Common examples of unstructured data include audio, video files or NoSQL databases.
Metadata – Data about Data
The last category of data type is metadata. From a technical point of view, this is not a separate data
structure, but it is one of the most important elements for Big Data analysis and big data solutions.
Metadata is data about data. It provides additional information about a specific set of data. In a set
of photographs, for example, metadata could describe when and where the photos were taken. The
metadata then provides fields for dates and locations which, by themselves, can be considered structured data. For this reason, metadata is frequently used by Big Data solutions for initial analysis.
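As a small added sketch mirroring the photo example above (the field names and values are assumptions), photo metadata could be represented like this:

```python
from datetime import datetime

# Metadata describing a photograph (data about data). The photo itself is
# unstructured, while these descriptive fields are structured and easy to query.
photo_metadata = {
    "filename": "IMG_0042.jpg",
    "taken_at": datetime(2019, 9, 11, 14, 30),
    "location": {"latitude": 9.0054, "longitude": 38.7636},  # near Addis Ababa
    "camera": "Example Model X",
}

print("Photo taken on", photo_metadata["taken_at"].date(),
      "near", photo_metadata["location"])
```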
Data value Chain
The Data Value Chain is introduced to describe the information flow within a big data system as a
series of steps needed to generate value and useful insights from data. The Big Data Value Chain
identifies the following key high-level activities.

Data Acquisition
It is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any
other storage solution on which data analysis can be carried out. Data acquisition is one of the major
big data challenges in terms of infrastructure requirements. The infrastructure required to support
the acquisition of big data must deliver low, predictable latency in both capturing data and in
executing queries; be able to handle very high transaction volumes, often in a distributed
environment; and support flexible and dynamic data structures.
Data Curation
It is the active management of data over its life cycle to ensure it meets the necessary data quality
requirements for its effective usage. Data curation processes can be categorized into different
activities such as content creation, selection, classification, transformation, validation, and
preservation. Data curation is performed by expert curators that are responsible for improving the
accessibility and quality of data. Data curators (also known as scientific curators or data annotators)
hold the responsibility of ensuring that data are trustworthy, discoverable, accessible, and reusable
and fit their purpose. A key trend for the curation of big data is the use of community and crowdsourcing approaches.
Data Storage
It is the persistence and management of data in a scalable way that satisfies the needs of
applications that require fast access to the data. Relational Database Management Systems
(RDBMS) have been the main, and almost unique, solution to the storage paradigm for nearly 40 years. However, the ACID (Atomicity, Consistency, Isolation, and Durability) properties that guarantee database transactions lack flexibility with regard to schema changes, and their performance and fault tolerance suffer when data volumes and complexity grow, making them unsuitable for big data scenarios. NoSQL technologies have been designed with the scalability goal in mind and present a wide range of solutions based on alternative data models.
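To make the ACID idea concrete, here is a small added sketch (not from the original text) using Python's built-in sqlite3 module, a single-machine relational store used only for illustration: the transfer below is atomic, so it either completes fully or not at all.

```python
import sqlite3

# Minimal sketch of an ACID transaction: either both rows are updated or,
# if an error occurs, neither is (atomicity), and the result is durable
# once committed. Table and account names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])

try:
    with conn:  # the 'with' block commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    print("Transfer failed; no partial update was applied")

print(dict(conn.execute("SELECT name, balance FROM accounts")))
```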
Data Usage
It covers the data-driven business activities that need access to data, its analysis, and the tools
needed to integrate the data analysis within the business activity. Data usage in business decision
making can enhance competitiveness through the reduction of costs, increased added value, or any
other parameter that can be measured against existing performance criteria.
What Is Big Data?
Big data is the term for a collection of data sets so large and complex that it becomes difficult to
process using on-hand database management tools or traditional data processing applications. In
this context, a “large dataset” means a dataset too large to reasonably process or store with
traditional tooling or on a single computer. This means that the common scale of big datasets is
constantly shifting and may vary significantly from organization to organization. Big data is
characterized by 3V and more:
 Volume: large amounts of data (zettabytes/massive datasets)
 Velocity: Data is live streaming or in motion
 Variety: data comes in many different forms from diverse sources
 Veracity: can we trust the data? How accurate is it? etc.

Clustered Computing and Hadoop Ecosystem


Clustered Computing
Because of the qualities of big data, individual computers are often inadequate for handling the
data at most stages. To better address the high storage and computational needs of big data,
computer clusters are a better fit. Big data clustering software combines the resources of many
smaller machines, seeking to provide a number of benefits:
• Resource Pooling: Combining the available storage space to hold data is a clear benefit, but
CPU and memory pooling are also extremely important. Processing large datasets requires large
amounts of all three of these resources.
• High Availability: Clusters can provide varying levels of fault tolerance and availability
guarantees to prevent hardware or software failures from affecting access to data and
processing. This becomes increasingly important as we continue to emphasize the importance of
real-time analytics.
• Easy Scalability: Clusters make it easy to scale horizontally by adding additional machines to
the group. This means the system can react to changes in resource requirements without
expanding the physical resources on a machine.

Using clusters requires a solution for managing cluster membership, coordinating resource sharing, and scheduling actual work on individual nodes. Cluster membership and resource allocation can be handled by software like Hadoop's YARN (which stands for Yet Another Resource Negotiator). The assembled computing cluster often acts as a foundation that other software interfaces with to process the data. The machines involved in the computing cluster are also typically involved with the management of a distributed storage system, which we will talk about when we discuss data persistence.

Hadoop and its Ecosystem


Hadoop is an open-source framework intended to make interaction with big data easier. It is a
framework that allows for the distributed processing of large datasets across clusters of computers
using simple programming models. It is inspired by a technical document published by Google. The four
key characteristics of Hadoop are:
• Economical: Its systems are highly economical as ordinary computers can be used for data processing.
• Reliable: It is reliable as it stores copies of the data on different machines and is resistant to hardware
failure.
• Scalable: It is easily scalable, both horizontally and vertically. A few extra nodes help in scaling up the framework.
• Flexible: It is flexible and you can store as much structured and unstructured data as you need and decide to use it later.
Hadoop has an ecosystem that has evolved from its four core components: data management, access,
processing, and storage. It is continuously growing to meet the needs of Big Data. It comprises the
following components and many others:
• HDFS: Hadoop Distributed File System
• YARN: Yet Another Resource Negotiator
• MapReduce: Programming-based data processing (see the sketch after this list)
• Spark: In-Memory data processing
• PIG, HIVE: Query-based processing of data services
• HBase: NoSQL Database
• Mahout, Spark MLLib: Machine Learning algorithm libraries
• Solr, Lucene: Searching and Indexing
• Zookeeper: Managing cluster
• Oozie: Job Scheduling
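To give a feel for the MapReduce component listed above, here is a minimal added sketch (not from the original document) of the classic word-count job written in the mapper/reducer style used by Hadoop Streaming, which pipes text through scripts via standard input and output. The file name, run command, and job are illustrative assumptions.

```python
# Minimal word-count sketch in the MapReduce style used by Hadoop Streaming.
# Hadoop pipes input text through the mapper, sorts the mapper's output by key,
# and pipes it through the reducer. Illustrative only; run locally e.g. as:
#   cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
import sys
from itertools import groupby

def mapper(lines):
    """Map step: emit (word, 1) for every word in the input."""
    for line in lines:
        for word in line.split():
            print(f"{word}\t1")

def reducer(lines):
    """Reduce step: sum the counts for each word (input must be sorted by word)."""
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper(sys.stdin) if sys.argv[1] == "map" else reducer(sys.stdin)
```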
Big Data Life Cycle with Hadoop
Ingesting data into the system
The first stage of Big Data processing is Ingest. The data is ingested or transferred to Hadoop from
various sources such as relational databases, systems, or local files. Sqoop transfers data from RDBMS to
HDFS, whereas Flume transfers event data.
Processing the data in storage
The second stage is Processing. In this stage, the data is stored and processed. The data is stored in the distributed file system, HDFS, and the NoSQL distributed data store, HBase. Spark and MapReduce perform the data processing.
Computing and analyzing data
The third stage is to Analyze. Here, the data is analyzed by processing frameworks such as Pig, Hive, and Impala. Pig converts the data using map and reduce and then analyzes it. Hive is also based on map and reduce programming and is most suitable for structured data.
Visualizing the results
The fourth stage is Access, which is performed by tools such as Hue and Cloudera Search. In this stage,
the analyzed data can be accessed by users.
Chapter 3: Artificial Intelligence (AI)
Artificial means "man-made," and intelligence means "thinking power," or "the ability to learn and solve problems"; hence, Artificial Intelligence means "a man-made thinking power."
So, we can define Artificial Intelligence (AI) as the branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and are able to make decisions.
Intelligence, as we know, is the ability to acquire and apply knowledge. Knowledge is the information
acquired through experience. Experience is the knowledge gained through exposure (training). Summing
the terms up, we get artificial intelligence as the “copy of something natural (i.e., human beings) ‘WHO’
is capable of acquiring and applying the information it has gained through exposure.”
Artificial Intelligence exists when a machine has human-based skills such as learning, reasoning, and solving problems. With Artificial Intelligence, you do not need to preprogram a machine to do some work; instead, you can create a machine with programmed algorithms that can work with its own intelligence.
Intelligence is composed of:
➢ Reasoning
➢ Learning
➢ Problem Solving
➢ Perception
➢ Linguistic Intelligence
