Role of Data For Emerging Technologies
Data is regarded as the new oil and a strategic asset: we live in the age of big data, and data
drives, or even determines, the future of science, technology, the economy, and possibly
everything else in our world today and tomorrow. Data has not only triggered tremendous hype and
buzz but, more importantly, presents enormous challenges that in turn bring incredible innovation
and economic opportunities. This reshaping and paradigm shift are driven not just by data
itself but by everything else that can be created, transformed, and adjusted by understanding,
exploring, and utilizing data. This trend and its potential have triggered new debate
about data-intensive scientific discovery as an emerging technology, the so-called "fourth
industrial revolution." There is no doubt, nevertheless, that the potential of data science and
analytics to enable data-driven theory, economy, and professional development is increasingly
being recognized. This involves not only the core disciplines of computing, informatics, and
statistics, but also the broad-based fields of business, social science, and health/medical science.
In the world of digital electronic systems, there are four basic kinds of devices:
Memory
Microprocessors
Logic, and
Networks.
Memory devices store random information such as the contents of a spreadsheet or database.
Microprocessors execute software instructions to perform a wide variety of tasks such as running a word
processing program or video game.
Logic devices provide specific functions, including device-to-device interfacing, data communication,
signal processing, data display, timing and control operations, and almost every other function a system
must perform.
A network is a collection of computers, servers, mainframes, network devices, peripherals, or other
devices connected to one another to allow the sharing of data. An excellent example of a network is the
Internet, which connects millions of people all over the world.
Programmable devices (see Figure 1.5) usually refer to chips that incorporate field-programmable gate
arrays (FPGAs), complex programmable logic devices (CPLDs), and programmable logic devices (PLDs).
There are also devices that are the analog equivalent of these, called field-programmable analog arrays.
What makes a computer a computer is that it follows a set of instructions. Many electronic devices are
computers that perform only one operation, but they are still following instructions that reside
permanently in the unit.
A full range of network-related equipment is referred to as Service Enabling Devices (SEDs), which can
include:
Traditional channel service unit (CSU) and data service unit (DSU)
Modems
Routers
Switches
Conferencing equipment
Network appliances (NIDs and SIDs)
Hosting equipment and servers
Input − in this step, the input data is prepared in some convenient form for processing. The form
depends on the processing machine. For example, when electronic computers are used, the input data
can be recorded on any one of several types of storage media, such as a hard disk, CD, or flash disk.
Processing − in this step, the input data is changed to produce data in a more useful form. For example,
interest can be calculated on a deposit to a bank, or a summary of sales for the month can be calculated
from the sales orders.
Output − at this stage, the result of the preceding processing step is collected. The particular form of
the output data depends on the use of the data. For example, the output data may be a payroll for
employees.
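A minimal Python sketch of this input−process−output cycle (the file name, record layout, and interest rate below are illustrative assumptions, not a prescribed format):

```python
import csv

# Input: read deposit records prepared on a storage medium (here, a CSV file).
def read_deposits(path):
    with open(path, newline="") as f:
        return [{"account": row["account"], "balance": float(row["balance"])}
                for row in csv.DictReader(f)]

# Processing: change the data into a more useful form, e.g. compute interest
# (a flat 3% rate is assumed purely for illustration).
def add_interest(deposits, rate=0.03):
    return [{**d, "interest": round(d["balance"] * rate, 2)} for d in deposits]

# Output: collect the result in a form suited to its use, here a printed report.
def print_report(deposits):
    for d in deposits:
        print(f"{d['account']}: balance={d['balance']:.2f}, interest={d['interest']:.2f}")

# print_report(add_interest(read_deposits("deposits.csv")))
```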
Data types and their representation
Data types can be described from diverse perspectives. In computer science and computer
programming, for instance, a data type is simply an attribute of data that tells the compiler or
interpreter how the programmer intends to use the data.
Data types from Computer programming perspective
Almost all programming languages explicitly include the notion of data type, though different languages
may use different terminology.
Common data types include:
• Integers (int) − used to store whole numbers, mathematically known as integers
• Booleans (bool) − used to store values restricted to one of two options: true or false
• Characters (char) − used to store a single character
• Floating-point numbers (float) − used to store real numbers
• Alphanumeric strings (string) − used to store a combination of characters and numbers.
A data type constrains the values that an expression, such as a variable or a function, might take. The
data type defines the operations that can be done on the data, the meaning of the data, and the way
values of that type can be stored.
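As an illustration, the same five data types in Python (type names vary by language; Python has no separate character type and calls strings str):

```python
age = 25             # integer (int): a whole number
is_active = True     # Boolean (bool): restricted to one of two values
grade = "A"          # character: in Python, a one-character string
pi = 3.14159         # floating-point number (float): a real number
user_id = "user42"   # alphanumeric string (str): characters and digits combined

# The type defines which operations are valid on the value:
print(age + 1)          # ints support arithmetic -> 26
print(user_id + "_x")   # strings support concatenation -> 'user42_x'
# age + user_id would raise a TypeError: the two types define incompatible operations.
```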
Data types from Data Analytics perspective
From a data analytics point of view, it is important to understand that there are three common data
types or structures:
Structured,
Semi-structured, and
Unstructured data. Fig. 2.2 below describes the three types of data and metadata.
Structured Data
Structured data is data that adheres to a pre-defined data model and is therefore straightforward to
analyze. Structured data conforms to a tabular format with a relationship between the different
rows and columns. Common examples of structured data are Excel files or SQL databases. Each of
these has structured rows and columns that can be sorted.
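A small sketch of structured data using Python's built-in sqlite3 module; the table name, columns, and rows are illustrative:

```python
import sqlite3

# Structured data: every row follows the same pre-defined schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, product TEXT, amount REAL)")
conn.executemany("INSERT INTO sales (product, amount) VALUES (?, ?)",
                 [("pen", 2.5), ("book", 12.0), ("pen", 2.5)])

# Because rows and columns are known in advance, sorting and aggregation
# are straightforward.
for product, total in conn.execute("SELECT product, SUM(amount) FROM sales GROUP BY product"):
    print(product, total)
conn.close()
```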
Semi-structured Data
Semi-structured data is a form of structured data that does not conform with the formal structure
of data models associated with relational databases or other forms of data tables, but nonetheless,
contains tags or other markers to separate semantic elements and enforce hierarchies of records
and fields within the data. Therefore, it is also known as a self-describing structure. JSON and
XML are common examples of semi-structured data.
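A brief JSON example of this self-describing property (the field names are illustrative): the tags name each semantic element, and the two records need not share an identical structure.

```python
import json

# Tags name each semantic element; the second record has a nested
# field that the first one lacks, yet both remain parseable.
raw = '''
[
  {"name": "Abebe", "age": 30},
  {"name": "Sara",  "age": 28, "address": {"city": "Addis Ababa"}}
]
'''
for person in json.loads(raw):
    city = person.get("address", {}).get("city", "unknown")
    print(person["name"], person["age"], city)
```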
Unstructured Data
Unstructured data is information that either does not have a predefined data model or is not
organized in a pre-defined manner. Unstructured information is typically text-heavy but may contain
data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that
make it difficult to understand using traditional programs, as compared to data stored in structured
databases. Common examples of unstructured data include audio files, video files, and NoSQL databases.
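A short illustration of why such data resists schema-based processing: facts like dates must be guessed out of free-form text. The regular expression below is a deliberately simplistic assumption that matches only one date format:

```python
import re

# Free-form text: no schema says where the dates or amounts are.
note = "Meeting moved from 2024-05-01 to 2024-05-08; budget is 12000 birr."

# Pattern matching is a guess: this misses formats such as '1 May 2024'.
dates = re.findall(r"\d{4}-\d{2}-\d{2}", note)
print(dates)  # ['2024-05-01', '2024-05-08']
```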
Metadata – Data about Data
The last category of data type is metadata. From a technical point of view, this is not a separate data
structure, but it is one of the most important elements for Big Data analysis and big data solutions.
Metadata is data about data. It provides additional information about a specific set of data. In a set
of photographs, for example, metadata could describe when and where the photos were taken. The
metadata then provides fields for dates and locations which, by themselves, can be considered
structured data. For this reason, metadata is frequently used by Big Data solutions for initial
analysis.
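A minimal sketch of the photo example: the image bytes themselves are unstructured, but the metadata attached to them forms structured fields (the field names below are illustrative, loosely modeled on EXIF tags):

```python
# Metadata: structured fields describing an otherwise unstructured image.
photo_metadata = {
    "file": "IMG_0042.jpg",
    "taken_at": "2023-11-02T14:31:00",         # when the photo was taken
    "location": {"lat": 9.03, "lon": 38.74},   # where the photo was taken
    "camera": "Pixel 7",
}

# Because these fields are structured, they can be filtered and sorted directly,
# which is why Big Data solutions often start their analysis from metadata.
print(photo_metadata["taken_at"], photo_metadata["location"])
```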
Data Value Chain
The Data Value Chain is introduced to describe the information flow within a big data system as a
series of steps needed to generate value and useful insights from data. The Big Data Value Chain
identifies the following key high-level activities.
Data Acquisition
It is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any
other storage solution on which data analysis can be carried out. Data acquisition is one of the major
big data challenges in terms of infrastructure requirements. The infrastructure required to support
the acquisition of big data must deliver low, predictable latency in both capturing data and in
executing queries; be able to handle very high transaction volumes, often in a distributed
environment; and support flexible and dynamic data structures.
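As a hedged sketch, the gather-filter-clean steps of acquisition might look like this before the data reaches a warehouse (the record layout and filtering rules are assumptions for the example):

```python
# Gather: raw records as they arrive from a source (hard-coded here).
raw_records = [
    {"sensor": "t1", "value": "21.5"},
    {"sensor": "t1", "value": ""},       # missing reading
    {"sensor": "t2", "value": "-999"},   # sentinel from a faulty sensor
]

# Filter and clean: drop unusable records and convert types before storage.
def acquire(records):
    cleaned = []
    for r in records:
        if not r["value"] or r["value"] == "-999":
            continue  # filter out missing values and error sentinels
        cleaned.append({"sensor": r["sensor"], "value": float(r["value"])})
    return cleaned

print(acquire(raw_records))  # [{'sensor': 't1', 'value': 21.5}]
```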
Data Curation
It is the active management of data over its life cycle to ensure it meets the necessary data quality
requirements for its effective usage. Data curation processes can be categorized into different
activities such as content creation, selection, classification, transformation, validation, and
preservation. Data curation is performed by expert curators who are responsible for improving the
accessibility and quality of data. Data curators (also known as scientific curators or data annotators)
hold the responsibility of ensuring that data are trustworthy, discoverable, accessible, reusable,
and fit for their purpose. A key trend in the curation of big data is the use of community and
crowdsourcing approaches.
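One curation activity, validation, could be sketched as follows (the record layout and quality rules are illustrative assumptions, not a standard):

```python
# Validation: check that a record meets quality rules before it is accepted
# as trustworthy and reusable.
REQUIRED_FIELDS = {"id", "species", "sequence"}

def validate(record):
    issues = []
    if not REQUIRED_FIELDS <= record.keys():
        issues.append("missing required fields")
    if not set(record.get("sequence", "")) <= set("ACGT"):
        issues.append("sequence contains invalid characters")
    return issues

print(validate({"id": 1, "species": "E. coli", "sequence": "ACGTX"}))
# -> ['sequence contains invalid characters']
```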
Data Storage
It is the persistence and management of data in a scalable way that satisfies the needs of
applications that require fast access to the data. Relational Database Management Systems
(RDBMS) have been the main, and almost the only, solution to the storage paradigm for nearly 40
years. However, the ACID (Atomicity, Consistency, Isolation, and Durability) properties that
guarantee database transactions lack flexibility with regard to schema changes, and their performance
and fault tolerance suffer when data volumes and complexity grow, making them unsuitable for big data
scenarios. NoSQL technologies have been designed with the scalability goal in mind and present a
wide range of solutions based on alternative data models.
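A brief sketch of the flexibility difference (the records are illustrative): a relational row must fit one fixed schema, while a NoSQL-style document can carry its own structure.

```python
# Relational: every row fits one fixed schema; adding a field means
# altering the table for all existing rows.
relational_rows = [
    ("u1", "Abebe", 30),
    ("u2", "Sara", 28),
]

# Document (NoSQL-style): each record describes itself, so the schema can
# evolve record by record as data volume and variety grow.
documents = [
    {"_id": "u1", "name": "Abebe", "age": 30},
    {"_id": "u2", "name": "Sara", "age": 28, "interests": ["data", "ai"]},
]

for doc in documents:
    print(doc["_id"], doc.get("interests", []))
```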
Data Usage
It covers the data-driven business activities that need access to data, its analysis, and the tools
needed to integrate the data analysis within the business activity. Data usage in business decision
making can enhance competitiveness through the reduction of costs, increased added value, or any
other parameter that can be measured against existing performance criteria.
What Is Big Data?
Big data is the term for a collection of data sets so large and complex that it becomes difficult to
process using on-hand database management tools or traditional data processing applications. In
this context, a “large dataset” means a dataset too large to reasonably process or store with
traditional tooling or on a single computer. This means that the common scale of big datasets is
constantly shifting and may vary significantly from organization to organization. Big data is
characterized by the 3Vs and more:
Volume: large amounts of data (zettabytes, massive datasets)
Velocity: data is live-streaming or in motion
Variety: data comes in many different forms from diverse sources
Veracity: can we trust the data? How accurate is it? etc.