CC Assignment PDF
IaaS grew out of the broader conversion from traditional hardware-oriented data centres
to virtualized and cloud-based infrastructure. By removing the fixed relationship between
hardware and operating software and middleware, organizations found that they could scale data
environments quickly and easily to meet workload demands.
From there, it was just a small step to begin purchasing infrastructure on a service model
to cut costs and deliver the kind of flexibility needed to accommodate the growing demand for
digital services.
While the typical consumption model for IaaS is to acquire services from a third-party
provider, many large enterprises are adapting it for their own internal, private clouds. IaaS, after
all, is built on virtual pools of resources which ideally are parcelled out on-demand and then
returned to the pool when no longer needed.
Rather than providing discrete server, storage and networking resources in this way,
internal IaaS models deliver them in an integrated fashion to avoid bottlenecks and conflicts. In
this way, the enterprise is able to streamline its actual hardware infrastructure while still
providing the needed resources to serve the business model.
Ajax is a set of web development techniques using many web technologies on the client
side to create asynchronous web applications. With Ajax, web applications can send and retrieve
data from a server asynchronously (in the background) without interfering with the display and
behaviour of the existing page. By decoupling the data interchange layer from the presentation
layer, Ajax allows web pages and, by extension, web applications, to change content
dynamically without the need to reload the entire page.[3] In practice, modern implementations
commonly utilize JSON instead of XML.
Ajax is not a single technology, but rather a group of technologies. HTML and CSS can
be used in combination to mark up and style information. The webpage can then be modified by
JavaScript to dynamically display—and allow the user to interact with—the new information.
The built-in XMLHttpRequest object, or since 2017 the new fetch() function within
JavaScript, is commonly used to execute Ajax on webpages, allowing websites to load content
onto the screen without refreshing the page. Ajax is not a new technology or a different language,
but existing technologies used in new ways.
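As an illustrative sketch of this technique, the following uses the built-in fetch() function to load data in the background and update only part of the page. The endpoint URL, element id, and response shape are hypothetical examples, not part of any standard.
```typescript
// Minimal Ajax sketch using the built-in fetch() function.
// "/api/messages" and the Message shape are hypothetical examples.
interface Message {
  id: number;
  text: string;
}

async function refreshMessages(): Promise<void> {
  // The request runs asynchronously, in the background.
  const response = await fetch("/api/messages");
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const messages: Message[] = await response.json();

  // Only the relevant part of the page is updated; no full reload occurs.
  const list = document.getElementById("message-list");
  if (list) {
    list.innerHTML = messages.map((m) => `<li>${m.text}</li>`).join("");
  }
}

refreshMessages().catch(console.error);
```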
In the early-to-mid 1990s, most Web sites were based on complete HTML pages. Each
user action required that a completely new page be loaded from the server. This process was
inefficient, as reflected by the user experience: all page content disappeared, then the new page
appeared. Each time the browser reloaded a page because of a partial change, all of the content
had to be re-sent, even though only some of the information had changed. This placed additional
load on the server and made bandwidth a limiting factor on performance.
In 1996, the iframe tag was introduced by Internet Explorer; like the object element, it can load
or fetch content asynchronously. In 1998, the Microsoft Outlook Web Access team developed
the concept behind the XMLHttpRequest scripting object. It appeared as XMLHTTP in the
second version of the MSXML library, which shipped with Internet Explorer 5.0 in March 1999.
The functionality of the XMLHTTP ActiveX control in IE 5 was later implemented by Mozilla,
Safari, Opera and other browsers as the XMLHttpRequest JavaScript object.[7] Microsoft adopted
the native XMLHttpRequest model as of Internet Explorer 7. The ActiveX version is still
supported in Internet Explorer, but not in Microsoft Edge. The utility of these background HTTP
requests and asynchronous Web technologies remained fairly obscure until it started appearing in
large-scale online applications such as Outlook Web Access (2000) and Oddpost (2002).
XML:
Extensible Markup Language (XML) is a markup language that defines a set of rules for
encoding documents in a format that is both human-readable and machine-readable. The World
Wide Web Consortium's XML 1.0 Specification of 1998 and several other related
specifications—all of them free open standards—define XML.
The design goals of XML emphasize simplicity, generality, and usability across the
Internet. It is a textual data format with strong support via Unicode for different human
languages. Although the design of XML focuses on documents, the language is widely used for
the representation of arbitrary data structures such as those used in web services. Several schema
systems exist to aid in the definition of XML-based languages, while programmers have
developed many application programming interfaces (APIs) to aid the processing of XML data.
Hundreds of document formats using XML syntax have been developed, including RSS,
Atom, SOAP, SVG, and XHTML. XML-based formats have become the default for many
office-productivity tools, including Microsoft Office (Office Open XML), OpenOffice.org and
LibreOffice (OpenDocument), and Apple's iWork. XML has also provided the base language for
communication protocols such as XMPP. Applications for the Microsoft .NET Framework use
XML files for configuration, and property lists are an implementation of configuration storage
built on XML.
Many industry data standards, such as Health Level 7, Open Travel Alliance, FpML,
MISMO, and National Information Exchange Model are based on XML and the rich features of
the XML schema specification. Many of these standards are quite complex and it is not
uncommon for a specification to comprise several thousand pages. In publishing, Darwin
Information Typing Architecture is an XML industry data standard. XML is used extensively to
underpin various publishing formats.
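As a small illustration of processing XML data, the sketch below parses a document with the standard DOMParser Web API available in browsers; the <note> vocabulary used here is purely illustrative.
```typescript
// Parsing a small XML document with the standard DOMParser Web API.
// The <note> vocabulary is purely illustrative.
const xml = `
  <note>
    <to>Alice</to>
    <from>Bob</from>
    <body>Remember the meeting at 10am.</body>
  </note>`;

const doc = new DOMParser().parseFromString(xml, "application/xml");
const body = doc.querySelector("body")?.textContent;
console.log(body); // "Remember the meeting at 10am."
```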
JSON:
JavaScript Object Notation is an open standard file format, and data interchange format,
that uses human-readable text to store and transmit data objects consisting of attribute–value
pairs and array data types (or any other serializable value). It is a very common data format, with
a diverse range of applications, such as serving as replacement for XML in AJAX systems.
JSON is a language-independent data format. It was derived from JavaScript, but many modern
programming languages include code to generate and parse JSON-format data. The official
Internet media type for JSON is application/json. JSON filenames use the extension .json.
Douglas Crockford originally specified the JSON format in the early 2000s. JSON was first
standardized in 2013, as ECMA-404. RFC 8259, published in 2017, is the current version of the
Internet Standard STD 90, and it remains consistent with ECMA-404. That same year, JSON
was also standardized as ISO/IEC 21778:2017.[1] The ECMA and ISO standards describe only
the allowed syntax, whereas the RFC covers some security and interoperability considerations.
JSON was based on a subset of the JavaScript scripting language (specifically, Standard
ECMA-262 3rd Edition—December 1999) and is commonly used with JavaScript, but it is a
language-independent data format. Code for parsing and generating JSON data is readily
available in many programming languages. JSON's website lists JSON libraries by language.
Though JSON was originally advertised and believed to be a strict subset of JavaScript
and ECMAScript, it inadvertently allows some unescaped characters in strings that were illegal
in JavaScript and ECMAScript string literals. JSON is a strict subset of ECMAScript as of the
language's 2019 revision.
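A minimal sketch of generating and parsing JSON with the language's built-in JSON object is shown below; the object structure is an illustrative example, not part of any standard.
```typescript
// Generating and parsing JSON with the built-in JSON object.
// The "user" structure is an illustrative example.
const user = {
  name: "Ada",
  active: true,
  roles: ["admin", "editor"], // array data type
};

// Serialize to a human-readable JSON string of attribute-value pairs.
const text: string = JSON.stringify(user, null, 2);

// Parse it back into an object; because JSON is language-independent,
// the same text could be consumed by code written in any other language.
const parsed = JSON.parse(text) as typeof user;
console.log(parsed.roles.length); // 2
```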
Answer:
Bigtable:
Bigtable is a compressed, high performance, proprietary data storage system built on Google File
System, Chubby Lock Service, SSTable (log-structured storage like LevelDB) and a few other
Google technologies. On May 6, 2015, a public version of Bigtable was made available as a
service. Bigtable also underlies Google Cloud Datastore, which is available as a part of the
Google Cloud Platform.
Bigtable is one of the prototypical examples of a wide column store. It maps two arbitrary
string values (row key and column key) and timestamp (hence three-dimensional mapping) into
an associated arbitrary byte array. It is not a relational database and can be better defined as a
sparse, distributed multi-dimensional sorted map. Bigtable is designed to scale into the petabyte
range across "hundreds or thousands of machines, and to make it easy to add more machines [to]
the system and automatically start taking advantage of those resources without any
reconfiguration". For example, Google's copy of the web can be stored in a Bigtable where the
row key is a domain-reversed URL, and columns describe various properties of a web page, with
one particular column holding the page itself. The page column can have several timestamped
versions describing different copies of the web page timestamped by when they were fetched.
Each cell of a Bigtable can have zero or more timestamped versions of the data. Another
function of the timestamp is to allow for both versioning and garbage collection of expired data.
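The data model described above can be pictured with a small in-memory sketch. This is a conceptual illustration of the (row key, column key, timestamp) mapping only, not Bigtable's actual client API.
```typescript
// Conceptual sketch of Bigtable's data model: a sparse, sorted map from
// (row key, column key, timestamp) to an arbitrary byte array.
// Illustration only, not Google's Bigtable client API.
type CellKey = string; // "rowKey|columnKey|timestamp"

class SparseSortedMap {
  private cells = new Map<CellKey, Uint8Array>();

  put(row: string, column: string, timestamp: number, value: Uint8Array): void {
    this.cells.set(`${row}|${column}|${timestamp}`, value);
  }

  // Return all timestamped versions of one cell, newest first.
  versions(row: string, column: string): Array<{ timestamp: number; value: Uint8Array }> {
    const prefix = `${row}|${column}|`;
    return [...this.cells.entries()]
      .filter(([key]) => key.startsWith(prefix))
      .map(([key, value]) => ({ timestamp: Number(key.slice(prefix.length)), value }))
      .sort((a, b) => b.timestamp - a.timestamp);
  }
}

// Example: a web page stored under a domain-reversed URL row key.
const table = new SparseSortedMap();
table.put("com.example/index.html", "contents:page", 1699000000,
  new TextEncoder().encode("<html>...</html>"));
```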
Amazon Dynamo:
Amazon DynamoDB is a fully managed proprietary NoSQL database service that supports key-
value and document data structures and is offered by Amazon.com as part of the Amazon Web
Services portfolio. DynamoDB exposes a similar data model to and derives its name from
Dynamo, but has a different underlying implementation. Dynamo had a multi-master design
requiring the client to resolve version conflicts, whereas DynamoDB uses synchronous replication
across multiple data centers for high durability and availability. DynamoDB was announced by
Amazon CTO Werner Vogels on January 18, 2012, and is presented as an evolution of the
Amazon SimpleDB solution.
A DynamoDB table features items that have attributes, some of which form a primary
key. Whereas in relational systems every item features each table attribute (or juggles "null" and
"unknown" values in their absence), DynamoDB items are schema-less. The only exception:
when creating a table, a developer specifies a primary key, and the table requires a key for every
item. Primary keys must be scalar (strings, numbers, or binary) and can take one of two forms. A
single-attribute primary key is known as the table's "partition key", which determines the
partition that an item hashes to (more on partitioning below), so an ideal partition key has a
uniform distribution over its range. A primary key can also feature a second attribute, which
DynamoDB calls the table's "sort key". In this case, partition keys do not have to be unique; they
are paired with sort keys to make a unique identifier for each item. The partition key is still used
to determine which partition the item is stored in, but within each partition, items are sorted by
the sort key.
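As a minimal sketch of writing an item whose primary key is a partition key plus a sort key, the following assumes the AWS SDK for JavaScript v3 DynamoDB client; the table name, attribute names, and region are hypothetical.
```typescript
// Writing an item with a partition key and a sort key using the AWS SDK for
// JavaScript v3. Table and attribute names are hypothetical; key attributes
// must be scalar (string, number, or binary).
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

async function putOrder(): Promise<void> {
  const client = new DynamoDBClient({ region: "us-east-1" });
  await client.send(
    new PutItemCommand({
      TableName: "Orders",                        // hypothetical table
      Item: {
        CustomerId: { S: "cust-42" },             // partition key: selects the partition
        OrderDate: { S: "2024-01-15T10:00:00Z" }, // sort key: orders items within the partition
        Total: { N: "199.99" },                   // other attributes are schema-less
      },
    })
  );
}

putOrder().catch(console.error);
```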
Answer:
Virtualization uses software to create an abstraction layer over computer hardware that
allows the hardware elements of a single computer—processors, memory, storage and more—to
be divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM
runs its own operating system (OS) and behaves like an independent computer, even though it is
running on just a portion of the actual underlying computer hardware.
CPU Virtualization goes by different names depending on the CPU manufacturer. For
Intel CPUs, this feature is called Intel Virtualization Technology, or Intel VT, and with AMD
CPUs it is called AMD-V. Regardless of what it is called, each virtualization technology
provides generally the same features and benefits to the operating system.
Answer:
Answer:
Amazon Elastic Block Store (EBS) provides raw block-level storage that can be
attached to Amazon EC2 instances and is used by Amazon Relational Database Service (RDS).
Amazon EBS provides a range of options for storage performance and cost. These options are
divided into two major categories: SSD-backed storage for transactional workloads, such as
databases and boot volumes (performance depends primarily on IOPS), and disk-backed storage
for throughput intensive workloads, such as MapReduce and log processing (performance
depends primarily on MB/s).
In a typical use case, using EBS would include formatting the device with a filesystem
and mounting it. EBS supports advanced storage features, including snapshotting and cloning.
As of June 2014, EBS volumes can be up to 1TB in size. EBS volumes are built on replicated
back end storage, so that the failure of a single component will not cause data loss.
Reliable and secure storage − Each EBS volume is automatically replicated within its
Availability Zone to protect against component failure.
Secure − Amazon’s flexible access control policies allow you to specify who can access
which EBS volumes. Access control plus encryption offers a strong defense-in-depth
security strategy for data.
Higher performance − Amazon EBS uses SSD technology to deliver consistent I/O
performance for applications.
Easy data backup − Data can be backed up by taking point-in-time snapshots of
Amazon EBS volumes.
2. From the navigation bar, select the Region in which you would like to create your volume.
This choice is important because some Amazon EC2 resources can be shared between
Regions, while others can't. For more information, see Resource Locations.
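Volume creation can also be scripted rather than done through the console. The sketch below assumes the AWS SDK for JavaScript v3 EC2 client; the Availability Zone, size, and volume type are placeholder values.
```typescript
// Creating an EBS volume programmatically with the AWS SDK for JavaScript v3.
// Availability Zone, size, and volume type below are placeholder values.
import { EC2Client, CreateVolumeCommand } from "@aws-sdk/client-ec2";

async function createVolume(): Promise<void> {
  const ec2 = new EC2Client({ region: "us-east-1" });
  const result = await ec2.send(
    new CreateVolumeCommand({
      AvailabilityZone: "us-east-1a", // must match the instance's Availability Zone
      Size: 100,                      // size in GiB
      VolumeType: "gp3",              // SSD-backed, suited to transactional workloads
      Encrypted: true,
    })
  );
  console.log("Created volume:", result.VolumeId);
}

createVolume().catch(console.error);
```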
Q.8 What is the AWS load balancing service? Explain the Elastic Load Balancer
and its types with its advantages.
Answer:
A load balancer distributes workloads across multiple compute resources, such as virtual
servers. Using a load balancer increases the availability and fault tolerance of your applications.
You can add and remove compute resources from your load balancer as your needs change,
without disrupting the overall flow of requests to your applications.
You can configure health checks, which monitor the health of the compute resources, so
that the load balancer sends requests only to the healthy ones. You can also offload the work of
encryption and decryption to your load balancer so that your compute resources can focus on
their main work.
Types:
A Network Load Balancer makes routing decisions at the transport layer (TCP/SSL). It
can handle millions of requests per second. After the load balancer receives a connection, it
selects a target from the target group for the default rule using a flow hash routing algorithm. It
attempts to open a TCP connection to the selected target on the port specified in the listener
configuration. It forwards the request without modifying the headers. Network Load Balancers
support dynamic host port mapping. For example, if your task's container definition specifies
port 80 for an NGINX container port, and port 0 for the host port, then the host port is
dynamically chosen from the ephemeral port range of the container instance (such as 32768 to
61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the NGINX
container is registered with the Network Load Balancer as an instance ID and port combination,
and traffic is distributed to the instance ID and port corresponding to that container. This
dynamic mapping allows you to have multiple tasks from a single service on the same container
instance. For more information, see the User Guide for Network Load Balancers.
A Classic Load Balancer makes routing decisions at either the transport layer (TCP/SSL)
or the application layer (HTTP/HTTPS). Classic Load Balancers currently require a fixed
relationship between the load balancer port and the container instance port. For example, it is
possible to map the load balancer port 80 to the container instance port 3030 and the load
balancer port 4040 to the container instance port 4040. However, it is not possible to map the
load balancer port 80 to port 3030 on one container instance and port 4040 on another container
instance. This static mapping requires that your cluster has at least as many container instances
as the desired count of a single service that uses a Classic Load Balancer. For more information,
see the User Guide for Classic Load Balancers.
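As a hedged sketch of creating a Network Load Balancer programmatically, the following assumes the AWS SDK for JavaScript v3 client for Elastic Load Balancing v2; the load balancer name and subnet IDs are placeholders.
```typescript
// Sketch: creating a Network Load Balancer with the AWS SDK for JavaScript v3.
// The name and subnet IDs are placeholders.
import {
  ElasticLoadBalancingV2Client,
  CreateLoadBalancerCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

async function createNlb(): Promise<void> {
  const elb = new ElasticLoadBalancingV2Client({ region: "us-east-1" });
  const result = await elb.send(
    new CreateLoadBalancerCommand({
      Name: "example-nlb",                             // placeholder name
      Type: "network",                                 // routes at the transport layer (TCP)
      Subnets: ["subnet-0abc1234", "subnet-0def5678"], // placeholder subnet IDs
    })
  );
  console.log(result.LoadBalancers?.[0]?.DNSName);
}

createNlb().catch(console.error);
```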
Answer:
A cloudlet is a mobility-enhanced small-scale cloud datacenter that is located at the edge
of the Internet. The main purpose of the cloudlet is supporting resource-intensive and interactive
mobile applications by providing powerful computing resources to mobile devices with lower
latency. It is a new architectural element that extends today’s cloud computing infrastructure. It
represents the middle tier of a 3-tier hierarchy: mobile device - cloudlet - cloud. A cloudlet can
be viewed as a data center in a box whose goal is to bring the cloud closer. The cloudlet term
was first coined by Mahadev Satyanarayanan, Victor Bahl, Ramón Cáceres, and Nigel Davies, and a
prototype implementation was developed by Carnegie Mellon University as a research project. The
concept of the cloudlet is also known as follow me cloud and mobile micro-cloud.
Cloudlets aim to support mobile applications that are both resource-intensive and
interactive. Augmented reality applications that use head-tracked systems require end-to-end
latencies of less than 16 ms. Cloud games with remote rendering also require low latencies and
high bandwidth. Wearable cognitive assistance systems combine devices such as Google
Glass with cloud-based processing to guide users through complex tasks. This futuristic genre of
applications is characterized as “astonishingly transformative” by the report of the 2013 NSF
Workshop on Future Directions in Wireless Networking. These applications use cloud resources
in the critical path of real-time user interaction. Consequently, they cannot tolerate end-to-end
operation latencies of more than a few tens of milliseconds. Apple Siri and Google Now, which
perform compute-intensive speech recognition in the cloud, are further examples in this
emerging space.
HPC tasks are characterized as needing large amounts of computing power for short periods of
time, whereas HTC tasks also require large amounts of computing, but for much longer times
(months and years, rather than hours and days). HPC environments are often measured in terms
of FLOPS.
The HTC community, however, is not concerned about operations per second, but rather
operations per month or per year. Therefore, the HTC field is more interested in how many jobs
can be completed over a long period of time instead of how fast.
MTC aims to bridge the gap between HTC and HPC. MTC is reminiscent of HTC, but it
differs in the emphasis of using many computing resources over short periods of time to
accomplish many computational tasks (i.e. including both dependent and independent tasks),
where the primary metrics are measured in seconds (e.g. FLOPS, tasks/s, MB/s I/O rates), as
opposed to operations (e.g. jobs) per month. MTC denotes high-performance computations
comprising multiple distinct activities, coupled via file system operations.
Consumer applications:
A growing portion of IoT devices are created for consumer use, including connected
vehicles, home automation, wearable technology, connected health, and appliances with remote
monitoring capabilities.
Smart home:
IoT devices are a part of the larger concept of home automation, which can include
lighting, heating and air conditioning, media and security systems. Long-term benefits could
include energy savings by automatically ensuring lights and electronics are turned off. A smart
home or automated home could be based on a platform or hubs that control smart devices and
appliances. For instance, using Apple's HomeKit, manufacturers can have their home products
and accessories controlled by an application in iOS devices such as the iPhone and the Apple
Watch. This could be a dedicated app or iOS native applications such as Siri. This can be
demonstrated in the case of Lenovo's Smart Home Essentials, which is a line of smart home
devices that are controlled through Apple's Home app or Siri without the need for a Wi-Fi
bridge. There are also dedicated smart home hubs that are offered as standalone platforms to
connect different smart home products and these include the Amazon Echo, Google Home,
Apple's HomePod, and Samsung's SmartThings Hub. In addition to the commercial systems,
there are many non-proprietary, open source ecosystems; including Home Assistant, OpenHAB
and Domoticz.
Elder care:
One key application of a smart home is to provide assistance for those with disabilities
and elderly individuals. These home systems use assistive technology to accommodate an
owner's specific disabilities. Voice control can assist users with sight and mobility limitations
while alert systems can be connected directly to cochlear implants worn by hearing-impaired
users. They can also be equipped with additional safety features. These features can include
sensors that monitor for medical emergencies such as falls or seizures. Smart home technology
applied in this way can provide users with more freedom and a higher quality of life. The term
"Enterprise IoT" refers to devices used in business and corporate settings. By 2019, it is
estimated that the EIoT will account for 9.1 billion devices.
Answer:
Cloud computing, as a trending model for the information technology, provides unique
features and opportunities including scalability, broad accessibility and dynamic provision of
computing resources with limited capital investments. This paper presents the criteria, assets, and
models for energy-aware cloud computing practices and envisions a market structure that
addresses the impact of the quality and price of energy supply on the quality and cost of cloud
computing services. Energy management practices for cloud providers at the macro and micro
levels to improve the cost and reliability of cloud services are presented.
Cloud providers handle
various technical challenges in smart grids including optimizing energy management costs
through monitoring and controlling the power grid assets, providing software applications on
both producer and consumer sides to control the power flow and implementing various pricing
strategies according to the energy consumption, decreasing carbon emission by dispatching
renewable energy resources effectively, and providing unlimited storage capacity for storing
customers’ data. Adopting cloud services by power grid operators would result in more efficient
and reliable delivery of electricity. However, the reliability and security of the provided grid
services would be heavily dependent on the data and data processing capabilities of the cloud
providers and any failure in the data centers that provide the cloud computing services can lead
to considerable loss in the power grid. The 2003 Northeast blackout was caused by a software flaw in
an alarm system in a control room in Ohio that eventually led to a cascading failure. As a
result, the energy supply was cut off to 45 million people in eight states and 10 million people in
Ontario. Cloud providers implement different pricing schemes for the offered services. The
pricing strategy of each cloud provider is dependent on the strategies adopted by the competitors.
Offering higher prices for the same cloud service would result in losing the customers in the
cloud computing market. The pricing schemes for the cloud providers are divided into three
categories: static, dynamic and market dependent. In static pricing scheme, the customer pays a
fixed price for the cloud services regardless of the volume of the received services. In dynamic
pricing scheme, the prices of cloud services alter dynamically with the service characteristics as
well as customer characteristics and preferences. In market dependent pricing schemes, the price
of services is determined based on the real-time market conditions including bargaining,
auctioning, and demand behavior. Regardless of the choice of pricing scheme, the price of cloud
computing services depends on several factors including the initial cost of the cloud resources,
the quality of offered services including privacy and security, the availability of the resources
and the operation and maintenance costs.
Currently, the cloud computing market is an oligopoly among vendors such as Amazon,
Microsoft, Google, and IBM to provide similar cloud services to the customers. These cloud
providers implement inflexible pricing schemes based on the duration of the service and usage
threshold. The lack of standard application programming interfaces (APIs) for the provided
cloud services restricts the customers’ choice. An API is a set of clearly defined methods, protocols,
and tools devised for communication between various software components including routines,
data structures, object classes, variables or remote calls. Adopting a unified interface would lead
to forming a market structure in which the cloud services are treated as commodities. SHARP,
Tycoon, Bellagio, and Shirako are some examples of research projects that propose a unified
market structure for the cloud services.
4. Macro-level Energy Management Solutions for Cloud Service Providers
Cloud service providers, such as Google, Amazon, and Microsoft, own and
operate geographically dispersed data centers that ensure acceptable quality of service for the
end-users across the globe. The ability to reroute applications between multiple data centers is
one of the important factors to provide secure, fast, and more available services to the end users.
Geo-distributed cloud environment that runs over distributed data centers enables the cloud
providers to foster power management techniques with heterogeneous objectives.
Answer:
Types:
1. Hardware virtualization
2. Desktop virtualization
3. Network virtualization
4. Storage virtualization
5. Data Virtualization
6. Memory Virtualization
7. Software Virtualization
Answer:
Ajax:
AJAX stands for Asynchronous JavaScript and XML. AJAX is a new technique for creating
better, faster, and more interactive web applications with the help of XML, HTML, CSS, and
JavaScript.
Ajax uses XHTML for content, CSS for presentation, along with Document Object Model and
JavaScript for dynamic content display.
Conventional web applications transmit information to and from the server using synchronous
requests. It means you fill out a form, hit submit, and get directed to a new page with new
information from the server.
With AJAX, when you hit submit, JavaScript will make a request to the server, interpret the
results, and update the current screen. In the purest sense, the user would never know that
anything was even transmitted to the server.
XML is commonly used as the format for receiving server data, although any format, including
plain text, can be used.
AJAX is a web browser technology independent of web server software.
A user can continue to use the application while the client program requests information from
the server in the background.
Intuitive and natural user interaction. Clicking is not required, mouse movement is a sufficient
event trigger.
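As a sketch of the submit flow described above, the following intercepts a form submission and sends it in the background with XMLHttpRequest, then updates only part of the current screen; the form id, endpoint, and status element are hypothetical.
```typescript
// Submitting a form in the background with XMLHttpRequest instead of a
// full-page synchronous submit. The form id and endpoint are hypothetical.
const form = document.getElementById("contact-form") as HTMLFormElement | null;

if (form) {
  form.addEventListener("submit", (event) => {
    event.preventDefault(); // stop the normal page-replacing submit

    const xhr = new XMLHttpRequest();
    xhr.open("POST", "/contact"); // hypothetical endpoint
    xhr.onload = () => {
      // Update only part of the current screen with the result.
      const status = document.getElementById("status");
      if (status) {
        status.textContent = xhr.status === 200 ? "Sent!" : "Something went wrong.";
      }
    };
    xhr.send(new FormData(form));
  });
}
```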
XML:
XML stands for Extensible Markup Language. It is a text-based markup language derived from
Standard Generalized Markup Language (SGML).
XML tags identify the data and are used to store and organize the data, rather than specifying
how to display it like HTML tags, which are used to display the data. XML is not going to
replace HTML in the near future, but it introduces new possibilities by adopting many
successful features of HTML.
There are three important characteristics of XML that make it useful in a variety of systems and
solutions −
XML is extensible − XML allows you to create your own self-descriptive tags, or language,
that suits your application.
XML carries the data, does not present it − XML allows you to store the data irrespective of
how it will be presented.
XML is a public standard − XML was developed by an organization called the World Wide
Web Consortium (W3C) and is available as an open standard.
JSON:
JSON or JavaScript Object Notation is a lightweight text-based open standard designed for
human-readable data interchange. Conventions used by JSON are familiar to programmers of
languages such as C, C++, Java, Python, and Perl.
Answer:
The Open Cloud Consortium (OCC) is a newly formed group of universities that is both trying to
improve the performance of storage and computing clouds spread across geographically
disparate data centers and promote open frameworks that will let clouds operated by different
entities work seamlessly together. Everyone’s talking about building a cloud these days. But if
the IT world is filled with computing clouds, will each one be treated like a separate island, or
will open standards allow them all to interoperate with each other? That’s one of the questions
being examined by the OCC. Cloud is certainly one of the most used buzzwords in IT today, and
marketing hype from vendors can at times obscure the real technical issues being addressed by
researchers such as those in the Open Cloud Consortium.
Answer:
In I/O virtualization, a virtual device is substituted for its physical equivalent, such as a network
interface card (NIC) or host bus adapter (HBA). Aside from simplifying server configurations,
this setup has cost implications by reducing the electric power drawn by these devices.
Virtualization and blade server technologies cram dense computing power into a small form
factor. With the advent of virtualization, data centers started using commodity hardware to
support functions such as burst computing, load balancing and multi-tenant networked storage.
I/O virtualization is based on a one-to-many approach. The path between a physical server and
nearby peripherals is virtualized, allowing a single IT resource to be shared among virtual
machines (VMs). The virtualized devices interoperate with commonly used applications,
operating systems and hypervisors.
This technique can be applied to any server component, including disk-based RAID controllers,
Ethernet NICs, Fibre Channel HBAs, graphics cards and internally mounted solid-state drives
(SSDs). For example, a single physical NIC is presented as a series of multiple virtual NICs.
Q.17 Explain Amazon EBS Snapshot. Give steps to create EBS Snapshot.
Answer:
An EBS snapshot is a point-in-time copy of your Amazon EBS volume, which is lazily copied to
Amazon Simple Storage Service (Amazon S3). EBS snapshots are incremental copies of data.
This means that only unique blocks of EBS volume data that have changed since the last EBS
snapshot are stored in the next EBS snapshot. This is how incremental copies of data are created
for Amazon EBS snapshots.
Each AWS snapshot contains all the information needed to restore your data starting from the
moment of creating the EBS snapshot. EBS snapshots are chained together. By using them, you
will be able to properly restore your EBS volumes, when needed.
Deletion of an EBS snapshot is a process of removing only the data related to that specific
snapshot. Therefore, you can safely delete any old snapshots with no harm. If you delete an old
snapshot, AWS will consolidate the snapshot data: all valid data will be moved forward to the
next snapshot and all invalid data will be discarded.
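Snapshot creation can also be scripted. The sketch below assumes the AWS SDK for JavaScript v3 EC2 client, with a placeholder volume ID and description.
```typescript
// Taking a point-in-time EBS snapshot with the AWS SDK for JavaScript v3.
// The volume ID and description are placeholders.
import { EC2Client, CreateSnapshotCommand } from "@aws-sdk/client-ec2";

async function takeSnapshot(): Promise<void> {
  const ec2 = new EC2Client({ region: "us-east-1" });
  const snapshot = await ec2.send(
    new CreateSnapshotCommand({
      VolumeId: "vol-0123456789abcdef0", // placeholder volume ID
      Description: "Nightly backup",     // placeholder description
    })
  );
  // Only blocks changed since the previous snapshot are stored (incremental).
  console.log("Snapshot started:", snapshot.SnapshotId, snapshot.State);
}

takeSnapshot().catch(console.error);
```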
RFID reader: Depending on the frequency that is used and its performance, an RFID reader
sends radio waves over a range of between one centimeter and 30 meters or more. If a transponder enters this
electromagnetic region, it detects the activating signal from the reader. The RFID reader decodes
the data stored in the integrated circuit of the transponder (silicon chip), and communicates them,
depending on the application, to a host system.
RFID antenna: An RFID antenna consists of a coil with one or more windings and a matching
network. It radiates the electromagnetic waves generated by the reader, and receives the RF
signals from the transponder. An RFID system can be designed so that the electromagnetic field
is constantly generated, or activated by a sensor.
RFID transponder (or tag): The heart of an RFID system is a data carrier, referred to as the
transponder, or simply the Tag. The designs and modes of function of the transponders also
differ depending on the frequency range, just as with the antennas.
1. Plan
The initial stage of the supply chain process is the planning stage. We need to develop a plan or
strategy in order to address how the products and services will satisfy the demands and
necessities of the customers. In this stage, the planning should mainly focus on designing a
strategy that yields maximum profit.
2. Develop (Source)
After planning, the next step involves developing or sourcing. In this stage, we mainly
concentrate on building a strong relationship with suppliers of the raw materials required for
production. This involves not only identifying dependable suppliers but also determining
different planning methods for shipping, delivery, and payment of the product.
3. Make
The third step in the supply chain management process is the manufacturing or making of
products that were demanded by the customer. In this stage, the products are designed,
produced, tested, packaged, and synchronized for delivery.
4. Deliver
The fourth stage is the delivery stage. Here the products are delivered to the customer at the
destined location by the supplier. This stage is basically the logistics phase, where customer
orders are accepted and delivery of the goods is planned. The delivery stage is often referred to as
logistics, where firms collaborate for the receipt of orders from customers, establish a network
of warehouses, pick carriers to deliver products to customers and set up an invoicing system to
receive payments.
5. Return
The last and final stage of supply chain management is referred to as the return. In this stage,
defective or damaged goods are returned to the supplier by the customer. Here, the companies
need to deal with customer queries and respond to their complaints etc.
Answer:
By virtualizing STB functionality, CloudTV enables Web-like guides and full online video
experiences on existing and next-generation devices, including QAM STBs and “newer IP-
capable devices, such as Charter’s new Worldbox,” Internet-connected TVs and specialized
streaming boxes.
Multichannel News notes that “instead of requiring operators to write a different version of the
UI for each device, operating system and rendering engine, ActiveVideo’s approach looks to
avoid that operational nightmare by requiring that it only be written once, in HTML5, and
managed from the cloud.” ScreenPlays adds that the platform enables delivery of “protected
OTT streams as an integral part of channel offerings” without replacing existing customer
devices. The analyst firm nScreenMedia cites such advantages as: Compatibility with the widest
range of devices; the ability to update an app once and see it reflected on every device;
scalability for large and small service providers; and the ability to use the most advanced UI
techniques available to ensure high “coolness” factors. ACG Research notes that for cable
operators, CloudTV can reduce total cost of ownership by up to 83% when compared to a set-top
box replacement program.
Answer:
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which
does the heavy lifting of building, running, and distributing your Docker containers. The Docker
client and daemon can run on the same system, or you can connect a Docker client to a remote
Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX
sockets or a network interface.
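As an illustration of this client-daemon conversation, the sketch below queries the Docker Engine REST API over the default UNIX socket using Node's http module; the socket path and endpoint shown are the common defaults and may differ on a given installation.
```typescript
// Talking to the Docker daemon the way the docker CLI does: over its REST API
// on the default UNIX socket. Socket path and endpoint are common defaults.
import * as http from "node:http";

const req = http.request(
  {
    socketPath: "/var/run/docker.sock", // default daemon socket
    path: "/containers/json",           // list running containers
    method: "GET",
  },
  (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      const containers = JSON.parse(body) as Array<{ Id: string; Image: string }>;
      containers.forEach((c) => console.log(c.Id.slice(0, 12), c.Image));
    });
  }
);

req.on("error", console.error);
req.end();
```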
The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such
as images, containers, networks, and volumes. A daemon can also communicate with other
daemons to manage Docker services.
The Docker client (docker) is the primary way that many Docker users interact with Docker.
When you use commands such as docker run, the client sends these commands to dockerd,
which carries them out. The docker command uses the Docker API. The Docker client can
communicate with more than one daemon.
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use,
and Docker is configured to look for images on Docker Hub by default. You can even run your
own private registry. If you use Docker Datacenter (DDC), it includes Docker Trusted Registry
(DTR).
When you use the docker pull or docker run commands, the required images are pulled from
your configured registry. When you use the docker push command, your image is pushed to your
configured registry.
Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes,
plugins, and other objects. This section is a brief overview of some of those objects.
IMAGES
An image is a read-only template with instructions for creating a Docker container. Often, an
image is based on another image, with some additional customization. For example, you may
build an image which is based on the ubuntu image, but installs the Apache web server and your
application, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published
in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining
the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in
the image. When you change the Dockerfile and rebuild the image, only those layers which have
changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when
compared to other virtualization technologies.
CONTAINERS
A container is a runnable instance of an image. You can create, start, stop, move, or delete a
container using the Docker API or CLI. You can connect a container to one or more networks,
attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You
can control how isolated a container’s network, storage, or other underlying subsystems are from
other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when
you create or start it. When a container is removed, any changes to its state that are not stored in
persistent storage disappear.