CAT2

The document discusses the architecture of the Android operating system. It describes the key components including the Linux kernel, hardware abstraction layer, Android runtime, native libraries, Java API framework, and system apps. It provides details on what each component is used for and how they interact with each other.


1. ANDROID ARCHITECTURE:

Linux kernel
The foundation of the Android platform is the Linux kernel. For
example, the Android Runtime (ART) relies on the Linux kernel
for underlying functionalities such as threading and low-level
memory management.
Using a Linux kernel lets Android take advantage of key security
features and lets device manufacturers develop hardware
drivers for a well-known kernel.
Hardware abstraction layer (HAL)
The hardware abstraction layer (HAL) provides standard
interfaces that expose device hardware capabilities to the
higher-level Java API framework. The HAL consists of multiple
library modules, each of which implements an interface for a
specific type of hardware component, such as
the camera or Bluetooth module. When a framework API makes
a call to access device hardware, the Android system loads the
library module for that hardware component.
Android runtime
For devices running Android version 5.0 (API level 21) or higher,
each app runs in its own process and with its own instance of
the Android Runtime (ART). ART is written to run multiple
virtual machines on low-memory devices by executing Dalvik
Executable format (DEX) files, a bytecode format designed
specifically for Android that's optimized for a minimal memory
footprint. Build tools, such as d8, compile Java sources into DEX
bytecode, which can run on the Android platform.
Some of the major features of ART include the following:
• Ahead-of-time (AOT) and just-in-time (JIT) compilation
• Optimized garbage collection (GC)
• On Android 9 (API level 28) and higher, conversion of an app
package's DEX files to more compact machine code
• Better debugging support, including a dedicated sampling
profiler, detailed diagnostic exceptions and crash reporting, and
the ability to set watchpoints to monitor specific fields
Prior to Android version 5.0 (API level 21), Dalvik was the
Android runtime. If your app runs well on ART, then it can work
on Dalvik as well, but the reverse might not be true.
Android also includes a set of core runtime libraries that
provide most of the functionality of the Java programming
language, including some Java 8 language features, that the
Java API framework uses.
Native C/C++ libraries
Many core Android system components and services, such as
ART and HAL, are built from native code that requires native
libraries written in C and C++. The Android platform provides
Java framework APIs to expose the functionality of some of
these native libraries to apps. For example, you can
access OpenGL ES through the Android framework’s Java
OpenGL API to add support for drawing and manipulating 2D
and 3D graphics in your app.
If you are developing an app that requires C or C++ code, you
can use the Android NDK to access some of these native
platform libraries directly from your native code.
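As a concrete illustration of the framework's Java OpenGL API mentioned above, the following minimal sketch sets up a GLSurfaceView with an OpenGL ES 2.0 renderer that simply clears the screen each frame; the activity name and the clear colour are arbitrary choices made for this example.

```java
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        view.setEGLContextClientVersion(2);           // request an OpenGL ES 2.0 context
        view.setRenderer(new GLSurfaceView.Renderer() {
            @Override
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                GLES20.glClearColor(0f, 0f, 0f, 1f);  // black clear colour
            }
            @Override
            public void onSurfaceChanged(GL10 gl, int width, int height) {
                GLES20.glViewport(0, 0, width, height);
            }
            @Override
            public void onDrawFrame(GL10 gl) {
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);  // clear the frame on every draw
            }
        });
        setContentView(view);
    }
}
```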
Java API framework
The entire feature-set of the Android OS is available to you
through APIs written in the Java language. These APIs form the
building blocks you need to create Android apps by simplifying
the reuse of core, modular system components and services,
which include the following:
• A rich and extensible view system you can use to build an app’s
UI, including lists, grids, text boxes, buttons, and even an
embeddable web browser
• A resource manager, providing access to non-code resources
such as localized strings, graphics, and layout files
• A notification manager that enables all apps to display custom
alerts in the status bar
• An activity manager that manages the lifecycle of apps and
provides a common navigation back stack
• Content providers that enable apps to access data from other
apps, such as the Contacts app, or to share their own data
Developers have full access to the same framework APIs that
Android system apps use.
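As an illustration of the content-provider building block, the sketch below reads a display name from the Contacts provider through a ContentResolver query; the helper class name is made up for this example, and it assumes the READ_CONTACTS permission has already been granted.

```java
import android.content.Context;
import android.database.Cursor;
import android.provider.ContactsContract;

public final class ContactNames {
    // Returns the display name of the first contact, or null if there are none.
    public static String firstContactName(Context context) {
        Cursor cursor = context.getContentResolver().query(
                ContactsContract.Contacts.CONTENT_URI,
                new String[] { ContactsContract.Contacts.DISPLAY_NAME },
                null, null, null);
        if (cursor == null) {
            return null;
        }
        try {
            return cursor.moveToFirst() ? cursor.getString(0) : null;
        } finally {
            cursor.close();
        }
    }
}
```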
System apps
Android comes with a set of core apps for email, SMS
messaging, calendars, internet browsing, contacts, and more.
Apps included with the platform have no special status among
the apps the user chooses to install. So, a third-party app can
become the user's default web browser, SMS messenger, or
even the default keyboard. Some exceptions apply, such as the
system's Settings app.
The system apps function both as apps for users and to provide
key capabilities that developers can access from their own app.
For example, if you want your app to deliver SMS messages,
you don't need to build that functionality yourself. You can
instead invoke whichever SMS app is already installed to deliver
a message to the recipient you specify.
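For example, handing a message off to whichever SMS app is installed is typically done with an implicit intent, roughly as in the sketch below; the helper class name is made up, and the "sms_body" extra is a widely honoured convention rather than a guaranteed API.

```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public final class SmsHelper {
    // Opens the user's default SMS app with the recipient and body pre-filled.
    public static void composeSms(Context context, String phoneNumber, String body) {
        Intent intent = new Intent(Intent.ACTION_SENDTO);
        intent.setData(Uri.parse("smsto:" + Uri.encode(phoneNumber))); // routed to the default SMS app
        intent.putExtra("sms_body", body);                             // conventional pre-filled message text
        context.startActivity(intent);  // assumes an Activity context; otherwise add FLAG_ACTIVITY_NEW_TASK
    }
}
```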

https://www.geeksforgeeks.org/android-architecture/
2. Algorithms for implementing DSM:
CENTRAL SERVER ALGORITHM:
A single central server holds all shared data; every read and write request is sent to it, which serializes access but makes the server a potential bottleneck and single point of failure.
MIGRATION ALGORITHM:
The block containing the requested data is migrated to the accessing node, so subsequent accesses are local; only one node holds a block at a time, which can cause thrashing when two nodes alternate access to the same block.
READ REPLICATION ALGORITHM:
Blocks may be replicated at several nodes for reading, but a write first invalidates or updates all other copies, giving multiple readers but only a single writer at a time.
FULL REPLICATION ALGORITHM:
Blocks may be replicated at several nodes even for writing; a global ordering mechanism (typically a sequencer) keeps the replicas consistent, giving multiple readers and multiple writers.
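A minimal, single-process sketch of the central-server idea is shown below: one server object owns all shared data and serializes every read and write. In a real DSM the client calls would be remote requests to the server node; all class and method names here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

final class CentralServer {
    private final Map<String, byte[]> store = new HashMap<>();

    // Every access goes through the server, which serializes reads and writes.
    synchronized byte[] read(String address) {
        return store.get(address);
    }

    synchronized void write(String address, byte[] data) {
        store.put(address, data);
    }
}

final class DsmClient {
    private final CentralServer server;   // stands in for a network stub to the server node

    DsmClient(CentralServer server) { this.server = server; }

    byte[] read(String address)             { return server.read(address); }
    void write(String address, byte[] data) { server.write(address, data); }
}
```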
3. HADOOP DISTRIBUTED FILE SYSTEM -pdf
HDFS, a fundamental component of Hadoop, serves as a distributed file
system designed to scale seamlessly, tolerate faults, and ensure high
availability. The system’s structure revolves around master and slave nodes,
specifically the NameNode and DataNode.
i) NameNode and DataNode:

The NameNode serves as the master in a Hadoop cluster, overseeing the DataNodes (slaves). Its primary role is to manage metadata, such as transaction logs tracking user activity. The NameNode instructs DataNodes on operations like creation, deletion, and replication.
DataNodes, acting as slaves, are responsible for storing data in the Hadoop cluster. It's recommended to have DataNodes with high storage capacity to accommodate a large number of file blocks.
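The sketch below shows how a client typically interacts with HDFS through the Hadoop FileSystem API: the NameNode is consulted for metadata and block placement, while the actual bytes flow to and from DataNodes. The NameNode address and file path are placeholders for this example.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode-host:9000");   // placeholder NameNode URI

        try (FileSystem fs = FileSystem.get(conf)) {
            Path path = new Path("/user/demo/hello.txt");         // placeholder path

            // The client asks the NameNode where to place blocks; the data goes to DataNodes.
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
            }

            // Reads are served by the DataNodes holding the block replicas.
            try (FSDataInputStream in = fs.open(path)) {
                byte[] buf = new byte[32];
                int n = in.read(buf);
                System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
        }
    }
}
```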

4. LAMPORT-SHOSTAK-PEASE ALGORITHM:

The Lamport-Shostak-Pease (Oral Messages) algorithm solves the Byzantine agreement problem: a commander broadcasts its value and, over m + 1 rounds, each processor recursively relays the values it receives and finally takes the majority of the values it has heard. It tolerates up to m faulty (Byzantine) processors provided the total number of processors n satisfies n ≥ 3m + 1.
5. LOAD DISTRIBUTING ALGORITHMS -ppt

2 marks

• Transparent local access — data is accessed as if it were local to the user, for high performance.
• Location independence — No need for users to know where file data
physically resides.
• Scale-out capabilities — The ability to scale out massively by adding
more machines. DFS systems can scale to exceedingly large clusters
with thousands of servers.
• Fault tolerance — A need for your system to continue operating
properly even if some of its servers or disks fail. A fault-tolerant DFS is
able to handle such failures by spreading data across multiple
machines.

DISTRIBUTED FILE SYSTEM: Types


• Windows Distributed File System
• Network File System (NFS)
• Server Message Block (SMB)
• Google File System (GFS)
• Lustre
• Hadoop Distributed File System (HDFS)
• GlusterFS
• Ceph
• MapR File System

Characteristics of mobile applications

Hardware support
Open Handset Alliance
The Open Handset Alliance (OHA) is a consortium of technology and mobile companies
formed in November 2007. The alliance was established with the goal of developing open
standards for mobile devices, leading to the creation and subsequent release of Android, a
free, open-source mobile operating system.
Load distributing algorithm

The goal of distributed scheduling is to distribute a system's load across available resources in a way that optimizes overall system performance while maximizing resource utilization.
The primary idea is to shift workload from heavily loaded machines to idle or lightly loaded machines.

Components of a Load Distributing Algorithm:

A load distributing algorithm has four components:
• Transfer Policy –
Determines whether or not a node is in a suitable state to take part in a task transfer.
• Process Selection Policy –
Determines which task is to be transferred.
• Site Location Policy –
Determines the node to which a task should be transferred once it has been selected for transfer.
• Information Policy –
In charge of initiating the gathering of system state data.
The transfer policy needs information about the local node's state to make its decisions, while the location policy needs information about the states of remote nodes to make its decision.
1. Transfer Policy –
Many transfer policies are threshold policies, where the threshold is measured in units of load. The transfer policy classifies a node as a sender when a new task originates at that node and the node's load exceeds a threshold T. If the node's load falls below T, the transfer policy decides that the node can act as a receiver for remote tasks.
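A minimal sketch of such a threshold transfer policy is given below; the threshold value and the notion of "load" (for example, CPU queue length) are assumptions made for illustration.

```java
final class TransferPolicy {
    private final double threshold;   // the threshold T from the text

    TransferPolicy(double threshold) { this.threshold = threshold; }

    // A node whose load exceeds T when a new task arrives acts as a sender.
    boolean isSender(double localLoad)   { return localLoad > threshold; }

    // A node whose load is below T can act as a receiver of remote tasks.
    boolean isReceiver(double localLoad) { return localLoad < threshold; }
}
```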
2. Selection Policy –
Once the transfer policy decides that a node is a sender, the selection policy decides which of the node's tasks should be transferred. If the selection policy cannot find a suitable task in the node, the transfer procedure is halted until the transfer policy signals again that the node is a sender.
• The most straightforward method is to choose a recently generated task that has caused the node to become a sender by pushing its load above the threshold.
• Alternatively, a task is transferred only if its response time will improve as a result of the transfer.
Other criteria to consider in a task selection approach are: first, the overhead imposed by the transfer should be as low as possible, and second, the number of location-dependent calls made by the selected task should be as low as possible.
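The sketch below combines these criteria in a hypothetical selection policy that prefers the most recently created task whose response time is expected to improve once transfer overhead is taken into account; the Task interface and its estimate methods are invented for illustration.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

final class SelectionPolicy {
    // Picks the most recently created task whose estimated remote response time
    // (including transfer overhead) beats its estimated local response time.
    Optional<Task> select(List<Task> tasks) {
        return tasks.stream()
                .filter(t -> t.estimatedRemoteResponse() + t.transferOverhead()
                             < t.estimatedLocalResponse())
                .max(Comparator.comparingLong(Task::creationTime));
    }

    interface Task {
        long creationTime();
        double estimatedLocalResponse();
        double estimatedRemoteResponse();
        double transferOverhead();
    }
}
```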
3. Location Policy –
The location policy's job is to find suitable nodes with which load can be shared. Once the transfer policy has determined that a task should be transferred, the location policy must decide where to send it, based on the data collected by the information policy. Polling is a widely used approach for locating a suitable node: a node polls other nodes, sequentially or in parallel, to find out whether they are suitable for a transfer and/or willing to accept one. Nodes to poll may be chosen at random or more selectively, based on information gathered during previous polls, and the number of nodes polled may vary.
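A small sketch of a polling-based location policy is shown below; the Node interface and the poll limit are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;

final class LocationPolicy {
    private final int pollLimit;   // how many sites to probe before giving up

    LocationPolicy(int pollLimit) { this.pollLimit = pollLimit; }

    // Sequentially polls randomly ordered nodes and returns the first one that
    // reports it is willing to accept a transferred task.
    Optional<Node> findReceiver(List<Node> peers) {
        List<Node> candidates = new ArrayList<>(peers);
        Collections.shuffle(candidates);
        return candidates.stream()
                .limit(pollLimit)
                .filter(Node::acceptsTransfer)   // the poll itself
                .findFirst();
    }

    interface Node {
        boolean acceptsTransfer();
    }
}
```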
4. Information Policy –
The information policy is in charge of deciding when information about the states of the other nodes in the system should be collected. Most information policies fall into one of three categories:

• Demand-driven –
A node collects the state of other nodes only when it wants to take part in either sending or receiving tasks, using sender-initiated or receiver-initiated polling. Because their actions depend on the state of the system, demand-driven policies are inherently adaptive and dynamic. The policy can be sender-initiated (senders look for receivers to which they can transfer load), receiver-initiated (receivers solicit load from senders), or symmetrically initiated (a combination of both).
• Periodic –
Nodes exchange state information at regular intervals, so each site builds up a history of global resource utilization over time to inform its location decisions. At high system loads the benefits of load distribution are negligible, and the periodic exchange of information may then be an unnecessary overhead.
• State-change-driven –
A node sends out state information when its own state changes by a specified amount, as sketched below. This data may be forwarded to a centralized load-scheduling point or shared with peers. Unlike a demand-driven policy, it does not collect information about other nodes, and it does not adapt its activity to the overall system state; for example, if the system is already overloaded, continuing to exchange state information only adds to the load.
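The sketch below illustrates the state-change-driven variant: a node pushes its state to its peers (or to a central scheduler) only when its load has moved by more than some delta since the last report. The Peer interface and the delta are assumptions made for illustration.

```java
import java.util.List;

final class StateChangeDrivenPolicy {
    private final double delta;             // minimum change that triggers an update
    private final List<Peer> peers;
    private double lastReportedLoad;

    StateChangeDrivenPolicy(double delta, List<Peer> peers) {
        this.delta = delta;
        this.peers = peers;
    }

    // Called whenever the local load is re-sampled.
    void onLoadSample(double currentLoad) {
        if (Math.abs(currentLoad - lastReportedLoad) >= delta) {
            for (Peer p : peers) {
                p.receiveState(currentLoad);  // could equally be sent to a central scheduling point
            }
            lastReportedLoad = currentLoad;
        }
    }

    interface Peer {
        void receiveState(double load);
    }
}
```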
