Secure File Storage in Cloud Computing
This paper proposes a task scheduling algorithm based on particle swarm optimization (PSO)
that schedules tasks efficiently, shortens task completion time, and improves resource
utilization in cloud computing. First, each particle's position is encoded as natural numbers,
and the population is initialized randomly in the solution space. Then, particles are repaired
to reduce the probability of leaving the solution space, and particle velocity is limited after
each iteration. Furthermore, an improved PSO algorithm based on a chaos perturbation
strategy is proposed. Experimental results on the CloudSim simulation platform show that
the improved PSO converges faster, avoids premature convergence, and escapes local
optima.
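The natural-number encoding and the repair/velocity-limiting steps described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the task-to-VM assignment encoding, the makespan fitness, and the PSO coefficients are all assumptions.

```python
import random

# Illustrative PSO sketch for task scheduling: each particle's position is a
# natural-number vector, position[i] = index of the VM assigned to task i.
def makespan(position, task_len, vm_mips):
    load = [0.0] * len(vm_mips)
    for task, vm in enumerate(position):
        load[vm] += task_len[task] / vm_mips[vm]
    return max(load)

def repair(position, n_vms):
    # Clamp any component that left the solution space back into [0, n_vms-1].
    return [min(max(int(round(x)), 0), n_vms - 1) for x in position]

def pso_schedule(task_len, vm_mips, n_particles=20, iters=100, v_max=2.0):
    n, m = len(task_len), len(vm_mips)
    pos = [[random.randrange(m) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: makespan(p, task_len, vm_mips))[:]
    for _ in range(iters):
        for k in range(n_particles):
            for i in range(n):
                r1, r2 = random.random(), random.random()
                vel[k][i] = (0.7 * vel[k][i]
                             + 1.5 * r1 * (pbest[k][i] - pos[k][i])
                             + 1.5 * r2 * (gbest[i] - pos[k][i]))
                # Velocity is limited after each iteration, as in the abstract.
                vel[k][i] = max(-v_max, min(v_max, vel[k][i]))
            pos[k] = repair([pos[k][i] + vel[k][i] for i in range(n)], m)
            if makespan(pos[k], task_len, vm_mips) < makespan(pbest[k], task_len, vm_mips):
                pbest[k] = pos[k][:]
        gbest = min(pbest + [gbest], key=lambda p: makespan(p, task_len, vm_mips))[:]
    return gbest
```

The repair step keeps the discrete encoding valid after the continuous velocity update, which is the usual workaround when PSO is applied to assignment problems.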
4. A Study of Task Scheduling Based on Differential Evolution Algorithm in Cloud
Computing
In this paper, we put forward a task scheduling algorithm for cloud computing with the goals
of minimizing completion time, maximizing load balancing, and minimizing energy
consumption, using an improved differential evolution algorithm. To improve global search
ability in the earlier stage and local search ability in the later stage, we adopt an adaptive
zooming-factor mutation strategy and an adaptive increasing crossover-factor strategy. At the
same time, we strengthen the selection mechanism to maintain population diversity in the
later stage. In simulation, we verify the algorithm's behavior and compare it with other
representative algorithms. The experimental results show that the improved differential
evolution algorithm can optimize cloud computing task scheduling in terms of task
completion time, load balancing, and energy-efficient optimization.
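The adaptive strategy described above can be sketched as follows, assuming a standard DE/rand/1/bin scheme on a continuous genome decoded to VM indices; the linear schedules for the mutation factor F and crossover rate CR are illustrative, not the paper's formulas.

```python
import random

# Hedged DE sketch: F shrinks over generations (global search early, local
# search late) while CR grows, matching the adaptive idea in the abstract.
def de_schedule(task_len, vm_mips, pop_size=20, gens=100):
    n, m = len(task_len), len(vm_mips)

    def decode(vec):                      # continuous genome -> VM indices
        return [int(x) % m for x in vec]

    def makespan(vec):
        load = [0.0] * m
        for t, vm in enumerate(decode(vec)):
            load[vm] += task_len[t] / vm_mips[vm]
        return max(load)

    pop = [[random.uniform(0, m) for _ in range(n)] for _ in range(pop_size)]
    for g in range(gens):
        F = 0.9 - 0.5 * g / gens          # adaptive zooming (mutation) factor
        CR = 0.1 + 0.8 * g / gens         # adaptive crossover factor
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR
                     else pop[i][k] for k in range(n)]
            if makespan(trial) <= makespan(pop[i]):   # greedy selection
                pop[i] = trial
    return decode(min(pop, key=makespan))
```

A full implementation would also add the strengthened selection mechanism for late-stage diversity; here plain greedy selection stands in for it.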
In cloud computing, task scheduling problems are of paramount importance. They become
more challenging when energy consumption, the traditional makespan criterion, and users'
QoS are taken into account as objectives. This paper considers independent task scheduling
in cloud computing as a bi-objective minimization problem with makespan and energy
consumption as the scheduling criteria. We use Dynamic Voltage Scaling (DVS) to minimize
energy consumption and propose two algorithms. The two algorithms use the unify and
double-fitness methods to define the fitness function and select individuals, and adopt a
genetic algorithm to search for reasonable scheduling schemes in parallel. The simulation
results demonstrate that both algorithms can efficiently find the right compromise between
makespan and energy consumption.
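The "unify" method of folding the two objectives into a single scalar fitness can be sketched as a normalized weighted sum. The weight and the normalization constants here are assumptions, since the abstract does not give the paper's exact fitness definition.

```python
# Hypothetical "unify" fitness: normalize both objectives to [0, 1] and blend
# them with a weight w. Lower is better for both makespan and energy.
def unified_fitness(makespan, energy, ms_max, e_max, w=0.5):
    return w * (makespan / ms_max) + (1 - w) * (energy / e_max)
```

A GA would then minimize this scalar; the companion "double fitness" method instead keeps the two objectives separate during selection.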
In a cloud computing environment, resources are managed dynamically based on the need
and demand for resources for a particular task. Among the many challenges to be addressed,
our concern is load balancing, which aims at optimal usage of resources and reduces the
associated cost under the pay-as-you-go policy. This paper focuses on task scheduling
performed by the cloud service provider using preemption and non-preemption, based on the
requirements, in a virtualized scenario. Various task scheduling algorithms are studied to
present the dynamic allocation of resources under each category and the ways each
scheduling algorithm adapts to handle the load and achieve high-performance computing.
7. An improved task scheduling and load balancing algorithm under the heterogeneous
cloud computing network
In recent decades, with the rapid development and popularization of the Internet and
computer technology, cloud computing has become a highly demanded service due to its
advantages of high computing power, cheap cost of services, scalability, accessibility, and
availability. However, dispatching a variety of tasks to servers makes the system more
complex: it is a challenge because a large number of heterogeneous servers, cores, and
diverse application services need to cooperate with each other in the cloud computing
network. To deal with the huge number of tasks, an appropriate and effective scheduling
algorithm should allocate the tasks to suitable servers within the minimum completion time
and achieve load balancing of the workload. For these reasons, a novel dispatching
algorithm, called the Advanced MaxSufferage (AMS) algorithm, is proposed in this paper to
improve dispatching efficiency in the cloud computing network. The main concept of AMS
is to allocate tasks to server nodes by comparing, for each task, the SV value, the MSV
value, and the average expected completion time of the server nodes. The AMS algorithm
obtains better task completion time than previous works and achieves load balancing in the
cloud computing network.
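The abstract only summarizes the SV/MSV comparison, so the sketch below shows the classic Sufferage baseline that AMS extends: at each step, assign the task that would "suffer" most (largest gap between its best and second-best completion time) to its best server. The expected-time matrix and tie-breaking are assumptions.

```python
# Classic Sufferage heuristic (baseline for MaxSufferage-style algorithms).
# etc[t][s] = expected time to compute task t on server s.
def sufferage_schedule(etc):
    n_tasks, n_servers = len(etc), len(etc[0])
    ready = [0.0] * n_servers                 # current load per server
    assignment = [None] * n_tasks
    unscheduled = set(range(n_tasks))
    while unscheduled:
        best_task, best_server, best_suffer = None, None, -1.0
        for t in unscheduled:
            # Completion times of task t on every server, best first.
            completions = sorted((ready[s] + etc[t][s], s) for s in range(n_servers))
            suffer = (completions[1][0] - completions[0][0]
                      if n_servers > 1 else 0.0)
            if suffer > best_suffer:
                best_task, best_server, best_suffer = t, completions[0][1], suffer
        assignment[best_task] = best_server
        ready[best_server] += etc[best_task][best_server]
        unscheduled.remove(best_task)
    return assignment, max(ready)
```

AMS additionally weighs the average expected completion time across servers when choosing, which this baseline omits.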
Cloud computing is an advanced computing model in which several applications, data, and
countless IT services are provided over the Internet. Task scheduling plays a crucial role in
cloud computing systems. The task scheduling problem can be viewed as finding an optimal
mapping of a set of subtasks of different tasks onto the available set of resources so that the
desired goals for the tasks are achieved. In the proposed research methodology, the
researcher extends this technique using dynamic voltage and frequency scaling (DVFS).
With DVFS, if further migration is not possible, or the tasks running on a machine are about
to complete, further migration would only reduce performance. Instead, the voltage supplied
to underloaded machines is reduced, which optimizes energy consumption to the next level.
In this research, DVFS improved energy consumption without violating the SLA.
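The energy argument above rests on the standard CMOS model behind DVFS: dynamic power scales as C·V²·f while execution time scales as 1/f, so lowering voltage on an underloaded machine cuts energy quadratically. The sketch below illustrates this; the voltage/frequency levels and capacitance are assumed values, not from the paper.

```python
# Illustrative CMOS energy model behind DVFS (parameter values assumed).
def energy_at_level(work_cycles, voltage, frequency, capacitance=1.0):
    exec_time = work_cycles / frequency
    power = capacitance * voltage ** 2 * frequency   # dynamic power ~ C*V^2*f
    return power * exec_time                         # = C * V^2 * work_cycles

def dvfs_saving(work_cycles, full=(1.2, 2.0e9), scaled=(0.9, 1.0e9)):
    # Scaling an underloaded machine from full to reduced voltage/frequency
    # cuts energy quadratically in V, at the cost of a longer run time.
    e_full = energy_at_level(work_cycles, *full)
    e_scaled = energy_at_level(work_cycles, *scaled)
    return e_full, e_scaled
```

Note that energy for a fixed amount of work depends only on V² here, which is why DVFS targets machines whose tasks can tolerate the slowdown.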
10. An efficient architecture and algorithm to prevent data leakage in Cloud Computing
using multi-tier security approach
11. Throughput and Energy Efficiency for S-FFR in Massive MIMO Enabled
Heterogeneous C-RAN
This paper considers a massive multiple-input multiple-output (MIMO) enabled
heterogeneous cloud radio access network (C-RAN), in which both remote radio heads
(RRHs) and massive MIMO macrocell base stations (BSs) are deployed to potentially
achieve high throughput and energy efficiency (EE). In this network, soft fractional
frequency reuse (S-FFR) is employed to mitigate inter-tier interference. We develop a
tractable analytical approach to evaluate the throughput and EE of the entire network, which
accurately predicts the impact of key system parameters such as the number of macrocell BS
antennas, the RRH density, and the S-FFR factor. Our results demonstrate that massive
MIMO is still a powerful tool for improving the throughput of the heterogeneous C-RAN,
while RRHs are capable of achieving higher EE. The impact of S-FFR on the network
throughput depends on the density of RRHs. Furthermore, allocating more radio resources
to the RRHs can greatly improve the EE of the network.
12. Scalable and Reliable Key Management for Secure Deduplication in Cloud Storage
Secure deduplication using convergent encryption eliminates duplicate data and stores only
one copy to save storage costs while preserving the security of the outsourced data.
However, convergent encryption produces a number of encryption keys that grows linearly
with the number of distinct data items. Although a deduplication scheme for efficient
convergent key management has recently been proposed, it has drawbacks in terms of
scalability and key-management security. To solve these problems, we propose a novel
secure deduplication scheme with scalable and reliable key management based on
pairing-based cryptography. The proposed scheme does not require additional secure
channels to distribute key components, while still guaranteeing secure key management, as
opposed to previous schemes.
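Convergent encryption, the building block discussed above, derives the key from the data itself, so identical plaintexts produce identical ciphertexts and can be deduplicated. The toy sketch below shows the idea; the XOR keystream is only for illustration and is not secure in practice, where a proper block cipher (e.g. AES) would be used with the convergent key.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # The key is the hash of the plaintext: same data -> same key.
    return hashlib.sha256(data).digest()

def keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream derived from the key (illustration only).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(data: bytes):
    key = convergent_key(data)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    return key, ct

def decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
```

Because ciphertexts of identical files are identical, the storage server can deduplicate them without learning the plaintext; the cost, as the abstract notes, is one key per distinct data item, which is exactly the key-management burden the paper targets.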
Cloud computing plays a major role in the business domain today, as computing resources
are delivered as a utility on demand to customers over the Internet. Cloud storage is one of
the services provided in cloud computing and has been increasing in popularity. The main
advantage of cloud storage from the customers' point of view is that they can reduce their
expenditure on purchasing and maintaining storage infrastructure, paying only for the
amount of storage requested, which can be scaled up and down on demand. With the
growing data size of cloud computing, a reduction in data volumes could help providers
reduce the cost of running large storage systems and save energy. Data deduplication
techniques have therefore been introduced to improve storage efficiency in cloud storage.
Given the dynamic nature of data in cloud storage, data usage changes over time: some data
chunks may be read frequently in one period but not used in another; some datasets may be
frequently accessed or updated by multiple users at the same time, while others may need a
high level of redundancy for reliability requirements. It is therefore crucial to support this
dynamic behavior in cloud storage. However, current approaches mostly focus on static
schemes, which limits their applicability to the dynamic characteristics of data in cloud
storage. In this paper, we propose a dynamic deduplication scheme for cloud storage, which
aims to improve storage efficiency while maintaining redundancy for fault tolerance.
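The deduplication mechanism underlying such schemes can be sketched as a content-addressed chunk store: chunks are indexed by their fingerprint, so a duplicate chunk is stored only once. This minimal sketch omits the paper's dynamic redundancy policy; the fixed chunk size and in-memory index are assumptions.

```python
import hashlib

class DedupStore:
    """Minimal content-addressed chunk store illustrating deduplication."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}            # fingerprint -> chunk bytes
        self.files = {}             # file name -> list of fingerprints

    def put(self, name, data: bytes):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # store each unique chunk once
            recipe.append(fp)
        self.files[name] = recipe               # file is a list of references

    def get(self, name) -> bytes:
        return b"".join(self.chunks[fp] for fp in self.files[name])

    def stored_bytes(self):
        return sum(len(c) for c in self.chunks.values())
```

A dynamic scheme would additionally vary how many replicas of each chunk are kept, increasing redundancy for hot or reliability-critical chunks and reclaiming it for cold ones.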