Java Projects For Final Year Computer Science

January 20, 2013
1. AN ENERGY EFFICIENT CLUSTERING ALGORITHM IN LARGE-SCALE MOBILE SENSOR NETWORKS
Abstract: Clustering offers a kind of hierarchical organization that provides scalability and a basic performance guarantee by partitioning the network into disjoint groups of nodes. In this paper, an energy-efficient clustering algorithm is proposed for the large-scale mobile sensor network scenario. In the initial cluster formation phase, our proposed scheme features a simple execution process with a time and message complexity of O(n), and it eliminates the frozen-time requirement by introducing some GPS-capable mobile nodes. In the subsequent cluster maintenance stage, maintenance is asynchronous and event-driven so as to thoroughly avoid the ripple effect, making the scheme well suited to high-mobility environments. Extensive simulations have been conducted, and the results reveal that our proposed algorithm achieves its goal of incurring much lower clustering overhead while maintaining a much more stable cluster structure, as compared to the HCC (High Connectivity Clustering) algorithm.
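For readers implementing a project along these lines, here is a minimal Java sketch of one plausible reading of the initial formation step: GPS-capable nodes declare themselves cluster heads, and every other node joins the nearest advertised head. This is not the paper's actual algorithm; the class and field names (SensorNode, gpsCapable, clusterHeadId) and the distance-based join rule are illustrative assumptions.

```java
import java.util.*;

/**
 * Minimal sketch (not the paper's exact algorithm) of a one-shot cluster
 * formation step: GPS-capable nodes become cluster heads and the remaining
 * nodes join the nearest advertised head, in a single pass over the nodes
 * with a scan of the currently advertised heads.
 */
class SensorNode {
    final int id;
    final boolean gpsCapable;
    final double x, y;          // position; known precisely only for GPS nodes
    Integer clusterHeadId;      // null until the node joins a cluster

    SensorNode(int id, boolean gpsCapable, double x, double y) {
        this.id = id; this.gpsCapable = gpsCapable; this.x = x; this.y = y;
    }
}

public class ClusterFormation {
    /** Assign every non-head node to the closest GPS-capable head. */
    static void formClusters(List<SensorNode> nodes) {
        List<SensorNode> heads = new ArrayList<>();
        for (SensorNode n : nodes) {
            if (n.gpsCapable) {            // GPS nodes act as the initial heads
                n.clusterHeadId = n.id;
                heads.add(n);
            }
        }
        for (SensorNode n : nodes) {
            if (n.clusterHeadId != null) continue;
            SensorNode best = null;
            double bestDist = Double.MAX_VALUE;
            for (SensorNode h : heads) {   // join the nearest advertised head
                double d = Math.hypot(n.x - h.x, n.y - h.y);
                if (d < bestDist) { bestDist = d; best = h; }
            }
            if (best != null) n.clusterHeadId = best.id;
        }
    }
}
```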
 
2. UPDATE SCHEDULING FOR IMPROVING CONSISTENCY IN DISTRIBUTED VIRTUAL ENVIRONMENTS
Abstract: The fundamental goal of distributed virtual environments (DVEs) is to create a common and consistent presentation of the virtual world among a set of computers interconnected by a network. This paper investigates update scheduling algorithms that make efficient use of network capacity and improve consistency in DVEs. Our approach is to schedule state updates according to their potential impact on consistency. In DVEs, the perceptions of participants are affected by both the spatial magnitude and the temporal duration of inconsistency in the virtual world. Using the metric of time-space inconsistency, we analytically derive the optimal update schedules for minimizing the impact of inconsistency. Based on this analysis, we propose a number of scheduling algorithms that integrate spatial and temporal factors. These algorithms also take the effect of network delays into consideration. The algorithms can be used on top of many existing mechanisms such as dead reckoning. Experimental results show that our proposed algorithms significantly outperform intuitive algorithms that are based on spatial or temporal factors only.
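As a starting point for a project implementation, the sketch below illustrates the general idea of impact-based scheduling: each frame, the entities whose accumulated time-space inconsistency (spatial error weighted by how long it has persisted) is largest are updated first, up to a per-frame budget. The names (EntityState, spatialError, errorAge, budget) and the simple error-times-age impact measure are assumptions, not the paper's derived schedules.

```java
import java.util.*;

/**
 * Hedged sketch of impact-based update scheduling: per frame, send updates
 * for the entities with the highest accumulated time-space inconsistency,
 * limited by the available network budget.
 */
class EntityState {
    final int id;
    double spatialError;   // deviation between true and replicated state
    double errorAge;       // seconds this error has gone uncorrected

    EntityState(int id) { this.id = id; }

    double timeSpaceImpact() { return spatialError * errorAge; }
}

public class UpdateScheduler {
    /** Select up to 'budget' entities with the highest inconsistency impact. */
    static List<EntityState> selectUpdates(Collection<EntityState> entities, int budget) {
        PriorityQueue<EntityState> byImpact = new PriorityQueue<>(
                Comparator.comparingDouble(EntityState::timeSpaceImpact).reversed());
        byImpact.addAll(entities);

        List<EntityState> toSend = new ArrayList<>();
        while (toSend.size() < budget && !byImpact.isEmpty()) {
            EntityState e = byImpact.poll();
            toSend.add(e);
            e.spatialError = 0;   // sending an update resets the tracked error
            e.errorAge = 0;
        }
        return toSend;
    }
}
```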

3. ENERGY EFFICIENT REPROGRAMMING OF SWARM OF MOBILE SENSORS
Abstract: This paper presents ReMo, an energy-efficient, multihop reprogramming protocol for mobile sensor networks. Without making any assumptions about the locations of nodes, ReMo uses the LQI and RSSI measurements of received packets to estimate link qualities and relative distances to neighbors in order to select the best node for code exchange. The protocol is based on a probabilistic broadcast paradigm, with the mobile nodes smoothly adjusting their advertisement transmission rates based on dynamic changes in network density, thereby saving valuable energy. Unlike previous protocols, ReMo downloads pages regardless of their order, thus exploiting the mobility of the nodes and enabling fast transfer of the code. Our simulation results show significant improvements in reprogramming time and number of message transmissions over other existing protocols under different network mobility settings. Our implementation of ReMo on a testbed of SunSPOTs also shows better performance than existing reprogramming protocols in terms of transfer time and number of message transmissions.
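The sketch below illustrates, in Java, two of the ideas mentioned in the abstract: ranking neighbors by a link-quality score derived from the LQI and RSSI of received packets, and backing off the advertisement interval as the observed neighbor density grows. All names, weights, and the back-off policy are illustrative assumptions rather than ReMo's actual design.

```java
import java.util.*;

/**
 * Illustrative sketch (not ReMo's actual code): score neighbors by averaged
 * LQI/RSSI to pick a peer for page exchange, and scale the advertisement
 * interval with neighbor density so denser regions advertise less often.
 */
class Neighbor {
    final int id;
    double avgLqi;    // averaged Link Quality Indicator of received packets
    double avgRssi;   // averaged RSSI in dBm

    Neighbor(int id, double avgLqi, double avgRssi) {
        this.id = id; this.avgLqi = avgLqi; this.avgRssi = avgRssi;
    }

    /** Higher is better; the weights are arbitrary, for illustration only. */
    double linkScore() { return 0.6 * avgLqi + 0.4 * (avgRssi + 100); }
}

public class CodeExchangePeerSelection {
    /** Pick the neighbor with the best estimated link for code exchange. */
    static Optional<Neighbor> bestPeer(List<Neighbor> neighbors) {
        return neighbors.stream().max(Comparator.comparingDouble(Neighbor::linkScore));
    }

    /** Back off advertisements as neighbor density grows (hypothetical policy). */
    static long advertisementIntervalMs(int neighborCount, long baseMs) {
        return baseMs * Math.max(1, neighborCount);
    }
}
```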

4. OPTIMIZE STORAGE PLACEMENT IN SENSOR NETWORKS
Abstract: Data storage has become an important issue in sensor networks, as a large amount of collected data needs to be archived for future information retrieval. Storage nodes are introduced in this paper to store the data collected from the sensors in their proximity. The storage nodes alleviate the heavy load of transmitting all data to a central place for archiving and reduce the communication cost induced by network queries. The objective of this paper is to address the storage node placement problem, aiming to minimize the total energy cost of gathering data to the storage nodes and replying to queries. We examine deterministic placement of storage nodes and present optimal algorithms based on dynamic programming. Further, we give a stochastic analysis for random deployment and conduct a simulation evaluation for both deterministic and random placements of storage nodes.
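To make the dynamic-programming idea concrete, here is a hedged Java sketch of a simplified variant: choose k storage nodes among n sensors on a line so that the total distance cost of shipping each sensor's data to its nearest storage node is minimized (a k-median-style recurrence). The paper's actual model (query replies, energy weights, general topologies) is richer; the positions array, k, and the cost model here are simplifying assumptions.

```java
/**
 * Hedged DP sketch: place k storage nodes among n sensors on a line
 * (positions assumed sorted) to minimize the total distance from each
 * sensor to its nearest storage node.
 */
public class StoragePlacementDP {

    /** Cost of serving sensors [lo..hi] from one storage node at the median. */
    static double groupCost(double[] pos, int lo, int hi) {
        int median = (lo + hi) / 2;
        double cost = 0;
        for (int i = lo; i <= hi; i++) cost += Math.abs(pos[i] - pos[median]);
        return cost;
    }

    /** Minimum total cost of placing k storage nodes among sensors at pos[]. */
    static double minTotalCost(double[] pos, int k) {
        int n = pos.length;
        double[][] dp = new double[n + 1][k + 1];
        for (double[] row : dp) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
        dp[0][0] = 0;
        for (int j = 1; j <= n; j++) {            // first j sensors covered
            for (int m = 1; m <= Math.min(j, k); m++) {
                for (int i = m; i <= j; i++) {    // sensors i..j share the m-th node
                    double c = dp[i - 1][m - 1] + groupCost(pos, i - 1, j - 1);
                    if (c < dp[j][m]) dp[j][m] = c;
                }
            }
        }
        return dp[n][k];
    }

    public static void main(String[] args) {
        double[] positions = {0, 1, 2, 10, 11, 12};      // toy example
        System.out.println(minTotalCost(positions, 2));  // expect 4.0 (two tight groups)
    }
}
```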

5. HOST-TO-HOST CONGESTION CONTROL FOR TCP
Abstract: The Transmission Control Protocol (TCP) carries most Internet traffic, so the performance of the Internet depends to a great extent on how well TCP works. The performance characteristics of a particular version of TCP are defined by the congestion control algorithm it employs. This paper presents a survey of various congestion control proposals that preserve the original host-to-host idea of TCP: namely, that neither sender nor receiver relies on any explicit notification from the network. The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP-based application can utilize the network capacity, but also on how well it cooperates with other applications transmitting data through the same network. Our survey shows that over the last 20 years many host-to-host techniques have been developed that address several problems with different levels of reliability and precision. There have been enhancements that allow senders to detect packet losses and route changes quickly. Other techniques estimate the loss rate, the bottleneck buffer size, and the level of congestion. The survey describes each congestion control alternative, its strengths, and its weaknesses. Additionally, techniques that are in common use or available for testing are described.
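For context, the sketch below shows the classic additive-increase/multiplicative-decrease (AIMD) rule with slow start that the surveyed host-to-host variants build on. It is a didactic model of a congestion window measured in segments, not a working TCP stack, and the constants are textbook defaults rather than anything taken from the paper.

```java
/**
 * Didactic model of TCP-style AIMD congestion control with slow start.
 * The window is tracked in segments; no actual packets are sent.
 */
public class AimdWindow {
    private double cwnd = 1;          // congestion window, in segments
    private double ssthresh = 64;     // slow-start threshold, in segments

    /** Called once per ACKed segment. */
    void onAck() {
        if (cwnd < ssthresh) {
            cwnd += 1;                // slow start: exponential growth per RTT
        } else {
            cwnd += 1.0 / cwnd;       // congestion avoidance: ~+1 segment per RTT
        }
    }

    /** Called when loss is detected via triple duplicate ACKs. */
    void onTripleDupAck() {
        ssthresh = Math.max(cwnd / 2, 2);
        cwnd = ssthresh;              // multiplicative decrease
    }

    /** Called on a retransmission timeout: back off much more aggressively. */
    void onTimeout() {
        ssthresh = Math.max(cwnd / 2, 2);
        cwnd = 1;                     // restart from slow start
    }

    double congestionWindow() { return cwnd; }
}
```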

6. ENSURING DATA STORAGE SECURITY IN CLOUD COMPUTING
Abstract: Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. In contrast to traditional solutions, where IT services are under proper physical, logical, and personnel controls, cloud computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users’ data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete, and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.
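As a very rough illustration of the verification idea, the Java sketch below precomputes, per server, a token that is a pseudorandom linear combination of that server's data blocks; when challenged with the same seed, the server recomputes the combination over whatever it currently stores, and a mismatch localizes the misbehaving server. The homomorphic-token construction and the erasure coding of the actual scheme are omitted; the use of java.util.Random as a stand-in for a keyed PRF and the plain long arithmetic are assumptions.

```java
import java.util.Random;

/**
 * Heavily simplified sketch of challenge-based storage verification:
 * a token is a seed-derived linear combination of a server's blocks,
 * so any later modification of a block changes the recomputed token.
 */
public class StorageTokenCheck {

    /** Token = sum of seed-derived coefficients times the block values. */
    static long computeToken(long[] blocks, long seed) {
        Random prng = new Random(seed);       // stand-in for a keyed PRF
        long token = 0;
        for (long block : blocks) {
            token += prng.nextLong() * block; // overflow wraps identically on both sides
        }
        return token;
    }

    /** Run at setup time by the data owner and kept locally. */
    static long precomputeToken(long[] originalBlocks, long challengeSeed) {
        return computeToken(originalBlocks, challengeSeed);
    }

    /** Run when a server is challenged; compared against the stored token. */
    static boolean verify(long expectedToken, long[] blocksOnServer, long challengeSeed) {
        return expectedToken == computeToken(blocksOnServer, challengeSeed);
    }

    public static void main(String[] args) {
        long[] data = {42, 7, 99, 13};
        long token = precomputeToken(data, 2013L);
        data[2] = 100;                                   // simulate a malicious modification
        System.out.println(verify(token, data, 2013L));  // false -> error localized here
    }
}
```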