EXPLOITATION OF VULNERABILITIES IN CLOUD STORAGE
Narendran Calluru Rajasekar
A dissertation submitted in partial fulfillment
of the requirements for the degree of:
Master of Science
Internet Systems Engineering
Supervised by Dr. Chris Imafidon
(Formerly Queen Mary University of London)
Submitted to the
University of East London
18 May 2010
Acknowledgements
I thank God, the Almighty, for being with me throughout my life, blessing me with the energy and support needed to accomplish my goals. I thank my parents for their continuous motivation and love, which act as moral strength in achieving my goals.
The research work could not have been this interesting, or been accomplished, without the guidance of my project supervisor, Dr Chris Imafidon, Senior Lecturer, University of East London. I extend my gratitude to him for his timely contribution of ideas and guidance, which paved the way for the work achieved today.
Apart from my own efforts and initiative, the success of this work depends largely on the motivation and encouragement provided by all my tutors at the University of East London. I also extend my thanks to all my friends who supported this work, with special thanks to those who reviewed it.
I also owe a big thanks to the University of Cambridge for the security seminar held at the William Gates Building, which helped me learn aspects of security-oriented languages.
I owe special thanks to the MSDN Academic Alliance, which made it possible for me to gain access to recent versions of Microsoft Windows Server 2008, Visual Studio 2010 and Microsoft SQL Server 2008.
The extensive support from my university must also be mentioned: support in the form of its excellent lab facilities, library, e-journal access and so on, without which it would have been difficult to proceed with the study. Hence I thank the University of East London as a whole for the great extent to which it facilitates its students.
Computers have evolved from small computing devices such as the abacus to supercomputers, and computing has changed from stand-alone computers to centralised computing and then to distributed computing. The current era is that of cloud computing, in which software, platform and infrastructure are virtualised and provided as services, typically consumed on a pay-per-use model. In the traditional model, a company or organisation maintains its own IT infrastructure and hence has full control over its data and processes. In cloud computing, however, the data and processes are maintained by third-party vendors, and that control is lost.
The Internet, which is open to everyone, is the communication channel for cloud computing. If attackers identify a soft spot, it can be exploited to a great extent; hence the cloud is a tempting target for cybercrime. Most companies would like to move their applications to cloud services because of the huge cost savings involved, but they are held back by one main concern: security.
In this paper, we explore the vulnerabilities of cloud storage, one of the domains of cloud computing, and the various possible attacks exploiting these vulnerabilities. The study is extended to the available defence mechanisms and current research areas in cloud storage. The cloud storage features provided by some leading companies are reviewed, and a cloud storage architecture is devised based on the study. Finally, a cloud storage service is implemented based on the devised architecture.
Keywords: Cloud Computing, Cloud Security, Cloud Legal Issues, Cloud Storage, Security Implications, Architecture, Implementation, Exploitation, Vulnerabilities.
1 Introduction
The Internet is ubiquitous, and its penetration rate has risen sharply in recent years: 80 percent of UK residents have access to the Internet (www.internetworldstats.com). Almost everyone using the Internet is involved in some form of cloud computing activity, such as Gmail, Yahoo, Picasa or Facebook.
This paper explores cloud storage, which is classified as one of the domains of cloud computing by the Cloud Security Alliance (2009). There are many leading cloud storage service providers, such as Google Docs, Amazon S3, Nirvanix, ADrive and ZumoDrive. The vulnerabilities of cloud storage are significant: even the leading service providers have been compromised at some point. The aim of this paper is to identify the various vulnerabilities that could be exploited, to explore different ways of mitigating the risk, and to implement an optimal solution.
1.1 Aims and Objectives
The aims and objectives of this study are listed below.
* To study the usage statistics of Internet and cloud computing.
* To study "why cloud computing is important?"
* To study the target audience of cloud computing.
* To study the scope, future and importance of cloud computing.
* To explore the different aspects of cloud storage, which is one of the domains of cloud computing.
* To explore the vulnerabilities involved in access and storage implementation pertaining to cloud storage.
* To explore the possible ways of exploitation of the identified vulnerabilities.
* To identify the security components required to prevent and defend against attacks and to mitigate the risk of exploitation.
* To explore ways to ensure privacy using techniques like encryption and identity and access management.
* To review the features of current cloud storage service provided by some of the leading companies.
* To devise an architecture for implementing cloud storage with the identified security components.
* To implement an optimal solution based on the devised architecture on simple web hosting.
1.2 Paper Organisation
Chapter 1 introduces cloud storage, a domain of cloud computing; lists the aims and objectives of the paper; and details the paper's organisation.
Chapter 2 formally defines cloud computing and describes statistics on Internet usage and cloud computing activities. It then presents an analysis of the cost savings of cloud adoption, its benefits, and the influence of social media as cloud computing. Finally, it discusses various developments in cloud computing and the main reason hampering cloud adoption.
Chapter 3 defines cloud storage and explores its background and current development. Later in the chapter, the need for security and the key security components are discussed, along with some of the important vulnerabilities, their exploitation and their mitigation. Finally, topics such as accountability, archiving and backup, as well as evolving security-oriented programming languages, are discussed.
Chapter 4 reviews features of Google Docs, Adrive and Zumo drive.
Chapter 5 devises a cloud storage architecture based on the study in the previous chapters. It covers paradigms such as security components, usability, data security and legal issues, and finally illustrates the cloud storage architecture diagram.
Chapter 6 describes the step-by-step implementation of the cloud storage application with appropriate screenshots.
Chapter 7 presents the evaluation of the project implemented based on this study.
Chapter 8 contains the list of acronyms used in this paper along with their expansions.
Chapter 9 lists all the references used in the paper.
Chapter 10 lists the bibliography: sources that are worth reading and that provide knowledge and information relevant to this paper.
2 Cloud Computing - Background and Trend
2.1 Definition
Cloud computing is defined as:
"Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized SLA"
- Vaquero, L., L. Rodero-Merino, et al. (2008)
2.2 Understanding Cloud Computing
The vision of the 21st century is to access Internet services from lightweight portable devices instead of from a traditional desktop PC. Cloud computing is a technology that will facilitate companies and organisations in hosting their services without worrying about IT infrastructure and other supporting services (Dikaiakos, et al., 2009).
The cloud concept draws on existing technologies that are not themselves new, such as centralised computing, distributed computing, utility computing and SaaS. What is new is the way it integrates all of the above and shifts them from a processing unit to the network (Weiss, 2007).
Cloud computing helps a start-up company by turning capital expense into operational expense (Computing, et al., 2009). Amazon (EC2, S3), Microsoft Azure, IBM Blue Cloud and HP Cloud Assure are some of the cloud computing services available in the market (Kaufman, 2009).
Organisations can choose their operating model, either running their own private cloud or buying cloud services from third-party providers, based on their requirements (Grossman, 2009). A private cloud is similar to a public cloud, but it is hosted by and for the organisation itself, with its own security and compliance needs (Rash, 2009).
Cloud computing provides extensive computing power for web services but is not yet mature enough for HPC (High Performance Computing). Napper and Bientinesi (2009) showed experimentally that execution speed per dollar spent decreases exponentially as computing cores are added, so the cost of solving linear systems increases exponentially; this is clear evidence that cloud computing is still in its evolving stage (Napper and Bientinesi, 2009).
2.3 Usage Statistics
2.3.1 Worldwide Internet Usage
Chart 1 - World Internet User by Regions
Source: (www.internetworldstats.com, 2009)
The number of .com domain names registered, which started at six in the mid-1980s, has grown to over 80 million in the last 25 years. Internet usage has grown as well: out of a world population of 6.8 billion, 1.7 billion people currently have access to the Internet (U.S. Census Bureau, 2010). Though Asia leads with 42.6% of the world's Internet users, the U.K. leads in Internet usage as a percentage of a region's population. In the U.K. alone, 46 million people out of a population of 61 million have Internet access, which is more than 75% of the U.K. population (www.internetworldstats.com, 2009). The average DNS query volume is now more than 52 billion queries per day (VeriSign, 2010).
Chart 2 - .com domain names registered and DNS queries executed
Source: (VeriSign, 2010)
2.3.2 Penetration Rates
Chart 3 - World Internet Penetration Rates
Source: (www.internetworldstats.com, 2009)
Over the last 10 years, the average Internet penetration rate across the various regions of the world has been 25.6%. Chart 4 shows that Internet usage in the U.K. is not only increasing but that people are also switching from narrowband to broadband, meaning that people benefit from the Internet in one way or another. The Internet is used in many ways, such as email, online shopping, social networking, chatting, eLearning, marketing and entertainment, and has become an essential service in U.K. households (www.internetworldstats.com, 2009).
Chart 4 - Households with Internet Access in UK
Source: (Office for National Statistics, 2009)
Everyone with access to the Internet is engaged in some sort of cloud computing activity, whether using email services such as Gmail, Yahoo and Hotmail, storing documents or photos online, or using online applications such as Google Docs, Facebook, Picnik and Adobe Photoshop Express (Pew Internet, 2008).
2.3.3 Cloud Computing Activities
Table 1 - Cloud Computing Activities
Source: (Pew Internet, 2008)
According to statistics from the Pew Research Centre, the watchwords of people who engage in cloud computing activities are convenience, flexibility and cost-effectiveness. 51% of users say it is easy to use, 41% like the ability to access their data from any computer they are using, and 39% say it makes it easy to share information with friends and even between applications. On the other side, the one watchword against adopting cloud computing is "security": 90% of cloud application users are concerned about their data being sold to other companies, and 68% of users of at least one of six cloud applications are concerned about the ability of others to track their activities (Pew Internet, 2008).
Table 2 - Why People Use Cloud Applications
Source: (Pew Internet, 2008)
Table 2 shows that most people use cloud applications because they are easy and convenient for using and sharing information, and because users do not have to worry about backing up or losing data. Table 3 shows that young users in the 18-29 age group actively participate in cloud-related activities.
Table 3 - Young users appreciate benefits of cloud
Source: (Pew Internet, 2008)
2.3.4 Social Media
Chart 5 - Facebook Growth
Source: (Facebook Internal Data, 2009)
In recent years social media has become very popular. Facebook, a cloud application, has grown rapidly from fewer than 10 million users in 2007 to 350 million in 2009. Facebook is a social networking application which itself acts as a platform, running many other applications on the Facebook API. Facebook users in the U.K. have reached 23 million, which is 50% of U.K. Internet users. Though it is most popular in the 20-29 age group, it is widely used by all age groups (Facebook Internal Data, 2009).
Chart 6 - UK Facebook usage Age Break Up
Source: (Facebook Internal Data, 2009)
Similar to Facebook, other social media such as Orkut, Bebo, Twitter and Digg are used by many people across the world and have become grounds for Internet marketing. In traditional marketing, the cost of spreading promotions and offers is high, whereas it is almost zero when cloud applications are used as the platform. It is also quick and effective, as promotions and offers can be delivered only to appropriate, interested users.
2.3.5 Enterprise perspective
Table 4 - Reasons for using / plan-to-use Cloud
Source: (CIO Research, 2009)
From an enterprise perspective, there are various reasons to adopt cloud computing, one of the main ones being cost savings on IT infrastructure. 50% of the enterprises surveyed reported scalability and flexibility as another prime reason; IT staffing and access to skills were also reported as important reasons (CIO Research, 2009).
2.4 Cost Savings
"How much can I save by adopting cloud computing?" is the question for every company planning to adopt it. The TCO and ROI calculator available at http://www.microsoft.com/windowsazure/tco/ is a powerful tool for estimating a company's cost savings. The tool lets users input details such as business domain, number of servers, logins and duration, and also lets them change the cost of each service provided. Hence, it can be used to estimate costs for any cloud deployment by collecting cost information from various cloud providers.
Chart 7- Estimated Total Cost of Ownership for a small business over a period of 3 years
Using this tool, an analysis of estimated cost, TCO and ROI was performed for small, medium and large businesses across the IT, healthcare and education domains. The analysis showed that the TCO follows the same pattern (Chart 8) across all three domains.
Chart 8 - Total Cost of Ownership
Chart 9 - Return on Investment
ROI for all three domains varied from 70% to a little over 100%. For large businesses the ROI was around 74% irrespective of domain, whereas small and medium businesses showed a high ROI of around 100% in the healthcare and education domains. Hence the adoption of cloud computing is cost-effective for all businesses and highly effective for small and medium businesses.
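As a rough illustration of how an ROI figure of this kind can be read, the sketch below computes ROI as savings relative to the cloud spend. The figures are hypothetical, chosen only to mirror the ~100% small-business estimate above; real numbers come from the calculator itself.

```python
# Hypothetical three-year figures for a small business (illustrative only;
# real estimates come from the TCO/ROI calculator discussed above).
on_premise_tco = 120_000.0   # servers, licences, power and staff over 3 years
cloud_tco = 60_000.0         # pay-per-use fees over the same period

savings = on_premise_tco - cloud_tco
roi_percent = savings / cloud_tco * 100  # return relative to the cloud spend

print(roi_percent)  # 100.0 - in line with the small/medium-business estimates
```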
Deelman, et al. (2009) showed, in a case study using the Montage application and the Amazon EC2 fee structure, that storage costs were insignificant compared to computational costs, which demonstrates that cloud computing is cost-effective for data-intensive applications. It also shows that the choice of pricing plan is crucial, as different applications need different types of cloud resources in different ratios (Deelman, et al., 2008).
Figure 1 - Application Centric Cloud Computing
Source: (Computing and Creeger, 2009)
2.5 Benefits of Cloud Computing
2.5.1 Pay as you go
Cloud computing enables businesses to leave their ageing, costly IT infrastructure and move towards a "pay as you go" model; for a start-up it transforms capital expense into operational expense (Computing, et al., 2009). Since organisations spend a major part of their budget on IT infrastructure, cloud computing can help them shift that spending towards business innovation and competitive advantage. The company can thus be application-centric, leaving the hassles of acquiring and maintaining servers and applying security patches to the cloud computing service provider (Sagawa, et al., 2009).
2.5.2 Speed / Agility
Whether for cloud computing or in-house software systems, software upgrades are inevitable as software requirements constantly change. For an in-house system, complete testing must be done before an upgrade, involving the setup of application servers, database servers and other ancillaries, plus browser access for business partners; this needs at least 3 to 6 months of lead time, or even more. In cloud computing, when an upgrade is released, consumers can log on to the test environment directly and start testing, after which the upgrade can be consumed immediately. Hence business buyers will most often choose cloud computing, as it delivers software much faster than traditional software (Cusumano, 2009).
2.5.3 Greener IT
IT infrastructure in enterprises has physical barriers between systems, so computing resources are not utilised to the full extent at all times, even though huge capital is spent on them. Cloud computing allows optimal utilisation of computing resources, thus moving towards greener IT (Cunsolo, et al., 2009).
2.5.4 High Performance Computing
Many industries, such as scientific computing, medical research and video graphics, need HPC (High Performance Computing) infrastructure, but setting up such infrastructure is very expensive (Brandt, et al., 2009). Cloud computing is a boon for such companies, as it makes HPC resources available to consumers on a pay-as-you-go basis. Based on requirements, the computing resources can be configured on the fly, i.e., processing power, memory, storage and bandwidth can be scaled up or down on a large scale.
2.6 Current Developments in Cloud Computing
2.6.1 Cloud@Home
Figure 2 - Cloud@Home
Source: (Cunsolo, et al., 2009)
Cunsolo, et al. (2009) have proposed an innovative computing paradigm, Cloud@Home, in which a cloud can be built from heterogeneous and independent nodes, so that anyone can share their own computer resources for useful projects. This is similar to the World Community Grid (www.worldcommunitygrid.org) powered by IBM, where users donate their computer resources to scientific research. Cloud@Home opens the cloud computing world to individuals and communities, whose users can voluntarily support scientific and academic research projects or, alternatively, sell their available computer resources during idle time (Cunsolo, et al., 2009).
2.6.2 Microsoft Research
Since the cloud is evolving, it necessitates new forms of interaction and new input/output technologies. Microsoft is working on two projects, "Inside the Cloud" and "Cloud Faster". In the first, Microsoft is researching the Cloud Mouse, an interactive device for cloud computing that can be used as a secure key and with which users can interact with data in the cloud as if they were inside it. The latter is a collaboration between Bing and the Windows Core Operating System Network team, which is building a new suite of protocols and an architecture that reduces latency inside data centres and thus increases the speed of cloud applications (Microsoft, 2010).
2.6.3 Other devices interacting with Cloud
Cloud services are accessed from desktops and laptops as well as smartphones and handheld devices such as the iPad. Quickoffice recently launched cloud services for the iPhone, enabling handheld devices to access files in the cloud (http://news.zdnet.com/2422-19178_22-392815.html). It is designed to access files from various other cloud storage providers, such as Box.net, MobileMe and Google Docs accounts. Other areas of development in cloud services include online photo and video editing, which eliminates the need for traditional software requiring high memory capacity and processing speed.
Netgear has come up with a new Wi-Fi adapter which connects HDTVs and home theatre devices. Once a device is connected to the network, it can easily communicate with the cloud; the next development will be connecting these devices to the cloud. A television connected to the cloud would open a channel for further developments (Portnoy, 2010).
Cloud computing is developing in various other areas, such as reducing capital investment in IT infrastructure and optimising virtual machines and virtual images suitable for one-time cloud deployment. The complexity of developing such a virtual image depends on the complexity of the application (Deelman, et al., 2008).
2.6.4 Towards Web 3.0
Figure 3 - Web 1.0 to Web 2.0 Transformation
Source: (O'reilly, 2005)
Web 2.0 is the current generation of the Internet, which consists of user-generated content. Web 1.0, which was static, or rather read-only, changed into Web 2.0, where users own the content and can publish their own content easily. Web 2.0 has made collaboration and content management easy and allows millions of people to share information. Figure 3 shows some of the transformations from Web 1.0 that led to Web 2.0: DoubleClick to Google AdSense, personal websites to blogging, screen scraping to web services, publishing to participation, and stickiness to syndication. Though there has been much development in collaboration and content management, the Internet is broken: we still do not have one web but a vast variety of different webs that cannot communicate with each other. Web 3.0 is the future Internet and the prime focus of research into re-engineering the network so that different services can talk to each other (O'reilly, 2005).
2.7 What Is Stopping Cloud Adoption?
As discussed in Section 2.5, the benefits of cloud computing are massive, but the risks involved are also high. Social media and online sharing and collaboration are so popular that users are willing to compromise their privacy to some extent (Cachin, et al., 2009).
2.7.1 Availability
Cloud resources should be accessible at all times; there is no point in having a resource that is safe and secure but not accessible. Even leading providers such as Gmail, Amazon S3, MobileMe and Hotmail have failed at some point and suffered downtime. Consumers must be aware of their contracts and SLAs with service providers, including the action that will be taken on a consumer's data upon late payment or termination of the contract. Even worse is when the service provider goes bankrupt: LinkUp is one storage provider that went out of business as a result of losing its clients' data. This also revealed that it is costly to store data for a long time, which service providers sometimes fail to consider when determining SLAs (Krigsman, 2008).
2.7.2 Integrity
Integrity means ensuring that data is not altered in transit. Data constantly traverses the cloud across different networks for communication and collaboration, and there have been several instances where data integrity has been compromised. According to an Amazon S3 discussion thread, a faulty load-balancing server that was introduced altered data in transit, which created much confusion for customers (Chang, et al., 2008).
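A minimal way for a client to detect this kind of in-transit corruption is to keep a cryptographic digest of each object at upload time and compare it after download. The sketch below (hypothetical object names, Python standard library only) illustrates the idea:

```python
import hashlib

def digest(data: bytes) -> str:
    """Compute a SHA-256 digest of an object before uploading it."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, expected: str) -> bool:
    """After download, recompute the digest and compare with the stored one."""
    return digest(data) == expected

original = b"quarterly-report-contents"      # hypothetical object
stored_digest = digest(original)             # kept client-side at upload time

assert verify_download(original, stored_digest)                # intact transfer
assert not verify_download(b"corrupted bytes", stored_digest)  # altered in transit
```

This detects alteration but cannot repair it; the verification protocols discussed in Chapter 3 go further by proving availability and integrity without downloading the whole object.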
2.7.3 Application Requirements
Applications with great benefits and usefulness but infrastructure so poor that they are rarely accessible, or applications with great infrastructure that are readily accessible but of no practical usefulness, are both examples of bad scenarios. This is where cloud computing can help: with the pay-as-you-go cost model, resources can be scaled up or down to provide cost-effective solutions (Chantry, 2009).
Figure 4 - Decision for Cloud Based on Attributes
Source: (Chantry, 2009)
The decision of whether or not to choose the cloud can change based on the application's requirements. A simple analysis of attributes (Figure 4) can affect the choice of whether or not to go for cloud computing. If the application requires purely online data, cloud storage is the best option. If it requires both online and offline data, the cost of developing an additional module to synchronise the data adds to the overall cost of the application (Chantry, 2009).
2.7.4 Security
Table 5 - Concerns Surrounding Cloud Adoption
Source: (CIO Research, 2009)
According to The Hosting News research, 70% of the companies using cloud computing plan to move additional applications to the cloud. But many companies are still hesitant to adopt it; the prime reason is security, followed by issues such as integration, interoperability, portability and existing investment in IT infrastructure. According to CIO Research, 59% of respondents said that vendors inadequately addressed some of their security concerns. Support for transactions in cloud-based storage systems is not robust and needs to be addressed (Chantry, 2009). RSA, Novell, Trend Micro and many other companies have joined the Cloud Security Alliance and are working to build security into the cloud.
Table 6 - Is Cloud Secure?
Source: (CIO Research, 2009)
The Internet is always a ground for malicious activity, and cloud computing offers a tempting target for cybercrime for various reasons. To maintain data integrity, many providers require 100% of a customer's data to be placed in the cloud, which means that if it is compromised, 100% of the data is available to attackers. Even leading providers' services have been compromised: recently Google's single sign-on service, formerly known as Gaia, was compromised by an attack originating from China, and Google also disclosed a privacy glitch in Google Docs whereby documents were shared with unauthorised users. The cloud architecture has interlinks with multiple entities, and the compromise of any one of the weakest links compromises all the linked entities (Kaufman, 2009).
3 Cloud Storage
3.1 Definition
Cloud storage is defined as:
"Cloud storage is typically where a business stores and retrieves data from a data storage facility via the Internet. Storing data in this way offers near unlimited storage and can provide significant cost savings as there is no need for the business to buy, run, upgrade or maintain data storage systems with unused spare capacity."
- Joint, A., E. Baker, et al. (2009)
3.2 Cloud Storage
3.2.1 Availability and Integrity
Ensuring the availability and integrity of data is important, as there have been cases of even leading providers such as Amazon S3 delivering inconsistent user data that was modified in transit over the network. To ensure the long-term availability of data stored across distributed servers, cloud storage requires secure protocols and robust verification mechanisms (Cachin, et al., 2009).
Proof of retrievability (POR) was proposed by Juels and Kaliski (2007) and demonstrated by researchers at RSA Laboratories; it assures a client that a file can be retrieved from a server with low communication overhead (Bowers, et al., 2009). It can be used to verify the availability of a file before a transfer is initiated, ensuring the file is still there. A user's data can be enormous, exceeding the client's memory; the researchers have practically demonstrated the encoding of files even when the file size exceeds client memory.
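The core idea of a POR-style spot check can be sketched as follows. This is a deliberately simplified illustration (per-block HMAC tags and a random challenge over a few blocks), not the actual Juels-Kaliski construction, which additionally uses error-correcting codes and hidden sentinel blocks:

```python
import hashlib
import hmac
import os
import random

BLOCK = 4096  # bytes per block

def tag_blocks(key: bytes, data: bytes):
    """Client, before upload: compute one small MAC tag per block, kept locally."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def server_respond(data: bytes, challenge):
    """Server: return only the challenged blocks (low communication overhead)."""
    return [data[i * BLOCK:(i + 1) * BLOCK] for i in challenge]

def client_verify(key: bytes, tags, challenge, blocks) -> bool:
    """Client: recompute MACs for the returned blocks and compare with the tags."""
    return all(
        hmac.compare_digest(tags[i], hmac.new(key, b, hashlib.sha256).digest())
        for i, b in zip(challenge, blocks)
    )

key = os.urandom(32)
data = os.urandom(BLOCK * 64)            # a 256 KiB file held by the server
tags = tag_blocks(key, data)             # small client-side state
challenge = random.sample(range(64), 8)  # spot-check 8 of the 64 blocks
assert client_verify(key, tags, challenge, server_respond(data, challenge))
```

Because only a random sample of blocks travels back, the check is probabilistic: a server that has lost a large fraction of the file fails the challenge with high probability, while the communication cost stays far below re-downloading the file.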
HAIL (High-Availability and Integrity Layer) for cloud storage is an extension of POR which assures the availability of files distributed over a set of servers by comparing and verifying the MAC generated by the server-side encoding technique against the MAC generated by the client (Bowers, et al., 2009).
Oualha, et al. (2008) presented a secure, self-organising storage protocol with low resource overhead. The mechanism goes a step beyond P2P file storage and sharing, in which any number of peers join the file-sharing network and their availability cannot be guaranteed throughout. In cloud storage the storage servers are not dynamic, but to guarantee long-term availability the protocol can include a verification mechanism, and this verification function can be distributed across servers to guarantee scalability (Oualha, et al., 2008).
Remote Integrity Check (RIC) is another mechanism, proposed by Chang, et al. (2008), which allows integrity checks with low resource overhead, rather more easily than the earlier models. The models discussed so far cannot verify dynamic data, i.e., data that is updated frequently. Erway, et al. (2009) proposed a dynamic data possession model for cloud storage that needs frequent updates, such as a concurrent versions system (CVS).
Performing integrity checks can become an overhead for some clients, in which case they can be delegated to a third party, as defined in the public-key verifiability mechanism (Wang, et al., 2009). This option can be chosen only if the data stored in the cloud is not private, as the information is visible to the third-party auditor. To overcome this difficulty, researchers from HP Labs have proposed a privacy-preserving audit and extraction of digital content solution (Shah, et al., 2008), with which auditors can verify the integrity of data without actually looking at the real data. Homomorphic identification protocols (Ateniese, et al., 2009) are a similar solution from Microsoft researchers, with communication complexity independent of the length of the file.
3.2.2 Towards Greener IT
Last but not least, power consumption in cloud storage is crucial as it runs at large scale. As one step towards greener IT, Harnik, et al. (2009) presented a model which powers down under-utilised resources while still ensuring the reliability of the data. Such features are worth including, as they contribute to greener IT and hence a greener world.
3.3 Why Security?
In the SaaS model, the developer should always assume that intruders have full access to the client, as anyone, including intruders, can buy the software. Though they are not supplied with source code, they still have access to the binaries, with which they can exploit vulnerabilities. Hence there should always be a mechanism to verify client requests before execution (Viega, 2009).
Communication between cloud services and consumers can be secured using SSL. But because the technology is so familiar, users usually ignore the warning messages displayed by the browser, and attackers can exploit this to gain access to the machine. Security researchers have demonstrated this type of exploitation against cloud-based services; even services from leading giants such as Google have been exploited through such vulnerabilities (discussed in Section 2.7.4). Separately, a flaw in the design of Zoho's indexing system resulted in a vulnerability whereby one user could read another's documents. There have also been successful XSS and CSRF attacks on the cloud, which make it vulnerable to attack (Rajasekar and Imafidon, 2009).
3.4 Authentication and Access
There are different authentication mechanisms for different services. The most commonly used mechanisms are OpenID, OAuth and user request tokens. The OpenID and OAuth mechanisms are usually used on mobile devices, where the authentication information cannot be stored or firewalled as on a regular PC. Yahoo and Google use the user request token mechanism for authentication, whereas Amazon AWS uses a custom mechanism which mirrors OpenID and OAuth; in addition, the calling program signs the outbound header elements using the HMAC-SHA1 algorithm. Recently, Single Sign-On (SSO) has been adopted by many providers, allowing access to all the services of a single service provider without having to authenticate multiple times (Christensen, 2009).
2FA (Two-Factor Authentication) is another authentication mechanism, which requires two proofs of identity: something the user knows (PIN or password) and something the user has (hardware token, mobile phone, smartcard). Though this mechanism is more secure than single-factor authentication, handling tokens or smartcards can be a burden to users. In this scenario, a mobile phone or smartphone can act as the second proof if software which generates tokens similar to hardware tokens is installed on it (Abraham, 2009).
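As an illustration of the soft-token idea, the sketch below (Python, purely illustrative, not part of this dissertation's implementation) generates time-based one-time passwords as standardised in RFC 6238, the same scheme used by phone-based token generators:

```python
import base64, hmac, hashlib, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1,
    as generated by soft-token apps on a phone."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server computes the same value from the shared secret and the current time, so a valid code proves possession of the phone without any hardware token.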
3.5 Tempting Target for Cybercrime
To maintain data integrity, many providers require 100% of a customer's data to be placed in the cloud, which means that if compromised, 100% of the data is available to attackers. The Internet has always been a ground for malicious activities, as it is easily accessible. Since cloud computing is also accessed through the Internet and the resources in it are valuable, it is a tempting target for cybercrime. Leading providers such as Google and Amazon have existing infrastructure to deflect cyber attacks, but this might not be the case with all providers. The cloud architecture is interlinked with multiple entities, and a compromise of the weakest link would compromise all the linked entities (Kaufman, 2009).
Cloud community watching services analyse cloud activities constantly to detect and prevent newly injected viruses and malicious activities. Active participation of many organisations in this community will help curb malicious activities more effectively (Hawthorn, 2009).
3.6 Exploitation of Vulnerabilities
3.6.1 Network / Resource Based
3.6.1.1 Denial of Service
It is the most popular attack in network security, in which the user is prevented from receiving normal service from the service provider, with the help of other attacks such as ICMP flood, SYN flood, UDP flood and Smurf attacks. It can also be performed by increasing the load on the CPU, primary memory or network to slow down or eventually crash the system. The Distributed Denial of Service (DDoS) attack is based on DoS and consists of three layers: the controller layer, the broker layer and the attacker layer. In DDoS, the actual attack is made from the broker layer on receiving commands from the controller layer. Since it involves different layers, attacker information can be hidden easily (Liu, 2009).
Since the attack can be performed using various other attacks, both detection and prevention steps can be taken. Leu and Li (2009) proposed a DoS/DDoS intrusion detection system which uses the cumulative sum algorithm to detect attacks. Kompella, et al. (2007) proposed a novel data structure called the partial completion filter, which can detect the claim-and-hold attack not handled by the former system.
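The cumulative sum idea can be sketched as follows. This illustrative Python fragment (the baseline, drift and threshold values are assumptions of mine, not taken from Leu and Li) accumulates only the positive deviation of per-interval packet counts from a baseline and raises an alarm when the sum crosses a threshold:

```python
def cusum_detect(counts, baseline, drift=2.0, threshold=20.0):
    """Return the index of the interval at which the cumulative positive
    deviation of packet counts from the baseline exceeds the threshold,
    or -1 if no alarm fires."""
    s = 0.0
    for i, c in enumerate(counts):
        s = max(0.0, s + (c - baseline - drift))  # accumulate excess traffic only
        if s > threshold:
            return i                              # alarm: sustained flood detected
    return -1
```

The drift term absorbs normal fluctuation, so isolated spikes decay back to zero while a sustained flood accumulates quickly.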
3.6.1.2 Buffer Overflow
Figure 5 - Buffer Overflow Attack on Activation Records
Source: (Cowan, et al., 2000)
Buffer overflow has been among the most common vulnerabilities for the past two decades, through which an attacker seeks partial or total control of a host. It is caused when exceptions are not handled properly; for example, out-of-bound and type exceptions, when not handled properly, can be exploited to move control to a function introduced by the attacker, and the possible results are endless (Cowan, et al., 2000). Figure 5 shows an example of a buffer overflow attack on an activation record, where the attacker carefully crafts the input string so that the buffer overflows and the return address points to the attack code.
Figure 6 - Simulation of Buffer Overflow
Source: (http://nsfsecurity.pr.erau.edu/bom/Smasher.html, 2002)
A simple buffer overflow attack is simulated on a website (Figure 6), where the input string is allocated 10 characters. When the input exceeds 10 characters, a buffer overflow occurs, which is exploited to move control to the attacker's function "DontCallThisFunction()".
Buffer overflow vulnerabilities can be avoided by taking a little extra care during software development. Furthermore, buffer overflows can be prevented at the kernel level (Speirs, 2005) and at the hardware level (Chen and Yan, 2006) as well.
3.6.1.3 Virtual Machine Based Rootkit
Virtualisation is an important aspect of cloud computing, where the software, operating system and all related components are packaged together such that they are independent of the hardware (Mergen, et al., 2006). This is facilitated by multiplexing the system with a small privileged kernel known as a hypervisor. A Virtual Machine Based Rootkit (VMBR) is a new type of malware which, similar to a hypervisor, installs underneath the operating system layer and hoists the operating system into a virtual machine. Hence, it is difficult for software running on the operating system to detect the VMBR's state. Vitriol and SubVirt are rootkits that use this technique (King and Chen, 2006). A VMBR allows other malicious software or services to run on it, shielded from the operating system. According to King and Chen (2006), the best way to detect a VMBR is to control the layer beneath it with the help of secure hardware or bootable media.
3.6.1.4 Side Channel
Although complex cryptographic algorithms are devised for security, weaknesses in implementation can be exploited to break that security. A side channel attack exploits unintended data leakage, such as power consumption or timing information (Figure 7), to break the keys (Potlapally, et al., 2007).
Figure 7 - Leakage of side channel information.
Source: (Potlapally, et al., 2007)
Lee, et al. (2007) proposed the Lock and Key technique, which includes a test security controller that randomises the scan sub-chains when accessed by an unauthorised user, thereby reducing predictability. An attacker has to break many layers of security in order to access the scan chain and exploit it.
3.6.1.5 Man in the Middle
Different variants of the man-in-the-middle attack exist. In one variant, an attacker with a device carrying two wireless cards can launch the attack. First, he sends de-authentication frames, spoofed from the legitimate user, to the service station. The legitimate user is disconnected from the service station and starts searching for an access point on the same channel. The attacker can now use one of his wireless cards to act as the service station and connect to the legitimate user, meanwhile using the other wireless card to connect to the actual service station using the legitimate user's MAC address (Syahputri and Hasibuan, 2009). Thus the man-in-the-middle attack can be launched successfully.
Figure 8 - Man in the Middle Attack for HTTPS Communication
Source: (Callegati, et al., 2009)
Once the attack is successfully launched, the attacker can even attack HTTPS communication (Figure 8). In this scenario, the user is shown a security certificate warning, which most users ignore, and hence the system is compromised (Callegati, et al., 2009).
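On the client side, the defence is to fail closed rather than present an ignorable warning. A minimal Python sketch (illustrative only; the function names are my own) that aborts the connection on any certificate problem:

```python
import socket, ssl

def make_strict_context():
    """A TLS context that fails closed: any certificate problem aborts the
    connection instead of surfacing a click-through warning."""
    ctx = ssl.create_default_context()       # loads the system CA bundle
    ctx.check_hostname = True                # reject host name mismatches
    ctx.verify_mode = ssl.CERT_REQUIRED      # reject unverifiable chains
    return ctx

def fetch_peer_cert(host, port=443):
    """Connect and return the server certificate; raises ssl.SSLError if the
    chain does not verify (e.g. a man-in-the-middle certificate)."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with make_strict_context().wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

Because verification errors raise exceptions instead of warnings, the user never gets the chance to "accept" a forged certificate.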
3.6.1.6 Replay Attack
In a replay attack, an attacker captures network traffic and replays it at a later time to gain access to unauthorised resources, even if the traffic is encrypted. It has more effect on Dynamic Rights Management (DRM) content, where the rights over a resource change over time or with the number of uses or bandwidth consumed. If the attack is on DRM content, the user loses not only privacy but also the cost involved in the dynamic rights. For example, if a user is accessing a file from a cloud storage service such as Amazon S3, where he is charged based on the amount of data transferred, a replay attack will eat up the user's money. Abbadi and Alawneh (2009) have proposed a solution against the replay attack which gives users the flexibility to use and manage DRM content on any of the devices they own.
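A common countermeasure is to bind every request to a fresh nonce and a timestamp under a MAC, so a captured request cannot be re-submitted. A minimal illustrative sketch (the class and field names are assumptions, not any provider's API):

```python
import hmac, hashlib, time

class ReplayGuard:
    """Reject requests whose (nonce, timestamp) pair has been seen before
    or whose timestamp falls outside the allowed freshness window."""

    def __init__(self, key, window=300):
        self.key, self.window, self.seen = key, window, set()

    def sign(self, payload, nonce, ts):
        msg = f"{payload}|{nonce}|{ts}".encode()
        return hmac.new(self.key, msg, hashlib.sha256).hexdigest()

    def verify(self, payload, nonce, ts, tag, now=None):
        now = time.time() if now is None else now
        if abs(now - ts) > self.window:          # stale: outside freshness window
            return False
        if nonce in self.seen:                   # replayed: nonce already used
            return False
        if not hmac.compare_digest(self.sign(payload, nonce, ts), tag):
            return False                         # forged or tampered request
        self.seen.add(nonce)
        return True
```

A replayed request fails the nonce check even when its signature is perfectly valid, which is exactly the property raw capture-and-resend exploits.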
3.6.1.7 Resource Exhaustion
"Resource-exhaustion vulnerability is a specific type of fault that causes the consumption or allocation of some resource in an undefined or unnecessary way, or the failure to release it when no longer needed, eventually causing its depletion"
- Antunes, et al. (2008)
As stated in the definition, resource exhaustion vulnerabilities can be exploited to cause Denial of Service (DoS) attacks. They can be caused by bad design or inefficient utilisation of resources on the service side, or by resource leakage, where resources are not released or destroyed from memory after use (Antunes, et al., 2008). Hence it is difficult to observe and identify the cause unless the system is closely monitored.
Figure 9 - Predator's Architecture
Source: (Antunes, et al., 2008)
Antunes, et al. (2008) proposed a methodology to detect resource exhaustion vulnerabilities and used it to implement Predator, a black-box testing tool that identifies the resource exhaustion vulnerabilities of a system. The tool operates through attack generation and injection campaigns (Figure 9 shows its architecture). Incorporating this methodology into the software development life cycle (SDLC) can reduce this vulnerability (Antunes, et al., 2008).
3.6.1.8 Byzantine Failure
In cloud storage, many nodes participate to complete an activity. For instance, there could be many redundant servers involved, as well as multiple users accessing a single source. In this scenario, any of the participating nodes, i.e., servers or users, can fail arbitrarily as a result of a crash or malicious activity; this is known as Byzantine failure (Driscoll, et al., 2003). The system can be made robust by implementing threshold cryptography (Cachin and Tessaro, 2005) to ensure it is tolerant of Byzantine failure.
Recently, Wang, et al. (2009) proposed an Agreement Protocol for Cloud Computing (APCC), which involves two processes: the interactive consistency process and the agreement process. The interactive consistency process is executed at the server nodes, which share and store the initial message among themselves. The server nodes then aggregate the results and transmit the message to the client nodes. The agreement process is executed at the client nodes to receive the agreed message.
3.6.2 Browser Based
3.6.2.1 Cross Site Scripting
Cross Site Scripting (XSS) can be used to inject malicious code into the client machine by exploiting client-side script vulnerabilities in a website (Kieyzun, et al., 2009). An attacker can thus introduce his own script and impersonate user credentials to perform malicious activity on the website, such as session hijacking, and also craft phishing sites. A user called Samy added more than one million buddies to his MySpace account by exploiting an XSS vulnerability (Levy and Arce, 2006). Even Google has suffered from an XSS vulnerability in its online spreadsheet application, through which the user cookie, valid for other sub-domains as well, could be stolen. From the server point of view, detecting and preventing XSS attacks is a difficult task, as the attack happens at the client end, over which the server has little control. Wurzinger, et al. (2009) proposed a server-side solution, SWAP (Secure Web Application Proxy), to detect and prevent XSS. It has a reverse proxy which intercepts the HTML response and validates it for XSS attacks (Wurzinger, et al., 2009).
3.6.2.2 SQL Injection
Similar to XSS, SQL injection is a vulnerability which can be used to inject malicious database scripts when user inputs are not properly validated. It can generally be prevented by passing user inputs as parameters and avoiding building queries from user input. At times there are scenarios where building queries based on user inputs is unavoidable; in these cases, user input validation must be performed to prevent SQL injection. SQL injection is very dangerous, as it can be used to change the values of multiple records and can even be used to delete a whole table (Fu and Qian, 2008).
For example, consider the following SQL statement which updates a value based on email id.
UPDATE user_information SET credit = '£1000000' WHERE email_id = '$email'
If the user passes the value firstname.lastname@example.org' OR 'x'='x for $email, then the condition 'x'='x' is always true, and hence the credit will be set to '£1000000' for all users.
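The standard defence is parameter binding: the e-mail value travels to the database as data, never as SQL text. A minimal sketch using Python's sqlite3 for illustration (the table layout mirrors the example above):

```python
import sqlite3

def set_credit(conn, email, amount):
    """Bind the e-mail as a parameter so a value like "x' OR 'x'='x"
    is treated purely as data, never as SQL."""
    conn.execute(
        "UPDATE user_information SET credit = ? WHERE email_id = ?",
        (amount, email),
    )
    conn.commit()
```

With binding, the injection string above simply matches no row, instead of rewriting the WHERE clause.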
3.6.2.3 Malware
A program designed to damage a machine is called malware (Preda, et al., 2008). Web browsers are especially susceptible to malware, as they support extension by 3rd-party programs through "Add-on" or "Plug-in" capabilities. Ter, et al. (2008) demonstrated a malware program capable of capturing sensitive information such as passwords even when the communication is done using SSL. This is possible because the information is captured before the communication begins, i.e., before the data is encrypted for submission. Some Internet security programs may detect and warn about malicious activities of malware, such as submitting hidden data to a remote server, but many users are not aware of the technical details and tend to ignore the warning (Ter, et al., 2008).
3.6.2.4 XML Wrapping
Web services are a key technology for implementing SOA, and are especially useful for building interoperable and platform-independent services. XML is the underlying markup language used for communication between server and client. XML signatures provide protection against unauthorised modification and provide origin authentication for XML documents (McIntosh and Austel, 2005).
Figure 10 - SOAP Message with signed SOAP body
Source: (Gruschka and Iacono, 2009)
Figure 10 shows a SOAP message with a signed body, which can be moved into a different wrapper element without invalidating the signature, as shown in Figure 11. The resulting SOAP message is still valid and produces a valid hash (Gruschka and Iacono, 2009). This is called the XML wrapping attack, which according to Gruschka and Iacono (2009) can be mitigated using SOAP message security validation and XML schema validation, though a formal proof of safety is missing.
Figure 11 - SOAP message after attack
Source: (Gruschka and Iacono, 2009)
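One inexpensive structural check, in the spirit of the validation Gruschka and Iacono describe, is to confirm that the element carrying the signed Id really is the Body sitting directly under the Envelope, not a copy wrapped away elsewhere in the tree. An illustrative Python sketch (function name is mine; this is not a substitute for full signature verification):

```python
import xml.etree.ElementTree as ET

SOAP = "{http://schemas.xmlsoap.org/soap/envelope/}"
WSU_ID = ("{http://docs.oasis-open.org/wss/2004/01/"
          "oasis-200401-wss-wssecurity-utility-1.0.xsd}Id")

def signed_body_is_real_body(envelope_xml, signed_id):
    """Return True only if the element carrying the signed wsu:Id is a Body
    directly under the Envelope; a wrapped copy moved into the Header (as in
    Figure 11) fails this positional check."""
    root = ET.fromstring(envelope_xml)
    for child in root:                       # direct children of Envelope only
        if child.tag == SOAP + "Body" and child.get(WSU_ID) == signed_id:
            return True
    return False
```

The check is positional rather than cryptographic: the signature over the moved Body still verifies, but the Body no longer sits where the SOAP processing model expects it.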
3.6.3 Social Network
3.6.3.1 Sybil Attack
Figure 12 - Sybil Attack
Source: (Yu, et al., 2008)
In a Sybil attack (Yu, et al., 2008), a malicious user acquires multiple identities, pretends to be distinct users and tries to create relationships with honest users. If even one honest user is compromised, the malicious user gains special privileges which can be used for attacks. Cloud storage is widely used in social networking sites such as Facebook, MySpace, Orkut and Bebo, where users can store files such as documents, photos and videos and share them easily with their network. The relationship between an honest user and the malicious user is called an attack edge (Figure 12), which can even be used for social engineering.
Figure 13 - (a) Example Scenario of Vanish (b) Vanish Firefox Plug-in for Gmail
Source: (Geambasu, et al., 2009)
Vanish (Geambasu, et al., 2009) is a proposed system which increases privacy through self-destructing data. Using this system, a message is encrypted with a random key whose shares are stored in a distributed hash table (DHT). The key material is destroyed in the DHT after a user-specified interval, and hence the data is lost forever; the user can decrypt the data using the key before it is destroyed. Figure 13-a shows an example scenario where Vanish could be helpful. It appears to be a solution for P2P-based storage and has also been demonstrated for Gmail using a Firefox plug-in (Figure 13-b), but in its current form it is not adequately protected against Sybil attacks (Wolchok and Hofmann, 2010): Wolchok, et al. (2010) demonstrated defeating Vanish using a Sybil attack.
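Vanish proper uses threshold secret sharing over a DHT; as a minimal stand-in for the core idea, the sketch below (illustrative Python, n-of-n XOR sharing rather than true threshold sharing) splits a key into n shares so that losing any single share, as DHT expiry does, destroys the key:

```python
import os

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key, n):
    """Split `key` into n XOR shares; all n are required to reconstruct,
    so the expiry of any one share makes the key unrecoverable."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recover_key(shares):
    """XOR all shares back together to recover the original key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = xor_bytes(key, s)
    return key
```

With threshold sharing (as Vanish actually uses), any k of n shares suffice, which trades some self-destruction guarantees for robustness against ordinary DHT churn.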
3.6.3.2 Social Intersection Attacks
Social intersection attacks can be effectively launched in a social networking environment. They can be used to identify the original owner of a shared anonymous data object with just two compromised users in a group (Puttaswamy, et al., 2009). Figure 14 shows an example scenario where user C has six friends in the social network. Compromised friends A and B perform the attack by intersecting their respective social circles, yielding C and hence finding the original source. This kind of attack is hard to detect, as it is performed passively, and it becomes more powerful as the number of compromised users increases. Puttaswamy, et al. (2009) proposed a solution in which the service provider builds a number of anonymous nodes around a user, greatly reducing the probability of identifying the originating source.
Figure 14 - The social intersection attack
Source: (Puttaswamy, et al., 2009)
3.6.3.3 Collusion Attack
Figure 15 - Example of Key Tree used in Secret Key Multiplication
Source: (Raphael, 2009)
The Secret Key Multiplication (SKM) group re-keying scheme is used in group collaboration, where multiple users participate in a group discussion which might involve sharing various resources such as text, files and even hardware. Figure 15 shows an example key tree of the SKM group re-keying scheme; a subset key is generated for each group from a master key, and in turn each user in a group is given a private key. It is assumed that the users in each group keep their keys secret. A collusion attack is performed by two or more users combining their information to gain access to resources the attackers are not supposed to have. Using a collusion attack, it is even possible to obtain a high-level key, which gives the attacker access to other, unrelated groups. Raphael, et al. (2009) proved the SKM group re-keying scheme to be vulnerable to collusion attacks.
3.7 Archives / Backups
The security and privacy of archives and backups are as important as those of regular data. Troncoso, et al. (2008) proposed the Secure Long-Term Archival System (SLTAS) to ensure the eternal validity of digital signatures. The system also includes a proof of the signature's creation date which cannot be altered.
3.8 Programming Languages
AURA (A Programming Language for Authorisation and Audit) (Jia, et al., 2008) is a domain-specific, security-oriented programming language which incorporates authorisation logic into its type system to enforce security and access control. BOOM (Berkeley Orders Of Magnitude) (Alvaro, et al., 2010) is a declarative programming language which can be used to build highly scalable applications for distributed systems using its data-centric design style.
4 Google Docs, ADrive and Zumo - Review
In this section, various features of the cloud storage service providers Google Docs, ADrive and ZumoDrive are reviewed. The page speed score was captured using Google's Page Speed plug-in for Firefox.
| Criteria | Google Docs | ADrive | Zumo Drive |
| --- | --- | --- | --- |
| Account | Google Account | Valid email id | Valid email id |
| Authentication | Single sign-on | Traditional username and password | Traditional username and password |
| Session Timeout | Doesn't expire unless the user logs out explicitly | Logs the user out after 20 minutes or more of inactivity | Doesn't expire unless the user logs out explicitly |
| Uploader | Browser upload allows multiple files to be selected from a folder, but folders cannot be selected for upload. | Provides a Java uploader for the browser, through which a folder with subfolders can be uploaded directly; also allows files to be imported directly from other providers using a URL. | Browser upload allows multiple files to be selected from a folder, but folders cannot be selected for upload. |
| Downloader | Only one file can be downloaded at a time. | Multiple files, even from multiple folders, can be downloaded using the Java downloader. | Multiple files can be compressed and downloaded. |
| Data API | Data API is available. | Data API is not available. | Data API is not available. |
| Client Manager | Google doesn't provide a client manager, but 3rd-party software such as Gladinet is available to manage files. | Client manager is not available in the basic edition. | Client manager is available. |
| Multiple Sessions | Allows multiple sessions to be opened. | Multiple sessions are not allowed in the basic edition. | Allows multiple sessions. |
| Captcha | No captcha verification required. | The user must enter captcha characters to log in to the basic edition. This security level can easily be broken, as the captcha is just a 5-digit number with a constant pattern. | No captcha verification required. |
| Page Speed Score | 89/100 | 70/100 | 80/100 |
| SSL | SSL is not enabled by default; hence sniffers can easily gain access to files when accessed via public networks. | No SSL support in the basic edition, but available in other editions. | SSL is enabled by default. |
| FTP | No FTP support. | FTP support is available, but not sFTP; hence files in transit can be sniffed easily. | No FTP support. |
| Encryption | No encryption facility is integrated with storage. | No encryption facility is integrated with storage. | No encryption facility is integrated with storage. |
| Multiple File Versions | All versions of files are maintained, and file differences can be viewed for document-type files. | Multiple file version support is available, but retrieval is not available in the basic edition. | Multiple file versions can be maintained and retrieved. |
5 Cloud Storage Architecture
5.1 Security Components
Encryption is the traditional security measure for protecting files, but it introduces computational overhead, as data has to be encrypted for storage and decrypted for processing. Research is being carried out on homomorphic encryption (Saran, 2009), which allows data to be processed in its encrypted form without revealing the information; however, the research is at an initial stage and is not yet ready for implementation. It is desirable at least to reduce the overhead of encryption by encrypting only confidential data and using optimal encryption techniques (Shmueli, et al., 2009; Hewitt, 2008).
Figure 16 - Open ID Mechanism
Source: (Google, 2010)
A login key, a combination of user id and password, is generally used to prove the identity of the user. The OpenID mechanism is used by many applications today. It is advantageous from both perspectives, i.e., the service provider's and the consumer's: service providers don't have to store sensitive user information such as user names and passwords for authentication, while users no longer have to remember multiple keys for multiple service providers. Users have the liberty to choose their own OpenID provider, which can act as centralised authentication for the multiple services a user uses, reducing key management hassles; the user can efficiently manage one single key using various key-changing policies. Figure 16 shows the OpenID mechanism provided by Google. Two-factor authentication applied on top of open authentication would be highly effective, as users can trust the provider for authentication, reducing the overall cost involved in buying hardware tokens for different service providers (Hewitt, 2008).
A master key can be used for encrypting the files that are uploaded. The user provides the master key, from which the encryption keys for uploaded files are derived using one-way hashing techniques. Thus the user, and not the storage service provider, is always in control of confidential data. The user can be given the option to use different keys for different groups of files to increase granularity: even if one key is compromised, only part of the storage is compromised and the effect can be minimal (Shmueli, et al., 2009).
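The derivation step can be sketched with a standard one-way key derivation function. In this illustrative Python fragment (the group names, salt handling and iteration count are my assumptions, not part of the proposed architecture), each file group gets its own key from the single master passphrase:

```python
import hashlib

def derive_file_key(master_passphrase, file_group, salt, iterations=200_000):
    """Derive a per-group encryption key from the user's master passphrase with
    a one-way KDF, so the provider never holds the passphrase and compromising
    one group key does not expose the others."""
    material = f"{master_passphrase}:{file_group}".encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, iterations)
```

Because PBKDF2 is one-way, the derived keys reveal nothing about the master passphrase, and distinct group labels yield independent keys.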
5.1.2 Integrity and Availability
The HTTPS protocol protects data in transit between client and server, thus ensuring integrity. Homomorphic encryption (Ateniese, et al., 2009), researched at Microsoft, can be used to perform integrity checks with low resource overhead, as this type of encryption does not require decryption for verification.
The availability of a file can be verified using proof of retrievability before it is downloaded. Encrypted files can also be verified using this mechanism (discussed in Section 3.2.1) with low performance overhead. It supports the verification of dynamic files which are updated frequently, along with the various versions of a file.
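A heavily simplified flavour of such a check can be sketched as follows: the client MACs each block before upload and later challenges the server for a randomly chosen block (illustrative Python; real proof-of-retrievability schemes are far more storage- and bandwidth-efficient than keeping one tag per block):

```python
import hmac, hashlib

BLOCK = 4096  # assumed block size for the sketch

def tag_blocks(key, data):
    """Client side, before upload: compute a per-block MAC keyed with the
    block index, so blocks cannot be swapped or truncated undetected."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def check_block(key, index, block, tags):
    """Challenge: the server returns block `index`; verify its MAC against
    the tag the client kept locally."""
    actual = hmac.new(key, str(index).encode() + block, hashlib.sha256).digest()
    return hmac.compare_digest(tags[index], actual)
```

Spot-checking random blocks gives probabilistic assurance that the provider still holds the whole file, without downloading it.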
Encryption causes performance overhead, which can be reduced by encrypting only the files that contain sensitive information. A mechanism to select encryption at various levels, such as folder level and file level, allows users to encrypt only the necessary files, reducing performance overhead. Also, an optimal encryption technique with minimal performance overhead can be used.
Transferring large files across the network can be optimised by compressing a file before it is uploaded. Factors such as client computing speed, network speed and the percentage of compression achievable on different file types can be used to decide whether a file should be compressed before uploading. Document, text and bitmap files may give a higher percentage of compression than JPEG and MPEG files. Hence a fair decision should be made on whether to compress a file before upload, and it should be transparent to the user.
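The decision rule can be sketched as follows. This illustrative Python fragment (the extension list and ratio threshold are my assumptions) skips already-compressed formats and keeps the original unless deflation actually saves space:

```python
import zlib

SKIP_EXT = (".jpg", ".jpeg", ".gif", ".png", ".mp3", ".mpg", ".zip")

def maybe_compress(data, filename, min_ratio=0.9):
    """Deflate before upload only when it is likely to pay off: skip
    already-compressed formats, and keep the original unless the compressed
    size falls below `min_ratio` of the original size.
    Returns (payload, was_compressed)."""
    if filename.lower().endswith(SKIP_EXT):
        return data, False                    # format is already compressed
    packed = zlib.compress(data, 6)
    if len(packed) < min_ratio * len(data):
        return packed, True                   # worthwhile saving
    return data, False                        # not worth the CPU or a size gain
```

The was_compressed flag lets the uploader record transparently, per file, whether the server must inflate on download.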
5.1.3 Intrusion Detection/Prevention Systems
Providing security for cloud services requires more than password authentication and confidentiality in data transmission. Vieira, et al. (2009) have proposed a solution for intrusion detection in cloud computing. The solution consists of two kinds of analysis: behavioural analysis and knowledge analysis. In behavioural analysis, data mining techniques are used to recognise expected behaviour or a severe deviation from it, and in knowledge analysis, security policy violations and attack patterns are analysed to detect or prevent intrusion.
Antivirus scanning can be done in the cloud to reduce the risk of malicious activity. It is an expensive operation, and doing it once, ahead of time, for the benefit of many users is advantageous; with the power of the cloud, more antivirus engines can be employed to make it more efficient. The challenge here is bridging the gap between the release of a threat and the release of its virus signature (Walsh, 2009). Although antivirus scanning is an expensive operation, it should be repeated with the release of new virus signatures.
Firewalls could be implemented as a virtual machine image running in its own processing compartment or at the hardware level at each gateway in "out of band" firewall management channels (Sloan, 2009).
5.2.1 File Streaming
Cloud storage acts as centralised storage for files, thus eliminating dependency on the client machine and location. The user can access files from anywhere; all that is required is a dumb terminal connected to the Internet. Though the Internet is ubiquitous, at current speeds downloading a file larger than one gigabyte is hectic; hence most audio/video media files are played by streaming, without having to wait for the full file to download. Encrypted files, however, cannot be streamed without decryption, which causes performance overhead.
The system can be designed such that decryption does not interrupt the streaming of media. For this purpose, processors designed to facilitate streaming can be used to improve performance (Erez and Dally, 2009). Protocols like the Real Time Streaming Protocol (RTSP), an open streaming media protocol, can be used to stream audio/video. Although RTSP supports secure transmission of streams over the Internet (Zhang, et al., 2009), it cannot read encrypted files; a decrypting middleware can be placed between the encrypted files and the streaming server without compromising privacy.
5.2.2 File Handling Capability
File handling capabilities such as viewing photos, spreadsheets and documents within the browser give excellent usability, without the user having to worry about the client software installed on the dumb terminal or the cost involved in purchasing it. This can be facilitated using the browser extension facility known as "Add-on" or "Plug-in". These extensions are vulnerable to attacks, as discussed in Section 3.6.2.3, and hence proper security measures should be taken at granular levels to mitigate the risk.
5.2.3 Collaboration and Sharing
Many users prefer cloud computing because of its ability to collaborate and share information easily (discussed in Section 2.3.3). Cloud storage can provide facilities to collaborate and share files with 3rd-party applications. SOA is a powerful architecture through which interoperability can be achieved, and web services are one way of implementing SOA; in SOA the components are loosely coupled, which makes interoperability possible.
Implementing these facilities also increases complexity from a security perspective. The files are exposed to 3rd-party applications, so the security mechanism for collaboration and sharing has to be robust. XML Signature and XML Encryption can be used to secure XML communication through web services (Geuer-Pollmann, et al., 2005), and a mechanism to counter XML wrapping has to be implemented (discussed in Section 3.6.2.4).
5.2.4 Synchronisation and Versioning
Files uploaded to cloud storage need to be edited or modified for many reasons, and editing them using cloud applications is not always possible. Hence these files have to be modified on the client side and synchronised with the online storage. This can be done manually by replacing the document using browser utilities, but a middleware integrated with the operating system would be more user friendly, with fewer clicks. This middleware can be built on the web services provided for collaborating on and sharing files (Geuer-Pollmann, et al., 2005). Facilities to maintain different file versions can be provided to recover a previous version of a file if required, and this can be made configurable by the user to optimise storage utilisation, which eventually saves cost. Kaur and Singh (2009) have proposed a layered model for file versioning; a similar model can be used to implement versioning in cloud storage.
5.3 Data Security
Organisations using cloud computing can maintain their own data backups even if the provider backs up data for them. This allows continuous access to their data even in extreme situations, such as the provider going bankrupt or a disaster at the data centre (Viega, 2009).
Mowbray and Pearson (2009) have proposed a client-based privacy manager to eliminate the fear of data leakage and loss of privacy in cloud computing. In the paper, they present a scenario in which salesforce.com undergoes a security threat, the theft of sales data, and various ways an intruder could gain knowledge from un-encrypted data. The threats include the collection of personal information and inappropriate access to the information. Based on this scenario, a set of requirements is derived, including minimising the personal and sensitive data placed in the cloud and maximising the security protection of that data. Finally, the overall architecture of the client-based privacy manager is depicted (Mowbray and Pearson, 2009). Wang, et al. (2009) have proposed a model enforcing public verifiability, in which a third-party auditor can audit the data without demanding the user's time. These models can be implemented to ensure data security.
5.4 Legal Issues
The key legal issues in the cloud with respect to sourcing arrangements are the DPA (Data Protection Act 1998), duties of confidentiality and database rights. For instance, when large volumes of data are stored in the cloud, the servers may be spread across the world, and it is debatable whether informed consent can actually be given in such a vague situation. There are similar intricacies over confidentiality and database rights (Joint, et al., 2009).
It is perfectly possible to use cloud computing in the UK in a legally compliant, low-risk manner. However, this requires alterations to the operating model which, if not considered at an early stage, could erode the benefits of cloud computing; and if contractual or operational management is not properly adopted, operational risk could increase significantly (Joint, et al., 2009).
A news article published by Computer Fraud & Security (Anon., August 2009) indicates that data might be subject to search and seizure by government agencies unless specific contracts are made with the service providers. When Google was asked how such a situation would be handled, it said that customers would be notified about any legal order it receives. The risk can be mitigated by storing encrypted data whose key is held by the consumer, so the data cannot be exposed without the user's consent.
5.5 Architecture Diagram
Figure 17 - Cloud Storage Architecture
6 Cloud Storage Implementation
Tb drive is a cloud storage application developed to demonstrate the security features of the architecture devised in Section 5. It is implemented with reasonable security features on a simple web hosting service that provides unlimited storage as part of the hosting plan and is inexpensive compared to current storage services. It is hosted on the techbizarre.com domain, purchased for the purpose of this demonstration.
The application is developed using ASP.Net 3.5 with C# as the server-side language, AJAX and the Visual Studio 2010 IDE. MS SQL Server 2008 is used for storing data, and IIS 7.0 is used as the web server to handle client requests. The application is tested using different browsers, including Firefox, Internet Explorer, Google Chrome and Safari, under Mac OS X and Windows.
6.2 Analysis & Design
This web application is designed to meet the functionality and security requirements identified in the analysis of the previous chapters. The ASP.Net application services are used to implement access controls for the web application. Figure 18 shows the Tb drive database diagram and Figure 19 shows the Tb drive class diagram. The File and Folder classes represent each instance of a file and folder.
Folders are stored in the database using HierarchyID, a new data type available in SQL Server 2008, which makes it possible to display the folder structure in a TreeView control without much code. The FileInfo and FolderInfo classes fetch the properties of existing files and folders. FolderInfoDataSource is an enumerable class that feeds the TreeView control with the list of folders belonging to the logged-in user; thus a user cannot access another user's files or folders. The Generic class includes methods to generate random numbers and random strings.
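The per-user folder feed can be pictured with a small sketch. The Python below is illustrative only (the real implementation uses C# with HierarchyID): it builds a nested tree from flat folder rows while filtering to the logged-in user, so one user never sees another's folders. All field names are hypothetical.

```python
def build_folder_tree(rows, user_id):
    """Build a nested folder tree from flat rows, keeping only folders
    owned by `user_id` -- mirroring how FolderInfoDataSource feeds only
    the logged-in user's folders to the tree view (fields illustrative)."""
    owned = [r for r in rows if r["owner"] == user_id]
    children = {}
    for r in owned:
        children.setdefault(r["parent_id"], []).append(r)

    def subtree(parent_id):
        return [{"name": r["name"], "children": subtree(r["id"])}
                for r in children.get(parent_id, [])]

    return subtree(None)  # None marks root-level folders

rows = [
    {"id": 1, "parent_id": None, "name": "docs",   "owner": "alice"},
    {"id": 2, "parent_id": 1,    "name": "photos", "owner": "alice"},
    {"id": 3, "parent_id": None, "name": "music",  "owner": "bob"},
]
print(build_folder_tree(rows, "alice"))
# [{'name': 'docs', 'children': [{'name': 'photos', 'children': []}]}]
```

Because filtering happens before the tree is assembled, a folder owned by another user can never appear in the rendered control.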
6.2.1 Database Diagram
Figure 18 - Tb Drive Database Diagram
6.2.2 Class Diagram
Figure 19 - Tb Drive Class Diagram
6.3.1 Database Setup
This section lists the steps to set up the database in MS SQL Server 2008 for the Tb Drive web application.
Open SQL Server Management Studio.
The following login screen is displayed.
Figure 20 - SQL Server Login
Enter the database server name, login id and password.
6.3.1.1 SQL Login Creation
Figure 21 - SQL Server Login Creation
Right click on Login under Security object as shown in Figure 21.
Click on New Login to create a new SQL Server login.
Figure 22 - SQL Login Properties Setup
Enter the login name.
Select SQL Server authentication as shown in the Figure 22 and enter the password.
Leave master as the default database; it should later be changed to the target database after that database is created.
Click on Ok button.
SQL Login is successfully created.
6.3.1.2 SQL Database Creation
Figure 23 - New Database Creation
Right click on the database object and click on New Database as shown in the Figure 23.
Figure 24 - Database Creation Dialog
Enter the database name and click on the owner browser button on the top right corner.
Figure 25 - Assign SQL Login to Database
Check the SQL login that was created earlier and click Ok to select the database owner. Though this can be done later, setting it now eliminates the owner-mapping steps.
Figure 26 - Select db owner for Database
Selected login will be displayed in the resulting dialog as shown in Figure 26.
Click Ok button.
Figure 27 - Database Properties Setup
The selected login will be displayed in the database creation dialog.
Click Ok button.
Figure 28 - Set tbdrive as default database for the SQL Login
Open the SQL Login properties page and set the tbdrive database that was created in earlier step as default database.
Click on Ok button.
6.3.1.3 ASP.Net Application Services Registration
Go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\" and execute "aspnet_regsql.exe" utility. The following wizard will be launched.
Figure 29 - Application Services Registration Wizard
Click Next to continue wizard as shown in Figure 29.
Figure 30 - Configure SQL Server for application services
Select Configure SQL Server for application services.
Click Next to continue (Figure 30).
Figure 31 - SQL Credentials for application services
Enter the database server name.
Select SQL Server authentication mode and enter the SQL login credentials that were created earlier.
Figure 32 - Confirm Application Services Settings
Dialog (Figure 32) to confirm setting will be displayed.
Click Next button.
Figure 33 - Application services registration confirmation
Dialog as shown in Figure 33 is displayed upon successful registration.
Figure 34 - Application Services Tables
Application registration service creates the tables as displayed in Figure 34.
Figure 35 - Application Services Stored Procedures
Application registration service creates the stored procedures as displayed in Figure 35.
Figure 36 - Files Entity
Create tb_Files table to hold user files as displayed in Figure 36.
Figure 37 - Folders Entity
Create the tb_Folders table, which holds folders, as displayed in Figure 37.
Figure 38 - Tb Drive Stored Procedures
Create the Tb Drive stored procedures as displayed in Figure 38.
The database scripts can be found in the Appendix Section of this document.
6.3.2 Web Application Setup
This section lists the steps to create Tb Drive web application using Visual Studio 2010.
Figure 39 - Visual Studio 2010 IDE
Open Visual Studio 2010. The IDE is opened as shown in Figure 39.
Figure 40 - Create New Web Application
From the File menu, select New Web Site. The following dialog is displayed.
Figure 41 - Select Web Application Template
Select ASP.Net Web Site project template.
Enter the name for the web site.
Click Ok button.
Figure 42 - GUI Design
Use a photo editing tool such as Adobe Photoshop (Figure 42) to design GUI items such as the logo, icon, header, footer and buttons.
Figure 43 - Cascading Style Sheet (CSS)
CSS is an important element in designing a web application. It is a specification that describes how HTML elements should be displayed by a browser.
Here CSS is used in a way that is compatible with almost all browsers.
Figure 44 - Master Page Design
Add new Master Page.
The Master Page includes the Open ID login elements.
Figure 44 shows the Master Page design.
Figure 45 - Server Side Script for Master Page (C#)
Figure 45 shows the server side script for Master Page written in C#.
This page includes logic to authenticate users using Open ID.
New accounts are created automatically when new users login to the website.
Figure 46 - TB Drive Design
Figure 46 shows the Tb Drive design page.
This page includes functionality to create new folders, upload / download files, encrypt / decrypt files.
Figure 47 - TB Drive Server Side Script showing page level authentication
Figure 47 shows the server side script for Tb Drive page.
The highlighted code in Figure 47 checks if the user is authenticated and displays the files associated with the user.
Figure 48 - TB Drive - Master Key
Figure 48 shows the settings page of Tb Drive.
Users can reset their master key on this page.
This is a costly operation, as the reset involves decrypting the already-encrypted files and then re-encrypting them with the new master key.
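The reset workflow can be sketched as follows. The Python below stands in for the C# implementation and deliberately uses a toy XOR keystream built from SHA-256 instead of Rijndael, since the point is the decrypt-then-re-encrypt cycle, not the cipher. It is NOT a secure cipher and all names are illustrative.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher standing in for Rijndael/AES -- NOT secure,
    used only to illustrate the master-key reset workflow."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    # XOR truncates the keystream to the data length; the same call decrypts.
    return bytes(a ^ b for a, b in zip(data, out))

def reset_master_key(encrypted_files, old_key, new_key):
    """Decrypt every stored file with the old master key, then re-encrypt
    with the new one -- the costly operation described above."""
    return {name: keystream_xor(new_key, keystream_xor(old_key, blob))
            for name, blob in encrypted_files.items()}

files = {"a.txt": keystream_xor(b"old", b"secret data")}
files = reset_master_key(files, b"old", b"new")
print(keystream_xor(b"new", files["a.txt"]))  # b'secret data'
```

The cost is linear in the total size of the user's stored data, which is why the reset is discouraged as a routine operation.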
Figure 49 - FolderInfoDatasourceView class showing LINQ query
Figure 49 shows a special class designed to feed data to the TreeView control.
This class uses a LINQ query on in-memory objects to filter child folders.
6.3.3 Domain Setup
This section lists the steps to set up a domain using the hosting control panel. These steps are performed after purchasing the domain name from a registrar. If the domain name is purchased from the same web hosting provider, the name servers are set to the web hosting servers by default; otherwise they must be configured manually. This scenario assumes that the domain is purchased from the web hosting service provider.
Figure 50 - Parallel Plesk Panel Login
Launch web hosting control panel website. Login screen will be displayed.
Enter the credentials and click Log In.
Figure 51 - Parallel Plesk Home Page
Home Page is launched which has all the available tools to configure a website.
Click Domains link.
Domain management tool will be launched listing domains associated with the login.
Figure 52 - Domain Management Page
Click Create Domain Link.
Create Domain Page will be displayed.
Figure 53 - Domain Creation
Enter the domain name and select WWW checkbox to enable www prefix in the website url.
Select Web Site Hosting, enter FTP username and password.
Click Finish to create the domain.
6.3.4 Website Publishing
This section lists the steps to publish website to remote web server.
Figure 54 - Website Publishing
When the coding is completed, click Build > Publish Web Site.
A dialog prompting publish path will be displayed (Figure 55).
Figure 55 - Local publishing path
Enter the publish path and click Ok button.
Web site is published in the given path.
Figure 56 - FileZilla FTP Utility
Any FTP utility can be used to upload files to the remote web server; for this implementation FileZilla, a free FTP client, is used.
Click File > Site Manager.
The dialog shown in Figure 57 is displayed.
Figure 57 - Create Connection to remote web server
Click New Site and enter the domain name.
On the right hand side (Figure 57), enter Host IP address.
Select Logon Type as Normal and enter user name and password.
Click Connect button.
Figure 58 - Publish files to remote web server
Select the published files and upload it to the remote web server.
6.3.5 Version Control
Version control is a vital task in any kind of project, as there is always a need to roll back to a previous version when something goes wrong. For this implementation Visual SourceSafe is used to control the versions of the project files.
Figure 59 - Visual Source Safe (VSS) for version control
6.4.1 Login Mechanism
Figure 60 - Tb drive login page
Tb drive does not store passwords to authenticate users; instead it implements the Open ID mechanism. The implementation accepts Google and Yahoo Open IDs for login to Tb drive and demonstrates the simplest possible account creation process: users can start using the application readily and securely without filling out sign-up forms. The user simply clicks on the Open ID provider's logo as shown in Figure 60. The application then redirects to the Open ID service provider (Google in the example shown in Figure 61), which asks for the user name and password, mentioning the requesting domain (techbizarre.com) in the authentication page.
Figure 61 - Open ID service provider authentication page
Once the user enters a valid user name and password, the Open ID service provider displays all the details requested by the relying party (techbizarre.com) and asks the user for approval. If the user approves, the requested information is sent to the relying party. The user can also choose to remember the association, so this step can be skipped in future.
Open ID provides only authentication and does not support user sessions, which must be handled by the relying party. Single Sign-On (SSO) is another mechanism, in which even the session is handled by the service provider, and it has its own advantages and disadvantages. Though Open ID does not provide session management, the service is free of charge, whereas there is a cost to obtain SSO from a third party. With Open ID an application can accept identities from many providers and hence reach a larger audience, whereas with SSO the service is restricted to a single provider.
Figure 62 - Open ID service provider requesting user approval
6.4.2 Storage Organisation
Figure 63 - Tb drive
In cloud storage, files can be stored in two different ways: in the file system or in a database. Both have advantages and disadvantages. The file system requires full permission on the disk, which is more difficult to obtain in a web hosting scenario than when maintaining one's own server. Storage also requires a binding between the application layer and the storage layer; when files are stored in the file system this binding is loosely coupled, and there is no concrete relationship between the files and the application layer, where privacy and confidentiality are enforced.
The binding between the application layer and the storage layer is stronger when storage is implemented in a database server, though this may hinder other facilities, such as FTP access to the storage.
Considering the web hosting storage scenario of Tb drive, the database is chosen for the storage layer, but file-system storage is also demonstrated, without the encryption feature.
The HierarchyID data type available in SQL Server 2008 is used to implement the folder structure in Tb drive, allowing easy mapping of the folder structure to the TreeView control.
6.4.3 File Encryption & Decryption
Figure 64 - Requesting Master Key to encrypt / decrypt
Since the files are stored on a third-party server, user privacy might be compromised (discussed in Section 5.4); hence users can be given the ability to encrypt files. Tb drive demonstrates symmetric encryption of files using the Rijndael algorithm. Individual files can be encrypted using a key: Tb drive accepts a key string from the user, which is internally converted into a key and initialisation vector combination to encrypt the files. Since symmetric encryption is used, the user has to provide the same key to decrypt the files; if the key is lost, the data is lost forever.
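The conversion of a user-entered key string into a key and initialisation vector can be sketched with a standard password-based derivation. The Python below uses PBKDF2 from the standard library; the salt, iteration count and output sizes are illustrative assumptions, not the exact parameters used by Tb drive.

```python
import hashlib

def derive_key_and_iv(passphrase: str, salt: bytes):
    """Derive a 32-byte key and a 16-byte IV from the user's key string,
    similar in spirit to how Tb drive turns the entered string into a
    key/vector pair for Rijndael (all parameters are illustrative)."""
    material = hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode(), salt, 100_000, dklen=48)
    return material[:32], material[32:48]  # (key, IV)

key, iv = derive_key_and_iv("my secret phrase", b"fixed-app-salt")
print(len(key), len(iv))  # 32 16
```

The derivation is deterministic, so the same passphrase always yields the same key/IV pair, which is exactly why the data is unrecoverable once the passphrase is forgotten.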
6.4.4 Key Management
Figure 65 - Key Management
Since the loss of a key means the data is lost forever, key management is crucial in cloud storage. To mitigate this risk, Tb drive stores a one-way hash of the master key supplied by the user and uses the master key itself to encrypt files. Because only the hashed master key is stored, the data cannot be decrypted unless the user provides the master key. Thus users can safely store data in Tb drive while retaining full control of access.
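A minimal sketch of this scheme, in Python (the project itself is C#): only a salted one-way hash of the master key is persisted, and a supplied key is verified by recomputing the hash. Function names and parameters are illustrative assumptions.

```python
import hashlib
import hmac
import os

def hash_master_key(master_key: str, salt: bytes = None):
    """Store only a salted one-way hash of the master key, never the key."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", master_key.encode(), salt, 100_000)
    return salt, digest

def verify_master_key(master_key: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash from the user-supplied key and compare in
    constant time, so the server never needs the plaintext key on disk."""
    candidate = hashlib.pbkdf2_hmac("sha256", master_key.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_master_key("correct horse")
print(verify_master_key("correct horse", salt, digest))  # True
print(verify_master_key("wrong guess", salt, digest))    # False
```

Only after verification succeeds would the user-supplied master key be used, transiently, to decrypt files.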
6.5 Critical Analysis & Lessons Learnt
The initial analysis of vulnerabilities in cloud storage helped in understanding the various ways of exploiting them and in becoming aware of current research on securing the vulnerable areas.
Security should be implemented at every layer defined in the ISO/OSI model and at a very granular level. Attacks such as denial of service, buffer overflow and man-in-the-middle have existed for ages, yet there is still no concrete mechanism to counter them; even the strongest encryption available is vulnerable to side-channel attacks.
Cloud services rely on the network, which is not always secure; network sniffers can easily gain sensitive information by monitoring network payloads.
Simple coding flaws, such as not destroying objects in memory that are no longer required, can be exploited to degrade the performance of the system and eventually crash it. Other coding flaws can lead to SQL injection, through which an attacker can access the complete database.
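The SQL injection risk mentioned above can be demonstrated in a few lines. The sketch below uses Python's built-in sqlite3 (rather than the MS SQL stack used in the project) to contrast a concatenated query with a parameterised one; the table and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)  # [('s3cret',)] -- every row leaks

# Safe: a parameterised query treats the payload as plain data, not SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- no match, nothing leaks
```

The same discipline applies in ADO.Net: using parameterised commands or stored procedures, as Tb drive does, keeps attacker-controlled strings out of the query text.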
Attacks through social networks are more recent and are being exploited because they require no extra effort from an individual to initiate. Users are naive enough to give away sensitive personal information on social networking websites, which can then be used to break a password through the 'forgotten password' facility that every service provider offers.
Users are addicted to simplicity and reluctant to manage multiple passwords. One solution is to use Open ID, as implemented in this project, but this increases the risk of XSS and phishing attacks. Hence general awareness of the do's and don'ts of cloud computing and Internet use should be promoted.
A famous phrase says that words spoken cannot be taken back; similarly, data given away to the cloud cannot be taken back. No one knows how many backups exist of the data stored in the cloud. A user may upload unencrypted data intending to encrypt it after uploading; even before the data is encrypted it could have been backed up, leaving the user with a false sense of safety.
Even leading services such as Google, Zoho and Nirvanix have failed at some point, exposing or even losing customer data. Hence users should have their own backup mechanisms for critical data.
Google recently suffered an attack originating from China, in which the attacker exploited the weakest link, people, by sending a malicious script through a messenger. According to Google, the attacker was able to access only a small portion of the servers where the source code of some Google applications is kept. If the files had been encrypted, as implemented in this project, the attacker would have gained nothing from the server.
6.5.2 Project Evaluation & Lessons Learnt
Open ID is implemented for authentication in this project; it increases simplicity and usability and, if used properly, can be reliable. At the same time it is vulnerable to phishing attacks, which can only be defended against by educating people to check the domain from which a login page originates.
The ASP.Net Membership service simplified the implementation of session security and is assumed to be safe from attack; if any vulnerability in the Membership assembly is exploited, that assumption is flawed.
After the implementation it was realised that if an attack originates from the service provider itself, the provider can take full control of the data by impersonating user authentication. Encryption can protect sensitive data, but even this can be attacked by monitoring the user's session from the server side.
As a student it is hard to obtain access to storage servers used for commercial purposes. Hence the storage provided by a web hosting service was used to implement this project, with limitations on server capacity and performance.
The implementation helped in understanding and learning the following:
* The architecture of a web application, including the different tiers involved in it.
* The process of domain registration and the basics of DNS and name servers.
* The usage of a web hosting control panel.
* The importance of the standards and coding best practices enforced in most software companies.
* The new SQL data type HierarchyID and its ability to bind to the TreeView control in ASP.Net.
* The implementation of the Rijndael encryption algorithm using C#.Net, including converting a user-entered password into an encryption key and vector pair.
* The importance of CSS and XHTML in achieving compatibility across browsers and operating systems.
6.6 Future Work
* Interoperability can be extended by providing a Data API layer to enable communication with other cloud storage service providers.
* Open authentication can be implemented to increase interoperability between different applications.
* Online editing of some file types, such as documents, spreadsheets, photos and videos, can be provided to increase usability.
* Two-factor authentication can be implemented to increase security.
* A client manager can be developed to handle encryption and decryption on the client machine, completely eliminating the possibility of security being broken from the server side.
* The application can be integrated with speciality applications such as YouTube, Gmail, Google Docs, Zoho, blogs, etc.
* Themes can be made customisable so that users can change the look and feel to their liking.
Cloud computing is liked for its ease of access, sharing and collaboration, and most of all because it is a money saver. It is a boon for small start-up companies, as it converts capital expense into operational expense. It eliminates the worries of maintaining and securing IT infrastructure and increases the speed and agility of the software development life cycle.
According to a CIO survey, 70% of companies that have started using cloud computing are moving additional applications to the cloud. But for most companies security is the key factor hindering cloud adoption. The decision to adopt the cloud depends purely on the application requirements and the benefit-to-risk ratio: if the benefits are high compared to the risks, the cloud can be adopted; otherwise it remains a red signal.
Storing files in cloud storage is like storing valuables in a banker's vault: data is stored on someone else's computer, which means someone else has full access to the user's data. Vulnerabilities exist throughout the life cycle of cloud storage, from the client machine, through the connecting network, to the servers. Based on the research done in this dissertation, it is evident that people are the weakest link.
Though cloud storage looks like a new technology, it has evolved from existing technologies such as grid computing, P2P file sharing and virtualisation, and hence inherits the vulnerabilities that existed in those technologies. In addition, social networking introduces a new kind of vulnerability in which people, the weakest link, are exploited. The study revealed that easily exploited vulnerabilities exist in browsers in the form of plug-ins, add-ons and toolbars offered for convenient access to services, which can be spyware or malware. These risks can be mitigated by adhering to standards and best practices as discussed earlier in this paper, but they cannot be fully eliminated.
The implementation of cloud storage revealed that encryption can help protect privacy to some extent, but it involves legal issues that depend on the locations of the data centres, the service providers and the consumers.
The Open ID login mechanism implemented in this project improves usability, but it is prone to phishing attacks; hence users should be educated about such common attacks.
Cloud computing is very popular because of the power it brings to sharing and collaboration. This is possible because different cloud applications talk to each other, and most of this communication happens through web services. XML is the key to this communication between web services and client applications. Security in XML communication is achieved through XML signatures, which are vulnerable to the XML wrapping attack, and no formal proof of a defence against this attack yet exists.
Many legal issues are involved in the implementation of cloud storage. The storage servers and data centres could be in one country, the service provider in another, and the user elsewhere. Which entity abides by which country's law is still a question, and everything should be governed through contracts and service level agreements.
All cloud computing activities depend purely on the Internet, through which the client connects to the cloud service provider. If the network breaks down, the service becomes inaccessible, breaking the Availability leg of the CIA triad.
Last but not least, at the base of all security mechanisms lies the assumption that a brute force attack would take considerable time. Considering the power that cloud computing with distributed technology brings, breaking the keys currently in use may not be far off. This flaw in the underlying assumption could collapse the entire security of the cloud.
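Some back-of-the-envelope arithmetic makes the concern concrete. The rates below are assumptions, not measurements: a hypothetical million-node cloud guessing a billion keys per second per node exhausts a 56-bit keyspace in well under a second of expected work, while 128 bits remain astronomically out of reach, so the worry applies mainly to short or weakened keys.

```python
# Rough arithmetic behind the brute-force concern (rates are assumed).
keys_per_second_per_node = 10**9   # assumed: 1 billion guesses/s per node
nodes = 10**6                      # assumed: a million-node cloud
rate = keys_per_second_per_node * nodes

seconds_per_year = 60 * 60 * 24 * 365

for bits in (56, 128):
    # On average half the keyspace is searched before the key is found.
    years = (2 ** bits / 2) / rate / seconds_per_year
    print(f"{bits}-bit key: ~{years:.2e} years on average")
```

The gap between the two results shows why key length, not raw attacker compute alone, decides whether the low-level assumption holds.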
|2FA||Two Factor Authentication|
|APCC||Agreement Protocol for Cloud Computing|
|CIA||Confidentiality, Integrity and Availability|
|CSRF||Cross Site Request Forgery|
|CVS||Concurrent Versions System|
|DNS||Domain Name System|
|DPA||Data Protection Act 1998|
|DRM||Dynamic Rights Management|
|HMAC||Hash based Message Authentication Code|
|HPC||High Performance Computing|
|HTML||Hyper Text Markup Language|
|HTTP||Hyper Text Transfer Protocol|
|HTTPS||Secure Hyper Text Transfer Protocol|
|IaaS||Infrastructure as a Service|
|MAC||Message Authentication Code|
|PaaS||Platform as a Service|
|RIC||Remote Integrity Check|
|ROI||Return on Investment|
|SaaS||Software as a Service|
|SHA1||Secure Hash Algorithm|
|S-HTTP||Secure Hyper Text Transfer Protocol|
|SKM||Secret Key Multiplication|
|SLA||Service Level Agreement|
|SOA||Service Oriented Architecture|
|SQL||Structured Query Language|
|SSL||Secure Socket Layer|
|TCO||Total Cost of Ownership|
|XML||eXtensible Markup Language|
|XSS||Cross Site Scripting|
 Abbadi, I. and M. Alawneh (2009), "Replay Attack of Dynamic Rights within an Authorised Domain," Secureware 2009: 148-154.
 Abraham, D. (2009). "Why 2FA in the cloud?" Network Security 2009(9): 4-5.
 Alvaro, P., T. Condie, et al. (2010). "BOOM Analytics: Exploring Data-Centric, Declarative Programming for the Cloud." EuroSys 2010: 13-16
 Anonymous (2009). "Data in the cloud might be seized by government agencies without you knowing." Computer Fraud & Security 2009(8): 1.
 Antunes, J., N. Neves, et al. (2008). "Detection and Prediction of Resource-Exhaustion Vulnerabilities", ISSRE 2008: 87-96.
 Ateniese, G., S. Kamara, et al. "Proofs of Storage from Homomorphic Identification Protocols." Advances in Cryptology-ASIACRYPT 2009: 319-333.
 U.S Census Bureau. (2010). "World Population Summary." International Data Base from http://www.census.gov/ipc/www/idb/worldpopinfo.php. [Accessed 30 March 2010]
 Bowers, K., A. Juels, et al. (2009). Hail: A high-availability and integrity layer for cloud storage, Proceedings of the 16th ACM conference on Computer and communications security 2009: 187-198.
 Bowers, K., A. Juels, et al. (2009). Proofs of retrievability: Theory and implementation, Proceedings of the 2009 ACM workshop on Cloud computing security 2009: 43-54.
 Brandt, J., A. Gentile, et al. (2009). "Resource monitoring and management with OVIS to enable HPC in cloud computing environments", IPDPS 2009: 1-8.
 Cachin, C. and S. Tessaro (2005). "Optimal resilience for erasure-coded Byzantine distributed storage." Distributed Computing 2005: 497-498.
 Cachin, C., I. Keidar, et al. (2009). "Trusting the Cloud." ACM SIGACT News 40(2).
 Callegati, F., W. Cerroni, et al. (2009). "Man-in-the-Middle Attack to the HTTPS Protocol." IEEE Security & Privacy 7(1): 78-81.
 Chang, E. and J. Xu (2008). "Remote integrity check with dishonest storage server." Computer Security-ESORICS: 223-237.
 Chantry, D. (2009). "Mapping Applications to the Cloud." TechEd Special Edition 19: 2-9.
 Chen, Z. and X. Yan (2006). "Hardware Solution for Detection and Prevention of Buffer Overflow Attacks in CPU Micro-architecture." RESEARCH AND PROGRESS OF SSE 26(2): 214.
 Christensen, J. (2009). "Using RESTful web-services and cloud computing to create next generation mobile applications", Proceeding of the 24th ACM SIGPLAN conference companion on Object oriented programming systems languages and applications 2009: 627-634
 Creeger, M. (2009). "Cloud Computing: An Overview." Distributed Computing 7(5).
 Cowan, C., P. Wagle, et al. (2000). "Buffer overflows: Attacks and defenses for the vulnerability of the decade", DARPA Information Survivability Conference & Exposition 2000 (2):1119
 Cunsolo, V., S. Distefano, et al. (2009). "Cloud@Home: Bridging the Gap between Volunteer and Cloud Computing", LNCS 2009: 423-432.
 Cusumano, M. (2009). "An analysis of the cloud computing platform", Massachusetts Institute of Technology.
 Deelman, E., G. Singh, et al. (2008). "The cost of doing science on the cloud: the montage example", Proceedings of the 2008 ACM/IEEE conference on Supercomputing 2008(50).
 Dikaiakos, M., D. Katsaros, et al. (2009). "Cloud Computing: Distributed Internet Computing for IT and Scientific Research." IEEE Internet Computing 13(5): 10-13.
 Driscoll, K., B. Hall, et al. (2003). "Byzantine fault tolerance, from theory to reality." Computer Safety, Reliability, and Security: 2003: 235-248.
 Erez, M. and W. Dally (2009). "Stream Processors." Multicore Processors and Systems 2009: 231-270.
 Erway, C., A. Küpçü, et al. (2009). "Dynamic provable data possession", Proceedings of the 16th ACM conference on Computer and communications security 2009: 213-222.
 Fu, X. and K. Qian (2008). "SAFELI: SQL injection scanner using symbolic execution", Proceedings of the 2008 workshop on Testing, analysis, and verification of web services and applications 2008: 34-39.
 Geambasu, R., T. Kohno, et al. (2009). "Vanish: Increasing data privacy with self-destructing data", USENIX Security Symposium 2009(18): 299-350.
 Geuer-Pollmann, C. and J. Claessens (2005). "Web services and web service security standards." Information Security Technical Report 10(1): 15-24.
 Grossman, R. (2009). "The Case for Cloud Computing." IT PROFESSIONAL 11(2): 23-27.
 Gruschka, N. and L. Iacono (2009). "Vulnerable Cloud: SOAP Message Security Validation Revisited", IEEE International Conference on Web Services 2009: 625-631.
 Harnik, D., D. Naor, et al. (2009). "Low power mode in cloud storage systems", IEEE International Symposium on Parallel & Distributed Processing 2009: 1-8.
 Hawthorn, N. (2009). "Finding security in the cloud." Computer Fraud & Security 2009(10): 19-20.
 Hewitt, C. (2008). "ORGs for scalable, robust, privacy-friendly client cloud computing." IEEE Internet Computing 12(5): 96-99.
 Jia, L., J. Vaughan, et al. (2008). Aura: A programming language for authorization and audit, ACM.
 Joint, A., E. Baker, et al. (2009). "Hey, you, get off of that cloud?" Computer Law and Security Review: The International Journal of Technology and Practice 25(3): 270-274.
 Juels, A. and B. Kaliski Jr (2007). "PORs: Proofs of retrievability for large files", Proceedings of the 14th ACM conference on Computer and communications security 2007: 584-597.
 Kaufman, L. M. (2009). "Data Security in the World of Cloud Computing." IEEE Security and Privacy 7(4): 61-64.
 Kaur, P. and H. Singh (2009). "A layered structure for uniform version management in component based systems." ACM SIGSOFT Software Engineering Notes 34(6): 1-7.
 Kieyzun, A., P. Guo, et al. (2009). "Automatic creation of SQL injection and cross-site scripting attacks", Proceedings of the 2009 IEEE 31st International Conference on Software Engineering 2009: 199-209.
 King, S. and P. Chen (2006). "SubVirt: Implementing malware with virtual machines", IEEE Symposium on Security 2006:1-14.
 Kompella, R., S. Singh, et al. (2007). "On Scalable Attack Detection in the Network." IEEE/ACM TRANSACTIONS ON NETWORKING 15(1).
 Lee, J., M. Tehranipoor, et al. (2007). "Securing Designs against Scan-Based Side-Channel Attacks." IEEE transactions on dependable and secure computing 4(4): 325-336.
 Leu, F. and Z. Li (2009). "Detecting DoS and DDoS Attacks by Using an Intrusion Detection and Remote Prevention System", IEEE Conference and Exposition 2009:1-15.
 Levy, E. and I. Arce (2006). "New threats and attacks on the world wide web." IEEE Security & Privacy 2006:234-266.
 Liu, W. (2009). "Research on DoS Attack and Detection Programming", IITA 2009:207-210.
 McIntosh, M. and P. Austel (2005). "XML signature element wrapping attacks and countermeasures", Proceedings of the 2005 Workshop on Secure Web Services 2005: 20-27.
 Mergen, M., V. Uhlig, et al. (2006). "Virtualization for high-performance computing." ACM SIGOPS Operating Systems Review 40(2): 11.
 Mowbray, M. and S. Pearson (2009). "A client-based privacy manager for cloud computing", Proceedings of the Fourth International ICST Conference on Communication System Software and Middleware 2009(5).
 Napper, J. and P. Bientinesi (2009). "Can cloud computing reach the top500?", Proceedings of the combined workshops on UnConventional high performance computing workshop plus memory access workshop 2009(8).
 O'Reilly, T. (2005). "What is Web 2.0: Design patterns and business models for the next generation of software." O'Reilly Media.
 Oualha, N., M. Önen, et al. (2008). "A security protocol for self-organizing data storage", IFIP International Federation for Information Processing 2008:675-679.
 Potlapally, N., A. Raghunathan, et al. (2007). "Aiding side-channel attacks on cryptographic software with satisfiability-based analysis." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 15(4): 465-470.
 Preda, M., M. Christodorescu, et al. (2008). "A semantics-based approach to malware detection." ACM Transactions on Programming Languages and Systems (TOPLAS) 30(5): 25.
 Puttaswamy, K., A. Sala, et al. (2009). "StarClique: guaranteeing user privacy in social networks against intersection attacks", Proceedings of the 5th international conference on Emerging networking experiments and technologies 2009:157-168.
 Rajasekar, N. C. and C. Imafidon (2009). Security Implications of Cloud Computing. School of Computing and Technology. London, University of East London. MSc.
 Raphael, C. (2009). "Collusion Attacks on Secret Keys Multiplication (SKM) Group Re-keying Scheme Proposed at CITA03."
 Rash, W. (2009). Is cloud computing secure? Prove it. tech in-depth, eWeek. 2009: 8-10.
 Sagawa, C., H. Yoshida, et al. (2009). "Cloud Computing Based on Service-Oriented Platform." FUJITSU Sci. Tech. J 45(3): 283-289.
 Saran, C. (2009). Cryptography breakthrough could secure cloud services. Computer Weekly. 2009: 20.
 Shah, M., R. Swaminathan, et al. (2008). "Privacy-Preserving Audit and Extraction of Digital Contents", HP Labs Technical Report 2008(32).
 Shmueli, E., R. Vaisenberg, et al. (2009). "Database Encryption - An Overview of Contemporary Challenges and Design Considerations." SIGMOD Record 38(3): 29.
 Sloan, K. (2009). "Security in a virtualised world." Network Security 2009(8): 15-18.
 Speirs, W. (2005). "Making the kernel responsible: a new approach to detecting & preventing buffer overflows", Proceedings of the Third IEEE International Workshop on Information Assurance 2005: 21-32.
 Syahputri, R. and M. Hasibuan (2009). "Security in Wireless LAN Attacks and Countermeasures", SNATI 2009:54-78.
 Ter Louw, M., J. Lim, et al. (2008). "Enhancing web browser security against malware extensions." Journal in Computer Virology 4(3): 179-195.
 Troncoso, C., D. De Cock, et al. (2008). "Improving secure long-term archival of digitally signed documents", Proceedings of the 4th ACM international workshop on Storage security and survivability 2008:27-36.
 Vaquero, L., L. Rodero-Merino, et al. (2008). "A break in the clouds: towards a cloud definition." ACM SIGCOMM Computer Communication Review 39(1): 50-55.
 Viega, J. (2009). "Cloud Computing and the Common Man." Computer 42(8): 106-108.
 Vieira, K., A. Schulter, et al. (2009). "Intrusion Detection Techniques in Grid and Cloud Computing Environment." IT Professional 2009(8).
 Walsh, P. J. (2009). "The brightening future of cloud security." Network Security 2009(10): 7-10.
 Wang, Q., C. Wang, et al. (2009). "Enabling public verifiability and data dynamics for storage security in cloud computing." Computer Security-ESORICS 2009: 355-370.
 Wang, S., K. Yan, et al. (2009). "Achieving high efficient agreement with malicious faulty nodes on a cloud computing environment", Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human 2009:468-473.
 Weiss, A. (2007). "Computing in the clouds." netWorker 11(4): 16-25.
 Wolchok, S., O. Hofmann, et al. (2010). "Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs", Proceedings of the 17th Network and Distributed System Security Symposium (NDSS) 2010.
 Wurzinger, P., C. Platzer, et al. (2009). "SWAP: Mitigating XSS attacks using a reverse proxy", ICSE Workshop on Software Engineering for Secure Systems 2009:33-39.
 Yu, H., M. Kaminsky, et al. (2008). "SybilGuard: Defending Against Sybil Attacks via Social Networks." IEEE/ACM Transactions on Networking 16(3).
 Zhang, W., J. Cao, et al. (2009). "Block-Based Concurrent and Storage-Aware Data Streaming for Grid Applications with Lots of Small Files", Proceedings of the 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid 2009:538-543.
(1) Cloud Security Alliance (2009). "Security Guidance for Critical Areas of Focus in Cloud Computing." from http://www.cloudsecurityalliance.org/guidance/csaguide.pdf. [Accessed 25 November 2009].
(2) Cruz, A. (2009). "Current Gmail Outage." The Official Google Blog. from http://googleblog.blogspot.com/2009/02/current-gmail-outage.html. [Accessed 10 March 2010].
(3) Ferdowsi, A. (2008). "Thread: S3 data corruption?" AWS Discussion Forums. from http://developer.amazonwebservices.com/connect/thread.jspa?threadID=22709. [Accessed 10 March 2010].
(4) Foley, M.-J. (2010). "Microsoft to demo new cloud-computing advances at research showcase." ZDNet. from http://blogs.zdnet.com/microsoft/?p=5438. [Accessed 1 March 2010].
(5) STI International (2010). "The Future Internet: Service Web 3.0 Video." from http://www.sti2.org/service-web-3-0-the-future-internet-mov-large. [Accessed 14 April 2010].
(6) Krigsman, M. (2008). "MediaMax / The Linkup: When the cloud fails." IT Project Failures, ZDNet. from http://www.zdnet.com/blog/projectfailures/mediamax-the-linkup-when-the-cloud-fails/999. [Accessed 15 March 2010].
(7) Markoff, J. (2010). "Cyberattack on Google Said to Hit Password System." The New York Times. from http://www.nytimes.com/2010/04/20/technology/20google.html. [Accessed 20 April 2010].
(8) McLaughlin, L. (2010). "What You Hope to Gain From the Cloud." Cloud Computing Survey: IT Leaders See Big Promise, Have Big Security Questions. CIO. from http://www.cio.com/article/455832/Cloud_Computing_Survey_IT_Leaders_See_Big_Promise_Have_Big_Security_Questions?page=3&taxonomyId=3112. [Accessed 14 April 2010].
(9) Microsoft (2010). "Cloud Faster." Microsoft Research Projects. from http://research.microsoft.com/en-us/projects/cloudfaster/default.aspx. [Accessed 13 April 2010].
(10) Microsoft (2010). "Inside the Cloud." Microsoft Research Projects. from http://research.microsoft.com/en-us/projects/cloudmouse/default.aspx. [Accessed 13 April 2010].
(11) Miller, R. (2009). "Downtime for Hotmail." Data Center Knowledge. from http://www.datacenterknowledge.com/archives/2009/03/12/downtime-for-hotmail/. [Accessed 15 March 2010].
(12) Naraine, R. (2010). "Googler ships exploit to defeat ASLR+DEP." ZDNet. from http://blogs.zdnet.com/security/?p=5573. [Accessed 1 March 2010].
(13) Office for National Statistics (2009). "Internet Access." from http://www.statistics.gov.uk/cci/nugget.asp?ID=8. [Accessed 17 March 2010].
(14) Portnoy, S. (2010). "Netgear unveils new Powerline, Wi-Fi adapters to connect HDTV, home theater devices to home network." ZDNet. from http://blogs.zdnet.com/home-theater/?p=2802. [Accessed 1 March 2010].
(15) Rios, B. (2008). "Google XSS." from http://xs-sniper.com/blog/2008/04/14/google-xss/. [Accessed 12 April 2010].
(16) The Amazon S3 Team (2008). "Amazon S3 Availability Event." from http://status.aws.amazon.com/s3-20080720.html. [Accessed 10 April 2010].
(17) The Hosting News (2010). "Cloud Computing Adoption Survey Results Released." from http://www.thehostingnews.com/cloud-computing-adoption-survey-results-released-12517.html. [Accessed 16 April 2010].
(18) ZDNet (2010). "Macworld 2010: Quickoffice launches cloud services on iPhone." from http://news.zdnet.com/2422-19178_22-392815.html. [Accessed 5 March 2010].
(19) ZDNet (2010). "RSA president calls on security industry to adopt cloud technologies." from http://news.zdnet.com/2422-19178_22-399479.html?tag=nl.e539. [Accessed 5 March 2010].