Cloud computing threats and methods of protection. Approaches to eliminating cloud computing vulnerabilities

When Eric Schmidt, now the head of Google, first used the term "cloud" to describe a distributed web computing system, he could hardly have known that it was one of those words that turn up in legends again and again. In almost all the myths of the world's peoples, divine beings live very close to the sky, on the clouds. As a result, the term "cloud computing" has become a favorite among marketers, because it leaves room for creativity. We will also try to put these myths into words and understand how organically they combine with IT.

Death of Merlin

One of the characters in the cycle of legends about King Arthur and his Round Table is Merlin, the magician and wizard who helped Arthur in his reign. It is telling that Merlin ended up imprisoned in the clouds. Wanting to show off to a young sorceress and demonstrate his magical power, he built a castle out of clouds and invited his beloved to inspect it. The sorceress, however, turned out to be cunning and imprisoned the magician in his own cloud castle. After that no one ever saw Merlin again, so it is believed that he died somewhere there, in the cloud castle he himself had built.

Now the "IT wizards" have also built a whole mythology around distributed computing, so in order not to end up imprisoned in these "castles", you should first understand what these clouds really are, that is, separate the marketing from the substance.

Initially there was only one cloud: it was this symbol that traditionally denoted the Internet, the collection of all computers connected by the IP protocol, each with its own IP address. Over time, providers began setting up server farms on the Internet, and web projects were hosted on them. At the same time, to handle high load and ensure fault tolerance, the largest web systems became multi-level and distributed.

In a typical system the following levels could be distinguished: a reverse proxy, which also acts as a load balancer and SSL termination point; the web server itself; the application server; the DBMS; and the storage system. Moreover, at each level there could be several elements performing the same functions, so it was not always clear which components actually processed a given user request. And when it is unclear, that is a cloud. So people began to say that user requests are executed somewhere in a "cloud" of a large number of servers. This is how the term "cloud computing" came into being.

Although cloud computing was initially associated with publicly available web projects such as portals, as distributed fault-tolerant web systems developed they began to be used for internal corporate tasks as well. It was the boom era of corporate portals, built on the web technologies that had matured in public systems. At the same time, enterprise systems began to be consolidated into data centers, which were easier and cheaper to maintain.

However, it would be inefficient to allocate a separate server to each element of the cloud: not all elements are loaded equally, so the virtualization industry began to develop in parallel. In public clouds it proved quite popular, since it made it possible to differentiate access rights and to quickly move an element of a distributed system onto different hardware. Without virtualization, cloud computing would be far less dynamic and scalable, which is why clouds now typically consist of virtual machines.

Cloud computing is mainly associated with renting applications, and three types of such services are defined: IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service). Sometimes "security as a service" is also abbreviated as SaaS; to avoid confusing cloud security services with software rental, it is better to call it ISaaC (Information Security as a Cloud). Such services are beginning to be offered as well. However, application outsourcing should not be confused with cloud computing, since clouds can be private, public and hybrid, and each of these types has its own characteristics when it comes to organizing a security system.

Three steps of Vishnu

In Hindu mythology the god Vishnu is famous for having conquered the space for human life with three steps: the first was taken on earth, the second in the clouds, and the third in the highest abode. According to the Rig Veda, it was with this act that Vishnu won all these spaces for people.

Modern IT is taking a similar "second step" today, from the ground to the clouds. However, in order not to fall out of these clouds, you should take care of security while still on the ground. In the first part I examined the structure of the cloud in such detail precisely to make it clear what threats cloud computing faces. The following classes of threats can be distinguished:

    Traditional attacks on software. These exploit vulnerabilities in network protocols, operating systems, modular components and so on. They are traditional threats, and to protect against them it is enough to install an antivirus, a firewall, an IPS and the other familiar components. It is only important that these protection tools are adapted to the cloud infrastructure and work effectively under virtualization.

    Functional attacks on cloud elements. This type of attack is related to the layered structure of the cloud and to the general security principle that the overall protection of a system equals the protection of its weakest link. Thus a successful DoS attack on the reverse proxy installed in front of the cloud will block access to the entire cloud, even though all communications inside the cloud continue to work without interference. Similarly, an SQL injection passed through the application server will give access to system data regardless of the access rules in the data storage layer. To protect against functional attacks, each layer of the cloud needs protection tools specific to it: for the proxy, protection against DoS attacks; for the web server, page integrity control; for the application server, an application-level firewall; for the DBMS layer, protection against SQL injection; for the storage system, backup and access control. Individually each of these protective mechanisms already exists, but they have not been brought together to protect the cloud comprehensively, so the task of integrating them into a single system must be solved while the cloud is being built.

    Attacks on the client. This type of attack matured in the web environment, but it is also relevant for the cloud, since clients usually connect to the cloud through a browser. It includes attacks such as Cross Site Scripting (XSS), web session hijacking, password theft, man-in-the-middle and others. The traditional defense against these attacks has been strong authentication and an encrypted connection with mutual authentication, but not all cloud creators can afford such expensive and, as a rule, not very convenient means of protection. Therefore in this area of information security there are still unsolved problems and room for creating new means of protection.

    Virtualization threats. Since the platform for cloud components has traditionally been a virtualized environment, attacks on the virtualization system threaten the entire cloud as a whole. This type of threat is unique to cloud computing, so we will look at it in detail below. Solutions for some virtualization threats are only now beginning to emerge, and the industry is quite new, so no established solutions have yet taken shape. It is quite possible that the information security market will soon develop means of protection against this type of threat.

    Complex threats to clouds. Control and management of the cloud is also a security issue: how do you ensure that all cloud resources are accounted for, that there are no uncontrolled virtual machines, that no unnecessary business processes are running, and that the mutual configuration of cloud layers and elements is not disrupted? This type of threat concerns the manageability of the cloud as a single information system and the search for abuse or other violations in its operation, which can lead to unnecessary costs for keeping the information system running. For example, if a cloud lets a client check a submitted file for viruses, how do you prevent the theft of such detectors? This type of threat is the most high-level and, I suspect, no universal means of protection exists for it: for each cloud the overall protection must be built individually. A general risk management model, which still needs to be correctly applied to cloud infrastructures, can help with this.
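
To make the application-layer defenses above concrete, the standard remedy for SQL injection is to pass user input as bound parameters instead of splicing it into the query text. A minimal sketch in Python, using an in-memory SQLite database; the table, column and values are invented for the example:

```python
import sqlite3

# In-memory database standing in for the cloud's DBMS layer (illustrative only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # VULNERABLE: attacker-controlled input is concatenated into the SQL text
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # SAFE: the value is passed as a bound parameter, never parsed as SQL
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"              # classic injection payload
print(find_user_unsafe(payload))     # returns every row: injection succeeded
print(find_user_safe(payload))       # returns nothing: payload treated as data
```

The same principle, binding data separately from query text, is what a web application firewall enforces from the outside when the application itself cannot be fixed.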

The first two types of threats have already been studied sufficiently and protection measures have been developed for them, but they still need to be adapted for use in the cloud. For example, firewalls are designed to protect a perimeter, but in the cloud it is not easy to allocate a perimeter to an individual client, which makes protection much more difficult. So firewall technology has to be adapted to the cloud infrastructure; Check Point, for example, is currently working actively in this direction.

A new type of threat for cloud computing comes from virtualization. When this technology is used, additional elements appear in the system that can be attacked: the hypervisor, the system for transferring virtual machines from one node to another, and the virtual machine management system. Let's take a closer look at the attacks each of these elements can face.

    Attacks on the hypervisor. The key element of a virtual system is the hypervisor, which divides the physical computer's resources between virtual machines. Interfering with the operation of the hypervisor can allow one virtual machine to access the memory and resources of another, intercept its network traffic, take away its physical resources, or even evict the virtual machine from the server entirely. So far few hackers understand exactly how the hypervisor works, so attacks of this type are practically unheard of, but that is no guarantee they will not appear in the future.

    Transferring virtual machines. It should be noted that a virtual machine is a file that can be launched on different cloud nodes, and virtual machine management systems provide mechanisms for moving virtual machines between nodes. However, a virtual machine file can also be stolen and an attempt made to run it outside the cloud. It is impossible to carry a physical server out of a data center, but a virtual machine can be stolen over the network without any physical access to the servers. True, a single virtual machine outside the cloud has little practical value: to reconstruct a similar cloud you would need to steal at least one virtual machine from each layer, plus the data from the storage system. Nevertheless, virtualization makes the theft of parts of the cloud, or even the whole cloud, entirely possible. In other words, interference with the mechanisms for transferring virtual machines creates new risks for the information system.

    Attacks on control systems. The huge number of virtual machines used in clouds, especially public clouds, requires management systems that reliably control the creation, migration and decommissioning of virtual machines. Interference with these control systems can lead to the appearance of invisible virtual machines, the blocking of some machines, and the substitution of unauthorized elements into cloud layers. All this allows attackers to obtain information from the cloud, or to capture parts of it or the cloud as a whole.

It should be noted that for now all the threats listed above are purely hypothetical, since there is practically no information about real attacks of this type. But as virtualization and clouds become popular enough, all these types of attack may become quite real, so they should be kept in mind at the design stage of cloud systems.

Over the seventh heaven

The Apostle Paul claimed to know a man who was caught up into the seventh heaven. Since then the phrase "seventh heaven" has firmly come to denote paradise. Not all Christian saints, however, were honored to visit even the first heaven; and yet there is hardly a person who would not dream of glancing at the seventh heaven with at least one eye.

Perhaps it was this legend that prompted Trend Micro to name one of its cloud security projects Cloud Nine. That is clearly higher than the seventh. Today, of course, this name is given to a wide variety of things: songs, detective novels, computer games; but it is quite possible the name was inspired by the Christian legend of Paul.

So far, however, Trend Micro has only revealed that Cloud Nine will be concerned with data encryption in the cloud. It is data encryption that protects against most threats to data in the public cloud, so projects of this kind will now be actively developed. Let's consider what other protection tools might be useful for reducing the risks described above.

First of all, you need reliable authentication of both cloud users and cloud components. For this you can most likely use ready-made single sign-on (SSO) systems based on Kerberos and on protocols for mutual hardware authentication. Next you will need identity management systems that let you configure user access rights to various systems through role management. Of course, you will have to tinker with defining the roles and the minimum rights for each role, but once the system is set up, it can be used for quite a long time.
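
The role model just described can be sketched in a few lines of Python. This is only an illustration of the principle of minimal rights per role; the role and permission names are invented:

```python
# Minimal role-based access model: each role maps to its minimal permission set
# (role and permission names here are hypothetical, for illustration only)
ROLE_PERMISSIONS = {
    "viewer": {"vm:read"},
    "operator": {"vm:read", "vm:start", "vm:stop"},
    "admin": {"vm:read", "vm:start", "vm:stop", "vm:create", "vm:delete"},
}

def is_allowed(user_roles, permission):
    """A user may act if any of their roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

print(is_allowed({"operator"}, "vm:stop"))    # True
print(is_allowed({"viewer"}, "vm:delete"))    # False
```

In a real identity management system the mapping lives in a directory service rather than in code, but the access decision reduces to the same lookup.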

When all participants in the process and their rights are defined, you need to monitor compliance with those rights and detect administration errors. This requires systems that process events from the protection tools of the cloud elements, plus additional security mechanisms such as firewalls, antiviruses, IPS and others. True, it is worth choosing the variants that can work in a virtualized environment; they will be more effective.

In addition, it is also worth using a fraud-detection engine that would make it possible to spot misuse of the cloud, that is, to reduce the hardest risk of all: interference in business processes. True, there is most likely no fraud-detection engine on the market today that can work with clouds, but technologies for detecting fraud and abuse have already been developed for telephony. Since a billing system will have to be implemented in clouds anyway, the fraud-detection engine should be connected to it. In this way it will be possible to control at least the threats to cloud business processes.

What other defense mechanisms can be used to protect clouds? The question remains open for now.

There are several methods for building a corporate IT infrastructure. Deploying all resources and services on a cloud platform is just one of them. However, prejudices regarding the security of cloud solutions often become an obstacle on this path. In this article we will understand how the security system works in the cloud of one of the most famous Russian providers - Yandex.

A fairy tale is a lie, but there is a hint in it

The beginning of this story could be told like a well-known fairy tale. Once upon a time there were three admins in a company: the eldest was a smart guy, the middle one was so-so, and the youngest was just a junior helpdesk trainee who added users to Active Directory and twisted the tails of the Cisco boxes. The time came for the company to expand, and the king, that is, the boss, summoned his admin host. I want, he said, new web services for our clients, our own file storage, managed databases and virtual machines for software testing.

The youngest immediately proposed building their own infrastructure from scratch: buy servers, install and configure the software, expand the main Internet channel and add a backup one for reliability. That way the company has peace of mind: the hardware is always at hand, anything can be replaced or reconfigured at any moment, and he himself gets an excellent opportunity to upgrade his admin skills. They did the math and shed a tear: the company could not bear such expenses. Large businesses can afford it, but for medium and small businesses it is too costly. It is not just a matter of buying equipment, fitting out a server room, installing air conditioners and setting up a fire alarm; you also need to organize shifts to keep watch day and night and repel network attacks from the Internet's ne'er-do-wells. And for some reason the admins did not want to work nights and weekends, except for double pay.

The senior admin looked thoughtfully at a terminal window and suggested placing all the services in the cloud. But then his colleagues began scaring each other with horror stories: cloud infrastructure, they said, has insecure interfaces and APIs, balances the load of different clients poorly, which could hurt their own resources, and is vulnerable to data theft and external attacks. And in general, it is scary to hand over control of critical data and software to strangers with whom you have not eaten a pood of salt or drunk a bucket of beer.

The middle one came up with the idea of placing the entire IT system in a provider's data center, on the provider's channels. That is what they settled on. However, several surprises awaited our trio, and not all of them were pleasant.

Firstly, any network infrastructure requires protection and security tools, which, of course, were deployed, configured and launched. Only it turned out that the cost of the hardware resources these tools consume must be paid by the client himself, and a modern information security system consumes considerable resources.

Secondly, the business kept growing, and the infrastructure built at the outset quickly hit a scalability ceiling. Moreover, simply changing the tariff was not enough to expand it: many services would have to be moved to other servers and reconfigured, and some even redesigned from scratch.

Finally, one day a critical vulnerability in one of the applications brought the entire system down. The admins quickly restored it from backups, but they could not quickly work out the cause of what had happened, because they had forgotten to set up backups for the logging services. Valuable time was lost, and time, as the saying goes, is money.

Adding up the expenses and the results led the company's management to a disheartening conclusion: the administrator who had proposed the IaaS cloud model, infrastructure as a service, from the very beginning had been right. As for the security of such platforms, that is worth discussing separately, and we will do so using the example of the most popular of such services: Yandex.Cloud.

Security in Yandex.Cloud

Let's start, as the Cheshire Cat advised Alice, at the beginning, that is, with the question of how responsibility is divided. In Yandex.Cloud, as in any similar platform, the provider is responsible for the security of the services it provides to users, while the client is responsible for the correct operation of the applications he develops, for organizing and restricting remote access to the allocated resources, for configuring databases and virtual machines, and for keeping control over logging. For this, however, he is given all the necessary tools.

The security of Yandex's cloud infrastructure has several levels, each of which implements its own security principles and uses a separate arsenal of technologies.

Physical layer

It is no secret that Yandex has its own data centers, served by its own security departments. We are talking not only about the video surveillance and access control services designed to keep outsiders out of the server rooms, but also about the climate control, fire extinguishing and uninterruptible power supply systems. Stern security guards are of little use if the rack with your servers is one day flooded by water from the fire sprinklers, or if the servers overheat after an air conditioning failure. In Yandex data centers that definitely will not happen to them.

In addition, the Cloud's hardware is physically separated from the "big Yandex": the two are located in different racks, but undergo exactly the same regular routine maintenance and component replacement. At the border between these two infrastructures hardware firewalls are used, and inside the Cloud a software host-based firewall. The top-of-rack switches also use access control lists (ACLs), which significantly improves the security of the whole infrastructure. Yandex regularly scans the Cloud from the outside for open ports and configuration errors, so a potential vulnerability can be recognized and eliminated in advance. For employees working with Cloud resources there is a centralized authentication system using SSH keys with a role-based access model, and all administrator sessions are logged. This approach is part of the Secure-by-default model that Yandex applies universally: security is built into the IT infrastructure at the design and development stage, rather than bolted on when everything is already in operation.
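
The idea behind such external scanning can be illustrated in a few lines of Python. This is not Yandex's actual tooling, just a sketch of detecting open TCP ports; the demo opens a local listening socket so the scan has something to find:

```python
import socket

def scan_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demo: open one listening socket locally so there is a known open port
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
test_port = listener.getsockname()[1]

found = scan_open_ports("127.0.0.1", [test_port])
print(found == [test_port])              # True: the listening port was detected
listener.close()
```

A production scanner would of course cover the full port range, run from outside the perimeter, and compare results against an inventory of what is supposed to be open.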

Infrastructure level

At the level of hardware-and-software logic, Yandex.Cloud uses three infrastructure services: Compute Cloud, Virtual Private Cloud and Yandex Managed Services. A little more detail on each of them.

Compute Cloud

This service provides scalable computing power for various tasks, such as hosting web projects and high-load services, testing and prototyping, or temporary migration of IT infrastructure while your own equipment is being repaired or replaced. You can manage the service through the console, the command line (CLI), an SDK or an API.

Compute Cloud's security rests on the fact that every client virtual machine is given at least two cores, and no overcommitment is used when allocating memory. Since in that case only the client's code runs on a given core, the system is not susceptible to vulnerabilities such as L1TF, Spectre and Meltdown, or to side-channel attacks.

In addition, Yandex uses its own build of QEMU/KVM with everything unnecessary disabled, leaving only the minimum set of code and libraries the hypervisors need. Processes are launched under the control of AppArmor-based tooling, which uses security policies to determine which system resources an application may access and with what privileges. AppArmor running on top of each virtual machine reduces the risk that a client application will be able to reach the hypervisor from inside the VM. To receive and process logs, Yandex has built a pipeline that delivers data from AppArmor and the sandboxes to its own Splunk installation.

Virtual Private Cloud

The Virtual Private Cloud service lets you create cloud networks that carry information between resources and connect them to the Internet. Physically this service is backed by three independent data centers. Logical isolation in this environment is implemented at the level of MPLS (Multiprotocol Label Switching). At the same time, Yandex constantly fuzzes the boundary between the SDN and the hypervisor: from the virtual machine side a stream of malformed packets is continuously sent outward in order to obtain the SDN's response, analyze it and close possible gaps in the configuration. Protection against DDoS attacks is enabled automatically when virtual machines are created.
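
The fuzzing idea, sending deliberately malformed packets and watching how the other side responds, can be sketched as a simple byte-level mutator. This is only an illustration of the principle, not Yandex's actual fuzzer; the sample header bytes are arbitrary:

```python
import random

def mutate_packet(packet, n_flips=4, seed=None):
    """Return a malformed variant of `packet` by XOR-ing a few distinct bytes."""
    rng = random.Random(seed)
    data = bytearray(packet)
    for pos in rng.sample(range(len(data)), k=min(n_flips, len(data))):
        data[pos] ^= rng.randrange(1, 256)   # non-zero XOR: the byte always changes
    return bytes(data)

# A sample 20-byte IPv4 header to mutate (the values are illustrative)
base = bytes.fromhex("4500003c1c4640004006b1e6c0a80001c0a800c7")
variant = mutate_packet(base, seed=1)
print(len(variant) == len(base), variant != base)   # True True
```

A real fuzzer would feed thousands of such variants to the target interface and flag any response (or crash) that deviates from the specification.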

Yandex Managed Services

Yandex Managed Services is a software environment for managing various services: DBMSs, Kubernetes clusters, virtual servers in the Yandex.Cloud infrastructure. Here the service takes on most of the security work: backups, backup encryption, vulnerability management and so on are handled automatically by the Yandex.Cloud software.

Incident Response Tools

To respond to information security incidents in time, you have to identify the source of the problem in time. That is why reliable monitoring tools are needed, and they must work around the clock without failures. Such systems inevitably consume resources, but Yandex.Cloud does not pass the cost of the computing power for its security tools on to the platform's users.

When choosing its tools, Yandex was guided by another important requirement: in the event of successful exploitation of a 0-day vulnerability in one of the applications, the attacker must not get beyond the application host, while the security team must learn of the incident immediately and react as needed.

Last but not least, there was the wish that all the tools be open source. These criteria are fully met by the AppArmor + Osquery combination, which is what was chosen for Yandex.Cloud.

AppArmor

AppArmor, already mentioned above, is a proactive protection tool based on customizable security profiles. The profiles rely on Mandatory Access Control (MAC), implemented via LSM (Linux Security Modules) directly in the Linux kernel since version 2.6. Yandex developers chose AppArmor for the following reasons:

  • it is lightweight and fast, since the tool relies on part of the Linux kernel;
  • it is an open source solution;
  • AppArmor can be deployed very quickly on Linux without writing any code;
  • flexible configuration is possible using configuration files.
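
For illustration, an AppArmor profile is a plain text file that whitelists what a confined program may touch. A minimal hypothetical profile might look like the following (the application name and paths are invented for the example):

```
# Hypothetical profile for /usr/bin/example-app (illustrative only)
#include <tunables/global>

/usr/bin/example-app {
  #include <abstractions/base>

  /etc/example-app/** r,        # read-only access to its own configuration
  /var/log/example-app/* w,     # write access only to its own log directory

  # anything not listed here is denied when the profile is in enforce mode
}
```

Profiles like this are loaded with `apparmor_parser` and can be switched between complain and enforce modes without changing the application itself.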

Osquery

Osquery is a system security monitoring tool developed at Facebook and now successfully used in many IT companies. The tool is cross-platform and open source.

Using Osquery, you can collect information about the state of various operating system components, accumulate it, transform it into a standardized JSON format and forward it to a chosen recipient. The tool lets you write standard SQL queries against the system state, which it keeps in a RocksDB database, and you can configure how often and under what conditions these queries are executed and processed.

Many capabilities are already implemented in the standard tables: for example, you can get a list of the processes running on the system, the installed packages, the current set of iptables rules, crontab entries and so on. Out of the box there is support for receiving and parsing events from the kernel audit system (used in Yandex.Cloud to process AppArmor events).

Osquery itself is written in C++ and distributed as open source; you can modify it and either add new tables to the main code base or create your own extensions in C, Go or Python.

A useful feature of Osquery is its distributed query system, with which you can run real-time queries against all the virtual machines on the network. This can be useful, for example, if a vulnerability is found in a package: a single query retrieves the list of machines where that package is installed. This feature is widely used when administering large distributed systems with complex infrastructure.
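
The "which machines have this package" scenario can be sketched by aggregating per-host Osquery results. The JSON payloads below are invented, but shaped like Osquery output for a query such as `SELECT name, version FROM deb_packages WHERE name = 'openssl';` (host names are hypothetical):

```python
import json

# Hypothetical per-host JSON payloads, shaped like osquery query results
raw_results = {
    "web-01": '[{"name": "openssl", "version": "1.1.1f"}]',
    "web-02": '[]',
    "db-01":  '[{"name": "openssl", "version": "1.0.2g"}]',
}

def hosts_with_package(results, package):
    """Return the sorted hosts whose result set contains the given package."""
    return sorted(
        host for host, payload in results.items()
        if any(row["name"] == package for row in json.loads(payload))
    )

print(hosts_with_package(raw_results, "openssl"))   # ['db-01', 'web-01']
```

In practice the fleet-wide query and the collection of results are handled by the Osquery infrastructure itself; the sketch only shows the shape of the data and the final aggregation step.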

Conclusions

If we return to the story told at the very beginning of this article, we will see that the fears that made our heroes abandon deploying their infrastructure on a cloud platform proved unfounded, at least where Yandex.Cloud is concerned. The security of the cloud infrastructure Yandex has created has a multi-level, layered architecture and therefore provides a high level of protection against most currently known threats.

At the same time, thanks to the savings on routine hardware maintenance and on paying for the resources consumed by the monitoring and incident prevention systems, which Yandex takes on itself, using Yandex.Cloud saves small and medium-sized businesses serious money. Of course, it will not be possible to do away entirely with the IT department or the department responsible for information security (especially if both roles are combined in one team), but Yandex.Cloud significantly reduces labor and overhead costs.

Since Yandex.Cloud provides its clients with a secure infrastructure equipped with all the necessary security tools, they can focus on business processes, leaving hardware maintenance and monitoring to the provider. This does not eliminate the need for day-to-day administration of VMs, databases and applications, but that range of tasks would have to be handled in any case. On the whole, Yandex.Cloud saves not only money but also time, and the latter, unlike the former, is an irreplaceable resource.

Coursework in the discipline

Software and hardware for information security

"Information security in cloud computing: vulnerabilities, methods and means of protection, tools for auditing and investigating incidents."

Introduction

1. History and key development factors

2. Definition of cloud computing

3. Reference architecture

4. Service Level Agreement

5. Methods and means of protection in cloud computing

6. Security of cloud models

7. Security audit

8. Incident investigation and forensics in cloud computing

9. Threat model

10. International and domestic standards

11. Territorial affiliation of data

12. State standards

13. Security tools in cloud technologies

14. Practical part

Conclusion

Literature

Introduction

The growing speed at which cloud computing is spreading is explained by the fact that for, generally speaking, little money the customer gets access to highly reliable infrastructure with the required performance, without having to purchase, install and maintain expensive computers. System availability reaches 99.9%, which also saves computing resources. Just as important is the practically unlimited scalability: with ordinary hosting, trying to jump over your head during a sharp surge in load risks having the service cut off for several hours, whereas in the cloud additional resources are provided on first request.

The main problem of cloud computing is the unguaranteed security level of the processed information, the uncertain degree of protection of the resources and, often, the complete absence of a regulatory framework.

The purpose of this study is to review the existing cloud computing market and the tools for ensuring security in it.


1. History and key development factors

The idea of what we now call cloud computing was first voiced by J. C. R. Licklider in 1970. In those years he was responsible for the creation of ARPANET (Advanced Research Projects Agency Network). His idea was that every person on earth would be connected to a network from which he would receive not only data but also programs. Another scientist, John McCarthy, expressed the idea that computing power would be provided to users as a service. At that point the development of cloud technologies was put on hold until the 1990s, after which a number of factors contributed to it.

The expansion of Internet bandwidth in the 1990s did not by itself produce a significant leap in cloud technology, since practically no company or technology of that time was ready for it. However, the very fact that the Internet became faster gave impetus to the subsequent rapid development of cloud computing.

One of the most significant milestones was the appearance of Salesforce.com in 1999, the first company to provide access to its application through a website, and thus the first to deliver its software on the software-as-a-service (SaaS) principle.

The next step was the development of a cloud web service by Amazon in 2002. This service allowed storing information and performing calculations.

In 2006, Amazon launched a service called Elastic Compute Cloud (EC2), a web service that allowed its users to run their own applications. Amazon EC2 and Amazon S3 were the first cloud computing services available.

Another milestone in the development of cloud computing occurred with the creation by Google of the Google Apps platform for web applications in the business sector.

Virtualization technologies, in particular software that allows the creation of virtual infrastructure, have played a significant role in the development of cloud technologies.

The development of hardware contributed not so much to the rapid growth of cloud technologies as to the accessibility of this technology for small businesses and individuals. As for technical progress, the creation of multi-core processors and the increasing capacity of information storage devices played a significant role.

2. Definition of cloud computing

As defined by the US National Institute of Standards and Technology:

Cloud computing is a model for providing ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort and minimal interaction with the service provider.

The cloud model supports high service availability and is described by five essential characteristics, three service models and four deployment models.

Programs are launched and display the results of their work in a standard web browser window on a local PC, while all applications and their data necessary for operation are located on a remote server on the Internet. Computers performing cloud computing are called "computing cloud". In this case, the load between computers included in the “computing cloud” is distributed automatically. The simplest example of cloud computing is p2p networks.

To implement cloud computing, middleware products created using special technologies are used. They serve as an intermediate layer between the hardware and the user, providing monitoring of the status of equipment and programs, uniform load distribution, and timely allocation of resources from the common pool. One such technology is virtualization.
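As a rough illustration of how such middleware distributes load evenly, here is a minimal round-robin balancer sketch in Python; the server names and request labels are hypothetical.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Middleware sketch: hand each incoming request to the next server
    in the pool so that load is spread evenly."""

    def __init__(self, servers):
        self._rotation = cycle(servers)

    def route(self, request):
        server = next(self._rotation)      # pick the next node in turn
        return f"{server} handles {request}"

balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
for request in ["req-A", "req-B", "req-C", "req-D"]:
    print(balancer.route(request))
```

Real middleware also weighs server health and current load; round-robin is only the simplest policy for spreading requests across a pool.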

Virtualization in computing is the process of presenting a set of computing resources, or a logical combination of them, in a way that offers advantages over the original configuration. It is a new, virtual view of the component resources that is not constrained by their implementation, physical configuration or geographic location. Typically, virtualized resources include computing power and data storage. In scientific terms, virtualization is the isolation of computing processes and resources from each other.

An example of virtualization is symmetric multiprocessor computer architectures, which use more than one processor. Operating systems are typically configured so that the multiple processors appear as a single processor unit. That is why software applications can be written for a single logical (virtual) computing module, which is much simpler than working with a large number of different processor configurations.

For particularly large and resource-intensive calculations, grid computing is used.

Grid computing (from grid - lattice, network) is a form of distributed computing in which a "virtual supercomputer" is represented as clusters of network-connected, loosely coupled, heterogeneous computers working together to perform an enormous number of tasks (operations, jobs).

This technology is used to solve scientific and mathematical problems that require significant computing resources. Grid computing is also used in commercial infrastructure to solve time-consuming problems such as economic forecasting, seismic analysis, and the development and study of the properties of new drugs.
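The grid pattern of farming many independent jobs out to many workers can be sketched with Python's standard thread pool; the heavy_task function here is an invented stand-in for a real resource-intensive computation such as one scenario of an economic forecast.

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_task(n):
    # Stand-in for a resource-intensive job, e.g. one scenario of an
    # economic forecast or a single seismic-analysis run.
    return sum(i * i for i in range(n))

jobs = [10_000, 20_000, 30_000]                   # independent work units
with ThreadPoolExecutor(max_workers=3) as pool:   # the pool plays the "grid nodes"
    results = list(pool.map(heavy_task, jobs))    # results arrive in job order
print(results)
```

A real grid adds scheduling, fault tolerance and data movement across administrative domains; the essential idea of splitting a large workload into independent jobs is the same.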

From the point of view of a network organization, a grid is a consistent, open and standardized environment that provides a flexible, secure, coordinated sharing of the computing and storage resources that are part of this environment within one virtual organization.

Paravirtualization is a virtualization technique that provides virtual machines with a software interface similar, but not identical, to the underlying hardware. The purpose of this modified interface is to reduce the time spent by the guest operating system performing operations that are much more difficult to perform in a virtual environment than in a non-virtualized environment.

There are special "hooks" that allow the guest and host systems to request and confirm the completion of those complex tasks which could have been performed in the virtual environment, but much more slowly.

A hypervisor (or Virtual Machine Monitor) is, in computing, a program or hardware scheme that enables the simultaneous, parallel execution of several or even many operating systems on the same host computer. The hypervisor also provides isolation of the operating systems from one another, protection and security, resource sharing between the running OSes, and resource management.

The hypervisor can also (but is not obligated to) provide operating systems running under its control on the same host computer with the means to communicate and interact with each other (for example, through file sharing or network connections) as if these operating systems were running on different physical computers.

The hypervisor itself is in some ways a minimal operating system (a microkernel or nanokernel). It provides the operating systems running under its control with a virtual machine service, virtualizing or emulating the real (physical) hardware of the specific machine, and it manages these virtual machines, allocating and releasing resources for them. The hypervisor allows the independent "powering on", rebooting and "shutting down" of any of the virtual machines. An operating system running in a virtual machine under a hypervisor may, but does not have to, "know" that it is running in a virtual machine rather than on real hardware.

Cloud service models

Options for providing computing power vary greatly. Everything related to cloud computing is usually labeled with the suffix aaS, which simply stands for "as a Service".

Software as a Service (SaaS) - The provider provides the client with a ready-to-use application. Applications are accessible from a variety of client devices or through thin client interfaces such as a web browser (such as webmail) or program interfaces. The consumer does not manage the underlying cloud infrastructure, including networks, servers, operating systems, storage systems, and even individual application settings, with the exception of some user-defined application configuration settings.

Under the SaaS model, customers pay not to own the software itself, but to rent it (that is, use it through a web interface). Thus, in contrast to the classical software licensing scheme, the customer incurs relatively small recurring costs and does not need to invest significant funds to purchase the software and support it. The periodic payment scheme assumes that if the software is temporarily not needed, the customer can suspend its use and freeze payments to the developer.

From a developer's point of view, the SaaS model allows you to effectively combat unlicensed use of software (piracy), since the software itself does not reach end customers. In addition, the SaaS concept often reduces the cost of deploying and implementing information systems.

Fig. 1. Typical SaaS scheme

Platform as a Service (PaaS) - the provider offers the client a software platform and tools for designing, developing, testing and deploying user applications. The consumer does not manage the underlying cloud infrastructure, including networks, servers, operating systems, and storage systems, but does have control over the deployed applications and possibly some configuration parameters of the hosting environment.

Fig. 2. Typical PaaS scheme

Infrastructure as a Service (IaaS) - the provider offers the client computing resources for rent: servers, storage systems, network equipment, operating systems and system software, virtualization systems, and resource management systems. The consumer does not control the underlying cloud infrastructure but has control over operating systems, storage systems and deployed applications, and perhaps limited control over the selection of network components (for example, host firewalls).

Fig. 3. Typical IaaS scheme

In addition, there are services such as:

Communications as a Service (Com-aaS) - communication services provided as a service: usually IP telephony, mail and instant messaging (chat, IM).

Cloud data storage - the user is provided with a certain amount of space for storing information. Since information is stored in a distributed and duplicated manner, such storage provides a much greater degree of data safety than local servers.

Workplace as a Service (WaaS) - a user whose computer lacks sufficient power can buy computing resources from the provider and use his PC as a terminal for accessing the service.

Antivirus cloud - infrastructure used to process information received from users in order to promptly recognize new, previously unknown threats. A cloud antivirus does not require any extra actions from the user - it simply sends a query about a suspicious program or link. When a danger is confirmed, all necessary actions are performed automatically.
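The query flow of a cloud antivirus can be illustrated by a toy reputation lookup: the client transmits only a hash of the suspicious file, and the cloud side compares it against a database of known threats. The database and file contents below are invented for the example.

```python
import hashlib

# Invented cloud-side reputation database, keyed by file hash.
KNOWN_MALWARE_HASHES = {hashlib.sha256(b"malicious payload").hexdigest()}

def cloud_scan(file_bytes):
    """The client sends only a hash of the suspicious file; the cloud
    service replies with a verdict based on its threat database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return "malicious" if digest in KNOWN_MALWARE_HASHES else "unknown"

print(cloud_scan(b"malicious payload"))  # malicious
print(cloud_scan(b"ordinary document"))  # unknown
```

Sending a fixed-size hash instead of the whole file is what keeps the client-side check cheap and the user's data private.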

Deployment Models

Among the deployment models, there are four main types of infrastructure:

Private cloud - infrastructure intended for use by a single organization comprising several consumers (for example, divisions of one organization), and possibly also clients and contractors of that organization. A private cloud may be owned, managed and operated by the organization itself or by a third party (or some combination thereof), and may physically exist within or outside the owner's jurisdiction.

Fig. 4. Private cloud.

Public cloud - infrastructure intended for open use by the general public. A public cloud may be owned, managed and operated by commercial, academic or government organizations (or some combination thereof). The public cloud physically exists in the jurisdiction of the owner - the service provider.

Fig. 5. Public cloud.

Hybrid cloud - a combination of two or more distinct cloud infrastructures (private, community or public) that remain unique entities but are bound together by standardized or proprietary technologies enabling data and application portability (for example, short-term use of public cloud resources for load balancing between clouds).

Fig. 6. Hybrid cloud.

Community cloud - a type of infrastructure intended for use by a specific community of customers from organizations that have common objectives (for example, mission, security requirements, policies, and compliance with various requirements). A community cloud may be cooperatively owned, managed and operated by one or more community organizations or a third party (or some combination thereof), and may physically exist within or outside the jurisdiction of the owner.

Fig. 7. Description of cloud properties

Basic properties

NIST in its document `The NIST Definition of Cloud Computing` defines the following characteristics of clouds:

On-demand self-service. The consumer can unilaterally provision the offered computing resources as needed, automatically, without requiring human interaction with each service provider's staff.

Broad network access. The provided computing resources are available over the network through standard mechanisms for various platforms and for thin and thick clients (mobile phones, tablets, laptops, workstations, etc.).

Resource pooling. The provider's computing resources are pooled to serve many consumers using a multi-tenant model. Pools include a variety of physical and virtual resources that can be dynamically assigned and reassigned according to consumer demand. The consumer generally need not know the exact location of the resources, but may be able to specify location at a higher level of abstraction (for example, country, region or data center). Examples of this type of resource include storage systems, computing power, memory and network bandwidth.

Rapid elasticity. Resources can be elastically allocated and released, in some cases automatically, to quickly scale with demand. For the consumer, the possibilities for providing resources are seen as unlimited, that is, they can be appropriated in any quantity and at any time.

Measured service. Cloud systems automatically manage and optimize resources using measurement tools implemented at a level of abstraction appropriate to the type of service (for example, management of external storage, processing, bandwidth or active user sessions). The resources used can be tracked and controlled, providing transparency both for the provider and for the consumer of the service.
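Measured service boils down to metering consumption per resource and billing only for what was actually used. A minimal sketch, with invented resource names and tariffs:

```python
def invoice(usage_log, tariffs):
    """Sum the metered consumption of each resource at its unit price.
    Resource names and prices are illustrative, not a real tariff."""
    return sum(tariffs[resource] * amount for resource, amount in usage_log)

# metered samples collected by the provider: (resource, units consumed)
usage = [("cpu_hours", 12), ("storage_gb", 50), ("egress_gb", 3)]
tariffs = {"cpu_hours": 0.05, "storage_gb": 0.02, "egress_gb": 0.09}
print(f"monthly charge: ${invoice(usage, tariffs):.2f}")
```

The same metering records double as the transparency mechanism the definition mentions: both provider and consumer can audit exactly which resources were consumed.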

Fig. 8. Block diagram of a cloud server

Advantages and disadvantages of cloud computing

Advantages

· requirements for PC computing power are reduced (the only essential requirement is Internet access);

· fault tolerance;

· safety;

· high speed of data processing;

· reducing costs for hardware and software, maintenance and electricity;

· saving of disk space (both data and programs are stored on the Internet);

· Live migration - transferring a virtual machine from one physical server to another without stopping the operation of the virtual machine and stopping services.

· At the end of 2010, in connection with DDoS attacks against companies that refused to provide resources to WikiLeaks, another advantage of cloud computing technology became clear: all the companies that opposed WikiLeaks were attacked, but only Amazon proved resilient to these attacks, since it used cloud computing ("Anonymous: serious threat or mere annoyance", Network Security, N1, 2011).

Disadvantages

· dependence of the safety of user data on companies providing cloud computing services;

· constant network connection - to gain access to cloud services, a permanent Internet connection is needed. Nowadays, however, this is not such a big drawback, especially with the advent of 3G and 4G cellular technology;

· software and its customization - there are restrictions on the software that can be deployed in the cloud and provided to the user. The user is limited to the software provided and sometimes has no opportunity to adapt it to his own purposes;

· confidentiality - the confidentiality of data stored in public clouds currently causes much controversy, but in most cases experts agree that it is not recommended to store a company's most valuable documents in a public cloud, since no current technology guarantees 100% confidentiality of stored data; this is why the use of encryption in the cloud is mandatory;

· reliability - with regard to the reliability of stored information, we can say with confidence that if you have lost information stored in the “cloud”, then you have lost it forever.

· security - the "cloud" itself is a fairly reliable system, but upon penetrating it an attacker gains access to a huge data store. Another disadvantage is the use of virtualization systems in which standard OS kernels such as Linux or Windows are used as the hypervisor, which permits the use of viruses;

· high cost of equipment - to build its own cloud, a company needs to allocate significant material resources, which is not beneficial for newly created and small companies.

3. Reference architecture

The NIST Cloud Computing Reference Architecture contains five main actors. Each actor plays a role and performs actions and functions. The reference architecture is represented as sequential diagrams with increasing levels of detail.

Fig. 9. Conceptual diagram of the reference architecture

Cloud Consumer - a person or organization that maintains a business relationship with, and uses the services of, Cloud Providers.

Cloud Consumers are divided into 3 groups:

· SaaS - uses applications to automate business processes.

· PaaS - develops, tests, deploys and manages applications deployed in a cloud environment.

· IaaS - creates and manages IT infrastructure services.

Cloud Provider - the person, organization or entity responsible for the availability of a cloud service to Cloud Consumers.

· SaaS - installs, manages, maintains and provides software deployed on cloud infrastructure.

· PaaS - provides and manages cloud infrastructure and middleware. Provides development and administration tools.

· IaaS - provides and maintains servers, databases, and computing resources. Provides the cloud structure to the consumer.

The activities of Cloud Providers are divided into 5 main typical actions:

Deployment of services:

o Private cloud - serves a single organization. The infrastructure is managed either by the organization itself or by a third party, and can be deployed either at the Provider (off premise) or at the organization (on premise).

o Shared cloud - the infrastructure is shared by several organizations with similar requirements (security, regulatory compliance).

o Public cloud - the infrastructure is used by a large number of organizations with different requirements. Off premise only.

o Hybrid cloud - an infrastructure combining different infrastructures (private, shared, public) based on similar technologies.

Service management

o Service level - defines the basic services provided by the Provider.

§ SaaS - an application used by the Consumer by accessing the cloud from special programs.

§ PaaS - containers for Consumer applications, development and administration tools.

§ IaaS - computing power, databases, fundamental resources on top of which the Consumer deploys its infrastructure.

o Level of abstraction and resource control

§ Management of the hypervisor and virtual components necessary to implement the infrastructure.

o Level of physical resources

§ Computer equipment

§ Engineering infrastructure

Security

o Availability

o Confidentiality

o Identification

o Security monitoring and incident handling

o Security policies

Privacy

o Protection of the processing, storage and transmission of personal data.

Cloud Auditor - a participant who can perform an independent assessment of cloud services, information system maintenance, and the performance and security of a cloud implementation.

Can make its own assessment of security, privacy, performance, etc. in accordance with approved documents.

Fig. 10. Activities of the Provider

Cloud Broker - an entity that manages the use, performance and delivery of cloud services, and establishes relationships between Providers and Consumers.

With the development of cloud computing, the integration of cloud services may be too complex for the Consumer.

o Service intermediation - enhancing a given service and providing new capabilities;

o Aggregation - combining various services in order to provide the Consumer with a new, integrated service.

Cloud Telecom Operator - an intermediary providing connectivity and transport (communication services) for the delivery of cloud services from Providers to Consumers.

Provides access via communication devices

Provides connection level in accordance with SLA.

Among the five actors presented, the cloud broker is optional, because cloud consumers can receive services directly from the cloud provider.

The introduction of actors is due to the need to study the relationships between subjects.

4. Service Level Agreement

Service Level Agreement - a document describing the level of service delivery expected by the client from the supplier, based on indicators applicable to this service, and establishing the responsibility of the supplier if the agreed indicators are not achieved.

Here are some indicators that appear in one form or another in operator documents:

ASR (Answer Seizure Ratio) - a parameter that determines the quality of a telephone connection in a given direction. ASR is calculated as the percentage of the number of telephone connections established as a result of calls to the total number of calls made in a given direction.

PDD (Post Dial Delay) - a parameter that defines the period of time (in seconds) that elapsed from the moment of the call until the telephone connection was established.

Service availability ratio - the ratio of the time the service was actually provided to the total time during which the service should have been provided.

Packet loss rate - the ratio of data packets lost in transit to the total number of packets transmitted over the network during a certain period of time.

Packet transmission delay - the period of time required to transmit a packet of information between two network devices.

Reliability of information transmission - characterized by the ratio of erroneously transmitted data packets to the total number of transmitted data packets.

Service hours, subscriber notification times and service restoration times.

In other words, a service availability of 99.99% means that the operator guarantees no more than 4.3 minutes of downtime per month; 99.9% means the service may be unavailable for 43.2 minutes; and 99% means the interruption may last more than 7 hours. Some practices restrict the availability guarantee and assume a lower value of the parameter during non-working hours. Different types of services (traffic classes) also have different indicator values. For voice, for example, latency is the most important indicator and should be minimal, while the required bandwidth is low and some packets can be lost without loss of quality (up to about 1%, depending on the codec). For data transmission, bandwidth comes first, and packet loss should tend to zero.
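The downtime figures above follow from simple arithmetic over the SLA percentage, as this small helper shows:

```python
def max_downtime_minutes(availability_pct, days=30):
    """Maximum permitted downtime per period for a given SLA availability."""
    total_minutes = days * 24 * 60          # 43,200 minutes in a 30-day month
    return total_minutes * (1 - availability_pct / 100)

for sla in (99.99, 99.9, 99.0):
    print(f"{sla}% availability -> {max_downtime_minutes(sla):.1f} min of downtime per month")
```

For 99.99% this yields about 4.3 minutes, for 99.9% about 43.2 minutes, and for 99% 432 minutes, i.e. over 7 hours, matching the figures in the text.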

5. Methods and means of protection in cloud computing

Confidentiality must be ensured throughout the entire chain, including the cloud solution provider, the consumer, and the communications between them.

The Provider's task is to ensure both physical and software integrity of data from attacks by third parties. The consumer must put in place “on its own territory” appropriate policies and procedures that exclude the transfer of access rights to information to third parties.

The problems of ensuring the integrity of information when using individual "cloud" applications can be solved thanks to modern database architectures, backup systems, integrity-checking algorithms and other industrial solutions. But that is not all: new challenges arise when it comes to integrating multiple cloud applications from different vendors.

In the near future, for companies needing a secure virtual environment, the only option will be to create a private cloud system. The fact is that private clouds, unlike public or hybrid systems, are most similar to virtualized infrastructures that the IT departments of large corporations have already learned to implement and over which they can maintain full control. Information security deficiencies in public cloud systems pose a serious problem. Most hacking incidents occur in public clouds.

6. Security of cloud models

The level of risk in the three cloud models is very different, and the way security issues are addressed also differs depending on the level of interaction. The security requirements remain the same, but in different models, SaaS, PaaS or IaaS, the level of control over security changes. From a logical point of view, nothing changes, but the possibilities of physical implementation are radically different.

Fig. 11. The most current cybersecurity threats

In the SaaS model, the application runs on the cloud infrastructure and is accessible through a web browser. The client does not control the network, servers, operating systems, storage, or even some application capabilities. For this reason, in the SaaS model the primary security responsibility falls almost entirely on the vendors.

Problem number 1 is password management. In the SaaS model, applications reside in the cloud, so the main risk is the use of multiple accounts to access them. Organizations can solve this problem by unifying accounts across cloud and on-premises systems. With single sign-on, users access workstations and cloud services with one account, which reduces the likelihood of orphaned accounts remaining open to unauthorized use after employees leave.
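One common way to implement single sign-on is for an identity provider to issue a signed, expiring assertion that every relying service can verify. The sketch below uses an HMAC signature and an invented shared secret purely for illustration; real SSO deployments rely on standards such as SAML or OpenID Connect.

```python
import hashlib
import hmac
import json
import time

IDP_SECRET = b"shared-idp-secret"   # invented key held by the identity provider

def issue_token(user, ttl=3600):
    """The identity provider signs one assertion for the user; workstations
    and cloud services alike can then accept it without separate passwords."""
    payload = json.dumps({"user": user, "exp": int(time.time()) + ttl})
    signature = hmac.new(IDP_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload, signature

def verify_token(payload, signature):
    expected = hmac.new(IDP_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    fresh = json.loads(payload)["exp"] > time.time()
    return hmac.compare_digest(signature, expected) and fresh

payload, signature = issue_token("alice")
assert verify_token(payload, signature)                              # accepted everywhere
assert not verify_token(payload.replace("alice", "eve"), signature)  # tampering rejected
```

Because every service trusts the same signed assertion, disabling the one central account immediately revokes access everywhere, which is exactly the orphaned-account problem the text describes.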

As explained by CSA, PaaS involves customers building applications using programming languages ​​and tools supported by the vendor, and then deploying them on cloud infrastructure. As in the SaaS model, the customer cannot manage or control the infrastructure - networks, servers, operating systems or storage systems - but does have control over application deployment.

In a PaaS model, users must pay attention to application security as well as issues related to API management, such as permission verification, authorization, and validation.

Problem number 1 is data encryption. The PaaS model is inherently secure, but the risk lies in insufficient system performance: encryption is recommended when communicating with PaaS providers, and it requires additional processing power. Nevertheless, in any solution, sensitive user data must be transmitted over an encrypted channel.

In the IaaS model, customers do not control the underlying cloud infrastructure, but they do control operating systems, data storage and application deployment, and perhaps have limited control over the choice of networking components.

This model provides few built-in security capabilities beyond protecting the infrastructure itself. This means that users must manage and secure the operating systems, applications and content themselves, typically through APIs.

If this is translated into the language of security methods, then the provider must provide:

· reliable access control to the infrastructure itself;

· infrastructure fault tolerance.

At the same time, the cloud consumer takes on many more security functions:

· firewalling within the infrastructure;

· protection against network intrusions;

· protection of operating systems and databases (access control, protection against vulnerabilities, control of security settings);

· protection of end applications (anti-virus protection, access control).

Thus, most of the protection measures fall on the shoulders of the consumer. The provider can provide standard recommendations for protection or ready-made solutions, which will simplify the task for end consumers.

Table 1. Division of security responsibilities between the client and the service provider (P - supplier, K - client)

Layer            | Enterprise server | IaaS | PaaS | SaaS
Application      | K                 | K    | K    | P
Data             | K                 | K    | K    | P
Runtime          | K                 | K    | P    | P
Middleware       | K                 | K    | P    | P
Operating system | K                 | K    | P    | P
Virtualization   | K                 | P    | P    | P
Servers          | K                 | P    | P    | P
Data warehouses  | K                 | P    | P    | P
Network hardware | K                 | P    | P    | P


7. Security audit

The tasks of a cloud auditor are essentially no different from those of an auditor of conventional systems. A cloud security audit is divided into an audit of the Supplier and an audit of the User. The User's audit is carried out at the User's request, while the audit of the Supplier is one of the most important conditions for doing business.

It consists of:

· initiation of the audit procedure;

· collection of audit information;

· analysis of audit data;

· preparation of the audit report.

At the stage of initiating the audit procedure, the issues of the auditor’s powers and the timing of the audit must be resolved. The mandatory assistance of employees to the auditor must also be stipulated.

In general, the auditor conducts an audit to determine the reliability of:

· virtualization systems, hypervisor;

· servers;

· data warehouses;

· network equipment.

If the Supplier uses the IaaS model on the server being checked, then this test will be sufficient to identify vulnerabilities.

When using the PaaS model, additional checks must be made of:

· operating system,

· middleware,

· runtime environment.

When using the SaaS model, the following are also checked for vulnerabilities:

· data storage and processing systems,

· applications.

A security audit is performed using the same methods and tools as an audit of regular servers. But unlike a regular server, in cloud technologies the hypervisor is additionally checked for stability. In cloud technologies the hypervisor is one of the core technologies, and so its audit should be given special importance.

8. Incident investigation and forensics in cloud computing

Information security measures can be divided into preventive (for example, encryption and other access control mechanisms) and reactive (investigations). The proactive aspect of cloud security is an area of ​​active research, while the reactive aspect of cloud security has received much less attention.

Investigation of incidents (including investigation of crimes in the information sphere) is a well-known branch of information security. The objectives of such investigations are usually:

Proof that the crime/incident occurred

Recovering the events surrounding the incident

Identification of offenders

Proof of the involvement and responsibility of offenders

Evidence of dishonest intent on the part of the offenders.

A new discipline - computer forensics - has emerged from the need for forensic analysis of digital systems. The goals of a computer forensic examination are usually as follows:

Recovering data that may have been deleted

Recovering events that occurred inside and outside digital systems related to the incident

Identification of digital system users

Detecting the presence of viruses and other malicious software

Detecting the presence of illegal materials and programs

Cracking passwords, encryption keys and access codes

Ideally, computer forensics is a kind of time machine for the investigator: it can travel to any point in the past of a digital device and provide the researcher with information about:

people who used the device at a certain moment

user actions (for example, opening documents, accessing a website, typing data into a word processor, etc.)

data stored, created and processed by a device at a specific time.

Cloud services replacing stand-alone digital devices should provide a similar level of forensic readiness. However, this requires overcoming the problems associated with resource pooling, multi-tenancy and elasticity of the cloud computing infrastructure. The main tool in incident investigation is the audit log.

Audit logs—designed to track user login history, administrative tasks, and data changes—are an essential part of a security system. In cloud technologies, the audit trail itself is not only a tool for conducting investigations, but also a tool for calculating the cost of using servers. Although an audit trail does not eliminate security gaps, it does allow you to look at what is happening with a critical eye and formulate proposals for correcting the situation.

Creating archives and backups is important, but cannot replace a formal audit trail that records who did what, when. An audit trail is one of the primary tools of a security auditor.
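A minimal append-only audit trail that records who did what and when, and lets an investigator filter by account, might look as follows; the event fields and names are illustrative.

```python
import time

class AuditTrail:
    """Append-only log of (timestamp, user, action, target) events."""

    def __init__(self):
        self._events = []

    def record(self, user, action, target):
        self._events.append(
            {"ts": time.time(), "user": user, "action": action, "target": target}
        )

    def by_user(self, user):
        # An investigator reconstructs one account's history from the trail.
        return [e for e in self._events if e["user"] == user]

trail = AuditTrail()
trail.record("admin", "login", "management-console")
trail.record("alice", "update", "billing-record-17")
trail.record("alice", "delete", "vm-42")
print([(e["action"], e["target"]) for e in trail.by_user("alice")])
```

In a real system the trail would be written to tamper-evident, write-once storage, since a log an attacker can edit is of no forensic value.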

The service agreement usually specifies which audit trails will be maintained and provided to the User.

9. Threat model

In 2010, the CSA analyzed the main security threats in cloud technologies. The result of this work was the document "Top Threats of Cloud Computing v 1.0", which contains the most complete threat model and intruder model described at that time. A more complete, second version of this document is currently in development.

The current version of the document describes attackers for three service models: SaaS, PaaS, and IaaS. Seven main attack vectors have been identified. For the most part, the attacks considered are the same attacks that threaten ordinary, "non-cloud" servers; cloud infrastructure merely gives them certain specifics. For example, attacks on vulnerabilities in server software are joined by attacks on the hypervisor, which is also part of the server software.

Security Threat #1

Unlawful and dishonest use of cloud technologies.

Description:

To obtain resources from an IaaS cloud provider, a user only needs a credit card. The ease of registration and resource allocation allows spammers, virus authors, and others to use the cloud service for their own criminal purposes. Previously, this type of attack was observed only in PaaS, but recent studies have shown that IaaS can also be used for DDoS attacks, hosting malicious code, building botnets, and more.

IaaS offerings have been used, for example, to build a botnet based on the "Zeus" Trojan, to store the code of the "InfoStealer" Trojan, and to host information about various MS Office and Adobe PDF vulnerabilities.

In addition, botnets use IaaS to manage their peers and to send spam. Because of this, some IaaS services have ended up on blacklists, and their users have been completely ignored by mail servers.

Remediation:

· Improved user registration procedures

· Improving credit card verification procedures and monitoring the use of payment instruments

· Comprehensive study of the network activity of service users

· Monitoring master blacklists to see if the cloud provider's network appears there.

Affected service models:

Security Threat #2

Insecure Application Programming Interfaces (APIs)

Description:

Cloud infrastructure providers provide users with a set of software interfaces for managing resources, virtual machines or services. The security of the entire system depends on the security of these interfaces.

Anonymous access to the interface and transmission of credentials in clear text are the main characteristics of insecure software interfaces. Limited capabilities for monitoring API usage, lack of logging systems, and unknown relationships between various services only increase the risk of hacking.
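As a contrast to transmitting credentials in clear text, a common pattern for management APIs is to sign each request with a shared secret, so the secret itself never travels over the wire, and to include a timestamp that bounds the window for replaying a captured request. This is a generic sketch; the function and header names are illustrative, not any specific provider's API:

```python
import hashlib
import hmac
import time

def sign_request(secret, method, path, body, ts=None):
    """Produce auth headers: only an HMAC-SHA256 signature over the request
    plus a timestamp are sent; the shared secret stays on both endpoints."""
    ts = int(time.time()) if ts is None else ts
    msg = f"{method}\n{path}\n{ts}\n".encode() + hashlib.sha256(body).digest()
    signature = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": str(ts), "X-Signature": signature}

def verify_request(secret, method, path, body, headers, max_skew=300, now=None):
    """Server side: recompute the signature and bound the replay window."""
    now = int(time.time()) if now is None else now
    ts = int(headers["X-Timestamp"])
    if abs(now - ts) > max_skew:
        return False  # stale or replayed request
    expected = sign_request(secret, method, path, body, ts)["X-Signature"]
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Any modification to the method, path, body, or timestamp invalidates the signature, which also gives the provider a per-request log entry it can audit.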

Remediation:

· Perform an analysis of the cloud provider's security model

· Ensure that strong encryption algorithms are used

· Ensure that strong authentication and authorization methods are used

· Understand the entire chain of dependencies between different services.

Affected service models:

Security Threat #3

Insiders

Description:

The problem of unauthorized access to information from within is extremely dangerous. Often, the provider has no system for monitoring employee activity, which means an attacker can gain access to client information by abusing his official position. Since the provider does not disclose its recruitment policy, the threat can come from either an amateur hacker or an organized criminal structure that has infiltrated the ranks of the provider's employees.

There are currently no examples of this type of abuse.

Remediation:

· Implementation of strict equipment procurement rules and use of appropriate tamper detection systems

· Specifying employee hiring requirements in public contracts with users

· Creation of a transparent security system, along with publication of security audit reports on the provider's internal systems

Affected service models:

Fig. 12. Example of an insider

Security Threat #4

Vulnerabilities in cloud technologies

Description:

IaaS providers abstract hardware resources using virtualization systems. However, the hardware may not have been designed with shared resources in mind. To minimize the impact of this factor, the hypervisor controls the virtual machines' access to hardware resources; however, even hypervisors can contain serious vulnerabilities whose exploitation can lead to privilege escalation or unauthorized access to physical equipment.

In order to protect systems from such problems, it is necessary to introduce mechanisms for isolating virtual environments and failure detection systems. Users of the virtual machine should not have access to shared resources.

There are examples of potential vulnerabilities, as well as theoretical methods for bypassing isolation in virtual environments.

Remediation:

· Implementation of the most advanced methods for installing, configuring, and protecting virtual environments

· Use of tamper detection systems

· Application of reliable authentication and authorization rules for administrative work

· Tightening requirements for the timing of patches and updates

· Conducting timely scanning and vulnerability detection procedures.

Security Threat #5

Data loss or leakage

Description:

Data loss can happen for thousands of reasons. For example, deliberate destruction of an encryption key will make the encrypted information unrecoverable. Deletion of data or part of it, unauthorized access to important information, modification of records, or media failure are other examples of such situations. In a complex cloud infrastructure, the probability of each event increases due to the close interaction of components.

Incorrect application of authentication, authorization and audit rules, incorrect use of encryption rules and methods, and equipment failure can lead to data loss or leakage.

Remediation:

· Use of a reliable and secure API

· Encryption and protection of transmitted data

· Analysis of the data protection model at all stages of the system operation

· Implementation of a reliable encryption key management system

· Selection and acquisition of only the most reliable media

· Ensuring timely data backup

Affected service models:

Security Threat #6

Theft of personal data and unauthorized access to the service

Description:

This type of threat is not new. Millions of users encounter it every day. The main target of attackers is the username (login) and its password. In the context of cloud systems, password and username theft increases the risk of using data stored in the provider's cloud infrastructure. This way, the attacker has the opportunity to use the victim’s reputation for his activities.

Remediation:

· Prohibition on the transfer of accounts

· Use of two-factor authentication methods

· Implementation of proactive monitoring of unauthorized access

· Description of the cloud provider's security model.
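The two-factor authentication mentioned above is commonly implemented with time-based one-time passwords (TOTP, RFC 6238): client and server share a secret and independently derive a short-lived code from the current time, so a stolen password alone is not enough to log in. A minimal sketch using only the standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, at=None):
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The same function runs on the token (or phone app) and on the provider's side; a match proves possession of the second factor without ever transmitting the shared secret.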

Affected service models:

Security Threat #7

Other vulnerabilities

Description:

Using cloud technologies to conduct business allows a company to focus on its core activities, leaving the IT infrastructure and services to the cloud provider. When advertising its service, a cloud provider strives to show off all its capabilities, revealing implementation details in the process. This can pose a serious threat, since knowledge of the internal infrastructure gives an attacker the opportunity to find an unpatched vulnerability and attack the system. To avoid such situations, cloud providers may withhold information about the internal structure of the cloud; however, this approach does not build trust either, since potential users cannot assess the degree of data security. In addition, it limits the ability to find and eliminate vulnerabilities in a timely manner.

Examples:

· Amazon's refusal to allow a security audit of its EC2 cloud

· A vulnerability in payment processing software that led to a breach of the security system of the Heartland data center

Remediation:

· Disclosure of log data

· Full or partial disclosure of system architecture and details of installed software

· Use of vulnerability monitoring systems.

Affected service models:

1. Legal basis

According to experts, 70% of security problems in the cloud can be avoided if you correctly draw up a service agreement.

The basis for such an agreement could be the Cloud Bill of Rights.

The Cloud Bill of Rights was developed back in 2008 by James Urquhart. He published this material on his blog, and it aroused so much interest and controversy that the author periodically updates his "manuscript" in line with current realities.

Article 1 (partial): Customers own their data

· No manufacturer (or supplier) shall, in the course of its relationship with customers on any plan, claim rights to any data uploaded, created, generated, or modified by the customer, or to any other data to which the customer has rights.

· Manufacturers must initially provide a minimum ability to access customer data even at the stage of developing solutions and services.

· Customers own their data, which means they are responsible for ensuring that the data complies with regulations and laws.

· Because regulatory compliance issues regarding data use, security, and storage are critical, the customer must be able to determine the geographic location of its own data. Otherwise, manufacturers must guarantee users that their data will be stored in accordance with all applicable rules and regulations.

Article 2: Manufacturers and customers jointly own and manage the service levels in the system

· Manufacturers own and must do everything to meet the level of service for each client individually. All necessary resources and efforts made to achieve the proper level of service in working with clients must be free for the client, that is, not included in the cost of the service.

· Clients, in turn, own and are responsible for the level of service they provide to their own internal and external clients. When using the manufacturer's solutions to deliver their own services, the client's responsibility and service level should not depend entirely on the manufacturer.

· If it is necessary to integrate manufacturer and customer systems, manufacturers should offer customers the ability to monitor the integration process. If the client has corporate standards for information systems integration, the manufacturer must comply with these standards.

· Under no circumstances should manufacturers close customer accounts for political statements, inappropriate speech, or religious comments, unless this violates specific legal provisions, constitutes an expression of hatred, etc.

Article 3: Manufacturers Own Their Interfaces

· Manufacturers are not required to provide standard interfaces or interfaces with open source, unless otherwise stated in the agreements with the client. Manufacturers have rights to interfaces. If the manufacturer does not consider it possible to provide the client with the ability to modify the interface in a familiar programming language, the client can purchase services from the manufacturer or third-party developers to modify the interfaces in accordance with their own requirements.

· The client, however, has the right to use the purchased service for his own purposes, as well as expand its capabilities, replicate and improve it. This paragraph does not relieve customers from the liability of patent law and intellectual property rights.

The above three articles are the fundamentals for customers and manufacturers in the cloud. Their full text is openly available on the Internet. Of course, this bill is not a complete legal document, much less an official one. Its articles can be changed and expanded at any time, just as the bill itself can be supplemented with new articles. It is an attempt to formalize "ownership" in the cloud in order to somehow standardize this freedom-loving area of knowledge and technology.

Relationships between the parties

Today, the leading authority in the field of cloud security is the Cloud Security Alliance (CSA). The organization has released, and recently updated, guidance that includes hundreds of nuances and considerations to take into account when assessing risks in cloud computing.

Another organization whose activities address aspects of security in the cloud is the Trusted Computing Group (TCG). It is the author of several standards in this and other areas, including the widely used Trusted Storage, Trusted Network Connect (TNC), and Trusted Platform Module (TPM) standards.

These organizations have jointly developed a list of questions that the customer and provider should work through when concluding a contract. These questions help resolve most of the problems that arise when using the cloud: force majeure circumstances, changing cloud service providers, and other situations.

1. Security of stored data. How does the service provider ensure the safety of stored data?

The best measure for protecting data in storage is the use of encryption technologies. The provider must always encrypt customer information stored on its servers to prevent unauthorized access. The provider must also permanently delete data when it is no longer needed and will not be required in the future.

2. Data protection during transmission. How does the provider ensure the safety of data during transmission (within the cloud and on the way from/to the cloud)?

Transmitted data must always be encrypted and accessible to the user only after authentication. This approach ensures that the data cannot be modified or read by anyone, even if accessed through untrusted nodes on the network. These technologies have been developed over "thousands of man-years" and have led to the creation of robust protocols and algorithms (e.g., TLS, IPsec, and AES). Providers should use these protocols rather than invent their own.
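In Python, for instance, using an existing TLS implementation rather than a homegrown one comes down to a few lines with the standard `ssl` module. The sketch below (function names are mine) shows the settings that matter: certificate verification, hostname checking, and a minimum protocol version:

```python
import socket
import ssl

def make_strict_context():
    """TLS settings that reject unverified certificates, wrong hostnames,
    and protocol versions older than TLS 1.2."""
    ctx = ssl.create_default_context()   # loads the system CA bundle
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True            # the default, shown explicitly
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def fetch_over_tls(host, path="/"):
    """Issue a plain HTTP/1.0 GET over a verified TLS connection."""
    ctx = make_strict_context()
    with socket.create_connection((host, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
            chunks = []
            while True:
                chunk = tls.recv(4096)
                if not chunk:
                    break
                chunks.append(chunk)
    return b"".join(chunks)
```

With this configuration, a connection to an untrusted node with a forged certificate fails before any application data is sent, which is exactly the guarantee the paragraph above describes.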

3. Authentication. How does the provider know the authenticity of the client?

The most common authentication method is password protection. However, providers seeking to offer greater security to their customers are turning to more powerful tools such as certificates and tokens. Along with using more tamper-resistant authentication tools, providers must be able to work with standards such as LDAP and SAML. This is necessary for interoperation between the provider and the client's identity management system when authorizing users and determining the privileges granted to them. Thanks to this, the provider will always have up-to-date information about authorized users. The worst-case scenario is when the client simply gives the provider a fixed list of authorized users; in this case, difficulties typically arise when an employee is dismissed or moved to another position.

4. Isolation of users. How are one customer's data and applications separated from other customers' data and applications?

The best option: when each client uses an individual virtual machine (VM) and virtual network. The separation between VMs and, therefore, between users is provided by the hypervisor. Virtual networks, in turn, are deployed using standard technologies such as VLAN (Virtual Local Area Network), VPLS (Virtual Private LAN Service) and VPN (Virtual Private Network).

Some providers place all customer data in a single software environment and, through changes in its code, try to isolate customer data from each other. This approach is reckless and unreliable. First, an attacker could find a hole in non-standard code that allows him to gain access to data that he should not see. Secondly, an error in the code can lead to one client accidentally “seeing” the data of another. Recently, both cases have occurred. Therefore, to separate user data, using different virtual machines and virtual networks is a more reasonable step.

5. Regulatory issues. To what extent does the provider follow the laws and regulations applicable to the cloud computing industry?

Depending on the jurisdiction, laws, regulations, and special provisions may vary. For example, they may prohibit the export of data, require the use of strictly defined security measures, demand compatibility with certain standards, or mandate auditability. Ultimately, they may require that government departments and courts be able to access the information when necessary. A provider's careless attitude to these points can lead to significant expenses for its clients due to legal consequences.

The provider is obliged to follow strict rules and adhere to a unified strategy in the legal and regulatory spheres. This includes user data security, data export, compliance, auditing, data retention and deletion, and information disclosure (the latter is especially relevant when information from multiple clients may be stored on the same physical server). To clarify these matters, clients are strongly advised to seek help from specialists who will study the issue thoroughly.

6. Response to incidents. How does the provider respond to incidents, and how involved are its customers likely to be in the incident?

Sometimes not everything goes according to plan. Therefore, the service provider is obliged to adhere to specific rules of conduct in case of unforeseen circumstances. These rules must be documented. Providers must be involved in identifying incidents and minimizing their consequences by informing users about the current situation. Ideally, they should regularly provide clients with information in as much detail as possible about the issue. Additionally, it is up to customers to assess the likelihood of security issues occurring and take appropriate action.

10. International and domestic standards

The evolution of cloud technologies is outpacing efforts to create and modify required industry standards, many of which have not been updated for many years. Therefore, legislation in the field of cloud technologies is one of the most important steps towards ensuring security.

IEEE, one of the largest standards development organizations, has announced the launch of a dedicated cloud project, the Cloud Computing Initiative. This is the first cloud standardization initiative launched at an international level; until now, cloud standards had been handled primarily by industry consortia. The initiative currently includes two projects: IEEE P2301, "Draft Guide for Cloud Portability and Interoperability Profiles", and IEEE P2302, "Draft Standard for Intercloud Interoperability and Federation of Cloud Systems".

Within the IEEE Standards Development Association, 2 new working groups have been created to work on the IEEE P2301 and IEEE P2302 projects, respectively. IEEE P2301 will contain profiles of existing and emerging standards in the areas of applications, portability, management and interoperability interfaces, as well as file formats and operating conventions. The information in the document will be logically structured in accordance with the various target audience groups: vendors, service providers and other interested market participants. It is expected that when completed, the standard will be able to be used in the procurement, development, construction and use of cloud products and services based on standard technologies.

The IEEE P2302 standard will describe the basic topology, protocols, functionality, and management techniques needed to interoperate different cloud structures (for example, interoperability between a private cloud and a public cloud such as EC2). This standard will enable cloud product and service providers to benefit from economies of scale while providing transparency to service and application users.

ISO is preparing a special standard dedicated to cloud computing security. The main focus of the new standard is solving organizational issues related to clouds. However, due to the complexity of ISO approval procedures, the final version of the document should be released only in 2013.

The value of the document is that not only government organizations (NIST, ENISA), but also representatives of expert communities and associations such as ISACA and CSA are involved in its preparation. Moreover, one document contains recommendations for both cloud service providers and their consumers - client organizations.

The main objective of this document is to describe in detail the best practices associated with the use of cloud computing from an information security point of view. At the same time, the standard does not concentrate only on technical aspects, but rather on organizational aspects, which must not be forgotten when moving to cloud computing. This includes the division of rights and responsibilities, the signing of agreements with third parties, the management of assets owned by different participants in the cloud process, personnel management issues, and so on.

The new document largely incorporates materials developed earlier in the IT industry.

Australian Government

After several months of brainstorming, the Australian Government released a series of guidelines for the transition to cloud computing. On February 15, 2012, these guidelines were posted on the blog of the Australian Government Information Management Office (AGIMO).

To make it easier for companies to migrate to the cloud, recommendations have been prepared on the best-practice use of cloud services in light of compliance with the Financial Management and Accountability Act 1997 and its Better Practice Guides. The guidelines primarily address financial, legal, and privacy issues.

The guides speak of the need to continuously monitor and control the use of cloud services through daily analysis of bills and reports. This helps avoid hidden overcharges and dependence on cloud service providers.

The first guide is called "Cloud Computing and Privacy for Australian Government Agencies" (Privacy and Cloud Computing for Australian Government Agencies, 9 pages). This document places particular emphasis on privacy and security issues in data storage.

In addition to this guide, "Negotiating the Cloud - Legal Issues in Cloud Computing Agreements" (19 pages) has been prepared to help readers understand the provisions included in cloud computing agreements.

The final third guide, Financial Considerations for Government Use of Cloud Computing (6 pages), examines the financial issues a company should consider if it decides to use cloud computing in its business operations.

In addition to those addressed in the guidance, there are a number of other issues that need to be addressed when using cloud computing, including issues related to government administration, procurement, and business management policies.

Public discussion of this analytical document provides an opportunity for stakeholders to consider and comment on the following problematic issues:

· Unauthorized access to classified information;

· Loss of access to data;

· Failure to ensure data integrity and authenticity; and

· Understanding of the practical aspects associated with providing cloud services.

11. Territorial affiliation of data

Various countries have regulations that require sensitive data to remain within the country. Although storing data within a given territory does not seem difficult at first glance, cloud service providers often cannot guarantee it. In highly virtualized systems, data and virtual machines can be moved from one country to another for various purposes, such as load balancing and ensuring fault tolerance.

Some major players in the SaaS market (such as Google, Symantec) can guarantee that data will be stored in the relevant country. But these are rather exceptions; in general, fulfillment of these requirements is still quite rare. Even if the data remains in the country, customers have no way to verify this. In addition, we should not forget about the mobility of company employees. If a specialist working in Moscow is sent to New York, then it is better (or at least faster) for him to receive data from a data center in the USA. Ensuring this is an order of magnitude more difficult task.

12. State standards

At the moment, our country has no serious regulatory framework for cloud technologies, although developments in this area are already underway. Thus, Order of the President of the Russian Federation No. 146 of February 8, 2012 determined that the federal executive authorities responsible for ensuring data security in information systems created using supercomputer and grid technologies are the FSB of Russia and the FSTEC of Russia.

In connection with this decree, the powers of these services have been expanded. The FSB of Russia now develops and approves regulatory and methodological documents on ensuring the security of these systems, organizes and conducts research in the field of information security.

The service also carries out expert cryptographic, engineering-cryptographic and special studies of these information systems and prepares expert opinions on proposals for work on their creation.

The document also stipulates that the FSTEC of Russia develops a strategy and determines priority areas of activity to ensure the security of information in information systems created using supercomputer and grid technologies that process restricted data, and also monitors the status of work to ensure said security.

The FSTEC commissioned a study, which resulted in a beta version of a "terminology system in the field of cloud technologies".

As one can see, this entire terminology system is essentially an adapted translation of two documents: the "Focus Group on Cloud Computing Technical Report" and "The NIST Definition of Cloud Computing". That these two documents do not really agree with each other is a separate issue. But one thing is plainly visible: in the Russian "terminology system", the authors did not even provide references to these English-language documents.

The fact is that such work requires first discussing the concept, the goals and objectives, and the methods for achieving them. There are many questions and comments here. The main methodological note: one must formulate very clearly what problem this research solves and what its purpose is. Let me note right away that "creating a terminology system" cannot be a goal; it is a means, and what it is meant to achieve remains unclear.

Not to mention that a normal study should include a section "review of the current state of affairs."

It is difficult to discuss the results of the study without knowing the original formulation of the problem and how its authors solved it.

But one fundamental mistake of the terminology system is clearly visible: it is impossible to discuss "cloud" topics in isolation from "non-cloud" ones, outside the general IT context. Yet this context is precisely what is missing from the study.

And the result of this is that in practice such a Terminological System will be impossible to apply. It can only confuse the situation even more.

13. Security tools in cloud technologies

A cloud server security system in its minimum configuration must ensure the security of the network equipment, data storage, server, and hypervisor. Additionally, it may include an antivirus running in a dedicated kernel to prevent infection of the hypervisor through a virtual machine, a data encryption system for storing user information in encrypted form, and tools for establishing encrypted tunnels between the virtual server and the client machine.

For this we need a server that supports virtualization. Solutions of this kind are offered by Cisco, Microsoft, VMware, Xen, and KVM.

It is also permissible to use a classic server, and provide virtualization on it using a hypervisor.

Any server with compatible processors is suitable for virtualizing operating systems for x86-64 platforms.

Such a solution will simplify the transition to computing virtualization without making additional financial investments in equipment upgrades.

Scheme of work:

Fig. 11. Example of a cloud server

Fig. 12. Server response to hardware failure

At the moment, the cloud computing security market is still quite empty. And this is not surprising: in the absence of a regulatory framework, and with future standards unknown, development companies do not know where to focus their efforts.

However, even in such conditions, specialized software and hardware systems appear that make it possible to secure the cloud structure from the main types of threats.

· Violation of integrity

· Hacking of the hypervisor

· Insiders

· Identification

· Authentication

· Encryption

Accord-V

The Accord-V. hardware and software system is designed to protect the virtualization infrastructure of VMware vSphere 4.1, VMware vSphere 4.0, and VMware Infrastructure 3.5.

Accord-V. protects all components of the virtualization environment: the ESX servers and the virtual machines themselves, the vCenter management servers, and additional servers running VMware services (for example, VMware Consolidated Backup).

The Accord-V hardware and software complex implements the following protection mechanisms:

· Step-by-step control of the integrity of the hypervisor, virtual machines, files inside virtual machines and infrastructure management servers;

· Access restrictions for virtual infrastructure administrators and security administrators;

· Limiting user access within virtual machines;

· Hardware identification of all users and administrators of the virtualization infrastructure.
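Step-by-step integrity control of the kind listed above boils down to comparing cryptographic hashes of files (hypervisor binaries, VM configurations, guest files) against a trusted baseline. The following is a toy illustration of the principle only, not how Accord-V. is actually implemented:

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Baseline: map each monitored file to its SHA-256 digest."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def detect_changes(baseline, paths):
    """Return the files whose current hash no longer matches the baseline
    (a deleted file also counts as changed)."""
    changed = []
    for p in paths:
        path = Path(p)
        current = (hashlib.sha256(path.read_bytes()).hexdigest()
                   if path.exists() else None)
        if baseline.get(str(p)) != current:
            changed.append(str(p))
    return changed
```

A production system would store the baseline in tamper-resistant hardware and check it before boot; the comparison logic, however, is essentially this.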

Certification information:

Certificate of Conformity of the FSTEC of Russia No. 2598 of March 20, 2012 certifies that the Accord-V. hardware and software system for protecting information (not constituting a state secret) from unauthorized access complies with the requirements of the guiding documents "Computer facilities. Protection against unauthorized access to information. Indicators of security against unauthorized access to information" (State Technical Commission of Russia, 1992) at security class 5, and "Protection against unauthorized access to information. Part 1. Information security software. Classification by the level of control over the absence of undeclared capabilities" (State Technical Commission of Russia, 1999) at control level 4, as well as with technical conditions TU 4012-028-11443195-2010. The system can be used to build automated systems up to security class 1G inclusive and to protect information in personal data information systems up to class 1 inclusive.

vGate R2

vGate R2 is a certified means of protecting information from unauthorized access and of monitoring the enforcement of information security policies in virtual infrastructures based on VMware vSphere 4 and VMware vSphere 5. vGate-S R2 is a version of the product applicable for protecting information in the virtual infrastructures of companies whose information security requirements mandate security tools with a high certification level.

Allows you to automate the work of administrators in configuring and operating the security system.

Helps combat errors and abuses when managing virtual infrastructure.

Allows you to bring virtual infrastructure into compliance with legislation, industry standards and best global practices.


Fig. 13. Stated capabilities of vGate R2

Thus, to summarize, here are the main tools that vGate R2 has to protect the service provider’s data center from internal threats emanating from its own administrators:

· Organizational and technical separation of powers for vSphere administrators

· Allocation of a separate role for an information security administrator who will manage the security of data center resources based on vSphere

· Division of the cloud into security zones, within which administrators with the appropriate level of authority operate

· Monitoring the integrity of virtual machines

· The ability to receive a report on the security of the vSphere infrastructure at any time, as well as audit information security events

In principle, this is practically everything needed to protect the infrastructure of a virtual data center from internal threats at the level of the virtual infrastructure. Protection at the level of hardware, applications and guest OSes will of course also be required, but that is a separate problem, which can likewise be solved by products from the Security Code company.

Fig. 14. Server structure.

To ensure security at such a facility, protection must be provided in accordance with Table 2.

For this purpose I propose using the vGate R2 software. It solves tasks such as:

· Strengthened authentication of virtual infrastructure administrators and information security administrators.

· Protection of virtual infrastructure management tools from unauthorized access.

· Protection of ESX servers from unauthorized access.

· Mandatory access control.

· Monitoring the integrity of virtual machine configurations and trusted boot.

· Control of access of VI administrators to virtual machine data.

· Registration of events related to information security.

· Integrity monitoring and protection against unauthorized access of information security components.

· Centralized management and monitoring.

Table 2: Security Compliance for PaaS Model

The FSTEC of Russia certificate (SVT class 5, NDV level 4) permits the product to be used in automated systems up to security class 1G inclusive and in personal data information systems (ISPDn) up to class K1 inclusive. The cost of this solution is 24,500 rubles per physical processor on the protected host.

In addition, to protect against insiders, a security alarm system will need to be installed. Such solutions are widely available on the server protection market. The price of a solution with restricted access to the controlled area, an alarm system and video surveillance starts at 200,000 rubles.

For example, let’s take the amount of 250,000 rubles.

To protect virtual machines from virus infections, McAfee Total Protection for Virtualization will be running on one server core. The cost of the solution is from 42,200 rubles.

To prevent data loss on the storage systems, Symantec NetBackup will be used. It provides reliable backup of data and system images.

The total cost of implementing such a project will be:
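Summing only the figures quoted above gives a lower bound on this total; the Symantec NetBackup license price is not stated in the text, so it is left as an explicit unknown rather than guessed. A quick sketch:

```python
# Lower-bound cost estimate from the figures quoted above, in rubles.
costs = {
    "vGate R2 (1 physical CPU)": 24_500,
    "physical security (access control, alarm, CCTV)": 250_000,
    "McAfee Total Protection for Virtualization": 42_200,
}

# The Symantec NetBackup license price is not given above, so it is
# deliberately excluded from the sum rather than guessed.
total_known = sum(costs.values())
print(f"Known components total: {total_known:,} rubles")  # 316,700 rubles
```

The real project total will exceed this figure once the NetBackup license is priced in.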

An implementation of such a design solution based on Microsoft can be downloaded here: http://www.microsoft.com/en-us/download/confirmation.aspx?id=2494

Conclusion

"Cloud technologies" are one of the most actively developing areas of the IT market at present. If their growth rate does not slow, then by 2015 they will bring more than 170 million euros per year into the coffers of European countries. In our country, cloud technologies are treated with caution. This is partly due to the rigidity of management views and partly to a lack of trust in their security. But this type of technology, with all its advantages and disadvantages, is the new locomotive of IT progress.

The application "on the other side of the cloud" does not care at all whether you form your request on a computer with an x86 processor from Intel, AMD or VIA, or compose it on a phone or smartphone based on an ARM processor from Freescale, OMAP or Tegra. Moreover, by and large it does not matter to it whether you run Google Chrome OS, OHA Android, Intel Moblin, Windows CE, Windows Mobile, Windows XP/Vista/7, or something even more exotic. As long as the request is composed correctly and understandably, and your system can handle the response received.

The issue of security is one of the main ones in cloud computing and its solution will qualitatively improve the level of services in the computer field. However, much remains to be done in this direction.

In our country, it is worth starting with a unified dictionary of terms for the entire IT field, then developing standards based on international experience and formulating requirements for protection systems.

Literature

1. Financial Considerations for Government Use of Cloud Computing - Australian Government 2010.

2. Privacy and Cloud Computing for Australian Government Agencies 2007.

3. Negotiating the Cloud - Legal Issues in Cloud Computing Agreements, 2009.

4. Journal "Modern Science: Current Problems of Theory and Practice", 2012.


Vitaly Robertovich GRIGORIEV1, Candidate of Technical Sciences, Associate Professor; Vladimir Sergeevich KUZNETSOV2

PROBLEMS OF IDENTIFYING VULNERABILITIES IN THE CLOUD COMPUTING MODEL

The article provides an overview of approaches to building a conceptual model of cloud computing and compares existing views on identifying the vulnerabilities inherent in systems built on this model.

Keywords: cloud computing, vulnerability, threat core, virtualization.

The purpose of this article is to review approaches to building a conceptual model of cloud computing, given in the document “NIST Cloud Computing Reference Architecture”, and compare the views of leading organizations in this field on vulnerabilities in the conditions of this computing model, as well as the main players in the market for creating cloud systems.

Cloud computing is a model that provides convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, data storage, applications and services) that can be rapidly provisioned with minimal management effort and interaction with the service provider. This definition by the US National Institute of Standards and Technology (NIST) is widely accepted throughout the industry. It includes five key characteristics, three service models and four deployment models.

Five key characteristics

On-demand self-service: users can obtain, control and manage computing resources without the help of system administrators.

Broad network access: computing services are delivered over standard networks to heterogeneous devices.

Rapid elasticity: IT resources can be scaled up or down quickly as needed.

Resource pooling: IT resources are shared by various applications and users in a decoupled manner.

Measured service: the use of IT resources is tracked per application and per user, typically to provide billing for public clouds and internal cost allocation for private clouds.
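The "measured service" characteristic amounts to per-consumer metering plus billing. A minimal illustrative sketch; the tenant names and the per-unit rate below are invented for the example:

```python
from collections import defaultdict

RATE_PER_CPU_HOUR = 2.5  # illustrative price per CPU-hour

usage = defaultdict(float)  # consumer -> accumulated CPU-hours

def record(consumer: str, cpu_hours: float) -> None:
    """Meter resource consumption per consumer."""
    usage[consumer] += cpu_hours

record("tenant-a", 10.0)
record("tenant-a", 2.0)
record("tenant-b", 5.0)

# Billing for a public cloud, or internal cost allocation for a private one
bills = {c: h * RATE_PER_CPU_HOUR for c, h in usage.items()}
print(bills)  # {'tenant-a': 30.0, 'tenant-b': 12.5}
```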

Three service models

Software as a Service (SaaS) - Applications are typically provided as a service to end users through a web browser. There are hundreds of SaaS offerings available today, from horizontal enterprise applications to industry-specific offerings, as well as consumer applications such as email.

Platform as a Service (PaaS) - an application development and deployment platform is provided as a service to developers, who use it to create, deploy and manage SaaS applications. The platform typically includes databases, middleware and development tools, all delivered as a service over the Internet. PaaS often centers on a programming language or API, such as Java or Python. A virtualized, clustered distributed-computing architecture often serves as the basis for PaaS systems, since the grid structure of a network resource provides the necessary elastic scalability and resource pooling.

1 - MSTU MIREA, Associate Professor of the Department of Information Security;

2 - Moscow State Technical University of Radio Engineering, Electronics and Automation (MSTU MIREA), student.

Infrastructure as a Service (IaaS) - servers, storage and networking hardware are provided as a service. This infrastructure hardware is often virtualized, so virtualization, management and operating-system software are also elements of IaaS.

Four deployment models

Private clouds are intended for the exclusive use of one organization and are typically controlled, managed and hosted by private data centers. Hosting and management of a private cloud can be outsourced to an external service provider, but often the private cloud remains in the exclusive use of one organization.

Public clouds are used by many organizations (users) jointly and are maintained and managed by external service providers.

Community (group) clouds are used by a group of related organizations that want to share a common cloud computing environment. For example, such a group might consist of different branches of the armed forces, all the universities in a given region, or all the suppliers of a large manufacturer.

Hybrid clouds arise when an organization uses both a private and a public cloud for the same application in order to take advantage of both. For example, in a "cloud bursting" scenario the organization uses the private cloud under normal application load, and when the load peaks, for example at the end of a quarter or during the holiday season, it taps the capacity of the public cloud, later returning those resources to the general pool when they are no longer needed.
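The cloud-bursting scenario reduces to a simple placement rule: fill the private cloud to capacity and spill the remainder to the public cloud. A sketch, with an arbitrary illustrative capacity figure:

```python
PRIVATE_CAPACITY = 100  # illustrative capacity of the private cloud

def place_load(total_load: int) -> dict:
    """Split load between private and public clouds (cloud bursting)."""
    private = min(total_load, PRIVATE_CAPACITY)
    public = total_load - private  # excess spills to the public cloud
    return {"private": private, "public": public}

print(place_load(80))   # normal load: {'private': 80, 'public': 0}
print(place_load(140))  # peak load:   {'private': 100, 'public': 40}
```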

Fig. 1 presents the conceptual model of cloud computing according to the document "NIST Cloud Computing Reference Architecture". The model shown in Fig. 1 identifies the main participants of a cloud system: the cloud consumer, the cloud provider, the cloud auditor, the cloud broker and the cloud carrier. Each participant is a person or organization performing its functions in implementing or consuming cloud computing. A cloud consumer is a person or organization that maintains business relationships with, and uses services from, cloud providers.

Fig. 1. Conceptual model developed by NIST specialists (the figure shows the cloud consumer; the cloud auditor, performing security, confidentiality and service audits; the cloud provider, with a user service level (SaaS, PaaS, IaaS), an abstraction level and a physical layer, plus cloud service management, support and setup; the cloud broker; and the cloud carrier)

A cloud provider is the person or organization responsible for the availability of the services provided to interested consumers. A cloud auditor is a participant who can conduct independent assessments of cloud services, operations and the security of cloud implementations. A cloud broker is a participant that manages the use, performance and delivery of cloud services to consumers and negotiates the interactions between cloud providers and cloud consumers. A cloud carrier (intermediary) is a participant that provides connectivity and delivery of cloud services between cloud providers and cloud consumers.

Advantages and challenges of cloud computing

Recent surveys of IT specialists show that cloud computing offers two main advantages when organizing distributed services: speed and cost. With self-service access to a pool of computing resources, users can join the processes they need in a matter of minutes rather than weeks or months, as was previously the case. Computing capacity can also be changed quickly thanks to the elastically scalable grid architecture of the computing environment. Because in cloud computing users pay only for what they use, and the scalability and automation capabilities reach a high level, the cost-to-efficiency ratio of the services provided is also a very attractive factor for all participants in the exchange.

The same surveys show that there are a number of serious considerations that are holding some companies back from moving to the cloud. Cloud computing security tops these considerations by a wide margin.

To adequately assess security in cloud systems, it makes sense to study the views of the main market players on threats in this area. We will compare the approaches to threats in cloud systems presented in the NIST Cloud Computing Standards Roadmap with the approaches offered by IBM, Oracle and VMware.

The NIST (US National Institute of Standards and Technology) cloud security standard

The NIST Cloud Computing Standards Roadmap covers the following potential types of attacks on cloud computing services:

♦ compromise of confidentiality and availability of data transmitted by cloud providers;

♦ attacks that exploit the structure and capabilities of the cloud computing environment to amplify and increase the damage they cause;

♦ unauthorized consumer access (through incorrect authentication or authorization, or through vulnerabilities introduced by periodic maintenance) to the software, data and resources used by an authorized consumer of the cloud service;

♦ an increased level of network attacks, such as DoS, that exploit software whose development did not take into account the threat model of distributed Internet resources, as well as vulnerabilities in resources formerly accessible only from private networks;

♦ limited capabilities for data encryption in an environment with a large number of participants;

♦ reduced portability resulting from the use of non-standard APIs, which makes it difficult for a cloud consumer to migrate to a new cloud provider when availability requirements are not met;

♦ attacks that exploit the physical abstraction of cloud resources and flaws in logging and audit procedures;

♦ attacks on virtual machines that have not been updated accordingly;

♦ attacks that exploit inconsistencies in global and private security policies.

The standard also identifies the main security tasks for cloud computing:

♦ protecting user data from unauthorized access, disclosure, modification or viewing; this implies support for an identity service such that the consumer can apply identification and access-control policies to the authorized users who access cloud services, and can selectively make his data available to other users;

♦ protection against supply-chain threats; this includes confirming the trustworthiness and reliability of the service provider to the same degree as the trustworthiness of the software and hardware used;

♦ preventing unauthorized access to cloud computing resources; includes creating secure domains that are logically separated from resources (for example, logically separating workloads running on the same physical server via a hypervisor in a multi-tenant environment) and using secure default configurations;

♦ development of web applications deployed in the cloud for the threat model of distributed Internet resources and embedding security functions into the software development process;

♦ protecting Internet browsers from attacks to mitigate end-user security vulnerabilities; this includes taking measures to secure Internet-connected personal computers through the use of secure software, firewalls and the periodic installation of updates;

♦ deployment of access control and intrusion detection technologies at the cloud provider, and conducting an independent assessment to verify their availability; this includes, but is not limited to, traditional perimeter security measures combined with a domain security model; traditional perimeter security includes limiting physical access to the network and devices, protecting individual components from exploitation through the deployment of updates, using secure default settings, disabling all unused ports and services, using role-based access control, monitoring audit records, minimizing the privileges used, using anti-virus packages and encrypting connections;

♦ defining trusted boundaries between service provider(s) and consumers to ensure that authorized responsibilities for providing security are clear;

♦ support for portability, so that the consumer can change the cloud provider when necessary to meet requirements for integrity, availability and confidentiality; this includes the ability to close an account at any moment and copy data from one service provider to another.
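Several of these tasks (secure domains, role-based access control, secure defaults) share one pattern: deny by default and grant access only through an explicit role-to-permission mapping. A minimal sketch; the role and permission names are invented for illustration, not taken from any cited standard:

```python
# Default-deny, role-based access control; all names are illustrative.
ROLE_PERMISSIONS = {
    "vi_admin": {"vm:start", "vm:stop", "vm:configure"},
    "ib_admin": {"audit:read", "policy:edit"},
    "consumer": {"vm:start", "vm:stop"},
}

def is_allowed(roles: set, permission: str) -> bool:
    # Deny by default: allow only if some role explicitly grants it.
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_allowed({"consumer"}, "vm:start"))     # True
print(is_allowed({"consumer"}, "policy:edit"))  # False
```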

Thus, the NIST Cloud Computing Standards Roadmap defines a basic list of attacks on cloud systems and a list of the main tasks that should be addressed by applying appropriate measures.

Let us formulate the threats to the information security of a cloud system:

♦ U1 - threats to data (compromise of confidentiality, availability, etc.);

♦ U2 - threats generated by the structural features and capabilities of the architecture for implementing distributed computing;

♦ U4 - threats associated with an incorrect threat model;

♦ U5 - threats associated with incorrect use of encryption (use of encryption is necessary in an environment where there are multiple data streams);

♦ U6 - threats associated with the use of non-standard APIs during development;

♦ U7 - virtualization threats;

♦ U8 - threats that exploit inconsistencies in global security policies.

IBM's perspective on cloud security issues

The document Cloud Security Guidance: IBM Recommendations for the Implementation of Cloud Security allows us to draw conclusions about the security views developed by IBM specialists. Based on this document, the previously proposed list of threats can be expanded, namely with:

♦ U9 - threats associated with third party access to physical resources/systems;

♦ U10 - threats associated with incorrect life-cycle handling (including disposal) of personal information;

♦ U11 - threats associated with violation of regional, national and international laws relating to the processed information.

IBM, Oracle and VMware approaches to cloud computing security

The documentation provided by these companies describing their views on security in their systems does not present threats fundamentally different from those listed above.

Table 1 shows the main vulnerability classes formulated by the companies for their products. Table 1 makes the incomplete threat coverage of each of the companies studied visible and allows us to formulate the "threat core" addressed by all of them in their cloud systems:

♦ threat to data;

♦ threats based on the structure\capabilities of distributed computing;

♦ threats associated with an incorrect threat model;

♦ virtualization threats.
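This "threat core" is simply the intersection of the coverage sets that Table 1 attributes to the sources; the sets below transcribe its "+" cells. Note that the strict intersection also contains U3, a class the article leaves undefined, which is presumably why it is absent from the stated core:

```python
# Declared threat coverage per source, transcribed from Table 1.
coverage = {
    "NIST":       {"U1", "U2", "U3", "U4", "U5", "U6", "U7", "U8"},
    "IBM":        {"U1", "U2", "U3", "U4", "U5", "U7", "U9", "U10", "U11"},
    "Sun/Oracle": {"U1", "U2", "U3", "U4", "U7", "U10"},
    "VMware":     {"U1", "U2", "U3", "U4", "U7"},
}

# The "threat core": the classes every source covers.
core = set.intersection(*coverage.values())
print(sorted(core))  # ['U1', 'U2', 'U3', 'U4', 'U7']
```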

Conclusion

A review of the main classes of cloud platform vulnerabilities allows us to conclude that currently there are no ready-made solutions for full cloud protection due to the variety of attacks that exploit these vulnerabilities.

It should be noted that the constructed table of vulnerability classes (Table 1), which integrates the approaches of the leading players in this industry, is not limited to the threats presented in it. For example, it does not reflect the threats associated with the blurring of boundaries between environments with different levels of data confidentiality, or with the blurring of the boundaries of responsibility for information security between the service consumer and the cloud provider.

Table 1. Vulnerability classes (declared threats by source)

Source     | U1 | U2 | U3 | U4 | U5 | U6 | U7 | U8 | U9 | U10 | U11
NIST       | +  | +  | +  | +  | +  | +  | +  | +  | -  | -   | -
IBM        | +  | +  | +  | +  | +  | -  | +  | -  | +  | +   | +
Sun/Oracle | +  | +  | +  | +  | -  | -  | +  | -  | -  | +   | -
VMware     | +  | +  | +  | +  | -  | -  | +  | -  | -  | -   | -

It becomes obvious that to implement a comprehensive cloud system, protection must be developed for the specific implementation. Also significant for secure computing in virtual environments is the absence of FSTEC and FSB standards for cloud systems. The "threat core" identified in this work makes sense to use when solving the problem of constructing a unified model of vulnerability classes. This article is of a survey nature; in the future, we plan to analyze in detail the classes of threats associated with virtualization and to develop approaches to creating a protection system that can prevent the realization of these threats.

Literature

1. Cloud Security Guidance IBM Recommendations for the Implementation of Cloud Security, ibm.com/redbooks, November 2, 2009.

2. http://www.vmware.com/technical-resources/security/index.html.

3. NIST Cloud Computing Reference Architecture, National Institute of Standards and Technology, Special Publication 500-292, September 2011.

4. NIST Cloud Computing Standards Roadmap, National Institute of Standards and Technology, Special Publication 500-291, July 2011.

5. http://www.oracle.com/technetwork/indexes/documentation/index.html.

2019

McAfee: 19 best practices for cloud security in 2019

The biggest concern for companies is the security of external cloud services. Thus, respondents are worried that incidents may occur at suppliers to whom business processes are outsourced, at third-party cloud services, or in the IT infrastructure where the company rents computing power. However, despite all this concern, only 15% of companies conduct third-party security compliance audits.

"Despite the fact that recent large-scale hacks have occurred inside the data center, traditional security systems still focus only on protecting the network perimeter and controlling access rights. Meanwhile, the negative impact of physical-infrastructure protection solutions on the performance of virtual environments is rarely taken into account," explained Veniamin Levtsov, vice president for corporate sales and business development at Kaspersky Lab. "That is why it is so important to use appropriate comprehensive protection in converged environments, securing virtual systems with specially designed solutions. We implement an approach in which, regardless of the type of infrastructure, all systems are provided with a uniform level of security covering the entire corporate network. And here our technologies and modern VMware developments (such as micro-segmentation) complement each other perfectly."

2015: Forrester: Why are customers unhappy with cloud providers?

Opaque cloud

A recent Forrester Consulting study shows that many organizations feel their cloud service providers do not provide enough information about the operation of their clouds, and that this hurts their business.

In addition to the lack of transparency, there are other factors that reduce the enthusiasm for moving to the cloud: the level of service for customers, additional costs and adaptation during migration (on-boarding). Organizations love the cloud, but not its providers—at least not as much.

The study was commissioned by enterprise cloud hosting provider iland and was conducted in May among infrastructure and maintenance professionals from 275 organizations in Singapore.

"Among all the complexities of today's cloud there are also unfortunate gaps," writes Lilac Schoenbeck, vice president of product support and marketing at iland. "Critical metadata is not being communicated, significantly inhibiting cloud adoption, even though organizations base their growth plans on the assumption that cloud resources are limitless."

Where is the key to achieving harmony in business relationships? Here's what VARs need to know to try to resolve issues and bring the parties to reconciliation.

Lack of attention to clients

Apparently, many cloud users do not feel that same personal approach.

Thus, 44% of respondents said that their provider does not know their company and does not understand their business needs, and 43% believe that if their organization was simply larger, then the supplier would probably pay more attention to them. In short, they feel the coldness of an ordinary transaction when buying cloud services, and they don't like it.

And one more thing: a third of the companies surveyed pointed to a practice that also gives the transaction a petty feel: they are charged a fee for the slightest question or point of confusion.

Too many secrets

A supplier's reluctance to provide all the information not only irritates customers, but often costs them money.

All respondents to the Forrester survey said they had experienced financial and operational consequences due to missing or withheld data about their cloud usage.

“The lack of clear data on cloud usage leads to performance issues, difficulty reporting to management the true cost of use, charges for resources that users never consume, and unexpected bills,” states Forrester.

Where's the metadata?

IT leaders responsible for cloud infrastructure in their organizations want cost and performance metrics that provide clarity and transparency, but apparently have difficulty communicating this to vendors.

Survey respondents noted that the metadata they receive about cloud workloads is typically incomplete. Nearly half of the companies responded that regulatory compliance data was not available, 44% reported a lack of usage data, 43% a lack of historical data, 39% a lack of security data, and 33% a lack of billing and cost data.

Transparency issue

The lack of metadata causes all sorts of problems, respondents say. Nearly two-thirds of those surveyed reported that a lack of transparency prevents them from fully understanding the benefits of the cloud.

“The lack of transparency creates various problems, most notably the issue of usage parameters and service interruptions,” the report said.

Approximately 40% try to fill these gaps themselves by purchasing additional tools from their own cloud providers, while the other 40% simply purchase services from another provider where such transparency exists.

Regulatory Compliance

Whatever one may say, organizations are responsible for all their data, whether on local storage systems or sent to the cloud.

More than 70% of respondents to the study said their organizations are regularly audited and must demonstrate compliance wherever their data resides. And that poses a barrier to cloud adoption for nearly half of the companies surveyed.

"But the regulatory compliance aspect needs to be transparent to your end users. When cloud providers withhold or do not disclose this information, they prevent you from achieving compliance," the report said.

Compliance issues

More than 60% of companies surveyed responded that regulatory compliance issues are limiting further cloud adoption.

The main problems are:

  • 55% of companies with such requirements said that the most difficult thing for them to do is implement adequate controls.
  • About half say they have difficulty understanding the level of compliance provided by their cloud provider.
  • Another half of respondents said it was difficult for them to obtain the necessary documentation from the provider about compliance with these requirements in order to pass the audit. And 42% find it difficult to obtain documentation of their own compliance for workloads running in the cloud.

Migration problems

The on-boarding process appears to be another area of ​​general dissatisfaction, with just over half of the companies surveyed saying they were dissatisfied with the migration and support processes their cloud vendors offered them.

Of the 51% dissatisfied with the migration process, 26% said it took too long, and 21% complained about a lack of hands-on input from provider staff.

More than half were also dissatisfied with the support process: 22% cited long wait times for a response, 20% cited insufficient knowledge of support staff, 19% cited a lengthy problem-solving process, and 18% received bills with higher-than-expected support costs.

Obstacles on the way to the cloud

Many of the companies surveyed by Forrester are being forced to hold back their cloud expansion plans due to problems they are experiencing with existing services.

At least 60% responded that a lack of transparency in usage, regulatory compliance information, and reliable support is holding them back from using the cloud more widely. If not for these problems, they would move more workloads to the cloud, respondents say.

2014

  • The role of IT departments is gradually changing: they are faced with the task of adapting to the new realities of cloud IT. IT departments must educate employees about security concerns, develop comprehensive data management and compliance policies, develop cloud adoption guidelines, and set rules about what data can and cannot be stored in the cloud.
  • IT departments can fulfill their mission of protecting corporate data while at the same time accommodating "shadow IT", by implementing data security measures such as an "encryption as a service" approach. This approach allows IT departments to centrally manage data protection in the cloud while letting other departments of the company independently find and use cloud services as needed.
  • As more companies store their data in the cloud and their employees increasingly use cloud services, IT departments need to pay more attention to implementing more effective mechanisms to control user access, such as multi-factor authentication. This is especially true for companies that provide third parties and vendors with access to their data in the cloud. Multi-factor authentication solutions can be centrally managed and provide more secure access to all applications and data, whether hosted in the cloud or on a company's own hardware.
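The one-time passwords behind most multi-factor authentication products are generated by the HOTP algorithm of RFC 4226 (TOTP in RFC 6238 simply uses the current 30-second time interval as the counter). A self-contained sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password per RFC 4226 (HMAC-SHA1 based)."""
    # HMAC over the 8-byte big-endian counter value
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

A second factor of this kind is only as strong as the secrecy of the shared key, which is why centralized management of such credentials matters.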

Ponemon and SafeNet Data

Most IT organizations are unaware of how corporate data is protected in the cloud, putting their users' accounts and confidential information at risk. This is just one of the conclusions of a fall 2014 study conducted by the Ponemon Institute and commissioned by SafeNet. The study, titled "Challenges of Information Management in the Cloud: A Global Data Security Study," surveyed more than 1,800 information technology and IT security professionals worldwide.

Among other findings, the study found that while organizations are increasingly embracing the power of cloud computing, corporate IT departments face challenges managing and securing data in the cloud. The survey found that only 38% of organizations have clearly defined roles and responsibilities for ensuring the protection of confidential and other sensitive information in the cloud. To make matters worse, 44% of enterprise data stored in the cloud is not controlled or managed by IT departments. In addition, more than two-thirds (71%) of respondents noted that they are facing increasing challenges when using traditional security mechanisms and techniques to protect sensitive data in the cloud.

As cloud infrastructures grow in popularity, so do the risks of confidential data leaks. About two-thirds of the IT professionals surveyed (71%) confirmed that cloud computing is of great importance to corporations today, and more than three-quarters (78%) believe it will remain relevant in two years. In addition, respondents estimate that about 33% of all their organizations' needs for information technology and data-processing infrastructure can be met today using cloud resources, and over the next two years this share will grow to an average of 41%.

However, the majority of respondents (70%) agree that compliance with data confidentiality and data protection requirements in a cloud environment is becoming increasingly difficult. In addition, respondents note that the types of corporate data stored in the cloud, such as email addresses, consumer and customer data, and payment information, are most at risk of leaks.

On average, more than half of all enterprise cloud deployments are carried out by third-party departments rather than by corporate IT departments, and on average, about 44% of enterprise data hosted in the cloud is not controlled or managed by IT departments. As a result, only 19% of respondents said they were confident they knew about all the cloud applications, platforms or infrastructure services currently used in their organizations.

Along with the lack of control over the installation and use of cloud services, there was no consensus among respondents as to who is actually responsible for the security of data stored in the cloud. Thirty-five percent of respondents said that responsibility is shared between users and cloud service providers, 33% believe that responsibility lies entirely with users, and 32% believe that the cloud service provider is responsible for data security.

More than two-thirds (71%) of respondents say it is becoming increasingly difficult to protect sensitive user data stored in the cloud using traditional security tools and methods, and about half (48%) say it is becoming increasingly difficult for them to control or restrict end users' access to cloud data. As a result, more than a third (34%) of the IT professionals surveyed said that their organizations have already adopted corporate policies that require security mechanisms such as encryption as a prerequisite for working with certain cloud computing services. 71% of respondents said the ability to encrypt or tokenize confidential or otherwise sensitive data is important to them, and 79% believe these technologies will become increasingly important over the next two years.
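Tokenization, mentioned alongside encryption, replaces a sensitive value with an opaque random token before it leaves the organization, while the token-to-value mapping stays in an on-premises vault. The following is a minimal sketch of the idea; the class and field names are illustrative, not any particular product's API.

```python
import secrets


class TokenVault:
    """Toy vault-based tokenizer: sensitive values stay on-premises,
    and only opaque random tokens are sent to the cloud."""

    def __init__(self):
        self._token_to_value = {}   # kept locally, never uploaded
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        if value in self._value_to_token:            # reuse the token for a repeated value
            return self._value_to_token[value]
        token = "tok_" + secrets.token_urlsafe(16)   # random, carries no information about the value
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]


vault = TokenVault()
t = vault.tokenize("4111 1111 1111 1111")      # the card number never leaves the vault
assert vault.detokenize(t) == "4111 1111 1111 1111"
```

Because the token is random rather than derived from the value, a cloud provider that stores only tokens cannot recover the original data even if fully compromised.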

When asked what their companies do to protect data in the cloud, 43% of respondents said their organizations use private networks to transfer data. Roughly two-fifths (39%) of respondents said their companies use encryption, tokenization, and other cryptographic tools to protect data in the cloud. Another 33% of respondents don't know what security solutions their organizations have in place, and 29% said they use paid security services from their cloud providers.

Respondents also believe that managing enterprise encryption keys is essential to keeping data secure in the cloud, given the increasing number of key management and encryption platforms used by their companies. Specifically, 54% of respondents said their organizations maintain control over encryption keys when storing data in the cloud. However, 45% of respondents said they store their encryption keys in software, in the same place where the data itself is stored, and only 27% store the keys in more secure environments, such as on hardware devices.
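The key-custody point above can be illustrated with a small sketch: the encryption key is generated and retained on-premises (ideally in a hardware device), and only ciphertext is uploaded. The cipher below is a toy SHA-256 counter-mode keystream used purely for illustration; a real deployment would use AES-GCM from a vetted cryptography library, plus a per-message nonce.

```python
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy counter-mode keystream built from SHA-256 -- illustration only;
    production systems should use AES-GCM via a vetted crypto library."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        # One 32-byte keystream block per offset; XOR is its own inverse,
        # so the same function both encrypts and decrypts.
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)


# The key stays on-premises; only the ciphertext goes to the cloud store.
key = secrets.token_bytes(32)
record = b"customer@example.com"
ciphertext = keystream_xor(key, record)           # uploaded to the cloud
assert keystream_xor(key, ciphertext) == record   # decrypted locally with the retained key
```

The survey's distinction between "keys stored in software alongside the data" and "keys on hardware devices" is exactly the difference between uploading `key` together with `ciphertext` and never letting `key` leave the organization.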

When it comes to accessing data stored in the cloud, 68% of respondents say it is becoming more difficult to manage user accounts in a cloud environment, while 62% said their organizations also grant cloud access to third parties. Roughly half (46%) of those surveyed said their companies use multi-factor authentication to protect third-party access to data stored in the cloud, and about the same number (48%) said their companies also use multi-factor authentication to protect their own employees' access to the cloud.

2013: Cloud Security Alliance Study

The Cloud Security Alliance (CSA), a non-profit industry organization promoting cloud security practices, recently updated its list of top threats in a report entitled "The Notorious Nine: Cloud Computing Top Threats in 2013."

CSA states that the report reflects expert consensus on the most significant security threats in the cloud and focuses on threats stemming from cloud resources being shared and accessed by multiple users on demand.

So, the main threats...

Data theft

Theft of confidential corporate information is always a concern for organizations in any IT infrastructure, but the cloud model opens up “new, significant attack avenues,” CSA points out. “If a multi-tenancy cloud database is not properly designed, a flaw in one customer's application could allow attackers to access data not only for that customer, but for all other cloud users as well,” the CSA warns.

Any "cloud" has several levels of protection, each of which guards information against a different type of attack.

For example, physical protection of the server. Here we are not even talking about hacking, but about theft or damage to storage media. Taking a server out of the premises can be difficult in the truest sense of the word. In addition, any self-respecting company stores information in data centers with security, video surveillance and restricted access not only to outsiders, but also to the majority of company employees. So the likelihood that an attacker will simply come and take information is close to zero.

Just as an experienced traveler, fearing robbery, does not keep all his money and valuables in one place,