Backup. Theory and practice. Summary

Backup technologies protect data from natural and man-made disasters and from the actions of intruders. They are actively used in the IT infrastructures of organizations of all industries and sizes.

Backup classification

By the completeness of the stored information

  • Full backup - a backup archive of all system files, usually including the system state, registry, and other information needed to fully recover a workstation. That is, not only files are backed up, but everything required for the system to operate.
  • Incremental backup - a backup archive of all files that have been modified since the previous full or incremental backup.
  • Differential backup - a backup archive of all files that have changed since the previous full backup.
  • Selective backup - a backup archive of selected files only.

By how the media is accessed

  • Online backup - a backup archive created on permanently connected media (attached directly or over the network).
  • Offline backup - a backup stored on removable media (a cassette or cartridge) that must be loaded into a drive before use.

Rules for working with backup systems

Whatever backup technology you use, you should adhere to some fundamental rules; observing them will ensure maximum data safety in the event of unforeseen situations.

  • Pre-planning. All components of the backup infrastructure should be considered during planning; no applications, servers, or trends in the growth of primary storage capacity should be overlooked.
  • Establishing a life cycle and calendar of operations. All backup-related tasks must be documented and performed on schedule. The tasks that need to be performed daily include:
    • monitoring tasks;
    • failure and success reports;
    • analysis and problem solving;
    • tape handling and library management;
    • scheduling task execution.
  • Daily review of backup logs. Since any failure to create a backup can lead to many difficulties, you should check the progress of the backup process at least daily (a minimal sketch of such a check follows this list).
  • Protecting the backup database or catalog. Every backup application maintains its own database, the loss of which can mean the loss of the backups themselves.
  • Daily checking of the backup time window. If task execution starts to overrun the allotted window, this is a sign that the system is approaching its maximum capacity or contains performance bottlenecks. Early detection of such symptoms can prevent larger failures later.
  • Locating and protecting "external" systems and volumes. Verify personally that your backups meet your expectations, relying primarily on your own observations rather than program reports.
  • Centralizing and automating backups as much as possible. Consolidating multiple backup tasks into one greatly simplifies the backup process.
  • Creating and maintaining reports on open problems. A log of unresolved problems helps eliminate them as quickly as possible and, as a result, optimize the backup process.
  • Including backup in the system change control process.
  • Consultations with vendors. Make sure that the implemented system fully meets the expectations of the organization.
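
As an illustration of the daily log review, here is a minimal sketch of a script that scans the backup log for failure keywords and mails a report; the log path, recipient address, and keywords are placeholders and depend entirely on the backup software in use:

  #!/bin/sh
  # Sketch: scan the backup log for failures and mail it to the administrator.
  # /var/log/backup.log and admin@example.com are placeholders.
  LOG=/var/log/backup.log
  if grep -qiE 'error|fail' "$LOG"; then
      mail -s "Backup problems on $(hostname)" admin@example.com < "$LOG"
  fi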

Backup technologies

Security

Backups typically run automatically, and elevated privileges are usually required to access the data. The backup process therefore runs under an account with elevated privileges, and this is where a certain risk creeps in.

ALEXEY BEREZHNOY, system administrator. Main areas of activity: virtualization and heterogeneous networks. Besides writing articles, another of his hobbies is popularizing free software.

Backup
Theory and practice. Summary

To organize a backup system most effectively, you need to build a real strategy for saving and restoring information.

Backup is an important process in the life of any IT infrastructure, a rescue parachute in the event of an unforeseen disaster. At the same time, backup creates a kind of historical archive of a company's business activity over a certain period of its life. Working without backups is like living under the open sky: the weather can turn at any moment, and there is nowhere to hide. But how do you organize it properly, so as not to lose important data and not spend fantastic sums on it?

Articles on organizing backups usually deal mainly with technical solutions, and only occasionally pay attention to the theory and methodology of organizing data storage.

This article does just the opposite: the focus is on general concepts, and technical means are touched on only as examples. This allows us to abstract away from hardware and software and answer two main questions: "Why are we doing this?" and "Can we do it faster, cheaper, and more reliably?"

Goals and objectives of backup

In the process of organizing backups, two main tasks are set: restoring the infrastructure after failures (Disaster Recovery) and maintaining a data archive in order to provide access to information from past periods later on.

A classic example of a backup for Disaster Recovery is an image of a server system partition created by Acronis True Image.

An example of an archive would be monthly dumps of 1C databases recorded to tape and then stored in a specially designated place.

There are several factors that distinguish a quick recovery backup from an archive:

  • Data retention period. For archival copies it is quite long; in some cases it is governed not only by business requirements but also by law. Disaster-recovery copies are kept for a comparatively short time. Usually one or two backups (more where reliability requirements are higher) are created for Disaster Recovery with a maximum interval of a day or two, after which they are overwritten with fresh ones. In especially critical cases the disaster-recovery backup can be refreshed more often, for example once every few hours.
  • Speed of access to data. The access speed of a long-term archive is not critical in most cases. The need to "pull up data for a period" usually arises when reconciling documents, returning to a previous version, and so on, that is, not in emergency mode. Disaster recovery is another matter: the necessary data and service availability must be restored as soon as possible, so the speed of access to the backup is extremely important.
  • The composition of the copied information. An archival copy usually contains only user and business data for the specified period. A disaster-recovery copy, in addition to that data, contains either system images or copies of operating system and application settings and other information required for recovery.

Sometimes these tasks can be combined: for example, an annual set of monthly full snapshots of a file server, plus the changes made during each week. True Image is suitable as a tool for creating such a backup.

The most important thing is to understand clearly why the backup is being made. Let me give an example: a critical SQL server crashed due to a disk array failure. Suitable hardware was in stock, so the only problem was restoring the software and data. The company's management asks an understandable question: "When will it be working?" - and is unpleasantly surprised to learn that recovery will take four hours. The reason is that throughout the server's entire service life only the databases were backed up regularly, with no thought for restoring the server itself with all its settings, including the DBMS software. Simply put, our heroes saved only the databases and forgot about the system.

Let me give another example. Throughout his time on the job, a young specialist maintained a single copy of a file server running Windows Server 2003 using the ntbackup program, including the data and System State, in a shared folder on another computer. Due to a shortage of disk space, this copy was constantly overwritten. After a while he was asked to restore the previous version of a multi-page report that had been damaged during saving. Clearly, with no archive history and Shadow Copy turned off, he could not fulfill the request.

On a note

Shadow Copy, literally a "shadow copy". Provides snapshots of the file system such that further changes to the original do not affect them. This function makes it possible to create several snapshots of a file over a period of time, as well as on-the-fly backup copies of files opened for writing. The Volume Shadow Copy Service is responsible for Shadow Copy operation.

System State, literally the "state of the system". Copying the System State backs up critical components of Windows-family operating systems, which allows a previously installed system to be restored after destruction. When the System State is copied, the registry, boot files, and other files important to the system are saved, including those needed to restore Active Directory, the Certificate Service database, the COM+ Class Registration database, and the SYSVOL directory. In UNIX-like operating systems, an indirect analogue of copying the System State is saving the contents of the /etc and /usr/local/etc directories and other files needed to restore the system state.
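
For UNIX-like systems, the analogue mentioned above can be as simple as archiving the configuration directories. A minimal sketch, assuming GNU tar and illustrative paths:

  # Save the directories needed to restore the system configuration (sketch).
  tar --create --gzip \
      --file=/backup/sysconfig-$(date +%F).tar.gz \
      /etc /usr/local/etc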

What follows from this: you need to use both types of backup, for disaster recovery and for archival storage. At the same time, it is imperative to define the list of copied resources, the schedule for the tasks, and where, how, and for how long the backups will be stored.

With small amounts of data and a simple IT infrastructure, you can try to combine both tasks in one, for example by making a daily full copy of all disk partitions and databases. But it is still better to distinguish the two goals and choose the right means for each. Accordingly, a separate tool is used for each task, although there are universal solutions, such as the same Acronis True Image package or the ntbackup program.

It is clear that when defining the goals and objectives of backup, as well as the solutions for implementing it, you must proceed from business requirements.

There are different strategies you can use when implementing a disaster recovery task.

In some cases a direct bare-metal system restore is required. This can be done, for example, with Acronis True Image bundled with the Universal Restore module. In this case the server configuration can be returned to service in a very short time: for example, a 20 GB operating system partition can quite realistically be restored from a backup in eight minutes (provided the archive copy is accessible over a 1 Gb/s network).

In other cases it is more expedient simply to "return" the settings to a freshly installed system, for example by copying configuration files from the /etc folder on UNIX-like systems (in Windows this roughly corresponds to copying and restoring the System State). Of course, with this approach the server will not be put back into operation until the operating system is installed and the necessary settings are restored, which takes considerably longer. But in any case, the decision on what Disaster Recovery should look like stems from business needs and resource constraints.

The fundamental difference between backup and redundant systems

This is another interesting question I would like to touch upon. Redundant systems introduce a degree of redundancy into the hardware in order to keep it operating when one of the components fails suddenly. The classic example is a RAID array (Redundant Array of Independent Disks). If a single disk fails, information loss can be avoided and the disk safely replaced, with the data preserved thanks to the specific organization of the disk array itself.

I have heard the phrase: "We have very reliable equipment, there are RAID arrays everywhere, so we don't need backups." Yes, of course, a RAID array will save data from destruction if one hard drive fails. But it will not save you from data corruption by a computer virus or from inept user actions. Nor will RAID save you if the file system crashes as a result of an unauthorized reboot.

By the way

The importance of distinguishing backup from redundant systems should be weighed when drawing up a data backup plan, whether for an organization or for home computers.

Ask yourself why you are making copies. If we are talking about backup, we mean preserving data against accidental (or intentional) destruction. Hardware redundancy makes it possible to preserve data, including backups, in the event of equipment failure.

There are many inexpensive devices on the market today that provide reliable redundancy using RAID arrays or cloud technologies (for example, Amazon S3). It is recommended to use both kinds of protection at the same time.

Andrey Vasiliev, CEO of Qnap Russia

Let me give one example. There are times when events develop according to the following scenario: when a disk fails, the data is rebuilt by the redundancy mechanism, in particular using stored checksums. Performance drops significantly, the server freezes, and control is practically lost. The system administrator, seeing no other way out, reboots the server with a cold restart (in other words, presses RESET). As a result of such a reset on a live system, file system errors occur. The best you can expect in this case is a long run of the disk check utility to restore file system integrity. In the worst case, you will have to say goodbye to the file system and puzzle over where, how, and how quickly the data and server operation can be restored.

You cannot avoid backups even with a clustered architecture. A failover cluster does maintain the operation of the services entrusted to it when one of the servers fails. But in the case of the problems mentioned above, such as a virus attack or data corruption due to the notorious "human factor", no cluster will save you.

The only thing that can act as an inferior substitute for a Disaster Recovery backup is a mirrored standby server with continuous data replication from the main server (the Primary → Standby principle). In this case, if the main server fails, its tasks are taken over by the standby one, and you do not even have to transfer data. But such a system is quite costly and time-consuming to set up, and do not forget about the need for constant replication.

It becomes clear that such a solution is cost-effective only for critical services with high fault-tolerance requirements and minimal recovery time. As a rule, such schemes are used in very large organizations with high turnover of goods and money. And this scheme is an incomplete substitute for backup, because if the data is damaged by a computer virus, inept user actions, or incorrect application behavior, the data and software on both servers can be affected.

And, of course, no redundancy scheme will solve the problem of maintaining a data archive for a given period.

Backup window

Performing a backup places a heavy load on the server being backed up, especially on the disk subsystem and network connections. In some cases, when the copying process has a high enough priority, this can make certain services unavailable. In addition, copying data while it is being modified presents significant difficulties. Of course, there are technical means that preserve data integrity in this case, but where possible such on-the-fly copying is best avoided.

The way out of these problems suggests itself: postpone the start of copying to an inactive period, when the mutual influence of the backup and other running systems is minimal. This time period is called the "backup window". For example, for an organization working on the 8x5 formula (five eight-hour working days a week), such a "window" is usually the weekends and night hours.

For systems operating on the 24x7 formula (around the clock, all week), the period of minimum activity, when the load on the servers is lowest, is used instead.
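
In practice, fitting into the backup window usually comes down to scheduling. A sketch of /etc/crontab entries that run a full backup on Saturday night and incremental ones on weeknights; the script names are hypothetical:

  # m  h  dom mon dow  user  command
  30   1  *   *   6    root  /usr/local/sbin/backup-full.sh
  30   1  *   *   1-5  root  /usr/local/sbin/backup-incr.sh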

Types of backup

To avoid unnecessary expense when organizing backups, and where possible to stay within the backup window, several backup technologies have been developed; they are applied depending on the specific situation.

Full backup

This is the basic and fundamental method of creating backups, in which the selected data set is copied in its entirety. It is the most complete and reliable backup option, but also the most expensive. If several copies of the data must be kept, the total stored volume grows in proportion to their number. To limit this waste, compression algorithms are used, as well as combinations of this method with other backup types: incremental or differential. And, of course, a full backup is indispensable when you need to prepare for a quick system recovery from scratch.
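
In its simplest form, a full backup with compression on a UNIX-like system is a single command. A minimal sketch; the source and destination paths are examples:

  # Full compressed backup of /data (sketch; paths are illustrative).
  tar --create --gzip --file=/backup/full-$(date +%F).tar.gz /data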

Incremental backup

Unlike a full backup, here not all the data (files, sectors, and so on) is copied, but only what has changed since the last backup. Different methods can be used to determine what changed: for example, on Windows operating systems there is a file attribute (the archive bit) that is set when a file is modified and cleared by the backup program; on other systems the file's modification date may be used. Clearly, a scheme using this type of backup is incomplete unless a full backup is made from time to time. When restoring the system completely, you restore the last full backup and then roll in the data from the incremental copies one by one, in the order they were created.
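
On UNIX-like systems, the modification-tracking approach is implemented, for example, by GNU tar's snapshot file. A sketch of a full-plus-incremental cycle and of the restore order just described; all paths and dates are examples:

  # The first run against an empty snapshot file creates the full copy.
  tar --create --listed-incremental=/backup/data.snar \
      --file=/backup/full.tar /data

  # Later runs store only files changed since the previous run.
  tar --create --listed-incremental=/backup/data.snar \
      --file=/backup/incr-$(date +%F).tar /data

  # Restore: the full copy first, then each incremental in creation order.
  tar --extract --listed-incremental=/dev/null --file=/backup/full.tar -C /restore
  tar --extract --listed-incremental=/dev/null --file=/backup/incr-2009-06-02.tar -C /restore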

What is this type of copying used for? For archival copies, it reduces the space consumed on storage devices (for example, the number of tape media used). It also minimizes the execution time of backup tasks, which can be extremely important when the system runs on a tight 24x7 schedule or large volumes of information have to be pumped through.

Incremental copying has one peculiarity to be aware of: step-by-step recovery also brings back files deleted during the recovered period. Let me give an example. Suppose a full copy is made on weekends and an incremental copy on weekdays. A user created a file on Monday, changed it on Tuesday, renamed it on Wednesday, and deleted it on Thursday. With a sequential, step-by-step recovery of this data over the weekly period, we end up with two files: one with the old name as of Tuesday, before the renaming, and one with the new name created on Wednesday. This happens because different incremental copies stored different versions of the same file, and eventually every variant is restored. Therefore, when restoring data from an archive "as is", in sequence, it makes sense to reserve extra disk space so that the deleted files can fit as well.

Differential backup

It differs from the incremental one in that the data is copied relative to the moment of the last full backup, so the data accumulates in the archive. On Windows-family systems this effect is achieved because differential copying does not clear the archive bit, so changed data keeps being included in the archive copy until a full copy clears the archive bits.

Because each new copy created this way contains the data of the previous one, it is more convenient for complete recovery at the moment of a disaster: only two copies are needed, the full one and the last of the differentials, so the data can be brought back to life much faster than by rolling in all the increments step by step. In addition, this type of copying is free of the incremental peculiarity described above, where after a full recovery old files rise from the ashes like the Phoenix. Less confusion arises.

But differential copying is significantly inferior to incremental copying in saving space. Since each new copy contains the data of the previous ones, the total volume of backed-up data can be comparable to a full copy. And, of course, when planning the schedule (and calculating whether the backup process will fit into the time "window"), you need to allow for the time it takes to create the last, "thickest" differential copy.
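
GNU tar has no separate differential mode, but the same cumulative effect can be achieved by re-copying everything modified since a timestamp written when the full backup finished. A sketch under that assumption; paths are examples:

  # Written once, right after the full backup completes:
  touch /backup/last-full.stamp

  # Each differential run re-copies everything modified since the full backup.
  find /data -type f -newer /backup/last-full.stamp -print0 |
      tar --create --null --files-from=- \
          --file=/backup/diff-$(date +%F).tar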

Backup topology

Let's consider the main backup schemes.

Decentralized scheme

The core of this scheme is a shared network resource (see Fig. 1), for example a shared folder or an FTP server. A set of backup programs is also needed that from time to time upload information from servers and workstations, as well as from other network objects (for example, configuration files from routers), to this resource. These programs are installed on each server and work independently of each other. An undoubted advantage is the simplicity of implementing this scheme and its low cost. Standard tools built into the operating system, or software such as a DBMS, will do as the copying programs: for example, the ntbackup program for the Windows family, the tar program for UNIX-like operating systems, or a set of scripts using the SQL server's built-in commands to dump databases into backup files (a sketch of such a script follows). Another plus is the ability to use various programs and systems, as long as they can all access the target resource for storing backups.
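
A typical building block of such a scheme is a small script that dumps a database and places the result on the shared resource. A sketch using MySQL as an example; the credentials, database name, and mount point are hypothetical:

  #!/bin/sh
  # Dump a database and put the compressed result on the shared resource.
  # All names below are placeholders for illustration.
  mysqldump --user=backup --password=secret mydb | gzip \
      > /mnt/backup-share/mydb-$(date +%F).sql.gz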


The downside is the unwieldiness of this scheme: since the programs are installed independently of each other, each has to be configured separately. It is rather difficult to account for scheduling peculiarities and allocate time slots so as to avoid contention for the target resource. Monitoring is also difficult: the copying process on each server has to be watched separately from the others, which in turn can lead to high labor costs.

Therefore, this scheme is used in small networks, and also where a centralized backup scheme cannot be organized with the available means. A more detailed description of this scheme and its practical organization can be found in [6].

Centralized backup

In contrast to the previous scheme, this one uses a clear hierarchical model working on the client-server principle. In the classic version, agent programs are installed on each computer, and the server module of the software package is installed on the central server. These systems also have a dedicated management console. The control scheme looks like this: from the console we create tasks for copying, restoring, collecting system information, diagnostics, and so on, and the server gives the agents the necessary instructions to perform these operations.

This is how most popular backup systems work, for example Symantec Backup Exec, CA BrightStor ARCserve Backup, Bacula, and others (see Figure 2).


In addition to agents for most operating systems, there are add-ons for backing up popular databases and enterprise systems, for example MS SQL Server, MS Exchange, Oracle Database, and so on.

For very small companies, in some cases a simplified version of the centralized backup scheme can be tried, without agent software (see Figure 3). This scheme can also be used when no agent is implemented for the backup software in use. Instead, the server module uses already existing services: for example, it "scoops up" data from hidden shared folders on Windows servers or copies files over the SSH protocol from servers running UNIX systems (a sketch follows). This scheme has very significant limitations related to saving files that are open for writing: such files will either be skipped and left out of the backup, or copied with errors. There are various workarounds, for example rerunning the job to copy only the previously open files, but none of them is reliable. Therefore this scheme is suitable only for certain situations, for example in small 5x8 organizations with disciplined employees who save their changes and close files before going home. For such a truncated centralized scheme working exclusively in a Windows environment, ntbackup serves well. If you need a similar scheme in heterogeneous environments, or exclusively among UNIX computers, I recommend looking at BackupPC (see [7]).
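
In the agentless variant, the central server simply pulls the data from clients with standard tools. A sketch of pulling /etc from a UNIX server over SSH with rsync; the host name and paths are examples:

  # Pull configuration files from a remote server over SSH (sketch).
  rsync --archive --compress -e ssh \
      backup@server1.example.com:/etc/ /backups/server1/etc/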

Figure 4. Mixed backup scheme

What is off-site?

In our turbulent and changeable world, events can occur that cause unpleasant consequences for the IT infrastructure and for the business as a whole: for example, a fire in the building, a burst central-heating pipe in the server room, or the banal theft of equipment and components. One way to avoid losing information in such situations is to store backups somewhere remote from the main server equipment. At the same time, a quick way to access the data needed for recovery must be provided. The method described is called off-site (in other words, keeping copies away from the premises). Two main ways of organizing this process are used.

Writing the data to removable media and physically moving it. In this case, you need to arrange a way to deliver the media back quickly in the event of a failure, for example by storing them in a nearby building. The advantage of this method is that the process is easy to organize. The downsides are the difficulty of returning the media, the very need to transfer the information to storage, and the risk of damaging the media in transit.

Copying the data to another location over a network channel, for example a VPN tunnel over the Internet. The advantage in this case is that there is no need to carry media anywhere; the disadvantages are the need for a sufficiently wide channel (as a rule, this is very expensive) and for protection of the transmitted data (for example, with that same VPN). The difficulties of transferring large volumes of data can be significantly reduced by compression algorithms or deduplication technology.
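
A network off-site copy often boils down to a compressed transfer over an encrypted channel. A minimal sketch with rsync over SSH, which can substitute for a full VPN in simple cases; hosts and paths are examples:

  # Push the local backup directory to the off-site location (sketch).
  rsync --archive --compress --delete -e ssh \
      /backup/ backup@offsite.example.com:/backup/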

Security measures when organizing data storage deserve a separate mention. First of all, the data carriers must be kept in a protected area, and measures must be taken to prevent unauthorized persons from reading the data: for example, using an encryption system, signing non-disclosure agreements, and so on. If removable media is involved, the data on it must also be encrypted. The labeling system used should not help an attacker analyze the data: the media should carry faceless numbers rather than the names of the transferred files. When transferring data over the network, it is necessary (as already mentioned) to use secure transfer methods, for example a VPN tunnel.
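
Encrypting a backup before it leaves the protected area can be done with standard tools. A sketch using GnuPG symmetric encryption; the paths are examples, and passphrase handling is deliberately simplified (gpg will prompt for it):

  # Encrypt the archive before writing it to removable media (sketch).
  gpg --symmetric --cipher-algo AES256 \
      --output /backup/offsite/full-$(date +%F).tar.gz.gpg \
      /backup/full-$(date +%F).tar.gz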

We have covered the main points of organizing a backup system. The next part will cover guidelines and practical examples for building an effective one.

  1. Description of backup in Windows, including System State - http://www.datamills.com/Tutorials/systemstate/tutorial.htm.
  2. Description of Shadow Copy - http://ru.wikipedia.org/wiki/Shadow_Copy.
  3. Acronis official website - http://www.acronis.ru/enterprise/products.
  4. Description of ntbackup - http://en.wikipedia.org/wiki/NTBackup.
  5. A. Berezhnoy. Optimizing MS SQL Server. // System Administrator, No. 1, 2008 - P. 14-22.
  6. A. Berezhnoy. Organizing a backup system for a small and medium office. // System Administrator, No. 6, 2009 - P. 14-23.
  7. A. Markelov. Linux on guard for Windows. Review and installation of the BackupPC backup system. // System Administrator, No. 9, 2004 - P. 2-6.
  8. Description of VPN - http://ru.wikipedia.org/wiki/VPN.
  9. Data deduplication - http://en.wikipedia.org/wiki/Data_deduplication.
