Frequently asked questions about server hardware: why do you need a server, what types of servers are there, and what is server equipment?

To better understand what modern servers are, let us briefly look at their history. Initially, all electronic data processing took place on powerful computers called mainframes; users only had a terminal to access the data. Mainframes were powerful, general-purpose computers that served several thousand users simultaneously. The main feature of their architecture was its balance, achieved with an additional channel-level processor that synchronized with the computing processor via interrupts. While the channel processor was fetching data, the computing processor switched to calculations for other, parallel tasks. The terminal was an alphanumeric display and keyboard connected to the mainframe. Mainframes were supplied by several companies - Hitachi, Amdahl, IBM and others - and, as a rule, their products were incompatible with each other.

Companies were locked into solutions from a single vendor that supplied all hardware and software. Computer systems were very expensive, and switching from one system to another was very painful. In 1971, Intel developed the first microprocessor (the i4004), which made the personal computer - the IBM PC - possible. As PCs grew in power and number, there was a gradual transition from centralized information processing to distributed processing on PCs. Terminals began to be replaced by PCs, and mainframes were gradually abandoned.

However, with the growth in the number and power of PCs and the development of local networks, the need for centralized storage and processing of data arose again.

There was a need for a server for personal computers. A server is a device on the network designed to serve access to shared resources (files, printers, databases, applications, etc.).

Initially, file servers became widespread, where users stored and exchanged their data. With the growth of the global computer network, the Internet, a new class appeared - telecommunication servers (web, FTP, domain name and e-mail servers). With the development of DBMSs, and the resulting changes in how data is stored and accessed, file servers lost their popularity and were largely replaced by database servers. File servers remain to this day, but they have become of secondary importance - they are used only for storing user files and various archives. Recently, terminal servers have grown in popularity: the user's PC serves only as a terminal for displaying and entering data, while all user tasks run on the server. This yields significant savings on the PCs themselves (even low-power computers are suitable as terminals), reduces software installation and support costs, and addresses issues of confidentiality and data security.

To reduce the total cost of ownership (TCO), which includes the costs of hardware, software and equipment maintenance, many companies today are returning to centralized data processing. But now there is no lock-in to a single supplier of hardware and software: there is a wide selection of solutions from various companies on the market.

The server has become a critical element of the modern data processing infrastructure; its failure leads to serious losses of time and, consequently, of money.

Server downtime can be divided into two categories: planned and unplanned. Planned downtime is associated with routine work on the server: preventive maintenance, upgrades, and so on.

By using a server built from genuine server components, you can reduce both planned and unplanned downtime. For example, if one of the fans fails, the administrator can replace it without turning off the server. The same can be done with power supplies (if they support redundancy), with hard drives, and with PCI-X and PCI Express expansion cards.

Unplanned downtime occurs when a server fails. The reasons can vary; the most common are overheating of components after a fan stops, and failure of the disk subsystem after one or more disks die. Failures can also result from software faults caused by incorrect software configuration. In Russia there are still situations where the server doubles as the system administrator's workstation, which leads to the installation of unnecessary software and various system conflicts, so the reliability of the system drops catastrophically.

The reason for a failure may also be a deliberate remote attack on the server aimed at paralyzing its operation. Such an attack can be carried out either from the local network or from the Internet (if the local network has access to the global network).

When the main working server is down, financial losses can be roughly estimated as follows:
losses = number of users working with the server × average salary per user-hour × number of hours of downtime.
Losses from uncompleted transactions, fines, penalties and so on can be added on top.
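As a rough illustration, here is a minimal Python sketch of this estimate; the user count, hourly rate and downtime figures are made-up example values, not data from the article.

    def downtime_cost(users, avg_hourly_salary, hours, extra_losses=0.0):
        """Rough estimate of losses from server downtime:
        users * average salary per user-hour * hours of downtime,
        plus optional extras (uncompleted transactions, fines, penalties)."""
        return users * avg_hourly_salary * hours + extra_losses

    # Example with made-up figures: 40 users, 10 per hour, 3 hours of downtime
    print(downtime_cost(40, 10.0, 3))          # 1200.0
    print(downtime_cost(40, 10.0, 3, 500.0))   # 1700.0 with penalties added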

The worst case is downtime accompanied by data loss. The data can often cost more than the most modern server. To prevent this situation, data must be backed up regularly to tape drives or other storage media - CD-RW, DVD-RW and the like.

In 1995, Intel, the leading supplier of microprocessors, developed the Pentium Pro processor (150 MHz, 512 KB cache), positioned as a server processor. It differed from its desktop counterparts in its large cache and advanced architecture, partially borrowed from RISC processors. In the Pentium Pro, Intel for the first time included dynamic execution technology (Dynamic Execution): instructions can be executed not only sequentially but also in parallel, using branch prediction and out-of-order execution. This significantly increased the efficiency of the processor - the number of instructions executed per clock cycle.

The second innovation was a large built-in L2 cache. For server systems, a larger cache is very important. Processors always operate at frequencies several times higher than the memory frequency, and about half of the instructions in typical applications are memory operations - loading and storing data (Load/Store). Memory access works as follows: if the data is not found in the L1 cache, the L2 cache is consulted, which takes 9–16 processor cycles; if the data is not in the L2 cache either, accessing main memory takes up to 150 processor cycles, during which the processor waits for data. A large L2 cache increases the likelihood of fast data access and therefore increases the efficiency of the processor.
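To see why a larger L2 cache helps, here is a minimal Python sketch of the average access latency implied by the figures above; the hit rates and the L1 cost are illustrative assumptions, only the L2 and memory cycle counts come from the text.

    def avg_access_latency(l1_hit, l2_hit, l1_cycles=3, l2_cycles=12, mem_cycles=150):
        """Weighted average latency (in CPU cycles) of one memory access,
        using the L2 (9-16 cycles) and main-memory (~150 cycles) costs quoted
        above and assumed hit rates (fractions of all accesses)."""
        miss = 1.0 - l1_hit - l2_hit
        return l1_hit * l1_cycles + l2_hit * l2_cycles + miss * mem_cycles

    # A bigger L2 cache raises the L2 hit rate and cuts the average latency:
    print(avg_access_latency(l1_hit=0.90, l2_hit=0.07))  # smaller L2 cache
    print(avg_access_latency(l1_hit=0.90, l2_hit=0.09))  # larger L2 cache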

We can say that Intel first deploys and tests its new technologies on server processors, and that these technologies then gradually spread to desktops. This has already happened with the integrated L2 cache, dynamic execution and Hyper-Threading; next up is 64-bit memory addressing (EM64T).

The Pentium Pro was followed by other server processors: in 1998, the Intel Pentium II Xeon (400–450 MHz, 1–2 MB cache), and later the Pentium III Xeon (700–900 MHz, 1–2 MB cache). In 2001, a server counterpart of the Pentium 4, the Xeon, was released; it is still being developed and used today.

Thus, Intel has been developing server processors and motherboards for nine years. In 1999, to expand its server business, Intel began to develop and produce server cases, and in 2001 it developed its own server chipset, the E7500, for the first time. Before that, Intel and other server motherboard manufacturers used server chipsets from ServerWorks (a division of Broadcom). With the advent of the E7500 and E7501 chipsets, Intel almost completely ousted ServerWorks from the dual-processor chipset market. Today, ServerWorks chipsets are widely used only in multiprocessor systems based on the Xeon MP.

Modern Intel chipsets can be divided into server and desktop ones. In server chipsets, the PCI-X I/O buses are connected directly to the MCH (Memory Controller Hub); in desktop chipsets this is always done through the south bridge (ICH, I/O Controller Hub). In entry-level server chipsets (E7210, 875P), the Gigabit Ethernet adapter is connected directly to the MCH to balance the load and offload the ICH.



Figure 1. Comparison of chipset architectures for dual-socket servers (E7500), entry-level single-socket servers (E7210) and Intel desktop chipsets (I845).

By now, server solutions from Intel have reached maturity: generally accepted open standards have appeared for individual server subsystems - IPMI (remote management), SSI (power supplies and enclosures) and DMI (system management and inventory).

Now let's look at the basic requirements for a server that have emerged by now:

  1. Reliability
  2. Performance
  3. Controllability
  4. Expandability

1. Reliability

Reliability in servers is achieved through the following:

  • The use of special server components that undergo more thorough testing.
  • Redundancy of components: redundant power supplies, fans and hard drives.
  • ECC memory, which automatically corrects single-bit errors.
  • Remote management and diagnostics of the server (the ability to view temperature, fan speed and alerts about critical failures).

2. Performance

At the moment, performance is one of the trickiest server metrics. Entry-level servers may not stand out in processor power, and are sometimes even inferior to ordinary PCs, since some server tasks do not require much computing power.

Let's look at the most common server roles and the load they place on the various subsystems:

Table 1. Approximate load on various server subsystems depending on the server role (1 = lightest load, 3 = heaviest).

Thus, we can identify three server tasks for which the processing power of a modern office computer may be enough:

  • file servers
  • firewalls
  • mail servers
But with an increase in the number of users, and, accordingly, the load, a full-fledged server may be required to perform these tasks.

Let's see what happens if you install a powerful desktop as a database server. The operation of a database server can be roughly described as follows: a request arrives at the server over the network, the necessary data is loaded into RAM from the drives and then processed; the changed data must be written back to the drives, a record of the completed transaction must be made in the log, and the result must be sent back over the network. With a large number of simultaneous requests, what matters is the server's ability to execute several application threads at once, fast access to data (a large amount of RAM) and a fast, reliable disk subsystem.

The bandwidth of the desktop PCI bus is 133 MB/s, which is easily consumed by I/O devices.

A Gigabit Ethernet card has a maximum throughput of 125 MB/s, so two gigabit cards operating simultaneously already require 250 MB/s. Add the traffic from hard drives - up to 40–60 MB/s for IDE and up to 60–70 MB/s for SCSI. If you use a RAID controller with several hard drives in an array, the traffic on the bus grows in proportion to their number, and the server must service all of this traffic simultaneously. As we found out earlier, desktop chipsets have one shared I/O bus, so expansion cards have to compete for bandwidth and the bus becomes a bottleneck. A server, by contrast, is characterized by several independent "wide" I/O buses - today PCI-X, in the future PCI Express.
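A minimal sketch of this bandwidth budget in Python, using the throughput figures quoted above (all values in MB/s); the specific device mix is an illustrative assumption.

    # Shared 32-bit/33 MHz PCI bus: ~133 MB/s for all devices together.
    PCI_BUS_MBPS = 133

    # Illustrative device mix based on the figures in the text (MB/s):
    devices = {
        "gigabit NIC #1": 125,
        "gigabit NIC #2": 125,
        "SCSI disk":      65,
    }

    demand = sum(devices.values())
    print(f"Aggregate demand: {demand} MB/s vs {PCI_BUS_MBPS} MB/s available")
    if demand > PCI_BUS_MBPS:
        print("The shared PCI bus becomes the bottleneck; "
              "independent PCI-X / PCI Express buses avoid this.")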

So, in a server, performance is ensured by the following:

  • Using two or more processors
  • Availability of several independent PCI-X or PCI Express buses
  • Ability to use large amounts of RAM

3. Controllability.

  • The ability to remotely (over the network) obtain information about the temperature of the processors and motherboard, and the fan rotation speed.
  • The administrator can set different options for receiving alerts (by e-mail, pager or SNMP alerts) about events occurring on the server: fans stopping, processor overheating, chassis intrusion, etc.
  • Remote power on/off and reboot of the server, event log viewing, diagnostics and firmware (microcode) updates.

4. Expandability.

  • The ability to use multiple processors
  • The ability to install a large number of memory modules
  • Several independent buses - PCI, PCI-X - for installing additional expansion cards

So we can see that fulfilling all four requirements takes a genuine, full-fledged server. Installing a powerful PC as a working server gives an illusory initial saving, which is then "eaten up" by the costs of its maintenance and upgrades.

Feature | Personal computer | Workstation | Server
1. Reliability
Redundancy of components (nodes) | No | Yes | Yes
ECC memory | Yes (rarely used due to the high cost of such memory) | Yes | Yes (always)
2. Performance
Support for two or more processors | No | Yes | Yes
Maximum supported amount of RAM | 4 GB | 8 GB | 8–16 GB
Independent high-speed I/O buses | 1 PCI Express slot for graphics cards + PCI | AGP + PCI-X + PCI Express + PCI | Multiple independent PCI-X + PCI Express + PCI buses
3. Controllability
Remote diagnostics | No | CPU temperature, fan speed | Event log viewing, temperature sensors, chassis intrusion
Remote control | No | No | Yes
4. Expandability
Multiple independent PCI/PCI-X buses | No | Yes | Yes

Table 2. Comparison of PC, workstation and server capabilities

What does a modern server consist of: a description of the main components and subsystems.

Users often ask: why do servers cost so much more than ordinary powerful computers? How do they differ from office PCs, and why are genuine servers better? This question can only be answered by describing the main components, the "building blocks" from which a server is assembled. Let us give a short description of the main server components and subsystems.

Cases.

There are two main types of server cases: rackmount and pedestal. Pedestal cases are standard "towers" that differ from PC cases only in size, a more capacious drive cage and better cooling. Today pedestal cases are losing popularity and their place is being taken by rackmount cases, which are designed to be installed in a 19-inch telecommunications rack or cabinet. As a rule, rackmount enclosures are equipped with rails that allow the server to be pulled out for service work. Such enclosures take up less space and are more convenient to maintain. Their height is measured in units (U); one unit equals 44.5 mm. The most common rackmount sizes are 1U, 2U, 4U and 5U.
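A small Python sketch of the unit arithmetic; the 44.5 mm unit comes from the text, while the 42U full-height rack is an assumed, typical example.

    UNIT_MM = 44.5  # 1U = 44.5 mm (from the text)

    def rack_height_mm(units=42):
        """Usable height in millimetres of a rack with the given number of units.
        42U is an assumed, typical full-height rack."""
        return units * UNIT_MM

    def servers_per_rack(rack_units=42, server_units=2):
        """How many servers of a given height (in U) fit into the rack."""
        return rack_units // server_units

    print(rack_height_mm())          # 1869.0 mm for a 42U rack
    print(servers_per_rack(42, 1))   # 42 x 1U servers
    print(servers_per_rack(42, 2))   # 21 x 2U servers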

Power supplies

Server components (processors, hard drives, motherboards and so on), due to their high performance, consume more electricity than their office PC counterparts, so servers require more powerful and reliable power supplies. A Xeon server processor consumes up to 120 W, a SCSI hard drive up to 20 W, and a motherboard up to 40 W. Simple arithmetic shows that the minimum power supply for a single-processor system should be about 300 W, and for a dual-processor system 400 W and higher, depending on the configuration.
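The same estimate as a minimal Python sketch; the per-component wattages are the ones quoted above, while the component counts and the headroom factor for fans, expansion cards and peak loads are added assumptions.

    # Per-component power draw quoted in the text (watts)
    CPU_XEON_W  = 120
    SCSI_DISK_W = 20
    MAINBOARD_W = 40

    def psu_estimate(cpus, disks, headroom=1.3):
        """Very rough PSU sizing: sum of component draws plus ~30% headroom
        (the headroom factor is an assumption, not a figure from the article)."""
        raw = cpus * CPU_XEON_W + disks * SCSI_DISK_W + MAINBOARD_W
        return raw * headroom

    print(round(psu_estimate(cpus=1, disks=2)))  # ~260 W -> a 300 W unit
    print(round(psu_estimate(cpus=2, disks=4)))  # ~470 W -> consistent with "400 W and higher"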

To increase reliability, servers often use redundant power supplies. If one power supply fails, the spare one takes over without any loss of power. The administrator receives a console message about the failure, which lets him quickly replace the faulty unit and restore redundancy. Accordingly, such power supplies are hot-swappable and can be replaced without shutting down the server.

Motherboards

Form factor

Server systems use motherboards of two form factors: ATX (E-ATX) and SSI. ATX is an older and more familiar standard, mainly aimed at PCs. Today, only entry-level server boards are created on its basis. SSI (Server System Infrastructure) is a special standard for server components (power supplies and cases), actively promoted by Intel. The introduction of the open SSI standard should simplify the creation of new server cases and power supplies, thereby reducing costs and the final price for the user.

The visible difference between boards of the two standards is the power connector: 20-pin for ATX (E-ATX) and the newer 24-pin for SSI. The board sizes also differ: SSI is always 12" × 13", ATX is 12" × 9.8", and E-ATX is 12" × 13". In principle, an SSI power supply can be connected to an ATX board and vice versa through special adapters, since the SSI connector is essentially the ATX connector plus additional 3.3 V and 5 V contacts.

Supported I/O buses

One of the factors that influences the price of a motherboard is the buses it supports. Entry-level (single-processor) motherboards typically offer the standard PCI bus, although with the release of the Intel E7210 chipset the PCI-X bus appeared on single-processor boards for the first time. More advanced (dual-processor) boards have several independent PCI-X buses. In the near future (late 2004–2005), all server boards are expected to adopt the new PCI Express serial bus. Indeed, PCI Express has many advantages:

  • Increased throughput - 200 MB/s per lane in each direction, with certified x1, x2, x4, x8, x16 and x32 connector options. The bus is full duplex, i.e. data can be transferred in both directions simultaneously, and peak throughput can reach 6.4 GB/s (see the sketch after this list).
  • Support for hot-swapping expansion cards
  • Built-in integrity checking of transmitted data (CRC)
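A minimal Python sketch of the lane arithmetic, using the per-lane, per-direction figure from the list above (the article's figure, not the formal specification):

    PER_LANE_MBPS = 200  # per-lane, per-direction figure used in the article

    def pcie_bandwidth(lanes, full_duplex=True):
        """Peak PCI Express bandwidth in MB/s for a given lane count.
        Full duplex doubles the figure because data flows both ways at once."""
        directions = 2 if full_duplex else 1
        return lanes * PER_LANE_MBPS * directions

    for lanes in (1, 2, 4, 8, 16, 32):
        print(f"x{lanes:<2}: {pcie_bandwidth(lanes) / 1000:.1f} GB/s peak")
    # x16 already gives 6.4 GB/s peak, the figure quoted in the text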

Table 3. Comparative characteristics of data buses

Chipset

Initially, the server chipset market belonged entirely to ServerWorks, but with the release of the Intel Xeon and the E7500 chipset, leadership in the dual-processor chipset market passed to Intel. At the moment, ServerWorks is present only in the market for four-processor servers with its Grand Champion HE chipset.

At the moment there are two Intel chipsets on the dual-processor market: the E7501 for the server segment and the E7505 for workstations (it supports AGP Pro 8x). Boards based on the new Intel E7520 and E7320 chipsets have been announced and will soon go on sale. These chipsets support DDR2-400 memory: peak bandwidth increases by about 20% and reaches 6.4 GB/s, while power consumption is about 40% lower than with DDR memory. The chipsets also support the PCI Express bus.
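The 6.4 GB/s figure follows from simple arithmetic; here is a minimal Python sketch (the dual-channel configuration is an assumption on my part):

    def peak_memory_bandwidth(transfers_mhz, bus_bytes=8, channels=2):
        """Peak DRAM bandwidth in GB/s: data rate (MT/s) x 8-byte bus x channels.
        Dual-channel operation is assumed here."""
        return transfers_mhz * bus_bytes * channels / 1000

    print(peak_memory_bandwidth(400))  # DDR2-400, dual channel -> 6.4 GB/s
    print(peak_memory_bandwidth(333))  # DDR-333, dual channel -> ~5.3 GB/s (about 20% less)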

Intel 875P and Intel E7210 chipsets are used to build single-processor systems.

Chipset | CPU | FSB, MHz | Buses | Memory types
875P | Pentium 4 | 800 | PCI | DDR 266/333/400
E7210 | Pentium 4 | 800 | PCI-X 64/66 | DDR 266/333/400
E7500 | Xeon | 400 | PCI, PCI-X | DDR 200 ECC Registered
E7501 | Xeon | 533 | PCI, PCI-X | DDR 266 ECC Registered
E7505 | Xeon | 533 | PCI, PCI-X, AGP | DDR 266 ECC Registered
E7520 | Xeon | 800 | PCI-X, PCI Express | DDR2 400 ECC Registered
E7320 | Xeon | 800 | PCI-X, PCI Express | DDR2 400 ECC Registered

Table 4. Specifications of Intel server chipsets

Management

The ability to monitor and control a server remotely, independently of the operating system, is critical. Today it is possible to obtain, over the network, information about the temperature of processors and the motherboard, fan speeds and other server parameters. The administrator can set various options for receiving alerts (by e-mail, pager or SNMP alerts) about events on the server: fans stopping, processor overheating, chassis intrusion and so on. It is possible to remotely power servers on and off and to reboot them; these functions are available even when the server is powered down, as long as it is connected to a local network or a dedicated management network and receives standby power. In the future, additional functions are planned: for example, system administrators will be able to remotely access the server's screen and management console, update the BIOS, and so on.

Some manufacturers, such as Intel, integrate remote management functionality directly on their motherboards. Others, such as Tyan, take a more flexible approach and implement the management functions on a separately purchased daughterboard. In the future, Intel plans to switch to a similar scheme and will offer different daughterboards that differ in the remote management functions they support.

RAM

Servers typically support large amounts of memory. Many applications (SQL servers, web servers and so on) load as much data into RAM as possible to speed up operations. On file servers, a file cache in RAM speeds up access to user data. On a Windows-based terminal server, each user session is allocated at least 32 MB of RAM, plus 256 MB for the operating system; it is easy to calculate that a terminal server for 50 users requires at least 2 GB of memory. Typically, dual-processor boards have from 4 to 8 memory module slots, so the maximum volume can reach 16 GB, although in practice using more than 4 GB of memory on 32-bit systems is not optimal. With Physical Address Extensions (PAE), 32-bit systems can address up to 64 GB of memory, but at a performance cost.
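The same estimate as a minimal Python sketch, using the per-session and OS figures from the text:

    def terminal_server_ram_mb(users, per_session_mb=32, os_mb=256):
        """Minimum RAM for a Windows-based terminal server: at least 32 MB per
        user session plus 256 MB for the OS (figures from the text)."""
        return users * per_session_mb + os_mb

    needed = terminal_server_ram_mb(50)
    print(f"{needed} MB, i.e. about {needed / 1024:.1f} GB")  # 1856 MB, about 1.8 GB -> roughly 2 GB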

All server boards support ECC memory. ECC memory can correct single-bit errors and report double-bit errors, thereby improving server fault tolerance. Dual-processor servers use special registered memory, which differs from ordinary memory in containing registers (buffers) that drive the signal to all memory chips. The buffers slightly increase memory latency but make memory access more reliable, which is critical for servers; thanks to the registers, the chipset can also support a larger number of memory slots. Thus, dual-processor servers use registered ECC memory, while single-processor servers are equipped with ordinary memory with or without ECC support.
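As a rough illustration of why ECC modules carry extra bits, here is a Python sketch of the check-bit count for single-error-correction, double-error-detection; the Hamming-bound formula is a standard textbook result and its use here is an illustration, not a description of any specific memory controller.

    def secded_check_bits(data_bits):
        """Check bits for single-error-correction, double-error-detection:
        smallest r with 2**r >= data_bits + r + 1, plus one extra parity bit."""
        r = 0
        while 2 ** r < data_bits + r + 1:
            r += 1
        return r + 1  # the extra bit enables double-error detection

    print(secded_check_bits(64))  # 8 -> a 72-bit-wide ECC module for a 64-bit word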

Processors

Today, 32-bit single-processor systems are built with the Intel Pentium 4, dual-processor systems with the Xeon DP, and systems with four or more processors with the Xeon MP. The Intel Xeon is essentially an Intel Pentium 4 with SMP support enabled. The 130 nm Xeon DP supports a 533 MHz bus, 512 KB of L2 cache and 1–2 MB of L3 cache. The newly introduced Xeon DP built on the 90 nm process (Nocona) supports an 800 MHz bus and 1 MB of L2 cache. The Xeon DP (Nocona) supports EM64T technology, one of whose features is a 64-bit addressing mode that simplifies working with large amounts of RAM. The new Xeon also features enhanced SpeedStep technology, which allows power to be managed dynamically and processor power consumption to be reduced.

The Xeon MP differs from the Xeon DP in its large built-in L3 cache (up to 4 MB), its slower 400 MHz bus, and support for configurations of four or more processors. Xeon processors have three cache levels: L1, L2 and L3. The caches run at the core frequency but have different access latencies: L1 takes 2–9 processor cycles (depending on the data type), L2 adds about 7 more (9–16 in total), and L3 another 14 or so (23–30 in total). In practice, according to various studies, the presence of an L3 cache does not significantly improve system performance on typical tasks. A peculiarity of the Xeon caches is inclusiveness: the contents of the L1 cache are contained in L2 and L3, and data from L2 is duplicated in L3, which reduces the effective total cache capacity. Thus, to get the highest computing performance from the processor, you should look first at the processor bus frequency and the cache sizes at the various levels (a large L2 cache is preferable to an additional L3 cache because of its lower latency).
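A minimal Python sketch of what inclusiveness costs in effective capacity; the cache sizes below are illustrative examples, not a particular Xeon's specification.

    def effective_inclusive_capacity(l1_kb, l2_kb, l3_kb):
        """With fully inclusive caches (L1 contained in L2, L2 contained in L3),
        the distinct data held on-chip is bounded by the largest level; the
        smaller levels only duplicate part of it."""
        total = l1_kb + l2_kb + l3_kb          # raw silicon spent on cache
        effective = max(l1_kb, l2_kb, l3_kb)   # distinct cached data
        return total, effective

    # Illustrative configuration (example sizes, not a spec sheet):
    total, effective = effective_inclusive_capacity(l1_kb=16, l2_kb=512, l3_kb=1024)
    print(f"raw: {total} KB of cache, effective distinct data: {effective} KB")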

Disk subsystem

Discs

Today, the market offers hard drives with three interfaces: Parallel ATA (IDE), Serial ATA (SATA) and SCSI.

Parallel ATA (IDE) is the main interface for personal computers. Its advantages include a low price per megabyte of stored data.

Serial ATA is the successor to the PATA interface. The new standard expands bandwidth to 150 MB/s and uses new narrow cables to connect drives. The SATA standard allows hot-plugging of drives and includes a mechanism for optimizing the command queue inside the controller, which significantly speeds up I/O. Unlike PATA, the SATA standard connects only one device per channel. The SATA and PATA interfaces are not physically compatible, but third-party companies have developed interface converters.

The SCSI interface has traditionally been used in server systems. Its undeniable advantages include the ability to connect up to 15 devices per channel, high throughput (up to 320 MB/s), bus arbitration that reduces processor load, and command queue optimization. These features make SCSI an ideal interface for applications with a large volume of I/O operations. SCSI hard drives, as a rule, have a higher spindle speed - 10,000 or 15,000 rpm - which improves seek and transfer speed. The disadvantage of the interface is the high cost of storage: a SCSI hard drive is three to four times more expensive than a SATA or PATA drive of the same capacity. SCSI disks come with two physical connectors: the 80-pin SCA interface (hot-swappable) and the 68-pin interface (not hot-swappable).

RAID controllers

RAID controllers allow a group of hard drives to be organized into a fault-tolerant array. There are different array levels, but the most common are the following:

  • Level 0 (striping) - data blocks are placed sequentially across several disks; a gain in speed is achieved, but there is no fault tolerance: if one of the hard drives fails, all data is lost.
  • Level 1 (mirroring) - disks are paired, each an exact copy of the other; this level requires at least two disks. 50% of the disk space is lost, but fault tolerance is achieved.
  • Level 5 (striping with parity) - data blocks plus a checksum are placed on the disks, and the checksum is "smeared" across all disks of the array. If one of the disks fails, the data is rebuilt from the checksums onto a replacement disk (hot spare). A minimum of three drives is required for a level 5 array. The checksums consume disk space equivalent to one drive (with n drives, the usable capacity is that of n−1).
  • Level 0+1 or 10 (mirroring + striping) - two groups of mirrored disks that are written to sequentially in blocks. At least four disks are required, and 50% of the disk space is lost. Level 10 combines speed and reliability: such an array can keep working even if half of the disks fail (one from each mirrored pair). Since the controller does not need to calculate checksums, writes are much faster than with level 5.

Thus, level 0 is most often used where high data throughput is needed and data safety is not important, for example in non-linear video editing. Level 1 is used where data must be protected without complex hardware. As a rule, levels 0 and 1 are supported by all RAID controllers, even the cheapest ones, including those integrated on the motherboard. Level 5 appears optimal in terms of reliability versus lost disk space, but implementing it requires a full-fledged RAID controller with hardware acceleration of checksum calculation, and because of that calculation its write speed is lower than that of level 0+1 (10). Level 10 is used where high reliability and read/write speed are needed and the loss of disk space is not critical (the usable-capacity arithmetic is sketched below).
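A minimal Python sketch of the usable-capacity arithmetic for the levels described above; the disk count and the 146 GB disk size are example values, not figures from the article.

    def raid_usable_capacity(level, disks, disk_gb):
        """Usable capacity for the RAID levels discussed above."""
        if level == 0:
            return disks * disk_gb        # striping: all space, no redundancy
        if level == 1:
            return disk_gb                # mirrored pair: half the space
        if level == 5:
            return (disks - 1) * disk_gb  # one disk's worth goes to parity
        if level == 10:
            return disks // 2 * disk_gb   # half lost to mirroring
        raise ValueError("level not covered in the article")

    for level, disks in ((0, 4), (1, 2), (5, 4), (10, 4)):
        usable = raid_usable_capacity(level, disks, 146)
        print(f"RAID {level}: {usable} GB usable from {disks} x 146 GB disks")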

RAID controllers also differ in the bus they use; serious solutions, as a rule, target the PCI-X bus as the fastest available today. Full-fledged RAID controllers additionally carry cache memory on board; there are models with integrated and with expandable memory. Cache size affects array performance, but the relationship is not linear.

A RAID controller cache has two operating modes: Write Through and Write Back. In the first mode, the controller does not acknowledge a write until the data has reached the disks; in the second, it is enough for the data to reach the cache. The second mode significantly speeds up write operations, but there is a risk of losing data on a power failure. To address this, some RAID controller models, usually dual-channel ones, are also equipped with a battery backup unit (BBU). In the event of a power failure or hardware reset, a controller with a battery preserves the cached data and flushes it to the disks.
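A simplified Python sketch of the two cache policies described above; this is a toy model for illustration, not the firmware of any real controller.

    class ControllerCache:
        """Toy model of a RAID controller cache in the two modes described above."""
        def __init__(self, write_back=True):
            self.write_back = write_back
            self.dirty = []  # data held in cache but not yet written to disk

        def write(self, block):
            if self.write_back:
                self.dirty.append(block)    # acknowledge as soon as data is cached
                return "acknowledged (cache)"
            self._flush_to_disk(block)      # write-through: wait for the disks
            return "acknowledged (disk)"

        def _flush_to_disk(self, block):
            pass  # stand-in for the real disk write

        def on_power_loss(self, has_bbu):
            if not self.dirty:
                return "no unwritten data"
            return "BBU preserves the cache; data flushed on restart" if has_bbu else "cached data lost"

    cache = ControllerCache(write_back=True)
    cache.write("block 42")
    print(cache.on_power_loss(has_bbu=False))  # cached data lost
    print(cache.on_power_loss(has_bbu=True))   # BBU preserves the cache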

There are also inexpensive RAID solutions such as Zero Channel RAID (ZCR). A ZCR controller is an expansion card that turns the SCSI channels built into the motherboard into RAID channels. As a rule, ZCR boards carry no cache and use low-powered processors, so their use is justified only for creating level 0 and level 1 arrays.

It is also possible to create a RAID array without a dedicated RAID controller, in software. Many modern operating systems support this (Windows 2000 Server, Windows 2003 Server, Red Hat Linux 9 and others). However, such an array is noticeably slower than a hardware one, since the CPU carries the extra load - this is especially noticeable with level 5. The main problem, though, is the low reliability of such a solution: in the event of a power failure, part of the array's data will inevitably be lost.

Instead of conclusions

Thus, a server is a complex assembly of various subsystems. When configuring a server, you must start from the task it is intended for: with different server roles, the load on the subsystems changes. It is important to find the optimal solution, and for that you need to estimate the future load on the server. This can be done independently or with the help of technical specialists from computer companies who have experience in designing server systems.

Progress does not stand still, and more and more new developments in science and technology are entering our lives. The development of computer technology and the use of its products in everyday life obliges us to understand the purpose of new digital equipment. Almost everyone has heard about server equipment, but few of us know and understand how it works and what it is for.

Access to the Internet is so important today that many people can no longer imagine their lives without it. Communication with relatives on the other side of the world, the banking systems of every country, mobile communications, social media, the answer to any question - all of this the Internet gives us today. Access to it is provided by a huge network of computers interconnected by server equipment.

Functions of server equipment

Server equipment requires the mandatory presence of a "main computer", which is connected via routers and a computer network to other personal computers. The main purpose of server equipment is to provide communication between any personal computer in an organization and the others, and to transmit and store large amounts of information. It is very important that server equipment guarantee the preservation of confidential information and completely rule out the loss of important data. Server equipment must operate continuously and without interruption 24 hours a day, even in extreme conditions. The completion of a huge number of tasks depends on the server's performance.

Requirements for the server room

As can be seen from the main functions performed by the "main computer", the requirements for this dedicated equipment are the highest. The computer must be very powerful, deliver high performance, and be able to work around the clock without stopping. Ideally, a server room would be set up and its software debugged so that task processing subsequently requires no intervention from service personnel, but practical experience confirms the need to keep IT specialists and programmers on staff.

The effective work not only of the other computers but of the entire company depends on the server hardware. A failure in its operation can cause damage to the whole business far exceeding the cost of the server room equipment itself.

What conditions ensure good operation of server hardware?

To ensure the basic functions of server equipment, the following conditions are required:

Availability of a separate room for installing the server equipment;

The required number of motherboards, RAID arrays and hard drives;

Maintaining the temperature required for operation;

A staff of specialists in the field of computer technology.

Timely upgrading of server equipment is also an important condition for good performance. Companies specializing in the manufacture of computers and their components are developing new types of specialized processors, racks for hard drives, motherboards, hardened RAM modules and special cabinets.

When choosing equipment for a server room, it would be a big mistake to focus on cost alone, whether high or low; this approach is simply wrong. To ensure good server performance, you need to be guided by the functional properties of the components, and you cannot do that without the advice of experienced specialists.

The server: the essence of the concept.

A server (from the English "to serve") is a software and hardware component of a computing system that performs service functions in response to client requests, giving clients access to certain resources.

The main task of a server is to fulfill requests from clients or programs. A server is a purely utilitarian thing designed to perform specific tasks; its main property is the execution of a particular task. That is why we advise first determining the task for which a specific server will then be selected.

Balanced server.

A balanced server is the optimal combination of performance, quality and price that the customer and the seller both strive for. Selecting such a combination is our common task.

Selecting the optimal server is not a trivial task; we take into account many factors that the customer may not even be aware of.

Often the customer either misjudges the scale of the tasks or requests a server to a particular specification rather than to the tasks it will be given. That is why selecting a server, like building one, is a job for professionals. Our specialists face a wide variety of tasks every day, accumulating and systematizing many years of experience in this field.

Server selection tactics.

This tactic consists, first of all, of determining the set of tasks the server will solve in the future, along with the necessary performance margin and scaling capabilities. In addition, you need to find out whether the client needs a fault-tolerant design and, finally, decide on the budget. If the assigned tasks exceed the allocated budget, our specialists either adjust the tasks or suggest increasing the budget. An essential factor is the scalability of the solution to meet ever-growing customer requirements; this allows the problem to be solved with minimal initial and subsequent investment, reducing the total cost of the finished solution.

The tactics described above, combined with the experience and professionalism of our specialists and managers, allow our clients to receive exactly the solution they need.

So far we have talked about servers only in the context of virtualization and applications. In our articles you will often find terms such as "mail server", "proxy server" or "web server" - that is, the main attention has been paid to the software that services user requests. Today's article focuses on the server hardware. We will talk about components such as the processor, hard drives and memory, look at the types of servers, and see how they differ from personal computers. The server is entrusted with important tasks such as centralized, secure data storage, access control and uninterrupted office operation, so its selection must be approached responsibly.

The typical components of a server are generally similar to those found in client computers, but they are of higher quality and performance.

Server components

Classification by type

All of the above components are placed in the server case; in fact, servers are classified by the type of case.

  • Floor-standing / tower. Used for tasks that require only a small number of servers, or just one, and when there is no specially designated room (server room) for them. Easy to assemble and upgrade.

  • Rackmount. Compact, designed specifically to save space. These enclosures are always manufactured to the same 19-inch width to fit into the corresponding racks. The taller the rack, the more units (U) of servers it can hold.

  • Blade servers. The most compact type. All server components are located in a special chassis, close to each other; the chassis gives them access to shared components such as power supplies and network controllers.

Server equipment enables secure exchange, storage and updating of data within a company. The main tasks of servers are storing information, protecting it against loss, and processing data.

What is the equipment?

Server equipment is a hardware complex that, depending on the scale and architecture of the deployment, includes server and network devices with different characteristics. It is housed in a specially dedicated place (room) called a server room.

Modern hardware companies place this equipment in server cabinets for ease of installation and to save space. These cabinets are fitted with devices from leading industry manufacturers.

For example, Dell's equipment follows a modular concept of server assembly: if a server component fails, it can easily be replaced with an identical or upgraded one.

There are tower and rack-mount enclosures with different height and width dimensions.

For example, the compact Dell PowerEdge T13 can work in confined spaces, with performance that is not inferior to equipment in other case options.

Looking under the cover of the case, you can find a familiar arrangement of server components.

The cases of personal computers and servers are in many ways similar, but the equipment installed in them solves different problems: a computer serves the tasks of its user, while a server serves hundreds and thousands of connected users around the clock.

Installation

Ready-made hardware is installed either standalone or in a specialized cabinet (or rack) that compactly accommodates several servers of a particular form factor.

Additionally, these cabinets are fitted with small glass or plastic doors that allow access to parts of the system.

Racks can be supplemented with:

    cooling system;

    power distributors;

    LED indicators.

In addition to rack-mount enclosures, servers can also be mounted on the ceiling or placed on a table, depending on the available space and the customer's wishes. Installation, commissioning and software configuration should be entrusted to professionals. The same applies to the network equipment, which is connected to each hardware device with patch cords and links the servers to the global network via communication channels.