Virtualizing the IT environment brings benefits such as consolidation, scalability, and high availability. To achieve them, however, you need to pay special attention to the most important elements of the infrastructure. Above all, you should avoid single points of failure in every aspect. As the density of virtual machines per server increases, any failure (of a single component, an entire server, or another device) can trigger a domino effect and make many systems unavailable.

The chain of dependencies is long, and the cost of meeting all the requirements can be prohibitive. These expenses are a barrier, particularly for small and medium-sized companies, but that does not mean they have to give up striving for a highly efficient and cost-effective infrastructure. Virtualizing the SAP environment is one way to optimize a company's IT infrastructure maintenance costs.

Hardware migration as a good opportunity for virtualization

The scale of an SAP environment usually reflects the size and complexity of the company. Regardless of size, however, users’ expectations are similar: everyone wants their activities and tasks to be performed optimally and quickly. Entering purchasing data, preparing goods issues, creating sales reports, etc. – users expect these activities to complete almost immediately. Yet system performance does not keep pace with passing time and growing databases.

On the same physical hardware, performance – taking data growth into account – is bound to get worse almost by definition. And although in many cases costly queries can be optimized through development fixes or the creation of indexes, in general the only way to improve the situation is migration to new hardware resources. When you face this need, you should treat it as an opportunity to combine the hardware migration with virtualization.

An undoubted advantage of virtualization is scalability. Just a few years ago, RAM was so expensive that buffers were optimized kilobyte by kilobyte to save money. Many historical parameters are still configured in those units, or even in single bytes. Today, however, it is difficult to imagine physical servers with 1 GB or 4 GB of RAM; popular smartphones have similar parameters. Physical servers with 256 GB or 512 GB of RAM are currently no exception. Similarly, additional CPU instructions and multiple cores have led to a manifold increase in processor performance.

In this situation, with proper preparation of the company’s infrastructure, you can successfully run SAP systems under virtualization – those processing from a few thousand to a few hundred thousand dialog steps per day, as well as those handling e.g. 1.5 million dialog steps per day. For small systems, there is nothing to prevent running a dozen or more of them on a single powerful server. For the biggest systems, you can dedicate several servers to this purpose (one for the database instance and a few for the central instance and dialog instances).

At the same time, if the CPUs are compatible, migration between new physical servers can take place without powering the virtual machines off; if the newest CPU features are to be used (EVC disabled), the migration can be carried out with a power-off and power-on (a few minutes of downtime).
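Whether a cluster masks newer CPU features can be checked by reading its EVC mode through the vSphere API. Below is a minimal sketch using the open-source pyVmomi SDK; the vCenter address and credentials are placeholders, not values from any real environment.

```python
# Minimal pyVmomi sketch: read each cluster's current EVC mode.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        # currentEVCModeKey is None when EVC is disabled for the cluster
        print(cluster.name, "EVC mode:", cluster.summary.currentEVCModeKey or "disabled")
    view.Destroy()
finally:
    Disconnect(si)
```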

Consolidation and virtualization of IT environments
BCC (now All for One Poland) offers projects to migrate heterogeneous IT environments to a consolidated, unified virtual environment. The benefits for customers include:
– reduction of the required hardware,
– simplification of the architecture,
– the ability to manage the architecture from a single point,
– the possibility of achieving high availability for the whole architecture.
As a result, the consolidation and virtualization of IT environments translate into lower IT maintenance and development costs, as well as significantly easier day-to-day administration and development of the IT infrastructure.
BCC implements consolidation and virtualization projects based on solutions from leading global vendors, including VMware and Microsoft (Hyper-V). In customer projects, we also draw on the competencies and many years of experience gained from using virtualization both for our own purposes and for our customers’ systems hosted at All for One Data Centers.


Several useful tips

It is recommended to use the latest generation of CPUs in the servers, due to their hardware support for virtualization (Extended Page Tables, EPT, in Intel CPUs; Rapid Virtualization Indexing, RVI, in AMD CPUs). For CPUs with hyper-threading (HT), you should consider disabling it on the server. With HT enabled, it may happen that one of the virtual CPUs of a very demanding system "gets stuck" in an HT thread. If this is the case, choose "none" in the "Hyperthreaded Core Sharing" option for that virtual machine. Additionally, it is recommended to disable power management in the BIOS or to switch it to the "OS controlled" option.
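Both settings can be inspected over the API as well. The sketch below reports each host's hyper-threading state and each VM's "Hyperthreaded Core Sharing" flag; connection details are placeholders, and the commented reconfigure shows one way the flag could be forced to "none" for a hypothetical demanding VM.

```python
# Sketch: report host hyper-threading state and per-VM HT core sharing.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        ht = host.config.hyperThread  # HostHyperThreadScheduleInfo
        print(host.name, "HT available:", ht.available, "HT active:", ht.active)
    hosts.Destroy()

    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in vms.view:
        # htSharing: "any" (default), "none", or "internal"
        print(vm.name, "htSharing:", vm.config.flags.htSharing)
        # To force "none" for a very demanding VM (hypothetical choice):
        # vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(
        #     flags=vim.vm.FlagInfo(htSharing="none")))
    vms.Destroy()
finally:
    Disconnect(si)
```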

VMware Tools and drivers must be installed in the virtual machines. When planning vCPU and RAM resources for production machines, be careful not to exceed the physical resources. For a big machine, it is recommended to keep the assigned RAM within a single NUMA node; if that amount is exceeded, it is recommended to use the vNUMA option.
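A rough way to verify the NUMA fit is to divide a host's memory by its NUMA node count and compare the result with each VM's configured RAM. The sketch below makes the simplifying assumption that memory is spread evenly across nodes; connection details are placeholders.

```python
# Sketch: flag VMs whose configured RAM exceeds one (approximate) NUMA node.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        numa = host.hardware.numaInfo
        if not numa or not numa.numNodes:
            continue
        # Assumption: memory is distributed evenly across NUMA nodes.
        node_mb = host.hardware.memorySize // numa.numNodes // (1024 * 1024)
        for vm in host.vm:
            vm_mb = vm.config.hardware.memoryMB
            if vm_mb > node_mb:
                print(f"{vm.name}: {vm_mb} MB exceeds one NUMA node "
                      f"(~{node_mb} MB on {host.name}) - consider vNUMA")
    hosts.Destroy()
finally:
    Disconnect(si)
```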

Although the total number of virtual CPUs can exceed the number of physical cores, for production systems with high utilization this results in a significant deterioration in performance, as it leads to contention between virtual machines competing for physical resources.

With regard to performance, you should also – or perhaps above all – keep in mind the importance of storage array resources. The size of many systems has grown from 100 GB through 500 GB to over 1 TB, and even to a dozen or more TB for very complex systems processing large amounts of data. Progress in this field has also been huge, but in terms of performance this is an area that requires special care when designing and implementing the architecture. While failing to follow the recommendations in this area might not cause user-noticeable delays in small systems with low utilization, it may cause major performance problems in large, heavily utilized systems.

It is recommended to use PVSCSI (Paravirtual SCSI) controllers for virtual machines running SAP systems. In addition, it is advantageous to separate the OS/swap data from the database files and transaction logs, assigning them to different disks and controllers. This separation gives the operating system additional I/O queues on separate disks and controllers.
Four SCSI controllers are available, so it seems optimal to use the first one for OS/swap, the second for the transaction logs, and the third and fourth for the database.
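As an illustration, the sketch below adds a second PVSCSI controller (bus 1) to a VM via pyVmomi, so that transaction-log disks can later be attached to it separately from OS/swap; the VM name is hypothetical and connection details are placeholders.

```python
# Sketch: add an extra PVSCSI controller to a VM for a separate disk group.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sapprd-db")  # hypothetical name
    view.Destroy()

    ctrl = vim.vm.device.VirtualDeviceSpec()
    ctrl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    ctrl.device = vim.vm.device.ParaVirtualSCSIController()
    ctrl.device.busNumber = 1  # buses 0-3 are available
    ctrl.device.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[ctrl]))
    print("Reconfigure task started:", task.info.key)
finally:
    Disconnect(si)
```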

In the network layer, it is recommended to use the vmxnet3 network adapter for virtual machines. As for the host, you should make sure that it is connected to the switches redundantly.
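Attaching a vmxnet3 adapter to an existing VM can be sketched with pyVmomi as follows; the VM name and port group name are placeholders.

```python
# Sketch: attach a vmxnet3 adapter to a VM (names are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sapprd-app1")  # hypothetical name
    view.Destroy()

    nic = vim.vm.device.VirtualDeviceSpec()
    nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic.device = vim.vm.device.VirtualVmxnet3()
    nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName="SAP-Prod-Network")  # standard port group, placeholder name
    nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True)

    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic]))
finally:
    Disconnect(si)
```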

In systems with very intense traffic, it is recommended to switch the virtual machine’s "Latency Sensitivity" option from the default "Normal" to "High". This eliminates the aforementioned contention between vCPUs and physical CPU threads, and it requires a full RAM reservation. It also disables interrupt coalescing and LRO (Large Receive Offload) for vmxnet3.
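Programmatically, the same setting is a one-field reconfigure. A minimal pyVmomi sketch (VM name hypothetical, connection details placeholders):

```python
# Sketch: set a VM's Latency Sensitivity to "high".
# Note: "high" requires a full memory reservation, otherwise the change fails.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sapprd-db")  # hypothetical name
    view.Destroy()

    spec = vim.vm.ConfigSpec()
    spec.latencySensitivity = vim.LatencySensitivity(level="high")
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
```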

Implementing these recommendations brings us closer to optimally prepared virtual machines for the SAP environment. But what if you have completed all the above steps and the performance is still insufficient?

In such cases – and they are not unusual – the next step is a performance analysis of the operating environment. For this purpose, you can use tools such as esxtop, run on the physical server, and the Performance Monitor in the vSphere Client (or its web-based variant), connected to the vCenter of a given environment. Although the purpose of this article is not to describe all of the counters – there are plenty of them – a few important ones get to the root of problems and are worth paying attention to.

Counters, reads, writes

Previously, we pointed to possible contention between virtual machines for CPU resources as a potential cause of performance degradation. The counter that helps detect or confirm such a situation is the CPU "Ready" counter. It specifies the time (or percentage of time) that a vCPU (a CPU assigned to a virtual machine) waited to be scheduled onto a physical CPU (one of the physical cores). The higher the value, the more serious the situation, i.e. the greater the contention.
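The underlying real-time counter is cpu.ready.summation, reported in milliseconds per 20-second sample; dividing by the interval length converts it to a percentage. The sketch below queries it for one VM via pyVmomi; the VM name and connection details are placeholders.

```python
# Sketch: read the real-time CPU "Ready" counter for one VM and print it
# as a percentage of each 20-second sampling interval.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sapprd-db")  # hypothetical name
    view.Destroy()

    pm = content.perfManager
    ids = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
           for c in pm.perfCounter}
    metric = vim.PerformanceManager.MetricId(
        counterId=ids["cpu.ready.summation"], instance="")  # "" = all vCPUs
    query = vim.PerformanceManager.QuerySpec(
        entity=vm, metricId=[metric], intervalId=20, maxSample=15)
    for entity in pm.QueryPerf(querySpec=[query]):
        for series in entity.value:
            for ms in series.value:
                # ready ms / 20,000 ms interval -> percent; for multi-vCPU VMs
                # divide further by the vCPU count for a per-vCPU figure
                print(f"ready: {ms} ms ({ms / 200:.1f}% of interval)")
finally:
    Disconnect(si)
```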

The ability to migrate running virtual machines within the infrastructure, provided by the vMotion mechanism, allows for load balancing. Depending on the configuration, the system can suggest the optimal placement of virtual machines, or even carry out the migrations itself while complying with predefined rules. You should, however, be cautious when creating such scenarios. With growing systems and amounts of processed data, the "appetite" for resources may increase to such an extent that the only solution is further expansion of the environment.

As for the RAM of a physical server, swap usage leads to very serious performance degradation, which follows from the difference in access, read, and write times between RAM and the data store (even an SSD). Comparing, for example, a 1 MB sequential read from an SSD and from memory, the SSD read takes roughly four times longer (source: https://gist.github.com/hellerbarde/2843375). Such operations of writing memory out and reading it back consume additional hardware resources, so you should not allow this situation to occur.

Speaking of reads and writes, let’s pay attention to the especially important storage array layer. All data required by a program must be read, and newly entered data must be written. This is nothing new, but it is worth remembering that poorly written programs will bring down even the most extensive systems built on almost unlimited resources. Meanwhile, the performance of optimally written programs largely depends on the latency of reading and writing data from and to disk. Therefore, an analysis of data store read/write latency is often helpful in resolving performance issues.
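Datastore latency can be read from the same performance manager, using the standard counters datastore.totalReadLatency.average and datastore.totalWriteLatency.average (in milliseconds). A sketch with placeholder names:

```python
# Sketch: print a VM's average datastore read/write latency (ms)
# from the real-time statistics.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sapprd-db")  # hypothetical name
    view.Destroy()

    pm = content.perfManager
    ids = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
           for c in pm.perfCounter}
    metrics = [vim.PerformanceManager.MetricId(counterId=ids[name], instance="*")
               for name in ("datastore.totalReadLatency.average",
                            "datastore.totalWriteLatency.average")]
    query = vim.PerformanceManager.QuerySpec(
        entity=vm, metricId=metrics, intervalId=20, maxSample=3)
    for entity in pm.QueryPerf(querySpec=[query]):
        for series in entity.value:
            avg = sum(series.value) / max(len(series.value), 1)
            print(f"counter {series.id.counterId} "
                  f"(datastore {series.id.instance}): avg {avg:.0f} ms")
finally:
    Disconnect(si)
```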

Scalability, high availability

Many years of BCC’s (now All for One Poland) experience in the virtualization of resources confirm that companies can achieve significant benefits from this solution, primarily thanks to the scalability and high availability of systems. The variety of systems, the growing amounts of data they process, and users’ expectations will always create the need to improve monitoring techniques and performance.