Running SAP Applications on the Microsoft Platform

Learnings From a Recent Large Cloud Migration Project


Introduction

Over the last year Microsoft has been working with Accenture to move a large global multi-national company from an end-of-life AIX pSeries/DB2 platform to a pay-per-use, consumption-based Windows, SQL Server & Azure platform.

This customer has just gone live after migrating a complex SAP landscape with multiple 9TB databases from their current hosting partner to Azure. The customer is the largest Production SAP on Public Cloud customer worldwide as measured by SAPS, number of dialog steps per day and by overall landscape size and complexity.

The entire production migration of all SAP and non-SAP applications from the current hosting partner's data center to Azure, including data transfer, user validation and signoff, was completed in 72 hours.

To reduce administration costs, improve support and lower complexity, the operating system was changed from UNIX to Windows and the database was changed from DB2 to SQL Server.

After migrating to SQL Server and converting to Column Store, the BW database size was reduced from nearly 9TB to less than 3TB.

The customer runs a complex SAP landscape that would be commonly seen in a large global multi-national company:

SAP ECC 6.0 EHP7
SAP Portal
SAP Portal – External
SAP BW 7.31
SAP Business Objects
SAP Content Server
SAP SCM
SCM LiveCache
SCM Optimiser
SAP Mii
SAP Mii PCO
SAP TREX
BSI Tax Factory
SAP Solution Manager
SAP SLD
SAP NWDI
SAP PI
WWI (EH&S) & XI Adapter

 


1. Start Planning, System Survey & Patching Early

Log on to production systems and capture screenshots to establish the peak throughput, patch levels, CPU, disk and memory consumption and database size.

In transaction ST03 identify the performance trend (such as month end spikes) and peak dialog steps per day. Take screenshots showing the typical and peak values per hour, day, week and month.

The performance data should be broken out by each task type, in particular Dialog, Background, Update, RFC and HTTP(S).

Collate an inventory of the SAP landscape, recording the SAP application, version, patch levels, database sizes and the sizes of the largest tables.

SAP Note 706478 – Preventing Basis tables from increasing considerably should be reviewed to determine if system or log tables can be archived.

Identify unusually large tables as such tables can impact export and import times.

Careful attention should be paid to the non-Netweaver SAP applications such as TREX, Content Server, LiveCache and other standalone engines.

If required, plan early to apply support packs to allow modern releases of SQL Server or another DBMS to be used. At the same time the kernels on the source system should be updated so that they are the same as or similar to the kernel planned for the target system. For example, if the ECC 6.0 system is on an old deprecated 7.40 kernel, update the kernel to the latest 7.45 series kernel on the old UNIX system, thereby avoiding a large change to the kernel layer during the migration weekend.

After compiling an inventory of the current SAP landscape, build a PowerPoint summarizing the current state and the remediation actions needed (such as kernels or support packs), and highlight any issues (such as the customer running a desupported SAP application).

This PowerPoint should be used as the starting point for the target system sizing and solution design.

Do not rely on “calculated SAPS” (based on CPU models) or the theoretical SAPS of the hardware platform. The current hardware platform may be either oversized (a lot of excess capacity) or undersized (over-utilized).

Start developing a naming convention and review other blogs on this site discussing sizing. The 2-tier and 3-tier SAPS values for Azure VMs can be found here:

SAP Applications on Azure: Supported Products and Azure VM types

2. Use Intel Based Servers to Export UNIX Systems

When it is time to start the first test migrations use Intel based servers to export the databases.

The performance capabilities of UNIX servers are far below those of modern Intel servers, particularly with respect to the very important SAPS-per-thread metric. SAP Note 1612283 has more information.

Running R3load on UNIX servers will significantly slow down the OS/DB Migration. It is recommended to deploy a number of two-socket Intel Xeon E5-2667 v4 (8 cores / 3.2 GHz) servers as close to the source UNIX servers as possible. Ideally the Intel servers and the UNIX servers should be on the same network switch.

Due to the very high network utilization it is not recommended to use VMware; physical servers will provide the best performance. It is recommended to use 10 Gigabit networking if this is supported on the UNIX server.

Either Robocopy or the built-in MigMon FTP client can be used to transfer dump files from the source R3load servers at the existing datacenter into Azure.
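For example, a hedged robocopy invocation to copy a dump-file directory to a share on an Azure VM might look like the sketch below (all paths, the thread count and retry settings are illustrative):

# Illustrative sketch only: copy R3load dump files to a share on an Azure VM.
# /E copies subdirectories, /MT:32 uses 32 copy threads, /R and /W limit retries.
robocopy "D:\Export\ABAP\DATA" "\\azurevm1\migration\DATA" /E /MT:32 /R:3 /W:5 /NP /LOG+:C:\Temp\robocopy.log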

3. Carefully Document & Plan HA/DR Solution

The Azure platform and SQL Server AlwaysOn provide many built-in High Availability and Disaster Recovery solutions and features. These features are integrated with SAP solutions and documentation is provided on how to deploy them.

The table below shows the HA and DR technology used for each SAP component:

SAP Component | HA Technology | DR Technology | Comment
SAP AppServer | n/a (not a SPOF) | Azure Site Recovery | The SAP app server is not a Single Point of Failure and does not need HA
SAP ASCS | SIOS | SIOS + Windows Geocluster | SIOS is a shared disk solution that creates a shared disk for sapmnt. See Note 1634991 for Geocluster
SQL Server | AlwaysOn | AlwaysOn | Local HA uses AlwaysOn in Synchronous mode. DR uses AlwaysOn in Async mode
Standalone Engine (file based) | Scale out – no SPOF | Azure Site Recovery | Azure Site Recovery takes a file system consistent clone
Standalone Engine (DBMS based) | DBMS level replication | DBMS level replication | LiveCache Warm Log Shipping

4. Use Azure Site Recovery

The Azure platform includes a built in Disaster Recovery solution called Azure Site Recovery. Azure Site Recovery can replicate VMs from on-premises to Azure and also from Azure to Azure.

The Azure to Azure Disaster Recovery scenario is in development as of October 2016 and will be released in the near future.

Azure Site Recovery is discussed in this blog and this site

5. Ensure SMIGR_CREATE_DDL is Updated & Note 888210 is Reviewed Regularly

SAP Note 888210 – NW 7.**: System copy (supplementary note) is the “master” note for heterogeneous system copy.

The program SMIGR_CREATE_DDL is used to handle non-standard tables such as those found on SAP BW systems and is frequently updated. It is critically important to ensure all notes referenced in 888210 are applied on the source system before the export happens.

The OS/DB Migration process for SAP BW systems will not function correctly if SMIGR_CREATE_DDL is not up to date.

Review Note 888210 throughout the project and implement all notes required into the source systems (UNIX/DB2 or Oracle). Regularly check this note for changes.

Also review Note 1593998 – SMIGR_CREATE_DDL for MSSQL

6. Use Azure Resource Manager & D Series v2 VMs

All new SAP on Azure deployments should be based on Azure Resource Manager (ARM) and not the old ASM model.

New faster Azure D-Series v2 VMs are now available in most Azure regions with significantly faster 2.4 GHz Intel Xeon E5-2673v3 (Haswell) processors.

These VM types are very beneficial for SAP applications because they have a high SAPS/thread value.

New SAP SD 2-Tier and 3-Tier benchmarks for D-Series v2 are published on the SD Benchmark website

Large database servers can run on the Azure G-Series, including the GS5 with 32 CPUs and 448GB of RAM, supporting 64 Premium Storage disks, each 1TB in size and with 5,000 IOPS.

The new DS15v2 also supports Accelerated Networking. This is of great benefit during a migration and for busy DB server VMs.

7. Use Premium Storage for Development, QAS and Production

Our general guidance is to recommend all customers to use Premium Storage for the Production DBMS servers and for non-production systems.

Premium Storage should also be used for Content Server, TREX, LiveCache, Business Objects and other IO-intensive non-NetWeaver file-based or DBMS-based applications.

Premium Storage is of no benefit on SAP application servers.

Standard Storage can be used for database backups or for storing archive files or interface files.

More information can be found in SAP Note 2367194 – Use of Azure Premium SSD Storage for SAP DBMS Instance

8. Use Latest Generally Available Windows Server Release & SQL Server Release

The Azure platform fully supports both Linux and Windows, and multiple databases are supported, such as Oracle, DB2, Sybase, MaxDB and HANA. Many customers prefer to move to SQL Server to reduce the number of support vendors to just two: the entire technology stack is then only SAP and Microsoft, greatly reducing the support requirements.

Microsoft has released a Database Trade In Program that allows customers to trade in DB2, Oracle or other DBMS and obtain SQL Server licenses free of charge (conditions apply).

The latest Magic Quadrant for Operational Database Management Systems places SQL Server in the lead.

Windows 2016 will be released by Microsoft at Ignite in September.

SQL Server 2016 is already Generally Available and more information can be found in SAP Note 2201059 – Release planning for Microsoft SQL Server 2016

Windows 2012 R2 is released and fully supported by SAP for all modern Netweaver components and most standalone engines (such as LiveCache etc). See SAP Note 1732161 – SAP Systems on Windows Server 2012 (R2)

The list of supported OS/DB combinations for Azure is documented in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types

It is strongly recommended to use the latest available SQL Server Service Pack and CU. As a minimum, SQL Server 2014 SP1 CU8 or SQL Server 2016 CU2 is recommended.

9. Create “Flight Plan” for Migration Cutover Weekend

During the Dress Rehearsals establish the expected rate and duration for the export and import.

It is not sufficient just to establish the runtime of the migration. It is also important to plot the “Flight Plan” so that progress can be compared to a known expected value.

The Y axis represents the number of packages completed successfully and the X axis represents elapsed hours.

If there is significant deviation away from the expected export or import progress, troubleshooting can begin very quickly rather than when the project has already exceeded the downtime window.

[Charts: Export Flight Plan and Import Flight Plan]
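The completed-package counts for such charts can be collected from the MigMon state files. A minimal PowerShell sketch, assuming a MigMon import_state.properties file in which finished packages are flagged with a trailing “+” (verify this against your MigMon version), could look like this:

# Hypothetical sketch: poll a MigMon state file and log completed-package counts.
# Paths are placeholders; assumes finished packages end with "=+" in the state file.
$stateFile = "C:\MigMon\import_state.properties"
$logFile   = "C:\MigMon\flightplan.csv"
"Timestamp,PackagesDone" | Out-File $logFile
while ($true) {
    $done = (Select-String -Path $stateFile -Pattern '=\+$').Count
    "$((Get-Date).ToString('s')),$done" | Add-Content $logFile
    Start-Sleep -Seconds 300   # sample every 5 minutes, then graph in Excel
}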

10. Use DFS-R for SAP Interface Directories

Windows Server includes a feature called Distributed File System Replication (DFS-R).

DFS-R is a solution to provide High Availability for Interface File Systems on Azure.

DFS namespaces use the format \\DomainName\RootName and are typically too long for the SAP kernel to handle.

An example might be \\corp.companyname.com\SAPInterface.

To avoid this problem it is recommended to create a CNAME in Active Directory DNS.

The CNAME maps the namespace to an alias that meets the maximum hostname length SAP supports (13 characters).

For example:

\\corp.companyname.com\SAPInterface -> \\sapdfs\SAPInterface
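On a Windows DNS server, such an alias can be created with the DnsServer PowerShell module. The sketch below uses the illustrative names from the example above; depending on the environment, additional SMB settings (such as disabling strict name checking) may also be required:

# Sketch: create a CNAME alias "sapdfs" for the DFS namespace root.
# Zone and alias names are examples only; requires the DnsServer module.
Import-Module DnsServer
Add-DnsServerResourceRecordCName -ZoneName "corp.companyname.com" -Name "sapdfs" -HostNameAlias "corp.companyname.com"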

Additional points:

  1. Typically, Full Mesh replication mode is used.
  2. DFS-R is asynchronous, not synchronous, replication.
  3. Test 3rd-party backup utilities with DFS file systems.
  4. Carefully test the security and ACLs on DFS namespaces and resources.
  5. DFS-R should not be used for high-performance or high-IOPS workloads.

11. Ensure Azure Monitoring Agents Are Deployed

Azure monitoring agents must be installed to enable SAP support to view Azure Virtual Machine properties in ST06. This is a mandatory requirement for all Azure deployments including non-production deployments.

Details can be found in SAP Note 2015553 – SAP on Microsoft Azure: Support prerequisites

The following PowerShell commands are used to set up the Azure monitoring extensions:

Get-AzureRmVMAEMExtension

Remove-AzureRmVMAEMExtension

Set-AzureRmVMAEMExtension

Test-AzureRmVMAEMExtension
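As a minimal sketch, enabling and verifying the extension for a single VM could look like this (resource group and VM names are placeholders):

# Sketch: enable and test the SAP enhanced monitoring extension for one VM.
# "SAP-RG" and "sapapp01" are placeholder names; requires the AzureRM module.
Login-AzureRmAccount
Set-AzureRmVMAEMExtension -ResourceGroupName "SAP-RG" -VMName "sapapp01"
Test-AzureRmVMAEMExtension -ResourceGroupName "SAP-RG" -VMName "sapapp01"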

12. Start Performance Testing & Validation Early

Performance testing activities are broken into two main areas:

  1. Performance of the OS/DB Migration Export, upload to Azure and Import process
  2. Performance testing of the SAP applications

The SAP OS/DB Migration process is highly tunable and can be optimized extensively. Test cycles can be completed with different package and table split combinations and the export.html and import.html reviewed. Based on this large global multinational customer and other large customers, we can say the following:

  1. Huge systems with multiple >8-10TB databases, many TB of R3load dump files and many complex non-SAP interfaces and 3rd party systems can be moved to Azure in about 72 hours
  2. Typical systems with multiple 3-7TB databases, 1-2 TB of R3load dump files and some interfaces and non-SAP applications can be moved in 48 hours
  3. Small to Medium sized customers with 1-3TB databases, 1TB of R3load dump files and 3-4 non-SAP applications can move to Azure in 24-36 hours

It is common to find index build tasks with very long runtimes. On larger migrations it is common to remove several index build packages from the downtime phase of the migration. The indexes can then be built during the post-processing phase of the migration (while RFCs are being changed, TMS is being set up, etc.).

Performance testing of the SAP applications is a more involved process for very large and complex migrations. Most customer migrations of 3-5TB sized databases do not require an extensive performance test.

Performance testing for large global multinational companies begins with collecting an inventory of the top 10-30 online reports, batch jobs, BW process chains, BW reports and interface uploads/downloads. During the migration project it is recommended to perform a “clone of production” migration (production databases are copied to another set of servers and exported) or an actual export of production (a “Dry Run”). These full data volume copies of production can be used for performance testing. Customers with large BW landscapes should set up a clone of the production BW and several source systems such as ECC.

13. Document OS/DB Migration Configuration & Design (Only Needed for >3TB Databases)

A comprehensive OS/DB Migration FAQ is published on this blog. Make sure to download the most recent version of this document.

Large and complex OS/DB migrations should have a “Migration Design Document”. Typically this is a PowerPoint. As each Migration Test is performed the Export.html and Import.html should be added as an appendix so the impacts of tunings can be reviewed.

Errors, failed packages and other abnormal events should be documented in the appendix too.

Recommended parameters to include in the Migration Design Document:

  • Package Splitting and Table Splitting Design – how many splits on large tables etc
  • Number of r3load processes for export and import
  • Resource Governor memory cap – typically around 5%
  • Number of SQL Server datafiles and their distribution onto disks
  • Number and type of Premium Storage disks
  • Consider placing the SQL Server Transaction Log file on D: (temporary disk)
  • Supplementary Log file on additional P30(s)
  • Perfmon logging recommended counter set:
    - BCP rows/sec and BCP throughput KB/sec
    - SQL Server log usage, memory usage
    - Processor Information – CPU per individual thread
    - Disk – ms/read and ms/write for each disk
    - Collect every 45 or 90 seconds and graph in Excel
  • Receiver Side Scaling – test setting this on and off on DB server
  • BCP Batch size – Review this blog and test values to determine best performance
  • Buffer Pool Extension setup and configuration – typically not used during OS/DB migration
  • Database Recovery Mode – SIMPLE
  • Transparent Data Encryption – we typically recommend importing into a TDE enabled database
  • SQL Server Settings – MAXDOP, Max/Min Memory size
  • Traceflags set during import
  • CPU consumption graph on r3load servers and database server during Export and Import
  • SMIGR_CREATE_DDL configuration (such as Column Store settings)

14. Support Agreements and Azure Rapid Response

For very large OS/DB migrations or Azure projects it is strongly recommended to have a Microsoft Premier Support agreement or, additionally, to use Azure Rapid Response.

More information about Azure Support plans can be found here:

https://azure.microsoft.com/en-us/support/plans/

https://www.microsoft.com/en-us/microsoftservices/premier_support_microsoft_azure.aspx

Azure Rapid Response offers a 15 minute callback for urgent support topics.

15. Use SQL Server 2014 or 2016 Column Store for SAP BW Systems

A specific blog for SAP BW and the learnings out of this project will follow later.

SQL Server has included a built-in Column Store feature since SQL Server 2012. As with other customers before, in this migration project the customer decommissioned SAP BWA and replaced these appliances with SQL Server Column Store.

Comprehensive documentation and blogs are available on SQL Server Column Store. The SQL Server Column Store implementation is designed for large scale data warehouse deployments.

16. Use SQL Server Transparent Data Encryption, Azure Advanced Disk Encryption & Network Security Groups to Secure Solution

SQL Server supports Transparent Data Encryption, and this feature is frequently used by cloud customers. SQL Server TDE integrates with the Azure Key Vault natively in SQL Server 2016 and via a free utility on SQL Server 2014 and earlier.

TDE guarantees that database backups are secured in addition to protecting the “at rest” data.

SQL Server TDE supports common encryption algorithms. We generally recommend AES-256.

Testing on customer systems has shown that it is faster to import directly into an empty already Encrypted database than to apply TDE after the database import.

The overhead of importing into a TDE-enabled database is approximately 5% CPU.

Therefore it is recommended to follow this sequence:

  1. Ensure the Perform Volume Maintenance Tasks privilege is assigned to the SQL Server service account to allow Instant File Initialization (datafiles can then be created quickly, but log files still need to be written to and zeroed out)
  2. Create a database of the desired size (for example, for a 7.2TB database, a database of approximately 8TB would be created)
  3. Create a very large transaction log, as a lot of log space will be consumed during the import
  4. Configure Azure Key Vault and TDE, and monitor the database encryption status and percent complete. The status can be found in sys.dm_database_encryption_keys (see the sketch after this list)
  5. When the encryption status = 3, the R3load import can start
  6. When the import and post-processing have finished, create a backup
  7. Restore the backup on the replica node(s) and configure AlwaysOn
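As a hedged sketch of steps 4 and 5, the encryption status can be polled with Invoke-Sqlcmd before the R3load import is started (instance and database names are placeholders):

# Sketch: wait until the target database is fully TDE encrypted (encryption_state = 3).
# "sqlvm1" and "PRD" are placeholder names; requires the SqlServer module.
do {
    $state = Invoke-Sqlcmd -ServerInstance "sqlvm1" -Query @"
SELECT DB_NAME(database_id) AS db, encryption_state, percent_complete
FROM sys.dm_database_encryption_keys
WHERE DB_NAME(database_id) = 'PRD';
"@
    Write-Host ("{0}: state={1}, {2:N1}% complete" -f $state.db, $state.encryption_state, $state.percent_complete)
    Start-Sleep -Seconds 60
} while ($state.encryption_state -ne 3)
# encryption_state 3 = encrypted: the R3load import can start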

The Azure platform also supports Disk Encryption. This technology is similar to Windows BitLocker and can be used to encrypt the VHDs that are used by a VM.

Note: it is not necessary or beneficial to use Azure Disk Encryption and SQL Server TDE at the same time. We recommend against storing SQL Server data and log files that have been encrypted with TDE on disks that have been encrypted with ADE. Using both SQL Server TDE and ADE can cause performance problems.

The Azure networking platform supports creating ACLs on servers and subnets. Network Security Groups can be used to enforce rules that allow or disallow specific IP addresses and ports.

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research


SQL Server 2016 improvements for SAP (BW)


SAP recently released support for SQL Server 2016 for some SAP NetWeaver based systems, see https://blogs.msdn.microsoft.com/saponsqlserver/2016/10/04/software-logistics-on-sql-server-2016. SQL Server 2016 contains many new features which can be used by SAP. For SAP BW, particularly the improved columnstore features are relevant. An overview of SQL Server Columnstore in SQL Server 2014 is published here: https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/24/concepts-of-sql-server-2014-columnstore. The most important columnstore improvements in SQL Server 2016 are discussed below:

Batch mode available for more operators and with maxdop 1

Since SQL Server 2014 there are three types of parallelism used for columnstore processing: Firstly, several SQL Server queries run in parallel for a single SAP BW query (for example, a SQL query against the f-fact table and another SQL query against the e-fact table). Secondly, a single SQL query can use many CPU threads at the same time (maxdop). Thirdly, a single CPU thread can process a set of rows, typically up to 900 rows (batch mode), rather than processing row by row (row mode). SQL Server 2016 can use batch mode for many more operators, for example the sort operator. An overview is available at https://msdn.microsoft.com/en-us/library/dn935005.aspx.
In SQL Server 2014, single-threaded queries running under MAXDOP 1 or with a serial query plan cannot use batch mode. Therefore, we recommended a relatively low value for the SAP RSADMIN parameter MSS_MAXDOP_QUERY, which controls the maxdop setting used for BW queries (see https://blogs.msdn.microsoft.com/saponsqlserver/2013/03/19/optimizing-bw-query-performance). However, in SQL Server 2016 batch mode can be used with maxdop 1. Therefore, queries are still pretty fast, even when there is only one CPU thread available during high workload.

Columnstore uses vector instructions (SSE/AVX)

In addition to intra-query parallelism (maxdop) and batch mode, SQL Server 2016 uses another kind of parallelism: CPU vector instructions (AVX or SSE). You benefit from this feature on most modern CPU generations, and you can even use it in virtualized environments. More recent versions of the virtualization layers used for private and public clouds, like Azure, support SSE/AVX instructions as well. SQL Server's usage of SSE/AVX is described here: https://blogs.msdn.microsoft.com/bobsql/2016/06/06/how-it-works-sql-server-2016-sseavx-support/

Rowgroup Compression also merges rowgroups

In SQL Server 2014, you can perform columnstore rowgroup compression using the SQL command ALTER INDEX REORGANIZE. In SQL Server 2016, this SQL command performs additional optimizations: It also merges small rowgroups into a larger one for improving query performance. This is documented here: https://msdn.microsoft.com/en-us/library/dn935013.aspx. In SAP BW, there is no need to run this SQL command manually. This is automatically done during BW Cube Compression and when executing an Index Repair (within a BW Process Chain).
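If you want to trigger the rowgroup compression manually outside of BW, the underlying SQL command can be issued directly, as in this hedged sketch (server, database and table names are placeholders):

# Sketch: compress (and, on SQL Server 2016, merge) columnstore rowgroups of one table.
# All names are placeholders; requires the SqlServer module.
Invoke-Sqlcmd -ServerInstance "sqlvm1" -Database "BWP" -Query @"
ALTER INDEX ALL ON [dbo].[MyColumnstoreTable] REORGANIZE;
"@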

Parallel update of sampled statistics

Update Statistics on BW fact tables can be very time consuming, in particular for a BW Flat Cube (see https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/27/columnstore-optimized-flat-cube-in-sap-bw). By default, an Update Statistics on the fact table(s) is performed during BW Cube Compression. SQL Server 2016 can use up to 16 CPU threads for running Update Statistics with default sample rate (vs. one thread in SQL Server 2014). See also https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/05/23/query-optimizer-additions-in-sql-server.

Writable Nonclustered Columnstore Index (NCCI)

In SQL Server 2016, you can create an additional, writable columnstore index on top of a table which already has a primary key and other rowstore indexes. This feature is planned to be used in future SAP BW releases for implementing the columnstore on other BW objects besides the fact tables.
NCCIs are also useful for SAP ERP. SAP will not deliver NCCIs in standard ERP systems. However, customers can create an NCCI for some tables of their ERP system on their own. A detailed description and recommendations for such a project will be published in a blog next year.

This list of SQL Server 2016 features is by far not complete. For example, we did not mention additional indexes on top of a Clustered Columnstore Index (CCI) or the new SQL Server Query Store. The intention of this blog was to give an overview of the features that justify a database upgrade to SQL Server 2016 for SAP BW.

Simplified and faster SAP BW Process Chains


Based on customer feedback over the course of the last 12 months, we implemented several improvements in SAP BW. The code changes significantly improve SAP platform migrations of very large SAP BW instances from other DBMS to SQL Server. Furthermore, they simplify and accelerate SAP BW process chains for very large BW systems. The most important improvements are related to columnstore rowgroup compression, which is described below. Other recent improvements will be discussed in a separate blog.

Customers who use the columnstore on Microsoft SQL Server 2014 (or newer) can now benefit from performance improvements and additional functionality. The new code is delivered in upcoming SAP BW Support Packages (SPs) and as a correction instruction in SAP Note 2329540 – Rowgroup Compression Framework for SAP BW / SQL Server. Implementing this SAP note requires a minimum SAP BW SP. Therefore, we increased the minimum recommended SP and updated SAP Note 2114876 – Release Planning SAP BW for SQL Server Columnstore.

BW Rowgroup Compression

A SQL Server columnstore is organized in rowgroups, with up to one million rows per rowgroup (see https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/24/concepts-of-sql-server-2014-columnstore for details). For best query performance, you should trigger a columnstore rowgroup compression at the end of all BW Process Chains which load data into a cube. You can use either a BW Cube Compression or the BW Process Chain type “Index Repair” for this purpose. However, some customers had issues with this:

  1. When converting all cubes to columnstore, you had to add “Index Repair” to all existing process chains. Customers often wanted to avoid the pain of changing each and every process chain.
  2. In some BW systems we migrated, we were faced with hundreds of thousands of rowgroups on SQL Server. Since a single rowgroup can contain up to one million rows, these rowgroups could theoretically cover some hundred billion rows. It soon turned out that queries against columnstore-related SQL Server system tables (sys.column_store_row_groups) did not scale too well. Therefore, reading the metadata from SQL Server was even slower than running a rowgroup compression. This resulted in an unexpectedly long runtime of the BW Process Chain type “Index Repair”. This issue does not occur on smaller systems with a reasonable number of DB partitions and rowgroups.

Simplified Process Chains

With the new Rowgroup Compression Framework of SAP Note 2329540, you can also trigger a rowgroup compression with the BW Process Chain type “Update Statistics”. Update Statistics is typically already included in a Process Chain, even when the Process Chain was originally created for a different database platform. A typical process chain for loading data into a cube then looks like this:

[Screenshot: BW process chain including an Update Statistics step]

The new functionality is explicitly coded for BW Process Chains only. When running a manual Update Statistics (button “Refresh Statistics”) in SAP transaction RSA1, the columnstore rowgroups are not compressed. If you want to perform a manual rowgroup compression in SAP transaction RSA1, you still have to choose the button “Repair DB Indexes”.

[Screenshot: manual “Repair DB Indexes” button in transaction RSA1]

Faster Process Chains

It was not possible to change the SQL Server system table sys.column_store_row_groups (in SQL Server 2014 and 2016). However, we now achieve very good query performance on columnstore metadata by using a different SQL Server system table (sys.system_internals_partitions) instead. To implement this, we had to change large parts of the BW code in several areas (BW index repair, BW index check, BW update statistics, BW cube compression and more).

Reading the metadata using the new method is several thousand times faster on very large BW systems. This results in much faster process chains which contain the process chain types “Index Repair” or “Cube Compression”. The only thing you have to do is apply SAP Note 2329540. There is no need to configure anything or set any RSADMIN parameter.

However, once SAP Note 2329540 is applied, you have various configuration options regarding columnstore rowgroup compression in SAP BW. The options are described in detail in the SAP note. The major reason we implemented these options was to be prepared for future changes in SAP BW and SQL Server. Furthermore, the new RSADMIN parameter standardizes already existing (undocumented) parameters. Without setting any RSADMIN parameter, you automatically get the preferred default behavior (which also depends on the SQL Server version).

Moving from SAP 2-Tier to 3-Tier configuration and performance seems worse


Lately we were involved in a case where a customer moved an SAP ERP system from a 2-Tier configuration to a 3-Tier configuration in a virtualized environment. This was done because the customer virtualized the system and could not provide VMs large enough on the new host hardware to continue with a 2-Tier SAP setup. Hence the ASCS and the only dialog instance were moved into one VM, whereas the DBMS server ran in another VM hosted in the same private cloud. Testing the new configuration afterwards was positive in terms of functionality. Performance-wise, however, some of the batch processes were running factors slower. Not a few percentage points, but really factors slower than the run times those batch jobs showed in the 2-Tier configuration. So investigations went into several directions. Did something change in the DBMS settings? No, not really. The DBMS got a bit more memory, but no other parameters changed. The ASCS and primary application server were also configured the same way. Nothing really changed in the configuration. Since the tests ran a single batch job, issues around network scalability could be excluded as well.

So the next step was looking into the SAP single record statistics (transaction STAD). These showed rather high database times for the SELECTs that one of the slowest-running batch jobs issued against the database. It looked like a lot of the run time was accumulated when selecting data from the SQL Server DBMS. On the other side there was no reason why the DBMS VM should run slowly. We are talking about a single batch job that was running against the system during the tests. In the end we found out what the issue was. Correct, it had to do with the networking latency added by moving from a 2-Tier configuration to a 3-Tier configuration. That got us thinking about documenting and investigating this a bit. Below you can read some hopefully interesting facts.

You got all the tools in SAP NetWeaver to figure it out

What does that mean? With the functionality built into SAP NetWeaver you can measure the response time the DBMS reports for a query and, on the other side, the response time the application instance experiences. At least for SQL Server. The two functionalities we are talking about specifically are:

  • DBACockpit – SQL Statements
  • ST05 – Performance Trace

As implemented in DBACockpit for SQL Server, the data shown here:

[Screenshot: DBACockpit – SQL Statements]

is taken directly from the SQL Server instance, or, to be more precise, from a DMV called sys.dm_exec_query_stats. It measures a lot of data about every query execution and stores it in the DMV, but only within the SQL Server instance. The time starts ticking when the query enters the SQL Server instance and stops when the results are available to be fetched by the SQL Server client used by the application (ODBC, JDBC, etc.). Hence the elapsed time is recorded and measured purely within SQL Server, without taking network latency into consideration.
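As a sketch, the same per-statement averages can be read directly from the DMV, for example via Invoke-Sqlcmd (server and database names are placeholders):

# Sketch: average elapsed time per execution (in microseconds) from sys.dm_exec_query_stats.
# "sqlvm1" and "PRD" are placeholders; requires the SqlServer module.
Invoke-Sqlcmd -ServerInstance "sqlvm1" -Database "PRD" -Query @"
SELECT TOP 10
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
       SUBSTRING(st.text, 1, 100) AS statement_start
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.execution_count DESC;
"@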

On the other side, the SAP ST05 Performance Trace:

[Screenshot: ST05 Performance Trace]

measures query execution times at the point where the query leaves the application server instance and the result set returns. This means the time spent in network communication between the SAP application instance and the DBMS server is fully included in the measurement, and it is the time the SAP application instance experiences. In the case of our batch job it is the time that largely defines the execution time, especially for batch jobs that spend larger quantities of time interacting with the DBMS.

Let’s measure a bit

In order to measure and demonstrate the effects, we went a bit extreme by creating a job that spent nearly 100% of its time interacting with the DBMS. And then we started to measure by performing the following steps:

  • We flushed the SQL Server procedure cache with the T-SQL command dbcc freeproccache. Be aware that this command also drops the content of the SQL Server DMV sys.dm_exec_query_stats.
  • We then started ST05 with the appropriate filters to trace the execution of our little job.
  • We then started our little job. The run time was just around 10 seconds.

Our little job read around 9000 primary keys of the SAP ERP material master table MARA into an internal table. As a next step, it looped over the internal table and selected row by row, fully specifying the primary key. However, only the first 3 columns of every row were read.

Our first setup was to test with VMs running on rather old server hardware, something around a 6 or 7 year old processor generation. But the goal of this exercise is not really to figure out the execution times as such, but the ratio we are getting between what we measure on the DBMS side and the SAP side in the different configurations.

Hence in a first test using our older hosts with VMs we saw this after a first run:

[Screenshot: SQL Server-side statistics after the first run – 89 µs average per query]

So, we see 89 microseconds average execution time per query on the SQL Server side. The data, as expected, was in memory, and the query is super simple since it accesses a row by the primary key, just reading the first 3 columns of a relatively wide table. So yes, very well doable.

As we are going to measure the timing of running this report on the SAP application side, we are going to introduce the notion of an ‘overhead factor’. We define:

‘Overhead factor’ = the factor of time that is spent on top of the pure SQL Server execution time for network transfer of the data and processing in the ODBC client, until the time measurement in the SAP logic is reached. So, if it takes x microseconds of elapsed time in SQL Server to execute the query and we measure 2 times x in ST05, the ‘overhead factor’ is 2.

As we are running in a 2-Tier configuration, the expectation would be that we add only a few microseconds of network and processing time. Looking at the summarized ST05 statistics of our trace, we were a bit surprised to see this:

[Screenshot: ST05 summary, 2-Tier configuration – 362 µs average]

We measure 362 microseconds in our 2-Tier configuration. Compared to the time SQL Server requires to execute our super simple query, the response time as measured from the SAP side is roughly 4 times larger. In other words, our ‘overhead factor’ is 4.

The interesting question now is how the picture looks in the case of a 3-Tier system. The first test involves a DBMS VM and an SAP application server VM running on the same host server.

In this case the measurements in ST05 looked like:

[Screenshot: ST05 summary, 3-Tier on the same host – 544 µs average]

With 544 microseconds on average, the ‘overhead factor’ accumulated to around 6.

In the second step, let's consider a scenario where the VMs are not hosted on the same server. The servers used for the next test (same type, same processor) were actually sitting in two different racks, meaning we had at least two or three switches in the traffic path between the VMs. This very fact was immediately reflected in the ST05 trace, where we looked at this result:

[Screenshot: ST05 summary, 3-Tier across racks]

Our ‘overhead factor’ basically increased to a bit over 8.

So, let’s summarize the results in this table:

[Table: summary of measured ‘overhead factors’ per configuration]

Thus, we can summarize that the move from 2-Tier to 3-Tier can add significant overhead time, especially as more and more network components get involved. For SAP jobs this can mean that the run time increases by factors in extreme cases such as those we tested here. The more resource-consuming a piece of SAP logic is on the application layer, the smaller the impact of the network will be.

What can you do, especially in case of virtualization?

The design principle from an infrastructure point of view should be to have the compute infrastructure of the SAP DBMS layer and the application layer as close together as possible: 'together' in the sense of distance, but also in the sense of having the least number of network switches/routers and gateways in a network path that ideally is not miles in length.

Testing on Azure

The tests in Azure were conducted with exactly the same method and the same report.

Tests within one Azure VNet

For the first test on Azure, we created a VNet and deployed our SAP system into it. Since the intent was also to test a new network functionality of Azure, we used the DS15v2 VM type for all our exercises. The new functionality we wanted to test and demonstrate is called Azure Accelerated Networking and is documented here: https://azure.microsoft.com/en-us/documentation/articles/virtual-network-accelerated-networking-portal/

Executing our little report, in a 2-Tier configuration we measured:

[Screenshot: SQL Server-side statistics, 2-Tier on Azure – 60 µs average]

It took 60 microseconds to execute a single query in SQL Server. Looking at the ST05 values, we are experiencing 177 microseconds.

This is a factor of around 3 between the pure time it takes in SQL Server and the time measured on the SAP instance that ran on the same VM as SQL Server.

For the first 3-Tier test, we deployed the two VMs, the VM running the database instance and the VM running the SAP application instance, without the ‘Azure Accelerated Networking’ functionality. As mentioned, the VMs are in the same Azure VNet.

As we perform the tests, the results measured in ST05 on the dedicated SAP instance look like:

[Screenshot: ST05 summary, 3-Tier on Azure without Accelerated Networking – 570 µs average]

This means we are looking at 570 microseconds, which is roughly 3 times the time it takes in the 2-Tier configuration, and around a factor of 9.5 more time than SQL Server took to work on a single query execution.

As a next step, we configured the new ‘Azure Accelerated Networking’ functionality in both VMs: the database VM and the VM running the SAP instance were both using the acceleration. Repeating the measurements, we were quite positively surprised to see a result like this:

[Screenshot: ST05 summary with Accelerated Networking – ~340 µs average]

We are basically down to 340 microseconds per execution of the query, measured on the SAP application instance. A dramatic reduction of the impact the network stack introduces in virtualized environments.

The overhead factor compared to the pure time spent on the execution in SQL Server is reduced to under 6. In other words, you cut out a good 40% of the communication time.

Additionally, we performed some other tests in 3-Tier SAP configurations in Azure where:

  • We put the SAP instance VM and DBMS VM into an Azure Availability Set
  • Where we deployed the SAP instance VM into a VM type that forced that VM onto another hardware cluster (SAP instance VM = GS4 and DBMS VM = DS15v2). See also this blog on Azure VM Size families: https://azure.microsoft.com/en-us/blog/resize-virtual-machines/

In all those deployments without the ‘Azure Accelerated Networking’ functionality, we did not observe any significant change in the ST05 measurements. In all cases, the single execution values measured with ST05 ended up between 530 and 560 microseconds.

So in essence, we showed for the Azure side:

  • Azure, compared to on-premises virtualization, does not add any more communications overhead.
  • The new Accelerated Networking feature has a significant impact on the time spent in communications and data transport over the network.
  • There is no significant impact when two communicating Azure VMs run on different hardware clusters.

What are we missing? Yes, we are missing pure bare-metal to bare-metal measurements. The reason we left that scenario out is that most of our customers have at least one side of the 3-Tier architecture virtualized, or even both sides. Take, for example, the SAP landscape at Microsoft. Except for the DBMS side of the SAP ERP system, all components are virtualized, meaning the Microsoft SAP landscape has a virtualization degree of 99%, with a good portion of the virtualized systems already running on Azure. Since the tendency among customers is to virtualize their SAP landscapes more and more, we did not see any need to perform these measurements on bare-metal systems anymore.

SAP OS/DB Migration to SQL Server–FAQ v6.1 November 2016

The FAQ document attached to this blog is the cumulative knowledge gained from hundreds of customers that have moved off UNIX/Oracle and DB2 to Windows and SQL Server in recent years.
In this document a large number of recommendations, tools, tips and common errors are documented.  It is recommended to read this document before performing an OS/DB migration to Windows + SQL Server.
The latest version of the OS/DB Migration FAQ includes some updates including:
  1. Customers are increasingly moving large SAP systems in the 5-10TB range to Azure.  This press release is an example of one of many successful projects
  2. With powerful new VMs, Accelerated Networking and Premium Storage it is now possible to import even huge size SAP databases on Azure
  3. Based on experience from recent migrations we recommend using SQL Server 2014 SP2 CU2 or SQL 2016 SP1 (as at November 2016).  It is generally recommended to use the latest Service Pack and Cumulative update
  4. A future blog will cover specific recommendations for BW migrations.  Some changes have been made to improve import performance on BW fact tables and other general recommendations.  These will appear in a blog shortly
  5. Intel has released both the E5v4 and E7v4 processors. 2-socket commodity Intel servers now benchmark ~120,000 SAPS and 4-socket Intel servers now benchmark ~220,000 SAPS
  6. Customers still running on UNIX platforms are terminating these platforms and moving to more cost effective Intel systems with much better performance
  7. Multiple customers have terminated the use of SAP BW Accelerator and replaced this with SQL Server’s built in Column Store.  The OS/DB Migration FAQ now includes a link to the procedure to remove BWA configuration from a BW system
Latest OS/DB Migration FAQ can be downloaded from the link below

 

 

Improved SAP compression tool MSSCOMPRESS


SAP program MSSCOMPRESS is a tool for reducing the disk space usage of SAP NetWeaver based systems. In addition, it is often used as a monitoring tool. MSSCOMPRESS was released by SAP over 6 years ago. In the meantime, several improvements have been implemented. The latest version of MSSCOMPRESS is available as an attachment of SAP Note 1488135.

Overview

The basic features of MSSCOMPRESS are described in https://blogs.msdn.microsoft.com/saponsqlserver/2010/10/08/compressing-an-sap-database-using-report-msscompress. In this blog we want to describe the new features and functionalities. The user interface of MSSCOMPRESS has changed a little bit:

[Screenshot: MSSCOMPRESS main screen]

When starting MSSCOMPRESS or pressing the button “Refresh”, the following information is retrieved from SQL Server for all tables of the SAP database:

  • Total table size
  • Data and index size
  • DB compression type of data and indexes
  • Number of rows, indexes and partitions
  • A checkbox which flags those tables that use a heap
  • Number of uncompressed rows in columnstore rowgroups
    (be aware that this number may not be accurate for a few minutes after compressing the columnstore rowgroups of this particular table)

The SQL query for retrieving this information has been tuned several times in the past.

New filter options

Traditionally, you can filter the list of tables in MSSCOMPRESS using the Filter Options. The Name Filter now allows the wildcards star and question mark. If you want to see all tables of a particular SAP BW cube, you can use the filter /*/?CUBE* (see the example above). The list now includes an automatic sum of some columns. In this example, the sum of the table sizes is the total cube size.

[Screenshot: MSSCOMPRESS filter options and summed table sizes]

In addition, you can now use SAP ALV (ABAP List Viewer) Filters. By right-clicking in the ALV list, you can set ALV Filters or sort the ALV list. This feature is particularly useful, when using MSSCOMPRESS as a monitoring tool. To avoid confusion, compressing “Filtered Tables” is disabled, once you use the traditional Filter Options and the new ALV Filters at the same time. However, you can still compress using “Selected Tables”.

[Screenshot: MSSCOMPRESS ALV filters]

DB compression type COLUMNSTORE_ARCHIVE

For B-trees (rowstore), you can choose the DB compression type (NONE, ROW or PAGE) separately for data (clustered index or heap) and indexes. For columnstore, you can now choose between the DB compression types COLUMNSTORE and COLUMNSTORE_ARCHIVE. This feature has been added to expose the full SQL Server functionality in SAP. However, we strongly recommend keeping the SAP default DB compression types: COLUMNSTORE for columnstore indexes and PAGE for rowstore. By default, MSSCOMPRESS does not touch existing indexes if the DB compression type is already correct. By choosing one of the “Force Data/Index/CS Rebuild” check boxes, an index rebuild is always executed, even when the DB compression type is not being changed.
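For reference, the native T-SQL behind the COLUMNSTORE_ARCHIVE option looks like the hedged sketch below (server, database and table names are placeholders); MSSCOMPRESS presumably generates comparable rebuild statements internally:

# Sketch: rebuild a columnstore index with archive compression.
# All names are placeholders; requires the SqlServer module.
Invoke-Sqlcmd -ServerInstance "sqlvm1" -Database "PRD" -Query @"
ALTER INDEX ALL ON [dbo].[MyColumnstoreTable]
REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
"@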

Compress Rowgroups

This is a new feature in MSSCOMPRESS which actually has nothing to do with the DB compression type. It performs a rowgroup compression of columnstore indexes. For SAP BW, this task should be performed in SAP BW process chains, see https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/14/simplified-and-faster-sap-bw-process-chains. There is no need to use MSSCOMPRESS for compressing columnstore rowgroups in SAP BW. However, one may want to create an additional Nonclustered Columnstore Index (NCCI) in SAP ERP on SQL Server 2016. A detailed description regarding this will be published in a blog next year. In such a scenario, you can use MSSCOMPRESS to schedule a rowgroup compression job each night.

[Screenshot: MSSCOMPRESS “Compress Rowgroups” option]

By choosing “Compress Rowgroups”, the mode of MSSCOMPRESS changes from DB compression (which performs an index REBUILD) to rowgroup compression (which performs a columnstore index REORGANIZE). Therefore, all DB compression type options are greyed out. The Online and MAXDOP options are also greyed out, because a rowgroup compression is always performed online and single-threaded. MSSCOMPRESS only performs a rowgroup compression if there are uncompressed rows in a columnstore index of the chosen table. By choosing “Force Reorganize”, a rowgroup compression is always performed. This is useful as of SQL Server 2016, since the rowgroup compression also merges small rowgroups into one larger rowgroup. See https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/11/sql-server-2016-improvements-for-sap-bw.

Summary

The newest version of MSSCOMPRESS is faster and provides additional filter options. SAP consultants also “misuse” MSSCOMPRESS as a monitoring tool, which enables a quick look at all database tables of an SAP NetWeaver system.

Top 14 Updates and New Technologies for Deploying SAP on Azure


Below is a list of updates, recommendations and new technologies for customers moving SAP applications onto Azure.

At the end of the blog is a checklist we recommend all customers follow when planning to deploy SAP environments on Azure.

1. High Performance & UltraPerformance Gateway

The UltraPerformance Gateway has been released.

This Gateway supports only ExpressRoute links and has a much higher maximum throughput.

The UltraPerformance Gateway is very useful on projects where a large amount of R3load dump files or database backups need to be uploaded to Azure.

https://azure.microsoft.com/en-us/documentation/services/vpn-gateway/

2. Accelerated Networking

A new feature to drastically increase the bandwidth between DS15v2 VMs running on the same VNet has been released. The latency between VMs with Accelerated Networking is greatly reduced.

This feature is available in most data centers already and is based on the concept of SR-IOV.

Accelerated networking is particularly useful when running SAP Upgrades or R3load migrations. Both the DB server and the SAP Application servers should be configured for Accelerated networking.

However, large 3-tier systems with many SAP application servers will also benefit from Accelerated Networking on the database server.

For example a highly scalable configuration would be a DS15v2 database server running SQL Server 2016 with Buffer Pool Extension enabled and Accelerated Networking and 6 D13v2 application servers:

Database Server: DS15v2 database server running SQL Server 2016 SP1 with Buffer Pool Extension enabled and Accelerated Networking. 110GB of memory for SQL Server cache (SQL Max Memory) and another ~200GB of Buffer Pool Extension

Application Server: 6 * D13v2 each with two SAP instances with 50 work processes and PHYS_MEMSIZE set to 50%. A total of 600 work processes (6 * D13v2 VMs * 50 work process per instance * 2 instances per VM = 600)

The SAPS value for such a 3 tier configuration is around 100,000 SAPS = 30,000 SAPS for DB layer (DS15v2) and 70,000 SAPS for app layer (6 x D13v2)

3. Multiple ASCS or SQL Server Availability Groups on a Single Internal Load Balancer

Prior to the release of multiple frontend IP addresses for an ILB, each SAP ASCS required a dedicated 2-node cluster.

Example: a customer with SAP ECC, BW, SCM, EP, PI, SolMan, GRC and NWDI would need 8 separate 2-node clusters, a total of 16 small VMs for the SAP ASCS layer.

With the release of the multiple ILB frontend IP address feature only 2 small VMs are now required.

A single Internal Load Balancer can now bind multiple frontend IP addresses. These frontend IP addresses can be listening on different ports such as the unique port assigned to each AlwaysOn Availability Group listener or the same port such as 445 used for Windows File Shares.

A script with the PowerShell commands to set the ILB configuration is available here

Note: It is now possible to assign a Frontend IP address to the ILB for the Windows Cluster Internal Cluster IP (this is the IP used by the cluster itself). Assigning the IP address of the Cluster to the ILB allows the cluster admin tool and other utilities to run remotely.

Up to 30 Frontend IP addresses can be allocated to a single ILB. The default Service Limit in Azure is 5. A support request can be created to get this limit increased.

The following PowerShell commands are used:

New-AzureRmLoadBalancer

Add-AzureRmLoadBalancerFrontendIpConfig

Add-AzureRmLoadBalancerProbeConfig

Add-AzureRmLoadBalancerBackendAddressPoolConfig

Set-AzureRmNetworkInterface

Add-AzureRmLoadBalancerRuleConfig
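A hedged sketch of adding one additional frontend IP, probe and load-balancing rule to an existing ILB is shown below (all names, IP addresses and ports are placeholders):

# Sketch: add a second frontend IP and rule to an existing internal load balancer.
# All names, IP addresses and ports are placeholders; requires the AzureRM module.
$lb = Get-AzureRmLoadBalancer -Name "sap-ilb" -ResourceGroupName "SAP-RG"
$subnet = Get-AzureRmVirtualNetwork -Name "sap-vnet" -ResourceGroupName "SAP-RG" |
          Get-AzureRmVirtualNetworkSubnetConfig -Name "sap-subnet"
Add-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "fe-ascs2" -PrivateIpAddress "10.1.0.20" -SubnetId $subnet.Id
Add-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "probe-ascs2" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
Add-AzureRmLoadBalancerRuleConfig -LoadBalancer $lb -Name "rule-ascs2-smb" `
    -FrontendIpConfiguration (Get-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "fe-ascs2") `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe (Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "probe-ascs2") `
    -Protocol Tcp -FrontendPort 445 -BackendPort 445 -EnableFloatingIP
Set-AzureRmLoadBalancer -LoadBalancer $lb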

4. Encrypted Storage Accounts, Azure Key Vault for SQL TDE Keys & Azure Disk Encryption (ADE)

SQL Server supports Transparent Data Encryption (TDE). SQL Server keys can be stored securely inside the Azure Key Vault. SQL Server 2014 and earlier can retrieve keys from the Azure Key Vault with a free connector utility; SQL Server 2016 onwards supports the Azure Key Vault natively. It is generally recommended to encrypt a database before loading data with R3load, as the overhead involved is only ~5%. Applying TDE after an import is possible, but this will take a lot of time on large databases. The recommended cipher is AES-256. Backups are encrypted on TDE systems.

Azure Disk Encryption (ADE) is a technology like Windows BitLocker. It is preferable not to use ADE on disks holding DBMS datafiles, temp files or log files, and instead to secure SQL Server (or other DBMS) datafiles at rest with TDE (or the native DBMS encryption tool).

It is strongly recommended not to use SQL Server TDE and ADE disks in combination. This may create a large overhead and is a scenario that has not been tested. ADE is useful for encrypting the OS boot disk.

The Azure platform now supports Encrypted Storage Accounts. This feature encrypts at-rest data on a storage account.

5. Windows Server 2016 Cloud Witness

Windows Server 2016 will be generally available for SAP customers in Q1 2017 based on current planning.

One very useful feature is the Cloud Witness.

A general recommendation for Windows Clusters on Azure is:

1. Use Node & File Share Witness Majority with Dynamic Quorum

2. The File Share Witness should be in a third location (not in the same location as either primary or DR)

3. There should be independent redundant network links between all three locations (primary, DR, File Share Witness)

4. Systems that have a very high SLA may require the File Share Witness share to be highly available (thus requiring another cluster)

Until Windows 2016 there were several problems with this approach:

1. Many customers compromised the DR solution by placing the FSW in the primary site

2. Often the FSW was not Highly Available

3. The FSW required at least one additional VM leading to increased costs

4. If the FSW was Highly Available this required 2 VMs and software shared disk solutions like SIOS. This increases costs

Windows Server 2016 resolves this problem with the Cloud Witness:

1. The FSW is now a Platform as a Service (PaaS) role

2. No VM is required

3. The FSW is now automatically highly available

4. There is no ongoing maintenance, patching, HA or other activities

5. After the Cloud Witness is setup it is a Managed Service

6. The Cloud Witness can be in any Azure datacenter (a third location)

7. The Cloud Witness can be reached over standard internet links (avoiding the requirement for redundant independent links to the FSW)
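On Windows Server 2016, configuring the Cloud Witness is a single PowerShell command, as in this sketch (storage account name and key are placeholders):

# Sketch: use an Azure storage account as the cluster Cloud Witness.
# Account name and access key are placeholders; requires the FailoverClusters module.
Set-ClusterQuorum -CloudWitness -AccountName "sapwitnessstorage" -AccessKey "<storage-account-access-key>"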

6. Azure Site Recovery Azure-2-Azure (ASR A2A)

Azure Site Recovery is an existing technology that replicates VMs from a source to a target in Azure.

A new scenario, Azure-to-Azure ASR, will be released in Q1.

Scenarios supporting replicating Hyper-V, Physical or VMWare to Azure are already Generally Available.

The key differentiators between Azure Site Recovery (ASR) and competing technologies:

Azure Site Recovery substantially lowers the cost of DR solutions. Virtual Machines are not charged for unless there is an actual DR event (such as fire, flood or power loss) or a test failover. No Azure compute cost is charged for VMs that are synchronizing to Azure; only the storage cost is charged.

Azure Site Recovery allows customers to perform non-disruptive DR tests. ASR Test Failovers copy all the ASR resources to a test region and start up all the protected infrastructure in a private test network. This eliminates any issues with duplicate Windows computer names. Another important capability is the fact that Test Failovers do not stop, impair or disrupt VM replication from on-premises to Azure. A test failover takes a “snapshot” of all the VMs and other objects at a particular point in time.

The resiliency and redundancy built into Azure far exceed what most customers and hosters are able to provide. Azure blob storage stores at least 3 independent copies of data, thereby eliminating the chance of data loss even in the event of a failure of a single storage node.

ASR “Recovery Plans” allow customers to create sequenced DR failover/failback procedures or runbooks. For example, a customer might create an ASR Recovery Plan that first starts up Active Directory servers (to provide authentication and DNS services), then executes a PowerShell script to perform a recovery on DB servers, then starts up SAP Central Services and finally starts the SAP application servers. This allows “Push Button” DR.

Azure Site Recovery is a heterogeneous solution and works with Windows and Linux and works well with SQL Server, Oracle, Sybase and DB2.

Additional Information:

1. To setup an ASCS on more than 2 nodes review SAP Note 1634991 – How to install an ASCS or SCS instance on more than 2 cluster nodes

2. SAP Application Servers in the DR site are not running. No compute costs incurred

3. Costs can be reduced further by decreasing the size of the DR SQL Servers. Use a smaller cheaper VM sku and upgrade to a larger VM if a DR event occurs

4. Services such as Active Directory must be available in DR

5. SIOS or BNW AFSDrive can be used to create a shared disk on Azure. SAP requires a shared disk for the ASCS as at November 2016

6. Costs can be reduced by removing HA from DR site (only 1 DB and ASCS node)

7. Careful planning around cluster quorum models and voting should be done. Explain the cluster model to the operations teams and use Dynamic Quorum (see the sketch below)
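A quick way to verify Dynamic Quorum and the per-node vote assignment, as a sketch assuming the FailoverClusters PowerShell module:

# 1 = Dynamic Quorum enabled (the default on Windows 2012 R2 and later)
(Get-Cluster).DynamicQuorum
# Review the configured and dynamic votes per node
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight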

Diagram showing DR with HA

7. Pinning Storage

Azure storage accounts can be “affinitized” or “pinned” to specific storage stamps. A stamp can be considered a separate storage device.

It is strongly recommended to use a separate storage account for the DBMS files of each database replica.

In the example below the following configuration is deployed:

1. SQL Server AlwaysOn Node 1 uses Storage Account 1

2. SQL Server AlwaysOn Node 2 uses Storage Account 2

3. A support request was opened and Storage Account 1 was pinned to stamp 1, Storage Account 2 was pinned to stamp 2

4. In this configuration the failure of the underlying storage infrastructure will not lead to an outage.

8. Increase Windows Cluster Timeout Parameters

When running Windows Cluster on Azure VMs it is recommended to apply a hotfix to Windows 2012 R2 to increase the Cluster Timeout values to the defaults that are set on Windows 2016.

To increase the Windows 2012 R2 cluster timeouts to those defaulted in Windows 2016 please apply this KB https://support.microsoft.com/en-us/kb/3153887

No action is required on Windows 2016. The values are already correct

More information can be found in this blog
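The current values can be inspected, and if necessary adjusted, with PowerShell. A sketch assuming the FailoverClusters module; the example values are the Windows 2016 defaults to the best of our knowledge, so verify them against the KB above:

# Display the current cluster heartbeat settings
Get-Cluster | Format-List *Subnet*
# Example: align a Windows 2012 R2 cluster with the Windows 2016 defaults (verify against the KB)
(Get-Cluster).SameSubnetDelay = 1000
(Get-Cluster).SameSubnetThreshold = 10
(Get-Cluster).CrossSubnetDelay = 1000
(Get-Cluster).CrossSubnetThreshold = 20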

9. Use ARM Deployments, Use Dv2 VMs, Single VM SLA and Use Premium Storage for DBMS only

Most of the new enhanced features discussed in this blog are ARM only features. These features are not available on old ASM deployments. It is therefore strongly recommended to only deploy ARM based systems and to migrate ASM systems to ARM.

Azure D-Series v2 VM types have fast powerful Haswell processors that are significantly faster than the original D-Series.

All customers should use Premium Storage for the Production DBMS servers and for non-production systems.

Premium Storage should also be used for Content Server, TREX, LiveCache, Business Objects and other IO intensive non-Netweaver file based or DBMS based applications.

Premium Storage is of no benefit on SAP application servers.

Standard Storage can be used for database backups or storing archive files or interface files.

More information can be found in SAP Note 2367194 – Use of Azure Premium SSD Storage for SAP DBMS Instance

Azure now offers a financially backed SLA for single VMs. Previously an SLA was only offered for VMs in an availability set. Improvements in online patching and reliability technologies allow Microsoft to offer this feature.

10. Sizing Solutions for Azure – Don’t Just Map Current VM CPU & RAM Sizing

There are a few important factors to consider when developing the sizing solution for SAP on Azure:

1. Unlike on-premises deployments there is no requirement to provide a large sizing buffer for expected growth or changed requirements over the lifetime of the hardware. For example when purchasing new hardware for an on-premises system it is normal to purchase sufficient resources to allow the hardware to last 3-4 years. On Azure this is not required. If additional CPU, RAM or Storage is required after 6 months, this can be immediately provisioned

2. Unlike most on-premises deployments on Intel servers, Azure VMs do not use Hyper-Threading as at November 2016. This means that the per-thread performance of Azure VMs is significantly higher than most on-premises deployments. D-Series v2 VMs deliver more than 1,500 SAPS/thread

3. If the current on-premises SAP application server is running on 8 CPU and 56GB of RAM, this does not automatically mean a D13v2 is required. Instead it is recommended to:

a. Measure the CPU, RAM, network and disk utilization

b. Identify the CPU generation on-premises – Azure infrastructure is renewed and refreshed more frequently than most customer deployments.

c. Factor in the CPU generation and the average resource utilization. Try to use a smaller VM

4. If switching from 2-tier to 3-tier configurations it is recommended to review this blog

5. Review this blog on SAP on Azure Sizing

6. After go live monitor the DB and application servers and increase or decrease their size as required (a resize sketch is shown below)
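Resizing an ARM VM after go live is a small operation. A sketch using the AzureRM PowerShell module of that era; resource group, VM name and target size are placeholders:

# Resize an existing VM to a larger (or smaller) sku
$vm = Get-AzureRmVM -ResourceGroupName "SAP-PRD-RG" -Name "sapapp01"
$vm.HardwareProfile.VmSize = "Standard_D13_v2"
Update-AzureRmVM -ResourceGroupName "SAP-PRD-RG" -VM $vm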

11. Fully Read & Review the SAP on Azure Deployment Guides

Before starting a project all technical members of the project team should fully review the SAP on Azure Deployment Guides

These guides contain the recommended deployment patterns and other important information

Ensure the Azure VM monitoring agents for ST06 are installed as documented in Note 2015553 – SAP on Microsoft Azure Support prerequisites

SAP systems are not supported on Azure until this SAP Note is fully implemented

12. Upload R3load Dump Files with AzCopy, RoboCopy or FTP

The diagram below shows the recommended topology for exporting a system from an existing datacenter and importing on Azure.

SAP Migration Monitor includes built in functionality to transfer dump files with FTP.

Some customers and partners have developed their own scripts to copy the dump files with Robocopy.

AzCopy can be used and this tool does not need a VPN or ExpressRoute to be setup as AzCopy runs directly to the storage account.
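A minimal sketch of such an upload using the AzCopy 5.x command line (default install path; the storage account, container, key and source path are placeholders, and /NC controls the number of concurrent operations):

# Upload the R3load dump files recursively to a blob container
& "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe" `
    /Source:"D:\Export\ABAP\DATA" `
    /Dest:"https://mymigacct.blob.core.windows.net/r3load-dump" `
    /DestKey:"<storage-account-key>" /S /NC:16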

13. Use the Latest Windows Image & SQL Server Service Pack + Cumulative Update

The latest Windows Server image includes all important updates and patches. It is recommended to use the latest available Windows Server OS available in the Azure Gallery

The latest DBMS versions and patches are recommended.

We do not generally recommend deploying SQL Server 2008 R2 or earlier for any SAP system. SQL Server 2012 should only be used for systems that cannot be patched to support more recent SQL Server releases.

SQL Server 2014 has been supported by SAP for some time and is in widespread deployment amongst SAP customers already both on-premises and on Azure

SQL Server 2016 is supported by SAP for SAP_BASIS 7.31 and higher releases and has already been successfully deployed in Production at several large customers including a major global energy company. Support for SQL 2016 for Basis 7.00 to 7.30 is due soon.

The latest SQL Server Service Packs and Cumulative updates can be downloaded from here.

Due to a change in incremental servicing policies only the very latest SQL Server CU will be available for download.

Previous Cumulative Updates can be downloaded from here

1966681 – Release planning for Microsoft SQL Server 2014

2201059 – Release planning for Microsoft SQL Server 2016

The Azure platform fully supports Windows, SUSE 12 or higher and RHEL 7 or higher. Oracle, DB2, Sybase, MaxDB and Hana are all supported on Azure.

Many customers utilize the move from on-premises to Cloud to switch to a single support vendor and switch to Windows and SQL Server

Microsoft has released a Database Trade In Program that allows customers to trade in DB2, Oracle or other DBMS licenses and obtain SQL Server licenses free of charge (conditions apply).

The Magic Quadrant for Operational Database Management Systems places SQL Server in the lead in 2015. This lead was further extended in the 2016 Magic Quadrant

14. Migration to Azure Pre-Flight Checklist

Below is a recommended Checklist for customers and partners to follow when migrating SAP applications to Azure.

1. Survey and Inventory the current SAP landscape. Identify the SAP Support Pack levels and determine if patching is required to support the target DBMS. In general the Operating Systems Compatibility is determined by the SAP Kernel and the DBMS Compatibility is determined by the SAP_BASIS patch level.

Build a list of SAP OSS Notes that need to be applied in the source system, such as updates for SMIGR_CREATE_DDL. Consider upgrading the SAP Kernels in the source systems to avoid a large change during the migration to Azure (e.g. if a system is running an old 7.41 kernel, update to the latest 7.45 kernel on the source system beforehand)

2. Develop the High Availability and Disaster Recovery solution. Build a PowerPoint that details the HA/DR concept. The diagram should break up the solution into the DB layer, ASCS layer and SAP application server layer. Separate solutions might be required for standalone solutions such as TREX or Livecache

3. Develop a Sizing & Configuration document that details the Azure VM types and storage configuration: how many Premium Disks, how many datafiles, how the datafiles are distributed across disks, usage of Storage Spaces and NTFS format size = 64kb (a formatting sketch follows this checklist). Also document Backup/Restore and DBMS configuration such as memory settings, Max Degree of Parallelism and traceflags

4. Network design document including VNet, Subnet, NSG and UDR configuration

5. Security and Hardening concept. Remove Internet Explorer, create an Active Directory Container for SAP Service Accounts and Servers and apply a Firewall Policy blocking all but a limited number of required ports

6. Create an OS/DB Migration Design document detailing the Package & Table splitting concept, number of R3loads, SQL Server traceflags, Sorted/Unsorted, Oracle RowID setting, SMIGR_CREATE_DDL settings, Perfmon counters (such as BCP Rows/sec & BCP throughput kb/sec, CPU, memory), RSS settings, Accelerated Networking settings, Log File configuration, BPE settings, TDE configuration

7. Create a “Flight Plan” graph showing progress of the R3load export/import on each test cycle. This allows the migration consultant to validate whether tunings and changes improve R3load export or import performance. X axis = number of packages complete. Y axis = hours. This flight plan is also critical during the production migration so that the planned progress can be compared against the actual progress and any problems identified early.

8. Create performance testing plan. Identify the top ~20 online reports, batch jobs and interfaces. Document the input parameters (such as date range, sales office, plant, company code etc) and runtimes on the original source system. Compare to the runtime on Azure. If there are performance differences run SAT, ST05 and other SAP tools to identify inefficient statements

9. SAP BW on SQL Server. Check this blogsite regularly for new features for BW systems including Column Store

10. Audit deployment and configuration, ensure cluster timeouts, kernels, network settings, NTFS format size are all consistent with the design documents. Set perfmon counters on important servers to record basic health parameters every 90 seconds. Audit that the SAP Servers are in a separate AD Container and that the container has a Policy applied to it with Firewall configuration.
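As referenced in item 3 above, a minimal sketch for formatting a data disk with a 64kb NTFS allocation unit size from PowerShell; the disk number, drive letter and label are placeholders and the disk is assumed to be new and raw:

# Initialize a raw data disk, create a partition and format it with a 64kb allocation unit size
Get-Disk -Number 2 | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLDATA"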

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

SAP Now Supports Azure Resource Manager (ARM) for Windows and Linux


On 13/12 SAP updated SAP Note 1928533 (login required) to include support for SAP systems on Windows on virtual machines that are created using the Azure Resource Manager. The documentation on docs.microsoft.com was updated to reflect the change.

You can now use virtual machines that are created using the Azure Resource Manager for both Windows and Linux virtual machines.

We also published new quick start templates on github for setting up an ASCS/SCS cluster that supports multiple ASCS/SCS instances (aka multi SID). Please have a look at the HA guide for more information on how to use them. The HA guide also contains information on how to modify an already existing ASCS/SCS cluster to support more ASCS/SCS instances.

With the latest change of SAP Note 1928533 (login required) and the release of the new templates, you can now create SAP landscapes on Azure that use less virtual machines and thus further reduce your IT costs.

Please also have a look at the following documentation

Windows Guides

Linux Guides


Recent SAP BW improvements for SQL Server


Over the course of the last few months we implemented several improvements in SAP BW. We already blogged about two improvements separately.

In this blog we want to describe a few additional improvements

BW query performance of F4-Help

Some BW queries were not using intra-query parallelism, because the BW statement generator did not generate the MAXDOP N hint. This was particularly an issue when using SQL Server 2014 columnstore. As of SQL Server 2016, queries on columnstore are still pretty fast even when using MAXDOP 1. See https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/11/sql-server-2016-improvements-for-sap-bw for details. The following SAP notes fix the issues with the missing MAXDOP hint:

BW query performance of 06-tables

The 06-tables are used as a performance improvement for BW queries with large IN-filters, for example the filter condition “WHERE COMPANY_CODE IN (1, 2, 3, … n)”. When the IN-clause contains 50 elements or more, a temporary 06-table (for example /BI0/0600000001) is created and filled with the elements of the IN-clause. Then an UPDATE STATISTICS is executed on the 06-table. Finally, the 06-table is joined with the (fact-) table rather than applying the IN-filter on the (fact-) table. After executing the BW-query, the 06-table is truncated and added to a pool of 06-tables. Therefore, the 06-tables can be reused for future BW queries.

Historically, the 06-tables had neither a primary key nor any other database index. Therefore, no DB statistics exist and an UPDATE STATISTICS command has no effect. However, SQL Server automatically creates column statistics if the 06-table contains at least 500 rows. Smaller 06-tables have no DB statistics, which might result in bad execution plans and long running BW queries.

This issue was not analyzed for a long time, because it is self-healing: once a 06-table has ever contained at least 500 rows, it has column statistics. Therefore, UPDATE STATISTICS works fine when the 06-table is reused, even if it now contains fewer than 500 rows.

As a workaround, you can simply disable the usage of 06-tables by setting the RSADMIN parameter MSS_EQSID_THRESHOLD = 999999999. To do so, you have to apply the following SAP note first:

Of course, this workaround has some drawbacks: the SQL queries get more complicated when the 06-tables are turned off. Therefore, we made a code change which alters the structure of the 06-tables. They now contain a primary key and have regular DB statistics. To use this, you have to implement the following SAP note and delete the existing 06-tables using report SAP_DROP_TMPTABLES

This note is not generally available yet, because we would like to have some pilot customers first. You can apply as a pilot customer by opening an SAP message in component BW-SYS-DB-MSS.

BW Cube Compression

You will get best BW cube compression performance in SAP release 7.40 and newer.

  • With SAP BW 7.40 SP8 the Flat Cube was introduced (see https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/27/columnstore-optimized-flat-cube-in-sap-bw). Therefore, we had to implement the cube compression for the Flat Cube. At the same point in time, we added a missing feature for non-Flat Cubes, too: Using the SQL statement MERGE for the cube compression of inventory cubes. Before 7.40 SP8, the MERGE statement was only used for cumulative cubes (non-inventory cubes).
  • The next major improvement was delivered in SAP BW 7.40 SP13 (and 7.50 SP1): we removed some statistics data collection during BW cube compression, which consumed up to 30% of the cube compression runtime. These statistics had not been collected on other DB platforms for years. At the same time, we added a broader consistency check for cubes with a columnstore index during BW cube compression. This check takes some additional time, but the overall performance of the BW cube compression still improved as of BW 7.40 SP13.

For all BW releases we introduced a HASH JOIN and a HASH GROUP optimizer hint for cubes with columnstore indexes. We had already delivered a HASH JOIN hint in the past, which was counterproductive in some cases for rowstore cubes; therefore we had removed that hint again. By applying the following SAP Note, the HASH JOIN hint will be used for columnstore tables only. Furthermore, you can configure the hints using RSADMIN parameters:

The last step of the BW cube compression is the request deletion of compressed requests. For this, the partitions of the compressed requests are dropped. It turned out that dropping partitions of a table with more than 10,000 partitions takes a very long time. On the one hand, it is not recommended to have more than a few hundred partitions. On the other hand, we need a fast way to compress a BW cube with 10,000+ partitions, because BW cube compression is the only way to reduce the number of partitions. We released the following SAP notes, which speed up partition drops on tables with several thousand partitions:

DBACOCKPIT performance

For large BW systems with a huge number of partitions and columnstore rowgroups, Single Table Analysis in SAP transaction DBACOCKPIT takes a very long time. Therefore, you should update the stored procedure sap_get_index_data, which is attached to the following SAP Note:

Columnstore for realtime cubes

In the past we used the columnstore only for the e-fact table of SAP BW realtime cubes. Data is loaded into the e-fact table only during BW cube compression. The cube compression ensures that DB statistics are up to date and the columnstore rowgroups are fully compressed. The data load into the f-fact table in planning mode does not ensure this. Due to customer demand you can now use the columnstore for selected or all realtime cubes. For this, you have to apply the following note, which also enables realtime cubes to use the Flat Cube model:

BW System Copy performance improvements

We increased the r3load import performance when using table splitting with columnstore by optimizing the report SMIGR_CREATE_DDL. With the new code, r3load no longer loads into a SQL Server heap. This minimizes the downtime for database migrations and system copies. Furthermore, we decreased the runtime of report RS_BW_POST_MIGRATION when using columnstore. For this, you have to apply the following SAP notes:

Customers automatically benefit from all system copy improvements on SQL Server, when applying the new patch collection of the following SAP Note:

In addition, you should also check the following SAP Note before performing a system copy or database migration

Summary

The SAP BW code for SQL Server is continuously being improved. All optimizations for SQL Server columnstore are documented in the following note:

BW Queries by factors faster using FEMS-pushdown


Complex SAP BW queries often contain FEMS-filters. These filters used to be applied on the SAP application server, which was not very efficient. A new SQL statement generator in SAP BW implements an optimized algorithm for FEMS processing. We have seen performance improvements up to a factor of 100. The actual performance improvement varies heavily between BW queries. The main idea was to reduce the processing time on the application server by pushing the FEMS filters down from the application server to the database. However, the new algorithm even reduces the database processing time in most cases. The following example shows the BW query statistics of a simple BW query with 15 Selection Groups. The total runtime went down from almost 42 seconds to 1.4 seconds.

fems1

What are FEMS filters?

A FEMS filter (Form EleMent Selektion) in SAP BW is a filter on Selection Groups. To make a long story short: FEMS filters are filters on key figures, structures or cells. For an existing BW query you can check the number of Selection Groups in SAP transaction RSRT:

fems2

fems3

A BW query with FEMS filters is not necessarily slow. However, complex FEMS filters were often not applied on the database. A SQL query containing almost no filters was created, which returned a huge result set (672,184 rows of the 100,000,000-row cube in the example above). Running such a SQL query requires high processing time on the DB server and results in high network I/O. Afterwards, the FEMS filters have to be applied to the result set on the SAP application server. Due to the architecture of the SAP application server, this step is always performed single-threaded!

SAP realized early on that FEMS filters should be pushed down to the database. However, SAP was convinced that this could not be done efficiently with SQL. Therefore, an undocumented API was implemented in SAP BWA (and later in HANA) some years ago. This API was used for pushing FEMS filters down to BWA by bypassing the SQL interface. SAP implemented further DB-pushdowns exclusively for BWA (and HANA). However, it turned out that some BW queries were faster without DB-pushdown. Therefore, the so-called TREXOPS modes were introduced. Mode 2 is the FEMS-pushdown, mode 3 is the MultiProvider-pushdown. You can configure the TREXOPS mode per BW provider and per BW query. A higher mode number always includes all optimizations of the lower mode numbers.

fems4

SAP BWA supports TREXOPS modes 2 and 3. The most important is the FEMS-pushdown. The default setting for BWA is TREXOPS mode 2 (see SAP note 1790426)

FEMS-pushdown with SQL Server

The FEMS-pushdown on SQL Server implements a new algorithm which uses the standard SQL interface. This algorithm does not only push down the FEMS filters. For example, it further reduces the complexity of the SQL query by factorizing the FEMS filters on the application server before running the SQL query. As a result, even the DB response time decreases in many cases. One may argue that reducing the DB response time will result in a higher utilization of the DB server (which is a unique resource, in contrast to the application servers). However, this is not necessarily the case: the optimized algorithm even reduced the consumed CPU time on the DB server in the example above. You can see the total CPU time in the SQL Server query statistics in column worker_time.

fems5

For checking the SQL Server response time, the SAP BW statistics (see Data Manager time above) are more accurate. The column elapsed_time in the SQL Server query statistics does not contain the compilation of the SQL statement. On the other hand, it contains processing time on the application server between the fetches of the SQL query (which is using a cursor).

The FEMS-pushdown on SQL Server also uses the TREXOPS modes described above. Any mode higher than 0 enables the new SQL statement generator for FEMS queries. For non-FEMS queries (or TREXOPS mode 0) the old SQL statement generator is used. The performance improvement delivered by the FEMS-pushdown is highly dependent on the actual BW query. In a few cases it may even increase the BW query runtime. However, we have only seen this for BW queries which are fast anyway. It does not really matter whether a BW query takes 3 or 4 seconds, but it makes a difference whether it takes 300 or 4 seconds. In the worst case, you can disable the FEMS-pushdown for a particular query using the TREXOPS modes.

Prerequisites for FEMS-pushdown

The FEMS-pushdown will be generally available for SQL Server in a few weeks. We want to start now with a few pilot customers to get feedback before finally releasing the new statement generator. You can apply by opening an SAP message in component BW-SYS-DB-MSS. We will then provide the required SAP correction instruction, which enables the FEMS-pushdown for Microsoft SQL Server.

The new statement generator can only be used, when the following prerequisites are fulfilled:

  • The required SAP BW code is implemented (minimum: 7.50 SP4 + SAP correction instruction)
  • The BW query is a FEMS-query (it has at least 2 FEMS-filters)
  • FEMS-pushdown is activated by setting RSADMIN parameter USE_FEMS_IN_DB = X (and TREXOPS mode ≥ 2)
  • An existing BWA connection has to be completely disabled (for all cubes)
  • The InfoProvider is a Flat Cube (This requires SQL Server 2014 or newer). You can also benefit from FEMS Filter Pushdown for Multi-Provider. In this case, a separate SQL query is running against each Part-Provider. SQL queries against Flat Cube Part-Providers use the new statement generator, SQL queries against other Part-Providers simply use the old statement generator.
  • Inventory queries are currently not supported for FEMS Filter Pushdown. They always use the old statement generator.
    However, you can use the new statement generator for inventory cubes, as long as the query does not contain an inventory key figure.

Summary

There are already several options available to speed-up BW queries with SQL Server. Each of them alone can speed-up BW queries by factors:

  • Having sufficient hardware resources, you can simply increase SQL Server intra-query parallelism using RSADMIN parameter MSS_MAXDOP_QUERY.
  • You can use an optimized index structure for BW cubes by applying SQL Server Columnstore. With SQL Server 2012, 2014 and 2016 we already released the 3rd generation of SQL Server Columnstore.
  • As of SAP BW 7.40 (SP8) you can apply an optimized table structure for BW cubes by converting them to Flat Cubes.
  • Finally, as of SAP BW 7.50 (SP4) you can use the optimized SQL statement generator as described above.

You can already test the FEMS-pushdown when becoming a pilot customer. In any case, your SAP release planning should include an upgrade to SAP BW 7.50 SP4 (or higher) and SQL Server 2016.

Windows 2016 is now Generally Available for SAP


Windows 2016 has been released by Microsoft and is now Generally Available for any SAP NetWeaver 7.0 or higher components. Release information can be found in SAP Note 2384179 – SAP Systems on Windows Server 2016

This blog discusses the relevant SAP Notes and release information. The official release status of individual SAP applications can be found in the SAP Product Availability Matrix (PAM).

Windows 2016 includes many features deeply integrated into the Azure public cloud platform. SAP on Windows 2016 is simultaneously released for on-premises deployments and on Azure cloud deployments

1. Windows 2016 Long Term Servicing Branch

Starting with Windows Server 2016, the server product offers two deployment models: Long Term Servicing Branch and Current Branch for Business.

SAP only supports Windows Server 2016 64bit LTSB with full GUI (Desktop Experience). More information about LTSB vs. CBB and the different versions can be found here.

Datacenter and Standard Edition are both supported for SAP applications. The differences between Standard Edition and Data Center Edition do not impact a SAP NetWeaver Application server. Both versions support 640 Logical processors and 24TB of RAM. The most significant technical difference between Datacenter and Standard Edition is the virtualization features and related capabilities in the network and storage area. Some information about licensing is here

2. Required SAP Kernels

SAP Kernels 7.21_EXT, 7.22_EXT and 7.49 Kernels or higher are the only kernels supported on Windows 2016. It is recommended to run either 7.22_EXT or 7.49 as these are the latest generation kernels. 7.22_EXT is fully downward compatible to SAP_BASIS 7.00. This means any NetWeaver 7.00 to 7.31 application can run on the latest 7.22_EXT kernel. Customers are generally advised not to use old kernels such as 7.00, 7.01, 7.21, 7.40 or 7.42 as this impacts supportability of a system.

SAP Java based components that require the SAP JVM 4.1 are not supported on Windows 2016 at this time. The SAP JVM 4.1 is now end of life and will likely not be validated on Windows 2016. Java based systems must be 7.30 or higher to be supported on Windows 2016.

The screenshot below shows the SAP PAM selection screen and supported 7.2x based kernels that are supported on Windows 2016.

3. Supported Databases and SAP Standalone Engines

Windows 2016 supports:

SQL Server 2012, 2014, 2016 or later

DB2 11.1 or later

Sybase 16 SP2 or later

MaxDB 7.9

Hana 1.0 SP12 and Hana 2.0 SP00 Client Components

This allows SAP customers to leverage the security and high availability features built into Windows for Suite on Hana, BW on Hana and S4 Hana deployments.

Oracle 12c will be supported later. Please check Note 2384179 – SAP Systems on Windows Server 2016 and the SAP Product Availability Matrix

4. Windows 2016 Hyper-V Support

As at March 2017 Windows 2016 Hyper-V scenarios are not supported as the SAP Host Monitoring Extensions are still under development and testing. Note 1409608 – Virtualization on Windows will be updated when this process is complete

5. Benefits of Windows 2016

Azure Cloud Witness – the cluster File Share Witness, which previously had to be placed in a third location with diverse network connections, can now be hosted on the Azure cloud

Storage Spaces Direct (S2D) – new generation storage solution. Datacenter SKU only.

Storage Replica – async or sync replication. Works with Hyper-V, Dedup and ReFS improvements.

PowerShell 5.0 – many new PowerShell cmdlets for Networking and Hyper-V

6. Required SAP Notes for Windows 2016

It is required to read the following OSS Notes before installing SAP applications on Windows 2016:

2356977 – Error during connection of JDBC Driver to SQL Server: This error will prevent Java based systems on SQL Server from installing on Windows 2016

2424617 – Correction of OS version detection for Windows Server 2012R2 and higher

2287140 – Support of Failover Cluster Continuous Availability feature (CA)

1869038 – SAP support for ReFs filesystem: It is supported to use ReFS instead of NTFS as of Windows 2016 (only for SAP application server and SQL Server at this time)

2055981 – Removing Internet Explorer & Remotely Managing Windows Servers: It is recommended to remove any non-essential software from Windows servers. This procedure works on Windows 2016

1928533 – SAP Applications on Azure: Supported Products and Azure VM types

2325651 – Required Windows Patches for SAP Operations

2419847 – Support of Windows in-place upgrade in Failover Cluster environments

Important Links

Windows Server 2016 Evaluation Edition Download

Windows Server 2016 documentation getting started

Cloud Witness documentation and blog

Windows 2016 on Channel9

SAP Business Objects support of SQL Server 2016 and Azure products


These days, we get a lot of questions around SAP Business Objects components supporting SQL Server 2016. As of February 2017, the support situation is as follows:

The support of more recent OS or DBMS releases is not always apparent in the SAP PAM grid where supported OS and DBMS versions usually are listed. Instead you should open the PDF file that you can often find in the upper right corner of the SAP PAM page of the specific SAP product, as shown below:


We will keep you informed on further progress with other SAP BusinessObjects or Data Services releases supporting newer SQL Server releases or Azure properties.

Large Australian Energy Company Modernizes SAP Applications & Moves to Azure Public Cloud


This blog is a technical overview of a project completed over the past 12 months by a large Australian Energy Company (the Company) to transform the Company’s SAP solution from an end of life HPUX/Oracle platform to a modern SQL Server 2016 solution running on the Azure public cloud. This blog focuses on the technical aspects of the project, which involved multiple OS/DB migrations, Unicode conversions and SAP upgrades in addition to a move to Azure public cloud.

The project has now been successfully completed by the Company and BNW Consulting, a SAP consulting company specializing in SAP on Microsoft platforms.

1. Legacy Platform & Challenges

The Company implemented SAP R/3 in 1996 and deployed on HPUX and Oracle, a popular platform for running SAP applications at the time. Over time additional applications such as SAP Business Warehouse, Supply Chain Management, Enterprise Portal, GRC, PI, MDM and SRM were deployed. The UNIX based platform had reached end of life and the Company conducted a due diligence assessment of the available operating systems, databases and hardware platforms for running SAP and selected Azure, Windows 2012 R2 and SQL Server 2016.

Factors that were taken into consideration included:

1. Huge performance and reliability increases on standard commodity Intel platforms over the last 10-15 years.

2. No new investment by UNIX vendors into this technology and near universal move away from UNIX for standard package type applications like SAP.

3. SQL Server 2016 includes a Column Store capability that is integrated into SAP BW and delivers performance sufficient to allow removal of expensive Business Warehouse Accelerator (BWA) from the solution.

4. Azure Public Cloud platform has matured and is fully certified and supported by SAP.

5. Improvements in High Availability and Disaster Recovery technologies for Windows and SQL Server 2016 which allow for improved SLA.

6. Azure cloud has an integrated Disaster Recovery tool called Azure Site Recovery (ASR). This tool allows for DR testing at any time without impacting the DR SLA or production system.

7. Azure platform has a strong partner roadmap with SAP including the provision of 4TB Hana appliances and certifications of Azure VMs for Hana.

8. Overall technical platform capabilities and a single point for support. Availability of skilled partners

In addition, the current platform was running Oracle 11g, which was also at end of life; there are restrictions on database compression with Oracle; there is no roadmap for FEMS pushdown for BW on Oracle; and the SAN storage was out of space and needed renewing.

Before the migration to Windows, SQL Server and Azure the SAP application landscape consisted of these SAP components:

SAP ECC 6.0 EHP 5 (Non-Unicode)

SAP BW 7.30 (Non-Unicode)

SAP BWA 7.20

SAP Business Objects 4.1 SP6

SAP SRM 7.31

SAP SRM TREX 7.1

Content Server 6.40

SAP SCM (Non-Unicode)

SAP SCM LiveCache 7.7

SAP PI 7.3

SAP EP 7.3

SAP GRC 7.02

SAP Gateway 7.31

SAP Solution Manager 7.1

SAP MDM 7.1

SAP Console

2. Target Landscape

The target landscape on Azure (Australia Region), Windows 2012 R2 and SQL Server 2016 Service Pack 1 required some components to be simultaneously OS/DB migrated, transferred to Azure and converted to Unicode. The target landscape releases are listed below:

SAP ECC 6.0 EHP 7 (Unicode)

SAP BW 7.40 (Unicode)

SAP Business Objects 4.2 SP2 pl4

SAP SRM EHP2

SAP SRM TREX 7.1

Content Server 6.5

SAP SCM EHP3 (Unicode)

SAP SCM LiveCache 7.9

SAP PI 7.4

SAP EP 7.4

SAP GRC 10

SAP Gateway 7.4

SAP Solution Manager 7.2

SAP SLD/ADS 7.4

SAP MDM 7.1

SAP Console

Operating System: Windows 2012 R2 Datacenter

Database: SQL Server 2016 SP1 Enterprise Edition

Database Server VM types: BW: GS5, ERP:G4, Other SAP:D14v2. MaxDB\TREX: DS12v2

SAP Application Server VM types: D12v2 and D14v2

Storage: Azure Premium Storage used for all DBMS servers (SQL Server, MaxDB and TREX). Standard used for all other workloads

Network Connectivity: ExpressRoute with High Performance Gateway

Azure Deployment Model: Azure Resource Manager (ARM)

3. Upgrades, Unicode Conversions and OS/DB Migrations

To achieve the best outcome for the Company the SAP OS/DB migration partner BNW Consulting recommended against a “Big Bang” go live. Such a go live would involve moving the entire production environment to Azure in a single weekend. While this is certainly technically possible, the advantages of this approach are small and the resources required are considerable.

An incremental go live approach was successfully used. Smaller NetWeaver systems were migrated, upgraded or reinstalled on Azure. Then the BW system was exported in the existing datacenter using fast Intel based Windows servers to run R3load. The performance of R3load on HPUX and UNIX platforms is in general far slower than on Intel based platforms. The dump files were transferred using ftp and SAP migmon, imported in parallel, and then BW was upgraded to BW 7.40 with the latest Support Pack Stack. The database was converted to Unicode during the migration.

In the final phase the SAP ECC 6.0 system was moved to Azure, upgraded from EHP5 to EHP7 and converted to Unicode.

To speed up the import the SQL Server transaction log file was temporarily placed on the D: drive (non-persistent disk) on the GS5 DB server. Additional log capacity was allocated on a Windows Storage Space created from 6 x P30 Premium disks.

Setup and configuration of the SAP Application Servers was simplified considerably by placing all the profile parameters into the Default.pfl. There is no reason for individual SAP Application Servers to have different configurations and troubleshooting is greatly simplified by having a uniform configuration.

The implementation partner also tested and remediated where necessary more than 200 interfaces to more than 40 non-SAP applications.

4. SQL Server 2016 Enterprise Edition SP1 Features for SAP

SQL Server 2016 has many features of great benefit to SAP Customers:

1. Integrated Column Store: drastic reduction in storage and vastly increased performance. This technology can be deployed automatically during a migration in the RS_BW_POST_MIGRATION phase or using report MSSCSTORE. Many customers have terminated SAP Business Warehouse Accelerator and achieved the same or better performance after implementing SQL Server Column Store.

2. SQL Server AlwaysOn is a built in integrated HA/DR solution that is simple to setup.

3. SQL Server Transparent Data Encryption secures the database at rest and database backups. TDE is integrated with the Azure Key Vault service.

4. SQL Server Backup to Blob Storage allows a backup to write directly to multiple files on a URL path on Azure storage. Full and Transaction Log backups are taken off SQL Server AlwaysOn secondary databases. This reduces the load on the primary database and also allows for easy creation of offsite backups in the DR datacenter

5. SQL Server 2016 supports Azure Storage level “snapshots” similar to Enterprise SAN level snapshots – currently 4 hourly and retained for 48 hours. SQL 2016 has the ability to do larger backups > 1TB which unified and simplified the Company’s backup processes across all SAP applications

6. SQL Server Database Compression reduces the database size dramatically

SQL Server 2016 database compression results:

SAP Component   DB Size on HPUX/Oracle   DB Size on Windows & SQL Server   Comment
SAP BW 7.40     7.7TB                    2.3TB                             7.3 to 7.4 Upgrade, Unicode Conversion, Column Store & Flat Cube
SAP ECC 6.0     3.1TB                    1.5TB                             EHP5 to EHP7 Upgrade and Unicode Conversion
SAP SRM         261GB                    126GB                             -
SAP SCM         122GB                    53GB                              Upgrade from 7.0 EHP1 to EHP3 and Unicode Conversion

SQL Server 2016 Column Store for SAP BW drastically improves user query and data load times. The Flat Cube functionality found on SAP BW on Hana is available to SQL Server customers. The graph below shows the performance comparison on Oracle (with SAP Business Warehouse Accelerator) and SQL Server 2016 (BWA removed). The graph was collected and prepared by the Company several weeks after go live

Graph 1. On-premises HPUX/Oracle + BWA performance vs. Windows 2012 R2, SQL Server 2016 and SQL Server Flat Cube and Column Store

In addition, the BW Process Chain runtimes have been reduced by 50%. This has allowed users to report on more up to date data.

5. Azure Configuration

The project has utilized the following Azure specific features and capabilities to improve the performance and resiliency of the SAP solution

1. Availability Sets – SQL servers, ASCS servers and SAP application servers were configured into Availability Sets to ensure that there was at least one node available and running at any one time.

2. Premium Storage – SQL Server and any DBMS like workload utilizes Premium Storage to deliver high IOPS at stable latencies of low single digit milliseconds.

3. Multiple IP address on a ILB – ASCS components were consolidated onto a single ASCS cluster using the multiple frontend IP feature of the Azure Internal Load Balancer (ILB)

4. Storage Pinning – Separate storage accounts were used for SQL Server AlwaysOn nodes. This ensures that the failure of an individual storage account does not result in the storage being unavailable for both nodes. This capability is now built into a new Azure feature called “Managed Disks”.

5. Network Security Groups – Network ACLs can be created per VNet, per subnet and per individual VM.

6. Azure ExpressRoute – high speed connectivity via a dedicated private WAN link between a customer site and the Azure datacenter.

7. Active Directory Group Policy – SAP Servers and Service Accounts are placed into a container in Active Directory and a Group Policy is enforced to harden the Windows operating system, thereby reducing the servicing and patching requirements. The Service Accounts for SQL Server were configured with Lock Pages in Memory and Perform Volume Maintenance Tasks. Internet Explorer was removed from Windows Server 2012 R2, further reducing the need for servicing and patching. Most customers adopting this deployment configuration are able to eliminate regular patching and move to a yearly update and patching cadence.

6. High Availability & Disaster Recovery

All SAP application components within the Primary Datacenter have High Availability. This means the failure of an individual server or service will not result in an outage. SQL Server is protected via synchronous AlwaysOn with a 0 second RPO, 10-45 sec RTO. The SAP ASCS service is protected using Windows Clustering.

Because Azure does not natively support shared disks, a disk replication tool called BNW AFSDrive is used to present a shared disk to the cluster.

The Disaster Recovery site must be in a totally separate geolocation in order to provide true resilient DR capabilities. Completely independent electricity, WAN links and governmental services and security are required to reach the required SLA.

The Disaster Recovery solution incorporates one SQL Server and ASCS node in the DR site. If a DR event forced the Company to run for more than a few days at the DR site High Availability would be added to the DR solution.

7. Partner Products & Services

BNW StopLoss – Large Enterprise customers require absolute certainty that every single row in the source database has been correctly migrated to SQL Server. The SAP Migration tools do keep track of export and import row counts at a basic level, but when these tools have been used by uncertified consultants cases of inconsistencies have occurred. BNW StopLoss eliminates this possibility: it is a tool that scans the export row counts and asynchronously counts the rows in the target SQL Server database. If any differences are detected an alert is generated.

BNW AFSDrive – Azure does not natively support a shared cluster disk. BNW AFSDrive creates a virtual shared cluster disk between two or more Azure VMs. BNW AFSDrive is certified and supported on Windows 2012 R2 and Windows 2016.

BNW CloudSnap – SQL Server, Oracle and Sybase databases can be copied using the Azure disk/storage cloning technologies. The CloudSnap utility will briefly quiesce the disks and command the database to checkpoint. This technology allows for backups and system copies to be done effortlessly.

8. Benefits of Azure

The Company was able to move off an end of life HPUX platform, leverage the many performance and space savings features in SQL Server 2016 SP1 and eliminate the expensive proprietary BWA appliance. In parallel all applications were upgraded to the latest versions, patches and converted to Unicode. The solution delivered allows the Company to stabilize their SAP applications on a modern platform that will stay in support and maintenance until 31 December 2025.

Test environments can be created and refreshed much more rapidly than before and performance of BW and ECC was greatly improved.

The runtimes of DBA tasks such as backups, database integrity checks and restores are drastically faster than on HPUX/Oracle.

Overall system performance and response times have improved dramatically. In addition the HA/DR solution has improved and the DR solution is now easily testable.

Links & Notes

SAP & Azure Partner – BNW Consulting http://www.bnw.com.au/

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research.

SAP on SQL: General Update for Customers & Partners March 2017


SAP and Microsoft are continuously adding new features and functionalities to the SAP on SQL Server platform. The key objective of the SAP on Windows SQL port is to deliver the best performance and availability at the lowest TCO and simplest operation. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. Urgent Kernel Update Required on 7.4x & 7.5x Systems

A bug in the SAP Kernel can cause the error messages below in the SQL Server Errorlog and can cause SQL Server to freeze.

The buggy kernel code opens a cursor to read from the database, but this cursor is never closed. Eventually all worker threads are occupied and SQL Server may freeze

It is recommended to update the SAP Kernel on any NetWeaver 7.4x or 7.5x system!

All schedulers on Node 3 appear deadlocked due to a large number of worker threads waiting on ASYNC_NETWORK_IO. Process Utilization 4%.

All schedulers on Node 3 appear deadlocked due to a large number of worker threads waiting on ASYNC_NETWORK_IO. Process Utilization 5%.

All schedulers on Node 3 appear deadlocked due to a large number of worker threads waiting on ASYNC_NETWORK_IO. Process Utilization 5%.

SAP Note 2333478 – Database cursors not closed when table buffer is full

2. Very Useful SQL Server DMV & Logging for Detecting Issues

The procedures below are useful for diagnosing problems such as those described above

1. Useful script to display blocking, long running queries and how many active requests are running at any given instant

select session_id, request_id, start_time, status,
       command, wait_type, wait_resource, wait_time, last_wait_type, blocking_session_id
from sys.dm_exec_requests
where session_id > 49
order by wait_time desc;

To show the SQL Server internal engine processes remove where session_id >49

2. Useful script for displaying issues related to memory consumption

-- current memory grants per query/session
select session_id, request_time, grant_time,
       requested_memory_kb / (1024.0 * 1024) as requested_memory_gb,
       granted_memory_kb / (1024.0 * 1024) as granted_memory_gb,
       used_memory_kb / (1024.0 * 1024) as used_memory_gb,
       st.text
from sys.dm_exec_query_memory_grants g
cross apply sys.dm_exec_sql_text(sql_handle) as st
-- uncomment the where conditions as needed
-- where grant_time is not null  -- these sessions are using memory allocations
-- where grant_time is     null  -- these sessions are waiting for memory allocations

SQL Server Column Store systems with a high value for Max Degree of Parallelism can experience high memory usage

The formula for calculating the amount of memory to run a query on a Column Store table is:

Memory Grant Request in MB = ((4.2 * number of columns in the columnstore index) + 68) * Degree of Parallelism + (number of string columns * 34)
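As a worked example of the formula (PowerShell arithmetic, purely illustrative): a columnstore table with 100 columns, 10 of them string columns, queried at MAXDOP 8 requests a grant of roughly 4GB.

# Memory grant formula from above: ((4.2 * columns) + 68) * DOP + (string columns * 34)
$columns = 100; $stringColumns = 10; $dop = 8
$grantMB = ((4.2 * $columns) + 68) * $dop + ($stringColumns * 34)
$grantMB   # 4244 MB, i.e. roughly 4GB for a single query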

The DMV above is useful for diagnosing these issues. Remember to uncomment the appropriate where condition (grant_time is null or is not null).

3. Diagnostic tool for collecting logs to attach to an OSS Message – Hangman.vbs

It is recommended to be familiar with running this utility. Hangman.vbs captures many useful parameters on a system and should always be run when sending OSS messages for performance or stability problems.

948633 – Hangman.vbs

2142889 – How to Run the Hangman Tool [VIDEO]

Two very good blogs explaining Hangman.vbs analysis

https://blogs.msdn.microsoft.com/saponsqlserver/2008/10/24/analyzing-a-hangman-log-file-part-1/

https://blogs.msdn.microsoft.com/saponsqlserver/2008/11/23/analyzing-a-hangman-log-file-part-2/

4. SQL Server 2016 Query Store

SQL Server Query Store is switched off by default but is a very useful tool for pinpointing expensive queries or queries that are sometimes fast and sometimes slow depending on input parameters (such as BUKRS – Company Code).

This feature is only available in SQL 2016 and higher. A good video is available on Channel9
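Enabling the Query Store on a database is a one-line change. A sketch, assuming the SQLPS/SqlServer PowerShell module and a database named PRD on a local named instance (both placeholders):

# Enable and activate the Query Store on the SAP database
Invoke-Sqlcmd -ServerInstance ".\PRD" -Query @"
ALTER DATABASE [PRD] SET QUERY_STORE = ON;
ALTER DATABASE [PRD] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
"@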

3. Very Useful Windows Perfmon Template for Diagnosing Issues

Attached to this blog is an XML template file that contains a recommended set of Perfmon counters to run on SQL Server database servers and SAP application servers. The template file has been renamed from “zsapbaseline” to “zSAPBaseline.txt” as downloading XML files is blocked by some firewalls and proxies. The template is built with a SQL Server named instance “PRD”. Do a find & replace to change this to the SQL instance name in use. After downloading this file, rename it to zSAPBaseline.xml and follow these steps:

1. Open perfmon.msc from the Run menu

2. Navigate to Data Collector Sets -> User Defined

3. Right click on User Defined -> New

4. Create from template from the zSAPBaseline.xml file

5. Ensure the properties below are set to avoid filling up the file system

6. Set the schedule so that the collector will automatically restart if the server is restarted

Note: for those who prefer graphing in Excel a *.blg file can be converted into a csv with relog inputfile.blg -f csv -o outputFile.csv
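As an alternative to the GUI steps above, the template can also be imported and started from the command line with logman (a sketch; the collector name is arbitrary):

logman import "zSAPBaseline" -xml .\zSAPBaseline.xml
logman start "zSAPBaseline"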

4. Please Use Windows 2016 for New Installations

Windows 2016 is now generally available for most SAP software including all releases of NetWeaver based applications from 7.00 through to 7.51.

We recommend all new installations use this operating system. If you are unsure about the exact release status of a SAP application post a question in this blog.

Windows 2016 is now Generally Available for SAP

5. Windows 2016 Server Hardening Recommendations

Windows 2016 does not have the Security Configuration Wizard. Many of the hardening tasks required on previous Windows releases are no longer required.

It is still recommended to:

1. Remove Internet Explorer and follow SAP Note 2055981 – Removing Internet Explorer & Remotely Managing Windows Servers

dism /online /Disable-Feature /FeatureName:Internet-Explorer-Optional-amd64

2. Always activate the Windows Firewall and configure ports via Group Policy Object

3. Review the security resources here: https://www.microsoft.com/en-us/cloud-platform/windows-server-security

4. Use the Security Baseline for Windows Server 2016

6. SQL Server 2016 Backup to URL Settings

To back up a database to URL (Azure blob storage), especially for a VLDB, please use the best practices below:

1. Back up to multiple URL targets. Prior to SQL 2016 only one URL target was supported.

2. Specify MAXTRANSFERSIZE = 4194304 to tell SQL Server to use 4MB as the maximum transfer size. Without this parameter, most network IO to Azure blob storage is 1MB. The test below shows that this can reduce the number of blocks consumed by roughly 70%.

3. Use COMPRESSION to reduce the number of block write requests to Azure blob storage. The test below shows this option can reduce the backup size by roughly 65% and make the backup 2-4 times faster. However, be aware of the CPU usage when using compression: if CPU usage on your server is already very high (say, above 80%), monitor it closely when compression is enabled.

4. If your target storage account has enough bandwidth you can also increase BUFFERCOUNT to raise backup throughput. Values between 20 and 500 are reasonable; test to find the one that best meets your needs.

Please note that the issue below has been reported when backing up a VLDB to Azure blob storage:

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

10 percent processed.

20 percent processed.

30 percent processed.

Msg 3202, Level 16, State 1, Line 78

Write on "https://customer.blob.core.windows.net/dbw-db-backups/DBW_20170102111029_FULL_X22.bak" failed: 1117(The request could not be performed because of an I/O device error.)

Msg 3013, Level 16, State 1, Line 78

BACKUP DATABASE is terminating abnormally.

If you enable tracing with the command below:

DBCC TRACEON(3004, 3051, 3212, 3014, 3605, 1816)

(To turn the trace off, run "DBCC TRACEOFF(3004, 3051, 3212, 3014, 3605, 1816)")

You can then get more diagnostic information in the errorlog:

Write to backup block blob device https://storageaccount.blob.core.windows.net/backup/xx.bak failed. Device has reached its limit of allowed blocks.

The above error occurs because an Azure block blob has a limit of 50,000 blocks. Using the best practices above helps avoid this error.
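To put the 50,000 block limit in perspective: with the default 1MB transfer size a single URL target can hold roughly 50,000 x 1MB ≈ 48GB of backup data, whereas with MAXTRANSFERSIZE = 4194304 each URL can hold roughly 195GB. Striping across 8 URL targets and enabling COMPRESSION raises the practical ceiling well into the multi-terabyte range.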

The test results are below for your reference. The test database is small (about 2GB), so a very large database may behave differently, but the best practices above still apply.

“Non-TDE database” means a database without TDE encryption; “TDE database” means a database encrypted with TDE.

Backup of the non-TDE database to URL (network IO counts are the number of IOs observed at each size):

Option                                    64kb IOs   1MB IOs   4MB IOs   Backup size   Duration      Speed
No option specified                       40         752       -         761MB         85 seconds    8.9 MB/sec
MAXTRANSFERSIZE = 4194304                 39         -         185       764MB         129 seconds   5.86 MB/sec
MAXTRANSFERSIZE = 4194304, COMPRESSION    5          -         67        269MB         27 seconds    27.9 MB/sec

Backup of the TDE database to URL:

Option                                    64kb IOs   1MB IOs   4MB IOs   Backup size   Duration      Speed
No option specified                       40         752       -         761MB         73 seconds    10.3 MB/sec
MAXTRANSFERSIZE = 4194304                 39         -         185       764MB         80 seconds    9.4 MB/sec
MAXTRANSFERSIZE = 4194304, COMPRESSION    53         -         67        271MB         33 seconds    22.5 MB/sec

The backup command below is provided for reference:

DECLARE @Database varchar(3)
DECLARE @BackupPath varchar(100)
DECLARE @TimeStamp varchar(15)
DECLARE @Full_Filename1 VARCHAR(300)
DECLARE @Full_Filename2 VARCHAR(300)
DECLARE @Full_Filename3 VARCHAR(300)
DECLARE @Full_Filename4 VARCHAR(300)
DECLARE @Full_Filename5 VARCHAR(300)
DECLARE @Full_Filename6 VARCHAR(300)
DECLARE @Full_Filename7 VARCHAR(300)
DECLARE @Full_Filename8 VARCHAR(300)

SET @Database = 'DBW'
SET @BackupPath = 'https://customer.blob.core.windows.net/' + Lower(@Database) + '-db-backups/'
SET @TimeStamp = REPLACE(CONVERT(VARCHAR(10), GETDATE(), 112), '/', '')
               + REPLACE(CONVERT(VARCHAR(10), GETDATE(), 108), ':', '')
SET @Full_Filename1 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X1.bak'
SET @Full_Filename2 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X2.bak'
SET @Full_Filename3 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X3.bak'
SET @Full_Filename4 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X4.bak'
SET @Full_Filename5 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X5.bak'
SET @Full_Filename6 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X6.bak'
SET @Full_Filename7 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X7.bak'
SET @Full_Filename8 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X8.bak'

-- Backup the database striped across 8 URL targets with 4MB transfer size and compression
BACKUP DATABASE @Database TO
    URL = @Full_Filename1,
    URL = @Full_Filename2,
    URL = @Full_Filename3,
    URL = @Full_Filename4,
    URL = @Full_Filename5,
    URL = @Full_Filename6,
    URL = @Full_Filename7,
    URL = @Full_Filename8
WITH STATS = 10, FORMAT, MAXTRANSFERSIZE = 4194304, COMPRESSION
GO


The SQLCAT team provides a blog on backing up VLDBs here.

Thanks to Simon Su, Escalation Engineer, Microsoft CSS, Asia Pacific & Greater China for contributing this article on Backup to URL after resolving this problem for a customer.

7. Obsolete Windows Server, SQL Server & SAP Kernels – Please Upgrade

The majority of SAP on Windows and SQL Server customers have modernized their Operating System and Database releases to at least Windows 2012 R2 and SQL Server 2012. However there are some customers running very old operating systems and database releases. Here is a list:

1. Windows 2003 – now over 14 years old and out of support by both Microsoft and SAP. Please update immediately! 2135423 – Support of SAP Products on Windows Server 2003 after 14-Jul-2015

2. SQL Server 2005 – out of support by Microsoft since last year. Please update immediately!

3. Windows 2008 & Windows 2008 R2 – both of these operating system versions are approaching end of life soon and we recommend planning to upgrade to the latest available operating system

4. SQL Server 2008 & SQL Server 2008 R2 – both these database versions are near end of life and we recommend upgrading to SQL Server 2014 or SQL Server 2016. Customers running SAP BW can benefit from performance gains measured in many hundreds of percent due to improvements in modern SQL Server versions

5. SAP Kernels 7.00, 7.01, 7.21, 7.40, 7.42 are end of life. It is recommended to run either 7.22_EXT or 7.49 Kernels as at March 2017

Recommendation: As at March 2017 deploy Windows 2016 with all the latest updates & SQL Server 2016 with the latest Service Pack & Cumulative Update. SQL Server patches can be downloaded from here.

2254428 – Error while upgrading to SAP NetWeaver 7.5 based systems: OS version 6.1 out of range (too low)

Downward Compatible Kernel Documentation:

2350788 – Using kernel 7.49 instead of kernel 7.40, 7.41, 7.42 or 7.45

2133909 – SAP Kernel 722 (EXT): General Information and Usage

7. Windows Server 2016 Cloud Witness – How to Change Port Number Used

Windows 2016 includes a very useful feature called Cloud Witness.

Cloud Witness is an enhancement over the previous File Share Witness, with lower cost and better functionality.

Cloud Witness is implemented inside the process rhs.exe and issues an https call on port 443 to the address <storage-account-name>.blob.core.windows.net
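For reference, enabling a Cloud Witness in its default configuration is a one-liner in PowerShell; the storage account name and access key below are placeholders:

# Configure the cluster quorum to use a Cloud Witness (run on any cluster node)
Set-ClusterQuorum -CloudWitness -AccountName "<storage-account-name>" -AccessKey "<storage-account-key>"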

Some customers with high security requirements have requested a process to change the default port used by Cloud Witness

rhs.exe can be routed via a proxy server. To do this run the following command:

netsh winhttp set proxy proxy-server="https=SERVER:PORT"

To configure the settings to enable PowerShell or the UI to work, enable the .net proxy settings. The easiest way to do this is by setting the proxy settings in Control Panel -> Internet Options (Internet Explorer should already be removed)

If additional security is required the address <storage-account-name>.blob.core.windows.net can be added to the firewall whitelist

The proxy server and any other supporting infrastructure (such as firewalls) become critical to the quorum calculation if the default behavior of the Cloud Witness is changed: a failure of the proxy server could cause the cluster to lose majority, in which case the cluster would deliberately take the cluster role offline.

8. Important Notes for SAP BW on SQL Server Customers & Column Store on SAP ERP Systems

SAP BW on SQL Server Customers can benefit from several new technologies.

1. SAP BW 7.00 to 7.50 customers – SQL Server Column Store on F Fact and E Fact tables
https://launchpad.support.sap.com/#/notes/2116639

2. SAP BW 7.40 to 7.50 customers – SQL Server Flat Cube

3. SAP BW 7.50 SPS 04 or higher can leverage FEMS pushdown

Additional performance improvements for SAP BW are documented here

It is recommended to update the report MSSCOMPRESS. This blog discusses the new version of this report

9. SQL Server on AlwaysOn Post Installation Steps – SWPM

The SAP Installation tools are not fully aware of SQL Server AlwaysOn. In general it is recommended to install SAP applications prior to establishing an AlwaysOn availability group.

After adding an AlwaysOn replica, the following SAPInst option can be run to create the required users and jobs.

An example high-level procedure for installing a NetWeaver system with AlwaysOn is described in the blog linked below.

Note: This option needs to be run on each replica while the replica is online as the Primary node. SAP applications must be shutdown prior to running this option.

Review this Blog – Script sap_synchronize_always_on can be used

https://blogs.msdn.microsoft.com/saponsqlserver/2016/02/25/always-on-synchronize-sap-login-jobs-and-objects/

SAP Note 1294762 – SCHEMA4SAP.VBS

SAP Note 683447 – SAP Tools for MS SQL Server

10. SQL Server 2016 SP1 – Transaction Log Writing to NVDIMM

It is possible to drastically speed up SQL Server transaction log write performance by using NVDIMMs.

This is a new technology released in SQL Server 2016 SP1.

Customers who have an ultra-low downtime requirement for OS/DB migrations may wish to test this new capability.

Information about this feature can be found below

https://blogs.msdn.microsoft.com/psssql/2016/04/19/sql-2016-it-just-runs-faster-multiple-log-writer-workers/

https://blogs.msdn.microsoft.com/bobsql/2016/11/08/how-it-works-it-just-runs-faster-non-volatile-memory-sql-server-tail-of-log-caching-on-nvdimm/

Recommended Notes & Links

555223 – FAQ: Microsoft SQL Server in NetWeaver based systems

1676665 – Setting up Microsoft SQL Server 2012

1966701 – Setting up Microsoft SQL Server 2014

2201060 – Setting up Microsoft SQL Server 2016

1294762 – SCHEMA4SAP.VBS

1744217 – MSSQL: Improving the database performance

2447884 – VMware vSphere with VMware Tools 9.10.0 up to 10.1.5: Performance Degradation on Windows

2381942 – Virtual Machines Hanging with VMware ESXi 5.5 p08 and p09

2287140 – Support of Failover Cluster Continuous Availability feature (CA)

2046718 – Time Synchronization on Windows

2325651 – Required Windows Patches for SAP Operations

2438832 – Network problems if firewall used between database and application servers

1911507 – Background Jobs canceled during failover of (A)SCS instance in windows failover cluster

Netsh config

https://parsiya.net/blog/2016-06-07-windows-netsh-interface-portproxy/

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

More Questions From Customers About SQL Server Transparent Data Encryption – TDE + Azure Key Vault


Recently many customers have been moving from AIX and HPUX to Windows 2016 & SQL Server 2016 running on Azure, as these UNIX platforms are no longer mainstream and no longer receive significant development or investment from their vendors

Most of these customers are deploying TDE to protect the database files and backups. Encrypting databases with strong ciphers like AES-256 is a highly effective way to prevent theft of data; consequently, great care must be taken with the keys. If there is a DR event or some need to restore the database and the keys cannot be found, the database is for all practical purposes lost. It is not possible to decrypt AES-256 by "brute force" methods.

To prevent this from happening we recommend leveraging the Azure Key Vault to securely store the SQL Server TDE keys

Many readers of this blog and customers have asked for an end-to-end process for a new SAP installation or migration on SQL Server 2016 on Azure with AlwaysOn and TDE using the Azure Key Vault.

Before reviewing the rest of this blog topic it is recommended to fully review this link https://msdn.microsoft.com/en-us/library/mt720686.aspx

Note: Using SQL Server TDE & storing SQL datafiles on Bitlocker or Azure ADE disks is not tested and is not recommended due to performance concerns

Prerequisites:

1. Segregate duties between the DBA and the Azure Key Manager. The DBA should not have access to the Azure Key Vault and the Key Administrator should not have access to SQL Server databases and backups

2. Ensure Azure Active Directory has been setup (most commonly this is integrated with on-premises Active Directory)

3. Ask the Key Administrator to assist with the Key Vault steps

4. Download the Azure Key Vault Integration

Before proceeding it is essential to read this documentation and understand the following process flow. More information is here

Implementation:

5. Register the SQL Server Application in Azure Active Directory

Open the ASM portal https://manage.windowsazure.com and navigate to the “Active Directory” service

Click on the Directory Service (either the default directory or if configured the integrated directory). Then click on “Applications”

The values in the URL/URI can be any value so long as the site is available

After creating the Azure Active Directory Application, click on the configure tab and note the Client ID and the Client Secret

Note: Some documentation and the PowerShell scripts refer to the "Client ID" as the "ServicePrincipalName". In this procedure they are the same, which is a potential source of confusion.

Create the Secret with either 1 or 2 years duration under the “keys” section of the Configuration Tab of the Applications menu in Azure Active Directory

6. Create the vault, master key and authorize SQL Server to access the Key

Grant the Client ID (ServicePrincipalName) permissions to get, list, wrapKey and unwrapKey on the Key Vault that already exists or has just been created

Set-AzureRmKeyVaultAccessPolicy -VaultName SAPKeyVault -ServicePrincipalName 2db602bd-4a4b-xxxx-xxxx-d128c143c8a9 -PermissionsToKeys get, list, wrapKey, unwrapKey

Check permissions on the Key Vault with the following command. The application registered in Azure Active Directory can be seen highlighted below

Get-AzureRmKeyVault -VaultName SAPKeyVault


Create the Key with the following command:

Add-AzureKeyVaultKey -VaultName 'SAPKeyVault' -Name 'SAPonSQLTDEKey' -Destination 'Software'

Alternatively a Key can be created via the Azure Portal as shown below

7. Create the database in advance in SQL Management Studio. Make the database size at creation large enough for the installation or import plus a few months' growth. Provided the database is created with the DB name = <SID>, SAPInst will recognize this as the installation target database.

8. Set the database recovery model to SIMPLE
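For example, assuming the database is named PRD as in the rest of this procedure:

-- Switch the database to the SIMPLE recovery model for the installation/import
ALTER DATABASE PRD SET RECOVERY SIMPLE;
GO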

9. Enable TDE with this command

-- Enable advanced options.
USE master;
GO
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

-- Enable EKM provider
sp_configure 'EKM provider enabled', 1;
GO
RECONFIGURE;
GO

-- Create a cryptographic provider, using the SQL Server Connector,
-- which is an EKM provider for the Azure Key Vault. This example uses
-- the name AzureKeyVault_EKM_Prov.

On all releases of SQL Server it is still required to download and install the SQL Server Connector for Microsoft Azure Key Vault.

After Installation of the connector run this command.

CREATE CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov
FROM FILE = 'C:\Program Files\SQL Server Connector for Microsoft Azure Key Vault\Microsoft.AzureKeyVaultService.EKM.dll';
GO

The next part is quite tricky. The SECRET in this command is the Client ID (referenced as the ServicePrincipalName) with the hyphens removed, concatenated with the Secret from the Azure Active Directory Application.

Example:

Azure Active Directory Application Client ID = 2db602bd-4x4x-4322-8xxf-d128c143c8a9

Azure Active Directory Application Secret = FZCzXY3K8RpZoK12MxF/WFxxAw6aOxxPU2ixxEkQBbc=

Step A: remove the hyphens 2db602bd-4x4x-4322-8xxf-d128c143c8a9 -> 2db602bd4x4x43228xxfd128c143c8a9

Step B: concatenate Client ID (minus hyphens) and Secret = 2db602bd4x4x43228xxfd128c143c8a9FZCzXY3K8RpZoK12MxF/WFxxAw6aOxxPU2ixxEkQBbc=
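A small PowerShell sketch of this concatenation, using the dummy example values from above:

# Build the SECRET value for CREATE CREDENTIAL: Client ID without hyphens + Application Secret
$clientId  = "2db602bd-4x4x-4322-8xxf-d128c143c8a9"
$appSecret = "FZCzXY3K8RpZoK12MxF/WFxxAw6aOxxPU2ixxEkQBbc="
$sqlSecret = ($clientId -replace "-", "") + $appSecret
Write-Output $sqlSecret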

******* NEXT STEP

USE master;

CREATE CREDENTIAL sysadmin_ekm_cred
WITH IDENTITY = 'SAPKeyVault', -- for public Azure
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.usgovcloudapi.net', -- for Azure Government
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.azure.cn', -- for Azure China
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.microsoftazure.de', -- for Azure Germany
SECRET = '2db602bd4a4b43228d7fd128c143c8a9fhEP5adz9FTrx2Nt4N36HGxxxx1X0Lo5VcTyJRxte7E='
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov;

-- Add the credential to the SQL Server administrator's domain login.
-- The login needs to already exist. This would typically be the DBA or SAP <sid>adm user.
ALTER LOGIN [SQLTDETEST\cgardin]
ADD CREDENTIAL sysadmin_ekm_cred;

******* NEXT STEP

-- While logged in as the DBA or SAP <sid>adm run this command.
-- This may not work if logged in as another user.
CREATE ASYMMETRIC KEY SAP_PRD_KEY
FROM PROVIDER [AzureKeyVault_EKM_Prov]
WITH PROVIDER_KEY_NAME = 'SAPonSQLTDEKey',
CREATION_DISPOSITION = OPEN_EXISTING;

******* NEXT STEP

USE master;

CREATE CREDENTIAL Azure_EKM_TDE_cred
WITH IDENTITY = 'SAPKeyVault', -- for public Azure
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.usgovcloudapi.net', -- for Azure Government
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.azure.cn', -- for Azure China
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.microsoftazure.de', -- for Azure Germany
SECRET = '2db602bd4a4b43228d7fd128c143c8a9fhEP5adz9FTrx2Nt4N36HGxxxb1X0Lo5VcTyJRxte7E='
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov;

******* NEXT STEP

USE master;

-- Create a SQL Server login associated with the asymmetric key
-- for the Database Engine to use when it loads a database
-- encrypted by TDE.
CREATE LOGIN TDE_Login
FROM ASYMMETRIC KEY SAP_PRD_KEY;
GO

-- Alter the TDE login to add the credential for use by the
-- Database Engine to access the key vault.
ALTER LOGIN TDE_Login
ADD CREDENTIAL Azure_EKM_TDE_cred;
GO

******* NEXT STEP

USE PRD;
GO

CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER ASYMMETRIC KEY SAP_PRD_KEY;
GO

-- Alter the database to enable transparent data encryption.
ALTER DATABASE PRD
SET ENCRYPTION ON;
GO

******* NEXT STEP

USE master;

SELECT * FROM sys.asymmetric_keys;

-- Check which databases are encrypted using TDE
SELECT d.name, dek.encryption_state
FROM sys.dm_database_encryption_keys AS dek
JOIN sys.databases AS d
ON dek.database_id = d.database_id;

11. Continue this procedure only when the encryption state = 3 (encrypted)

Even a blank database with no data will take some time to encrypt. The reason is that even "nothing" is encrypted using the symmetric key, so the original "nothing" or null value is represented by a completely random value. All of the above steps can be done prior to an SAP OS/DB migration and therefore these steps do not increase downtime
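A convenient way to monitor encryption progress is to include the percent_complete column of the same DMV used above (state 3 = encrypted):

-- Encryption progress per database; continue only once encryption_state = 3
SELECT d.name, dek.encryption_state, dek.percent_complete
FROM sys.dm_database_encryption_keys AS dek
JOIN sys.databases AS d ON dek.database_id = d.database_id;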

12. Run SWPM to install or migrate the SAP NetWeaver system

13. Complete post processing as per the SAP System Copy Guide

14. Set the SQL Server database recovery model to FULL

15. Start a full database backup
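As a sketch of steps 14 and 15, assuming database PRD and a hypothetical local backup path (Backup to URL as shown earlier in this blog works just as well):

-- Switch to FULL recovery and take the full backup used to seed the replicas
ALTER DATABASE PRD SET RECOVERY FULL;
GO
BACKUP DATABASE PRD TO DISK = 'R:\Backup\PRD_FULL.bak'
WITH COMPRESSION, STATS = 10;
GO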

16. Copy the database backup file to a location where AlwaysOn Replica #1 can restore the file

17. Run the commands from step 9 up to and including the "ALTER LOGIN TDE_Login" step in this procedure to install the TDE key on Replica #1 [Repeat on each AlwaysOn replica node]

18. Restore the database on AlwaysOn Replica #1

19. Configure the Azure Internal Load Balancer – ILB if this has not already been done in advance (ensure Direct Server Return is enabled)

20. The AlwaysOn Availability Group Wizard will not work with TDE databases. It is not possible to use the wizard to set up AlwaysOn

These two blogs discuss how to setup AlwaysOn on TDE databases

In these blogs ignore the Key Management procedures as in this scenario Keys are stored in Azure and not locally. The T-SQL to create the AlwaysOn Availability Group is the same

https://blogs.msdn.microsoft.com/alwaysonpro/2015/01/07/how-to-add-a-tde-encrypted-database-to-an-availability-group/

https://blogs.msdn.microsoft.com/sqlserverfaq/2013/11/22/how-to-configure-always-on-for-a-tde-database/

21. Test failover by running the Failover wizard in SSMS

22. Run the step listed in topic #9 in this blog to create users on the new Replica Node (SAPInst would have already performed this activity as part of the install or migration on the Primary Node)

23. Check access to the database with a simple query SELECT * FROM <sid>.T000;

24. Change the default.pfl value for dbs/mss/server = <primary node hostname> to dbs/mss/sqlserver = <alwayson listener name> (for Java systems use ConfigTool)

25. Start the SAP application servers and run SICK

26. Run the Always On failover wizard again to test failover and failback.

Note: Azure Key Vault integration for SQL Server TDE requires these hosts and ports to be whitelisted

login.microsoftonline.com/*:443
*.vault.azure.net/*:443

If any problems are observed check the contents of the trace file dev_w0. The contents of the tracefile should look something like:

M Fri Mar 24 22:37:40 2017

M calling db_connect …

B Loading DB library 'C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll' ...

B Library 'C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll' loaded

B Version of 'C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll' is "745.04", patchlevel (0.201)

C Callback functions for dynamic profile parameter registered

C Warning: Env(MSSQL_SERVER) [<LISTENER>,<PORT>;MultiSubnetFailover=YES] <> Prof(dbs/mss/server) [<LISTENER>,<PORT>;MultiSubnetFailover=YES]

C Thread ID:15964

C Thank You for using the SLODBC-interface

C Using dynamic link library 'C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll'

C 7450 dbmssslib.dll patch info

C SAP patchlevel 0

C SAP patchno 201

C Last MSSQL DBSL patchlevel 0

C Last MSSQL DBSL patchno 201

C Last MSSQL DBSL patchcomment SAP Support Package Stack Kernel 7.45 Patch Level 201 (2340627)

C ODBC Driver chosen: ODBC Driver 13 for SQL Server native

C Network connection used from <APPSERVER> to <LISTENER>,<PORT>;MultiSubnetFailover=YES using tcp: <LISTENER>,<PORT>;MultiSubnetFailover=YES


Using Columnstore on ERP tables


SQL Server columnstore indexes are optimized for aggregations of large amounts of data. Therefore, they have been used successfully in SAP's data warehouse system SAP BW for years. ERP systems typically still use rowstore (b-tree) indexes, because these are optimized for the most common data access pattern of ERP systems: directly reading a few rows specified by very selective filters or the primary key. However, there are also reporting queries in ERP systems which have to access a large number of rows. Such queries would benefit from a columnstore index, too.

When talking about ERP below, we mean all non-BW products of the SAP Business Suite like ERP or CRM.

Since we released our first version of SQL Server columnstore in 2012, we have constantly received requests from SAP customers to use the columnstore on ERP systems, too. This was not possible for various technical reasons in SQL Server 2012 and 2014. This restriction is gone in SQL Server 2016, see also https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/11/sql-server-2016-improvements-for-sap-bw. You can now create an additional Nonclustered Columnstore Index (NCCI) on an SAP ERP table, which results in hybrid data storage: the table itself, the primary key and all existing indexes stay in row format (b-trees). Only the new, additional index is stored in columnar format.
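For illustration only, the plain T-SQL form of an NCCI on a hypothetical reporting table looks like this; in SAP systems the index should instead be created with report MSS_CS_CREATE (described below) so that it is also registered in the DDIC:

-- Illustrative only: additional nonclustered columnstore index on a rowstore table.
-- Table and column names are hypothetical examples.
CREATE NONCLUSTERED COLUMNSTORE INDEX [ZREPORT~CS]
ON [ZREPORT] ([MANDT], [BUKRS], [GJAHR], [AMOUNT]);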

Customer Scenarios

SAP will not deliver columnstore indexes on ERP tables as part of an SAP installation or upgrade. The NCCI is intended as a tuning option for specific customer scenarios. Selecting suitable ERP tables and testing the impact of the NCCI is a consulting project. An NCCI will certainly improve reporting performance. However, it may have a negative impact on transactional throughput: every modification of a data record has to be applied to the columnstore, too.

For cubes of an SAP BW system, the columnstore index replaces several rowstore indexes, which results in disk space savings. This is possible because we know the workload of SAP BW cubes exactly. For ERP systems, however, we cannot tell you exactly which indexes are unneeded. Each customer implements different customizing and has a different workload mix. Therefore, the NCCI is intended as an optional, additional index.

Using the columnstore in an SAP ERP system is not intended as a replacement for a dedicated SAP BW system. SAP BW is fully optimized for reporting queries. A distinct SAP BW system separates the reporting workload from the transactional workload of SAP ERP.

Creating a Columnstore Index

After applying SAP Note 2419662, you can create one NCCI per table using SAP report MSS_CS_CREATE. This Columnstore Index always contains all fields of the table (with a few exceptions, e.g. IMAGE fields). Report MSS_CS_CREATE only has three parameters: Table name, index name and degree of parallelism, which defines the number of logical CPUs used for the index creation.

[Screenshot: SAP report MSS_CS_CREATE]

You can schedule report MSS_CS_CREATE as a batch job. The NCCI is always created offline, meaning concurrent row modifications on the same table are blocked while the NCCI is being created. SQL Server 2016 does not support the online creation of an NCCI. This feature is planned for the next version of SQL Server.

Integration in SAP DDIC

All indexes in SAP are defined in the SAP Data Dictionary (DDIC). Unfortunately, an index in DDIC is restricted in the number of columns and the number of bytes per index row. Therefore, we had to trick the DDIC somewhat: for an NCCI, the index columns in DDIC and on the DB do not always match. However, you do not have to worry about this: the new SAP report MSS_CS_CREATE creates the NCCI on the database. At the same time, it creates a DDIC definition for the NCCI which fulfills all DDIC requirements.

DDIC does not know anything about the columnstore property of an index (it is stored as a DDSTORAGE parameter). This results in a restriction for creating the NCCI in SAP: You cannot transport an NCCI from the development system to the productive system. Instead, you have to create the NCCI on both systems separately using report MSS_CS_CREATE.

Best practices

Columnstore indexes are only useful for large tables. Therefore, you should not even consider creating an NCCI on an ERP table with fewer than 10 million rows. As a matter of course, an NCCI is only useful on tables that are used for long-running, complex reporting queries. Ideally, these tables have a low or moderate rate of change.

For best reporting performance, you should make sure that all columnstore rowgroups are compressed. The concepts of columnstore rowgroups and the procedure of rowgroup compression are described in https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/24/concepts-of-sql-server-2014-columnstore. For SAP BW, rowgroup compression is performed as a final step during data load. In SAP ERP, there is no separate data load phase. Instead, ERP tables are updated all the time during normal working hours. If you have a dedicated time window for your reporting, you might run the columnstore rowgroup compression straight before running your reports (which are supposed to use the NCCI). For this purpose, you can use report MSSCOMPRESS as described in https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/25/improved-sap-compression-tool-msscompress.

[Screenshot: report MSSCOMPRESS]

There are two options in report MSSCOMPRESS for processing columnstore rowgroups:

  • When choosing "Compress Rowgroups", an ALTER INDEX REORGANIZE command (with the option COMPRESS_ALL_ROW_GROUPS=ON) is performed if (and only if) there are uncompressed rowgroups in the columnstore index of the selected table (see the sketch after this list).
  • When choosing "Force CS Reorganize" in addition, the ALTER INDEX REORGANIZE command is always performed. Thereby small rowgroups are merged (in addition to the rowgroup compression of open rowgroups).
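The underlying T-SQL issued in both cases is essentially the following; the index and table names are placeholders:

-- Compress open rowgroups of a columnstore index
ALTER INDEX [ZREPORT~CS] ON [ZREPORT]
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);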

For very large SAP ERP tables (some hundred million rows), rowgroup compression is not as important as for SAP BW. Since an SAP ERP table is never partitioned, you can have a maximum of one million uncompressed rows. On the other hand, the ALTER INDEX REORGANIZE also optimizes already compressed rowgroups when lots of UPDATEs and DELETEs have been executed before. Therefore, you might run the rowgroup compression as a periodic SAP batch job using the scheduler in report MSSCOMPRESS.

Conclusion

The 3rd generation of SQL Server columnstore provides lots of improvements. You can now use the columnstore even for SAP ERP systems. Therefore, it is highly recommended to upgrade to SQL Server 2016.

Transparent Data Encryption (TDE) acceleration for SQL 2016 in Windows Azure


Today we want to show you the speed improvements we get by supporting the Intel AES-NI instruction set for transparent data encryption (TDE) on Windows Azure. This instruction set reduces the CPU overhead of turning on Transparent Data Encryption for SQL Server databases.

For the testing scenario we used an Azure DS15 virtual machine with 40 CPUs and 140 GB of memory. All 16 disk drives were SSDs, the log drive a RAID 1 over 2 SSDs. The tests were performed against SQL Server 2014 and SQL Server 2016 with a 1TB SAP database. All 4 encryption algorithms (AES_128, AES_192, AES_256 and TRIPLE_DES) were used; for TRIPLE_DES on SQL Server 2016 the database has to be in compatibility mode for SQL Server 2014 (120) (*) as this algorithm was deprecated in SQL Server 2016.
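For reference, switching a database to SQL Server 2014 compatibility mode is a single statement; the database name is a placeholder:

ALTER DATABASE PRD SET COMPATIBILITY_LEVEL = 120;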

[Chart: encryption and decryption runtimes, SQL Server 2014 vs SQL Server 2016]

In this graph you can see that the encryption and decryption of this 1TB database always run faster on SQL Server 2016 ((*) except for the TRIPLE_DES algorithm, which is only available in SQL Server 2014 compatibility mode on SQL Server 2016). The decrease in runtime goes up to 67% (AES_192 decryption); the average is 38% without TRIPLE_DES (22.5% with TRIPLE_DES). This means SQL Server 2016 improves the encryption/decryption speed by 38% just by making use of the Intel AES-NI instruction set.

For the load test we used DBCC CHECKDB, once as a standard run and once with the physical_only option. These were the measured run times:

[Chart: DBCC run times per algorithm, SQL Server 2014 vs SQL Server 2016]

One can see that the run times on SQL Server 2016 (green) are much smaller due to the hardware support of the Intel AES-NI instruction set and the changes for DBCC in SQL Server 2016. The execution times for the three AES algorithms do not depend very much on the algorithm; the times are nearly the same whether encrypted or not. Even the deprecated algorithm TRIPLE_DES is faster on SQL Server 2016 than the default algorithm AES_256 on SQL Server 2014.

Building an average over all the algorithms and the 4 executions (Normal, physical_only, encrypted, encrypted and physical_only) the picture is even clearer:

[Chart: DBCC averages over all algorithms and the 4 executions]

The difference between the encrypted and unencrypted runs (normal or physical_only) is much higher on SQL Server 2014 than on SQL Server 2016: in our test SQL Server 2014 needed an extra hour (1:27 h to 2:22 h, a 38.8% increase) for the encrypted case, whereas SQL Server 2016 only needed 10 minutes more (0:49 h to 0:59 h, a 16.9% increase). The overhead added by TDE (the difference between blue and gray or between orange and yellow) is much smaller in SQL Server 2016 than in SQL Server 2014.

SQL Server 2016 is able to detect and leverage the Intel AES-NI instruction set on the Azure virtual machine, cutting the overhead of transparent data encryption in half.

SAP on Azure: General Update for Customers & Partners April 2017


SAP and Microsoft are continuously adding new features and functionalities to the Azure cloud platform. The objective of the Azure cloud platform is to provide the same performance, product availability support matrix and availability as on-premises solutions with the added flexibility of cloud. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. SQL Server Multiple Availability Groups on Azure

The Azure platform requires an Internal Load Balancer (ILB) to support clustering in both Linux and Windows High Availability solutions.

Previously there was a limit of one IP address per ILB. This has been removed; now up to 30 IP addresses can be balanced on a single ILB, with a limit of 150 port rules per ILB.

This means that it is now possible to consolidate multiple AlwaysOn Listeners onto two or more cluster nodes in the same manner as an on-premises deployment.

Before deploying such a configuration it is highly recommended to make a detailed diagram and plan resources such as:

a. Premium Disk design, Number of SQL Datafiles, NTFS format size and whether to use SQL Datafiles stored directly in Azure blob storage

b. Cluster Quorum model and votes

c. Physical and virtual hostnames and IP addresses

d. AlwaysOn replica configuration (such as auto-failover nodes, synchronous and asynchronous replicas)

e. Document the Port that the SQL Server AlwaysOn Listener will use for each Availability Group

IP Address Name | IP Address Number | Hostname | Port | Probe Port | Comment
Host1 – Physical IP | xx.xx.xx.10 | Host1 | – | – | IP assigned to SQL Node 1
Host2 – Physical IP | xx.xx.xx.11 | Host2 | – | – | IP assigned to SQL Node 2
Host3 – Physical IP | yy.yy.yy.12 | Host3 | – | – | IP assigned to SQL Node 3 in DR DC
Virtual IP for SQL Listener 1 | xx.xx.xx.100 | SAPDB1 | 56000 | 59998 | Virtual IP created by cluster for SQL AG #1 in Primary DC [assigned to ILB]
Virtual IP for SQL Listener 2 | xx.xx.xx.101 | SAPDB2 | 56001 | 59999 | Virtual IP created by cluster for SQL AG #2 in Primary DC [assigned to ILB]
Virtual IP for Windows Cluster | xx.xx.xx.1 | SAPCLUDB1 | – | – | Virtual IP for internal cluster in Primary DC [not assigned to ILB]
Virtual IP for SQL Listener 1 | yy.yy.yy.100 | SAPDB1 | 56000 | – | Virtual IP created by cluster for SQL AG #1 in DR DC
Virtual IP for SQL Listener 2 | yy.yy.yy.101 | SAPDB2 | 56001 | – | Virtual IP created by cluster for SQL AG #2 in DR DC
Virtual IP for Windows Cluster | yy.yy.yy.1 | SAPCLUDB1 | – | – | Virtual IP for internal cluster in DR DC

After careful planning, the ILB configuration PowerShell scripts can be found at these links:

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-ps-alwayson-int-listener

https://blogs.msdn.microsoft.com/igorpag/2016/01/25/configure-an-ilb-listener-for-sql-server-alwayson-availability-groups-in-azure-arm/

https://blogs.msdn.microsoft.com/sql_pfe_blog/2017/02/21/trouble-shooting-availability-group-listener-in-azure-sql-vm/

Note: it is recommended to set the SQL Server max memory parameter in this configuration. It is also recommended to enable Direct Server Return (called Floating IP in the Portal) in this configuration. Similar functionality to stack multiple instances on a single VM is also available on other DBMS. If there are 2 AlwaysOn nodes in DR, then another separate ILB is required in the DR datacenter. Probe ports must be unique per ILB, but the same probe port number can be reused on different ILBs.
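As a rough sketch only (resource names, IP addresses and ports below are placeholders; the linked documentation contains the complete scripts), adding a second Availability Group frontend to an existing ILB with AzureRM PowerShell looks like this:

# Sketch: add a second frontend IP, probe and load-balancing rule to an existing ILB
$ilb = Get-AzureRmLoadBalancer -Name "SAPDB-ILB" -ResourceGroupName "SAP-PROD"
$subnetId = $ilb.FrontendIpConfigurations[0].Subnet.Id
$ilb = $ilb | Add-AzureRmLoadBalancerFrontendIpConfig -Name "AG2-Frontend" -PrivateIpAddress "xx.xx.xx.101" -SubnetId $subnetId
$ilb = $ilb | Add-AzureRmLoadBalancerProbeConfig -Name "AG2-Probe" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
$fe    = Get-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $ilb -Name "AG2-Frontend"
$probe = Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $ilb -Name "AG2-Probe"
$pool  = Get-AzureRmLoadBalancerBackendAddressPoolConfig -LoadBalancer $ilb | Select-Object -First 1
$ilb = $ilb | Add-AzureRmLoadBalancerRuleConfig -Name "AG2-Rule" -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -Protocol Tcp -FrontendPort 56001 -BackendPort 56001 -EnableFloatingIP
Set-AzureRmLoadBalancer -LoadBalancer $ilb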

2. Multiple ASCS on a Cluster in Azure

Multiple SAP ASCS instances can be consolidated onto a single cluster. This Multi-SID configuration is explained in detail in this documentation.

It is essential to plan and document the solution before attempting to run the PowerShell Scripts to configure the ASCS

Note: As the same port 445 is shared by multiple Frontend IP addresses Direct Server Return must be enabled in this scenario. Direct Server Return is the PowerShell terminology. Floating IP is the terminology for Direct Server Return in the Azure Portal

IP Address Name | IP Address Number | Hostname | Ports | Probe Port | Comment
Host1 – Physical IP | xx.xx.xx.20 | Host1 | – | – | IP assigned to ASCS Node 1
Host2 – Physical IP | xx.xx.xx.21 | Host2 | – | – | IP assigned to ASCS Node 2
Host3 – Physical IP | yy.yy.yy.22 | Host3 | – | – | IP assigned to ASCS Node 3 in DR DC
Virtual IP for ECC ASCS | xx.xx.xx.200 | SAPECC | 3200,3300,3600 | 59998 | Virtual IP created by cluster for ECC ASCS in Primary DC [assigned to ILB]
Virtual IP for BW ASCS | xx.xx.xx.201 | SAPBW | 3201,3301,3601 | 59999 | Virtual IP created by cluster for BW ASCS in Primary DC [assigned to ILB]
Virtual IP for Windows Cluster | xx.xx.xx.2 | SAPCLUSAP1 | – | – | Virtual IP for internal cluster in Primary DC [not assigned to ILB]
Virtual IP for ECC ASCS | yy.yy.yy.200 | SAPECC | 3200,3300,3600 | – | Virtual IP created by cluster for ECC ASCS in DR DC
Virtual IP for BW ASCS | yy.yy.yy.201 | SAPBW | 3201,3301,3601 | – | Virtual IP created by cluster for BW ASCS in DR DC
Virtual IP for Windows Cluster | yy.yy.yy.2 | SAPCLUSAP1 | – | – | Virtual IP for internal cluster in DR DC

SAP ASCS on Azure Checklist:

1. Ensure the Timeout on the ILB is set to 30 minutes (this is the default in the PowerShell script)

2. Ensure the default.pfl parameter enque/encni/set_so_keepalive = TRUE

3. On Windows set the TCP/IP registry values KeepAliveTime and KeepAliveInterval to 180000 (3 minutes); see SAP Note 1593183 – TCP/IP networking parameters for SQL Server https://launchpad.support.sap.com/#/notes/1593183/E
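A sketch of setting these values with PowerShell (run on each cluster node; a reboot is required for the values to take effect):

# TCP keep-alive registry values per SAP Note 1593183 (180000 ms = 3 minutes)
$tcpip = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
New-ItemProperty -Path $tcpip -Name KeepAliveTime -PropertyType DWord -Value 180000 -Force
New-ItemProperty -Path $tcpip -Name KeepAliveInterval -PropertyType DWord -Value 180000 -Force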

4. Choose Probe Ports (normally 59999)

5. Set the Windows Cluster timeout

With PowerShell

$cluster = Get-Cluster; $cluster.SameSubnetDelay = 1500

$cluster = Get-Cluster; $cluster.SameSubnetThreshold = 10

$cluster = Get-Cluster; $cluster.CrossSubnetDelay = 1500

$cluster = Get-Cluster; $cluster.CrossSubnetThreshold = 20

With Cluster Command

cluster /cluster:<ClusterName> /prop SameSubnetDelay=1500

cluster /cluster:<ClusterName> /prop SameSubnetThreshold=10

cluster /cluster:<ClusterName> /prop CrossSubnetDelay=1500

cluster /cluster:<ClusterName> /prop CrossSubnetThreshold=20

On Windows 2016 the default values are already set to the correct values

6. A future blog post will discuss setup and configuration of HA on Suse

See SAP Note 1634991 – How to install (A)SCS instance on more than 2 cluster nodes https://launchpad.support.sap.com/#/notes/0001634991

3. Does the Internal Cluster Virtual IP Need To Be Assigned the Azure Internal Load Balancer (ILB)?

Windows cluster has its own internal Virtual IP and Virtual Hostname. These resources are needed for the operation of the cluster. The virtual IP address of the internal cluster does not need to be added as a Frontend IP address onto the Azure Internal Load Balancer (ILB).

There is no requirement to add the cluster Virtual IP address to the ILB, however this can optionally be done.

4. Useful PowerShell Commands for Azure

A basic level of PowerShell knowledge is typically required to deploy SAP systems on Azure at large scale.

It is possible to perform nearly all activities via the Azure Portal; however, PowerShell scripts are fast, simple and very repeatable.

To setup Azure PowerShell Cmdlets:

Make sure to install AzureRM PowerShell Cmdlets while running PowerShell as an Administrator

https://msdn.microsoft.com/en-us/library/mt125356.aspx

On a Windows 10 based console it should be possible to open PowerShell as an Administrator and run:

PS C:\> Install-Module AzureRM

PS C:\> Install-AzureRM

Login using the account provided by the Azure administrator. Typically this is username@domain.com

Login-AzureRmAccount

List the available Azure subscriptions with:

Get-AzureRmSubscription

Set-AzureRmContext -SubscriptionName "<subscription name goes here>"

https://docs.microsoft.com/en-us/powershell/#pivot=main&panel=getstarted

https://blogs.technet.microsoft.com/heyscriptingguy/2013/06/22/weekend-scripter-getting-started-with-windows-azure-and-powershell/

https://docs.microsoft.com/en-us/powershell/azure/overview?view=azurermps-3.7.0

5. SAP Hana on Azure – Virtual Machines

SAP Hana is certified for production OLAP workloads on Azure VM GS5. SAP BW on Hana and similar applications can be run in production on this VM type

GS5 is not Generally Available for Production OLTP workloads as at April 2017

The GS5 VM is certified for all workloads: Suite on Hana, BW on Hana and S/4 Hana for non-production scenarios

https://global.sap.com/community/ebook/2014-09-02-hana-hardware/enEN/iaas.html

More information about installing SAP applications on Hana on Azure GS5 VM type can be found here

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started

1928533 – SAP Applications on Azure: Supported Products and Azure VM types https://launchpad.support.sap.com/#/notes/1928533

2015553 – SAP on Microsoft Azure: Support prerequisites https://launchpad.support.sap.com/#/notes/2015553

6. SAP Hana on Azure – Azure Large Instances

Enterprise customers running large scale Hana scenarios are likely to require more than 2TB of RAM to run large BW on Hana or Suite on Hana scenarios and allow for 1-3 years of DB growth. The Azure platform provides the following Large Instances for these scenarios based on Intel E7 Haswell & Broadwell processors:

  • SAP HANA on Azure S72 (2-socket, 768GB)
  • SAP HANA on Azure S72m (2-socket, 1.5TB)
  • SAP HANA on Azure S192 (4-socket, 2.0TB)
  • SAP HANA on Azure S192m (4-socket, 4.0TB)
  • all use cases are supported, OLAP and OLTP, including BWoH, SoH, S/4H
  • in production and non-production
  • scale-out configurations are possible on “SAP HANA on Azure S144 (4-socket, 1.5TB)” and on “SAP HANA on Azure S192 (4-socket, 2.0TB)” up to 15+1 nodes (15 active (BW: 1 master, 14 worker) nodes, 1 standby)

Common Question & Answer about Azure Large Instances for Hana:

Q1. Are Large Instances VMs or bare metal? Answer = bare metal Hana TDI certified appliances

Q2. Which HA/DR solutions are supported? Answer = both HSR and storage level replication options are possible

Q3. How to order and provision an Azure Hana Large Instance for Hana? Answer = contact Microsoft Account Team

Q4. What is included in the monthly charge on the Azure Bill? Answer = all compute charges, high speed storage equal to 4 x Hana RAM, network costs between SAP application server VMs and the Hana appliance and any network utilization for storage based replication for DR solutions to another Azure DR peer datacenter are included

Q5. Can Azure Large Instances for Hana be upgraded to a larger size? Answer = Yes

Q6. Are all Hana scenarios such as MCOS and MDC supported? Answer = yes, typically the same functionalities that are available with any other TDI solution are available on Azure Large Instances for Hana

Q7. Does Microsoft resell Hana licenses or provide Hana support? Answer = No, Hana licenses and support are provided by SAP. Microsoft provides IaaS (Infrastructure as a Service) only. Hardware, firmware, storage, networking and an initial installation of Suse for SAP Applications or Redhat are provided. Hana should then be installed by a Hana certified consultant. Customers need to buy a Suse or Redhat license and obtain a support contract for Suse or Redhat

Q8. What is the SLA for Azure Large Instances for Hana? Answer = SLA of 99.99% is described here

Q9. Does Microsoft patch and maintain the Suse or Redhat OS on a Hana Large Instance? Answer = No, Hana Large Instances is an IaaS offering. Layers lower than the OS are managed and supported by Microsoft.

Q10. Do Hana Large Instances fully support Suse or Redhat clustering? Answer = Yes

2316233 – SAP HANA on Microsoft Azure (Large Instances) https://launchpad.support.sap.com/#/notes/2316233

Links:

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-architecture?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

HA/DR on Large Instances

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-infrastructure-connectivity?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

Backup/Restore Guide

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-guide

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-file-level

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-storage-snapshots

7. Resource Groups on Azure – What Are They & How Should I Use Them?

Resource Group is a logical collection of Azure objects. The properties of a Resource Group are:

1. All the resources in your group should share the same lifecycle. You deploy, update, and delete them together.

2. If a resource needs to exist on a different deployment cycle, it should be in another resource group.

3. Each resource can only exist in one resource group.

4. You can add or remove a resource to a resource group at any time.

5. You can move a resource from one resource group to another group. For more information, see Move resources to new resource group or subscription.

6. A resource group can contain resources that reside in different regions.

7. A resource group can be used to scope access control for administrative actions.

8. A resource can interact with resources in other resource groups. This interaction is common when the two resources are related but do not share the same lifecycle

The question for SAP Basis administrators is: How should I structure resource groups across the Sandbox, Development, QAS and Production environments that make up the entire SAP Landscape?

Feedback so far from actual customer deployments:

1. Small Sandbox or Development systems might all be grouped together into only one Resource Group. A small Development or Sandbox environment might comprise ECC 6.0, BW, SCM, GRC, PI and a Solman system.

2. Small Development or Sandbox would share a common storage account or use Managed Disks

3. Often a single Vnet is used, or at most several Vnets for non-production and production (Note: It is possible to deploy VMs or other resources from Resource Group A onto a Vnet in Resource Group B).

4. If there is more than one Vnet within the same datacenter, then Vnet peering is used to reduce latencies

5. If there is one Vnet in Datacenter A for Production and another Vnet in Datacenter B for Non-Production and DR, there is a Vnet-2-Vnet gateway set up between these two Vnets

6. Network Security Groups are typically setup to only allow SAP specific ports onto the subnets such as 3200-3299, 3300-3399, 3600-3699 etc. A full list of SAP ports can be found here. Windows File Sharing ports 135, 139 and 445 would normally be blocked

7. Prior to the introduction of Managed Disks, guidance around storage accounts could be summarized as:

-In all cases Premium Storage should be used for DBMS servers or for standalone engines with high IOPS requirements such as TREX

-Small systems such as Development systems might share one storage account

-Larger QAS systems that might be used for performance testing should ideally have their own storage account

-Large Production SAP applications like ECC or BW should have their own storage account

-Smaller Production systems with low IOPS requirement such as Solman, EP, PI or GRC can share a single storage account

Since the introduction of Managed Disks it is generally recommended to use Managed Disks

8. Some customers are deploying individual SAP applications into their own resource groups in production

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups

8. Azure Managed Disks

Managed Disks is a new feature in Azure Storage. It is recommended to evaluate Managed Disks for new SAP deployments. The benefits of Managed Disks are explained here.

In summary the benefits are:

a. No need to create specific storage accounts and balance the number of VHDs across storage accounts (up to a limit of 10,000 disks per subscription)

b. There is no need to manually "pin" specific VHDs to different storage stamps to ensure storage level high availability (for example on AlwaysOn cluster nodes)

c. Integration into Azure Backup

Note: Azure Managed Disks require port 8443 to be open

https://azure.microsoft.com/en-us/services/managed-disks/

https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview

https://azure.microsoft.com/en-us/pricing/details/managed-disks/

9. Network Latency on Azure

Azure datacenters are distributed across more geographic locations than any other cloud platform. Network latencies from an Azure datacenter to most worldwide locations are more than acceptable for running SAP applications.

Before deploying SAP on Azure it is recommended to test the network latencies from a customer office to nearby Azure datacenters.

The example below shows a test using http://azurespeed.com/ from Singapore.

A test such as this should be performed on a wired connection. Latencies of geographically nearby datacenters should be 10-50ms and trans-continental could be 100-250ms RTT

If values well in excess of these are seen it is possible that the ISP may have routing problems.

10. SAP Application Server on Azure Public Cloud

When installing SAP application servers on Azure Public Cloud Virtual Machines we recommend following the deployment pattern detailed below:

a. Do not provision additional disks for SAP application servers. Install the following components on the Windows C: drive or Linux Root

-Boot

-OS

-/usr/sap (SAP executable directory)

b. Place the OS Pagefile onto the local temporary disk (this is the default for Windows)

c. Deploy the latest OS release possible. As at April 2017 Windows 2016, Suse 12.2 and Redhat 7

d. Linux FS type – review 405827 – Linux: Recommended file systems

e. Use a single virtual network card. If SAP LaMa is used a second virtual network card is recommended

Note: Do not under any circumstances use SAP application servers as file servers for interface files. Create a dedicated Management Station for interface files and SAP DVD Installation Media

11. Windows Dynamic Port Ranges

Windows Server uses dynamic callback ports that can overlap with SAP J2EE ports

It is recommended to reserve the ports 50000-50999 for SAP.

The commands below should be run on Windows servers with Java Instances installed:

netsh int ipv4 set dynamicport tcp start=60000 numberofports=5536

netsh int ipv4 show dynamicport tcp

1399935 – Reservation of TCP/UDP ports in Windows https://launchpad.support.sap.com/#/notes/1399935

https://support.microsoft.com/en-us/help/929851/the-default-dynamic-port-range-for-tcp-ip-has-changed-in-windows-vista-and-in-windows-server-2008

12. Switch Off SAP Application Server AutoStart During Maintenance

The availability of SAP application servers is improved by configuring Autostart.  In a scenario where an Azure component fails and the Azure platform self-heals and moves a VM to another node the impact of this restart is much less if the application server restarts automatically.

Autostart of an SAP instance is configured by adding Autostart = 1 to the SAP default.pfl

Maintenance operations like support packs, upgrades, enhancement packs or kernel updates may assume that the default behavior of the SAP system is not to automatically restart.

Therefore it is generally recommended to comment out this profile parameter during such activities (see the example below)
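For example, in the profile (comment the line back in after the maintenance window):

# Restart the instance automatically, e.g. after the Azure platform self-heals a VM
Autostart = 1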

13. DevTest Labs – Great for Building Sandbox Systems

The Azure Portal VM properties page has a feature to automatically shutdown VMs. This is very useful for Sandbox or Upgrade/Support Pack testing machines. This feature allows the BASIS team to provision large and powerful virtual machines for testing but to limit the costs by shutting down these VMs when not in use. This is particularly useful for Hana test machines as Hana needs very large VMs

https://azure.microsoft.com/en-us/services/devtest-lab/

SAP systems that are accessed by end users or SAP functional and ABAP teams must also have an Automatic Start feature in addition to the Automatic Stop feature.

This can be achieved using the following methods:

http://clemmblog.azurewebsites.net/start-stop-windows-azure-vms-according-time-schedule/

https://blogs.technet.microsoft.com/uktechnet/2016/07/18/how-to-schedule-vm-shutdown-using-azure-automation/

14. Disk Cache & Encryption Settings

Azure storage provides multiple options for disk caching and encryption.

General guidance for the use of Premium Azure Storage disk caching:

Premium Disk Type | Default Cache Setting | Recommended Cache Setting for SAP Servers
OS Disk | ReadWrite | ReadWrite
Data Disk – Write Heavy | None | None (for example DB Transaction Log)
Data Disk – Read Only | None | ReadOnly

Do not use ReadWrite disk cache settings on SAP systems including DBMS disks or TREX

General Guidance for the use of Encryption:

1. Assess the risk profile of the SAP systems and evaluate if Encryption is required or not

2. Azure platform supports Encryption at two different layers

-Storage Account Level

-Disk Level

3. DBMS platforms also support encryption – SQL Server TDE or Hana encryption for example

4. Typically DBMS level encryption has the advantage of also encrypting backup files. Backup files are a common attack vector for data theft

5. It is strongly recommended not to use multiple encryption technologies on the same device (for example combining DB level encryption, file system level encryption and an encrypted storage account). This can lead to performance problems

6. A recommended configuration is:

-For DB servers use Disk encryption to protect the OS/Boot disk only. Use native DBMS level encryption to protect the DB files and backups

-For SAP application server use Disk encryption to protect the OS/Boot disk.

Note: some forms of DB level encryption are vulnerable to cloning the entire Virtual Machine and all disks. Using Azure Disk Encryption on the OS/Boot disk prevents cloning an entire VM.

Customer Case Studies on Azure

Large Australian Energy Company Modernizes SAP Applications & Moves to Azure Public Cloud

Zespri https://customers.microsoft.com/en-us/story/kiwi-grower-prunes-costs-defends-business-from-disaste

PACT Building on synergy for a bold growth strategy https://customers.microsoft.com/en-us/story/building-on-synergy-for-a-bold-growth-strategy

AGL Innovation Spotlight: AGL puts energy into action with the Cloud https://customers.microsoft.com/en-us/story/innovation-spotlight-agl-puts-energy-into-action-with

Coats UK https://customers.microsoft.com/en-us/story/coats

The Mosaic Company https://customers.microsoft.com/en-us/story/mosaicandcapgemini

Several new large Enterprise Case Studies and customer go lives will be released on this blog soon

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

SAP OS/DB Migration to SQL Server–FAQ v6.2 April 2017


The FAQ document attached to this blog is the cumulative knowledge gained from hundreds of customers that have moved off UNIX/Oracle and DB2 to Windows and SQL Server in recent years.

In this document a large number of recommendations, tools, tips and common errors are documented.  It is recommended to read this document before performing an OS/DB migration to Windows + SQL Server.

The latest version of the OS/DB Migration FAQ includes some updates including:

  1. Due to the considerable number of customers moving off AIX/DB2 to Windows and SQL Server running on Azure, a section on creating an R3load server for DB2 has been added
  2. Migration of BW systems to SQL Server to leverage the Column Store, Flat Cube and FEMS pushdown logic continues to be popular with several major customers replacing SAP BWA with SQL Server Column Store
  3. Post migration report RS_BW_POSTMIGRATION automatically converts F Fact and E Fact tables to Column Store when the recommended “All Column-Store” option is selected in SMIGR_CREATE_DDL
  4. Windows 2016 is now Generally Available for almost all SAP applications on all major DB platforms other than Oracle 12c
  5. Older Operating System and Database releases such as Windows 2012 (non-R2) and SQL 2012 are now no longer recommended for new projects
  6. Ensure sp_updatestats is included in the post processing steps after the import
  7. Many other recommendations around kernels, known bugs and Azure specific migration information

Recently a large Asian Airline moved from AIX/DB2 to Windows 2016 and SQL Server 2016 on Azure.  The migration was completed in two phases over two weekends and was 100% successful, with the customer realizing significant performance improvement running on D-Series VMs and Premium Storage.

Another large Energy Company in Australia moved from HPUX/Oracle to Windows 2012 R2 and SQL 2016 on Azure. More information on this customer can be found at Large Australian Energy Company Modernizes SAP Applications & Moves to Azure Public Cloud

Latest OS/DB Migration FAQ can be downloaded from oracle-to-sql-migration-faq-v6-2

Performance evolution of SAP BW on SQL Server


In SAP customer support, we still see several customers running old SAP BW code that cannot leverage the improvements we have delivered within the last years. In this blog, we want to demonstrate the huge performance improvements which can be achieved even without hardware replacements. Until 2011, the standard configuration of BW queries on SQL Server used only one database thread and was running against a BW cube with b-tree indexes. With the current improvements, you can easily speed up BW queries by a factor of 100!

Test Scenario

All tests were running with SQL Server 2016 on a former high-end server with 48 CPU threads constructed in the year 2008. This server does not even support modern vector operations (SIMD), which can be natively used by SQL Server 2016. We created 54 BW test queries with varying complexity and a varying number of FEMS filters. All queries were running against a BW cube with 100,000,000 rows. BW cube compression had been performed on 90% of all rows, resulting in 100 uncompressed BW requests. The queries had been created for our own, internal performance tests. They have not been modified or optimized for this blog. However, they might not be typical for your specific BW query mix.

Optimization levels

The BW queries were running against the following configurations:

  1. MAXDOP 1
    This was the default SAP BW configuration on Microsoft SQL Server until 2011: Standard BW cubes with rowstore (b-tree) indexes were used. All tables in SQL Server were PAGE compressed. BW queries were not using SQL Server intra-query parallelism.
  2. PAGE-compression (Rowstore)
    In this scenario, all SAP BW queries can use 8 CPU threads. Therefore, the SAP RSADMIN parameter MSS_MAXDOP_QUERY is set to 8.
  3. COLUMN-compression (Columnstore)
    Requires: SQL Server 2014 or higher, SAP BW 7.x

    For this scenario we change the index structure of SAP BW cubes. A clustered columnstore index is applied on the cubes using SAP report MSSCSTORE. We do not recommend using the restricted read-only columnstore of SQL Server 2012 anymore. An overview of SQL Server 2014 (and 2016) columnstore is attached in the following blog: https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/23/sql-server-2014-columnstore-released-for-sap-bw. Detailed requirements are documented in SAP Note 2116639 – SQL Server Columnstore documentation
  4. FLAT Cube
    Requires: Columnstore, SAP BW 7.40 (SP8) or higher

    The next optimization step is to apply a new table structure for the BW cube. Therefore, the cube is converted to a Columnstore Optimized Flat Cube (which does not need an e-fact table and the dimension tables any more). The Flat Cube is described in https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/27/columnstore-optimized-flat-cube-in-sap-bw.
  5. FEMS Pushdown
    Requires: Flat Cube, SAP BW 7.50 (SP4) or higher

    The last optimization uses a new SQL query structure, which implements the push down of FEMS filters from the OLAP engine to SQL Server. A brief overview of this feature can be found here: https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/06/bw-queries-by-factors-faster-using-fems-pushdown.

Measured results

The table below contains the runtime of the 54 BW queries in the different configurations. The time consumed in SQL Server is displayed in purple, the time spent in the SAP BW OLAP engine is displayed in blue. A significant OLAP runtime is only observed for queries with a couple of FEMS filters. The runtime is rounded to full seconds. It was measured by the SAP OLAP statistics in transaction ST03.

[Table: runtimes of the 54 BW queries per configuration]

Comparing optimization levels

The following table shows the performance impact of each optimization step individually. Some optimizations may even be counterproductive for a particular BW query. However, the mix of all optimizations almost always results in great BW query performance.
In this mix of 54 BW queries, the slowest query with FEMS optimization (21 seconds) was even faster than the fastest query without any optimization (27 seconds). The average performance improvement was a factor of 121!

[Table: performance impact of each optimization step]

Conclusion

The SAP BW code is permanently being updated to support new Microsoft SQL Server features like columnstore. Several BW improvements have been implemented to optimize SAP BW running on SQL Server. These optimizations have increased SAP BW query performance by two orders of magnitude within the last 6 years.
Therefore, customers should upgrade to SQL Server 2016 and apply the required SAP BW code soon.
