
Parallel Processing in SAP


New generations of CPUs do not provide significant single-thread performance improvements. Instead, the number of logical CPU cores increases with each new CPU generation. You can significantly reduce the runtime of a task by running sub-tasks in parallel on many CPU cores. However, splitting a task into sub-tasks consumes additional resources (CPU and memory). This overhead may or may not be significant, depending on the algorithm used.

Parallelism therefore optimizes one aspect of IT system performance: response time. If you want to optimize the throughput of an IT system, parallelism can be counterproductive on heavily loaded systems, because of the overhead for coordinating the sub-tasks.

Parallelism on the SAP application server

Many tasks in an ERP system are already fast enough, for example recording sales orders. It therefore makes no sense to parallelize such tasks (except for the parallelism of users you get by hiring more employees). In contrast to dialog processing, batch processing can benefit greatly from parallelism on the application server. A typical example is the Data Transfer Process (DTP) in an SAP BW system. Parallelism is used here by default, but you can further customize it (for all DTPs or separately per DTP):

  • You can define whether parallelism is used at all
  • You can define the type (DIA or BTC) and the number of work processes
  • You can define the package size, which is the number of rows processed within a sub-task. If the total number of rows of a DTP is smaller than the package size, then only one work process is used. In this case, you could reduce the package size to enable parallelism.

Historically, the default number of work processes used in a DTP is 3. Meanwhile, customers have application servers with far more CPU threads than years ago. Therefore, it makes sense to adjust the number of configured work processes for existing DTPs in an SAP BW system. This can be configured in the Batch Manager settings of SAP transaction RSA1 (at Administration->Housekeeping Tasks->Batch Manager->Mass Maintenance).

[Figure: Batch Manager settings for DTP parallel processing]

Parallelism in SQL Server

SQL Server can utilize many CPU threads at the same time for running a single SQL query. This is called intra-query parallelism. SQL Server creates the sub-tasks of such a query automatically. You do not have to define a package size, but you can control the maximum number of CPUs used by an operator.

Before executing a SQL query, SQL Server creates an execution plan. This plan can be either a serial or a parallel plan. An execution plan consists of several operators (iterators). Each of them can be either serial or parallel. An execution plan with at least one parallel operator is called a parallel execution plan. A parallel operator is displayed with two arrows within a yellow circle:

[Figure: execution plan with parallel operators]

The same execution plan without any parallelism looks like this:

[Figure: the same execution plan without parallelism]

The maximum allowed degree of parallelism (MAXDOP) is defined in the SQL Server configuration option “max degree of parallelism” (a value of 0 means MAXDOP is unlimited). You can override MAXDOP for a particular query using an optimizer hint, as sketched after the list below. SQL Server will not create a parallel plan in the following cases:

  • if the database server has only one logical CPU
  • if MAXDOP is 1
  • if the query optimizer does not see a benefit in using parallelism.
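
Both levers are easy to try out. The following T-SQL is a minimal sketch (the table name dbo.SalesItems is hypothetical): it sets the instance-wide default and then overrides it for a single query with a MAXDOP hint.

-- Set the instance-wide default for "max degree of parallelism".
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Override the default for one query with an optimizer hint.
SELECT COUNT(*)
FROM dbo.SalesItems
OPTION (MAXDOP 8);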

A parallel execution plan does not contain the number of logical CPUs used. The actual degree of parallelism (DOP) is decided at runtime of the query. It depends not only on MAXDOP, but also on the system resources available at query runtime. You can check the actual DOP of a query in the SQL Server DMV sys.dm_exec_query_stats. SQL Server intra-query parallelism typically decreases the runtime of a query, but it can result in varying execution times of the same query: You can configure MAXDOP, but the actual DOP may be different when running the same query again! Therefore, the actual runtime of a query is no longer predictable.
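
You can inspect the actual DOP directly. The following query is a minimal sketch; the DOP columns of sys.dm_exec_query_stats are available as of SQL Server 2016:

-- Show the actual degree of parallelism of recently executed queries.
SELECT TOP (20)
       qs.last_dop,
       qs.min_dop,
       qs.max_dop,
       SUBSTRING(st.text, 1, 100) AS statement_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.last_execution_time DESC;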

In SQL Server 2014, there is one important limitation: You can only use batch-mode operators if the actual DOP is higher than one (this limitation is gone in SQL Server 2016). Batch-mode operators are much faster than row-mode operators. Hence, we want to make sure that there are always sufficient resources for running the query in parallel (DOP 2 or higher). This can be achieved by restricting MAXDOP to a relatively small value, for example MAXDOP 4 on a machine with 32 CPU threads.

Configuring SQL Server parallelism in SAP ERP

The SAP installation changes the value of the SQL Server configuration option “max degree of parallelism” to 1. Therefore, no parallelism is used for running a single database query (of course, you can still run many database queries in parallel). For years, we did not recommend changing this configuration for an SAP ERP system. We wanted to avoid the overhead of intra-query parallelism and keep a predictable query runtime. However, customers meanwhile often have more logical CPUs available on the database server than concurrently running SQL queries. Not using intra-query parallelism would simply be a waste of CPU resources. Therefore, customers can increase “max degree of parallelism” for an SAP ERP system. See https://blogs.msdn.microsoft.com/saponsqlserver/2015/04/29/max-degree-of-parallelism-maxdop for details.

Configuring SQL Server parallelism in SAP BW

In contrast to SAP ERP, SQL Server has been using intra-query parallelism for SAP BW for years. The configuration of SQL Server parallelism is much more sophisticated in an SAP BW system than just configuring a global MAXDOP value. For SAP BW, the SQL Server configuration option “max degree of parallelism” should always be set to 1. Hence, normal Open SQL commands use MAXDOP 1. However, SAP BW queries use MAXDOP 4 by default using optimizer hints, and SQL Server index creation runs with MAXDOP 8 by default. The optimizer hints are controlled by SAP BW RSADMIN parameters.

The RSADMIN parameters can be changed in SAP report SAP_RSADMIN_MAINTAIN:

[Figure: report SAP_RSADMIN_MAINTAIN]

Unfortunately, we have seen a few cases with SAP BW queries where the SQL Server query optimizer decided to create a serial execution plan. However, SAP BW queries are always quite complex and therefore always benefit from intra-query parallelism. The latest version of SQL Server 2016 provides an optimizer hint that enforces the creation of parallel plans. We use this hint in the SAP BW statement generator. Therefore, all operators in an execution plan for SAP BW queries are parallel, if possible (be aware that execution plans with a spool table never contain any parallel operator):

[Figure: forced parallel execution plan for an SAP BW query]
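
The blog does not spell out the hint at this point; based on SAP Note 2395652 – SQL Query Hints USE HINT (referenced below), a plausible sketch of forcing a parallel plan looks like this, with hypothetical table and column names:

-- Bias the optimizer toward a parallel plan (SQL Server 2016 SP1 CU2 or newer).
SELECT DIMID, SUM(AMOUNT) AS total_amount
FROM dbo.FactSales
GROUP BY DIMID
OPTION (USE HINT('ENABLE_PARALLEL_PLAN_PREFERENCE'));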

To benefit from these forced parallel execution plans, you have to apply the newest SP and CU of SQL Server 2016 and the corresponding SAP correction instructions (see SAP Note 2395652 – SQL Query Hints USE HINT).

The created execution plans are slightly different from normal parallel execution plans, because all operators are parallel (see the NonClustered Index Seek in the picture above). However, we have not seen a single case in SAP BW where this slight difference caused an issue.

Parallelism used in DB pushdown

SAP NetWeaver uses a single-threaded, multi-process application server. Parallelism on the application server has to be explicitly coded. Furthermore, it is often not easy (or even impossible) to divide a task on the application server into sub-tasks of similar size. Using intra-query parallelism on the database server is much easier.

More and more functionality in SAP is being pushed down from the application server to the database server. The main idea behind this is to reduce the network load (between application server and DB server). However, such a DB pushdown has further advantages: you can benefit from intra-query parallelism without manually generating sub-tasks. An example is the FEMS-Pushdown, described in https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/06/bw-queries-by-factors-faster-using-fems-pushdown. To be clear: FEMS-Pushdown does not just use parallelism. A new algorithm improves performance beyond the usage of additional CPUs.

Conclusion

Parallelism is the key to reducing response time. However, it can result in reduced throughput, particularly in heavily loaded ERP systems. In SAP BW systems, you can increase intra-query parallelism by setting the RSADMIN parameter MSS_MAXDOP_QUERY. Parallelism on the SAP application server has to be adjusted manually, depending on the available CPU resources and the size of the processed data.


End to End Setup for SAP HANA Large Instances


So, you are ready for an “SAP HANA on Azure Large Instances” deployment. Great! And you want to know the step-by-step process, with screenshots, to start the work? Then you are reading the right article.

This blog illustrates the various steps required for the SAP HANA on Azure Large Instances (or in short HANA Large Instances) setup.

Here are the high-level steps:

  1. Set up the vNet
  2. Provide the details to Microsoft for provisioning the HANA Large Instances
  3. Connect your Azure vNet to HANA Large Instances
  4. Test the connectivity from the Azure VM to HANA Large Instances
  5. Install HANA on the HANA Large Instances server

Please download the full article End-to-End-Setup-of-SAP-HANA-Large-Instances to get the complete details.

Customer experience with SAP BW FEMS-Pushdown


A few months ago we released a new SAP BW statement generator, which increases BW query performance for complex queries containing FEMS filters; see https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/06/bw-queries-by-factors-faster-using-fems-pushdown. In the meanwhile, a few customers who tested the new feature provided feedback to the SAP on SQL Server development team. Based on this feedback, we further improved the performance of FEMS queries and released the optimized code in SAP Note 2483734 (see below). However, in some cases the query performance was still not optimal because of an unsuitable system configuration. The intention of this blog is to give guidance and best practices based on our customer experience.

Prerequisites

Customers typically do not want to apply and test new SAP code on the productive system. It is a good idea to use a virtual machine for the testing. However, for FEMS-Pushdown, you should keep in mind that you want to test performance, not simply functionality. Therefore, you should provide sufficient resources to the VM.

  • Hardware requirements
    For FEMS-Pushdown, we consider 16 CPU threads for SQL Server a minimum configuration. As a matter of course, SQL Server should also have access to sufficient RAM and a fast I/O system.
  • SQL Server version
    We strongly recommend SQL Server 2016 (SP1 CU2 or newer) when using FEMS-Pushdown. SAP BW can force a parallel execution plan on SQL Server 2016 using an optimizer hint. Furthermore, SQL Server 2016 always uses batch-mode processing for the columnstore. See the following blog for details: https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/18/parallel-processing-in-sap.
    Technically, FEMS-Pushdown also works with SQL Server 2014. In this case, you should set the SQL Server startup parameter -T8649 to force a parallel execution plan. However, SQL Server 2014 may use row-mode processing under high workload, which decreases BW query performance.
  • Required BW code
    The SAP code for columnstore and BW queries is permanently being updated. We regularly publish known documentation and code issues in SAP Note 2116639 – SQL Server Columnstore documentation. The scope of this SAP Note has been extended to FEMS-Pushdown. Therefore, it contains a link to the newest code improvements in SAP Note 2483734 (see below).
    FEMS-Pushdown requires the Columnstore Optimized Flat Cube. You can create a Semantically Partitioned Cube (SPO) as a Flat Cube, but you cannot convert an existing SPO to a Flat Cube yet. The conversion report is still under development by SAP.

Best Practices

When running a BW query with FEMS-Pushdown, you can run into the same issues as with conventional BW queries: lack of system resources, sub-optimal execution plans, and poorly designed BW queries. Therefore, you should follow these recommendations:

  • Update Statistics
    When loading data into a cube, you should update the database statistics and perform columnstore rowgroup compression within the BW process chain. This is described in https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/14/simplified-and-faster-sap-bw-process-chains. However, when using the Flat Cube, SQL Server execution plans are much more robust, even with outdated database statistics.
  • Force parallel execution plans
    After applying the newest SQL Server 2016 Cumulative Update and the newest SAP BW code, all SQL Server queries created by the SAP BW statement generator will have a parallel execution plan. See SAP Note 2395652 – SQL Query Hints USE HINT for details.
  • Avoid Semantically Partitioned Cubes (SPOs)
    This recommendation is not specific to FEMS-Pushdown; it applies to all cubes using the columnstore. Existing SPOs work fine with the columnstore. However, we do not encourage our customers to create additional SPOs:
    • A BW Multi-Provider is a logical cube consisting of many physical cubes. This concept is similar to a union view on many tables in SQL Server. There are often organizational (structure of data load) or business reasons for using Multi-Providers. Therefore, customers often use Multi-Providers (with or without the columnstore).
    • An SPO is a specific Multi-Provider where all part-providers have exactly the same structure. It logically “partitions” a cube by time (or another characteristic). The idea is to speed up BW query performance by dividing a cube into smaller chunks and running SQL queries on these chunks in parallel.
      However, when using the columnstore, one large cube results in better performance than many small ones. Selects on the columnstore use intra-query parallelism efficiently and can benefit from rowgroup elimination (similar to partition pruning). Archiving is also very fast on columnstore tables (however, archiving is not so important anymore, because columnstore data is stored in a very space-efficient way).

New improvements (with SAP Note 2483734)

Optimized BW code for FEMS-Pushdown has been released in SAP Note 2483734 – FEMS-Pushdown performance improvements for Microsoft SQL Server. The correction instructions of this SAP Note are available as of SAP BW 7.50 SP4. They are not available for SAP BW 7.51 or 7.52; on these SAP BW releases, you have to wait for the next SAP Support Package. The following improvements have been implemented:

  • Columnstore-Pushdown
    The idea of FEMS-Pushdown is to shift the evaluation of SAP BW query filters from the SAP application server to the database server. Therefore, a SQL command is created in the SQL statement generator for FEMS-Pushdown. The new version of this statement generator creates additional, redundant filters in the SQL Server statement. These filters can be evaluated directly in the columnstore clustered index scan (before running the first level of aggregation). Hereby, the BW filters are pushed down even further inside the SQL Server statement execution.
  • Intra-Query parallelism
    BW queries with FEMS-Pushdown benefit much more from additional CPU threads than other BW queries. Furthermore, increasing the maximum intra-query parallelism on SQL Server 2016 does not have the negative side effect seen on SQL Server 2014 (sporadic row-mode processing). With the new FEMS-Pushdown code, the maximum number of CPU threads for a FEMS query is calculated based on the complexity of the query. It can even exceed the value of the RSADMIN parameter MSS_MAXDOP_QUERY, but it will never be higher than the new parameter MSS_MAXDOP_FEMS. Hence, FEMS-Pushdown queries can use more SQL Server threads than normal BW queries. However, SQL Server can reduce the number of CPU threads actually used at query runtime if there is a resource bottleneck. Only for SQL Server 2014 do we recommend setting the RSADMIN parameter MSS_MAXDOP_FEMS; there is no need for this on SQL Server 2016 or newer.
  • BW hierarchy improvements
    We implemented some additional improvements, for example for BW hierarchy filters. Keep in mind that we did not use any of the improvements of the new FEMS statement generator when measuring the performance in https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server.

Analyzing FEMS Queries

FEMS-Pushdown cannot be used for all FEMS queries. For example, inventory queries cannot use FEMS-Pushdown yet. There are several tools, where you can check the FEMS-Pushdown usage in SAP BW:

  • SQL statement in RSRT
    You can easily verify that FEMS-Pushdown is actually used by looking at the SQL query in SAP transaction RSRT. The query contains a common table expression (CTE) starting with “WITH [T_FEMS] AS” (see also the sketch after this list).
  • Statistics Data in RSRT
    For a FEMS-Pushdown, the aggregate name <cube>$F is displayed in the Aggregation Layer tab of the RSRT statistics data.
  • Event IDs in Statistics Data
    The idea of FEMS-Pushdown is to reduce the runtime on the SAP application server. In particular, the runtime of OLAP event ID 3110 should be significantly reduced. However, a long runtime for event ID 3110 does not necessarily mean that FEMS-Pushdown was not used. When using BW Exceptional Aggregation, additional time is spent in event IDs 3110 and 3200.

  • ST03 Statistics
    The best way to monitor BW query runtime is the BI Workload Monitor in SAP transaction ST03. Here you can see the runtime of BW queries by day, cube, and query. Furthermore, you can see where the time was spent: “DB Time” is the time consumed by SQL Server, and “OLAP Time” is consumed by the SAP application server. You can reset the statistics (on your test system) by running report RSDDSTAT_DATA_DELETE. Take care: this permanently deletes the ST03 statistics, also for other SAP users.
    Be aware that the SQL statement statistics in SAP transaction DBACOCKPIT can be misleading, particularly for SAP BW queries. SAP BW always opens a database cursor for running a BW query. Processing in the BW OLAP engine is performed in packages between database fetches. SQL Server measures the runtime of a SQL query as the time between the OPEN and the last FETCH. Therefore, the SQL query runtime in DBACOCKPIT contains the processing time on the application server! In SAP transaction ST03 (or RSRT), however, the processing time on the application server is correctly not included in the “DB Time” (or “Data Manager” time).
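
As a quick cross-check on the database side, you can also search the SQL Server plan cache for the T_FEMS CTE. This is a hedged sketch, not part of the SAP tooling:

-- Find cached statements generated with FEMS-Pushdown.
-- The [[] pattern escapes the literal bracket in LIKE.
SELECT TOP (20) st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%WITH [[]T_FEMS] AS%';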

Conclusion

For best BW query performance, we recommend using SQL Server 2016 and the newest SAP BW code of SAP Note 2483734. SAP BW FEMS-Pushdown requires the Flat Cube. More and more customers actually start using the Flat Cube because of the FEMS-Pushdown. We got feedback from many customers that merely the Flat Cube (even without FEMS-Pushdown), running on modern hardware, results in performance similar to what they observed on their BW Accelerator. Using FEMS-Pushdown can reduce peaks in query runtime caused by the most complex BW queries.

Migrating to SAP HANA on Azure


The S/4HANA platform on the SAP HANA DBMS provides many functional improvements over SAP Business Suite 7, and additions to business processes that should give customers a compelling reason to migrate to SAP S/4HANA with SAP HANA as the underlying DBMS. Another reason to migrate to S/4HANA is that support for all SAP Business Suite 7 applications based on the ABAP stack will cease at the end of 2025, as detailed in SAP Note #1648480 – “Maintenance for SAP Business Suite 7 Software”. This SAP Note details support for all SAP Business Suite 7 applications and maintenance dates for the SAP Java stack versions.

Note: some SAP references linked in this article may require an SAP account.

HANA migration strategy

SAP HANA is sold by SAP as a high-performance in-memory database. You can run most of the existing SAP NetWeaver-based applications (for example, SAP ERP, or SAP BW), on SAP HANA. Functionally this is hardly different from running those NetWeaver applications on top of any other SAP supported DBMS (for example, SQL Server, Oracle, DB2). Please refer to the SAP Product Availability Matrix for details.

The next generation of SAP platforms, S/4HANA and BW/4HANA, are built specifically on HANA and take full advantage of the SAP HANA DBMS. For customers who want to migrate to S/4HANA, there are two main strategies:

  • Migrate directly to S/4HANA.
  • First perform a technical migration of the existing SAP Business Suite application to the SAP HANA DBMS, and then move to S/4HANA as a second step.

In discussions about migrating to HANA, it is important to determine which strategy to follow. The impact and benefit of each option is quite different from the perspective of SAP, the customer, and Azure. The initial step of the second option is only a technical migration with very limited benefit from a business process point of view, whereas the migration to S/4HANA (either directly or as a second step) involves a functional migration. A functional migration has more impact on the business and business processes, and as such takes more effort. SAP S/4HANA usually comes with significant changes to the mapping of business processes. Therefore, most S/4HANA projects we are pursuing with our global system integrators require rethinking the complete mapping of business processes into different SAP and LOB products, and the usage of SaaS services.

HANA + cloud

Besides initiating a rearchitecting of the business process mapping and integration based on S/4HANA, looking at S/4HANA and/or the SAP HANA DBMS prompts discussions about moving SAP workloads into public clouds like Microsoft Azure. Leveraging Azure usually minimizes migration cost and increases the flexibility of the SAP environment. The fact that SAP HANA needs to keep most data in memory usually increases costs for the server infrastructure compared to the server hardware customers have been using.

Azure is an ideal public cloud to host SAP HANA-based workloads with a great TCO. Azure provides the flexibility to engage and disengage resources, which reduces costs. For example, in a multitier environment like SAP, you could increase and decrease the number of SAP application instances in an SAP system based on workload. And with the latest announcements, Microsoft Azure offers the largest server SKUs available in the public cloud tailored to SAP HANA.

Current Azure SAP HANA capabilities

The diagram below shows the Azure certifications that run SAP HANA.

HANA large instances provide a bare-metal solution to run large HANA workloads. A HANA environment can currently be scaled up to 32 TB using multiple units of HANA large instances, with the potential to move up to 60 TB as soon as the newly announced SKUs are available. HANA large instances can be purchased with a 1-year or 3-year commitment, depending on large instance size. With a 3-year commitment, customers get a significant discount, providing high performance at a very competitive price. Because HANA large instances are a bare-metal solution, the ordering process differs from ordering/deploying an Azure Virtual Machine (VM): you can create a VM in the Azure Portal and have it available in minutes, whereas once you order a HANA large instance unit, it can take up to several days before you can use it. To learn about HANA large instances, please check out the SAP HANA (large instances) overview and architecture on Azure documentation.

To order HANA large instances, fill out the SAP HANA on Azure information request.

The above diagram shows that not all Azure SKUs are certified to run all SAP workloads. Only larger VM SKUs are certified to run HANA workloads. For dev/test workloads you can use smaller VM sizes such as DS13v2 and DS14v2. For the highest memory demands, customers seeking to migrate their existing SAP landscape to HANA on Azure will need to use HANA large instances.

The new Azure VM sizes were announced in Jason Zander’s blog post. The certification for those, as well as for some existing VM sizes, is on the Azure roadmap. These new VM sizes will allow more flexibility for customers moving their SAP HANA, S/4HANA and BW/4HANA instances to Azure. You can check for the latest certification information on the SAP HANA on Azure page.

Combining multiple databases on one large instance

Azure is a very good platform for running SAP and SAP HANA systems. Using Azure, customers can save costs compared to an on-premises or hosted solution, while having more flexibility and robust disaster recovery. We’ve already discussed the benefits for large HANA databases, but what if you have smaller HANA databases?

Smaller HANA databases, common to small and midsize customers or departmental systems, can be combined on a single instance, taking advantage of the power and cost reductions that large instances provide. SAP HANA provides two options:

  • MCOS – Multiple components in one system
  • MDC – Multitenant database containers

The differences are detailed in the Multitenancy article on the SAP website. Please refer to SAP notes #1681092, #1826100, and #2096000 for more details on these multitenant options.

MCOS could be used with single customers. SAP hosting partners could use MDC to share HANA large instances between multiple customers.

Customers that want to run SAP Business Suite (OLTP) on SAP HANA can host the SAP HANA part on HANA large instances. The SAP application layer would be hosted on native Azure VMs and benefit from the flexibility they provide. Once M-series VMs are available, the SAP HANA part can be hosted in a VM for even more flexibility.

Momentum of SAP workload moving to Azure

Azure is enjoying great popularity with customers from various industries using it to run their SAP workloads. Although Azure is an ideal platform for SAP HANA, the majority of customers will still start by moving their SAP NetWeaver systems to Azure. This isn’t restricted to lift & shift scenarios running Oracle, SQL Server, DB2, or SAP ASE. Some customers move from proprietary on-premises UNIX-based systems to Windows/SQL Server, Windows/Oracle, or Linux/DB2-driven SAP systems hosted in Azure.

Many system integrators we are working with observe that the number of these customers is increasing. The strategy of most customers is to skip the migration of SAP Business Suite 7 applications to SAP HANA, and instead fully focus on the long term move to S/4HANA. This strategy can be summarized in the following steps:

  1. Short term: focus on cost savings by moving the SAP landscape to industry standard OS platforms on Azure.
  2. Short to mid-term: test and develop S/4HANA implementations in Azure, leveraging the flexibility of Azure to create (and destroy) proof of concept and development environments quickly without hardware procurement.
  3. Mid to long-term: deploy production S/4HANA based SAP applications in Azure.

Highly Available ASCS for Windows on File Share – Shared Disk No Longer Required


SAP Highly Available ASCS Now Supports File Share UNC Source

SAP has released documentation and a new Windows Cluster DLL that enable the SAP ASCS to use an SMB UNC source as opposed to a cluster shared disk.

The solution has been tested and documented by SAP for usage in non-productive systems and can be used in Azure Cloud. This feature is for SAP NetWeaver components 7.40 and higher.

The Azure cloud platform fully supports cluster solutions such as Windows Cluster and SUSE cluster, in contrast to some other cloud providers.

1. Requirements for SAP Highly Available ASCS on File Share

The requirements for the SAP ASCS on File Share are listed below:

1. SAP Kernel Update: The latest 7.49 kernel [for NetWeaver 7.40 or higher] is required.

2. The SAP profile parameter service/check_ha_node=1 must be set.

3. The Windows cluster DLL must be updated – see SAP Note 1596496 – How to update SAP Resource Type DLLs for Cluster Resource Monitor.

4. The SAP landscape must have an SMB server to provide the file share \\<SAPGLOBALHOST>\sapmnt.

Before deploying this solution, review the documentation:

SAP ASCS on File Share Installation Document

2. SMB Server Options for the File Share

There are many options for providing a highly available SMB 3.x compatible share.

These options are documented in this blog here: How to create a high available SAPMNT share?

It is not supported to use the Azure Files Service as Azure Files does not support NTFS ACLs yet.

WARNING: It is not supported to change the share name from \\<SAPGLOBALHOST>\sapmnt to \\<SAPGLOBALHOST>\sapmnt_<SID>.

Every SAP SID must have its own unique SAPGLOBALHOST.

For example: \\<SAPGLOBALHOST_<SID>>\sapmnt, such as:

\\sapsmb3_PRD\sapmnt

\\sapsmb3_BWP\sapmnt

\\sapsmb3_SOL\sapmnt

Review this note: 2492395 – Can the share name sapmnt be changed?

Also review:

2287140 – Support of Failover Cluster Continuous Availability feature (CA)

2506805 – Transport Directory DIR_TRANS

The SMB server used for the SAPMNT share can also be used for interface files and the DIR_TRANS.

3. Integration with Azure Site Recovery

The SAP ASCS on File Share works in combination with Azure Site Recovery.

Azure Site Recovery is tested and supported with SAP applications.

This blog discusses SAP applications on Azure Site Recovery.

A full whitepaper on protecting SAP applications with Azure Site Recovery is available here:

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-workload#protect-sap

4. Frequently Asked Questions

Q1. Where to find the Documentation for SAP ASCS on File Share?

A1. SAP ASCS on File Share Installation Document

Q2. Is the SAP ASCS on File Share fully integrated with SWPM (SAPInst)?

A2. The initial installation is done via SAPInst, and then some manual steps are required.

Q3. What is the recommended SMB server technology?

A3. It is recommended to evaluate all the options; there are pluses and minuses for each. DFS-R is a mature technology that supports NFS (for Linux systems) and DR scenarios. DFS is site-aware – review the documentation about the INSITE option.

Q4. Windows 2016 includes a Scale Out File Server (SOFS). Can this be used as the SMB source?

A4. Yes, SOFS is a good option. Note that SOFS is not site-aware. This means that if a DR solution is set up with SOFS servers in remote locations (possibly over slow WAN links), the SMB client [the SAP app server] cannot determine which server is local. This may result in unpredictable performance and is not recommended. If all the SOFS nodes are in the same site, SOFS is suitable.

Q5. Does the SAP ASCS work with Azure Site Recovery?

A5. Yes, the SAP ASCS on File Share works well with Azure Site Recovery

Q6. Is the SAP ASCS on File Share supported on old kernels?

A6. No, kernel 7.49 [NetWeaver 7.40 or higher] is required. Do not run old, unsupported kernels.

Kernels 7.22 or lower cannot be used with the ASCS file share, as they do not understand the parameter service/check_ha_node=1.

Q7. Is the SAP ASCS on File Share supported on Cloud platforms?

A7. The SAP ASCS on File Share works on Azure. Windows cluster solutions do not work correctly on other cloud platforms that do not support the dynamic assignment, change, and start of an IP address.

Q8. Is the SAP ASCS Enqueue Replication Server supported?

A8. Yes, use the SWPM tool to add the ERS onto the cluster.

Protecting SAP Solutions with Azure Site Recovery


Protect SAP Applications

Most large and medium-sized SAP solutions have some form of Disaster Recovery solution. The importance of robust and testable Disaster Recovery solutions has increased as more core business processes are moved to applications such as SAP. Azure Site Recovery has been tested and integrated with SAP applications; it exceeds the capabilities of most on-premises Disaster Recovery solutions, and does so at a lower TCO than competing solutions.

A new whitepaper has been written to guide SAP customers through the deployment of Azure Site Recovery for SAP solutions.

Start by reviewing the documentation Protect a multi-tier SAP NetWeaver application deployment using Azure Site Recovery

Benefits of Azure Site Recovery for SAP Customers:

  1. Azure Site Recovery substantially lowers the cost of DR solutions. Site Recovery does not start Azure VMs until an actual or test failover, so compute charges are not normally incurred. Only the storage cost is charged while a VM is in replication mode.
  2. Azure Site Recovery allows customers to perform non-disruptive DR Tests at any time without the need to roll back the DR solution after the test. Site Recovery Test Failovers mimic actual failover conditions and can be isolated to a separate test network. Test failovers can also be run for as long as required.
  3. The resiliency and redundancy built into Azure far exceeds what most customers and hosting providers are able to provide in their own datacenters.
  4. Site Recovery “Recovery Plans” allow customers to orchestrate sequenced DR failover / failback procedures or runbooks, giving you the ability to achieve true Application level DR.
  5. Azure Site Recovery is a heterogeneous solution and works with Windows and Linux VMs, supports VMware and Hyper-V and works well with a range of database solutions.
  6. Azure Site Recovery has been tested with many SAP NetWeaver and non-NetWeaver applications.

Supported Scenarios

The following scenarios are supported:

  • SAP systems running in one Azure datacenter replicating to another Azure datacenter (Azure-to-Azure DR), as architected here.
  • SAP systems running on VMware (or physical) servers on-premises replicating to a DR site in an Azure datacenter (VMware-to-Azure DR), which requires some additional components as architected here.
  • SAP systems running on Hyper-V on-premises replicating to a DR site in an Azure datacenter (Hyper-V-to-Azure DR), which requires some additional components as architected here.

More support information: https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-support-matrix-azure-to-azure

SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types

Prerequisites

Before you start, make sure you understand the following:

  1. Replicating a virtual machine to Azure
  2. How to design a recovery network
  3. Doing a test failover to Azure
  4. Doing a failover to Azure
  5. How to replicate a domain controller
  6. How to replicate SQL Server

SAP 3-Tier vs. SAP 2-Tier Systems

3-Tier SAP Systems are recommended for Azure Site Recovery with the following considerations:

  1. Strictly 3-tier systems with no critical SAP software installed on the DBMS server
  2. Replication of the DBMS layer by the native DBMS replication tool (such as SQL Server AlwaysOn).
  3. SAP Application Server layer is replicated by Azure Site Recovery.
  4. ASCS layer can be replicated by Azure Site Recovery in most scenarios.
  5. Non-NetWeaver and non-SAP applications need to be assessed on a case by case basis to determine if they are suitable for replication by Azure Site Recovery or some other mechanism.
  6. Only Azure Resource Manager is supported for SAP systems using Site Recovery for DR purposes.

* Note: SAP Host Monitoring agents are not considered critical and may be installed on a 3-tier DBMS server.

In the diagram below the Azure Site Recovery Azure-to-Azure (ASR A2A) scenario is depicted:

  • The Primary Datacenter is in Singapore (Azure South-East Asia) and the DR datacenter is in Hong Kong (Azure East Asia). In this scenario, local High Availability is provided by having two VMs running SQL Server AlwaysOn in Synchronous mode in Singapore.
  • The File Share ASCS is used (this does not require a cluster shared disk solution)
  • DR protection for the DBMS layer is achieved using Asynchronous replication
  • This scenario shows “symmetrical DR” – a term used to describe a DR solution that is an exact replica of production; therefore, the DR SQL Server solution has local High Availability. The use of symmetrical DR is not mandatory, and many customers leverage the flexibility of cloud deployments to build a local High Availability node quickly after a DR event.
  • Customers may also reduce the size of the VM type used in the DR datacenter and increase the VM size after a DR event
  • The diagram shows that the SAP NetWeaver ASCS and Application server layer is replicated to DR via Azure Site Recovery tools

Note: SAP now supports deploying the ASCS without the requirement to have a shared disk (called SAP ASCS File Share Cluster). Azure Site Recovery also supports SIOS Shared Cluster Disks

SAP Notes

Following is a list of useful SAP Notes for various requirements:

License key related

94998 – Requesting license keys and deleting systems

607141 – License key request for SAP J2EE Engine

870871 – License key installation

1288121 – How to download temporary license keys for Analytics Solutions from SAP (BusinessObjects)

1644792 – License key/installation of SAP HANA

2035875 – Windows on Microsoft Azure: Adaption of your SAP License

2036825 – How to Get an Emergency Key from SAP

2413104 – How to get a license key after the hardware exchange

Supported scenarios

1380654 – SAP support in public cloud environments

1928533 – SAP Applications on Azure: Supported Products and Azure VM types

2015553 – SAP on Microsoft Azure: Support prerequisites

2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions

Setup and installation

1634991 – How to install (A)SCS instance on more than 2 cluster nodes

2056228 – How to set up disaster recovery for SAP BusinessObjects

Troubleshooting

1999351 – Troubleshooting Enhanced Azure Monitoring for SAP

Microsoft Links & KB Articles

https://blogs.msdn.microsoft.com/saponsqlserver/

https://azure.microsoft.com/en-us/blog/tag/azure-site-recovery/

New SQL Server 2016 functionality helps SAP supportability


Due to the combined effort of the SAP–Microsoft porting group, the SQL Server development team added new functionality to the SQL Server UPDATE STATISTICS command and to the way SQL Server automatically updates statistics.

This new functionality enables SAP customers on SQL Server to persist the sample rate of manual and automatic statistics updates.
In some cases, the default sample rate of the manual or automatic UPDATE STATISTICS command is too small to reflect the real distribution of data within the table. This is especially true for very large tables with a low or very low selectivity on the column in question. One should know that the sample rate used for the automatic update statistics depends on the total number of rows and decreases as the table grows; in other words, bigger tables have smaller sample rates. With this new addition, we can force a sample rate for specific columns that is then used by later manual updates (those without an explicit sample rate) and automatic updates.

The new addition to the UPDATE STATISTICS command is a new option in the WITH clause with the syntax:

PERSIST_SAMPLE_PERCENT = { ON | OFF }

It is officially documented in Books Online, and Pedro Lopes from the Microsoft Tiger Team blogged about it in more detail here. It is shipped with SQL Server 2016 SP1 CU4 (13.0.4446.0), so you need at least this update if you want to use this feature.
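
A minimal usage sketch, with a hypothetical table and statistics name:

-- Update statistics once with a fixed 10 percent sample and persist that
-- sample rate for later manual (no SAMPLE clause) and automatic updates.
UPDATE STATISTICS [dbo].[ERPTABLE] ([ERPTABLE~Z01])
WITH SAMPLE 10 PERCENT, PERSIST_SAMPLE_PERCENT = ON;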

Please handle this new option with care and use it only when you have strong evidence that the default sample rate is too small. Wrong usage (e.g. a very high sample rate for many columns on busy tables) can increase the system load tremendously, up to a system standstill.

Few interesting findings on HANA with MDC



I was working on HANA with MDC and had a few very interesting learnings.

If you have been using non-MDC HANA databases, you may come across these scenarios. This article summarizes the issues and their resolutions.

Setup Configuration

I had the following setup for SAP HANA during my testing.

  • HANA Version Installed: HANA 2 SPS1
  • Operating System: SUSE Linux Enterprise Server 12 SP2
  • Hardware: HANA Large Instances in Azure

Along with the above setup, I had the following SAP application layer installed:

  • SAP Application: SAP NetWeaver 7.4
  • Database Name: H11
  • Instance Number: 00

The following common scenarios were tested, and the article describes the fixes for those issues:

  • Scenario 1: Unable to take backup from HANA Studio
  • Scenario 2: Unable to confirm snapshot from HANA Studio
  • Scenario 3: Unable to view Index server process

Please download the complete article Few-interesting-findings-on-HANA-with-MDC for more information.


Distributed Availability Groups extended


SQL Server 2016 supports distributed availability groups (AGs) as an additional HA feature of the engine. Today we want to show how this feature can be used to transfer near-real-time productive data around the globe.

Based on Jürgen’s blog about SQL Server 2016 Distributed Availability Groups, we set up a Primary AG with the SAP database (e.g. PRD) as the only database. Furthermore, we set up two additional servers with SQL Server 2016, each as a single-node cluster (just for demonstration purposes; a two-node cluster would work as well). One cluster will act as a distribution server that takes the data from the productive system and distributes it to the global locations; the other one-node cluster is the target. This picture illustrates how the complete setup will look:

The availability mode is synchronous in the Primary AG and between the Primary and the Distribution AG. From there, the data is sent to the far-away location (Target AG) in asynchronous mode. The distribution system is located either in the same data center as the primary AG or in a very close one, to be able to use the synchronous mode. With this setup we get a synchronous-like replication to the target: even if the complete primary system goes up in flames, the distribution system will still be able to sync the data to the Target AG.

We have three separate AGs (Primary, Distribution, Target) which are connected with two distributed AGs: the first over the Primary and the Distribution AG, and the second over the Distribution and the Target AG. With this kind of setup, one can replicate multiple systems over one distributor to the target AG, as this picture shows:

How do we set it up? As a prerequisite, we have the PRD database from the PRIMARY server restored on all other servers (SECONDARY, DISTRIBUTION, TARGET) in recovery mode, so that we can easily set up the AGs. As SQL Server Management Studio 17 does not support distributed AGs in all details, we have to set them up with a script. On a high level, the script executes these steps (a sketch of the core statement follows the list):

  • connect to the PRIMARY server, create and configure an AlwaysOn endpoint (AO_Endpoint)
  • connect to the SECONDARY server, create and configure an AlwaysOn endpoint (AO_Endpoint)
  • connect to the DISTRIBUTION server, create and configure an AlwaysOn endpoint (AO_Endpoint)
  • connect to the TARGET server, create and configure an AlwaysOn endpoint (AO_Endpoint)

  • connect to the DISTRIBUTION server, create and configure a one-node AG (AG_PRD_DISTRIBUTOR) with a TCP/IP listener
  • connect to the TARGET server, create and configure a one-node AG (AG_PRD_TARGET) with a TCP/IP listener
  • connect to the PRIMARY server, create and configure a two-node AG (AG_PRD_MAIN) with a TCP/IP listener

  • still on the PRIMARY create a distributed AG (DAG_PRD_MAIN) over the main AG (AG_PRD_MAIN) and the AG of the DISTRIBUTION server (AG_PRD_DISTRIBUTOR)
  • connect to the DISTRIBUTION server and join the distributed AG (DAG_PRD_MAIN) from the PRIMARY
  • then create a distributed AG (DAG_DISTRIBUTOR_TARGET) over the AG of the TARGET server (AG_PRD_TARGET) and the AG of the DISTRIBUTION server (AG_PRD_DISTRIBUTOR)
  • connect to the TARGET server and join the distributed AG (DAG_DISTRIBUTOR_TARGET) with the DISTRIBUTION server
  • change the status of the DB to readable on the target
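
As a hedged sketch of the core step (the AG names follow the setup above; the listener URLs are hypothetical), the first distributed AG is created on the PRIMARY like this:

-- Create the distributed AG spanning the primary AG and the distribution AG.
CREATE AVAILABILITY GROUP [DAG_PRD_MAIN]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG_PRD_MAIN' WITH (
         LISTENER_URL = 'tcp://lsn-prd-main:5022',
         AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC ),
      'AG_PRD_DISTRIBUTOR' WITH (
         LISTENER_URL = 'tcp://lsn-prd-distributor:5022',
         AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC );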

You will find the full script at the end as an attachment.

To test the scenario, we started the SGEN transaction on the underlying PRD SAP system that is connected to the Primary AG. We used just one application server with 20 dialog work processes to generate all the report loads of the system. SGEN used up to 15 work processes at the same time:

To measure the throughput flowing through the DAGs, we used the Windows Performance Monitor tool on one of the systems. One can see that the primary is sending an average of 1.8 MB/sec to the Secondary (red block). The first DAG to the distributor (green) and the data flow to the target (blue) show nearly the same value, so there is no throughput penalty from using distributed AGs in a system.

If you want to measure the overall travel time of a change, you can set up an easy test. The prerequisite for this is that the target database is set up as a readable secondary, so that we can connect to the DB. On the target system we start SQL Server Management Studio, open a new query, and run the following script:


USE PRD
WHILE NOT EXISTS(SELECT name FROM sys.objects WHERE name = N'DAGTEST')
    WAITFOR DELAY N'00:00:01'

PRINT SYSDATETIME()


This script checks if a table named DAGTEST exists in the PRD database and waits a second if it cannot be found. Once the table is found, the script prints out the current time. On the primary AG we now open a query that creates the table:


CREATE TABLE DAGTEST (f1 INT)
PRINT SYSDATETIME()
-- Later we can drop the table again
-- DROP TABLE DAGTEST


This script just creates the table DAGTEST and then prints out the current time. Once the table is created, the changes are transferred over the distributor to the target. On the target, the still-running script detects the freshly created table and prints out the current time as well. By comparing the times from the primary script and the target script, we can determine the overall travel time of changes through the system.

Full Script to setup the DAGs:
DAGScript

File Server with SOFS and S2D as an Alternative to Cluster Shared Disk for Clustering of an SAP (A)SCS Instance in Azure is Generally Available


We are excited to announce the general availability of clustering an SAP (A)SCS instance on a Windows Server Failover Cluster (WSFC) with a file server, i.e. Scale-Out File Server (SOFS) and Storage Spaces Direct (S2D), in the Azure cloud. This is an alternative to the existing option of clustering an SAP (A)SCS instance with cluster shared disks.

Many SAP customers who run their SAP system on Azure configure their SAP systems as high availability (HA) systems using Microsoft Windows Server Failover Cluster. They cluster the SAP single points of failure (SPOFs), that is, the DBMS and the SAP (A)SCS instance.

When you cluster an SAP (A)SCS instance, a typical setup is to use cluster shared disks.

Whoever has worked with Windows Failover Cluster and cluster shared disks knows that cluster shared disks are often hardware-based solutions. As such, they cannot be used in every environment – Azure is one such example.

Microsoft did not in the past offer its own solution for cluster shared disks. Typically, SAP customers on Azure would use one of the third-party solutions for a cluster shared disk for the HA of an SAP (A)SCS instance.

One of the top requests from SAP customers running their SAP HA systems in Azure was an alternative Microsoft solution for cluster shared disks.

SAP developed a new HA architecture for clustering the SAP (A)SCS instance using a file share, as an additional option to cluster shared disks. SAP also developed a new SAP cluster resource DLL which is file share aware. For more information, check this blog: New SAP cluster resource DLL is available!

Clustering of SAP (A)SCS instances with a file share is supported for SAP NetWeaver 7.40 (and higher) products, with SAP Kernel 7.49 (and higher).

Existing SAP HA Architecture with Cluster Shared Disk

When you install an SAP (A)SCS instance on a cluster shared disk, you install not only the SAP (A)SCS<Nr> instance, but also the SAP GLOBAL HOST folder, e.g. the SYS folder. The virtual cluster host network name of the (A)SCS instance (e.g. of the message and enqueue server processes) is at the same time the SAP GLOBAL HOST name.


Figure 1: Existing SAP (A)SCS HA architecture with cluster shared disks

New SAP HA Architecture With File Share

With the new SAP (A)SCS HA architecture, the most important changes are the following:

  • The SAP (A)SCS instance (message and enqueue server processes) is separated from the SAP GLOBAL HOST SYS folder
  • SAP central services run under an SAP (A)SCS instance
  • The SAP (A)SCS instance is clustered and is accessible using the virtual host name <(A)SCSVirtualHostName>
  • The clustered SAP (A)SCS<InstNr> instance is installed on local disks on both nodes of the SAP (A)SCS cluster – therefore, we do not need a shared disk
  • SAP GLOBAL files are placed on an SMB file share and are accessed using the host name \\<SAPGLOBALHost>\sapmnt\<SID>\SYS
  • The <(A)SCSVirtualHostName> network name is different from the <SAPGLOBALHost> name

 

Figure 2: New SAP (A)SCS HA architecture with SMB file share

If we installed the file server on a standalone Windows machine, we would create a single point of failure. Therefore, high availability of the file share server is also an important part of the overall SAP system HA story.

To achieve high availability of a file share:

  • You must ensure that planned or unplanned downtime of the Windows servers/VMs does not cause downtime of the file share
  • The disks used to store files must not be a single point of failure

With Windows Server 2016, Microsoft offers two features which fulfill these requirements:

  • Scale-Out File Server (SOFS)
  • Storage Spaces Direct (S2D)

Scale Out File Server as Microsoft File Share HA Solution

Microsoft recommends the Scale-Out File Server (SOFS) solution for enabling HA file shares. In the SAP case, the SAPMNT file share is protected with the HA SOFS solution.


Figure 3: SAP (A)SCS instance and SOFS deployed in TWO clusters

 

As the name implies, this solution is “scale-out”, i.e. access to the file share is parallelized. Different clients (in our case, SAP application servers and an SAP (A)SCS instance) access the share through all cluster nodes. This is a big advantage in comparison to a Generic File Share, another HA file share feature of Windows Cluster, where access to the file share runs through one active node.

Storage Spaces Direct (S2D) as Cluster Shared Storage HA Solution

SOFS stores files on cluster shared disks, e.g. on cluster shared volumes (CSV). SOFS supports different shared storage technologies.

For running SOFS on Azure, two criteria are important for cluster shared disks:

  • Support of cluster shared disks for SOFS on Azure environment
  • High availability and resiliency of cluster shared storage

The Storage Spaces Direct (S2D) feature that comes with Windows Server 2016 fulfills both of these criteria.

S2D enables us to stripe local disks and create storage pools across different cluster nodes. Inside those pools, we can create volumes which are presented to the cluster as shared storage, e.g. as cluster shared volumes.

S2D replicates disk content synchronously and offers different resiliency options, so losing some disks will NOT bring the whole shared storage down.


Figure 4: SOFS file share used to protect SAP GLOBAL Host files

 

The nice thing about S2D is that it is a software-based shared storage solution that works transparently in the Azure cloud, as well as in on-premises physical or virtual environments.

End-to-End Architecture

The complete end-to-end architecture of SAP NetWeaver HA with a file share looks like this:


Figure 5: End-to-End SAP NetWeaver HA Architecture with SOFS File Share

 

Multi-SID Support

SAP (A)SCS Multi-SID support enables us to install and consolidate multiple SAP (A)SCS instances in one cluster. Through consolidation, your overall Azure infrastructure costs are reduced.

SAP (A)SCS Multi-SID clustering is also supported with a file share.

To enable a file share for the second SAP <SID2> GLOBAL HOST on the SAME SOFS cluster, you can use the same existing SAP <SID1> <SAPGLOBALHOST> network name and the same Volume1.


Figure 6: SAP Multi-SID configuration in two clusters

 


Figure 7: Multi-SID SOFS using same SAP GLOBAL host name

Another option is to use a new <SAPGLOBALHOST2> network name and a new Volume2 for the second <SID2> file share.


Figure 8: Multi-SID SOFS with a different SAP GLOBAL host name 2

 

Available Documentation

For more information, have a look at the new documentation and white papers on Microsoft SAP on Azure site:

From SAP side, you can check this new white paper: Installation of an (A)SCS Instance on a Failover Cluster with no Shared Disks.

You can find more information on SOFS and S2D here:

 

 

SAP NetWeaver Installation on HANA database


This blog describes the SAP NetWeaver installation steps on the SAP HANA database – a step-by-step installation guide with real screenshots!

In this setup, the HANA Large Instance server is used to install the SAP HANA database, and the SAP NetWeaver application layer runs on an Azure VM. This is a hybrid-mode installation where the SAP application is installed on the Windows operating system in Azure, and the HANA database is installed on HANA Large Instances on the Linux operating system.

Please download a PDF version for complete details: SAP-NW-on-HANA-Installation-V1

Customer Experience with Columnstore on ERP


SAP released the report MSS_CS_CREATE a few months ago. Using this report, customers can create an additional Nonclustered Columnstore Index (NCCI) on any SAP ERP table. This has already been described here: https://blogs.msdn.microsoft.com/saponsqlserver/2017/04/13/using-columnstore-on-erp-tables.

In the meanwhile, several customers have tested this feature. They reported performance improvements for reporting scenarios using huge aggregations (see below). Other customers had feature requests for the report MSS_CS_CREATE. A new version of this report is now available in SAP Note 2419662 – Implementing Columnstore Indexes for ERP tables. You have to re-apply the correction instructions of this SAP Note to get the code update.

Performance Improvements in SAP CO-PA

One of our customers is using the NCCI for SAP CO-PA. A huge performance improvement was achieved simply by increasing SQL Server intra-query parallelism. For additional information regarding parallelism in SAP, see https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/18/parallel-processing-in-sap. You could increase the SQL Server configuration option “max degree of parallelism”, but this has an impact on all SAP queries (not only on CO-PA). Therefore, the customer decided to use a SQL Server optimizer hint in the ABAP code. Just using this hint resulted in a performance improvement of factor 10. Adding an NCCI on the largest CE1, CE2, and CE4 tables further improved the performance to an overall acceleration of factor 77 (from 771 to 10 seconds).

Using ABAP Optimizer Hints for forcing an index

Having rowstore and columnstore indexes on the same table at the same time can become a challenge for the SQL Server query optimizer. Therefore, you might have to add an ABAP optimizer hint. For example, to enforce the ABAP index IN1 (the name of the index in the SAP DDIC) on table ERPTEST, you have to add the following hint:
%_HINTS MSSQLNT 'TABLE ERPTEST abindex(IN1)'.

Take care that the table name and index name are in UPPER case. If the SELECT involves a single table only (no JOIN), there is no need to explicitly use the table name. In this case, you can use &TABLE& instead:
%_HINTS MSSQLNT 'TABLE &TABLE& abindex(IN1)'.

You can use several optimizer hints within a single SELECT, for example:
SELECT MAX( msgnr ) sprsl
  FROM t100 INTO l_t_result
  GROUP BY sprsl
  %_HINTS MSSQLNT 'OPTION maxdop 8'
          MSSQLNT 'OPTION hash group'.

You can also combine optimizer hints in a single line:
%_HINTS MSSQLNT 'OPTION maxdop 8 OPTION hash group'.

When using an optimizer hint for SQL Server intra-query parallelism, you should not hard code the degree of parallelism. Instead, you can use a variable. (The same is done in SAP BW with the RSADMIN parameter MSS_MAXDOP_QUERY). The ABAP code could look like this:
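A minimal sketch of this approach follows. The variable names and the source of the MAXDOP value are assumptions; the pattern of passing an ABAP variable after %_HINTS MSSQLNT is the one described above:

TYPES: BEGIN OF ty_result,
         msgnr TYPE t100-msgnr,
         sprsl TYPE t100-sprsl,
       END OF ty_result.
DATA: l_t_result TYPE STANDARD TABLE OF ty_result,
      lv_dop     TYPE string,
      lv_hint    TYPE string.

* Hypothetical: determine the degree of parallelism at runtime,
* e.g. from a customizing table (similar to the RSADMIN parameter
* MSS_MAXDOP_QUERY in SAP BW), instead of hard coding it.
lv_dop = '8'.
CONCATENATE 'OPTION maxdop' lv_dop INTO lv_hint SEPARATED BY space.

SELECT MAX( msgnr ) sprsl
  FROM t100
  INTO TABLE l_t_result
  GROUP BY sprsl
  %_HINTS MSSQLNT lv_hint.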

Used Optimizer Hints in SAP CO-PA

Our customer added a few SQL Server optimizer hints in the ABAP code of the CO-PA templates. It is a good idea to add optimizer hints using an ABAP variable: when the variable is set in an external form routine (e.g. GET_SQL_HINT), you can change the hints later without having to change the CO-PA code again. Depending on the input parameter (the name of the CO-PA form routine), GET_SQL_HINT calculates the required optimizer hint and fills the ABAP variable SQL_HINT. Even if the report Z_COPA_SQL_HINTS (which contains GET_SQL_HINT) does not exist, you do not get an error; in this case, the variable SQL_HINT stays empty and no optimizer hint is added.
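A minimal sketch of this calling pattern, which also approximates what the changed form routines described below do (the parameter interface of GET_SQL_HINT is an assumption; the IF FOUND addition is what makes a missing Z_COPA_SQL_HINTS harmless):

DATA: sql_hint TYPE string,
      lt_t100  TYPE STANDARD TABLE OF t100.

* The external form routine is only called if report Z_COPA_SQL_HINTS
* exists; otherwise sql_hint simply stays empty.
PERFORM get_sql_hint IN PROGRAM z_copa_sql_hints IF FOUND
  USING    'OPEN_CURSOR_NO_HOLD_CE1'
  CHANGING sql_hint.

* The calculated hint is then attached to the SELECT (or OPEN CURSOR):
SELECT * FROM t100
  INTO TABLE lt_t100
  %_HINTS MSSQLNT sql_hint.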

  • For the SELECTs with aggregation on the CE1, CE2, and CE4 tables, a MAXDOP and a HASH GROUP hint have been added.
    Therefore, several form routines in the template include RKEVRK2B_READ_COST were changed: the call of GET_SQL_HINT and the hint %_HINTS MSSQLNT SQL_HINT have been added. Here, the variable SQL_HINT is set to 'OPTION maxdop 16 OPTION hash group'. One example is the form routine OPEN_CURSOR_NO_HOLD_CE1 from the template include RKEVRK2B_READ_COST, which follows the calling pattern sketched above.

  • For the SELECTs without aggregation on the CE1, CE2, and CE4 tables, a different hint has been added.
    It turned out that there are some SELECTs without aggregation where the existing rowstore index would be a much better choice than the columnstore index. However, because SQL Server re-uses execution plans for parameters with different selectivity, the columnstore index was sometimes used. This resulted in a high CPU load and suboptimal performance. You could force the required index using an optimizer index hint as described above. However, our customer decided to use a different optimizer hint, which solved the issue: OPTIMIZE FOR UNKNOWN.
    Therefore, several form routines in the template include RKEVRK2A_POST have been changed: the call of GET_SQL_HINT and the hint %_HINTS MSSQLNT SQL_HINT have been added. Here, the variable SQL_HINT is set to 'OPTION optimize for unknown'. One example is the form routine READ_ALL_PAOBJNRS_BY_CHARVALS from the template include RKEVRK2A_POST.
    Keep in mind that changing the templates above has no impact until the ABAP code is regenerated using the new templates. Therefore, the operating concerns must be regenerated using SAP transaction KEA0.

Improvements in SQL Server 2017

SQL Server 2017 allows the online creation of an NCCI. In SQL Server 2016, it was only possible to create an NCCI offline: a shared lock was held during the whole runtime of the index creation, which blocked all data modifications (INSERTs, UPDATEs, DELETEs) while the NCCI was created. As of SQL Server 2017, you can choose in report MSS_CS_CREATE whether you want to use the online option or not. Keep in mind that creating an index online takes longer and consumes tempdb space. In return, you do not block any other SAP users while creating the index.

Improvements in SAP report MSS_CS_CREATE

The NCCI cannot be transported using the SAP transport landscape. Therefore, you have to create the NCCI on the development, consolidation, and production systems separately. This works fine with report MSS_CS_CREATE, even on a production system which is configured in SAP as Not Modifiable. However, you cannot delete an NCCI using SAP transaction SE11 on a Not Modifiable SAP system. Therefore, report MSS_CS_CREATE now has a Delete Index button (Del Indx).

The second improvement in MSS_CS_CREATE is the Online option. It is greyed out if the SAP system is running on SQL Server 2016.

Conclusion

An NCCI can speed up reporting performance on an SAP ERP system running on SQL Server 2016 or 2017. However, it is probably not useful for tables with a high transactional throughput (permanently high numbers of concurrent data modifications in the table). Based on your customer scenario, you can create an NCCI on the tables of your choice.

Improve SAP BW Performance by Applying the Flat Cube


Overview

SAP released the Columnstore Optimized Flat Cube over two years ago. We want to give a brief explanation of the benefits of using the Flat Cube and, in consequence, encourage customers to apply it.

The Flat Cube has many benefits, for example improved BW query performance and faster Data Transfer Processes (DTPs) into a cube. Furthermore, the Flat Cube is a prerequisite for using the improved BW statement generator (FEMS-pushdown). Before using the Flat Cube, you have to convert each cube to the Flat Cube design. Below we give guidance for quickly converting all cubes using the new report RSDU_IC_STARFLAT_MASSCONV. The report is available as a Correction Instruction in SAP Note 2116639 – SQL Server Columnstore documentation (see the section “Flat Cube Mass-Conversion” below).

Benefits of Flat Cube

A brief overview of the Flat Cube is contained in https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/27/columnstore-optimized-flat-cube-in-sap-bw. The Flat Cube on SQL Server uses the same table design as BW on HANA has been using for years. The benefits of a Flat Cube are:

  • Faster DTPs
    DTPs into a Flat Cube are typically much faster, because a Flat Cube does not contain dimension tables any more. Therefore, there is no need for the time-consuming DIMID generation when loading data into a cube. The BW Cube Compression is also much faster for a Flat Cube; in most cases, it is not even needed any more. However, for Inventory Cubes we still recommend running the BW Cube Compression.
  • BW Query Performance
    The BW query performance is typically much better with the Flat Cube. The generated SQL queries are simpler, since there is only one fact table and almost all dimension tables are gone (except the P-dimension). Therefore, there is no longer a need to join the dimension tables with the fact tables. Typical performance numbers are documented here:
    https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server
  • FEMS-Pushdown
    The Flat Cube is a prerequisite for further accelerating complex BW queries: The BW FEMS-pushdown is described here: https://blogs.msdn.microsoft.com/saponsqlserver/2017/06/30/customer-experience-with-sap-bw-fems-pushdown.
  • Aggregates not required
    You cannot create BW aggregates on a Flat Cube since you typically do not need them any more. If you still see the need for aggregates on a particular cube, then you can convert the cube back to a non-Flat cube.

Prerequisites

The Flat Cube requires at least SQL Server 2014, but we recommend SQL Server 2016 or newer. Read the following SAP Notes:

You can convert the cubes of a BW Multi-Provider separately to the Flat Cube design. You can create a BW Semantically Partitioned Cube (SPO) in Flat Cube design. However, there is currently no conversion report available for converting an SPO to the Flat Cube design. Until SAP releases such a report, you have to create a new SPO as a Flat Cube and then transfer the data using a DTP from the old SPO.

Originally, it was not possible to use the Flat Cube design for a Real-Time Cube. This has been fixed with

Converting to Flat Cube

SAP uses the Repartitioning Framework for converting a non-Flat Cube to a Flat Cube and the other way around. In the following, we only describe the conversion to the Flat Cube (which is probably the only direction you need on SQL Server).

The conversion to a Flat Cube is not done by simply creating indexes or copying data. Under some circumstances, rows of the original e-fact table are compressed further during the Flat Cube conversion. Therefore, the Flat Cube might contain fewer rows in its fact table than the original f-fact and e-fact tables contain together.

The DIMIDs 0, 1 and 2 in the P-dimension of a Flat Cube are reserved for special purposes. This allows a fixed database partitioning (of 4 partitions) for best performance. Cubes that have never been compressed might use DIMID 2. In this case, you get an error in the prerequisite check of the Flat Cube conversion: you are asked to compress a specific BW request (the particular request with DIMID 2) before you can run the Flat Cube conversion.

The Flat Cube uses an optimized approach for loading Historical Transactions into Inventory Cubes (see Inventory Management at https://help.sap.com/saphelp_nw73/helpdata/en/e1/5282890fb846899182bc1136918459/frameset.htm). Before converting to a Flat Cube, you must run BW Cube Compression of all BW requests, which contain Historical Transactions.

SAP report RSDU_REPART_UI has been extended for the Flat Cube Conversion:

After choosing “Non-Flat to Flat” and entering the cube name, press “Initialize”. A popup window appears, reminding you to perform a full database backup before running the conversion.

In the next screen, you can schedule a batch job. Be aware that the Flat Cube conversion always runs as a batch job (with the job name RSDU_IC_FLATCUBE/<cube>). Report RSDU_REPART_UI is just used for scheduling and monitoring these batch jobs. You should not schedule the report RSDU_REPART_UI itself as a batch job.

The conversion to a Flat Cube can require a huge amount of transaction log space. Therefore, you might have to increase the size of the SQL Server transaction log and the frequency of log backups. To keep the transaction size low, the cube is copied in chunks (each request in the f-fact table and each time-DIMID in the e-fact table is copied separately). The chunks are processed in parallel using RFCs; by default, up to 3 chunks are processed at the same time. You can speed up processing by configuring more parallel chunks using the RSADMIN parameter RSDU_REPART_PARALLEL_DEGREE. However, this parameter is overwritten by the RSADMIN parameter QUERY_MAX_WP_DIAG (if that parameter is explicitly set).

After pressing “Monitor” in RSDU_REPART_UI, you can track the progress of the Flat Cube Conversion. How to process failed conversions is described in the section “Troubleshooting” below.

Flat Cube Mass-Conversion

SAP recently released the report RSDU_IC_STARFLAT_MASSCONV (all necessary code changes are described in SAP Note 2116639 – SQL Server Columnstore documentation). This report allows scheduling the conversion of many BW cubes at the same time. When starting report RSDU_IC_STARFLAT_MASSCONV for the first time, you have to press “Generate Work List”. This starts a batch job which collects information about all non-Flat cubes. When pressing “Refresh Display of Work List”, each non-Flat cube is displayed in one of three tabs:

Non-convertible cubes are displayed in the 1st tab. These cubes do not (yet) fulfill the prerequisites of the Flat Cube conversion. Once you have applied all necessary prerequisites, you have to run the work list batch job again.

In the 2nd tab, you can select the cubes you want to convert. After pressing the Start icon, an SAP batch job scheduling window appears. Here you can define the start time for the conversion of the first cubes. The conversion of the other selected cubes is scheduled as a chain: the next batch job starts once its predecessor finishes.

The batch scheduling makes sure that only 3 cube conversions are running at the same time. You can change this number in the main screen of RSDU_IC_STARFLAT_MASSCONV. Furthermore, the total number of rows being converted at the same time is also limited. The idea behind this is to reduce the workload and to prevent a full database log (when running in the Simple recovery model).

However, a production system should use the SQL Server recovery model Full or Bulk-logged. In this case, you should increase the size of the transaction log and the frequency of the transaction log backups during the Flat Cube conversions. Otherwise, the transaction log might fill up, whether you run the conversions serially or in parallel. To be on the safe side, you should run the conversion of the biggest cubes separately and run a transaction log backup immediately before starting the conversion.

In the 3rd tab of RSDU_IC_STARFLAT_MASSCONV, you can see all jobs for the cube conversion (whether they are scheduled, running, or failed). After selecting one conversion job, you can jump to the conversion log screen (which actually is the same screen as the monitor screen in RSDU_REPART_UI).

Troubleshooting Failed Conversions

The Flat Cube conversion of a single cube consists of a sequence of steps. You can see these steps in the monitor screen of report RSDU_REPART_UI. At the very beginning of the cube conversion, a cube conversion lock is set. In addition, a read lock is set in step SET_READ_LOCK. The data and structure of the original cube are not touched until the step SET_READ_LOCK has been executed.

If the conversion fails before reaching step SET_READ_LOCK, then you do not need to take care of the issue immediately. You can release the cube conversion lock and continue working with the non-Flat cube. To release the locks (read lock and cube conversion lock), simply press the UNLOCK button in report RSDU_REPART_UI (for this, SAP Note 2580730 – Unlock failed Flat Cube Conversion has to be applied).

In the following example, the conversion to Flat failed in step COPY_TO_SHD_EFACT. When clicking on the step, you can see SQL error 9002, which means that the transaction log has filled up. Therefore, the first thing to do is to perform a transaction log backup.

You can simply restart the conversion in RSDU_REPART_UI with 2 clicks:

  1. Select the conversion request by clicking on it (“Conversion to Flat”).
  2. Press the button “Restart Request”.
    A popup window occurs, which lets you schedule a batch job.

In report RSDU_IC_STARFLAT_MASSCONV, you have to restart each failed request individually. A mass restart is planned for a future version of RSDU_IC_STARFLAT_MASSCONV. Until then, you should take particular care of transaction log size and backups when using RSDU_IC_STARFLAT_MASSCONV.

Conclusion

SAP BW performance can be improved by applying the Flat Cube. Using report RSDU_REPART_UI, you can convert a single cube; using RSDU_IC_STARFLAT_MASSCONV, you can convert many cubes at the same time. We recommend converting all cubes to the Flat Cube design, with two exceptions: converting SPO cubes is currently not possible, and for Real-Time Cubes the benefits of the Flat Cube highly depend on the customer scenario.

SAP on Azure: General Update – January 2018


SAP and Microsoft are continuously adding new features and functionalities to the Azure cloud platform. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. M-Series, Dv3 & Ev3-Series VMs Now Certified for NetWeaver

Three new VM types are certified and supported by SAP for NetWeaver AnyDB workloads. AnyDB refers to NetWeaver applications running on SQL Server, Oracle, DB2, Sybase or MaxDB.

A subset of these VMs is currently being certified for Hana.

The Dv3 VM type has 4GB of RAM per CPU and is suitable for SAP application servers or small DBMS servers.

The Ev3 VM type has 8GB of RAM per CPU for E2v3 – E32v3; the E64v3 has 432GB. This VM type is suitable for large DBMS servers.

The M VM type has up to 3.8TB of RAM and 128 CPUs and is suitable for very large DBMS workloads.

All three of these new VM types have many new features, in particular greatly improved networking performance. More information on Dv3/Ev3

The Azure Service Availability site details the release status of each VM type per datacenter; however, Ev3/Dv3 should generally be available everywhere.

New VM Types and SAPS values:

VM Type   CPU & RAM          SAPS
D2s_v3    2 CPU, 8 GB        2,178
D4s_v3    4 CPU, 16 GB       4,355
D8s_v3    8 CPU, 32 GB       8,710
D16s_v3   16 CPU, 64 GB      17,420
D32s_v3   32 CPU, 128 GB     34,840
D64s_v3   64 CPU, 256 GB     69,680
E2s_v3    2 CPU, 16 GB       2,178
E4s_v3    4 CPU, 32 GB       4,355
E8s_v3    8 CPU, 64 GB       8,710
E16s_v3   16 CPU, 128 GB     17,420
E32s_v3   32 CPU, 256 GB     34,840
E64s_v3   64 CPU, 432 GB     70,050
M64s      64 CPU, 1000 GB    67,315
M64ms     64 CPU, 1792 GB    68,930
M128s     128 CPU, 2000 GB   134,630

The official list of VM types certified for SAP NetWeaver applications can be found in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types 

SAP cloud benchmarks are listed here:

E64v3 Benchmark can be found here 

D64v3 Benchmark can be found here 

m128 Benchmark can be found here 

m128 BW Hana Benchmark can be found here 

2. SAP Business One (B1) on Hana & SQL Server Now Certified on Azure

SAP Business One is a common SMB ERP solution. Many SAP B1 customers run on SQL Server today. SAP B1 on SQL Server is now Generally Available for Azure VMs

SAP has also ported SAP B1 to Hana. SAP B1 on Hana is now certified on Azure DS14v2 for approximately 40 users

Customers planning to run SAP B1 on Azure may be able to lower costs by using the B1 Browser Access feature available in more modern versions of SAP B1. Browser Access in some cases eliminates the need to install the B1 client on Terminal Server VMs on Azure.

These SAP Notes may be useful:
2442627 – Troubleshooting Browser Access in SAP Business One

2194215 – Limitations in SAP Business One Browser Access

2194233 – Behavior changes in Browser Access mode of SAP Business One as compared to Windows desktop mode

SAP on Azure certification information can be found here:

For all SAP on Azure documentation start at this page 

3. Managed Disks Recommended for SAP on Azure

Managed Disks are generally recommended for all new deployments.

Managed Disks reduce complexity and improve availability by automatically distributing the storage of VMs in an availability set onto different storage nodes, so that the failure of a single storage node cannot cause an outage on two or more VMs in the availability set.

Notes:

1. Managed Standard disks are not supported for SAP NetWeaver application servers or DBMS servers: the Azure host monitoring agent does not support Managed Standard disks.

2. In general it is recommended to deploy SAP application servers without an additional data disk and to install /usr/sap/<SID> on the boot disk. The boot disk can be up to 1TB in size; however, SAP application servers do not require this much storage space and do not require high IOPS under normal circumstances.

3. It is not possible to add both managed and unmanaged disks to a VM that is in an availability set

4. SQL Server VMs running with datafiles directly on blob storage cannot leverage the features of Managed Disks for these datafiles.

5. In general it is recommended to use Managed Premium disks for SAP application servers so that these VMs are guaranteed the financially backed Azure Single VM SLA of 99.9% (Note: the actual achieved SLA is typically much higher than 99.9%)

6. In general it is recommended to use Managed Premium disks for SAP DBMS servers as detailed in SAP Note 2367194 – Use of Azure Premium SSD Storage for SAP DBMS Instance 

A good overview of managed disks is available here:

A deep dive on managed disks is available here:

Pricing and performance details for the various Azure Disks can be found here:

A very good Frequently Asked Questions is here 

4. Sybase ASE 16.3 PL2 “Always-on” on Azure

Sybase ASE 16 SP2 and higher is supported on Windows and Linux on Azure as documented in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types

Sybase ASE includes an HA/DR solution called “Always-on”. The features and functions of this Sybase solution are very different from SQL Server AlwaysOn. This HA solution does not require shared disks.

For information about Sybase ASE release schedule review 

The central Sybase HA documentation can be found here:

SAP supports multiple replica databases according to SAP Note 2410733 – Always-On (HADR) support for 1-to-many replication – SAP ASE 16.0 SP02 PL05

Sybase “Always-on” does not require an Azure Internal Load Balancer (ILB) in typical configurations, and the installation on Azure is relatively transparent. Sybase 16 SP3 PL2 is expected to offer several new features for SAP customers.

If there are questions about the setup of Sybase on Azure or inconsistencies in the documentation, please open an OSS message to BC-DB-SYB.

5. Resource Groups, Tags, Role Based Access Control, Billing, VNet, NSG, UDR and Resource Locking

Before starting an Azure deployment it is very important to design the core “Foundation Services” and structures that will support the IaaS and PaaS resources running on Azure.

Documenting the recommendations for designing and configuring all these elements would be a multi-part blog in itself. This topic provides a starting point for customers planning their Azure deployment for SAP landscapes and covers the kinds of questions that are commonly asked by SAP customers.

1. Resource Groups provide a way to monitor, control access to, provision, and manage billing for collections of Azure objects. Often SAP customers deploy a Resource Group per environment, such as Sandbox, Dev, QAS and Production. This allows for a simple billing breakdown per environment. If a business unit wants a clone of production for testing new business processes, built-in Azure functionality can be used to clone production and copy it into a new Resource Group called “Project”. The monthly cost can be monitored and charged back to the business unit that requested this system.

2. Azure Tags are used by some SAP customers to provide additional attributes about a particular VM or other Azure object. For example, a VM could be tagged as “ECC 6.0” or “NetWeaver Application Server”. Azure Tags allow for more precise billing and security control with Role Based Access Control. It is possible to query tags and, for example, determine which VMs are SAP or non-SAP, and which VMs are application servers or DBMS servers.

3. Role Based Access Control allows a customer to segregate duties, delegate limited administrative rights to teams such as the SAP Basis team, and create a fine-grained security model. It is common to delegate significant Azure IaaS rights to the Basis team: Basis should be allowed to create and change many Azure resources such as VMs. Typically, Basis would not be able to create or change VNet or network-level resources.

4. Billing allows greater cost transparency than on-premises solutions. Azure Resource Groups and Tags should be designed so that it is very clear which SAP system or environment corresponds to line items on Azure monthly bills. Ideally additional project systems or systems requested by individual business units should be able to be charged back.

5. Azure VNet, NSG and UDR design is normally handled by a network expert rather than the SAP Basis team. There are some factors that must be considered when designing the VNet topology and the NSGs and UDRs:

a. Communication between the SAP application servers and DBMS servers must not be routed or inspected by virtual appliances. SAP is very sensitive to latency between application server and DBMS server. “Hub & Spoke” network topologies are one solution that allows security and inspection of client traffic while avoiding inspection of SAP traffic

b. Often the DBMS servers and Application servers are placed in separate subnets on the same VNet. Different NSGs are then applied to SAP application servers and DBMS servers

c. UDRs should not route traffic unnecessarily back to the on-premises proxy server. Common mistakes include accessing SQL Server datafiles on blob storage via the on-premises proxy server (leading to very poor performance), or routing http(s) communication between SAP applications back to on-premises proxies.

It is now popular to deploy a “Hub and spoke” network topology.

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-nsg

A very good blog https://blogs.msdn.microsoft.com/igorpag/2016/05/14/azure-network-security-groups-nsg-best-practices-and-lessons-learned/

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview

6. Azure Resource Locking prevents accidental deletion of Azure objects such as VMs and storage. It is recommended to create the required Azure resources at the start of a project. When most add/move/change activities are finished and the Azure deployment has stabilized, all the resources can be locked. Only a super administrator can then unlock a resource and allow the resource (such as a VM) to be deleted.

https://blogs.msdn.microsoft.com/cloud_solution_architect/2015/06/18/lock-down-your-azure-resources/

It is considerably easier to implement these best practices before a system is live. It is possible to move Azure objects such as VMs between subscriptions or resource groups as illustrated below (full support for Managed Disk environments is due early 2018; in the interim, it is possible to download the VHD files of a Managed Disk VM using the “Export” button in the Azure portal).

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/move-vm

https://docs.microsoft.com/en-gb/azure/azure-resource-manager/resource-group-move-resources (please refer to the Virtual Machines limitations section)

6. AzCopy for Linux Released

AzCopy is a popular utility for copying blob objects within Azure or for uploading or downloading objects between on-premises and Azure. An example could be uploading R3load dump files from on-premises to Azure while migrating from UNIX/Oracle to Win/SQL on Azure.

AzCopy is now available for Linux platforms. This utility requires .NET Core 2.0 for Linux to be installed.

AzCopy for Windows is available here. To improve AzCopy throughput, the /NC:<xx> parameter can be specified. Depending on the bandwidth and latency of a connection, values between 16 and 32 could significantly improve throughput. Values much higher than 32 may saturate most internet links.

An alternative to AzCopy is Blobxfer

7. Read Only Domain Controllers – RODC: Is a RODC More Secure on Azure Than a DC?

Read Only Domain Controllers (RODCs) are a feature that has been available for many years. This feature is documented here:

The differences between a Read Only Domain Controller and a writeable domain controller are explained here:

Recently, multiple customers have proposed to put an RODC in Azure, stating that they believe this to be “more secure”.

The security profile of a RODC and a writeable domain controller on Azure with an ExpressRoute connection back to on-premises Domain controllers is very similar. The only exception is the “Filtered Attribute Set” – some AD attributes may not be replicated to a RODC (but almost all attributes are replicated)

There are some recommendations for securing Domain Controllers on Azure and in general:

1. Intruders can query an Active Directory RODC or a writeable DC equally – so-called “surveying”, trying to find vulnerable, weak or unsecured user accounts. IDS and IPS solutions should be deployed both on Azure and on-premises to detect surveying.

2. One of the single biggest security enhancements possible is to implement Multi-factor authentication – Azure has built in services for Multi-factor authentication 

3. It is recommended to use Azure Disk Encryption on the boot disk and the disks containing the DS database, logs and SYSVOL. This prevents an attacker from cloning the entire VM, downloading the VHD files, starting up the RODC or writeable DC, and then using debugging tools to try to compromise the AD database.

Summary: deploying an RODC instead of a writeable Domain Controller does not significantly change the security profile of an Active Directory solution on Azure deployments with ExpressRoute connections back to on-premises AD infrastructure. Instead, use IDS, multi-factor authentication and Azure Disk Encryption in conjunction with other security measures to build a highly secure AD environment. Do not rely on the simple fact that a Domain Controller is read-only as the sole security mechanism.

8. Azure Site Recovery: Update on Support Status

Azure Site Recovery is a powerful platform feature that allows customers to achieve best-in-class Disaster Recovery capabilities at a fraction of the cost of competitive solutions.

A blog and a whitepaper have been released detailing how to deploy Azure Site Recovery for SAP applications:

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-sap

http://aka.ms/asr-sap

Several new features and capabilities will be added to Azure Site Recovery over time, and some are already available:

1. Azure Disk Encryption is a feature that encrypts the contents of Azure boot and/or data disks. Support for this feature will be in preview soon. If this feature is required, please contact Microsoft.

2. Support for Storage Spaces and SIOS is Generally Available

3. Support for VMs with Managed Disks will be released soon

4. Cross subscription replication will be added in early 2018

5. Support for Suse 12.x will be added in 2018

More information on ASR and ADE

https://azure.microsoft.com/en-us/blog/tag/azure-site-recovery/

https://azure.microsoft.com/en-us/services/site-recovery/

https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-faq

9. Non-Production Hana Systems

It is supported to run Hana DBMS servers on non-Hana certified hardware and cloud platforms. This is documented in SAP Note 2271345 – Cost-Optimized SAP HANA Hardware for Non-Production Usage

It is recommended to review the PowerPoint and Word document attached to this SAP Note. The note states that the “whitebox” type servers typically used for Hyperscale cloud can be used for non-production systems and that virtualized solutions are also possible.

Therefore, it is completely possible to run non-production Hana systems on Azure VMs.

In general Disaster Recovery systems are considered to be Production as they could run production workloads.

10. Oracle Linux 7.x Certified on Azure for Oracle 11g & 12c

The Azure platform gives customers the widest choice of operating system and database support. Recently, SAP added support for the Oracle DBMS running on Linux VMs in Azure.

The full list of operating systems and database combinations supported on Azure is officially listed in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types 

Notes:

1. SAP + Oracle + Linux + Azure = Fully supported and Generally Available

2. Oracle DBMS must be installed on Oracle Linux 7.x

3. Oracle Linux 7.x or Windows can be used for SAP application servers and standalone engines (see PAM for details)

4. It is strongly recommended to install the latest updates for Oracle Linux before starting SWPM

5. The Linux host monitoring agent must be installed as detailed in SAP Note 2015553 – SAP on Microsoft Azure: Support prerequisites 

6. Customers wishing to use Accelerated Networking with Oracle Linux should contact Microsoft

7. It is not supported to run Oracle DBMS on Suse or RHEL

Important SAP Notes and information:

https://wiki.scn.sap.com/wiki/display/ORA/Oracle

2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions 

2069760 – Oracle Linux 7.x SAP Installation and Upgrade 

405827 – Linux: Recommended file systems

2171857 – Oracle Database 12c – file system support on Linux

2369910 – SAP Software on Linux: General information

1565179 – This note concerns SAP software and Oracle Linux

Note: SAP + Oracle + Windows + Azure = Fully supported and Generally Available (supported for a long time; many multi-terabyte customers are live on Azure)

11. Oracle 12c Release 2 is Certified by SAP and Released on Windows 2016. ASM Support on Azure in Planning

SAP has certified Oracle 12c Release 2 for SAP NetWeaver applications as documented in SAP Note 2133079 – Oracle Database 12c: Integration in SAP environment

Oracle 12c Release 2 is supported on Windows 2016 in addition to certified Linux distributions.

Oracle Database version 12.2.0.1 (incl. RDBMS 12.2.0.1, Grid Infrastructure 12.2.0.1 and Oracle RAC 12.2.0.1) is certified for SAP NetWeaver based SAP products starting December 18th, 2017. The minimum initial RDBMS 12.2.0.1 SAP Bundle Patch (SBP) is SAP12201P_1711 (Unix) or PATCHBUNDLE12201_1711 (Windows).

At least SAP Kernel version 7.21_EXT is required for Oracle 12.2.0.1

SAP note 2470660 provides important technical information about using Oracle 12.2.0.1 in a SAP environment, like database installation/upgrade guidelines, software download, patches, feature support, OS prerequisites, etc.

The Oracle features supported in Oracle version 12.1 (like Oracle In-Memory, Oracle Multitenant, Oracle Database Vault and Oracle ILM/ADO) are supported for version 12.2.0.1 as well.

2470660 – Oracle Database Central Technical Note for 12c Release 2 (12.2)

2133079 – Oracle Database 12c: Integration in SAP environment

Microsoft is working to obtain certification of Oracle ASM on Azure. The first combination planned is Oracle Linux 7.4 and Oracle 12c R1/R2. This blog will be updated with more information later.

998004 – Update the Oracle Instant Client on Windows

12. Accelerated Networking Recommended for Medium & Large SAP Systems

Accelerated Networking drastically reduces the latency and significantly increases the bandwidth between two Azure VMs

Accelerated Networking is Generally Available for Windows & Linux VMs

It is generally recommended to deploy Accelerated Networking for all new medium and large SAP projects

Additional points to note about Accelerated Networking:

1. It is not possible to switch on Accelerated Networking for existing VMs. Accelerated Networking must be enabled when a VM is created. It is possible to delete a VM (by default the boot and data disks are kept) and create the VM again using the same disks

2. Accelerated Networking is available for most new VM types such as Ev3, Dv3, M and Dv2 with 4 physical CPUs or more (as at December 2017 – note the E8v3 has 4 physical CPUs with 8 hyperthreads)

3. Accelerated Networking is not available on G-series VM types

4. SQL Server VMs running with datafiles stored directly on blob storage are likely to benefit greatly from Accelerated Networking

5. Suse 12 Service Pack 3 (Suse 12.3) is strongly recommended (Hana certification is still in progress as at December 2017). RHEL 7.4 recommended. Contact Microsoft for Oracle Linux.

6. It is possible to have one or more Accelerated Network NICs and a traditional non-accelerated network card on the same VM

7. Azure vNet UDR and/or other security and inspection devices should not sit between the SAP application servers and database server. This connection needs to be as high performance as possible

8. SAP application server to database server latency can be tested with ABAP report /SSA/CAT -> ABAPMeter

9. Inefficient “chatty” ABAP code, or particularly intensive operations such as large Payroll jobs or IS-Utilities Billing jobs, have shown very significant improvements after enabling Accelerated Networking

Additional useful information about Azure Networking can be found here:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-create-vm-accelerated-networking

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-optimize-network-bandwidth

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-bandwidth-testing

https://blogs.msdn.microsoft.com/igorpag/2017/04/06/my-personal-azure-faq-on-azure-networking-v3/

https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/21/moving-from-sap-2-tier-to-3-tier-configuration-and-performance-seems-worse/

Below is an example of very inefficient ABAP code that will thrash the network. Placing a SELECT statement inside a LOOP is very poor coding practice. Accelerated Networking will improve the performance of poorly coded ABAP, but it is generally recommended to avoid a SELECT statement inside a LOOP. This code is totally non-scalable: as the number of iterations increases, the performance degrades severely.
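A minimal sketch of the anti-pattern, together with a set-based alternative (the table choice and variable names are illustrative only):

TYPES: BEGIN OF ty_order,
         vbeln TYPE vbak-vbeln,
         netwr TYPE vbak-netwr,
       END OF ty_order.
DATA: lt_orders TYPE STANDARD TABLE OF ty_order,
      ls_order  TYPE ty_order,
      lt_result TYPE STANDARD TABLE OF ty_order.

* Anti-pattern: one database round trip per loop iteration.
LOOP AT lt_orders INTO ls_order.
  SELECT SINGLE netwr FROM vbak
    INTO ls_order-netwr
    WHERE vbeln = ls_order-vbeln.
  MODIFY lt_orders FROM ls_order.
ENDLOOP.

* Better: a single set-based read for all keys.
IF lt_orders IS NOT INITIAL.
  SELECT vbeln netwr FROM vbak
    INTO TABLE lt_result
    FOR ALL ENTRIES IN lt_orders
    WHERE vbeln = lt_orders-vbeln.
ENDIF.

Accelerated Networking reduces the cost of each round trip, but the number of round trips in the anti-pattern still grows linearly with the number of loop iterations.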

13. New Azure Features

The Azure platform has many new features and enhancements added continuously.

A good summary of many of the new features can be found here:

Several new interesting features for SAP customers include:

1. Two different datacenters can communicate over a vNet-to-vNet Gateway connection. An alternative currently in preview is Global Peering 

2. SoftNAS can be used for storing files, DIR_TRANS and interfaces. It supports the NFS and SMB protocols

3. Azure Data Box is useful for data center migration scenarios

4. CIS images – these are hardened Windows images. They have not been fully tested with all SAP applications, but should work for SAP Web Dispatcher, SAProuter, etc.

5. SAP LaMa now has a connector for Azure 2343511 – Microsoft Azure connector for SAP Landscape Management (LaMa) 

6. A future blog will cover Hana Large Instance networking, but additional information on Hub & Spoke networking can be found here: Whitepaper

7. Azure Service Endpoints remove some public endpoints and move the service to an Azure vNet

Other useful links below

VMs: Performance | No Boot | Agents | Linux Support

Networking: ExpressRoute | Vnet Topologies | ARM LB Config | Vnet-to-Vnet VPN | VPN Devices | S2S VPN | ILPIP | Reserved IP | Network Security

Tools: PowerShell install | VPN diagnostics | PlatX-CLI | Azure Resource Explorer | ARM Json Templates | iPerf | Azure Diagnostics

Azure Security: Overview | Best Practices | Trust Center
Preview Features | Preview Support

Miscellaneous Topics

SAP on Windows & Oracle presentation covering Oracle features for Windows http://www.oracle.com/technetwork/topics/dotnet/tech-info/oow2016-whatsnew-db-windows-3628748.pdf

A new and interesting feature for UNIX/Oracle customers wishing to terminate UNIX platforms and move to Intel commodity servers is Oracle Cross Platform Transportable Tablespaces

The diagram shows the process for creating a backup on a UNIX Big Endian system and successfully restoring it onto an Intel Little Endian system (Windows or Linux).

Additional information is in SAP Note 552464 – What is Big Endian / Little Endian? What Endian do I have?

A similar question is sometimes received from customers that have installed SAP Hana on IBM Power servers. These customers either want to move away from Hana on Power (a rather niche solution) or wish to run DR on public cloud. SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery states that Hana 2.0 backups can be restored from IBM Power onto Intel-based systems.


SAP on SQL Server: General Update – January 2018


SAP and Microsoft are continuously adding new features and functionalities to the SAP on SQL Server platform. The key objective of the SAP on Windows SQL port is to deliver the best performance and availability at the lowest TCO and simplest operation. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. New Case Studies on SAP on SQL Server

Malaysia Airlines migrated their entire datacenter from on-premises to Azure. More than 100 applications were moved to Azure, including a large SAP landscape. The project was executed by Tata Consultancy Services (TCS). The TCS SAP team demonstrated an outstanding skillset and capability during the entire project. SAP applications were migrated from DB2 to SQL Server 2016 with datafiles running on blob storage. Local high availability is achieved by using AlwaysOn, and SQL Server AlwaysOn is also used to replicate the databases to Hong Kong. A full case study can be found here:

https://customers.microsoft.com/en-us/story/malaysia-airlines-berhad

https://partner.microsoft.com/it-it/case-studies/tata

Several other Malaysia-based customers in the agricultural and shipping industries have also moved their SAP landscapes to Azure.

A useful blog discussing a large Australian Energy customer can be found here

2. Security Recommendations for Windows Servers

It is recommended to implement a few simple security settings on Windows servers. This will greatly increase the security of a SAP landscape running on Windows servers. An additional benefit is that it may not be required to implement Windows patches as frequently.

Recommendation #1: Disable SMB 1.0

SAP Note 2476242 – Disable windows SMBv1 describes how to disable this legacy protocol. SMB 1.0 is only required for legacy clients such as Windows NT 4.0 and should be disabled in all cases for SAP systems. There is no valid reason why SMB 1.0 should be running on any SAP server.

More information can be found here

Recommendation #2: Remove Internet Explorer

Open a command prompt with administrative rights and run this command:

dism /online /Disable-Feature /FeatureName:Internet-Explorer-Optional-amd64

On previous versions of Windows Server, Windows Update may try to update earlier versions of IE to IE 11. The utility below can prevent Windows Update from updating browser versions. Updating the browser is likely to require a restart of the operating system and should be avoided: https://www.microsoft.com/en-us/download/details.aspx?id=40722

Additional useful information for Security & Networking can be found in these SAP OSS Notes

2532012 – SSL Security error while installing database instance on SQL Server

1570930 – SQL Server network encryption with SAP

2356977 – Error during connection of JDBC Driver to SQL Server

2385898 – SSL connection to SQL Server fails with: Could not generate secret

1702456 – Errors with Microsoft JDBC drivers connecting with SSL

1428134 – sqljdbc driver fails to establish secure connection

2559590 – SQL Server connection fails after TLS 1.2 has been enabled

Detailed SAP on Windows Security Whitepaper

3. Uninstall Internet Explorer from Windows Servers? – What about the new SAPInst?

SWPM 1.0 SP20 now has a Fiori based interface that is run from a Web Browser. It is a specific recommendation to remove Internet Explorer, third party browsers and any unnecessary software from all SAP servers including non-production servers. Fortunately there are several ways to run the new SWPM on a server where no browser is installed.

This SAP blog discusses several options

Option #1: Run SAPInst with the following command line option to load the previous gui

Sapinst.exe SAPINST_SLP_MODE=false

Option #2: Connect to SAPinst remotely from a Management Server with a browser

Run sapinst -nogui

Ensure the Windows Firewall has port 4237 open

From a dedicated Management Server open a browser and connect to https://<sap_server_hostname or_IP>:4237/JfIVyDkxlsXSBFYi/docs/index.html

(Note: the exact URL can be found in the logs of the SAPInst program starter)


The username and password are the OS-level user and password on the server where SAPInst was started (e.g. DOMAIN\sapadmin)

Hint: If there are problems running SWPM add the SAP Server hostname and/or IP to the Trusted Sites

4. SQL Server 2017 for SAP NetWeaver Systems

Microsoft has released SQL Server 2017. SQL Server 2017 has many new features

In general, the required SAP Support Packs for SQL Server 2017 are the same as for SQL Server 2016. Details can be found in SAP Note 2492596 – Release planning for Microsoft SQL Server 2017

SAP and Microsoft plan to complete testing and make SQL Server 2017 generally available in January 2018 or shortly after

5. Power Options – Set for Maximum Performance

It is recommended to set SQL Server and SAP application server Power Plan to Maximum Performance.

SAP has released SAP Note 2497079 – Poor performance in SAP NetWeaver and/or MS SQL Server due to power settings

It is important to set the power settings to maximum performance at all layers of an infrastructure, for example the server BIOS, Hypervisor and Windows Operating system.

6. Business Suite 7 Maintenance, SAP NetWeaver Java Systems, Windows & SQL – End of Life & JDBC Drivers

SAP has documented the end of life of SAP Business Suite in SAP Note 1648480 – Maintenance for SAP Business Suite 7 Software. The note states that the current generation of SAP applications running on “AnyDB”, installed at over 200,000 customers worldwide, will be out of support after 31st December 2025.

Many SAP customers have decided to migrate their SAP applications to Windows 2016 and SQL Server 2016/2017 and to upgrade to SAP versions that remain in support until 31st December 2025. Some of these customers plan to implement S/4HANA and wish to run on a supported platform “stack” in the meantime. SAP NetWeaver 7.5 components, Windows 2016 and SQL Server 2016/2017 will remain in support until 2025. This means a customer can move to Windows 2016 and SQL Server 2016/2017 on Azure and never need to upgrade the OS, database or hardware until the end of life of the application.

Windows Server 2016 is in mainstream support until 11th January 2022 and in extended maintenance until 11th January 2027, as documented in the Microsoft Product Lifecycle tool:

https://support.microsoft.com/en-us/lifecycle/search?alpha=Windows%20Server%202016%20Datacenter

https://support.microsoft.com/en-us/lifecycle/search

The final SQL Server 2016 service pack (currently only SP1 is released) will remain in support until 2026. It is possible that another SQL Server 2016 service pack will be released, which might further extend the support lifetime of SQL Server 2016. SQL Server 2017 is in support until October 2027

The Azure platform will automatically upgrade hardware, networking and storage transparently over time.

Moving the current SAP applications to a stack that is fully supported until the end of life of the applications has allowed many customers to focus resources into planning for S/4HANA implementation projects.

Notes:

Java systems for 7.5x will be in maintenance until 31st December 2024

Java 7.0 EHP0, EHP1, EHP2 and EHP3 are out of support as of 31st December 2017 (support for Java 4.1 is terminated)

7. SAP ASCS File Share vs. ASCS Shared Disk

SAP has released documentation and a new Windows cluster DLL that enable the SAP ASCS to leverage an SMB UNC share instead of a cluster shared disk.

The solution has been tested and documented by SAP for usage in non-productive systems and can be used in Azure Cloud. This feature is for SAP NetWeaver components 7.40 and higher.

This feature is now fully Generally Available to all customers (both on-premises and on Azure) and is documented here

File Server with SOFS and S2D as an Alternative to Cluster Shared Disk for Clustering of an SAP (A)SCS Instance in Azure is Generally Available

High Available ASCS for Windows on File Share – Shared Disk No Longer Required

8. ReFS, Cluster, Continuous Access File Share and Windows Update Patches

SAP fully supports the ReFS filesystem; see SAP Note 1869038 – SAP support for ReFs filesystem

Some antivirus software, or other software that intercepts the Windows IO subsystem, requires this patch.

It is therefore required to apply this patch on all Windows 2016 systems running ReFS

Older versions of the SWPM prerequisite checker will still warn that it is required to disable the Windows Continuous Availability feature.

SAP now fully supports Continuous Availability, as documented in SAP Note 2287140 – Support of Failover Cluster Continuous Availability feature (CA)

https://blogs.sap.com/2017/07/21/how-to-create-a-high-available-sapmnt-share/

https://wiki.scn.sap.com/wiki/display/SI/Should+I+run+the+Web+Dispatcher+as+a+standalone+installation+or+as+part+of+an+ABAP+or+J2EE+system

It is generally recommended to always use the latest SWPM available from here

Recent releases of SWPM should not request to disable this feature.

It is generally recommended to apply these updates to Windows 2012 R2 cluster systems:

http://aka.ms/2012R2ClusterUpdates

http://aka.ms/AzureClusterThreshold

Windows Server 2016 Long Term Servicing Branch is the supported release for SAP applications. Do not use the Semi-Annual Channel.

https://blogs.technet.microsoft.com/windowsitpro/2017/07/27/waas-simplified-and-aligned/

9. Adding Secondary IP Address onto Cluster Core Resource & Read Only Cluster Admins

Customers installing Windows geoclusters on-premises and on Azure will need to add a second IP address to the cluster core resource.

This is because the primary and DR cluster nodes are typically on different subnets. To add a second cluster core resource IP address, follow the guide here:

The key point in the blog is this PS command:

PS > Add-ClusterResource -Name NewIP -ResourceType "IP Address" -Group "Cluster Group"

Some outsourced or managed service customers want to delegate read-only access:

Grant-ClusterAccess -User DOMAIN.com\<non-admin-user> -ReadOnly

https://docs.microsoft.com/en-us/powershell/module/failoverclusters/grant-clusteraccess?view=win10-ps

To block cluster access to specific users (even if admins) run Block-ClusterAccess https://docs.microsoft.com/en-us/powershell/module/failoverclusters/block-clusteraccess?view=win10-ps

10. SQL Server: Important Patch Level for SQL 2016

Customers running SQL Server 2016 are strongly recommended to upgrade to at least SQL Server 2016 SP1 CU6.

Multiple features are improved and several bugs are resolved. Customers using TDE, backup compression, datafiles directly on Azure blob storage, or any combination of these should upgrade to the latest available CU, but at least to SP1 CU6.

The latest CU and SP is always available here

SQL Server 2017 customers will receive the same corrections in SQL Server 2017 CU3

KB4025628 – FIX: Error 9004 when you try to restore a compressed backup from multiple files for a large TDE-encrypted database in SQL Server

Miscellaneous Topics & Interesting SAP OSS Notes

Setting up SAP applications using virtual hostnames is explained in SAP Note 1564275 – Install SAP Systems Using Virtual Host Names on Windows

Updating SAP cluster resource DLL is explained here in SAP Note 1596496 – How to update SAP Resource Type DLLs for Cluster Resource Monitor

A useful note on memory analysis 2488097 – FAQ: Memory usage for the ABAP Server on Windows

AlwaysOn alerting and monitoring is discussed here

Azure Support plans for SAP on Azure customers are https://azure.microsoft.com/en-us/support/plans/

A new format option is available for NTFS that alleviates sparse file errors during CheckDB on very large databases. The syntax for a Large FRS format is: Format <Drive:> /FS:NTFS /L (or the -UseLargeFRS switch of the PowerShell Format-Volume cmdlet). https://blogs.technet.microsoft.com/askcore/2015/03/12/the-four-stages-of-ntfs-file-growth-part-2/
https://technet.microsoft.com/en-us/library/dn466522(v=ws.11).aspx



Setting Up Hana System Replication on Azure Hana Large Instances


This blog details a recent customer deployment by the Cognizant SAP Cloud team on HANA Large Instances. From an architecture perspective of HANA Large Instances, the usage of HANA System Replication as disaster recovery functionality between two HANA Large Instance units in two different Azure regions does not work out of the box. The reason is that the network architecture applied does not support transit routing between two HANA Large Instance stamps located in different Azure regions. Instead, the HANA Large Instance architecture offers storage replication between the two regions of a geopolitical area that offers HANA Large Instances. For details, see this article.

However, there are customers who already have experience with SAP HANA System Replication and its usage as disaster recovery functionality, and who would like to continue using HANA System Replication with Azure HANA Large Instance units as well. Hence, the Cognizant team needed to solve the issue around transit routing between two HANA Large Instance units located in different Azure regions. The solution applied by Cognizant is based on Linux IPTables: a Linux Network Address Translation (NAT) based solution allows SAP Hana System Replication on Azure Hana Large Instances in different Azure datacenters. Hana Large Instances in different Azure datacenters do not have direct network connectivity, because Azure ExpressRoute and VPN do not allow transit routing.

To enable Hana System Replication (HSR) to work, a small VM is created that forwards traffic between the two Hana Large Instances (HLI).

Another benefit of this solution is that it is possible to enable access to SMT and NTP servers for patching and time synchronization.

The solution detailed below is not required for Hana systems running on native Azure VMs; it only applies to the HANA Large Instance offering, which provides bare-metal Tailored Datacenter Integration (TDI) infrastructure up to 20TB.

Be aware that the solution described below is not part of the HANA Large Instance architecture. Hence, support for configuring, deploying, administrating and operating the solution needs to be provided by the Linux vendor and by the party that deploys and operates the IPTables-based disaster recovery solution.

High Level Overview

SAP Hana Database offers three HA/DR technologies: Hana System Replication, Host Autofailover and Storage based replication

The diagram below illustrates a typical HLI scenario with a geographically dispersed disaster recovery solution. The Azure HLI solution offers storage-based DR replication as an inbuilt solution; however, some customers prefer to use HSR, which is a DBMS software-based HA/DR solution.

HSR can already be configured within the same Azure datacenter following the standard HSR documentation

HSR cannot be configured between the primary Azure datacenter and the DR Azure datacenter without an IPTables solution, because there is no network route from primary to DR. “Transit Routing” is a term that refers to network traffic that passes through two ExpressRoute connections. To enable HSR between two different datacenters, a solution such as the one illustrated below can be implemented by an SAP system integrator or an Azure IaaS consulting company.

Key components and concepts in the solution are:

1. A small VM running any distribution of Linux is placed in Azure on a VNET that has network connectivity to both HLI in both datacenters

2. The ExpressRoute circuits are cross connected from the HLI to a VNET in each datacenter

3. Standard Linux functionality IPTables is used to forward traffic from the Primary HLI -> IP Forwarding VM in Azure -> Secondary HLI (and vice versa)

4. Each customer deployment has individual differences, for example:

a. Some customers deploy two HLI in the primary datacenter, set up synchronous HSR between these two local HLI and then (optionally) configure Suse Pacemaker for faster and transparent failover. The third DR node for HSR is typically not configured with Suse Pacemaker

b. Some customers have a proxy server running in Azure and allow outbound traffic from Azure to Internet directly. Other customers force all http(s) traffic back to a Firewall & Proxy infrastructure on-premises

c. Some customers use Azure Automation/Scripts on a VM in Azure to “pull” backups from the HLI and store them in Azure blob storage. This removes the need for the HLI to use a proxy to “push” a backup into blob storage

All of the above differences change the configuration of the IPTables rules; therefore, it is not possible to provide a single configuration that works for every scenario.

https://blogs.sap.com/2017/02/01/enterprise-readiness-with-sap-hana-host-auto-failover-system-replication-storage-replication/

https://blogs.sap.com/2017/04/12/revealing-the-differences-between-hana-host-auto-failover-and-system-replication/

The diagram shows a hub and spoke network topology and Azure ExpressRoute cross connect, a generally recommended deployment model

https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke

https://docs.microsoft.com/en-us/azure/expressroute/expressroute-faqs

Diagram 1. Illustration showing no direct network route from HLI Primary to HLI Secondary. With the addition of an IP forwarding solution it is possible for the HLI to establish TCP connectivity

Note: ExpressRoute Cross Connect can be changed to regional vnet peering when this feature is GA https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview

Sample IP Table Rules Solution – Technique 1

The solution from a recent deployment by Cognizant SAP Cloud Team on Suse 12.x is shown below:

HLI Primary     10.16.0.4

IPTables VM     10.1.0.5

HLI DR         10.17.0.4

On HLI Primary

iptables -N input_ext

iptables -t nat -A OUTPUT -d 10.17.0.0/24 -j DNAT --to-destination 10.1.0.5

On IPTables

## this is HLI primary -> DR

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -t nat -A PREROUTING -s 10.16.0.0/24 -d 10.1.0.0/24 -j DNAT --to-destination 10.17.0.4

iptables -t nat -A POSTROUTING -s 10.16.0.0/24 -d 10.17.0.0/24 -j SNAT --to-source 10.1.0.5

## this is HLI DR -> Primary

iptables -t nat -A PREROUTING -s 10.17.0.0/24 -d 10.1.0.0/24 -j DNAT --to-destination 10.16.0.4

iptables -t nat -A POSTROUTING -s 10.17.0.0/24 -d 10.16.0.0/24 -j SNAT --to-source 10.1.0.5

On HLI DR

iptables -N input_ext

iptables -t nat -A OUTPUT -d 10.16.0.0/24 -j DNAT --to-destination 10.1.0.5

The configuration above is not persistent and will be lost if the Linux servers are restarted.

On all nodes, after the configuration is set up and verified:

iptables-save > /etc/iptables.local

add “iptables-restore -c /etc/iptables.local” to the /etc/init.d/boot.local

On the IPTables VM, make IP forwarding a permanent setting:

vi /etc/sysctl.conf

add net.ipv4.ip_forward = 1
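
Put together, a minimal persistence sketch for SLES 12.x (using the paths from the steps above) looks like this:

# save the verified rule set on each node
iptables-save > /etc/iptables.local

# restore the rules at boot
echo 'iptables-restore -c /etc/iptables.local' >> /etc/init.d/boot.local

# make IP forwarding permanent on the IPTables VM
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p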

The above example uses CIDR networks such as 10.16.0.0/24. It is also possible to specify an individual host such as 10.16.0.4, as shown below.
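
For illustration, the primary-to-DR rules of Technique 1 restricted to the single host pair from the example above would be:

iptables -t nat -A PREROUTING -s 10.16.0.4/32 -d 10.1.0.5/32 -j DNAT --to-destination 10.17.0.4

iptables -t nat -A POSTROUTING -s 10.16.0.4/32 -d 10.17.0.4/32 -j SNAT --to-source 10.1.0.5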

Sample IP Table Rules Solution – Technique 2

Another possible solution uses an additional IP address on the IPTables VM to forward to the target HLI IP address.

This approach has a number of advantages:

1. Inbound connections are supported, such as HANA Studio running on-premises connecting to the HLI

2. IPTables configuration is only on the IPTables VM, no configuration is required on the HLI

To implement technique 2 follow this procedure:

1. Identify the target HLI IP address. In this case the DR HLI is 10.17.0.4

2. Add an additional static IP address onto the IPTables VM. In this case 10.1.0.6 (see the Azure CLI sketch after this procedure)

Note: after performing this configuration it is not necessary to add the IP address in YaST. The IP address will not show in ifconfig

3. Enter the following commands on the IPTables VM

iptables -t nat -A PREROUTING -d 10.1.0.6 -j DNAT --to 10.17.0.4

iptables -t nat -A PREROUTING -d <<a unique new IP assigned on the IPTables VM>> -j DNAT --to <<any other HLI IP or MDC tenant IP>>

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

4. Ensure /proc/sys/net/ipv4/ip_forward = 1 on the IP Tables VM and save the configuration in the same way as Technique 1

5. Maintain DNS entries and/or the hosts file to point to the new IP address on the IPTables VM – in this case, all references to the DR HLI will use 10.1.0.6 (not the "real" IP 10.17.0.4)

6. Repeat this procedure for the reverse route (HLI DR -> HLI Primary) and for any other HLI (or MDC tenants)

7. Optionally, High Availability can be added to this solution by adding the additional static IP addresses on the IPTables VM to an Azure Internal Load Balancer (ILB)

8. To test the configuration execute the following command:

ssh 10.1.0.6 -l <username> (address 10.1.0.6 will be NATed to 10.17.0.4)

After logging on, confirm that the ssh session has connected to the HLI DR (10.17.0.4)

IMPORTANT NOTE: The SAP Application servers must use the “real” HLI IP (10.17.0.4) and should not be configured to connect via the IPTables VM.
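
The additional IP address from step 2 is assigned at the Azure fabric level, which is why no in-guest (YaST) configuration is needed. A sketch using the Azure CLI (the resource group and NIC names are placeholders):

az network nic ip-config create --resource-group <rg> --nic-name <iptables-vm-nic> --name ipconfig2 --private-ip-address 10.1.0.6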

Notes:

To purge iptables rules:

iptables -t nat -F

iptables -F

To list iptables rules:

iptables -t nat -L

iptables -L

To list or delete IP routes:

ip route list

ip route del <route>

http://www.netfilter.org/documentation/index.html

https://www.systutorials.com/816/port-forwarding-using-iptables/

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Security_Guide/s1-firewall-ipt-fwd.html

https://www.howtogeek.com/177621/the-beginners-guide-to-iptables-the-linux-firewall/

https://www.tecmint.com/linux-iptables-firewall-rules-examples-commands/

https://www.tecmint.com/basic-guide-on-iptables-linux-firewall-tips-commands/

What Size VM is Required for the IP Forwarder?

Testing has shown that network traffic during the initial synchronization of a HANA DB peaks in the range of 15-25 megabytes/sec (roughly 120-200 megabits/sec). It is therefore recommended to start with a VM that has at least 2 CPUs. Larger customers should consider a VM such as a D12v2 and enable Accelerated Networking to reduce network processing overhead.

It is recommended to monitor:

1. CPU consumption on the IP Forwarding VM during HSR sync and subsequently

2. Network utilization on the IP Forwarding VM

3. HSR delay between Primary and Secondary node(s)

The SQL statement collection script "HANA_Replication_SystemReplication_KeyFigures" displays, among other figures, the log replay backlog (REPLAY_BACKLOG_MB).

As a fallback option you can use the contents of M_SERVICE_REPLICATION to determine the log replay delay on the secondary site:

SELECT SHIPPED_LOG_POSITION, REPLAYED_LOG_POSITION FROM M_SERVICE_REPLICATION

Now you can calculate the difference and multiply it by the log position size of 64 bytes:

(SHIPPED_LOG_POSITION - REPLAYED_LOG_POSITION) * 64 = <replay_backlog_byte>
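
The backlog can also be computed in a single statement (a sketch using only the two columns referenced above):

SELECT (SHIPPED_LOG_POSITION - REPLAYED_LOG_POSITION) * 64 AS REPLAY_BACKLOG_BYTE FROM M_SERVICE_REPLICATION;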

1969700 – SQL Statement Collection for SAP HANA

This SAP Note contains a reference to the script HANA_Replication_SystemReplication_KeyFigures_1.00.120+_MDC, which is very useful for monitoring HSR key figures.

If high CPU or network utilization is observed, it is recommended to upgrade to a VM type that supports Accelerated Networking.

Required Reading, Documentation and Tips

Below are some recommendations for those setting up this solution based on test deployments:

1. It is strongly recommended that an experienced Linux engineer and an SAP Basis consultant jointly set up this solution. Test deployments have shown that considerable testing of the iptables rules is required to get HSR to connect reliably. Troubleshooting has sometimes been delayed because the Linux engineer is unfamiliar with HANA System Replication while the Basis consultant has only very limited knowledge of Linux networking

2. A simple and easy way to test whether ports are open and correctly configured is to ssh to the HLI on the specific port. For example, from the primary HLI run: ssh -p 3<sys nr.>15 <secondary hli>. If this command times out, there is likely an error in the configuration. If the configuration is correct, the ssh session should connect briefly

3. The NSG and/or Firewall for the IP forwarder VM must be opened to allow the HSR ports to/from the HLIs

4. More information: Network Configuration for SAP HANA System Replication

5. How To Perform System Replication for SAP HANA

6. If there is a failure on the IPTables VM, HANA treats this the same way as any other network disconnection or interruption. When the IPTables VM is available again (or networking is restored), there is considerable traffic while HSR replicates the queued transactions to the DR site.

7. The IPTables solution documented here is exclusively for asynchronous DR replication. We do not recommend using such a solution for the other replication modes available with HSR, such as synchronous in-memory and synchronous. Asynchronous replication across geographically dispersed locations with truly diverse infrastructure (such as power supply) always carries the possibility of some data loss, as the RPO <> 0. This statement is true for any DR solution on any DBMS, with or without an IP forwarder such as IPTables. IPTables is expected to have a negligible impact on RPO, assuming the CPU and network on the IPTables VM are not saturated https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c039a1a5b8824ecfa754b55e0caffc01.html

8. Some recommended SAP OSS Notes:

2142892 – SAP HANA System Replication sr_register hangs at “collecting information” (Important note)

An alternative to this SAP Note is to use this command on the IPTables VM: iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o eth0 -j TCPMSS --set-mss 1500

2222200 – FAQ: SAP HANA Network

2484128 – hdbnsutil hangs while registering secondary site

2382421 – Optimizing the Network Configuration on HANA- and OS-Level

2081065 – Troubleshooting SAP HANA Network

2407186 – How-To Guides & Whitepapers For SAP HANA High Availability

9. HSR enables a multi-tier HSR scenario even on small, low-cost VMs. As per SAP Note 1999880 – FAQ: SAP HANA System Replication, the buffer for the replication target has a minimum size of 64GB or row store + 20GB (whichever is higher) [no preload of data]

10. Refer to existing SAP Notes and documentation for limitations on HSR and storage based snapshot backup restrictions and MDC

11. The IPTables rules above forward entire CIDR networks. It is also possible to specify individual hosts.

Sample Network Diagrams

The following two diagrams show two possible Hub & Spoke Network topologies for connecting HLI.

The second diagram differs from the first in that the HLI is connected to a Spoke VNET. Connecting the HLI to a Spoke on a Hub & Spoke topology might be useful if IDS/IPS inspection and monitoring Virtual Appliances or other mechanisms were used to secure the hub network and traffic passing from the Hub network to on-premises

Note: In a normal deployment of a hub-and-spoke network leveraging network appliances for routing, a user-defined route is required on the ExpressRoute gateway subnet to force all traffic through the network routing appliances. If such a user-defined route has been applied, it will also apply to the traffic coming from the HLI ExpressRoute, and this may lead to significant latency between the DBMS server and the SAP application server. Ensure an additional user-defined route allows the HLIs to route directly to the application servers without passing through the network appliances.

Thanks to:

Rakesh Patil – Azure CAT Team Linux Expert for his invaluable help.

Peter Lopez – Microsoft CSA

For more information on the solution deployed:

Sivakumar Varadananjayan – Cognizant Global SAP Cloud and Technology Consulting Head https://www.linkedin.com/in/sivakumarvaradananjayan/detail/recent-activity/posts/

Content from third-party websites, SAP and other sources is reproduced in accordance with Fair Use: criticism, comment, news reporting, teaching, scholarship, and research.

Very Large Database Migration to Azure – Recommendations & Guidance to Partners


SAP systems moved onto the Azure cloud now commonly include large multinational "single global instance" systems that are many times larger than the first customer systems deployed when the Azure platform was initially certified for SAP workloads some years ago.

Very Large Databases (VLDB) are now commonly moved to Azure. Database sizes over 20TB require additional techniques and procedures to achieve a migration from on-premises to Azure within an acceptable downtime and at low risk.

The diagram below shows a VLDB migration with SQL Server as the target DBMS. It is assumed the source systems are either Oracle or DB2

A future blog will cover migration to HANA (DMO) running on Azure. Many of the concepts explained in this blog are applicable to HANA Migrations

This blog does not replace the existing SAP System Copy guide and SAP Notes which should be reviewed and followed.

High Level Overview

A fully optimized VLDB migration should achieve around 2TB per hour of migration throughput, or possibly more.

This means the data transfer component of a 20TB migration can be completed in approximately 10 hours. Various post-processing and validation steps then need to be performed.

In general with adequate time for preparation and testing almost any customer system of any size can be moved to Azure.

VLDB migrations require considerable skill, attention to detail and analysis. For example, the net impact of table splitting must be measured and analyzed: splitting a large table into more than 50 parallel exports may considerably decrease the time taken to export the table, but too many table splits may drastically increase import times. Therefore the net impact of table splitting must be calculated and tested. An experienced licensed OS/DB migration consultant will be familiar with the concepts and tools. This blog is intended as a supplement highlighting Azure-specific content for VLDB migrations.

This blog deals with heterogeneous OS/DB migration to Azure with SQL Server as the target database, using tools such as R3load and Migmon. The steps described are not intended for homogeneous system copies (a copy where the DBMS and processor architecture (endian order) stay the same). In general, homogeneous system copies have very low downtime regardless of DBMS size, because log shipping can be used to synchronize a copy of the database in Azure.

A block diagram of a typical VLDB OS/DB migration and move to Azure is illustrated below. The key points illustrated below:

1. The current source OS/DB is often AIX, HPUX, Solaris or Linux with DB2 or Oracle

2. The target OS is either Windows, Suse 12.3, Redhat 7.x or Oracle Linux 7.x

3. The target DB is usually either SQL Server or Oracle 12.2

4. Per-thread performance of IBM pSeries, Solaris SPARC and HP Superdome hardware is drastically lower than that of low-cost modern Intel commodity servers; therefore R3load is run on separate Intel servers

5. VMWare requires special tuning and configuration to achieve good, stable and predictable network performance. Typically physical servers, rather than VMs, are used as R3load servers

6. Commonly four export R3load servers are used, though there is no limit on the number of export servers. A typical configuration would be:

-Export Server #1 – dedicated to the largest 1-4 tables (depending on how skewed the data distribution is on the source database)

-Export Server #2 – dedicated to tables with table splits

-Export Server #3 – dedicated to tables with table splits

-Export Server #4 – all remaining tables

7. Export dump files are transferred from the local disks of the Intel-based R3load servers into Azure using AzCopy via the public Internet (this is typically faster than via ExpressRoute, though not in all cases)

8. Control and sequencing of the import is done via the signal file (SGN) that is automatically generated when each export package completes. This allows a semi-parallel export/import

9. Import to SQL Server or Oracle is structured similarly to the Export, leveraging four Import servers. These servers would be separate dedicated R3load servers with Accelerated Networking. It is recommended not to use the SAP application servers for this task

10. VLDB databases typically use E64v3, M64 or M128 VMs with Premium Storage. The transaction log can be placed on the local SSD disk to speed up transaction log writes and to remove the transaction log IOPS and IO bandwidth from the VM quota. After the migration, the transaction log should be placed onto a persisted disk

Source System Optimizations

The following guidance should be followed for the Source Export of VLDB systems:

1. Purge Technical Tables and Unnecessary Data – review SAP Note 2388483 – How-To: Data Management for Technical Tables

2. Separating the R3load processes from the DBMS server is an essential step to maximize export performance

3. R3load should run on fast new Intel CPU. Do not run R3load on UNIX servers as the performance is very poor. 2-socket commodity Intel servers with 128GB RAM cost little and will save days or weeks of tuning/optimization or consulting time

4. High Speed Network ideally 10Gb with minimal network hops between the source DB server and the Intel R3load servers

5. It is recommended to use physical servers for the R3load export servers – virtualized R3load servers at some customer sites did not demonstrate good performance or reliability at extremely high network throughput (note: a very experienced VMWare engineer can configure VMWare to perform well)

6. Sequence larger tables to the start of the Orderby.txt (see the sketch after this list)

7. Configure semi-parallel export/import using signal files

8. Large exports will benefit from an unsorted export of larger tables. It is important to review the net impact, as importing unsorted exports into databases that have a clustered index on the primary key will be slower

9. Configure Jumbo Frames between the source DB server and the Intel R3load servers. See the "Network Upload Optimizations" section later

10. Adjust memory settings on the source database server to optimize for sequential read/export tasks 936441 – Oracle settings for R3load based system copy
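
As a sketch, the Orderby.txt referenced in point 6 is a plain text file listing package/table names, one per line, in the order the export monitor should process them; the table names below are examples only:

CDCLS
EDI40
BALDAT
GLPCA
COEP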

Advanced Source System Optimizations

1. Oracle Row ID Table Splitting

SAP has released SAP Note 1043380, which contains a script that converts the WHERE clause in a WHR file to a ROWID value. Alternatively, recent versions of SAPInst automatically generate ROWID-split WHR files if SWPM is configured for an Oracle-to-Oracle R3load migration. The STR and WHR files generated by SWPM are independent of OS/DB (as are all aspects of the OS/DB migration process).

The OSS note contains the statement "ROWID table splitting CANNOT be used if the target database is a non-Oracle database". Technically, the R3load dump files are completely independent of database and operating system. There is one restriction, however: restart of a package during import is not possible on SQL Server. In this scenario the entire table needs to be dropped and all packages for the table restarted. It is always recommended to kill the R3load tasks for a specific split table, TRUNCATE the table and restart the entire import process if one split R3load aborts. The reason is that the recovery process built into R3load issues single row-by-row DELETE statements to remove the records loaded by the aborted R3load process. This is extremely slow and will often cause blocking/locking situations on the database. Experience has shown it is faster to start the import of this specific table from the beginning; therefore the limitation mentioned in Note 1043380 is not a limitation in practice.

ROWID splitting has the disadvantage that the calculation of the splits must be done during downtime – see SAP Note 1043380.

2. Create multiple “clones” of the source database and export in parallel

One method to increase export performance is to export from multiple copies of the same database. Provided the underlying infrastructure (server, network and storage) is scalable, this approach scales linearly: exporting from two copies of the same database will be twice as fast, and four copies four times as fast. The Migration Monitor is configured to export a selected set of tables from each "clone" of the database. In the case below, the export workload is distributed approximately 25% on each of the 4 DB servers.

-DB Server1 & Export Server #1 – dedicated to the largest 1-4 tables (depending on how skewed the data distribution is on the source database)

-DB Server2 & Export Server #2 – dedicated to tables with table splits

-DB Server3 & Export Server #3 – dedicated to tables with table splits

-DB Server4 & Export Server #4 – all remaining tables

Great care must be taken to ensure that the databases are exactly and precisely synchronized, otherwise data loss or data inconsistencies could occur. Provided the steps below are followed precisely, data integrity is ensured.

This technique is simple and cheap with standard commodity Intel hardware, but it is also possible for customers running proprietary UNIX hardware. Substantial hardware resources become free towards the middle of an OS/DB migration project, when the Sandbox, Development, QAS, Training and DR systems have already moved to Azure. There is no strict requirement that the "clone" servers have identical hardware resources; as long as there is adequate CPU, RAM, disk and network performance, each additional clone increases performance.

If additional export performance is still required open an SAP incident in BC-DB-MSS for additional steps to boost export performance (very advanced consultants only)

Steps to implement a multiple parallel export:

1. Back up the primary database and restore it onto "n" servers (where n = the number of clones). In the case illustrated, 3 clones are used, making a total of 4 DB servers

2. Restore the backup onto the 3 "clone" servers

3. Establish log shipping from the Primary source DB server to 3 target “clone” servers

4. Monitor log shipping for several days and ensure log shipping is working reliably

5. At the start of downtime shutdown all SAP application servers except the PAS. Ensure all batch processing is stopped and all RFC traffic is stopped

6. In transaction SM02 enter text “Checkpoint PAS Running”. This updates table TEMSG

7. Stop the Primary Application Server. SAP is now completely shutdown. No more write activity can occur in the source DB. Ensure that no non-SAP application is connected to the source DB (there never should be, but check for any non-SAP sessions at the DB level)

8. Run this query on the Primary DB server SELECT EMTEXT FROM <schema>.TEMSG;

9. At the native DBMS level, insert a row with the text "CHECKPOINT R3LOAD EXPORT STOP dd:mm:yy hh:mm:ss" into the EMTEXT column of <schema>.TEMSG (the exact INSERT syntax depends on the source DBMS)

10. Halt automatic transaction log backups. Manually run one final transaction log backup on the Primary DB server. Ensure the log backup is copied to the clone servers

11. Restore the final transaction log backup on all 3 nodes

12. Recover the database on the 3 “clone” nodes

13. Run the following SELECT statement on *all* 4 nodes SELECT EMTEXT FROM <schema>.TEMSG;

14. With a phone or camera, photograph the screen results of the SELECT statement for each of the 4 DB servers (the primary and the 3 clones). Be sure to include each hostname in the photo – these photographs are proof that the clone DBs and the primary are identical and contain the same data from the same point in time. Retain these photos and have the customer sign off on the DB replication status

15. Start export_monitor.bat on each Intel R3load export server

16. Start the dump file copy to Azure process (either AzCopy or Robocopy)

17. Start import_monitor.bat on the R3load Azure VMs

Diagram showing existing Production DB server log shipping to “clone” databases. Each DB server has one or more Intel R3load servers

Network Upload Optimizations

Jumbo Frames are Ethernet frames larger than the default 1500 bytes. A typical Jumbo Frame size is 9000 bytes. Increasing the frame size on the source DB server, on all intermediate network devices such as switches, and on the Intel R3load servers reduces CPU consumption and increases network throughput. The frame size must be identical on all devices, otherwise very resource-intensive conversion will occur.
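
For illustration, on a Linux R3load export server the frame size can be raised as follows (the interface name eth0 is an assumption; the identical MTU must also be configured on the DB server and every switch in the path, and persisted via the distribution's network configuration):

# temporarily set a 9000-byte MTU on eth0
ip link set dev eth0 mtu 9000

# verify the new MTU
ip link show eth0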

Additional networking features such as Receive Side Scaling (RSS) can be switched on or configured to distribute network processing across multiple processors. Running R3load servers on VMWare has proven to make network tuning for Jumbo Frames and RSS more complex and is not recommended unless a very expert skill level is available.

R3load exports data from DBMS tables and compresses this raw format independent data in dump files. These dump files need to be uploaded into Azure and imported to the Target SQL Server database.

The performance of the copy and upload to Azure of these dump files is a critical component in the overall migration process.

There are two basic approaches for upload of R3load dump files:

1. Copy from on-premises R3load export servers to Azure blob storage via the public Internet with AzCopy

On each of the R3load servers run a copy of AzCopy with this command line:

AzCopy /source:C:\ExportServer_1\Dumpfiles /dest:https://<storage_account>/ExportServer_1/Dumpfiles /destkey:xxxxxx /S /NC:xx /blobtype:page

The value for /NC: determines how many parallel sessions are used to transfer files. In general, AzCopy performs best with a larger number of smaller files and /NC values between 24-48. If a customer has a powerful server and a very fast Internet connection, this value can be increased. If the value is set too high, the connection to the R3load export server will be lost due to network saturation. Monitor the network throughput in Windows Task Manager. Copy throughput of over 1 gigabit per second per R3load export server can easily be achieved, and copy throughput can be scaled up by adding more R3load servers (4 are depicted in the diagram above).

A similar script needs to run on the R3load import servers in Azure to copy the files from blob storage onto a file system that R3load can access, for example:
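
A download on the import side could use the same AzCopy 8.x syntax in the opposite direction (storage account, container and local path are placeholders):

AzCopy /source:https://<storage_account>/ExportServer_1/Dumpfiles /dest:D:\Import1\Dumpfiles /sourcekey:xxxxxx /S /NC:24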

2. Copy from on-premises R3load export servers to an Azure VM or blob storage via a dedicated ExpressRoute connection using AzCopy, Robocopy or similar tool

Robocopy C:\Export1\Dump1 \\az_imp1\Dump1 /MIR /XF *.SGN /R:20 /V /S /Z /J /MT:8 /MON:1 /TEE /UNILOG+:C:\Export1\Robo1.Log

The block diagram below illustrates 4 Intel servers running R3load. In the background, Robocopy is started to upload dump files. When entire split tables and packages are completed, the SGN file is copied either manually or via a script. When the SGN file for a package arrives on the import R3load server, the import of that package is triggered automatically.
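
The /XF *.SGN exclusion in the Robocopy command above holds the signal files back so that an import never starts before all dump files of a package have arrived. A second job can then release completed packages by copying only the signal files (a sketch reusing the paths from the example above):

Robocopy C:\Export1\Dump1 \\az_imp1\Dump1 *.SGN /S /Z /R:20 /MON:1 /TEE /UNILOG+:C:\Export1\RoboSGN.Log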

Note: Copying files over NFS or Windows SMB protocols is not as fast or robust as mechanisms such as AzCopy. It is recommended to test the performance of both file upload techniques. It is also recommended to notify Microsoft Support about VLDB migration projects, as very high throughput network operations might be misidentified as denial-of-service attacks.

Target System Optimizations

1. Use latest possible OS with latest patches

2. Use latest possible DB with latest patches

3. Use latest possible SAP Kernel with latest patches (eg. Upgrade from 7.45 kernel to 7.49 or 7.53)

4. Consider using the largest available Azure VM. The VM type can be lowered to a smaller VM after the Import process

5. Create multiple transaction log files, with the first transaction log file on the local non-persistent SSD. Additional transaction log files can be created on P50 disks. VLDB migrations could require more than 5TB of transaction log space. It is strongly recommended to keep a large amount of transaction log space free at all times (20% is a safe figure). Extending transaction log files during an import is not recommended and will impact performance

6. SQL Server Max Degree of Parallelism should usually be set to 1. Only certain index build operations will benefit from MAXDOP and then only for specific tables

7. Accelerated Networking is mandatory for DB and R3load servers

8. It is recommended to use m128 3.8TB as the DB server and E64v3 as the R3load servers (as at March 2018)

9. Limit the maximum memory a single SQL Server query can request with Resource Governor. This is required to prevent index build operations from requesting very large memory grants (see the T-SQL sketch after this list)

10. Secondary indexes for very large tables can be removed from the STR file and built ONLINE with scripts after the main portion of the import has finished, while post-processing tasks such as configuring STMS are taking place

11. Customers using SQL Server TDE are recommended to pre-create the database and transaction log files, then enable TDE prior to starting the import. TDE runs for a similar amount of time on a database that is full of data or empty, and enabling TDE on a VLDB can lead to blocking/locking issues; it is therefore generally recommended to import into a database on which TDE is already enabled. The overhead of importing into a TDE database is relatively low

12. Review the latest OS/DB Migration FAQ
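
A hedged T-SQL sketch of the Resource Governor approach from point 9 (the login name N'r3load' and the 20% cap are assumptions to adapt to the actual import login):

-- run in master; routes sessions of the assumed r3load login into a capped group
USE master;
GO
-- workload group in the default pool with a capped per-query memory grant
CREATE WORKLOAD GROUP R3loadImport
WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 20);
GO
-- classifier function assigns incoming sessions to the workload group
CREATE FUNCTION dbo.fnR3loadClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'r3load'  -- assumed login name
        RETURN N'R3loadImport';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnR3loadClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO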

Recommended Migration Project Documents

VLDB OS/DB migrations require additional levels of technical skill and also additional documentation and procedures. The purpose of this documentation is to reduce downtime and eliminate the possibility of data loss. The minimum acceptable documentation would include the following topics:

1. Current SAP Application Name, version, patches, DB size, Top 100 tables by size, DB compression usage, current server hardware CPU, RAM and disk

2. Data Archiving/Purging activities completed and the space savings achieved

3. Details on any upgrade, Unicode conversion or support packs to be applied during the migration

4. Target SAP Application version, Support Pack Level, estimated target DB size (after compression), Top 100 tables by size, DB version and patch, OS version and patch, VM sku, VM configuration options such as disk cache, write accelerator, accelerated networking, type and quantity of disks, database file sizes and layout, DBMS configuration options such as memory, traceflags, resource governor

5. Security is typically a separate topic, but document the network security groups, firewall settings, Group Policy and DBMS encryption settings

6. HA/DR approach and technologies, in addition special steps to establish HA/DR after the initial import is finished

7. OS/DB migration design approach:

-How many Intel R3load export servers

-How many R3load import VMs

-How many R3load processes per VM

-Table splitting settings

-Package splitting settings

-export and import monitor settings

-list of secondary indexes to be removed from STR files and created manually

-list of pre-export tasks such as clearing updates

8. Analysis of the last export/import cycle. Which settings were changed? What was the impact on the "flight plan"? Is the configuration change accepted or rejected? Which tuning & configuration is planned for the next test cycle?

9. Recovery procedures and exception handling – procedures for rollback, and how to handle exceptions/issues that occurred during previous test cycles

It is typically the responsibility of the lead OS/DB migration consultant to prepare this documentation. Sometimes topics such as Security, HA/DR and networking are handled by other consultants. The quality of such documentation has proven to be a very good indicator of the skill level and capability of the project team and the risk level of the project to the customer.

Migration Monitoring

One of the most important components of a VLDB migration is the monitoring, logging and diagnostics that is configured during Development, Test and “dry run” migrations.

Customers are strongly advised to discuss the implementation and usage of the steps in this section of the blog with their OS/DB migration consultant. Not doing so exposes a customer to significant risk.

Deployment of the required monitoring and interpretation of the monitoring and diagnostic results after each test cycle is mandatory and essential for optimizing the migration and planning production cutover. The results gained in test migrations are also necessary to be able to judge whether the actual production migration is following the same patterns and time lines as the test migrations. Customers should request regular project review checkpoints with the SAP partner.  Contact Microsoft for a list of consultants that have demonstrated the technical and organizational skills required for a successful project.

Without comprehensive monitoring and logging it would be almost impossible to achieve safe, repeatable, consistent and low downtime migrations with a guarantee of no data loss. If problems such as long runtimes of some packages were to occur, it is almost impossible for Microsoft and/or SAP to assist with spot consulting without monitoring data and migration design documentation

During the runtime of an OS/DB migration:

OS level parameters on DB and R3load hosts: CPU per thread, Kernel time per thread, Free Memory (GB), Page in/sec, Page out/sec, Disk IO reads/sec, Disk IO write/sec, Disk read KB/sec, Disk write KB/sec

DB level parameters on SQL Server target: BCP rows/sec, BCP KB/sec, Transaction Log %, Memory Grants, Memory Grants pending, Locks, Lock memory, locking/blocking

Network monitoring is normally handled by the network team. The exact configuration of network monitoring depends on the customer-specific situation.

During the runtime of the DB import it is recommended to execute this SQL statement every few minutes and take a screenshot of anything abnormal (such as high wait times):

select session_id, request_id, start_time, status, command, wait_type, wait_resource, wait_time, last_wait_type, blocking_session_id
from sys.dm_exec_requests
where session_id > 49
order by wait_time desc;

During all migration test cycles a “Flight Plan” showing the number of packages exported and imported (y-axis) should be plotted against time (x-axis). The purpose of this graph is to establish an expected rate of progress during the final production migration cutover. Deviation (either positive or negative) from the expected “Flight Plan” during test or the final production migration is easily detected using this method. Other parameters such as CPU, disk and R3load rows/sec can be overlaid on top of the “Flight Plan”

At the conclusion of the Export and Import the migration time reports must be collected (export_time.html and import_time.html) https://blogs.sap.com/2016/11/17/time-analyzer-reports-for-osdb-migrations/

VLDB Migration Do's & Don'ts

The guidelines contained in this blog are based on real customer projects and the learnings derived from these projects. This blog instructs customers to avoid certain scenarios because these have been unsuccessful in the past. An example is the recommendation not to use UNIX servers or virtualized servers as R3load export servers:

1. Very often the export performance is a gating factor on the overall downtime. Often the current hardware is more than 4-5 years old and is prohibitively expensive to upgrade

2. It is therefore important to get the maximum export performance that is practical to achieve

3. Previous projects have spent man-weeks or even man-months trying to tune R3load export performance on UNIX or virtualized platforms, before giving up and using Intel R3load servers

4. 2-socket commodity Intel servers are very inexpensive and immediately deliver substantial performance gains, in some cases many orders of magnitude greater than minor tuning improvements possible on UNIX or virtualized servers

5. Customers often have existing VM farms but most often these do not support modern offload or SRIOv technologies. Often the VMWare version is old, unpatched or not configured for very high network throughput and low latency. R3load export servers require very fast thread performance and extremely high network throughput. R3load export servers may run for 10-15 hours at nearly 100% CPU and network utilization. This is not the typical use case of most VMWare farms and most VMWare deployments were never designed to handle a workload such as R3load.

RECOMMENDATION: Do not invest time in optimizing R3load export performance on UNIX or virtualized platforms. Doing so wastes time and costs much more than buying low-cost Intel servers at the start of the project. VLDB migration customers are therefore requested to ensure the project team has fast, modern R3load export servers available at the start of the project. This lowers the total cost and risk of the project.

Do:

1. Survey and Inventory the current SAP landscape. Identify the SAP Support Pack levels and determine if patching is required to support the target DBMS. In general the Operating Systems Compatibility is determined by the SAP Kernel and the DBMS Compatibility is determined by the SAP_BASIS patch level.

Build a list of SAP OSS Notes that need to be applied in the source system such as updates for SMIGR_CREATE_DDL. Consider upgrading the SAP Kernels in the source systems to avoid a large change during the migration to Azure (eg. If a system is running an old 7.41 kernel, update to the latest 7.45 on the source system to avoid a large change during the migration)

2. Develop the High Availability and Disaster Recovery solution. Build a PowerPoint that details the HA/DR concept. The diagram should break up the solution into the DB layer, ASCS layer and SAP application server layer. Separate solutions might be required for standalone solutions such as TREX or Livecache

3. Develop a Sizing & Configuration document that details the Azure VM types and storage configuration. How many Premium Disks, how many datafiles, how are datafiles distributed across disks, usage of storage spaces, NTFS Format size = 64kb. Also document Backup/Restore and DBMS configuration such as memory settings, Max Degree of Parallelism and traceflags

4. Network design document including VNet, Subnet, NSG and UDR configuration

5. Security and hardening concept. Remove Internet Explorer, create an Active Directory container for SAP service accounts and servers, and apply a firewall policy blocking all but a limited number of required ports

6. Create an OS/DB Migration Design document detailing the Package & Table splitting concept, number of R3loads, SQL Server traceflags, Sorted/Unsorted, Oracle RowID setting, SMIGR_CREATE_DDL settings, Perfmon counters (such as BCP Rows/sec & BCP throughput kb/sec, CPU, memory), RSS settings, Accelerated Networking settings, Log File configuration, BPE settings, TDE configuration

7. Create a "Flight Plan" graph showing the progress of the R3load export/import in each test cycle. This allows the migration consultant to validate whether tunings and changes improve R3load export or import performance. X axis = time in hours; Y axis = number of packages completed (matching the "Flight Plan" described above). This flight plan is also critical during the production migration so that planned progress can be compared against actual progress and any problem identified early.

8. Create a performance testing plan. Identify the top ~20 online reports, batch jobs and interfaces. Document the input parameters (such as date range, sales office, plant, company code etc.) and runtimes on the original source system, and compare them with the runtimes on Azure. If there are performance differences, run SAT, ST05 and other SAP tools to identify inefficient statements

9. SAP BW on SQL Server. Check this blogsite regularly for new features for BW systems including Column Store

10. Audit deployment and configuration, ensure cluster timeouts, kernels, network settings, NTFS format size are all consistent with the design documents. Set perfmon counters on important servers to record basic health parameters every 90 seconds. Audit that the SAP Servers are in a separate AD Container and that the container has a Policy applied to it with Firewall configuration.

11. Do check that the lead OS/DB migration consultant is licensed! Request the consultant name, s-user and certification date. Open an OSS message to BC-INS-MIG and ask SAP to confirm the consultant is current and licensed.

12. If possible, have the entire project team associated with the VLDB migration project within one physical location and not geographically dispersed across several continents and time zones.

13. Make sure that a proper fallback plan is in place and that it is part of the overall schedule.

14. Do select Intel CPU models with high per-thread performance for the R3load export servers. Do not use "Energy Saver" CPU models, as they have much lower performance, and do not use 4-socket servers. The Intel Xeon Platinum 8158 is a good example

Do not:

1. VLDB OS/DB migration requires an advanced technical skillset and very strong process, change control & documentation. Do not do “on the job training” with VLDB migrations

2. Do not subcontract one consulting organization to do the Export and subcontract another consulting organization to do the Import. Occasionally the Source system is outsourced and managed by one consulting organization or partner and a customer wishes to migrate to Azure and switch to another partner. Due to the tight coupling between Export and Import tuning and configuration it is very unlikely assigning these tasks to different organizations will produce a good result

3. Do not economize on Azure hardware resources during the migration and go live. Azure VMs are charged per minute and can be reduced in size very easily. During a VLDB migration leverage the most powerful VM available. Customers have successfully gone live on 200-250% oversized systems, then stabilized while running significantly oversized systems. After monitoring utilization for 4-6 weeks, VMs are reduced in size or shutdown to lower costs

Required Reading, Documentation and Tips

Below are some recommendations for those setting up this solution based on test deployments:

Check the SAP on Microsoft Azure blog regularly https://blogs.msdn.microsoft.com/saponsqlserver/

Read the latest SAP OS/DB Migration FAQ https://blogs.msdn.microsoft.com/saponsqlserver/tag/migration/

A useful blog on DMO is here https://blogs.sap.com/2017/10/05/your-sap-on-azure-part-2-dmo-with-system-move/

Information on DMO https://blogs.sap.com/2013/11/29/database-migration-option-dmo-of-sum-introduction/

Content from third-party websites, SAP and other sources is reproduced in accordance with Fair Use: criticism, comment, news reporting, teaching, scholarship, and research.

Columnstore became default in SAP BW


Overview

The following features have been optionally available in SAP BW on Microsoft SQL Server for several years: the Columnstore and the Flat Cube.

The impact of these features is described in https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server. Customer experience has shown that using these features almost always results in BW query performance improvements. Therefore, these features are turned on by default after applying SAP Note 2582158 – Make Columnstore and Flat Cube default.

The Columnstore became the default only on SQL Server 2016 and newer.

Creating a New Cube

A new cube will be automatically created as a Columnstore Cube, except in the following cases:

  • The new behavior has been explicitly turned off by setting the following RSADMIN parameter
    • MSS_DEFAULT_CS = FALSE
  • APO Cubes
    Cubes of the SAP application Advanced Planning and Optimization (APO) never use the Columnstore
  • Realtime Cubes
    SAP Realtime Cubes only use the Columnstore on the e-fact table, unless you have set the RSADMIN parameter
    • MSS_REALTIME_FFACT_CS = X

    For details, see SAP Note 2371454 – Columnstore and Realtime Cubes

Transporting a Cube

A cube is automatically converted to Columnstore on the target system if it is empty there.

Converting all Cubes to Columnstore

There are 3 different ways of converting all cubes to Columnstore. On the second tab of report MSSCSTORE, you can define a global setting for all SAP BW cubes. Be aware that this is a global setting for all (existing and future) cubes, not just a default for new cubes.
When choosing "Always Column Store (CS)" and pressing F8, all cubes are defined in SAP BW as Columnstore Cubes. However, the Columnstore indexes are not yet created on the database. Report MSSCSTORE should finish within a few minutes, since only the cube definition is changed.

  • Converting via process chain
    All indexes of a BW cube are created when running a BW process chain of type "Create Index" or when executing "Repair DB indexes" in SAP transaction RSA1. This allows a gradual conversion of the cube indexes once you have changed the Columnstore definition in report MSSCSTORE as described above.
  • Immediate conversion
    You might additionally select "Repair indexes of all cubes" in report MSSCSTORE. The Columnstore indexes are then created on the database immediately. In this case, the runtime of report MSSCSTORE can easily reach a few hours. Therefore, you should run report MSSCSTORE as a batch job (by pressing F9). Make sure that the SQL Server transaction log is large enough and that log backups are performed regularly.
  • Conversion during R3load System Copy
    You can automatically convert all cubes to Columnstore during an R3load-based system copy. To do so, choose "SQL Server 2016 (all column-store)" as the database version in report SMIGR_CREATE_DDL.

Conclusion

For SAP BW releases below 7.40 SP8 you should convert all BW cubes to Columnstore. For newer SAP BW releases you should consider applying the Flat Cube. A Flat Cube always uses the Columnstore.

Flat Cube became default in SAP BW


Overview

The following features have been optionally available in SAP BW on Microsoft SQL Server for several years: the Columnstore and the Flat Cube.

The impact of these features is described in https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server. Customer experience has shown that using these features almost always results in BW query performance improvements. Therefore, these features are turned on by default after applying SAP Note 2582158 – Make Columnstore and Flat Cube default.

The Flat Cube became the default only on SQL Server 2016 and newer.

Creating a New Cube

A new cube will be created as a Flat Cube if you mark the checkbox “Flat InfoCube”.

The default value of this checkbox has been changed. It is now turned on. You can revert to the old behavior by setting the following RSADMIN parameter (In this case, only the default setting of the checkbox changes):

  • MSS_DEFAULT_FLAT = FALSE

A Flat Cube always has a Columnstore index. You can choose the Flat option in combination with the Realtime option. Keep in mind that you cannot create aggregates on a Flat Cube.

Transporting a Cube

The flat property of a BW cube can be transported from an SAP source system to an SAP target system once you have applied SAP Note 2550929 – Inconsistent metadata in case of transport of flat cubes non HANA landscape. A Flat Cube in the source system will be created as a Flat Cube in the target system; a non-flat cube will be created as non-flat. However, the flat property is not changed when transporting a cube if the cube already exists in the target system and is not empty (that is, the fact tables contain data).

Converting all Cubes to Flat Cube

The procedure of converting a cube to a Flat Cube is described in https://blogs.msdn.microsoft.com/saponsqlserver/2018/01/03/improve-sap-bw-performance-by-applying-the-flat-cube:

  • Use SAP report RSDU_REPART_UI for converting a single cube
  • Use SAP report RSDU_IC_STARFLAT_MASSCONV for converting many or all cubes

Keep in mind that an automatic conversion to Flat Cube during an R3load-based system copy or database migration is not possible. You have to convert the cubes on the target system after the database migration.

Conclusion

Since the conversion to Flat Cube can be very time-consuming, you often do not want to perform it on all your cubes. You may want to start using the Flat Cube for your most important cubes. For all non-flat cubes you should at least apply the Columnstore (which is a fast and simple operation). A Flat Cube always uses the Columnstore.

SAP on Azure: General Update – June 2018

$
0
0

SAP and Microsoft are continuously adding new features and functionalities to the Azure platform. The key objective of the Azure cloud platform is to deliver the best performance and availability at the lowest TCO and simplest operation. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. M-Series is Certified for SAP Hana – S4, BW4, BWoH and SoH up to 3.8TB RAM

SAP HANA customers can run S/4HANA, BW/4HANA, BW on HANA and Business Suite on HANA in production in many of the Azure datacenters in the Americas, Europe and Asia. https://azure.microsoft.com/en-us/global-infrastructure/regions/ More information in this blog: https://azure.microsoft.com/en-us/blog/azure-m-series-vms-are-now-sap-hana-certified/

Requirements: Write Accelerator must be used for the Transaction Log disk only. Suse 12.3 or RHEL 7.3 or higher.

The SAP IaaS catalogue now includes M-series and Hana Large Instances

More information on the Write Accelerator can be found here:

https://azure.microsoft.com/en-us/blog/write-accelerator-for-m-series-virtual-machines-now-generally-available/

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/how-to-enable-write-accelerator

The central Note for SAP Hana on Azure VMs is Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types https://launchpad.support.sap.com/#/notes/0001928533

The Note for Hana Large Instances for memory up to 20TB scale up is Note 2316233 – SAP HANA on Microsoft Azure (Large Instances) https://launchpad.support.sap.com/#/notes/2316233

Summary of M-Series VMs for SAP NetWeaver and SAP Hana

M-Series running SAP Hana

1. Transaction Log disk(s) must have Azure Write Accelerator Enabled https://docs.microsoft.com/en-us/azure/virtual-machines/linux/how-to-enable-write-accelerator

2. Azure Write Accelerator must not be activated on disks holding DBMS datafiles, temp files etc

3. Azure Accelerated Networking should always be enabled on M-Series VMs running Hana

4. The precise OS releases that are supported for Hana can be found in the SAP Hana IaaS Directory https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure

5. Where there is any discrepancy between any SAP OSS Note such as 1928533 or other source the SAP Hana IaaS Directory takes precedence

M-Series running AnyDB (SQL, Oracle, Sybase etc)

1. Windows Server 2012/2016, Suse 12.3, RHEL 7.x and Oracle Linux are all supported

2. Transaction Log disk(s) could have Azure Write Accelerator Enabled https://docs.microsoft.com/en-us/azure/virtual-machines/linux/how-to-enable-write-accelerator

3. Azure Write Accelerator must not be activated on Data disks

4. Azure Accelerated Networking should always be enabled on M-Series VMs running AnyDB

5. If running Oracle Linux, the RHEL-compatible kernel must be used (as at June 2018) instead of the Oracle UEK4 kernel. Oracle UEK5 will support Accelerated Networking with Oracle Linux 7.5

Additional Small Certified M-Series VMs

Small M-Series VMs are certified:

1. M64ls with 64vCPU and 512GB

2. M32ls with 32vCPU and 256GB

3. M32ts with 32vCPU and 192GB

Disk configuration and additional information on these new smaller M-Series VMs can be found here https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations

2. SAP NetWeaver on Windows Hyper-V 2016 Fully Supported

Windows Server 2016 Hyper-V is now fully supported as a Hypervisor for SAP applications running on Windows.

Hyper-V 2016 is a powerful component for customers wishing to deploy a hybrid cloud environment with some components on-premises and some components on Azure.

A special fix for Windows Server 2016 is required before using Hyper-V 2016. Apply the latest update for "Windows 10 1607/Windows Server 2016", but at least this patch level: https://support.microsoft.com/en-us/help/4093120/windows-10-update-kb4093120

https://wiki.scn.sap.com/wiki/display/VIRTUALIZATION/SAP+on+Microsoft+Hyper-V

1409604 – Virtualization on Windows: Enhanced monitoring https://launchpad.support.sap.com/#/notes/0001409604

1409608 – Virtualization on Windows https://launchpad.support.sap.com/#/notes/0001409608

More information on Windows 2016 for SAP is here:

https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/07/windows-2016-is-now-generally-available-for-sap/

https://blogs.sap.com/2017/05/06/performance-tuning-guidelines-for-windows-server-2016-hyper-v/

3. Build High Availability Capability within Azure Regions with Availability Zones

Availability Zones are Generally Available in many Azure Regions and are being deployed to most Azure regions shortly.

Availability Zones are physically separated datacenters with independent network and power infrastructure

VMs running in an Availability Zone achieve an SLA of 99.99% https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_8/

A very good overview of Availability Zones on Azure and the interaction with other Azure components is detailed in this blog

https://blogs.msdn.microsoft.com/igorpag/2018/05/03/azure-availability-zones-quick-tour-and-guide/

More information below

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-availability-zones

https://blogs.msdn.microsoft.com/igorpag/2017/10/08/why-azure-availability-zones/

https://azure.microsoft.com/en-us/global-infrastructure/availability-zones/

A typical topology is depicted below.


The Azure Standard Internal Load Balancer is used for workloads that are distributed across Availability Zones. Even when deploying VMs into an Azure region that does not yet have Availability Zones, it is recommended to use the Standard Internal Load Balancer in zone-redundant mode. This allows the deployment to take advantage of Availability Zones when they become available in that region without reconfiguring the load balancer.

To view the VM types that are available in each Availability Zone in a Datacenter run this PowerShell command

Get-AzureRmComputeResourceSku | where {$_.Locations.Contains("southeastasia") -and $_.ResourceType.Equals("virtualMachines") -and $_.LocationInfo[0].Zones -ne $null }

Similar information can be seen in the Azure Portal when creating a VM

Customers building High Availability solutions with the Suse 12.x operating system can review documentation on how to deploy single-SID and multi-SID Suse Pacemaker clusters.

The Microsoft documentation discusses the "Microsoft SAP Fencing Agent + single iSCSI" STONITH configuration.

An alternative deployment scenario is "two iSCSI devices in different Availability Zones".

A Suse bug fix may be required to configure two iSCSI devices:
https://www.suse.com/support/kb/doc/?id=7022477

https://ptf.suse.com/f2cf38b50ed714a8409693060195b235/sles12-sp3-hae/14410/x86_64/20171219  (a user id is needed)

A recommended deployment configuration is to place each iSCSI source in a different Availability Zone.

4. Sybase ASE 16.3 PL3 “Always-on” on Azure – 2 Node HA + 3rd Async Node for DR

A new blog with step-by-step instructions on how to install and configure a 2 node HA Sybase cluster with a third node for DR has been released.

https://blogs.msdn.microsoft.com/saponsqlserver/2018/05/18/installation-procedure-for-sybase-16-3-patch-level-3-always-on-dr-on-suse-12-3-recent-customer-proof-of-concept

5. Very Useful Links for SAP on Azure Consultants

The listing below is a comprehensive collection of links that has proved very useful for many consultants working at System Integrators.

SAP on Azure Reference Architectures

SAP S/4HANA for Linux Virtual Machines on Azure https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/sap-s4hana

Run SAP HANA on Azure Large Instances https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/hana-large-instances

Deploy SAP NetWeaver (Windows) for AnyDB on Azure Virtual Machines https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/sap-netweaver

High Availability SAP Netweaver Any DB

High-availability architecture and scenarios for SAP NetWeaver

Azure Virtual Machines high availability architecture and scenarios for SAP NetWeaver https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse

Azure infrastructure preparation for SAP NetWeaver high-availability deployment

Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and shared disk for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-shared-disk

Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-file-share

Prepare Azure infrastructure for SAP high availability by using a SUSE Linux Enterprise Server cluster framework for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse#setting-up-a-highly-available-nfs-server

Installation of an SAP NetWeaver high availability system in Azure

Install SAP NetWeaver high availability by using a Windows failover cluster and shared disk for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-installation-wsfc-shared-disk

Install SAP NetWeaver high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-installation-wsfc-file-share

Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server cluster framework for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse#prepare-for-sap-netweaver-installation

High Availability SAP Hana

HANA Large Instance

SAP HANA Large Instances high availability and disaster recovery on Azure https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery

High availability setup in SUSE using STONITH https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/ha-setup-with-stonith

SAP HANA high availability for Azure virtual machines https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-overview

SAP HANA availability within one Azure region https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-one-region

SAP HANA availability across Azure regions https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-across-regions

Disaster Recovery

Protect a multi-tier SAP NetWeaver application deployment by using Site Recovery https://docs.microsoft.com/en-gb/azure/site-recovery/site-recovery-sap

SAP HANA Large Instances high availability and disaster recovery on Azure https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery

Setting Up Hana System Replication on Azure Hana Large Instances https://blogs.msdn.microsoft.com/saponsqlserver/2018/02/10/setting-up-hana-system-replication-on-azure-hana-large-instances/

Monitoring

New Azure PowerShell cmdlets for Azure Enhanced Monitoring (see the sketch after this list) https://blogs.msdn.microsoft.com/saponsqlserver/2016/05/16/new-azure-powershell-cmdlets-for-azure-enhanced-monitoring/

The Azure Monitoring Extension for SAP on Windows – Possible Error Codes and Their Solutions https://blogs.msdn.microsoft.com/saponsqlserver/2016/01/29/the-azure-monitoring-extension-for-sap-on-windows-possible-error-codes-and-their-solutions/

Azure Extended monitoring for SAP https://blogs.msdn.microsoft.com/saponsqlserver/2014/06/24/azure-extended-monitoring-for-sap/

Operations Management Suite (OMS) https://docs.microsoft.com/en-us/azure/operations-management-suite/

Azure Monitor https://azure.microsoft.com/en-us/services/monitor/

Azure Network Watcher https://azure.microsoft.com/en-us/services/network-watcher/
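
For the Azure Enhanced Monitoring cmdlets linked above, a minimal PowerShell sketch is shown below. The resource group and VM names are placeholders, and the cmdlets are used as documented for the AzureRM module of that era:

```powershell
# Minimal sketch: configure and verify the Azure Enhanced Monitoring (AEM)
# extension for SAP on a VM. Resource group and VM names are placeholders.
Login-AzureRmAccount

# Configure the AEM extension on the VM
Set-AzureRmVMAEMExtension -ResourceGroupName "rg-sap-prod" -VMName "sapapp01"

# Verify that the extension is configured and delivering data correctly
Test-AzureRmVMAEMExtension -ResourceGroupName "rg-sap-prod" -VMName "sapapp01"
```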

Automation

Azure Automation https://azure.microsoft.com/en-us/services/automation/

Automate the deployment of SAP HANA on Azure https://github.com/AzureCAT-GSI/SAP-HANA-ARM

Migration from on-premises DC to Azure

Transfer data with AzCopy (see the sketch after this list) https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy

Azure Import/Export service https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service

Very Large Database Migration to Azure https://blogs.msdn.microsoft.com/saponsqlserver/2018/04/10/very-large-database-migration-to-azure-recommendations-guidance-to-partners/

SAP on Azure – DMO with System Move https://blogs.sap.com/2017/10/05/your-sap-on-azure-part-2-dmo-with-system-move/
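
As a hedged illustration of the AzCopy link above, the sketch below recursively uploads a local backup folder to a blob container using AzCopy v8 for Windows syntax. The storage account, container, key, and local path are all placeholders:

```powershell
# Minimal sketch: recursively upload a local backup folder to an Azure blob
# container with AzCopy v8 for Windows. Account, container, key, and path
# are placeholders.
$destKey = "paste-storage-account-key-here"
AzCopy /Source:D:\sapbackup `
       /Dest:https://mystorageaccount.blob.core.windows.net/backups `
       /DestKey:$destKey /S
```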

SAP on Azure certification

SAP Certified IaaS Platforms https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure

SAP Note #1928533 – SAP Applications on Azure: Supported Products and Azure VM types  https://launchpad.support.sap.com/#/notes/1928533

SAP certifications and configurations running on Microsoft Azure https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-certifications

Azure M-series VMs are now SAP HANA certified https://azure.microsoft.com/en-us/blog/azure-m-series-vms-are-now-sap-hana-certified/

Backup Solutions

Azure VM backup for OS https://azure.microsoft.com/en-gb/services/backup/

HANA VM Backup – overview https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-guide

HANA VM backup to file https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-file-level

HANA VM backup based on storage snapshots https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-backup-storage-snapshots

HANA Large Instance (HLI) Backup – overview https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery#backup-and-restore

HLI backup based on storage snapshots https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery#using-storage-snapshots-of-sap-hana-on-azure-large-instances

Use third-party backup tools: Commvault, Veritas, etc.

All the major third-party backup tools are supported in Azure and have agents for SAP HANA, SQL Server, Oracle, Sybase, etc.

Commvault

Azure: https://documentation.commvault.com/commvault/v11/article?p=31252.htm

SAP HANA: https://documentation.commvault.com/commvault/v11/article?p=22305.htm

Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/commvault.commvault?tab=Overview

Veritas NetBackup

Azure: https://www.veritas.com/support/en_US/article.100041400

HANA: https://www.veritas.com/content/support/en_US/doc/16226696-127422304-0/v88504823-127422304

Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/veritas.veritas-netbackup-8-s?tab=Overview

Security

Network

Logically segment subnets https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#logically-segment-subnets

Control routing behavior https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#control-routing-behavior

Enable Forced Tunneling https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#enable-forced-tunneling

Use virtual network appliances https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#use-virtual-network-appliances

Deploy DMZs for security zoning https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#deploy-dmzs-for-security-zoning

Avoid exposure to the Internet with dedicated WAN links https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#avoid-exposure-to-the-internet-with-dedicated-wan-links

Optimize uptime and performance https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#optimize-uptime-and-performance

HTTP-based Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#http-based-load-balancing

External Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#external-load-balancing

Internal Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#internal-load-balancing

Use global load balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#use-global-load-balancing

Disable RDP/SSH Access to Azure Virtual Machines https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#disable-rdpssh-access-to-azure-virtual-machines

Enable Azure Security Center https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#enable-azure-security-center

Securely extend your datacenter into Azure https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#securely-extend-your-datacenter-into-azure

Operational

Monitor, manage, and protect cloud infrastructure https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#monitor-manage-and-protect-cloud-infrastructure

Manage identity and implement single sign-on https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#manage-identity-and-implement-single-sign-on

Trace requests, analyze usage trends, and diagnose issues https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#trace-requests-analyze-usage-trends-and-diagnose-issues

Monitoring services https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#monitoring-services

Prevent, detect, and respond to threats https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#prevent-detect-and-respond-to-threats

End-to-end scenario-based network monitoring https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#end-to-end-scenario-based-network-monitoring

Azure Security Center https://azure.microsoft.com/en-us/blog/protect-virtual-machines-across-different-subscriptions-with-azure-security-center/

https://azure.microsoft.com/en-us/blog/how-azure-security-center-helps-detect-attacks-against-your-linux-machines/

New VM Type for single tenant isolated VM https://azure.microsoft.com/en-us/blog/new-isolated-vm-sizes-now-available/

Azure Active Directory

Azure Active Directory integration with SAP HANA https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-saphana-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Cloud Platform Identity Authentication https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sap-hana-cloud-platform-identity-authentication-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Business ByDesign https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sapbusinessbydesign-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Cloud for Customer for SSO functionality https://blogs.sap.com/2017/08/02/azure-active-directory-integration-with-sap-cloud-for-customer-for-sso-functionality/

S/4HANA environment – Fiori Launchpad SAML Single Sign-On with Azure AD https://blogs.sap.com/2017/02/20/your-s4hana-environment-part-7-fiori-launchpad-saml-single-sing-on-with-azure-ad/

Very good rollup article on Azure Networking https://blogs.msdn.microsoft.com/igorpag/2017/04/06/my-personal-azure-faq-on-azure-networking-v3/

Special thanks to Ravi Alwani for collating these links.

6. New Microsoft Features for SAP Customers

Microsoft has released many new features for SAP customers:

Azure Site Recovery Azure-to-Azure – support for SUSE 12.x has been released! https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix

Global vNet Peering – previously it was not possible to peer vNets located in different Azure regions. Global vNet Peering is now Generally Available in some regions and is being rolled out globally. One of its biggest advantages is that network traffic between peered vNets is carried on the Azure network backbone.

https://blogs.msdn.microsoft.com/wushuai/2018/02/04/provide-cross-region-low-latency-service-based-on-azure-vnet-peering/

https://azure.microsoft.com/en-us/blog/global-vnet-peering-now-generally-available/
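
As a minimal AzureRM PowerShell sketch of the feature (resource group and vNet names are placeholders), Global vNet Peering is configured like regular peering, with one link created in each direction:

```powershell
# Minimal sketch: peer two vNets located in different Azure regions
# (Global vNet Peering). Resource group and vNet names are placeholders.
$vnetWest  = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-westeurope"  -Name "vnet-we"
$vnetNorth = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-northeurope" -Name "vnet-ne"

# Peerings are directional, so create one link in each direction
Add-AzureRmVirtualNetworkPeering -Name "we-to-ne" -VirtualNetwork $vnetWest  -RemoteVirtualNetworkId $vnetNorth.Id
Add-AzureRmVirtualNetworkPeering -Name "ne-to-we" -VirtualNetwork $vnetNorth -RemoteVirtualNetworkId $vnetWest.Id
```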

The new Standard internal load balancer (ILB) is Availability Zone aware and offers better performance than the Basic ILB.

https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#load-balancer

https://github.com/yinghli/azure-vm-network-performance (scroll down to review performance)
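
A hedged sketch of creating a Standard-SKU internal load balancer frontend with AzureRM PowerShell follows. All resource names, the subnet, and the IP address are placeholders; a real SAP HA deployment would also add health probes and load-balancing rules:

```powershell
# Minimal sketch: create a Standard-SKU internal load balancer with a static
# frontend IP in an existing subnet. All names and addresses are placeholders.
$vnet   = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-sap" -Name "vnet-sap"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "subnet-app" -VirtualNetwork $vnet

$frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "fe-ascs" `
    -SubnetId $subnet.Id -PrivateIpAddress "10.0.1.10"
$backend  = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "be-ascs"

New-AzureRmLoadBalancer -ResourceGroupName "rg-sap" -Name "ilb-ascs" `
    -Location "westeurope" -Sku "Standard" `
    -FrontendIpConfiguration $frontend -BackendAddressPool $backend
```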

SQL Server 2016 Service Pack 2 (SP2) released https://blogs.msdn.microsoft.com/sqlreleaseservices/sql-server-2016-service-pack-2-sp2-released/

It is recommended that Linux customers set up the Azure Serial Console. This allows access to a Linux VM even when the network stack is not working, and is the equivalent of an RS-232/COM port cable connection https://docs.microsoft.com/en-us/azure/virtual-machines/linux/serial-console

Azure Storage Explorer provides easier management of blob objects such as backups on Azure blob storage https://azure.microsoft.com/en-us/features/storage-explorer/

Azure now offers a Trusted Execution Environment leveraging Intel Xeon processors with Intel SGX technology. This has not yet been tested with SAP, but may be validated in the future https://azure.microsoft.com/en-us/blog/azure-confidential-computing/

More information on new networking features can be found here https://azure.microsoft.com/en-us/blog/azure-networking-may-2018-announcements/

https://azure.microsoft.com/en-us/blog/monitor-microsoft-peering-in-expressroute-with-network-performance-monitor-public-preview/

7. New SAP Features

SAP has released a new version of the Software Provisioning Manager (SWPM). It is recommended to use this version for all new installations. The tool can be downloaded from https://support.sap.com/en/tools/software-logistics-tools.html

1680045 – Release Note for Software Provisioning Manager 1.0 (recommended: SWPM 1.0 SP 23) https://launchpad.support.sap.com/#/notes/0001680045

Customers interested in automating SWPM can review 2230669 – System Provisioning Using a Parameter Input File https://launchpad.support.sap.com/#/notes/2230669
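
As a hedged example of the unattended approach that note describes, SWPM is started with a parameter input file on the sapinst command line. The product ID and file paths below are placeholders; the exact parameters supported depend on the SWPM version, so check the note:

```powershell
# Minimal sketch: start SWPM (sapinst) unattended with a parameter input file.
# Product ID and file paths are placeholders - see SAP note 2230669.
D:\SWPM\sapinst.exe `
    SAPINST_INPUT_PARAMETERS_URL=C:\install\inifile.params `
    SAPINST_EXECUTE_PRODUCT_ID=NW_ABAP_OneHost:NW750.MSS.ABAP `
    SAPINST_SKIP_DIALOGS=true
```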

SAP has released new Downward-Compatible Kernels (DCKs) and provides guidance to switch to the new 7.53 kernel for all new installations:

SAP recommends using the latest SP stack kernel (SAPEXE.SAR and SAPEXEDB.SAR), available in the Support Packages & Patches section of the SAP Support Portal https://launchpad.support.sap.com/#/softwarecenter.

For existing installations of NetWeaver 7.40, 7.50, and AS ABAP 7.51, this is SAP Kernel 749 PL 500. For details, see release note 2626990.
For new installations of NetWeaver 7.40, 7.50, and AS ABAP 7.51, this is SAP Kernel 753 PL 100. For details, see DCK note 2556153 and release note 2608318.
For AS ABAP 7.52, this is SAP Kernel 753 PL 100. For details, see release note 2608318.

2083594 – SAP Kernel 740, 741, 742, 745, 749 and 753: Versions and Kernel Patch Levels https://launchpad.support.sap.com/#/notes/2083594

2556153 – Using kernel 7.53 instead of kernel 7.40, 7.41, 7.42, 7.45, or 7.49 https://launchpad.support.sap.com/#/notes/0002556153

2350788 – Using kernel 7.49 instead of kernel 7.40, 7.41, 7.42 or 7.45 https://launchpad.support.sap.com/#/notes/0002350788

1969546 – Release Roadmap for Kernel 74x and 75x https://launchpad.support.sap.com/#/notes/1969546

https://wiki.scn.sap.com/wiki/display/SI/SAP+Kernel:+Important+News

8. Recommended HANA on Azure Disk Design Template

The Excel spreadsheet HANA-Disk-Design-Template-for-Azure contains a useful model template for customers planning to deploy HANA on Azure VMs.

The spreadsheet contains a sample HANA deployment on Azure M-series VMs with details such as stripe sizes, Write Accelerator settings, and other useful configuration values.

Further information and details can be found in the Azure documentation here: https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations

The spreadsheet is a sample only and should be adapted as required; for example, the Availability Zone column will likely need to be updated.

Special thanks to Archana from Cognizant for creating this template.

9. Disable 8.3 Filename Generation

Very old Windows releases were limited to filenames in the 8.3 format (up to eight characters, a dot, and a three-character extension). Some applications that call very old file-handling APIs can only reference files by their 8.3 filenames.

Up to and including Windows Server 2016, NTFS by default generates an additional 8.3 short name for every file whose name does not fit the 8.3 format.

This operation becomes very expensive when a directory contains a very large number of files. SAP customers frequently keep job logs or interface files on the /sapmnt share, where the total number of files can reach hundreds of thousands.

It is recommended to disable 8.3 filename generation on all existing Windows servers, and to make this part of the standard build for new Windows servers, alongside steps such as removing Internet Explorer and disabling SMB 1.0.

662452 – Poor file system performance/errors during data accesses https://launchpad.support.sap.com/#/notes/662452

https://support.microsoft.com/en-us/help/121007/how-to-disable-8-3-file-name-creation-on-ntfs-partitions
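
A minimal sketch of checking and disabling 8.3 name generation from an elevated prompt follows. The strip step removes short names already generated under a given path (the path here is a placeholder) and should be tested carefully first, as described in the Microsoft article above:

```powershell
# Minimal sketch: query and disable 8.3 short-name generation (elevated prompt).

# Query the current state (0 = enabled, 1 = disabled on all volumes, 2 = per volume)
fsutil 8dot3name query

# Disable 8.3 name generation on all volumes
# (sets the registry value NtfsDisable8dot3NameCreation to 1)
fsutil behavior set disable8dot3 1

# Optionally remove short names already generated under an existing tree,
# e.g. the sapmnt path - test first, as some programs may reference them
fsutil 8dot3name strip /s D:\usr\sap
```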

10. Hundreds of vCPUs and 12 TB Azure VMs Certified for SAP HANA Announced

In a blog by Corey Sanders, Microsoft confirmed a new generation of M-series VMs supporting hundreds of vCPUs and 12 TB of RAM. The blog “Why you should bet on Azure for your infrastructure needs, today and in the future” announces the following:

1. Next-generation M-series VMs based on Intel Skylake CPUs, supporting up to 12 TB of RAM

2. New HANA Large Instance TDIv5 appliances with 6 TB, 12 TB, and 18 TB of RAM

3. New Standard SSD storage – suitable for backups and bulk storage https://azure.microsoft.com/en-us/blog/preview-standard-ssd-disks-for-azure-virtual-machine-workloads/

4. New smaller M-series VMs suitable for non-production systems, Solution Manager, and other smaller SAP HANA databases

https://azure.microsoft.com/en-us/blog/why-you-should-bet-on-azure-for-your-infrastructure-needs-today-and-in-the-future/

https://azure.microsoft.com/en-us/blog/offering-the-largest-scale-and-broadest-choice-for-sap-hana-in-the-cloud/

Miscellaneous Topics, Notes & Links

2343511 – Microsoft Azure connector for SAP Landscape Management (LaMa) https://launchpad.support.sap.com/#/notes/0002343511

Optimizing SAP for Azure https://www.microsoft.com/en-us/download/details.aspx?id=56819

Useful link on setting up LVM on Linux VMs https://docs.microsoft.com/en-us/azure/virtual-machines/linux/configure-lvm

Updated SQL Server columnstore documentation, recommended for all BW on SQL Server customers: 2116639 – SQL Server Columnstore Documentation https://launchpad.support.sap.com/#/notes/0002116639
