Channel: Running SAP Applications on the Microsoft Platform

Installation Procedure for Sybase 16.3 Patch Level 3 Always-on + DR on SUSE 12.3 – Recent Customer Proof of Concept


In recent months, several customers with large investments in HANA technologies have approached Microsoft for information about deploying large mission-critical SAP applications on Azure with the Sybase ASE database.

SAP HANA customers are typically able to deploy Sybase ASE at little or no additional cost if they have licensed the HANA database.

Many of the customers that have contacted Microsoft are closing datacenters or retiring UNIX platforms and moving ECC or BW systems in the range of 25-45 TB DB volume to Azure. An earlier blog describes some of the requirements and best practices for VLDB migrations to Azure: https://blogs.msdn.microsoft.com/saponsqlserver/2018/04/10/very-large-database-migration-to-azure-recommendations-guidance-to-partners/

Until recently there was no simple, documented, straightforward installation procedure for a typical two-node high-availability pair with synchronous replication plus a third node with asynchronous replication. This is quite a common requirement for SAP customers.

This blog is designed to supplement the existing SAP-provided documentation and to provide some hints and additional information. The SAP Sybase team is continuously updating and improving the Sybase documentation, so it is always recommended to start with the official documentation and then cross-reference this blog. This document is based on real deployments by Cognizant and DXC. The latest versions of Sybase and SUSE were then installed in a lab test environment to provide screenshots.

High Level Overview of Installation Steps

The high-level installation process for a 3 tier SAP Distributed Installation is:

  1. Read required OSS Notes, Installation Guides, Download Installation Media and the SAP on Sybase Business Suite documentation
    1. For the SUSE Linux 12 SP3 release notes: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP3/
  2. Provision Azure VMs with SUSE for SAP Applications 12.3 with Accelerated Networking enabled
  3. Perform OS patching and preparation steps detailed below
  4. Run SWPM Distributed Install and install the ASCS Instance
  5. Export the /sapmnt NFS share
  6. Mount the /sapmnt NFS share on the Primary, Secondary and DR DB server
  7. Run SWPM Distributed Install and install the Primary DB Instance
  8. Run SWPM Distributed Install and install the Primary Application Server (Optional: add additional App servers)
  9. Perform Sybase Always-on preparation steps on Primary DB Instance
  10. Run setuphadr on Primary DB Instance
  11. Run SWPM Distributed Install and install the Secondary DB Instance
  12. Perform Sybase Always-on preparation steps on Secondary DB Instance
  13. Run setuphadr on Secondary DB Instance
  14. Run SWPM Distributed Install and install the DR DB Instance
  15. Perform Sybase Always-on preparation steps on DR DB Instance
  16. Run setuphadr on DR DB Instance
  17. Run post steps such as installing Fault Manager

Deployment Config

  1. SUSE 12.3 with latest updates
  2. Sybase 16.03.03
  3. SWPM version 22 or 23. SAP Kernel 7.49 patch 500. NetWeaver ABAP 7.50
  4. Azure Ev3 VMs with Accelerated Networking and 4 vCPUs
  5. Premium Storage – each DB server has 2 x P20 disks (or more as required). The app server has only a boot disk
  6. Official Sybase documentation (some steps do not work; supplement with this blog): https://help.sap.com/viewer/product/SAP_ASE/16.0.3.3/en-US
  7. Sample response files are attached here: Sybase-Sample-Response-Files. It is recommended to download and review these files
  8. Sybase Always-on does not leverage OS-level clustering technologies such as Pacemaker or Windows cluster, and the Azure ILB is not used. Instead the SAP work process is aware of the Primary and Secondary Sybase servers. The DR node does not support automatic failover; setting up and configuring SAP app servers to connect to the DR node is a manual process
  9. This installation shows a “Distributed” installation. If the SAP Central Services should be highly available, follow the SAP on Azure documentation for Pacemaker
  10. Sybase Fault Manager is automatically installed on the SAP PAS during installation
  11. Be careful of Linux vs. Windows end-of-line characters. Use the Linux command cat -v response_file.rs. If ^M characters are seen then there are Windows EOL characters.

    Example: cat -v test.sh

    Output:

    Line 1^M

    Line 2^M

    Line 3^M

    (Note: ^M is a single character (CTRL+M), the carriage return used by Windows. This needs to be fixed before using the file in Linux.)

        To fix the issue:

            $> dos2unix test.sh

            Output:

                Line 1

                Line 2

                Line 3
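If dos2unix is not available on the server, the same conversion can be done with sed; a minimal sketch on a throwaway file (the /tmp path is illustrative):

```shell
# create a sample file with Windows (CRLF) line endings
printf 'Line 1\r\nLine 2\r\nLine 3\r\n' > /tmp/test_crlf.sh

cr=$(printf '\r')                       # a literal carriage-return character
before=$(grep -c "$cr" /tmp/test_crlf.sh)

# strip the trailing \r from every line (the same effect as dos2unix)
sed -i "s/$cr\$//" /tmp/test_crlf.sh

after=$(grep -c "$cr" /tmp/test_crlf.sh || true)
echo "CR lines: before=$before after=$after"
```

After the sed pass, cat -v shows no ^M characters and the file is safe to use as a response file.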

  12. Hosts file configuration used for this deployment

    Example: <IP Address> <FQDN> <SHORTNAME> <#Optional Comments>

    10.1.0.9     sybdb1.hana.com     sybdb1     #primary DB

    10.1.0.10    sybapp1.hana.com    sybapp1    #SAP NW 7.5 PAS

    10.1.0.11    sybdb2.hana.com     sybdb2     #secondary DB

    10.1.0.12    sybdb3.hana.com     sybdb3     #tertiary DB for DR
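A quick way to sanity-check such entries before the installation is an awk pass that verifies each line has an FQDN whose prefix matches the short name; a sketch against a scratch copy of the entries (the /tmp path is illustrative):

```shell
# write the sample entries to a scratch file (illustrative path)
cat > /tmp/hosts.sample <<'EOF'
10.1.0.9     sybdb1.hana.com     sybdb1    #primary DB
10.1.0.10    sybapp1.hana.com    sybapp1   #SAP NW 7.5 PAS
10.1.0.11    sybdb2.hana.com     sybdb2    #secondary DB
10.1.0.12    sybdb3.hana.com     sybdb3    #tertiary DB for DR
EOF

# count entries where field 2 looks like an FQDN and field 3 is its short name
valid=$(awk '$2 ~ /\./ && index($2, $3".") == 1 {n++} END {print n}' /tmp/hosts.sample)
echo "valid entries: $valid"
```

A mismatch between short name and FQDN is a common cause of setuphadr failures, so checking this up front is cheap insurance.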

Common Preparation Steps on all SUSE Servers

sudo zypper install -y glibc-32bit

sudo zypper install -y libgcc_s1-32bit

#these two 32-bit glibc packages are mandatory, otherwise Always-on will not work

sudo zypper up -y

Note: a reboot of the server is mandatory if kernel patches are applied.

#resize the boot disk. The default Linux root disk of 30GB is too small. Shut down the VM and edit the disk in the Azure Portal or with PowerShell, increasing its size to 60-100GB. Restart the VM and run the commands below. There is no benefit or advantage to provisioning an additional separate disk for an SAP application server

sudo fdisk /dev/sda

##delete the existing partition (this will not delete the data) and create [n] new primary [p] partition with defaults and write [w] config

sudo resize2fs /dev/sda2

sudo reboot

#Check Accelerated Networking is working

/sbin/ethtool -S eth0 | grep vf_
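The vf_ counters only appear and increment when traffic actually flows over the SR-IOV virtual function. A sketch of checking this from a script, using sample counter text since the real values require running ethtool against the VM's NIC:

```shell
# sample of "ethtool -S eth0" output (illustrative values)
sample='     vf_rx_packets: 128
     vf_rx_bytes: 9216
     vf_tx_packets: 64'

# accelerated networking is active when packets are counted on the VF path
vf_rx=$(printf '%s\n' "$sample" | awk -F': ' '/vf_rx_packets/ {print $2}')
if [ "$vf_rx" -gt 0 ]; then accel=yes; else accel=no; fi
echo "accelerated networking active: $accel"
```

If the vf_ counters stay at zero, traffic is flowing over the synthetic path only and the VM should be checked (Accelerated Networking flag, supported VM size, reboot after enabling).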

#Add these entries to the hosts file

sudo vi /etc/hosts

10.1.0.9     sybdb1.hana.com     sybdb1     #primary DB

10.1.0.10    sybapp1.hana.com    sybapp1    #SAP NW 7.5 PAS

10.1.0.11    sybdb2.hana.com     sybdb2     #secondary DB

10.1.0.12    sybdb3.hana.com     sybdb3     #tertiary DB for DR

#edit the waagent config to create a swapfile

sudo vi /etc/waagent.conf

Look for these lines:

ResourceDisk.EnableSwap=n

ResourceDisk.SwapSizeMB=

Modify the values above. Note: the swap size must be given in MB.

#enable the swapfile and set a size of 2GB or more. Example:

ResourceDisk.EnableSwap=y

ResourceDisk.SwapSizeMB=2000

Once done, a restart of the agent is necessary to bring the swap file up and active.

sudo systemctl restart waagent
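The two waagent.conf edits can also be scripted with sed, which is handy when preparing several servers; a sketch against a scratch copy of the relevant lines (a real run would target /etc/waagent.conf):

```shell
# scratch copy of the two relevant waagent.conf lines (illustrative path)
cat > /tmp/waagent.conf <<'EOF'
ResourceDisk.EnableSwap=n
ResourceDisk.SwapSizeMB=0
EOF

# enable the swap file and set it to 2000 MB
sed -i 's/^ResourceDisk.EnableSwap=.*/ResourceDisk.EnableSwap=y/' /tmp/waagent.conf
sed -i 's/^ResourceDisk.SwapSizeMB=.*/ResourceDisk.SwapSizeMB=2000/' /tmp/waagent.conf

cat /tmp/waagent.conf
```

After restarting waagent, swapon -s (or free -m) should show the swap file on the resource disk.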

Other Services to be enabled and restarted are:

sudo systemctl restart nfs-server

sudo systemctl enable nfs-server

sudo systemctl status uuidd

sudo systemctl enable uuidd

sudo systemctl start uuidd

sudo systemctl status uuidd

##run sapcar and unpack SWPM 22 or 23

sapcar -xvf SWPM10SP22_7-20009701. SAR

SAP APP Server ASCS Install

sudo /source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Open a web browser from a Management Server and enter the SUSE os-user name and password: https://10.1.0.10:4237/sapinst/docs/index.html

##after install, export the NFS share for /sapmnt

sudo vi /etc/exports

#add this line: /sapmnt *(rw,no_root_squash)

## open port 2049 for nfs on NSG if required [by default VMs on same vnet can talk to each other]

 sudo systemctl restart nfs-server
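The export line can be appended idempotently so that re-running the preparation steps does not duplicate it; a sketch using a scratch file in place of /etc/exports:

```shell
# scratch stand-in for /etc/exports (illustrative path)
exports=/tmp/exports.test
: > "$exports"

add_export() {
  # append the line only if it is not already present (exact whole-line match)
  grep -qxF "$1" "$exports" || echo "$1" >> "$exports"
}

add_export '/sapmnt *(rw,no_root_squash)'
add_export '/sapmnt *(rw,no_root_squash)'   # second call is a no-op

count=$(grep -cxF '/sapmnt *(rw,no_root_squash)' "$exports")
echo "export lines: $count"
```

On the real server, exportfs -ra (or the nfs-server restart shown above) re-reads /etc/exports, and showmount -e localhost confirms the share is published.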

SAP DB Instance Install

##do common preparation steps such as zypper and hosts file etc

#create disks for sybase

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc  -> n, p, w

sudo fdisk /dev/sdd  -> n, p, w

#It is generally recommended to use LVM and create pv, lv etc here so we can test performance later with striping additional disks.

Note: if multiple disks are used for the data / backup / log storage, enable striping to get optimal performance.

Example:

vgcreate VG_DATA /dev/sdc /dev/sdd

lvcreate -l 100%FREE VG_DATA -n lv_data -i 2 -I 256

sudo pvcreate /dev/sdc1

sudo pvcreate /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

Edit /etc/fstab and add the entries for the created disks.

Option 1:

Identify based on created volume group and lv details.

Ex: ls /dev/mapper/

And fetch the right devices

Ex: syb_data_vg-syb_data_lv

Add the entries to /etc/fstab

sudo vi /etc/fstab

Add the lines.

/dev/mapper/syb_data_vg-syb_data_lv /sybase xfs defaults,nofail 0 2

Option 2:

#now sudo su - to the root user and run this (replace the UUID). This cannot be run via the sudo command; you must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h
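For Option 2, the fstab lines can be assembled from the UUIDs reported by blkid; a sketch that writes them to a scratch file, using the UUIDs from the example above as placeholders:

```shell
# placeholder UUIDs - on a real system take them from: sudo blkid
data_uuid=799603d6-20c0-47af-80c9-75c72a573829
log_uuid=2bb3f00c-c295-4417-b258-8de43a844e23

# scratch stand-in for /etc/fstab (illustrative path)
fstab=/tmp/fstab.test
: > "$fstab"

# one line per filesystem: device, mount point, type, options, dump, fsck pass
printf '/dev/disk/by-uuid/%s /sybase xfs defaults,nofail 0 2\n' "$data_uuid" >> "$fstab"
printf '/dev/disk/by-uuid/%s /log xfs defaults,nofail 0 2\n'    "$log_uuid"  >> "$fstab"

cat "$fstab"
```

Mounting by UUID rather than /dev/sdX names protects against Azure device-name reordering across reboots, and nofail lets the VM boot even if a data disk is missing.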

##create a directory for the source files.

sudo mkdir /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the "old" way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart
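The two autofs files amount to three lines of configuration; a sketch that generates them into a scratch directory (a real run would write to /etc):

```shell
# scratch directory standing in for /etc (illustrative path)
etc=/tmp/etc.test
mkdir -p "$etc"

# auto.master: keep the packaged defaults and pull in a direct map
printf '+auto.master\n/- %s/auto.direct\n' "$etc" > "$etc/auto.master"

# auto.direct: mount /sapmnt from the ASCS host over NFSv4
echo '/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt' > "$etc/auto.direct"

cat "$etc/auto.master" "$etc/auto.direct"
```

With autofs running, the share is mounted on first access (ls /sapmnt), which is why it only shows up in df -h after the directory has been touched.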

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Open a web browser and start the installation

SAP PAS Install

##do same preparations as ASCS for zypper and hosts file etc

sudo /source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

https://10.1.0.10:4237/sapinst/docs/index.html

AlwaysOn Install Primary

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed, otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit

##Login as syb<sid> – in this case the <sid> = ase

sybdb1 /sybase% whoami

sybase

sybdb1 /sybase% pwd

/sybase

sybdb1 /sybase% ls

ASE  source  sybdb1_dma.rs  sybdb1_setup_hadr.rs

sybdb1 /sybase% cat sybdb1_dma.rs | grep USER_INSTALL_DIR

USER_INSTALL_DIR=/sybase/ASE

sybdb1 /sybase%

sybdb1 /sybase% source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb1_dma.rs -i silent

 

Note: if the command does not run, put several <space> characters before the -i silent.
Use the full path to setup.bin from the ASE ZIP file and the full path to the response file, otherwise it will fail with a non-specific error message.

 

 

##run this command to unlock the sa account. Command will fail if “-X” is not specified

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

##If any errors occur review this note

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

setuphadr /sybase/sybdb1_setup_hadr.rs

AlwaysOn Install Secondary

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed, otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit

##do same preparations as ASCS for zypper and hosts file etc

#create disks for sybase

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc -> n, p, w

sudo fdisk /dev/sdd -> n, p, w

#only 1 disk, but created pv, lv etc here so we can test performance later with striping additional disks

sudo pvcreate /dev/sdc1

sudo pvcreate /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

#now sudo su - to the root user and run this (replace the UUID). This cannot be run via the sudo command; you must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h


##create a directory for the source files.

sudo mkdir /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the "old" way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Stop autofs and unmount /sapmnt; sapinst will continue.

/sapmnt must be mounted again shortly afterwards.

##Login as syb<sid> – in this case the <sid> = ase

/sybase/source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb2_dma.rs -i silent

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

setuphadr /sybase/sybdb2_setup_hadr.rs

Do not restart the RMA; this is not required

AlwaysOn FM Install & Post Steps

The Sybase documentation for these steps is here:

https://help.sap.com/viewer/efe56ad3cad0467d837c8ff1ac6ba75c/16.0.3.3/en-US/286f4fc8b3ab4439b3400e97288152dc.html

The documentation is not complete. After performing the steps in the documentation link, review this Note:

1959660 – SYB: Database Fault Management

su - aseadm

rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb1:~ # su - aseadm

sybdb1:aseadm 1> rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

sybdb1:aseadm 2> rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb1:aseadm 3>

sybdb2:~ # su - aseadm

sybdb2:aseadm 1> rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

sybdb2:aseadm 2> rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb2:aseadm 3>

## Run AlwaysOn Tuning & Configuration script on Primary and Companion

isql -UDR_admin -PSAPHana12345 -Ssybdb1:4909

sap_tune_rs Site1, 16, 4

isql -UDR_admin -PSAPHana12345 -Ssybdb2:4909

sap_tune_rs Site2, 16, 4

sybdb2:aseadm 3> isql -UDR_admin -PSAPHana12345 -Ssybdb2:4909

1> sap_tune_rs Site2, 16, 4

2> go

TASKNAME                 TYPE               VALUE
------------------------ ------------------ ------------------------------------------------------------
Tune Replication Server  Start Time         Sun Apr 29 06:20:37 UTC 2018
Tune Replication Server  Elapsed Time       00:07:11
TuneRS                   Task Name          Tune Replication Server
TuneRS                   Task State         Completed
TuneRS                   Short Description  Tune Replication Server configurations.
TuneRS                   Long Description   Waiting 180 seconds: Waiting Replication Server to fully up.
TuneRS                   Task Start         Sun Apr 29 06:20:37 UTC 2018
TuneRS                   Task End           Sun Apr 29 06:27:48 UTC 2018
TuneRS                   Hostname           sybdb2

(9 rows affected)

## On the APP server only

sudo vi .dbenv.csh

setenv dbs_syb_ha 1

setenv dbs_syb_server sybdb1:sybdb2

## Restart the SAP App server

sapcontrol -nr 00 -function StopSystem ALL

sapcontrol -nr 00 -function StartSystem ALL

https://help.sap.com/viewer/efe56ad3cad0467d837c8ff1ac6ba75c/16.0.3.3/en-US/41b39cb667664dc09d2d9f4c87b299a7.html

sybapp1:aseadm 6> rsecssfx list

| Record Key                      | Status             | Time Stamp of Last Update |
|---------------------------------|--------------------|---------------------------|
| DB_CONNECT/DEFAULT_DB_PASSWORD  | Encrypted          | 2018-04-29 03:07:11 UTC   |
| DB_CONNECT/DEFAULT_DB_USER      | Plaintext          | 2018-04-29 03:07:07 UTC   |
| DB_CONNECT/SYB/DR_PASSWORD      | Encrypted          | 2018-04-29 06:18:26 UTC   |
| DB_CONNECT/SYB/DR_USER          | Plaintext          | 2018-04-29 06:18:22 UTC   |
| DB_CONNECT/SYB/SADB_PASSWORD    | Encrypted          | 2018-04-29 03:07:19 UTC   |
| DB_CONNECT/SYB/SADB_USER        | Plaintext          | 2018-04-29 03:07:14 UTC   |
| DB_CONNECT/SYB/SAPSID_PASSWORD  | Encrypted          | 2018-04-29 03:07:42 UTC   |
| DB_CONNECT/SYB/SAPSID_USER      | Plaintext          | 2018-04-29 03:07:37 UTC   |
| DB_CONNECT/SYB/SSODB_PASSWORD   | Encrypted          | 2018-04-29 03:07:27 UTC   |
| DB_CONNECT/SYB/SSODB_USER       | Plaintext          | 2018-04-29 03:07:22 UTC   |
| DB_CONNECT/SYB/SYBSID_PASSWORD  | Encrypted          | 2018-04-29 03:07:34 UTC   |
| DB_CONNECT/SYB/SYBSID_USER      | Plaintext          | 2018-04-29 03:07:30 UTC   |
| SYSTEM_PKI/PIN                  | Encrypted          | 2018-04-27 22:36:39 UTC   |
| SYSTEM_PKI/PSE                  | Encrypted (binary) | 2018-04-27 22:36:45 UTC   |

Summary

-------

Active Records : 14 (Encrypted: 8, Plain: 6, Wrong Key: 0, Error: 0)

Defunct Records : 12 (180+ days: 0; Show: "list -withHistory", Remove: "compact")
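The summary counts can be cross-checked by counting statuses in the list output; a grep sketch over a few sample rows of the table:

```shell
# sample rows from "rsecssfx list" output (abbreviated, illustrative)
sample='| DB_CONNECT/SYB/DR_PASSWORD | Encrypted | 2018-04-29 06:18:26 UTC |
| DB_CONNECT/SYB/DR_USER | Plaintext | 2018-04-29 06:18:22 UTC |
| DB_CONNECT/SYB/SADB_PASSWORD | Encrypted | 2018-04-29 03:07:19 UTC |'

# count rows per status; on a real system pipe "rsecssfx list" instead
encrypted=$(printf '%s\n' "$sample" | grep -c 'Encrypted')
plain=$(printf '%s\n' "$sample" | grep -c 'Plaintext')
echo "Encrypted: $encrypted, Plain: $plain"
```

Passwords should always show as Encrypted; only the user-name records are expected to be Plaintext.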

## Run the Fault Manager Installation steps on the SAP PAS application server

sybapp1:aseadm 24> pwd

/sapmnt/ASE/exe/uc/linuxx86_64

sybapp1:aseadm 25> whoami

aseadm

sybapp1:aseadm 26> ./sybdbfm install

replication manager agent user DR_admin and password set in Secure Store.

Keep existing values (yes/no)? (yes)

SAPHostAgent connect user: (sapadm)

Enter password for user sapadm.

Password:

Enter value for primary database host: (sybdb1)

Enter value for primary database name: (ASE)

Enter value for primary database port: (4901)

Enter value for primary site name: (Site1)

Enter value for primary database heart beat port: (13777)

Enter value for standby database host: (sybdb2)

Enter value for standby database name: (ASE)

Enter value for standby database port: (4901)

Enter value for standby site name : (Site2)

Enter value for standby database heart beat port: (13787)

Enter value for fault manager host: (sybapp1)

Enter value for heart beat to heart beat port: (13797)

Enter value for support for floating database ip: (no)

Enter value for use SAP ASE Cockpit if it is installed and running: (no)

installation finished successfully.

Restart the SAP instance; FM is added to the ASCS start profile

sybapp1:aseadm 32> sybdbfm status

fault manager running, pid = 4338, fault manager overall status = OK, currently executing in mode PAUSING

*** sanity check report (5)***.

node 1: server sybdb1, site Site1.

db host status: OK.

db status OK hadr status PRIMARY.

node 2: server sybdb2, site Site2.

db host status: OK.

db status OK hadr status STANDBY.

replication status: SYNC_OK.

AlwaysOn Install 3rd Node (DR) Async

Official SAP Sybase documentation and Links:

https://blogs.sap.com/2018/04/19/high-availability-disaster-recovery-3-node-hadr-with-sap-ase-16.0-sp03/

Documentation: https://help.sap.com/viewer/38af74a09e48457ab699e83f6dfb051a/16.0.3.3/en-US

https://help.sap.com/viewer/38af74a09e48457ab699e83f6dfb051a/16.0.3.3/en-US/6ca81e90696e4946a68e9257fa2d3c31.html

1. Install the DB host using SWPM in the same way as the companion host

2. Copy the companion host response file

3. Duplicate the section with all the COMP entries, add it at the bottom, and rename the section of the newly copied COMP entries to DR (for example). Leave the old COMP and PRIM entries as is.

4. Change the setup site to DR

5. All other entries from PRIM and COMP must remain the same, since the setuphadr run for the 3rd node needs to know about the previous two hosts.

6. Execute setuphadr

Review the Sample Response File attached to this blog

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed, otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit

##do same preparations as ASCS for zypper and hosts file etc

#create disks for sybase

Note: when multiple disks are added for data/log/backup to create a single volume, use the right striping settings to get better performance

Example:

vgcreate VG_DATA /dev/sdc /dev/sdd

lvcreate -l 100%FREE VG_DATA -n lv_data -i 2 -I 256

(for the log volume use -I 32)

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc -> n, p, w

sudo fdisk /dev/sdd -> n, p, w

#only 1 disk, but created pv, lv etc here so we can test performance later with striping additional disks

sudo pvcreate /dev/sdc1

sudo pvcreate /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

Edit /etc/fstab and add the entries for the created disks.

Option 1:

Identify based on created volume group and lv details.

Ex: ls /dev/mapper/

And fetch the right devices

Ex: syb_data_vg-syb_data_lv

Add the entries to /etc/fstab

sudo vi /etc/fstab

Add the lines.

/dev/mapper/syb_data_vg-syb_data_lv /sybase xfs defaults,nofail 0 2

Option 2:

#now sudo su - to the root user and run this (replace the UUID). This cannot be run via the sudo command; you must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h


Note: when automount is enabled, mount points are visible in the df -h output only after the folders have been accessed.

##create a directory for the source files.

sudo mkdir -p /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the "old" way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Stop autofs and unmount /sapmnt; sapinst will continue.

/sapmnt must be mounted again shortly afterwards.

## Install the DMA on the DR Node

##Login as syb<sid> – in this case the <sid> = ase

source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb3_dma.rs -i silent

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

sybdb3 /sybase% uname -a

Linux sybdb3 4.4.120-92.70-default #1 SMP Wed Mar 14 15:59:43 UTC 2018 (52a83de) x86_64 x86_64 x86_64 GNU/Linux

sybdb3 /sybase% whoami

sybase

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

sybdb3 /sybase% setuphadr /sybase/sybdb3_setup_hadr.rs

AlwaysOn Testing & Useful Command Syntax

The section below covers planned and unplanned failovers as well as monitoring commands.

It is recommended to review the Sybase documentation and also to review these SAP Notes:

1982469 – SYB: Updating SAP ASE with saphostctrl

1959660 – SYB: Database Fault Management

2179305 – SYB: Usage of saphostctrl for SAP ASE and SAP Replication Server

## Check if Fault Manager is running on the SAP PAS with this command

ps -ef | grep sybdbfm

The executable is in /usr/sap/<SID>/ASCS00/work

sybdbfm is copied to sybdbfm.sap<SID>_ASCS00

cd /usr/sap/<SID>/ASCS00/work

./sybdbfm.sapASE_ASCS00 status

./sybdbfm.sapASE_ASCS00 hibernate

./sybdbfm.sapASE_ASCS00 resume

Login as syb<sid>, in this case sybase

## Login to the RMA

isql -UDR_admin -P<<password>> -SASE_RMA_Site1 -I DM/interfaces -X -w999

## to see all the components that are running

sap_version all

go

## to see the status of a replication path

sap_status path

go

## to see the status of resources

sap_status resource

go

## Login to ASE

The syntax “-I DM/interfaces” does a lookup in the Sybase AlwaysOn configuration database to find the host and TCP port

isql -UDR_admin -P<<password>> -SASE_Site1 -I DM/interfaces -X -w999

## to clear down the transaction log run this command

dump tran ASE with truncate_only

go

## to show freespace in DB

sp_helpdb ASE

go

## Transaction log backups are needed on all replicas, otherwise the transaction log will become full

## to start/stop/get info on Sybase DB (and all required components for Always on like RMA) – run this on the DB host

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function StartDatabase -dbname ASE -dbtype syb

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function StartDatabase -dbname ASE_REP -dbtype syb

## to get Sybase DB status

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function GetDatabaseStatus -dbname ASE -dbtype syb

## to get Sybase DB replication status

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function LiveDatabaseUpdate -dbname ASE -dbtype syb -updatemethod Check -updateoption TASK=REPLICATION_STATUS

## to send a trace ticket logon to RMA and execute these commands

sap_send_trace Site1

go

sap_status active

go

## during HADR testing leave tail running on the file dev_sybdbfm in /usr/sap/<SID>/ASCS00/work

tail -100f dev_sybdbfm

## to force a shutdown of the DB engine run the command below. Always-on will try to block a normal shutdown of the DB

shutdown with nowait_hadr

go

## to do a planned failover from Primary to Companion DB the normal sequence is:

1. Failover from Primary to Companion

2. Drain logs from Primary to the DR site

3. Reverse Replication Route to start synchronization from the new Primary to the Companion and DR

There is a new command that does all these steps automatically:

/usr/sap/hostctrl/exe/saphostctrl -user sapadm -function LiveDatabaseUpdate -dbname ASE -dbtype syb -updatemethod Execute -updateoption TASK=FAILOVER -updateoption FAILOVER_FORCE=1 -updateoption FAILOVER_TIME=300

## it is recommended to use this command. If there are errors check in the path /usr/sap/hostctrl/work for log files

##other useful commands:

## to disable/enable replication from a Site to all routes

sap_disable_replication Site1, <DB>

sap_enable_replication Site1,Site2,<DB>

## command to manually failover

sap_failover <primary>,<standby>,<timeout>, [force], [unplanned]

## Materialize is a "dump and load" to reinitialize a Sybase Always-on replica.

sap_materialize auto,Site1,Site2,master

sap_materialize auto,Site1,Site2,<SID>

Sybase How To & Links

Customers familiar with SQL Server AlwaysOn should note that although it is possible to take a DB or log backup from a replica, these backups are not compatible between Primary <-> Replica databases. Unlike SQL Server, it is also a requirement to run transaction log backups on the replica nodes.

SAP Notes:

2134316 – Can SAP ASE run in a cloud environment? – SAP ASE

1554717 – SYB: Planning information for SAP on ASE

1706801 – SYB: SAP ASE released for virtual systems

1590719 – SYB: Updates for SAP Adaptive Server Enterprise (SAP ASE)

1959660 – SYB: Database Fault Management

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

2489781 – SAP ASE 16.0 SP03 Supported Operating Systems and Versions

DBA Cockpit doesn't work by default after installation.

Set up DBA Cockpit as per:

2293673 – SYB: DBA Cockpit Correction Collection SAP Basis 7.50

1605680 – SYB: Troubleshoot the setup of the DBA Cockpit on Sybase ASE

1245200 – DBA: ICF Service Activation for WebDynpro DBA Cockpit

For the SUSE Linux 12 SP3 release notes: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP3/

SAP Software Downloads: https://support.sap.com/en/my-support/software-downloads.html

SWPM Download: https://support.sap.com/sltoolset

Sybase Release Matrix: https://wiki.scn.sap.com/wiki/display/SYBASE/Targeted+ASE+16.0+Release+Schedule+and+CR+list+Information

Sybase Official Documentation: https://help.sap.com/viewer/product/SAP_ASE/16.0.3.3/en-US

Special thanks to Wajeeh Samdani from SAP Sybase Development in Walldorf

Special thanks to Cognizant SAP Cloud Team for their input and review of this blog

Content from third-party websites, SAP and other sources is reproduced in accordance with Fair Use: criticism, comment, news reporting, teaching, scholarship, and research


Installation and Setup of Oracle 12.2 ASM on Oracle Linux on Azure – Installation Videos & Backup Strategies


This short blog contains videos showing the end to end installation process to setup Oracle 12.2 ASM on Azure.

The videos and documentation in this blog are being used to obtain formal certification for Oracle 12.2 ASM on Azure. This process is still ongoing.

It is important to follow the procedure documented in this blog and not the existing generic Oracle ASM on Azure documentation.

The generic Oracle ASM on Azure documentation will cause many errors during the SWPM installation.

Oracle Linux Preparation, Grid Installation, Oracle 12.2 Installation and SAP Installation

These three videos show the process flow for the installation of SAP NetWeaver 7.5 on Oracle 12.2 ASM

The installation sequence is:

  1. Oracle Linux 7.4 Preparation Steps
  2. Install the ASCS
  3. Install the DB Instance
    1. When prompted to install Oracle DBMS with RUNINSTALLER, first install the Grid Infrastructure https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/installing-oracle-grid-infrastructure-for-a-standalone-server-with-a-new-database-installation.html#GUID-0B1CEE8C-C893-46AA-8A6A-7B5FAAEC72B3
    2. After installing Grid Infrastructure configure ASM disk groups as required
    3. Start RUNINSTALLER and install DBMS
    4. Continue SAPInst
  4. Install PAS
  5. Patch Grid Infrastructure and Oracle DB to latest released Oracle patch

The videos below illustrate the process

NOTE: Oracle Client 12.2 is not released yet, therefore the Oracle Client 12.1 should be used and configured. Be sure to specify DB ENGINE = 12.2 and DB CLIENT = 12.1 during setup. Do not attempt to follow Oracle ASM 11.x or ASM 12.1 documentation as there are large differences with ASM 12.2

1.ASCS-OracleASM-Install

2.DB-OracleASM-Install

3.APP-OracleASM-Install

Patching Oracle 12.2 Grid Infrastructure and Oracle 12.2 DB Components

The process for patching the Grid Infrastructure (GI) and DBMS components is illustrated below.

The latest Oracle patches supported for SAP applications are usually available here: http://service.sap.com/oracle

509314 – Downloading Oracle Patches from the SAP Support Portal https://launchpad.support.sap.com/#/notes/509314

The patching sequence is: (1) patch the Grid Infrastructure, then (2) patch the DBMS.

Patching-Oracle-Grid-and-DBMS-12.2

Backup Solutions for Oracle 12.2 ASM on Azure

Three different backup solutions have been tested with Oracle ASM on Azure.

  1. Native Oracle RMAN Backup
  2. SAP BRTools (which is configured to call RMAN)
  3. Azure CommVault Virtual Appliance

During testing, backup times for RMAN and BRTools were around 11 minutes; CommVault took around 30 minutes.

Oracle-ASM-Backup-Scenarios

CommVault-Oracle-ASM-Backup

Links & SAP Notes

Details of the VM setup & configuration

| Machine Name | Internal IP | Purpose | Data Disks | VM Size | OS |
|---|---|---|---|---|---|
| sapappl4 | 10.0.0.13 | App + ASCS | – | Standard E8s v3 (8 vcpus, 64 GB memory) | Oracle-Linux:7.4:7.4.20170828 |
| oradb4 | 10.0.0.10 | Oracle DB | 10 × P20 (512 GB) | Standard E16s v3 (16 vcpus, 128 GB memory) | Oracle-Linux:7.4:7.4.20170828 |
| SAPORAJmp | – | Jump VM / Downloads | – | Standard D2s v3 (2 vcpus, 8 GB memory) | Win 2016 |

Note: the 10 × P20 Premium Disks are allocated as follows: 1 × P20 for /oracle, 3 × P20 for DATA, 3 × P20 for ARCH, 3 × P20 for REDO
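As a rough sanity check on this layout, the aggregate limits of each striped volume can be estimated by summing the published per-disk limits of a P20. This is a minimal sketch; the P20 figures (roughly 2,300 IOPS and 150 MB/s) are assumptions to verify against current Azure Premium Disk documentation.

```python
# Estimate aggregate IOPS/throughput per striped volume from per-disk limits.
# P20 limits are assumptions based on published Azure figures at the time of
# writing; verify against current Azure Premium Disk documentation.
P20_IOPS = 2300
P20_MBPS = 150

# Disk layout from the table above: volume -> number of P20 disks
layout = {"/oracle": 1, "DATA": 3, "ARCH": 3, "REDO": 3}

def volume_limits(disk_count, iops=P20_IOPS, mbps=P20_MBPS):
    """Aggregate limits for a volume striped across `disk_count` disks."""
    return disk_count * iops, disk_count * mbps

for name, disks in layout.items():
    total_iops, total_mbps = volume_limits(disks)
    print(f"{name}: {disks} x P20 -> {total_iops} IOPS, {total_mbps} MB/s")
```

For example, the 3 × P20 DATA volume works out to roughly 6,900 IOPS and 450 MB/s, which is why the data files are striped over several disks rather than placed on a single larger disk.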

It is recommended to use MobaXterm to perform the GI and Oracle DB installations: https://mobaxterm.mobatek.net/

2507228 – Database: Patches for 12.2.0.1 https://launchpad.support.sap.com/#/notes/0002507228

2470718 – Oracle Database Parameter (12.2) https://launchpad.support.sap.com/#/notes/0002470718

2558521 – Grid Infrastructure: Patches for 12.2.0.1 https://launchpad.support.sap.com/#/notes/0002558521

2477472 – Oracle Database Upgrade with Grid Infrastructure (12.2) https://launchpad.support.sap.com/#/notes/0002477472

105047 – Support for Oracle functions in the SAP environment https://launchpad.support.sap.com/#/notes/0000105047

2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions https://launchpad.support.sap.com/#/notes/0002039619

1915301 – Database Software 12c Installation on Unix https://launchpad.support.sap.com/#/notes/0001915301

1915317 – Migrating Software Owner to ‘oracle’ https://launchpad.support.sap.com/#/notes/0001915317

1905855 – Oracle database doesn´t start in ASM https://launchpad.support.sap.com/#/notes/0001905855

1853538 – Oracle RAC controlfiles on ASM with multiple failure groups https://launchpad.support.sap.com/#/notes/0001853538

1738053 – SAPinst for Oracle ASM installation https://launchpad.support.sap.com/#/notes/0001738053

1550133 – Using Oracle Automatic Storage Management (ASM) with SAP NetWeaver based Products https://launchpad.support.sap.com/#/notes/1550133

2087004 – BR*Tools support for Oracle 12c https://launchpad.support.sap.com/#/notes/0002087004

2007980 – SAP Installation with Oracle Single Instance on Oracle Exadata and Oracle Database Appliance https://launchpad.support.sap.com/#/notes/0002007980

819829 – Oracle Instant Client Installation and Configuration on Unix or Linux https://launchpad.support.sap.com/#/notes/0000819829

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/index.html

https://www.sap.com/community/topic/oracle.html

Recommended yum packages:

sudo yum update

sudo yum install libaio.x86_64 -y

sudo yum install uuid* -y

sudo yum install nfs-utils -y

sudo yum install libstdc++-devel.x86_64 -y

sudo yum install xorg-x11-xauth.x86_64 -y

sudo yum install libaio-devel.x86_64 -y

sudo yum install sysstat.x86_64 -y

sudo yum install smartmontools.x86_64 -y

sudo yum install tcsh.x86_64 -y

sudo yum install xorg-x11-utils.x86_64 -y

sudo yum install ksh.x86_64 -y

sudo yum install glibc-devel.x86_64 -y

sudo yum install compat-libcap1.x86_64 -y

sudo yum install xorg-x11-apps.x86_64 -y

Special Credit & Thanks to Ravi Alwani from Azure CAT GSI Team for creating these videos and lab testing.

 

SAP on SQL Server on Azure – How to Bypass Proxy Server During Backup to URL


This short blog discusses how to avoid overloading an on-premises or Azure proxy server when using Backup to URL. Creating a typical .NET configuration file to disable the proxy server allows BackupToURL.exe to communicate via HTTPS directly with Azure Blob storage.

1. SQL Server Backup to URL

Modern releases of SQL Server support backups to Azure Blob storage. This method is convenient and popular for SAP on SQL customers running on Azure for full database backups and/or transaction log backups.

Customers running SQL Server Backup to URL are strongly recommended to update to the latest support pack and cumulative update, especially if the databases are running Transparent Database Encryption.

https://blogs.msdn.microsoft.com/saponsqlserver/2017/04/04/more-questions-from-customers-about-sql-server-transparent-data-encryption-tde-azure-key-vault/

SAP always supports the latest Service Pack and Cumulative Update for SQL Server https://launchpad.support.sap.com/#/notes/62988

2. Proxy Server Configuration & Backup to URL

The Backup to URL feature is implemented as a separate executable called BackupToURL.exe. This executable calls a standard Windows API when sending HTTPS traffic. By default, this API reads the Windows proxy server configuration from Control Panel -> Internet Options -> Connections -> LAN Settings.

Customers running SAP on SQL Server on Azure generally have one of two possible scenarios for the proxy server:

1. Azure is leveraging the central corporate proxy server that is kept in an existing on-premises data center.

2. A separate proxy server has been setup in Azure for Azure VMs/services to leverage.

In either case the proxy is unnecessary and will slow down backup performance considerably. If the proxy server is on-premises, the HTTPS call to the proxy server involves a two-way transit of the ExpressRoute link, which adds to the data costs for the link.

Modern versions of SQL Server support backup to many URL targets simultaneously and the traffic volume can be considerable. Customers have noticed that the proxy server and/or ExpressRoute link can become saturated.

It is generally recommended to disable the proxy server for BackupToURL.exe only, allowing SQL Server Backup to URL to communicate directly with the target storage account. There are several ways to do this, but the recommended procedure is documented below.

3. How to Disable the Proxy Server for SQL Server Backup to URL

To prevent BackupToURL.exe from using the default proxy server, create a file in the following path:

C:\Program Files\Microsoft SQL Server\MSSQL13.<InstanceName>\MSSQL\Binn

The actual requirement is that the file be in the same directory as BackupToURL.exe for a particular version of SQL Server and Instance Name

The filename must be:

BackuptoURL.exe.config

The file contents should be:

<?xml version="1.0"?>

<configuration>

  <system.net>

    <defaultProxy enabled="false" useDefaultCredentials="true">

      <proxy usesystemdefault="true" />

    </defaultProxy>

  </system.net>

</configuration>
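Copy-pasting XML from a browser frequently brings along curly quotes, which the .NET configuration parser rejects. The sketch below, which is illustrative and not part of the official procedure, embeds the config with plain ASCII quotes and validates that it is well-formed before it is written next to BackuptoURL.exe; the target path shown in the comment is a hypothetical example that depends on your SQL Server version and instance name.

```python
import xml.etree.ElementTree as ET

# Desired config with plain ASCII quotes (curly quotes break .NET config parsing).
CONFIG = """<?xml version="1.0"?>
<configuration>
  <system.net>
    <defaultProxy enabled="false" useDefaultCredentials="true">
      <proxy usesystemdefault="true" />
    </defaultProxy>
  </system.net>
</configuration>
"""

def validate(xml_text):
    """Return True if the config parses and the default proxy is disabled."""
    root = ET.fromstring(xml_text)
    proxy = root.find("./system.net/defaultProxy")
    return proxy is not None and proxy.get("enabled") == "false"

def write_config(path):
    """Write the config next to BackuptoURL.exe; ascii codec rejects curly quotes."""
    with open(path, "w", encoding="ascii") as f:
        f.write(CONFIG)

# Example (hypothetical instance name - adjust for your SQL Server version/instance):
# write_config(r"C:\Program Files\Microsoft SQL Server\MSSQL13.MYINST\MSSQL\Binn\BackuptoURL.exe.config")
assert validate(CONFIG)
```

Using `encoding="ascii"` on the write is a cheap guard: any smart quote that slipped into the string raises an error instead of producing a config file that SQL Server silently fails to parse.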

Additional information can be found here: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url-best-practices-and-troubleshooting?view=sql-server-2017

Depending on the exact customer configuration, the SQL Server executable may require a valid proxy server for certain activities, such as reading/writing to the Azure Key Vault for TDE. Care must therefore be taken before completely disabling the proxy server via Control Panel.

Additional options for controlling routing include configuring a user-defined route (UDR). Another option is to create a private endpoint for the Blob storage account on the VNet of the SQL Server VM; since the IP address is then local to the VNet, the proxy server will not be used.

Additional Links & Information

See point #5 for Backup to URL tuning https://blogs.msdn.microsoft.com/saponsqlserver/2017/04/04/sap-on-sql-general-update-for-customers-partners-march-2017/

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017

https://docs.microsoft.com/en-us/sql/t-sql/statements/backup-transact-sql?view=sql-server-2017

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/deleting-backup-blob-files-with-active-leases?view=sql-server-2017

Managed Backups to Azure Blob Storage https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-managed-backup-to-microsoft-azure?view=sql-server-2017

SAP Support for SQL Server 2017


We are happy to share that SAP released support for SQL Server 2017 for some SAP NetWeaver based systems. In this blog we will talk about how you can update your existing installations on SQL Server to SQL Server 2017, what you need to do when installing a new SAP NetWeaver based system on SQL Server 2017 and what to consider when upgrading an SAP NetWeaver based system which is already running on SQL Server 2017. We will also list all supported products and the required Support Package Stacks.

For the first time, you do not need to apply new Support Package Stacks if you are already running on a supported SPS for SQL Server 2016. The minimum SPS levels are the same; you only need a new SAP Kernel and ODBC driver.

SQL Server RDBMS DVD

Important: Please read SAP Note 2139358 (Login required) and SAP Note 2534720 (Login required) on more information about how and where to download the SQL Server 2017 RDBMS DVD.

Installation of SAP NetWeaver

The following SAP NetWeaver based products are supported on SQL Server 2017:

  • SAP products based on SAP NetWeaver Java for releases SAP NetWeaver 7.1 or higher
  • SAP products based on SAP NetWeaver ABAP for releases SAP NetWeaver 7.0 and higher
  • SAP NetWeaver PI for releases SAP NetWeaver 7.1 and higher
  • SAP Solution Manager 7.2

The following SAP NetWeaver based products are not supported on SQL Server 2017:

  • SAP Solution Manager 7.1 and earlier releases of SAP Solution Manager
  • SAP CRM 5.0 and SAP CRM 6.0 (also known as SAP CRM 2007)
  • SAP SCM 5.0 and SAP SCM 5.1 (also known as SAP SCM 2007)
  • SAP SRM 5.0
  • SAP NetWeaver Developer Workplace
  • SAP NetWeaver Java 7.0x

Before you start with the installation, please read the following SAP Notes:

  • Release planning Note – SAP Note 2492596 (Login required)
  • Setting up Note – SAP Note 2484674 (Login required)
  • Required SQL Server patches – SAP Note 62988 (Login required)
  • SWPM Note and required Kernel DVD – SAP Note 1680045 (Login required)
  • Central Note for SL Toolset – SAP Note 1563579 (Login required)

Requirements

  • Windows Server 2012 64-bit or higher
  • SQL Server 2017 Enterprise Edition

Preparation

SAP Software Provisioning Manager

See SAP Note 1680045 (Login required) on where to download the SWPM. Please make sure to always download the latest SWPM, and the SWPM that matches the SAP NetWeaver version that you want to install (e.g. for SAP NetWeaver 7.0 based products you need to download 70SWPM).
You need to download at least SAP Software Provisioning Manager 1.0 SP23.

Kernel DVD

For installations with SWPM 7.0x (70SWPM), see SAP Note 1680045 (Login required) on where to download the latest Kernel DVD.
Use the archive-based installation for installations with SWPM.

For SAP products based on SAP NetWeaver 7.4 and higher, use the 7.49 DCK Kernel.
For SAP products based on EHP1 for SAP NetWeaver 7.3 and lower, use the 721_EXT Kernel DVD (only the EXT Kernel is supported).

SAP recommends using the latest SP stack kernel (SAPEXE.SAR and SAPEXEDB.SAR), available in the Support Packages & Patches section of the SAP Support Portal https://launchpad.support.sap.com/#/softwarecenter.

For existing installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 749 PL 500. For details, see release note 2626990.
For new installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 753 PL 100. For details, see DCK note 2556153 and release note 2608318.
For AS ABAP 7.52 this is SAP Kernel 753 PL 100. For details, see release note 2608318.
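The kernel recommendations above can be condensed into a simple lookup. This is an illustrative sketch only; the releases and patch levels are taken from the notes cited above and will change over time, so always check the current SAP notes.

```python
# Map (NetWeaver release, new installation?) -> recommended SP stack kernel,
# per the release/DCK notes cited above. Patch levels change over time.
def recommended_kernel(release, new_install):
    releases_740_751 = {"7.40", "7.50", "7.51"}
    if release in releases_740_751 and not new_install:
        return "749 PL 500"   # existing installations, see release note 2626990
    if release in releases_740_751 and new_install:
        return "753 PL 100"   # new installations, see notes 2556153 / 2608318
    if release == "7.52":
        return "753 PL 100"   # see release note 2608318
    raise ValueError(f"no recommendation recorded for release {release!r}")

print(recommended_kernel("7.50", new_install=False))  # -> 749 PL 500
```

The point of the lookup is simply that existing 7.40–7.51 systems stay on the 749 stack kernel, while new installations and AS ABAP 7.52 go straight to 753.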

Java DVD

See SAP Note 1680045 (Login required) on where to download the Java DVD for the product you want to install.

ODBC driver

If you want to install a distributed SAP system (database and SAP application server running on different hosts), make sure that the latest ODBC driver for Microsoft SQL Server is installed on the host running your SAP application server. See SAP Note 1902794 (Login required)

The ODBC driver can also be downloaded from the Microsoft Download Center

Installation

Install your SAP system using

  • The SAP Software Provisioning Manager SAR archive (SWPM.SAR or 70SWPM.SAR) 1.0 SP23 or higher
  • The SAP Software Provisioning Manager Kernel DVD (for installations with 70SWPM.SAR)
  • Kernel archives of SAP Kernel 7.21 EXT, 7.22 EXT, 7.49 or 7.53
  • The Java DVD as described in SAP Note 1680045 (Login required)
  • The DVDs that were originally shipped with the SAP product (Export DVD, language DVDs…)

To install the SAP NetWeaver based system, follow these steps:

  1. After the installation, upgrade your kernel to the latest 7.21 EXT/7.22 EXT kernel or the 7.49 kernel. The 7.22 EXT kernel is recommended for SAP systems based on SAP NetWeaver 7.31 or lower.
  2. ABAP/ABAP+JAVA: Connect your new SAP System to your SAP Solution Manager (only required if you do not install SAP Solution Manager)
  3. ABAP/ABAP+JAVA: Create a maintenance stack to implement at least (only required if you do not install SAP Solution Manager):
    • SAP Business Suite 2005
      • SAP ERP 6.0 SPS 28 or higher
    • SAP Business Suite 7 Support Release 1
      • SAP CRM 7.0 SPS 18 or higher
      • EHP4 for SAP ERP 6.0 SPS 18 or higher
      • SAP SCM 7.0 SPS 18 or higher
      • SAP SRM 7.0 SPS 19 or higher
    • SAP Business Suite 7i 2010
      • EHP1 for SAP CRM 7.0 SPS 15 or higher
      • EHP5 for SAP ERP 6.0 SPS 15 or higher
      • EHP1 for SAP SCM 7.0 SPS 15 or higher
      • EHP1 for SAP SRM 7.0 SPS 15 or higher
    • SAP Business Suite 7i 2011
      • EHP2 for SAP CRM 7.0 SPS 16 or higher
      • EHP6 for SAP ERP 6.0 SPS 16 or higher
      • EHP2 for SAP SCM 7.0 SPS 16 or higher
      • EHP2 for SAP SRM 7.0 SPS 16 or higher
    • SAP Business Suite 7i 2013
      • EHP3 for SAP CRM 7.0 SPS 10 or higher
      • EHP7 for SAP ERP 6.0 SPS 10 or higher
      • EHP3 for SAP SCM 7.0 SPS 10 or higher
      • EHP3 for SAP SRM 7.0 SPS 10 or higher
    • SAP Business Suite 7i 2016
      • EHP4 for SAP CRM 7.0 SPS 01 or higher
      • EHP8 for SAP ERP 6.0 SPS 01 or higher
      • EHP4 for SAP SCM 7.0 SPS 01 or higher
      • EHP4 for SAP SRM 7.0 SPS 01 or higher
    • SAP NetWeaver 7.1 SPS 20 or higher
    • SAP NetWeaver 7.1 including EHP1 SPS 15 or higher
    • SAP NetWeaver 7.3 SPS 14 or higher
    • SAP NetWeaver 7.3 including EHP1 SPS 17 or higher
    • SAP NetWeaver 7.4 SPS 12 or higher
    • SAP NetWeaver 7.5 SPS 01 or higher
    • SAP NetWeaver 7.51 and higher do not need additional support packages
  4. ABAP/ABAP+JAVA: Use the Software Update Manager (SUM) 1.0 SP17 or higher to implement the maintenance stack (only required if you do not install SAP Solution Manager)
    DO NOT use SPAM to implement the support packages
    DO NOT update SPAM manually but let the SUM update the SPAM as part of the maintenance stack implementation
  5. SAP Solution Manager 7.2 only: Use the Software Update Manager (SUM) 1.0 SP17 to install SAP Solution Manager 7.2 SR1 which already contains the required SPS 01

Post Steps

Configure your SQL Server as described in SAP Note 2484657 (Login required)

System Copy of SAP NetWeaver

You can also copy your SAP System that is running on an older SQL Server release to a machine running a new SQL Server with the following steps:

  • Make sure that your source system is at least on the support package stack described in the installation section of this blog.
  • Copy your SAP system using
    • The SAP Software Provisioning Manager 1.0 SP23 or higher SAR archive (SWPM.SAR or 70SWPM.SAR)
    • The SAP Software Provisioning Manager Kernel DVD for installations with 70SWPM, or the latest archives of the 7.21 EXT, 7.22 EXT, 7.49 or 7.53 kernel
    • The Java DVD as described in SAP Note 1680045 (Login required)
    • The DVDs that were originally shipped with the SAP product (Export DVD, language DVDs…)
  • After the installation, upgrade your kernel to the latest 7.21 EXT, 7.22 EXT, 7.49 or 7.53 kernel. The 7.22 EXT kernel is recommended for SAP systems based on SAP NetWeaver 7.31 or lower.

The steps for a System Copy where the source SAP system is already running on SQL 2017 are the same.

Update or Upgrade of SAP NetWeaver

Only updates or upgrades that are supported by the SAP Software Update Manager are supported for SQL Server 2017. Please read SAP Note 1563579 (Login required) to find the SAP Note for the latest SAP Software Update Manager, which describes the supported update and upgrade scenarios.

As an example, the following upgrades are not supported by the SAP Software Update Manager and are therefore not supported for SQL Server 2017:

Upgrade of SAP CRM 5.0 to SAP CRM 7.0 or EHP1 for SAP CRM 7.0

Upgrade of SAP SCM 5.0 to SAP SCM 7.0 or EHP1 for SAP SCM 7.0

Upgrade of SAP SRM 5.0 to SAP SRM 7.0 or EHP1 for SAP SRM 7.0

Use at least SAP Software Update Manager 1.0 SP22 or SAP Software Update Manager 2.0 SP02.

Orica’s S/4HANA Foundational Architecture Design on Azure


This blog is a customer success story detailing how Cognizant and Orica have successfully deployed and gone live with a global S/4HANA transformation project on Azure. This blog contains many details and analysis of key decision points taken by Cognizant and Orica over the last two years leading to their successful go live in August 2018.

The blog below was written by Sivakumar Varadananjayan. Siva is Global Head of Cognizant’s SAP Cloud Practice, and he has been personally involved in Orica’s 4S program from day one, first as presales head and now as Chief Architect for Orica’s S/4HANA on Azure adoption.

Over the last 2 years, Cognizant has partnered and engaged as a trusted technology advisor and managed cloud platform provider to build Highly Available, Scalable, Disaster Proof IT platforms for SAP S/4HANA and other SAP applications in Microsoft Azure. Our customer Orica is the world’s largest provider of commercial explosives and innovative blasting systems to the mining, quarrying, oil and gas and construction markets, a leading supplier of sodium cyanide for gold extraction, and a specialist provider of ground support services in mining and tunneling. As a part of this program, Cognizant has built Orica’s new SAP S/4HANA Platform on Microsoft Azure and provides a Managed Public Cloud Platform as a Service (PaaS) offering.

Cognizant started the actual cloud foundation work during December 2016. In this blog article, we will cover some of the best practices that Cognizant adopted and share key learnings which may be essential for any customer planning to deploy their SAP workloads on Azure.

The following topics will be covered:

  • Target Infrastructure Architecture Design
    • Choosing the right Azure Region
    • Write Accelerator
    • Accelerated Networking
  • SAP Application Architecture Design
    • Sizing Your SAP Landscape for the Dynamic Cloud
    • Increasing/decreasing capacity
  • HA / DR Design (SUSE HA Cluster)
    • SUSE cluster
    • Azure Site Recovery (ASR)
  • Security on Cloud
    • Network Security Groups
    • Encryption – Disk, Storage account, HANA Data Volume, Backup
    • Role-Based Access Control
    • Locking resources to prevent deletion
  • Operations & Management
    • Reporting
    • Costing
    • Creation of clone environments
    • Backup & restore

Target Infrastructure Architecture Design

The design of a fail-proof infrastructure architecture involves visualizing the end-state with great detail. Capturing key business requirements and establishing a set of design principles will clarify objectives and help in proper prioritization while making design choices. Such design principles include but are not limited to choosing a preferred Azure Region for hosting the SAP Applications, as well as determining preferences of Operating System, database, end user access methodology, application integration strategy, high availability, disaster recovery strategy, definition of system criticality and business impacts of disruption, definition of environments, etc. During the Design phase, Cognizant involved Microsoft and SUSE along with other key program stakeholders to finalize the target architecture based on the customer’s business & security requirements. As part of the infrastructure design, critical foundational aspects such as Azure Region, ExpressRoute connectivity with Orica’s MPLS WAN, and integration of DNS and Active Directory domain controllers were finalized.

At the time of discussing the infrastructure preparation, various topics were derived, including the VNet design (subnet IP ranges), host naming convention, storage requirements, and initial VM types based on compute requirements. In the case of Orica’s 4S implementation, Cognizant implemented a three-tier subnet architecture – Web Tier, Application Tier and Database Tier. The three-tier subnet design was applied for each of the Sandpit, Development, Project Test, Quality and Production environments, giving Orica the flexibility to deploy fine-grained NSGs at the subnet level as per security requirements. Having a clearly defined tier-based subnet architecture also helps avoid complex NSGs being defined for individual VM hosts.

The Web Tier subnet is intended to host the SAP Web Dispatcher VMs; the Application Tier is intended to host the Central Services instance VMs, the Primary Application Server VM and any additional application server VMs; and the Database Tier is intended to host the database VMs. This is supplemented by additional subnets for infrastructure and management components, such as jump servers, domain controllers, etc.
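The tier-per-environment carving described above can be sketched with Python's standard `ipaddress` module. The address space, prefix lengths and tier names below are illustrative assumptions, not Orica's actual ranges:

```python
import ipaddress

# Illustrative address space - not Orica's actual ranges.
vnet = ipaddress.ip_network("10.10.0.0/22")

tiers = ["Web", "Application", "Database", "Management"]

# Carve one /24 per tier out of the /22.
subnets = dict(zip(tiers, vnet.subnets(new_prefix=24)))

for tier, net in subnets.items():
    # Azure reserves 5 addresses per subnet (network, broadcast, 3 internal).
    print(f"{tier:12s} {net}  ({net.num_addresses - 5} usable in Azure)")
```

Planning the ranges up front like this makes it easy to verify that tiers do not overlap before any NSG rules are written against them, and the same layout can then be repeated per environment (Sandpit, Development, and so on) from adjacent address blocks.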

Choosing the Right Azure Region

Although Azure operates over several regions, it is essential to choose a primary region into which main workloads will be deployed. Choosing the right Azure region for hosting the SAP Application is a vital decision to be made. The following factors must be considered for choosing the Right Azure Region for Hosting: (1) Legal and regulatory requirements dictating physical residence, (2) Proximity to the company’s WAN points of presence and end users to minimize latency, (3) Availability of VMs and other Azure Services, and (4) Cost. For more information on availability of VMs, refer to the section “Sizing Your SAP Landscape for the Dynamic Cloud” under SAP Application Architecture Design.

Accelerated Networking

Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. Without accelerated networking, all network traffic in and out of the VM must traverse the host machine and the virtual switch.
With accelerated networking, network traffic arrives at the VM’s network interface (NIC), and is then forwarded directly to the guest VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. While essential for good and predictable HANA performance, not all VM types and operating system versions support Accelerated Networking, and this must be taken into account for the infrastructure design. Also, it is important to note that Accelerated Networking helps to minimize latency for network communication within the same Azure Virtual Network (VNet). This technology has minimal impact to overall latency during network communication over multiple Azure VNets.

Storage

Azure provides several storage options, including Azure Disk Storage – Standard, Premium and Managed (attached to VMs) – and Azure Blob Storage. At the time of writing this article, Azure is the only public cloud service provider that offers a Single VM SLA of 99.9%, under the condition that the VM operates with Premium Disks attached to it. The cost value proposition of choosing Premium Disks over Standard Disks for the purpose of obtaining the Single VM SLA is significantly beneficial, and hence Cognizant recommends provisioning all VMs with Premium Disks for application and database storage. Standard Disks are appropriate for storing database backups, and Azure Blob storage is used for snapshots of VMs and for transferring and storing backups as per the retention policy. For achieving an SLA of > 99.9%, High Availability techniques can be used; refer to the section ‘High Availability and Disaster Recovery’ in this article for more information.

Write Accelerator

Write Accelerator is a disk capability for M-Series Virtual Machines (VMs) on Azure running on Premium Storage with Azure Managed Disks exclusively. As the name states, the purpose of the functionality is to improve the I/O latency of writes against Azure Premium Storage. Write Accelerator is ideally suited for disks to which database redo logs are written, to meet the performance requirements of modern databases such as HANA. For production usage, it is essential that the final VM infrastructure thus set up is verified using the SAP HANA H/W Configuration Check Tool (HWCCT). These results should be validated with relevant subject matter experts to ensure the VM is capable of operating production workloads and is certified by SAP as well.

SAP Application Architecture Design

The SAP Application Architecture Design must be based on the guiding principles that must be adopted for building the SAP applications, systems and components. To have a well laid out SAP Application Architecture Design, you must determine the list of SAP Applications that are in scope for the implementation.

It is also essential to review the following SAP Notes that provide important information on deploying and operating SAP systems on public cloud infrastructure:

  • SAP Note 1380654 – SAP support in public cloud environments
  • SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types
  • SAP Note 2316233 – SAP HANA on Microsoft Azure (Large Instances)
  • SAP Note 2235581 – SAP HANA: Supported Operating Systems
  • SAP Note 2369910 – SAP Software on Linux: General information

    Choosing the OS/DB Mix of your SAP Landscape

    Using this list, the SAP Product Availability Matrix can be leveraged to determine whether the preferred Operating System and Database is supported for each of the SAP application in scope. From an ease of maintenance and management perspective, you may want to consider not having more than two variants of databases for your SAP application database. SAP has started providing support for SAP HANA database for most of the applications and since SAP HANA supports multi-tenant database, you might as well want to have most of your SAP applications run on SAP HANA database platform. For some applications that do not support HANA database, other databases might be required in the mix. SAP’s S/4HANA application runs only on HANA database. Orica chose to run HANA for every SAP application where supported and SQL Server otherwise – as this was in line with the design rationale and simplified database maintenance, backups, HA/DR configuration, etc.

    With SAP HANA 2.0 becoming mainstream (it is also mandatory for S/4HANA 1709 and higher), fewer operating systems are supported than with SAP HANA 1.0. For example SUSE Enterprise for SAP Applications is now the only flavor of SUSE supported, while “normal” SUSE Enterprise was sufficient for HANA 1.0. This may have a licensing impact for the customers, as Azure only provides BYO Subscription images. Hence customers must supply their own operating system licenses.

    Type of Architecture

    SAP offers deploying its NetWeaver Platform based applications either in a Central System Architecture (Primary Application Server and Database on the same host) or in a Distributed System Architecture (Primary Application Server, Additional Application Servers and Database on separate hosts). You need to choose the type of architecture based on a thorough cost value proposition, business criticality and application availability requirements. You also need to determine the number of environments that each SAP application will require, such as Sandbox, Development, Quality, Production, Training, etc. This is predominantly determined by the change governance that you plan to set up for the project. Systems that are business critical and have requirements for high availability, such as the Production environment, must always be deployed in a Distributed System Architecture with a High Availability cluster. In the case of public cloud infrastructure this is even more critical, as VMs tend to fail more frequently than traditional “expensive” on-premises kit (e.g. IBM p-Series). In the past one could afford to be lax about HA, because individual servers tended to fail only rarely; however, we are seeing a relatively higher rate of server failure in the public cloud, so if uptime is important, HA must be set up for business-critical systems. For both critical and non-critical systems, parameters should be enabled to ensure the application and database start automatically in the event of an inadvertent server restart. Disaster Recovery is recommended for most business-critical SAP applications, based on the Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

    Cognizant designed a four-system landscape and a distributed SAP architecture for Orica. We separated the SAP application and DB servers because, when taken in the context of HANA MDC and running everything on HANA by default, a Central System Architecture no longer makes sense. We have also named the HANA database SIDs without any correlation to the tenants that each HANA database holds. This is done with the intention of future-proofing, allowing the tenants to change HANA hosts in the future if needed. In the case of Orica, we have also implemented custom scripting for the automated start of SAP applications, which can further be controlled (enabled or disabled) by a centrally located parameter file. High availability is designed for the production and quality environments. Disaster recovery is designed as per Orica’s Recovery Point Objective (RPO) and Recovery Time Objective (RTO), defined by business requirements.

    Sizing Your SAP Landscape for the Dynamic Cloud

    Once you have determined the type of SAP architecture, you will have a fair idea of the number of individual Virtual Machines required to deploy each of these components. From an infrastructure perspective, the next step is to size the Virtual Machines. You can leverage standard SAP methodologies such as Quick Sizer, using Concurrent User or Throughput based sizing; best practice is Throughput based sizing. This provides the SAPS and memory requirements for the application and database components, and the memory requirement in the case of a HANA database. Tabulate the critical sizing information in a spreadsheet and refer to the standard SAP notes to determine the equivalent VM types in the Azure cloud infrastructure. Microsoft obtains SAP certification for new VMs on a regular basis, so it is always advisable to check the recent SAP notes for the latest information.

    For HANA databases, you will most often require VMs of the E-Series (Memory Optimized) or M-Series (Large Memory Optimized) types, depending on the size of the database. At the time of writing this article, the maximum capacities supported with the E-Series and M-Series are 432 GB and 3.8 TB respectively. The E-Series offers a better cost value proposition compared to the earlier GS-Series VMs offered by Azure. At this point you need to verify that the resulting VMs are available in the Azure region you have chosen to host your SAP landscape. Depending upon the geography, some of these VM types may not be available, so it is essential to choose the right geography and Azure region where all the required VM types are available.

    However, remember that the public cloud offers great scalability and elasticity. You do not need an accurate peak sizing to provision your environments; you always have room to scale your SAP systems up or down based on actual usage, by monitoring utilization metrics such as CPU, memory and disk utilization. Within the same Virtual Machine series, this can be done just by powering off the VM, changing the VM size and powering it back on; typically, the whole VM resizing procedure takes no more than a few minutes. Ensure that your system will fit into what is available in Azure at any point in time. For instance, spinning up a 1 TB M-Series VM and then finding that a 1.7 TB instance is needed instead does not cause much of a hassle, as it can easily be re-sized. However, if you are not sure whether your system will grow beyond 3.8 TB (the maximum capacity of the M-Series), you are at greater risk, as complications will start to creep up (Azure Large Instances may be needed for rescue in such cases). Reserved Instances are also available in Azure and can be leveraged for further cost optimization, provided accurate sizing of actual hardware requirements is performed before purchasing (to avoid over-committing).
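The scale-up/scale-down reasoning above amounts to picking the smallest size that covers current demand and re-evaluating as the database grows. A sketch, where the size table is illustrative only (certified VM types and capacities change; check the current SAP notes):

```python
# Pick the smallest VM size that covers the required memory.
# Sizes/capacities below are illustrative assumptions, not an official list -
# certified VM types change over time; check the current SAP notes.
SIZES = [
    ("E16s_v3", 128), ("E32s_v3", 256), ("E64s_v3", 432),
    ("M64s", 1024), ("M64ms", 1792), ("M128ms", 3892),
]

def pick_vm(required_gb):
    for name, mem_gb in SIZES:
        if mem_gb >= required_gb:
            return name
    # Beyond the largest VM, HANA Large Instances are the fallback.
    return "HANA Large Instances"

print(pick_vm(1000))   # smallest size covering 1 TB
print(pick_vm(5000))   # beyond M-Series capacity
```

Because a VM can be resized within a series in minutes, an undersized first guess is cheap to correct; the expensive mistake is a database that outgrows the largest available VM, which is the case the fallback branch flags.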

High Availability and Disaster Recovery

Making business-critical systems such as SAP S/4HANA highly available (> 99.9% availability) requires a well-defined high-availability architecture design. Azure offers an SLA of 99.95% for VMs deployed in an availability set within a region, and 99.99% when the compute VMs are deployed across multiple Availability Zones within a region. To achieve the latter, check whether Availability Zones are offered in the region chosen for hosting the SAP applications; note that Azure Availability Zones are still being rolled out by Microsoft and will eventually arrive in all regions over time. Also, components that are a Single Point of Failure (SPOF) in SAP must be deployed in a cluster, such as a SUSE cluster, and such a cluster must reside within an availability set to attain 99.95% availability. To achieve high availability at the Azure infrastructure level, all these VMs are added to an availability set and exposed through an Azure Internal Load Balancer (ILB). These components include the (A)SCS cluster, DB cluster and NFS. It is also recommended to provision at least two application servers within an availability set (Primary Application Server and Additional Application Server), so that the application servers are redundant. Cognizant, Microsoft and SUSE worked together to build a collaborative solution based on a multi-node iSCSI server configuration; the SAP applications at Orica were the first in the Azure platform to be deployed with this multi-node iSCSI server HA configuration.

As discussed earlier, where SAP components are not protected against failure by a high-availability setup, it is recommended to provision such VMs with Premium Storage disks attached, to take advantage of the single-VM SLA. All VMs at Orica use Premium disks for their application and database volumes, because this is the only way they are covered by the SLA, and we also found performance to be better and more consistent.
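To put the SLA percentages above in perspective, they translate into concrete monthly downtime budgets. A minimal sketch, assuming a 30-day month for simplicity:

```python
# Convert an availability SLA into a maximum-downtime budget per month.
# 99.95% (availability set) and 99.99% (availability zones) are the
# figures quoted above; a 30-day month is assumed.
def downtime_minutes_per_month(sla_percent, days=30):
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(downtime_minutes_per_month(99.95), 1))  # 21.6 minutes/month
print(round(downtime_minutes_per_month(99.99), 1))  # 4.3 minutes/month
```

This is why the jump from availability sets to Availability Zones matters for business-critical SAP systems: the allowed downtime budget shrinks by roughly a factor of five.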

Details of the SUSE cluster setup are described below.

SUSE Cluster Setup for HA:

(A)SCS-layer high availability is achieved using a SUSE HA Extension cluster. DRBD technology is not used to replicate application files such as SAP kernel files. This design is based on a recommendation from Microsoft and is supported by SAP as well. The reason for not enabling DRBD replication is the potential performance issues that can appear when synchronous replication is configured, and that recovery cannot be guaranteed with such a configuration when ASR is enabled for disaster-recovery replication at the application layer. NFS-layer high availability is achieved using a SUSE HA Extension cluster, with DRBD technology used for data replication. Microsoft also recommends using a single NFS cluster to serve multiple SAP systems, to reduce the complexity of the overall design.

HA testing needs to be performed thoroughly and must simulate many different failure situations beyond a simple clean shutdown of the VM: e.g. using the halt command to simulate a VM power-off, or adding firewall rules in the NSG to simulate problems with the VM's network stack.

We are excited to announce that Orica is the first customer on the Multi-SID SUSE HA cluster configuration.

More details on the technical configuration of setting up HA for SAP are described here. Pacemaker on SLES in Azure is recommended to be set up with an SBD device; the configuration details are described here. Alternatively, if you do not want to invest in one additional virtual machine, you can use the Azure Fence agent. The downside of the Azure Fence agent is that a failover can take between 10 and 15 minutes if a resource stop fails or the cluster nodes can no longer communicate with each other.

Another important aspect of ensuring application availability is surviving a disaster, through a well-architected DR solution that can be invoked via a well-orchestrated disaster recovery plan.

Azure Site Recovery (ASR):

Azure Site Recovery assists in business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. At the time of failover, apps are started in the secondary location and accessed from there, after the relevant changes are made in the cluster configuration and DNS. After the primary location is running again, you can fail back to it. ASR was not tested for Orica at the time of the current go-live, as Microsoft's GA support for SLES 12.3 came too close to cut-over. However, we are currently evaluating this feature and will use it for DR at the go-live of the next phase.

Security on Cloud

Most of the traditional security concepts, such as security at the physical, server, hypervisor, network, compute and storage layers, apply to the overall security of the cloud. These are provided inherently by the public cloud platform and are audited by third-party IT security certification providers. Security on the cloud helps you protect the applications you host by leveraging the features and customization options available through the cloud provider, together with the security features of the applications themselves.

Network Security Groups

Network Security Groups (NSGs) are rules applied at the networking layer that control traffic to and from VMs hosted in Azure. Separate NSGs can be associated with the production, non-production, infrastructure, management and DMZ environments. It is important to define the NSG rules in a way that is modular and easy to comprehend and implement, and to enforce strict procedures for controlling these rules. Otherwise you can end up with unnecessary redundant rules that make it harder to troubleshoot network communication issues.

In the case of Orica, an initiative was implemented to optimize the number of NSG rules by adding multiple ports for the same source and destination ranges to a single rule. A change approval process was introduced once the NSGs were associated. All NSG rules are maintained in a custom-formatted template (CSVs), which a script uses for the actual configuration in Azure. We expect it would be too difficult to do this manually for multiple VNets across multiple regions (e.g. primary, DR, etc.).
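A script-driven approach like the one described can be sketched as follows. The CSV column layout and rule names below are hypothetical (Orica's actual template format is not published); the sketch only shows the parse-then-apply shape of such automation, with several ports per rule to keep the rule count down.

```python
# Sketch: read NSG rules from a CSV template and prepare the parameters an
# automation script would pass to Azure. Column layout is hypothetical.
import csv
import io

TEMPLATE = """name,priority,direction,source,destination,ports,access
allow-sap-dispatcher,100,Inbound,10.0.1.0/24,10.0.2.0/24,3200;3300,Allow
deny-all-inbound,4096,Inbound,*,*,*,Deny
"""

def load_rules(text):
    rules = []
    for row in csv.DictReader(io.StringIO(text)):
        # multiple ports per rule reduces the total rule count, as described above
        row["ports"] = row["ports"].split(";")
        row["priority"] = int(row["priority"])
        rules.append(row)
    return rules

for rule in load_rules(TEMPLATE):
    print(rule["name"], rule["priority"], rule["ports"])
```

Keeping the rules in version-controlled CSVs makes the change approval process auditable: a reviewer diffs the template, and the script re-applies the full rule set identically across VNets and regions.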

Encryption of Storage Account and Azure Disk

Azure Storage Service Encryption (SSE) is recommended to be enabled for all the Azure Storage Accounts. Through this, Azure Blobs will be encrypted in the Azure Storage. Any data that is written to the storage after enabling the SSE will be encrypted. SSE for Managed Disks is enabled by default.

Azure Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to help you control and manage the disk-encryption keys and secrets in your key vault subscription. Encryption of the OS volume will help protect the boot volume’s data at rest in your storage. Encryption of data volumes will help protect the data volumes in your storage. Azure Storage automatically encrypts your data before persisting it to Azure Storage, and decrypts the data before retrieval.

SAP Data at Rest Encryption

Data at rest is encrypted for SAP applications by encrypting the database. SAP HANA 2.0 and SQL Server natively support data-at-rest encryption, providing the additional security needed in case of data theft. In addition, the backups of both databases are encrypted and secured by a passphrase, to ensure the backups are only readable and usable by authorized users.

In the case of Orica, both Azure Storage Service Encryption and Azure Disk Encryption were enabled. In addition to this, SAP Data at Rest Encryption was enabled in SAP HANA 2.0 and TDE encryption was enabled in SQL Server database.

Role-Based Access Control (RBAC)

Azure Resource Manager provides a granular Role-Based Access Control (RBAC) model for assigning administrative privileges at the resource level (VMs, storage, etc.). Using an RBAC model (e.g. for the service development team or the app development team) helps segregate and control duties, and grants users/groups only the access to selected resources that they need to perform their jobs. This enforces the principle of least privilege.

Resource Lock

An administrator may need to lock a subscription, resource group, or resource to prevent other users in the organization from accidentally deleting or modifying critical resources. The lock level can be set to CanNotDelete or ReadOnly; in the portal, the locks are called Delete and Read-only respectively. Unlike RBAC, resource locks prevent intentional and accidental deletion of resources by all users, including those with owner access. CanNotDelete means authorized users can still read and modify a resource, but cannot delete it. ReadOnly means authorized users can read a resource, but cannot delete or update it; applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role. For Orica, we have configured this for critical pieces of Azure infrastructure, to provide an additional layer of safety.

Operations & Management

Cognizant provides Managed Platform as a Service (mPaaS) for Orica through the Microsoft Azure cloud. Cognizant has leveraged several advantages of operating SAP systems in the public cloud, including scheduled automated startup and shutdown, automated backup management, monitoring and alerting, and automated technical monitoring, optimizing the overall cost of technical operations and management. Some of the recommendations are described below.

Azure Costing and Reporting

Azure Cost Management by Cloudyn, a Microsoft subsidiary, allows you to track cloud usage and expenditures for your Azure resources and for other cloud providers, including AWS and Google. Monitoring usage and spending is critically important for cloud infrastructures, because organizations pay for the resources they consume over time. When usage exceeds agreement thresholds, unexpected cost overages can quickly occur.

Reports help you monitor spending to analyze and track cloud usage, costs, and trends. Using Over Time reports, you can detect anomalies that differ from normal trends. More detailed, line-item-level data may also be available in the EA Portal (https://ea.azure.com), which is more flexible than the Cloudyn reports and can be more useful.

Backup & Restore

One of the primary requirements of system availability management as part of technical operations is to protect the systems from accidental data loss due to factors such as infrastructure failure, data corruption or even complete loss of the systems in the event of a disaster. While concepts such as high availability and disaster recovery help mitigate infrastructure failures, a robust backup and restore strategy is essential for handling events such as data corruption or loss of data. Backups allow us to restore an application to a working state after a system corruption, and represent the "last line of defense" in a disaster recovery scenario. The main goal of the backup/restore procedure is to restore the system to a known working state.

Some of the key requirements for Backup and Restore Strategy include:

  • Backup should be restorable
  • Prefer to use native database backup and restore tools
  • Backup should be secure and encrypted
  • Clearly defined retention requirements

VM Snapshot Backups

Azure infrastructure offers native backup for VMs (inclusive of attached disks) using VM snapshots. VM snapshot backups are stored within Azure vaults, which are part of the Azure Storage architecture and are geo-redundant by default. Note that Microsoft Azure does not support traditional retention media such as tapes; data retention in the cloud is achieved using technologies such as Azure Vault and Azure Blob, which are part of the Azure storage account architecture. In general, all VMs provisioned in Microsoft Azure (including databases) should be included in the VM snapshot backup plan, although the frequency can vary with the criticality of the environment and of the application. Encryption should be enabled at the Azure storage account level so that backups stored in the Azure vault are also encrypted when accessed outside the Azure subscription.

Database Backups

While the restorability of the file system and the database software can be achieved using the VM snapshot process described above, a VM snapshot may not be able to restore the database itself to a consistent state. Hence, database backups are essential to guarantee the restorability of databases. All databases in the landscape should be included in the full database backups, with schedules defined based on the business criticality and the requirements of the application. The consistency of the database backup file should be checked after the backup is taken, to ensure the restorability of the backup.

In addition to full database backups, it is recommended to perform transaction log backups at regular intervals. The frequency must be higher for a production environment, to support point-in-time recovery requests, and can be relatively lower for non-production environments.

Both full database backups and transaction log backups must be transferred to an offline device (such as Azure Blob) and retained as per the data retention requirements. It is recommended to encrypt all database backups using the native database backup encryption methodology if the database supports it; SAP HANA 2.0 supports native DB backup encryption.

Database Backup Monitoring and Restorability Tests

Backup monitoring is essential to ensure that backups occur at the agreed frequency and schedule; this can be automated through scripts. Restorability tests of backups help guarantee that an application can be restored in the event of a disaster, data loss or data corruption.
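A backup-monitoring script of the kind mentioned above can be sketched as follows. The SIDs, timestamps and the 26-hour threshold are illustrative assumptions; in a real setup the timestamps would come from the backup catalog of each database.

```python
# Sketch: flag databases whose last successful full backup is older than
# the allowed window. Data below is illustrative; real timestamps would
# come from backup catalog queries.
from datetime import datetime, timedelta

last_full_backup = {
    "PRD": datetime(2018, 9, 2, 1, 30),
    "QAS": datetime(2018, 8, 28, 2, 0),
}

def overdue(backups, now, max_age_hours=26):
    """Return the SIDs whose last backup is older than max_age_hours."""
    limit = timedelta(hours=max_age_hours)
    return [sid for sid, ts in backups.items() if now - ts > limit]

print(overdue(last_full_backup, datetime(2018, 9, 2, 23, 0)))  # ['QAS']
```

Running such a check on a schedule and alerting on a non-empty result turns "backups should be restorable" from a policy statement into something continuously verified.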

Conclusion

Cognizant's SAP Cloud Practice, in collaboration with SAP, Microsoft and SUSE, leveraged and built some of the best practices for deploying an SAP landscape in Azure for Orica's 4S Program. This article has presented some of the key topics that are relevant for architecting an SAP landscape on Azure. We hope you found this blog article useful; feel free to add your comments.

Moving SAP on SQL Server and Windows into Azure – what SQL Server and Windows release should I use


As Azure has become a very popular, secure and scalable platform for deploying mission-critical workloads like SAP applications, Microsoft is trying to lower the bar for moving your applications to Azure even further. One of the measures Microsoft took was announced in this blog:

https://azure.microsoft.com/en-us/blog/announcing-new-offers-and-capabilities-that-make-azure-the-best-place-for-all-your-apps-data-and-infrastructure/

In essence, the blog announces free extended security patches for Windows Server 2008 (R2) and SQL Server 2008 (R2) beyond the end of the usual 10-year support lifecycle. Though this sounds appealing at first glance, there are many reasons why you as an SAP customer should not take this extension as motivation to delay a move of your mission-critical systems to more recent Windows Server and SQL Server releases.

Why do we not recommend leveraging these older Windows and SQL Server releases in Azure for SAP workloads or other similarly large-scale, mission-critical enterprise applications? The short answer is that we introduced a lot of features and functionality into the Windows Server releases from the Windows Server 2012 family through Windows Server 2019, and into more recent SQL Server releases, that either accommodate and improve the handling of SAP workloads or improve the integration into Azure as an IaaS platform. SAP NetWeaver based systems typically run your company's most critical business processes and shouldn't be left running on a 10-year-old operating system and database platform.

When we look into the details of the changes in more recent Windows Server versions and in Azure that improve integration, scalability and reliability, the following documented changes are noteworthy:

These are the most obvious reasons. Beyond those, a lot of improvements made it into more recent Windows kernels as a result of SAP customer support cases. Significant improvements in scalability and reliability are part of these newer OS releases, which in many cases improve the efficiency of SAP workloads in virtualized environments. As this list makes clear, we highly recommend using Windows Server 2016 as the operating system for your SAP deployments in Azure.

On the SQL Server side, the changes are even more impressive. The first striking improvements, introduced with SQL Server 2012 and expanded and improved in more recent releases, are:

SQL Server Column store: SQL Server Column Store is a ground breaking feature that can be leveraged with SAP BW, but also with other NetWeaver based SAP software. The best SAP centric documentation can be found in these articles: https://blogs.msdn.microsoft.com/saponsqlserver/tag/columnstore/ .

Besides columnstore for SAP BW, SQL Server 2016 introduced modifiable non-clustered columnstore indexes that can be applied to tables of SAP ERP systems and other non-BW SAP NetWeaver applications.

  • SQL Server AlwaysOn: a major improvement over SQL Server Database Mirroring. SQL Server AlwaysOn is not only a complete high-availability and disaster-recovery functionality, but also allows offloading backups to the secondary nodes. Combined with the Windows Server 2016 Cloud Witness and SQL Server Distributed Availability Groups, this is the ideal functionality for configuring high availability and disaster recovery in Azure IaaS deployments.
  • SQL Server Query Store: allows you to track query performance over the long term. In cases of performance degradation, the functionality allows you to roll a query back to its best-performing execution plan. More details can be read in: https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
  • Resumable online index builds allow building indexes piecemeal, in chunks, with lower workload impact. This functionality was introduced in SQL Server 2017. More details can be found here: https://www.mssqltips.com/sqlservertip/4987/sql-server-2017-resumable-online-index-rebuilds/
  • SQL Server consistency checks (checkdb) were majorly improved in SQL Server 2016. The improvements resulted in much better checkdb throughput and better resource control.

Other very important functionality that helps SQL Server leverage Azure capabilities includes:

Other functionality that is less obvious for SAP workloads:

  • SQL Server 2016 introduced functionality to accelerate execution of SAP CDS views and functions related to CDS views
  • In the latest SQL Server releases, DMVs got changed and added to improve functionality of NetWeaver’s DBACockpit

Given all the new functionality and the improvements that make a big difference when running SAP workloads, it is highly recommended to move to a minimum of SQL Server 2016, or to consider SQL Server 2017, when moving your SAP workload to Azure IaaS, instead of staying on nearly 10-year-old releases of Windows and SQL Server that were never really adapted to usage in Azure or to more recent SAP releases and features (like SAP CDS views).

SAPS ratings on Azure VMs – where to look and where you can get confused


When moving SAP software to Azure, you need to decide which Azure VM types you want to use. One of the major criteria for selecting a VM is the throughput requirement of the SAP workload. SAP decided decades ago to characterize throughput in SAPS (SAP Application Performance Standard). To measure the SAPS a server or an Azure VM can deliver, server manufacturers or cloud service providers need to run the SAP SD standard application benchmark. One hundred SAPS are defined as 2,000 fully business-processed order line items per hour (https://www.sap.com/about/benchmark/measuring.html ).

As we certify Azure VMs for SAP NetWeaver and SAP HANA, one of the main pieces of information we need to deliver is the throughput, measured in SAPS, of the VM to be certified. SAP, in turn, documents that data for Azure in SAP note #1928533. This note is the official source for the SAPS of SAP NetWeaver certified VMs: it lists the number of vCPUs, the memory and the SAPS for each Azure VM type that is certified to run SAP NetWeaver workloads.

So where is the confusion then?

The first area of confusion is created by the way publicly released SAP SD standard application benchmarks are documented by SAP in the case of hyperscale clouds like Azure. When you follow this link, you get a list of SAP SD benchmarks released on Azure VMs. Note that we are not forced to release the benchmarks conducted for certification of Azure VMs; as a result, you won't find a released benchmark in this location for every certified Azure VM type. When you click on a single benchmark record, data like this is displayed:

[Screenshot: details of a published SAP SD benchmark record]

And this is where the problem starts. Nowhere in the displayed data will you find the vCPU and memory configuration of the VM that was tested. Instead, you find the technical data of the host the VM ran on. In terms of the number of CPUs and cores, that data does not even reflect the real configuration of the host, but lists the theoretical capacity for the case where Intel Hyperthreading is configured on the host. In the end, you cannot draw any conclusions about the number of vCPUs or the volume of memory of, e.g., the M128s VM type (as shown above) from the official SAP benchmark webpage. To get that data, you need to check the Azure pricing page or SAP note #1928533. So the data displayed on the official SAP benchmark webpage only confuses when it comes to getting the SAPS of a specific Azure VM type for SAP sizing purposes. Though it displays the SAPS values of a certain VM type (which you can also get in SAP note #1928533), they are reported together with the host's technical data; data that is unnecessary for sizing Azure VM infrastructure according to SAPS requirements.

The second area of confusion arises when you compare different VM types and calculate the SAPS per vCPU. Please note that we always talk about vCPUs or CPU threads as the single unit that shows up in, e.g., Windows Task Manager. We avoid the term 'core' for VMs on purpose, because it has a very fixed meaning on bare metal: in the Intel bare-metal world, one core represents one CPU thread when the server is configured without Hyperthreading, or two CPU threads when Hyperthreading is enabled.

When you calculate the SAPS a single vCPU of an Azure VM can deliver as throughput, you can get somewhat surprising results when comparing different Azure VM types. E.g., let's check the DS14v2, which in SAP note #1928533 is reported with:

  • 16 vCPUs, 112GiB memory and 24180 SAPS.
  • We look at 24180 / 16 = 1511 SAPS per vCPU as throughput

Moving to the DS16v3, which has the same number of vCPUs (though only 64GiB memory), you calculate only:

  • 17420 SAPS / 16 = 1088 SAPS throughput per CPU thread

This is somewhat surprising, since the DS16v3 is a more recent VM type and you would expect performance or throughput per vCPU to improve over time. The reason this is not the case is that we introduced VM types that run on hosts with Intel Hyperthreading enabled, meaning one physical processor core on the host server represents two CPU threads. A single vCPU of an Azure VM is then mapped onto one of the two CPU threads of a hyperthreaded core on the host server. Hyperthreading on a bare-metal server improves the overall throughput, but it does not double the throughput the way it doubles the number of CPU threads of the host. The throughput improvement from Hyperthreading under classical SAP workload ranges from 30-40%; as a result, one core with two hyperthreaded CPU threads delivers 130-140% of the throughput the same processor core delivers without Hyperthreading. This means that a single CPU thread of a hyperthreaded core delivers between 65-70% of what a non-hyperthreaded core delivers with its single CPU thread.

That is exactly the difference you are seeing between the SAPS a single vCPU delivers on a DS14v2 compared to a DS16v3.
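The arithmetic behind this comparison is easy to reproduce. The sketch below uses the SAPS and vCPU figures quoted above from SAP note #1928533 and derives the ratio between a hyperthreaded CPU thread and a non-hyperthreaded core:

```python
# Reproduce the SAPS-per-vCPU comparison from the text, using the figures
# quoted from SAP note #1928533.
ds14v2_saps_per_vcpu = 24180 / 16   # host without Hyperthreading
ds16v3_saps_per_vcpu = 17420 / 16   # host with Hyperthreading enabled

print(ds14v2_saps_per_vcpu)   # 1511.25
print(ds16v3_saps_per_vcpu)   # 1088.75

# Throughput of one hyperthreaded CPU thread relative to one
# non-hyperthreaded core:
ratio = ds16v3_saps_per_vcpu / ds14v2_saps_per_vcpu
print(round(ratio, 2))        # 0.72
```

The resulting ratio of roughly 72% lines up with the 65-70% rule of thumb described above for a single CPU thread of a hyperthreaded core.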

So far, the following NetWeaver and/or SAP HANA certified Azure VM families are running on host hardware where Intel Hyperthreading is enabled:

  • D(S)v3
  • E(S)v3
  • M-Series

As mentioned earlier, the detailed data displayed by SAP on their benchmark webpage gives no indication of whether Hyperthreading is enabled, since it just lists the theoretical capacity of the host. Indications of whether Hyperthreading is enabled on the Azure hosts running certain VM families are published on the Azure pricing webpage as well. You can get indications like the one shown in the screenshot below:

[Screenshot: Azure pricing page indicating hyperthreaded VM families]

IMPORTANT:

Keep in mind that VM types have certain bandwidth limitations as well. In general, the smaller the VM within a VM family, the smaller the storage and network bandwidth. Large VMs, like the M128s, M128ms or ES64v3, are the only VM running on their host, and as a result benefit from the complete network and storage bandwidth the host has available. For smaller VMs, the network and storage bandwidth must be divided across multiple VMs. Especially for SAP HANA, but also for SAP NetWeaver, it is vitally important that a VM running an intensive workload does not steal CPU, memory, network or storage bandwidth from other VMs running on the same host. As a result, when sizing a VM, you also need to take the required network and storage bandwidth into account. Detailed network and storage throughput data for each VM type can be found here:

We hope this clarifies aspects of the different SAPS-per-vCPU throughput calculations and measurements, and explains why the SAP benchmark webpages are not suited for sizing checks; SAP note #1928533 should be consulted instead.

S/4H Installation in Azure – SETUP AND CONFIG IN ONE DAY


Overview

You want to perform a setup of SAP S/4HANA in Azure, and you want to do it quickly so that you can experience the overall process and get ready for your landscape deployment. If so, this document is for you: it enables you to perform the setup and configuration of S/4HANA in Azure.

In this setup, we used the embedded option of SAP Fiori. What is the embedded option? Don't worry, we will cover the basics later in this document. This is a hybrid-mode installation where the SAP application layer runs on Windows and the HANA Large Instances run on the Linux operating system.

Yes! You can accomplish this all in less than 1 day. Excited? Let’s begin…

Read full documentation here: S4H-in-Azure-Setup-and-Config-in-One-day


SAP on Azure High Availability Systems with Heterogenous Windows and Linux Clustering and SAP HANA


SAP HANA is the well-known technology from SAP, powering different SAP applications like SAP S/4 HANA, SAP BW for HANA, SAP Business Suite on HANA, etc. 

High availability (HA) is by now a commodity and a must-have feature expected by many SAP customers running their productive SAP HANA systems in the Azure cloud.

SAP HANA runs only on Linux distributions.

Typically, in an HA scenario we cluster the SAP Single Points of Failure (SPOFs), like the DBMS and the SAP central services (SAP ASCS/SCS instance), and have at least two redundant SAP application servers.

If you deploy an SAP system completely on Linux (using SLES or Red Hat), the SAP HA architecture would include:

  •  [Linux] Clustered SAP HANA with Pacemaker on Linux
  •  [Linux] Clustered SAP ASCS/SCS with Pacemaker on Linux
    •  [Linux] A file share for SAP GLOBAL host
  •  [Linux] At least two SAP application servers on Linux

So, what does the Windows and Windows Failover Clustering have to do with SAP HA systems running on HANA?

Well, another possible HA setting with SAP HANA is to:
  • [Linux] Cluster SAP HANA with Pacemaker on Linux
  • [Windows] Use Windows Failover Cluster for SAP ASCS/SCS instance
    •  One option is to use HA file share, for SAP GLOBAL host
    •  Another option is to use shared disks
  • [Windows] At least two SAP application servers on Windows


In the architecture below, we use a file share construct (instead of shared disks) for the SAP ASCS/SCS instance.

The highly available SMB file share is implemented using Windows Scale-Out File server (SOFS) and Storage Spaces Direct (S2D).

This scenario is supported by SAP, as HANA client is available and supported on Windows OS.

The main question is: why would a customer run such a heterogeneous OS/clustering scenario?

Well, there are a few cool features of Windows clustering for the SAP ASCS/SCS instance that bring valuable benefits, and that are unfortunately not available in a Linux Pacemaker cluster for ASCS/SCS:

  • Fault-tolerant SAPMNT file share on a Windows cluster

    In a Windows failover cluster, a feature called Continuous Availability (CA) offers a fault-tolerant SAPMNT share during failover.

    CA is available on:
    – a clustered file server for general use (in combination with a shared disk)
    – a clustered Scale-Out File Share (in combination with S2D)

    A typical beneficiary would be, for example, active SAP batch processes that continuously write their logs to the SAP GLOBAL host via the SAPMNT file share. Without this feature, a failover of the SAPMNT file share cancels active SAP batch jobs (due to the loss of the file handle), and you need to restart them from the beginning; not a funny situation if you run business-critical reports that must finish on time. With the CA feature enabled, batch jobs are not canceled during an SAPMNT file share failover (for unplanned or planned downtime) and keep running happily until they are finished!

    More details can be found in the blog New Failover Clustering Improvements in Windows Server 2012 and Its Benefits for SAP NetWeaver High Availability and in SAP note 2287140 – Support of Failover Cluster Continuous Availability feature (CA). This feature is available in Windows Server 2012 and higher releases.

  • The SAP Installer fully supports Windows failover clustering with shared disks as well as file share, which greatly simplifies deployment procedure.

  • SAP documentation fully describes high availability with Windows clustering for SAP ASCS/SCS instance

  • On Azure, Windows Failover Clustering supports the Multi-SID option, i.e. the ability to install multiple SAP ASCS/SCS instances belonging to different SAP systems into one cluster. This way, customers can consolidate SAP load and reduce cost in Azure.

    More information on Multi-SID can be found in official SAP on Azure guides:
    SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and file share on Azure
    SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and shared disk on Azure

  • Azure Cloud Witness
    Cloud Witness is a type of failover cluster quorum witness that uses Microsoft Azure to provide a vote on cluster quorum. For more information, check Deploy a Cloud Witness for a Failover Cluster.
    Azure Cloud Witness replaces a cluster quorum based on a file share hosted on a standalone VM. Because Cloud Witness is a cloud service, the whole solution is much easier to manage and the overall TCO is reduced.
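Once a general-purpose Azure storage account exists, pointing the cluster quorum at it is a single cmdlet. A sketch with placeholder values for the storage account name and access key:

```shell
# Configure the cluster quorum to use an Azure storage account as Cloud Witness
# ("mystorageaccount" and the access key are placeholders)
Set-ClusterQuorum -CloudWitness `
    -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>"

# Verify the resulting quorum configuration
Get-ClusterQuorum
```

The cluster nodes only need outbound HTTPS (port 443) access to the Azure storage endpoint; no VM has to be deployed or patched for the witness.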


The following table gives a comparison overview:

| Feature | Windows Failover Cluster | Linux Pacemaker Cluster |
| --- | --- | --- |
| Cluster software included in OS license | Yes | Yes |
| Integrated with Enqueue Replication Service (ERS instance) (fault-tolerant SAP transaction lock) | Yes | Yes |
| Fault-tolerant SAPMNT file share (no downtime for SAP batch jobs) | Yes (Continuous Availability feature) | No (cancellation of SAP batch jobs) |
| Clustering without a cluster shared disk | Yes | Yes |
| Direct support by SAP | Yes | No (partner) |
| Integrated in SAP installer | Yes (SWPM supports both file share and shared disks) | No (manual procedure) |
| Described in SAP installation guides | Yes | No (partner) |
| Regular installation & upgrade tests by SAP | Yes | No (partner) |
| Can be used for NW on HANA | Yes | Yes |
| ASCS/SCS Multi-SID support in Azure (consolidation & TCO reduction) | Yes | No |
| Cloud cluster quorum service | Yes | No* |

* Although the Azure fence agent is supported, SBD fencing (which requires an extra VM) is the preferred option due to faster failover.

In addition to SAP HANA, the same HA architecture can be used with any other database running on Linux that offers a DB client on the Windows OS, such as:

  • Oracle
  • SAP Sybase
  • SAP Max DB
  • IBM DB2
