
SAP in AWS - SOE

This document provides a reference architecture for running SAP in AWS. It details standards and best practices for aspects like networking, storage, backups, security, and high availability. The document has undergone multiple revisions to update information based on lessons learned and address questions.

Project: <<doc property: Project>>

Document: Standard Operating Environment for SAP in AWS - Reference Architecture

Author: BP Shared Services Strategy and Architecture & BAS Technical Environments

SAP SOE in AWS Reference Architecture

Amendment History
Version Date Comment By
0.1 25 Jan 2016 Initial version Jeff Forrest

0.2 17 May 2016 Updates from SOE Governance sessions: John Davie
1) Large File Transfers
2) Standard OS – SUSE
3) NFS Utility Server

0.21 19 May 2016 Added CIDR Block for dev/test VPC Jeff Forrest

0.22 19 May 2016 Added details on new utility server for SAPNAS plus saprouter John Davie
connection to SAP for support purposes

0.23 24 May 2016 Added sections under storage and network Stephen Head

0.24 03 Jun 2016 Added information on using scp to transfer files from bp to AWS John Davie

0.25 07 Jun 2016 Various additions from the AWS question tracker. Stephen Head

0.26 13 Jun 2016 Added information on default OS shell, SAP install type and S3 backup John Davie
example script

0.27 24 Jun 2016 Added SNAP based backup script John Davie

0.38 04 Jul 2016 1) Added new architectural decision on Web Dispatcher single SID per tier John Davie
2) Added information on standard AWS Security groups to be used
3) Added information on Volume naming convention

0.39 05 Jul 2016 Added section on IAM role for AWS instances John Davie

0.40 5th July 2016 Updated DR section. Jeff Forrest

0.41 11 July 2016 Various additions from the AWS question tracker. Stephen Head

0.42 14 July 2016 Updated security group ports, added non-sap security group and John Davie
added information on Windows Firewall setting. Also Volume naming
updated for non-SAP

0.43 3 Aug 2016 Updated the storage encryption statement Stephen Head

0.44 3 Aug 2016 Updated section on Volume Group/Logical Volumes John Davie

0.45 4 Aug 2016 Added the SWD installation guide URL. Updated the DNS standards. Stephen Head

0.46 5 Aug 2016 Various additions from the AWS question tracker. Stephen Head

0.47 5 Aug 2016 Added instance naming convention John Davie

0.48 11 Aug 2016 Various additions from the AWS question tracker. Stephen Head

0.49 12 Aug 2016 Following changes were made: John Davie
1) Volume encryption standard changed (all volumes to be encrypted at AWS)
2) Added new Livecache security group
3) Comment on clustering and SAP enqueue replication in HA section
4) Updated DR and HA section with latest information
5) Added information on root volumes

0.50 25 Aug 2016 Updated the tagging section Stephen Head

0.51 26 Aug 2016 Updated the secure saprouter details Stephen Head

0.52 6 Sept 2016 Added information on the AWS Data Provider John Davie

0.53 12 Sept 2016 Added information on setting up SSL for SAP Host Agent John Davie

0.54 21 Sept 2016 Added sapinst port 21212 to Security group definitions John Davie

0.55 17 Oct 2016 Added information on encryption of HTTP to the backend John Davie

0.56 21 Oct 2016 1) Added information on new Security Groups in CSL v3 John Davie
2) Added information on HANA instance number standards

0.57 27 Oct 2016 1) Added new ports to the SAP SG in the CSL VPC John Davie
2) Added information on new SAPNAS in CSL VPC

0.58 02 Nov 2016 Added information about standard SNAP backups taken in CSL v3 John Davie
instances

0.59 07 Nov 2016 Added reminder around umask John Davie

0.60 11 Nov 2016 Added further information on UID/GIDs to be used Stephen Head

0.61 15 Nov 2016 Added standard on setting “Cloudwatch Detailed Monitoring” John Davie

0.62 21 Nov 2016 General updates from the AWS questions sheet Stephen Head

0.63 22 Nov 2016 Update with latest Backup/Restore Strategy John Davie

0.64 01 Dec 2016 Updated with latest NFS standards John Davie

0.65 2 Dec 2016 Updated the tagging sections with the CSL account tagging info Stephen Head

0.66 8 Dec 2016 General updates for CSL3 deployments Stephen Head

0.67 9 Dec 2016 Add advice on EBS Optimised Volumes John Davie

0.68 13 Dec 2016 Added some more details to the scope of this document section Stephen Head

0.69 13 Dec 2016 Added link to DS Controls standards for security John Davie

0.70 13 Dec 2016 Updated VPCs and Storage sections Jeff Forrest

0.71 16 Dec 2016 Updates after Dec16 BTDA. Added Instance stop/start/termination Stephen Head
info

0.72 19 Dec 2016 Added the following updates: John Davie
1) Updated firewall section with link to SAP firewall document
2) Updated security section with root access information
3) Updated support responsibility section
4) Updated backup section with comment on no 3rd party backup tool for DB backups
5) Updated email section to mention no use of SES
6) Added information on SAP Web Dispatcher redirect and HTTP Port Requirement
7) Updated AWS Data Provider for SAP section to mention this software is now part of the base AMI

1.0 19 Dec 2016 First published version Stephen Head

1.01 6 Jan 2017 Updated NFS recommendations and clarified Web Dispatcher John Davie
recommendations

1.02 10 Jan 2017 - Updated OS users section Stephen Head
- Updated the backup section

1.03 11 Jan 2017 - Updated the external document links as the SOE and related Stephen Head
docs have moved to the TE Sharepoint.

1.04 11 Jan 2017 - Updated with basic information on creating/using S3 buckets John Davie
in CSL v3 (naming convention etc.)

1.05 13 Jan 2017 - Updated the backup section Stephen Head

1.06 18 Jan 2017 - Added link the SAP ASE SOE doc Stephen Head
- Added info regarding the requirement for a SAP DB Gateway

1.07 18 Jan 2017 - Added info on using scp instead of SFTP John Davie
- Added information on RSYNC

1.08 26 Jan 2017 - Updated the backup section cleanup script and details Stephen Head

1.09 27th Jan 2017 - Updated storage section to show Hana backup shared over Jeff Forrest
NFS on scaleout.

1.10 27 Jan 2017 - Added the Hana SOE document link Stephen Head

1.11 27 Jan 2017 - Added confirmation that a Gateway Process is still required John Davie
in the ASCS

1.12 31 Jan 2017 - Added the ASE SRS install/config guide URL Stephen Head

1.13 1 Feb 2017 - Updated VIP/virtual hostname usage info Stephen Head

1.14 Feb 6 2017 - Added Subnets and Security groups IDs John M.

1.15 Feb 6 2017 - Added more information on domains required for DNS John Davie
entries

1.16 7 Feb 2017 - Added link to SRS operation guide S Head
- Added info on max disk size for ASE SAPDATAx volumes

1.17 7 Feb 2017 - Added clarification on SSL setup for WD (Option 4) John Davie

1.18 7 Feb 2017 - Updated the backup housekeeping script S Head

1.19 9 Feb 2017 - Added information on Email Encryption to/from SAP John Davie
- Added new port standard for HTTPs on ABAP ICM

1.20 16 Feb 2017 - Removed BAS VPC references from this document. A BAS John Davie
VPC specific version of the document will be saved
separately

1.21 17 Feb 2017 - Added the link to the SNAP backup recovery procedure S Head

1.22 20 Feb 2017 - Updated OS user creation details S Head
- Add note on SAP hardware key generation

1.23 03 Mar 2017 - Updated with details on /var sizing and how to expand J Davie
- Add recommendation on show_details_errors for ICM and
WD

1.24 06 Mar 2017 - Added info to the sections on Encryption and Monitoring Roy Keegan

1.25 08 Mar 2017 - Added information on mounting an NFS export from a CSL J Davie
instance to a BP1 instance
- Added HA Alarm Configuration info

1.26 09 Mar 2017 - Updated the backup housekeeping script S Head

1.27 09 Mar 2017 - Updated with link to document outlining how to allow
passwordless SSH for RSYNC

1.28 10 Mar 2017 - Updated the secondary IP for Prod information S Head
- Added info on SUSE postfix setup of OS emailing

1.29 10 Mar 2017 - Changed AZ B subnet names from “ac” to B J Forrest

1.30 17 Mar 2017 - Reformatted the security group table to make it clearer S Head

1.31 24 Mar 2017 - Added information on HTTPs redirects J Davie

1.32 28 Mar 2017 - Updated the Hana backups section (use DB13) S Head

1.33 28 Mar 2017 - Updated storage section to mandate the existence of a J Davie
partition table for each device

1.34 30 Mar 2017 - Updated the Hana backups section (use of XS engine) S Head
- Updated RSYNC section

1.35 04 Apr 2017 - Updated text around when to set no_subtree_check and J Davie
no_root_squash on nfs exports

1.36 10 Apr 2017 - Added VPC and SG information for Prime account used for J Davie
shared SAP services such as SAProuter and LaMa

1.37 11 Apr 2017 - Added information about new SAPNAS in Prime J Davie

1.38 13 Apr 2017 - Added information on using HA alarms for all stacks J Davie

1.39 25 Apr 2017 - Added SAP/DB Autostart information S Head

1.40 9 May 2017 - Added information about Prime VPC subnets to be used (INT, J Davie
not APP)

1.41 5 Jun 2017 - Changed SGs for Hana 2.0 J Davie

1.42 9 Jun 2017 - Changed Prime SAPNAS server J Davie

1.43 30 Jun 2017 - Added more details about SAProuter in Prime J Davie

1.44 12th July 2017 - Updated backup storage types to SC1 and added J Forrest
/interface/<SID> sharing details.

1.45 19th July 2017 - Updated to make clear that S systems now go in Pre-Prod J Davie
VPC

1.46 26 July 2017 - Updated the tagging sections as the Engineering role now S Head
has the access to add custom tags

1.47 26 July 2017 - Added information on which Windows AMI to use for J Davie
Windows based builds

1.48 28 July 2017 - Added reminder section on SLD strategy and SLDREG (not J Davie
changed from on-prem SOE).

1.49 01 Aug 2017 - Changed volume recommendations to remove VGs/LVs and J Davie
use of partitions

1.50 10 Aug 2017 Updates after Aug TEDA S Head
- Small custom/3rd party executable location
- Added note in integrated SWD
- ASCS enqueue parameters (added SAP Profile section).

1.51 15 Aug 2017 - Added links to the Hana and ASE SOEs in reference docs S Head
section

1.52 01 Sept 2017 - Added information on requirement to use encrypted Message Server comms for http(s) based connectivity J Davie

1.53 13 Sept 2017 - Various small updates after the SOE Knowledge sharing S Head
sessions

1.54 15 Sept 2017 - Updated SAPnas section to remove old SAPnas shares J Davie

1.55 18 Sept 2017 - Updated details on S3 lifecycle policy J Davie

1.56 26 Sept 2017 - Updated SAP SG Ports with Solman additions J Davie

1.57 5 Oct 2017 - Alarm actions updates S Head

1.58 10 Oct 2017 - Update the Backup section for Hana S Head

1.59 11 Oct 2017 - Updated volume sizing info for /hana/shared S Head
- Updated the emailing section regarding the use of port 25

1.60 12 Oct 2017 - Removed duplication with Standard SUSE build for SAP AWS S Head
SOE

1.61 13 Oct 2017 - Added the backup to S3 and clean script S Head

1.62 19 Oct 2017 - Removed SAP AWS Data Provider for SAP info, as this has J Davie
been moved to the SUSE OS SOE

1.63 23 Oct 2017 - Added information on new AWS SAProuter J Davie

1.64 03 Nov 2017 - Added note on random volume partition error J Davie

1.65 09 Nov 2017 - Added a note on LaMa to mention that housekeeping/backup jobs should be taken into account when scheduling downtime J Davie

1.66 22 Nov 2017 - Updates after the Nov TEDA: S Head
- Use of the integrated Web dispatcher

1.67 28 Nov 2017 - Added a clarification of the allocation of SAP instance S Head
numbers in AWS to F and R


Associated Documents (this document should be read in conjunction with):

Title of Document        Version No/File Name        Date
                         Latest                      N/A


Contents
1 Introduction
1.1 Scope
1.2 Document References
2 SAP on AWS Architecture Summary
2.1 SAP on AWS Summary
3 SAP VPCs
4 SAP Builds
4.1 Principles
4.2 SID Naming
4.3 SAP Instance Numbers
4.4 SAP Web Dispatcher
4.5 SAP Host Agent
4.6 SAP Software Downloads
4.7 SLD
4.8 SAP Landscape Manager (LaMa)
4.9 Solution Manager
5 Security
5.1 Principles
5.2 AWS Management Console Access
5.3 OS Access, SSH, RDP
5.4 Remote access (outside BP1)
5.5 Data Encryption at rest
5.6 Data encryption in transit
5.7 AWS IAM roles
5.8 AWS Security Groups
5.9 Firewalls
5.10 OS Hardening
5.11 Passwords
6 AWS Instances
6.1 Principles
6.2 Instance Types
6.3 AWS Instance Builds
6.4 AWS Instance Start / Stop / Termination
7 Operating System Builds
7.1 Principles
7.2 SUSE
7.3 Windows
7.4 OS Users
8 Storage
8.1 Principles
8.2 Storage Design
8.3 Volumes
8.4 S3
8.5 Storage Encryption
8.6 NFS Exports/Mounts
8.7 NFS Utility Server
9 Network
9.1 Large File Transfers
9.2 IP Addresses
9.3 Hostnames
9.4 Default Domain
9.5 DNS
9.6 Instance Network Interface
10 Databases
10.1 Which Database to Use
10.2 Database Encryption
10.3 Oracle
11 High Availability and DR
11.1 Principles
11.2 HA Architecture
11.3 SAP/Database HA
11.4 DR Architecture
11.5 HA/DR Diagram
11.6 Virtual Hostnames in SAP Systems
11.7 Reference
12 Backups
12.1 Principles
12.2 Backup Retention
12.3 CSL v3 VPC
12.4 Monitoring
13 Maintenance
13.1 Housekeeping and Archiving
14 Support
14.1 Responsibilities
14.2 SAP Support Connectivity
15 Monitoring and Reporting
15.1 Alarms
15.2 Actions
15.3 Dashboards
15.4 Tagging
16 SAP
16.1 SAP Emailing
16.2 SAP DB Gateway
16.3 SAP Kernel
16.4 SAP Profiles


1 Introduction

The SAP Standard Operating Environment (SOE) is the reference document that describes the
architecture of all SAP systems deployed in BP. The SOE primarily defines the architecture of
systems; however, where required, it also mandates specific settings and configurations
needed to support that architecture.

Some more information on the SOE:

 It is a framework of architectural and build standards for SAP and tightly coupled
applications in BP, with associated governance.
 The SOE is required due to the large and complex nature of BP’s SAP estate.
 It ensures consistent system architecture and configuration.
 It follows industry best practice and standardisation within the BP environment.
 It is constantly evolving.
 It builds on top of other standards within BP.
 It should be complied with on major architectural change (e.g. replatforming).
 SOE standards are retrofitted only where there is a good business case.

1.1 Scope

The “SAP in AWS – SOE” details the architecture and deployment requirements for SAP
systems on the Amazon Web Services (AWS) EC2 platform. Unlike the earlier on-premise
SOEs, the SAP in AWS SOE combines the requirements from Strategy and
Architecture (S&A) and Technical Environments (TE) into one document.

Many of the TE SAP Basis requirements stay the same as per the on-premise Basis SOE.
Therefore the “Basis SOE v2 handbook” should still be used as a reference. Any changes
required for the AWS deployments will be detailed in this document. Over time, all the relevant
SAP Basis information will be transferred from the Basis SOE v2 handbook to this document.

This document is specific to the CSL v3 VPCs. The earlier BAS VPCs have slightly different
standards, which are documented separately:

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SOE/Cloud/BAS%20VPCs/BAS%20VPCs%20SAP%20in%20AWS%20-%20SOE.docx

1.2 Document References

Document Name: Standard SUSE Build for SAP AWS SOE
Location: https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SOE/Cloud/Standard%20SUSE%20Build%20for%20SAP%20AWS%20SOE.docx

Document Name: Buildsheets
Location: https://bp365.sharepoint.com/sites/ITS1/Strategy_Architecture/Share_Services_Architecture/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2Fsites%2FITS1%2FStrategy%5FArchitecture%2FShare%5FServices%5FArchitecture%2FShared%20Documents%2FSAP%2FBuild%20Sheets&FolderCTID=0x012000C43E1E18C4E9C84AB60C181E1D04FC8F&View=%7B37ABA9CA%2D293A%2D402E%2DA484%2DBA18361CB99D%7D

Document Name: Basis SOE v2 handbook (on-premise)
Location: https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SOE/Basis_SOE_v2_Handbook.docx

Document Name: SAP HANA SOE
Location: https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20HANA%20Standards%20%26%20Documents/SAP%20HANA%20-%20AWS%20SoE.docx

Document Name: SAP ASE SOE
Location: https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP%20ASE%20on%20AWS%20for%20SAP%20applications%20-%20SOE.docx


2 SAP on AWS Architecture Summary


2.1 SAP on AWS Summary

Item                Selection
OS                  SUSE; Windows (where SUSE not supported)
Database            HANA / SAP ASE
Volume Manager      OS native
Multipathing        OS native
Backup              To local storage, then to S3/snapshot
Server Hardware     Fully virtualised EC2
Database Storage    EBS
Shared Storage      EBS via export
OS Storage          EBS
HA                  CloudWatch instance recovery
DR                  Replication to a different availability zone


3 SAP VPCs
VPC Name: CSL-Customer-Test
Usage: Project (X/D/Q/T/M systems)
Location: EU-West
VPC ID: vpc-118b7375
Subnets:
  INTAZa 10.162.48.0/21 (subnet-fa23c4a2)
  INTAZb 10.162.56.0/21 (subnet-ac993fc8)
  INTAZc 10.162.64.0/21 (subnet-d2458ca4)

VPC Name: CSL-Customer-Pre-Prod
Usage: Prod Support / Prod Fix (S/F systems)
Location: EU-West
VPC ID: vpc-638c7407
Subnets:
  INTAZa 10.162.176.0/21 (subnet-5766c033)
  INTAZb 10.162.184.0/21 (subnet-51448d27)
  INTAZc 10.162.192.0/21 (subnet-f045a4a8)

VPC Name: CSL-Customer-Prod
Usage: Prod/DR (R systems)
Location: EU-West
VPC ID: vpc-308c7454
Subnets:
  INTAZa 10.163.48.0/21 (subnet-3a22c562)
  INTAZb 10.163.56.0/21 (subnet-ef66c08b)
  INTAZc 10.163.64.0/21 (subnet-6905cf1f)

VPC Name: CSL-Shared-Prime
Usage: Shared Technical (e.g. SAProuter/LaMa)
Location: EU-West
VPC ID: vpc-ac827bc8
Subnets:
  INTAZa 10.163.208.0/23 (subnet-c064c3a4)
  INTAZb 10.163.210.0/23 (subnet-6e23ed18)
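The CIDR allocations above can be sanity-checked programmatically. The following is a minimal sketch, not part of the standard; the helper name and the example IPs are illustrative, and the CIDRs are those of the CSL-Customer-Test VPC from the table above:

```python
import ipaddress

# INT subnet CIDRs for the CSL-Customer-Test VPC (from the table above)
TEST_VPC_SUBNETS = {
    "INTAZa": ipaddress.ip_network("10.162.48.0/21"),
    "INTAZb": ipaddress.ip_network("10.162.56.0/21"),
    "INTAZc": ipaddress.ip_network("10.162.64.0/21"),
}

def find_subnet(ip):
    """Return the name of the subnet containing the given host IP, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in TEST_VPC_SUBNETS.items():
        if addr in net:
            return name
    return None

print(find_subnet("10.162.50.12"))  # -> INTAZa (10.162.48.0 - 10.162.55.255)
```

A check like this is useful when allocating secondary IPs, to confirm a proposed address falls inside the intended availability-zone subnet.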


4 SAP Builds
4.1 Principles
 All project stack SAP installations where SAP and the DB are on the same virtual server will
be “Standard” installations (not Distributed or HA). For systems installed with NW 7.3 or
above, these will include a separate (A)SCS instance.
 We will use as few AWS instances as practically possible while still ensuring ease of
management and meeting HA/DR restrictions. For example, Production systems will have at
least two application server instances on separate AWS instances for high availability.
 To ensure compatibility with automated deployment, stacking is not typically used unless
savings can be justified. Production systems will not be stacked together unless one is very
small, e.g. ABAP and Java systems could be stacked together.

4.2 SID Naming


The SID naming used for AWS deployed instances will follow the same principles as the on-
premise instances. However, HANA databases must have their own SID, independent of the
SAP application SID. For BW on HANA and standalone HANA DBs, the SIDs will have ‘H’ as the
first letter, followed by the stack and Project/business owner letters, e.g. the HANA database
for the Nike BW development instance WDL will have the SID HDL. For ERP on HANA or S/4
HANA, the HANA database SID will have ‘T’ as the first letter, e.g. an R&M S/4 HANA system
PXE would have a HANA database SID TXE.
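The mapping above can be expressed as a short sketch. The function name and flag are hypothetical; only the ‘H’/‘T’ prefix rule and the WDL/PXE examples come from this SOE:

```python
def hana_db_sid(app_sid, s4_or_erp=False):
    """Derive the HANA database SID from the SAP application SID.

    Per the convention above, the HANA SID keeps the stack and
    project/business-owner letters of the application SID and replaces the
    first letter with 'H' (BW on HANA / standalone HANA DB) or 'T'
    (ERP on HANA / S/4 HANA).
    """
    prefix = "T" if s4_or_erp else "H"
    return prefix + app_sid[1:]

print(hana_db_sid("WDL"))                  # BW dev instance WDL -> HDL
print(hana_db_sid("PXE", s4_or_erp=True))  # S/4 HANA system PXE -> TXE
```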

4.3 SAP Instance Numbers


The instance number standards for SAP systems in AWS will be the same as outlined in the
current on-premise SOE v2 document (see references). The only addition for AWS is for the
HANA instance numbers. HANA is treated as an application and hence will follow the same
standards as the on-premise SOE v2 document for PAS/SAS. Generally this means that
HANA instance numbers should fall in the following range:

00 to 04

00 will be the default instance number; however, should that not be available for some
reason, 01, 02, 03 or 04 can be used instead. Firewall and security group rules cater
for this range only.

Also note that, as the Production Fix (F) and Production (R) instances no longer share servers,
the same instance numbers can be used for both systems when deploying in AWS (this is
recommended).

4.4 SAP Web Dispatcher


For SAP browser traffic load balancing, we will continue to use the SAP Web Dispatcher. The
AWS load balancer is not supported for SAP and so will not be used.

It was approved at the August 2017 and November 2017 TEDAs that the integrated Web
Dispatcher can be used for ABAP and ABAP/Java systems. For Java-only systems the
integrated WD is not yet supported, therefore the external WD should be used until SAP
releases the integrated version for Java. The integrated Web Dispatcher can be installed as an
option in SWPM.

Note: Java only Systems

The SAP Web Dispatcher deployments will differ from on-premise SOEv2 in that there will be
a single Web Dispatcher instance per landscape tier, with a separate instance number for
each backend system, e.g. three backend systems will all be served by Web Dispatcher
X<n><n>, with three separate instance numbers, e.g. 30, 31, 32.

In the project stack (excluding Mock and including S), the Web Dispatcher instance will
reside on the Java stack virtual server.

In the production stack (M, F & R), the Web Dispatcher will reside on a standalone virtual
server.

It should be noted that this architecture means that an outage to the Web Dispatcher will
affect all systems in that tier of the landscape.

The SWPM installer mechanism has been tested and caters for separate Web Dispatcher
installations under the same SID, with differing instance numbers.

The SWD installation guide should be used as a reference when setting up the Web
Dispatcher.

End Note

For all SWD deployments the following applies:

1) In the on-premise SAP systems, HTTP traffic is encrypted up to the Web Dispatcher, then
travels unencrypted between the Web Dispatcher and the backend SAP system. In AWS,
traffic will still be decrypted at the Web Dispatcher, but will then be re-encrypted before
being sent to the backend SAP system. The backend SAP systems must therefore all have
SSL configured at the ICM layer. Please use the following standard for the SSL port on
the ABAP ICM:

a. Port Number = 81NN, where NN is the instance number of the PAS/SAS in question

Users/Systems will still connect to the Web Dispatcher, therefore this will still need to
have BP or external Certificate Authority certificates installed. The backend systems,
however, may have SAP self-signed certificates with extended expiry dates (e.g. 25
years). These certificates should not need to be replaced during the lifetime of the
system. This applies to all of the backend SAP components which require an SSL server

certificate (SCS, PAS, SAS etc.). The SSL encryption/decryption to be used equates to
“option 4” in the following SAP Help for Web Dispatcher:

https://help.sap.com/saphelp_nw75/helpdata/en/48/98e6a84be0062fe10000000a42189d/content.htm

2) Please ensure the following profile parameter is set in both the Web Dispatcher and
ABAP ICM for F and R systems. This reduces the amount of data shown during an ICM
error :

a. is/HTTP/show_detailed_errors=FALSE

3) There is no requirement to remove HTTP ports completely, hence these should continue
to be configured on both the Web Dispatcher and ABAP ICM. Redirect on the Web
Dispatchers should, however, be configured to ensure that any traffic going to the HTTP
port is automatically redirected to HTTPs. Standard parameter icm/HTTP/redirect_<xx>
should be used for this, e.g :

a. icm/HTTP/redirect_0 = PREFIX=/, FROM=*, FROMPROT=http, PROT=https

The above redirect is NOT required on the ABAP ICM.


In order to ensure re-directs occur correctly on the ABAP stack ICM, some additional
configuration must take place :

a. All systems must put a generic entry in table HTTPURLLOC to redirect all URLs to
WD HTTPs. E.g :

Note : There will be NO redirect on the ABAP ICM for HTTPs and we will NOT use the action
file based modification handler on either the WD or ABAP ICM for redirect purposes. Use of
the action file to force usage of the correct virtual hostnames is, however, still permitted.

4) All other standards around Web Dispatcher (e.g Admin port on 90NN, use of Unicode
kernel etc.) are identical to the on-premise SOE v2. The SOE v2 handbook should
therefore still be referred to for SOE Web Dispatcher standards.
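As an illustration of points 1) to 3) above, the corresponding profile fragments might look like the following. The port values and parameter indices are examples only; nothing here is mandated beyond the 81NN, redirect and error-detail standards already stated:

```
# Web Dispatcher profile (illustrative)
icm/server_port_0 = PROT=HTTPS, PORT=443
icm/server_port_1 = PROT=HTTP, PORT=80
icm/HTTP/redirect_0 = PREFIX=/, FROM=*, FROMPROT=http, PROT=https
is/HTTP/show_detailed_errors = FALSE

# Backend ABAP ICM profile (illustrative, instance 00 -> SSL port 8100;
# note: no HTTP-to-HTTPS redirect parameter on the ABAP ICM)
icm/server_port_1 = PROT=HTTPS, PORT=8100
is/HTTP/show_detailed_errors = FALSE
```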


4.5 SAP Host Agent


All SAP system hosts will have a SAP Host Agent installed. This SAP Host Agent should start
automatically on reboot of the AWS instance. Due to the security requirements in AWS, the
secure port (1129) should be used. This can be configured manually via the following steps
after SAP Host Agent has been installed.

Log on to the target host of the backend system as root, then execute the following:

cd /usr/sap/hostctrl/exe
mkdir sec
cd sec
export SECUDIR=/usr/sap/hostctrl/exe/sec
chown sapadm:sapsys /usr/sap/hostctrl/exe/sec
/usr/sap/hostctrl/exe/sapgenpse get_pse -p SAPSSLS.pse -noreq -x <PASSWORD> "CN=<HOSTNAME>"
/usr/sap/hostctrl/exe/sapgenpse seclogin -p SAPSSLS.pse -x <PASSWORD> -O sapadm
chmod 644 /usr/sap/hostctrl/exe/sec/SAPSSLS.pse
chown sapadm:sapsys /usr/sap/hostctrl/exe/sec/SAPSSLS.pse
chown sapadm:sapsys /usr/sap/hostctrl/exe/sec/cred_v2
/usr/sap/hostctrl/exe/saphostexec -restart pf=/usr/sap/hostctrl/exe/host_profile

Now check the sapstartsrv.log file under /usr/sap/hostctrl/work to ensure the 1129 port has
been started.

4.6 SAP Software Downloads


All AWS instances will have internet access as standard (note: some instances in the BAS
Sandbox VPC currently don’t, but it can be activated as required). Software media can
therefore be downloaded directly in AWS. The suggested option is to use an AWS Windows
jumpbox instance where SAP Download Manager can be installed. The software can be
downloaded locally and then transferred via SFTP to the /sapnas software share mounted on
a Linux server. Alternatively, /sapnas can be mounted onto the Windows server if the NFS
feature is activated.

4.7 SLD
Please refer to the on-prem SOE v2 handbook for details around SLD usage and connectivity.
The strategy has not changed for AWS based systems.

Please note that the same requirement exists to implement SLDREG on every instance. All
new project and production landscape systems should register to the IMS/IFS/IRS SLDs with
SLDREG (as well as RZ70 and Java SLD data supplier). Instructions on how to do this can be
found here.

4.8 SAP Landscape Manager (LaMa)


The SAP Landscape Manager tool will be deployed to support the following functionality :

 Automatic Scheduling of both SAP and EC2 instance downtime


 SAP Post Copy Automation

 T1/P2 scheduling of OS/Application EBS volume snap backups

More information will be added to this section when the service is released and available.

Note : Please ensure that any critical housekeeping or backup jobs are scheduled outside of the
downtime period when managed by LaMa.

4.9 Solution Manager


Currently all AWS deployed systems should connect to the existing on-premise Solution
Manager instances. In the future a Solution Manager instance will be deployed into AWS.

5 Security
5.1 Principles
 All DS/MITs requirements are to be complied with; regular auditing and testing should be
undertaken (details TBC)
 The attack surface of instances will be minimised by shutting off non-essential services.
 Only required ports will be opened to the VPC and to each individual instance.
 Systems will be kept up to date with patches, following BP best practices or better.
 Sandbox, dev/test and production will be segmented into separate VPCs/accounts

The link to the current DS controls that must be adhered to can be found here.

5.2 AWS Management Console Access


Access to the CSL3 VPC AWS management console is gained using the following URL:

https://sso.bp.com/fim/sps/saml20/saml20/logininitial?RequestBinding=HTTPPost&PartnerId=urn:amazon:webservices&NameIdFormat=Email&AllowCreate=false

The logon account to be used is -adm-<ntid>-<xxx>. Access is granted to a cloud
environment (CE) rather than the whole VPC, therefore the -adm- account must be given
access to a CE to be able to log on to the AWS console.

All the engineers who have access to a CE will have the same permissions.

5.3 OS Access, SSH, RDP


For Windows, a recent version of the RDP client is needed (version TBC).
For Linux SSH access, PuTTY should be used.

The Dev/Test and Production instances will be installed in different VPCs, so access to log on
to an instance will be segregated by the VPC access.

CSL3 AWS instances are accessed via “-sysop-<ntid>” accounts in the CD2 domain. Key pairs
are not used. When logging on via PuTTY the user id should be entered with the domain
prefix, e.g. cd2\-sysop-<ntid>. At present, root access can be gained by su’ing directly to root
after -sysop logon. A prompt will be made to re-enter the -sysop password.


5.4 Remote access (outside BP1)


TBC how access is achieved for developers and end users

5.5 Data Encryption at rest


All data should be encrypted where possible.

Application/Database encryption and AWS volume (storage) encryption are to be used in
parallel. Software encryption licenses should be obtained where required.

Link to AMI Encryption process:

https://basproducts.atlassian.net/wiki/display/CSL/AMI+Encryption+Process

5.6 Data encryption in transit


All traffic both within and outside AWS is to be encrypted.

Link to Encryption connectivity matrix:

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards and Best Practice/SOE/Cloud/SOE Best Practice Procedures/Encryption and connectivity matrix.xlsx

Link to Digital Security control document on AWS security, including encryption:

https://soe.bpglobal.com/Apps/digitalsecurityportal/BAS-ISS/Shared%20Documents/SAP/SAP%20Cloud%20Controls%20Requirements%20AWS%20V1.1.xlsx?Web=1

Note : In general, all communication should be encrypted into and out of AWS based instances where
the technology allows it. Where there is no current technical solution available to provide encryption,
please contact DS to discuss a possible exemption.

For SAP Message Server communication specifically, it is required to set up and use SSL on the
standard port (444NN). This can be set up using a self-signed certificate and the SCS PSE should be
used (located in /usr/sap/<SID>/(A)SCSNN/sec).
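As a trivial illustration of the 444NN standard, the secure Message Server port for a given instance number can be derived as follows:

```shell
# Secure Message Server port per the 444NN standard above
ms_https_port() {
  printf '444%02d\n' "$1"
}

ms_https_port 1    # instance 01 -> 44401
```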

5.7 AWS IAM roles


IAM roles can be assigned to AWS instances on instance launch, to provide access from the
instance to various AWS functionality (e.g. accessing S3 storage, performing EC2 commands
such as instance launch, etc.). It is important that a role is always assigned to an AWS
instance at initial launch, since a role cannot be assigned after an instance has first been
launched. At the present time, a single IAM role has been created, which should be assigned
to ALL instances at launch:

CSL v3 VPC :

<CLOUD ENVIRONMENT>-role_INSTANCE-PROFILE

In the CSL VPC, the IAM role is still being developed.


5.8 AWS Security Groups


5.8.1 CSL v3 VPC
In the CSL v3 VPC, there will be a single SAP Security Group for each account as well as a
“Default” global group. These two groups should be assigned to all SAP based instances in
the CSL v3 VPC :

5.8.1.1 Default Security Group


The default security group should be assigned to all instances. This is named as follows in
each VPC :

Account           | Security Group ID | SG Name
Customer-Test     | sg-f2740795       | WE1-T1-NET001-SEG-Global-WE1T1NET001SEGGlobal-68EDOEFH1CFI
Customer-Pre-Prod | sg-c06d1ea7       | WE1-P2-NET001-SEG-Global-WE1P2NET001SEGGlobal-SLHMKFTXTJYR
Customer-Prod     | sg-b54e3dd2       | WE1-P3-NET001-SEG-Global-WE1P3NET001SEGGlobal-120SHPYG2B8
Shared-Prime      | sg-912e81f6       | WEW1PNET001SEG001-WEW1PNET001SEG001-AW4GFJEDRC0P

Port(s)     | Type    | Source IPs | Description
445         | UDP     | All        | Unknown
80          | TCP     | All        | HTTP browser based
49152-65535 | UDP/TCP | All        | Dynamic/ephemeral port range
389         | UDP/TCP | All        | LDAP
53          | UDP/TCP | All        | DNS
123         | UDP     | All        | NTP
3389        | TCP     | All        | RDP
445         | TCP     | All        | NetBT
443         | TCP     | All        | HTTPs standard port
5985-5986   | TCP     | All        | Powershell
88          | TCP/UDP | All        | Kerberos
22          | TCP     | All        | SSH
636         | TCP     | All        | LDAP SSL
8081        | TCP     | All        | HTTP Alternate port

5.8.1.2 SAP Security Group


The SAP Security Group should also be assigned to all SAP instances. This is named as follows
in each VPC :

Account           | Security Group ID | SG Name
Customer-Test     | sg-9f16daf9       | WE1-T1-NET001-SEG-SAP-WE1T1NET001SEGSAP-1UT53JE57LW3F
Customer-Pre-Prod | sg-9d10dcfb       | WE1-P2-NET001-SEG-SAP-WE1P2NET001SEGSAP-1ER99DY11O8WC
Customer-Prod     | sg-3911dd5f       | WE1-P3-NET001-SEG-SAP-WE1P3NET001SEGSAP-2805TGL9ZVI5
Shared-Prime      | sg-d507a1ac       | WE1-P1-NET001-SEG-SAP-WE1P1NET001SEGSAP-17Y3FRJ1CZD1A

Port(s)       | Type | Source IPs | Description
All ICMP      | ICMP | All        | General rule to allow ping etc.
3900-3999     | TCP  | All        | SAP Java Message Server HTTP port
44400-44499   | TCP  | All        | SAP Message Server HTTPs port
30013-30498   | TCP  | All        | SAP HANA DB ports
4300-4304     | TCP  | All        | SAP HANA XS Engine secure port
8100-8199     | TCP  | All        | SAP Web Dispatcher HTTPs port
3600-3699     | TCP  | All        | SAP ABAP Message Server
3200-3399     | TCP  | All        | SAP Dispatcher and Gateway ports
4800-4909     | TCP  | All        | SAP Secure Gateway Port & Sybase Database Ports
50014-59914   | TCP  | All        | Sapcontrol secure ports
1163          | UDP  | All        | Sterling Port
4283          | TCP  | All        | ASE Cockpit Port
4237          | TCP  | All        | SAP Installer Port
25            | TCP  | All        | SMTP
1363-1364     | TCP  | All        | Sterling Ports
2049          | TCP  | All        | NFS v4 port
8088          | TCP  | All        |
1129          | TCP  | All        | SAP Host Agent secure port
21212         | TCP  | All        | SAPinst installer port
6001-6004     | TCP  | All        | Solman Wily Introscope Collector Ports
6443          | TCP  | All        | Solman MOM Https Port
All Ports     | TCP  | All AWS instances with the SAP SG assigned | Opens up all ports between AWS instances assigned to this Security Group

5.9 Firewalls
Each virtual server will be firewalled using security groups in AWS, with a standard policy
applied where possible. Additionally, the standard CNX firewall will be in place between BP
and AWS. Please see section “AWS Security Groups” for more information.

The firewall rules defined for SAP communication to BP can be found in the AWS SAP firewall
standards document.

The internal SUSE firewall will not be used and will be deactivated.

The internal Windows firewall will not be used and will be deactivated (confirmed that DS
has approved this and is deactivated in the CSL Windows standard build).

5.10 OS Hardening
OS hardening to be applied to all builds as appropriate. Specific details TBC.

5.10.1 Umask
The standard CSL SUSE images come with a default umask of 0077. SAP installs mandate a
umask of 022, therefore this umask MUST be set on root before sapinst is started. This is
part of the standard installation instructions provided by SAP.
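A minimal sketch of that pre-installation step, run in the root shell that will start sapinst:

```shell
umask        # shows the current mask (CSL SUSE default: 0077)
umask 022    # set the mask SAP mandates before starting sapinst
umask        # verify the new value
```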

Additionally, both Sybase and HANA users (syb<SID> and <SID>adm) have default umasks of
027. This is correct, but it should be noted this means that access to some DB directories is

only possible when logged in as these users. The standard SAP administration user
(<SID>adm) still maintains the same 022 umask as previously.

5.11 Passwords
All OS and application related passwords (e.g. SAP, SAP ASE etc.) should be stored in the TE
BIS tool under the appropriate SAP SID.


6 AWS Instances
6.1 Principles
Use the appropriate supported instance for best balance of price/performance.

6.2 Instance Types


BP’s new SAP systems will be deployed on AWS instances. Only certain instance types are
supported by AWS for SAP and in general these will be used:

Product | Size               | Instance Type
All     | Up to 488GB        | R4
All     | Greater than 488GB | X1
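The sizing rule above can be expressed as a trivial helper (illustrative only; 488GB is the boundary cited in the table):

```shell
# Pick the SOE-supported instance family for a given memory requirement (GB)
instance_type() {
  if [ "$1" -le 488 ]; then
    echo "R4"
  else
    echo "X1"
  fi
}

instance_type 256    # -> R4
instance_type 2048   # -> X1
```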

6.3 AWS Instance Builds


The CSL provide DS approved AMIs for both SUSE and Windows. The Windows AMIs are
regularly updated with the latest patches and since there are no custom SAP modifications in
these builds, the latest CSL encrypted AMI for the Windows version in question should
always be used.

The SUSE AMIs are also based on CSL hardened builds, but additionally include any build
updates requested by TE for SAP. The AMI to use for the AWS instance build will be on the
build sheet. The latest CSL provided SUSE AMIs to be used for SAP installations are listed in
the Standard SUSE Build for SAP AWS SOE document. These include all the OS elements
required for a SAP installation (e.g. X11 etc.).

In the future we will look at using CloudFormation templates (CFT) for SAP builds once we
can productionize the process.

All instances should have ‘Termination Protection’ active to reduce the chance of accidental
instance deletion.

6.3.1 Instance tagging Convention

The following naming convention should be used when creating EC2 instances in the CSL3
VPCs (the only difference is the Tag Key to use):

CSL3 VPC Tag Key = te-name

Tag Value (for all VPCs):

SID = SAP SID


APPIDENT = Non-SAP Application Identifier
Components = Either APP for application, DB for Database, or CEN for both

Project = name of Project
Number = sequential number from 1-10

1) SAP Systems :
a. SID(s)-Components-Number. Examples :
i. For PDL/CDL central system, this would be PDL/CDL-CEN-1
ii. For the WXO App server, this would be WXO-APP-1
iii. For the WXO DB server, this would be WXO-DB-1
iv. For the WDL/HDL server, this would be HDL/WDL-CEN-1
2) Non-SAP Systems :
a. APPIDENT(s)-Components-Number. Examples :
i. For Bartender server, this would be BAR-APP-1
3) Jumpboxes :
a. Jumpbox-Project-Number. Examples :
i. For Nike Jumpbox, this would be Jumpbox-Nike-1
ii. For second Nike Jumpbox, this would be Jumpbox-Nike-2
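The SAP-system tag values above can be assembled with a trivial helper (illustrative only):

```shell
# Build the te-name tag value for an SAP system: SID(s)-Components-Number
te_name() {
  echo "$1-$2-$3"
}

te_name "PDL/CDL" CEN 1   # central system -> PDL/CDL-CEN-1
te_name WXO APP 1         # app server     -> WXO-APP-1
```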

6.4 AWS Instance Start / Stop / Termination


6.4.1 Start / Stop
When stopping and starting the AWS instances (or rebooting the OS) it’s important to check
the status of the mounted filesystems. The filesystems are mounted with the option ‘nofail’
(see the ‘Storage’ section for more details) so that the instance will still start even if a
filesystem can’t be mounted. The onus, therefore, is placed on the Basis engineer to check
that all required filesystems are mounted before starting the SAP system.

6.4.2 Termination
Terminations of instances in the CSL3 VPC need to be requested via Cloudhub
(https://bp.service-now.com/cloudhub/). The project Tech Lead should approve any
termination requests. Deletion should also be requested for any related snapshots (SNAPs)
of volumes that are no longer required.

7 Operating System Builds


7.1 Principles
 Align with IIS (formerly GOI) standards where possible
 Maximum compatibility with SAP products
 SUSE 12 SP1 will be the default OS for SAP system instances ( update 15/09/17 : SP2 is now
available as an upgrade path from SP1 – please update to SP2 when starting a new project/landscape – see the
SUSE SOE doc for more details)
 Windows 2012 R2 will be the default OS for other instances not supported on SUSE
 The standard shell for SAP installation users will be CSH and this should be set up for
all SAP OS users as the default shell. Other shells (e.g. bash) can be used for
administrative purposes only.


7.2 SUSE
SUSE Linux Enterprise Server is the internal OS used by SAP within their datacenters. New
features are also currently released on SUSE before RHEL, therefore we will be using SUSE as
the default OS for all SAP instances in AWS, where supported. A new standard build, similar
to the current BP RHEL build, will be released for SUSE in due course.

The OS build guide for SAP on SUSE in AWS, which combines the requirements for SAP and
SAP ASE, is available:

Standard SUSE Build for SAP AWS SOE

7.2.1 Emailing
To be able to use ‘sendmail’ for emailing from the OS, some configuration of postfix is
required:

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SOE/Cloud/SOE%20Best%20Practice%20Procedures/AWS%20-%20SUSE%20configure%20postix%20to%20relay%20mails%20via%20BP.docx

7.3 Windows
The default Windows version will be 2012 R2.

7.4 OS Users
Users and Groups required for the SAP deployments need to be created manually.

Unix

<Please refer to the “Standard SUSE Build for SAP AWS SOE”>

Windows

The Windows users and groups will be created in the CD2 domain. User/group creation
requests should be raised with the GOT team.


8 Storage
8.1 Principles

8.2 Storage Design

Most of the filesystem storage for both HANA and SAP ASE instances will be on GP2 storage.
The exception is the /backup filesystem, which will be SC1 for all systems.

All instances should use “EBS Optimised Volumes” by ticking the appropriate option on
launch. This is only required on those instance types which don’t have EBS optimised
volumes by default (at the moment this means R3 type instances; X1 and R4 have these
activated by default).

SAP on HANA will have the following standard filesystems. The sizes are for Dev/Sandbox
systems. The shared, data and log volumes will be increased for larger environments.

Note that for Hana, scaling up requires larger storage as well as a larger instance.

Hana Filesystems

Notes:
 All filesystems XFS (except swap, which is type swap)
 Shared filesystems (/sapmnt etc.) are exported from the DB but will probably move to EFS when available
 /backup is to SC1 in prod and non-prod
 Sterling interface folder should be shared from the Sterling instance in the same account

App Server(s):  / (GP2), /usr/sap (GP2), swap (GP2), /sapmnt (NFS), /interface/<SID> (NFS)
Hana Master:    / (GP2), /usr/sap (GP2), /hana/data (GP2), /hana/log (GP2), swap (GP2), /hana/shared (GP2), /backup (ST1/SC1), /sapmnt (GP2), /interface/<SID> (NFS)
Hana Slave(s):  / (GP2), /usr/sap (GP2), /hana/data (GP2), /hana/log (GP2), swap (GP2), /hana/shared (NFS), /backup (NFS), /sapmnt (NFS), /interface/<SID> (NFS)
Sterling C:D:   /interface/<SID> (GP2)


Filesystem | Size | Device Type | Construction
/ | 50G | EBS GP2 | Single volume
/tmp, /var, /var/tmp, /var/log, /home, /var/log/audit | 100G (20Gb each) | EBS GP2 | Single volume
/usr/sap | 50G | EBS GP2 | Single volume
/hana/shared | For single node systems, min(Host RAM; 1TB), i.e. up to a 1TB node /hana/shared is the node size, after that it is 1TB. For scale-out systems, 1 x host RAM per 4 nodes. | EBS GP2 | Single volume
/hana/data | Same as Hana RAM size | EBS GP2 | Single volume
/hana/log | Same as Hana RAM size (upsized for throughput) | EBS GP2 | Single volume
/backup | Sufficient to hold 7 days data | EBS SC1 | Striped if necessary
/interface | Large enough to store interface files | EBS GP2 | Single volume
/sapnas | Varies | NFS | Mount from utility instance

Note : The original size of the /var volume in the CSL build is 20Gb. In this SOE, this has been extended
to 100Gb. Currently the SAP AMI gives a 20Gb volume, which must be manually extended to 100Gb.
See instructions on how to do this here. This note will be updated when a new version of the SAP SUSE
AMI is available with the 100Gb volume as standard.


SAP on SAP ASE will have the following standard filesystems. The sizes are for Dev/Sandbox
systems. The sapdata and saplog volumes will be increased for larger environments. All the
SAP ASE utility directories are being created under ‘/sybase/<SID>/’; this may change if
experience dictates.

ASE Filesystems

Notes:
 All filesystems XFS (except swap, which is type swap)
 Shared filesystems (/sapmnt, /interface etc.) are exported from the DB but will move to EFS when available
 Multiple sapdatas if required
 /backup is to SC1 in prod and non-prod
 Sterling interface folder should be shared from the Sterling instance in the same account

DB Server:     / (GP2), /usr/sap (GP2), swap (GP2), /sybase/<SID> (GP2), /sybase/<SID>/sapdata_n (GP2), /sybase/<SID>/saplog_n (GP2), /sapmnt (GP2), /backup (SC1), /interface/<SID> (NFS)
App Server(s): / (GP2), /usr/sap (GP2), swap (GP2), /sapmnt (NFS), /interface/<SID> (NFS)
Sterling C:D:  /interface/<SID> (GP2)


Filesystem | Size | Device Type/Storage type | Construction
/ | 50G | EBS GP2 | Single volume
/tmp, /var, /var/tmp, /var/log, /home, /var/log/audit | 100G (20Gb each) | EBS GP2 | Single volume
/usr/sap | 100G | EBS GP2 | Single volume
/sapmnt | 100G | EBS GP2 | Single volume
/sybase/<SID> | 50G | EBS GP2 | Single volume
/sybase/<SID>/sapdata_1 | 500G - 3TB | EBS GP2 | Single volume
/sybase/<SID>/saplog_1 | 50G | EBS GP2 | Single volume
/backup | Large enough to store 7 days of change | EBS SC1/ST1 (prod) | Striped if necessary
/interface | Large enough to store interface files | EBS GP2 | Single volume
/sapnas | Varies | NFS | Mount from utility instance

Note : The original size of the /var volume in the CSL build is 20Gb. In this SOE, this has been extended
to 100Gb. Currently the SAP SUSE AMI gives a 20Gb volume, which must be manually extended to
100Gb. See instructions on how to do this here. This note will be updated when a new version of the
SAP SUSE AMI is available with 100Gb /var volume as standard.

Other filesystems, for example those for LiveCache, will follow a similar approach and will be
defined in the buildsheets.

If the database is close to or bigger than 3TB, further SAPDATAx disks can be added at 3TB
each.

AWS S3 storage will be used for storing DB transaction log backups sync’d from the AWS
instance primary backup filesystem.

File system mount options (excluding root, NFS and swap, which keep the default settings):

nobarrier,noatime,nodiratime,nofail,logbsize=256K

Notes: We will not be using RAW devices. The option ‘nofail’ was added after approval at the
December 2016 BTD.
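For illustration, an /etc/fstab entry using these options might look like the following (the device name and mount point are examples only):

```
/dev/xvdf  /hana/data  xfs  nobarrier,noatime,nodiratime,nofail,logbsize=256K  0 0
```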

The SOE originally mandated the use of partition tables on each volume/device mounted to
an AWS instance. Since AWS released online resizing volume functionality, it was decided to
change this recommendation to mandate NO partition tables on application
volumes/devices mounted within the OS (this does not apply to root/var volumes that are
delivered with the AMI). This is because the existence of a partition table does not allow for
a fully online filesystem resizing procedure (filesystems must be unmounted at the OS level
first, before resizing occurs). Hence, no partition tables should be created on devices within
the OS, and filesystems should be created directly on the mounted volumes. This also means
that Yast can no longer be used to create filesystems and a manual procedure should be
followed instead. This procedure is documented here.

8.3 Volumes
All volumes will be mounted using the filesystem type “XFS”. When creating volumes, the
‘Delete on Termination’ flag should NOT be set as a safety measure in case someone
accidentally terminates an instance.

In early cloud HANA deployments we defined the /hana/data and /hana/log filesystems as
single logical volumes within a volume group. For various performance reasons, this has
been discontinued and we will use the volumes as provisioned by AWS, without logical
volumes or groups. Therefore all filesystems will be mounted as a single device, e.g. /usr/sap,
/hana/log etc.

Some older PoC systems may still have the logical volume and group setups.

Note : In some rare circumstances, when attaching an encrypted volume to an EC2 instance, the
volume can appear to already be partitioned when viewed from the OS. This is caused by the random
encryption characters encoded on the volumes. If this occurs, you can either detach and delete the
volume and create a new one, or run the following command at the OS to initialise and remove the
random partition information :


sudo dd if=/dev/zero of=/dev/xvd<DEVICE LETTER> bs=512 count=1 conv=notrunc

8.3.1 Volume Tagging Convention


The volume naming convention, when creating volumes in the CSL3 VPCs, is as follows (the
only difference is the Tag Key to use):

CSL3 VPC Tag Key = te-name

Tag Value (for all VPCs):


<APPLICATION>-<SID>-<DEVICE TYPE>-<MOUNT POINT>-<NUMBER>

Where

APPLICATION = High level application name (e.g sap, bartender etc.)

SID = System Identifier of main system using this volume (e.g CDL). For Non-SAP, this
should be a unique identifier for the application instance

DEVICE TYPE = whether this volume will be mounted as a single device or included in
a volume group (e.g VG or SD)

MOUNT POINT = the mount point of the volume on the server (e.g /usr/sap/CDL)

NUMBER = the number of the volume if this is part of a volume group (e.g 02)

Examples :

1) The single volume for /usr/sap/CDL on the shared CDL/PDL AWS instance would be
called :
a. SAP-CDL-SD-/usr/sap/CDL-01
2) The second volume as part of the WDL Hana data volume group would be called :
a. SAP-WDL-VG-/hana/data/WDL-02

8.4 S3
S3 storage may be used in the CSL v3 VPCs; however, any buckets created must follow these
rules :

• The bucket name must be exclusively lower case and begin with “<cloud env>-osb-“
• The last 4 digits can be used as required
• The S3 bucket upload configuration MUST have server side encryption set

For example, a valid name for a bucket is “we1-t1-te02-osb-0001”.


Any subdirectories can be created within the bucket as required. The bucket can only be
accessed from instances within the same Cloud Environment.
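A hedged sketch of a check for the naming rule above. The exact format of the “<cloud env>” prefix is an assumption here; the rule enforced is simply all lower case, with “-osb-” before a final four characters:

```shell
# Validate a CSL v3 S3 bucket name against the SOE naming rule (illustrative)
valid_bucket_name() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9-]*-osb-[a-z0-9]{4}$'
}

valid_bucket_name "we1-t1-te02-osb-0001" && echo "valid"
valid_bucket_name "WE1-T1-TE02-OSB-0001" || echo "rejected: not lower case"
```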


8.5 Storage Encryption


Storage encryption on the OS (via Yast or LVM) will not normally be used. Encryption will be
at the application layer and the AWS layer.

DS and the CSL specify that all volumes should be encrypted, therefore when building
instances via the AWS console, all volumes should have the “encrypted” flag set (this will be
specified in the build sheet provided by S&A). The newer CSL builds have this set at the AMI
level.

Currently Hana backups are not encrypted, whereas SAP ASE backups can be. Therefore
the /backup filesystem should have the ‘encryption’ flag set during the volume creation for
both the DB types, for consistency (as per the general recommendation above).

8.6 NFS Exports/Mounts


NFS exports/mounts will be required for various situations (e.g /sapmnt shared across SAP
servers). The following standards will apply:

1) NFS Version 4 should be used for all exports/mounts


2) Use the following options ONLY when exporting a filesystem via NFS (see note below) :
a. rw
3) Use the following options ONLY when mounting an NFS filesystem:
a. rw, bg, hard, nolock, vers=4

Note 1 : For HANA installations, SAP recommend setting the “no_root_squash” and “no_subtree_check” options
for the NFS export settings (OSS note 2099253). Additionally, it has been found that these parameters need to be
set during any SAP installation or upgrade/patch activity using SWPM or SUM. Therefore these two options should
be set on NFS exports during installation and patching time ONLY. These two should then be removed after the
installation or patching is complete, as there are security implications in leaving them active.

It is possible to export filesystems from CSL v3 VPC based instances and mount them on on-
prem servers, but this should only be performed on a temporary basis during migration
activities.

Note 2 : For NFS exports that need to be mounted on a BP1 based system, the export domain must be set to
“bp.com”
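Putting notes 1 and 2 together, an /etc/exports entry on the exporting host might look like the following sketch (the export path and client range are examples only):

```
# Normal operation: rw only
/sapmnt/CDL  10.0.0.0/8(rw)

# During SWPM/SUM installation or patching ONLY (remove again afterwards):
# /sapmnt/CDL  10.0.0.0/8(rw,no_root_squash,no_subtree_check)
```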

8.7 NFS Utility Server


In order to provide a central software repository that can be shared amongst instances, an
NFS utility instance will be configured which will provide NFS exports of a mounted
filesystem to all instances within AWS. This is equivalent to the SAPNAS volume currently
used within the BP network. The volume will be based on low performance cheaper disk and
will initially be sized at 2TB. The Utility server will also be shared with other tools such as SAP
LVM. The utility server details are as follows:

CSL3 VPC – NEW (Mountable from instances in ALL accounts)

Hostname : we1p103190002

IP ADDRESS : 10.163.210.10

NFS Exported Filesystem : /sapnas

SAP in AWS Reference Architecture
SAP in AWS Page 35
The sapnas filesystem is currently exported as read-write and can be mounted on any host in
the VPC using the following command :

mount 10.163.210.10:/sapnas /sapnas

The structure of /sapnas is the same as the BP SAPNAS and should be used in the same way.

9 Network
9.1 Large File Transfers
In order to move large amounts of data from BP datacenters into AWS, there are three
options :

1) Use the Aspera connect service. This service is run by the CSL and can be used to transfer
large files to AWS
2) Direct SFTP connection from BP servers to AWS instances

For option 2, the following steps can be used:

Steps:

1. Need to generate public key file id_rsa.pub for the user on Solaris server using ‘ssh-
keygen -t rsa’ command

2. Copy this file over from home/<user>/.ssh folder to AWS server.

3. Append the content of this file to /home/<user>/.ssh/authorized_keys using the ‘cat <file>
>> authorized_keys’ command

4. Run sftp from Solaris server using (for example) ‘sftp [email protected]

Care should be taken with option 2) not to affect the overall bandwidth available to AWS.
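
Step 3 is the part most commonly done incorrectly, so the following runnable sketch
demonstrates the authorized_keys append using mock paths under /tmp. The user name and key
content are hypothetical stand-ins; on the real hosts the target path is
/home/<user>/.ssh/authorized_keys.

```shell
# Mock of steps 2-3: append the copied public key to authorized_keys
mkdir -p /tmp/sftp_demo/.ssh
echo "ssh-rsa AAAAB3...example someuser@solarishost" > /tmp/sftp_demo/id_rsa.pub
cat /tmp/sftp_demo/id_rsa.pub >> /tmp/sftp_demo/.ssh/authorized_keys
chmod 600 /tmp/sftp_demo/.ssh/authorized_keys
```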

3) Alternatively, scp can be used :

scp (-r for recursive) <source file/folder> <target server user>@<target server>:<target
file/folder> e.g.

scp –r CDL_EXPORT [email protected]:/backup/CDL_EXPORT

This command must be executed on the source (only push from BP to AWS is currently
supported).

9.2 IP Addresses

Each AWS instance will be assigned one IP address, regardless of how many SAP systems are
running on it. In the Production-like systems (F, R, R-DR) there will be one IP address per
AWS instance hosting a SAP application instance, and two or three on the DB/(A)SCS AWS
instance. On the single-IP-address AWS instances the IP address will be assigned to the
hostname with the default domain suffix. On the instances with two/three

IP addresses the additional VIP(s) will be assigned to the SAP ASCS/SCS/(External) WD
instances. This is so the DNS aliases can move to a new AWS instance in the case of a DR
invocation.

When creating new AWS instances, the Primary IP address field should be left as ‘Auto-
assign’ so that AWS automatically assigns a free IP address from the subnet chosen. IP
addresses can be reassigned to new instances if that is required.

9.3 Hostnames
The CSL3 instances are created from CSL-provided AMIs. The hostname is automatically
generated when an instance is launched, and is based on the cloud environment name; an
example hostname is we1t101430006. The FQDN, i.e. <hostname>.cd2.bp.com, is
automatically registered on the AWS DNS servers.

9.4 Default Domain


The default domain allocated to the CSL v3 AWS instances is:

cd2.bp.com

9.5 DNS
The Project DNS standards will remain similar to those we are using on-premise. The
‘sap<sid>.bpweb.bp.com’, ‘sap<sid>.cd2.bp.com’ and ‘sap<sid>.bp.com’ DNS aliases
should be registered as CNAMEs (no reverse lookup) against the AWS instance hostname A-
record, i.e. <hostname>.cd2.bp.com.

Note: For external WD only

The SAP web dispatcher DNS aliases have changed, in Project systems, from the earlier on-
premise setup now that the web dispatchers have their own SID. The new DNS alias
structure for Project and Production systems is:

sap<target sid>wd.bpweb.bp.com
sap<target sid>wd.cd2.bp.com
sap<target sid>wd.bp.com

where the <target sid> is the backend SAP instance.

Security certificates should be registered against the sap<target sid>wd.bpweb.bp.com alias,
with a Subject Alternative Name (SAN) for the sap<target sid>wd.cd2.bp.com and sap<target
sid>wd.bp.com aliases.

End Note:

The Production DNS standards will also remain similar to those being used on-premise.
However not all aliases will be against virtual IPs. See the following table for guidance on

which aliases should be requested and which should be a CNAME or an A record against a
VIP.

Instance                       Virtual Hostname (where required)  Type of DNS Entry
Database                       SAP<SID>DB                         CNAME to host
Primary Application Server     SAP<SID>A                          CNAME to host
(A)SCS                         SAP<SID>                           A record with reverse (VIP);
                                                                  also used for integrated WD
Secondary Application Servers  SAP<SID>B, SAP<SID>C etc           CNAME to host
Web Dispatcher                 SAP<SID>WD                         A record with reverse (VIP);
                                                                  only required for external WD

When requesting the CNAMEs above, please request in the following three domains :

1) BPWEB.BP.COM. This domain will continue to be used for “user” to “server” connections
2) CD2.BP.COM. This is a default domain for AWS and will be used for new “server” to “server”
connections
3) BP.COM. This is the legacy domain, but will be retained to ease replatforming projects and
ensure old RFCs etc. will still resolve to the new target instances.

The BPWEB.BP.COM and BP.COM CNAME aliases can be requested from the on-prem Global
DNS team. The CD2.BP.COM CNAME aliases can be requested via the CSL.

During replatforming projects temporary DNS aliases can be used, while both source and
target systems exist. The temporary DNS alias format is to add an ‘x’ to the ‘sap<sid>’
segment of the alias names, e.g.

sap<sid>x.bpweb.bp.com
sap<sid>x.bp.com

Once the source systems have been switched off, the target instances can adopt the standard
DNS aliases and the ‘x’ aliases can be deleted.

9.6 Instance Network Interface


There is currently only one network interface with a primary IP address per instance, in most
cases. Production and Production Test systems may require a secondary IP address for the
SCS and WD (for those instances not using integrated web dispatcher i.e. Java only stacks)
components. Currently these must be separately requested by raising a ticket against the
CSL team, who will create the secondary IP and assign it to the ENI for the appropriate
instances. Once assigned to the AWS instance, the secondary IP needs to be configured in
SUSE and set up to be persistent (note this procedure is only relevant for SP1; in SP2
it becomes much easier):

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SOE/Cloud/SOE%20Best%20Practice%20Procedures/SUSE%2012%20SP1%20-
%20Add%20additional%20IP%20at%20boot.docx


10 Databases
10.1 Which Database to Use
Usage                       Database   SOE Guide
BW Reporting Systems        Hana       Hana SOE (link below)
Other systems               SAP ASE    ASE SOE, SRS installation/setup,
                                       SRS Operations Guide (links below)
Archived Read-Only Systems  Oracle
LiveCache                   MaxDB

Hana SOE:

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20HANA%20Standards%20%26%20Documents/SAP%20HANA%20-%20AWS%20SoE.docx

ASE SOE:

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP%20ASE%20on%20AWS%20for%20SAP%20applications%20-%20SOE.docx

SRS installation/setup:

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP_ASE_SRS_Build%20and%20Config.xls

SRS Operations Guide:

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP_ASE_SRS_Operations_Guide.xls

10.2 Database Encryption


Database encryption should be activated where available (e.g. HANA and SAP ASE).

10.3 Oracle
10.3.1 Use of Oracle for SAP is currently not allowed
Oracle in AWS is currently not allowed, see SAP Note 1656099 which states (as of Jan 2016)

"SAP on Oracle" is not a supported software stack in an AWS


environment, because currently there is no Oracle/AWS support
cooperation agreement.

Oracle can therefore not currently be used by productive instances which require SAP
support, but it can be used for Read-Only systems, to simplify the migration.

10.3.2 Licensing of Oracle in AWS


Notwithstanding the issues with support, the licensing for Oracle in AWS is as follows:

Each Amazon instance must be licensed based on the rules laid down for AWS in the Oracle
cloud licensing documents, put simply these are:
1. Look up the number of virtual cores for the instance in the Amazon Virtual Cores
Table
2. Apply the Oracle core factor (normally 0.5 for Intel CPUs)
3. The number of licenses required for Intel CPUs is therefore the number of virtual
cores divided by 2.
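
A worked example of the calculation above, for a hypothetical Intel-based instance with 16
virtual cores (core factor 0.5):

```shell
# 16 virtual cores x 0.5 Intel core factor = 8 processor licenses
VCORES=16
LICENSES=$((VCORES / 2))    # divide by 2, per the Intel core factor of 0.5
echo "Oracle processor licenses required: $LICENSES"
```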


11 High Availability and DR


11.1 Principles
Follow best practices for AWS, balancing uptime against cost and complexity.

11.2 HA Architecture
High availability is achieved using a combination of Amazon CloudWatch and auto-restart in
Hana/ASE.

In the event of a hardware failure within AWS, CloudWatch will automatically restart the
Hana virtual server instance on a new physical server, and Hana will auto restart on boot of
the new server instance. Application instances will be located in two AWS availability zones,
to reduce the risk of availability zone outages causing a SAP system outage. Full size
“dormant” (shut down) application server instances will also be located in two availability
zones, ready to be used if an availability zone is lost.

There will be no use of clustering or SAP Enqueue replication in the HA architecture.

The AWS alarm configuration to enable CloudWatch-based HA recovery is very simple. An
alarm should be set on each instance requiring HA, with the following settings:

Send a Notification to : Support Email Addresses (can be multiple)


Take the Action : Recover this instance
Whenever : Status Check Failed (System)
Is : Failing
For at least : 1
Consecutive period(s) of : 1 minute
Name of alarm : <HOSTNAME>-StatusCheckFailed-(System)
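
The same alarm can be scripted with the AWS CLI. The sketch below is an illustration only:
the instance ID, hostname and region are hypothetical, and the email notification action
would additionally need an SNS topic ARN (not shown).

```shell
HOSTNAME=we1p103190002              # hypothetical instance hostname
INSTANCE_ID=i-0123456789abcdef0     # hypothetical instance ID
ALARM_NAME="${HOSTNAME}-StatusCheckFailed-(System)"

# Create the HA recovery alarm (requires AWS credentials; the call fails harmlessly otherwise)
aws cloudwatch put-metric-alarm \
  --alarm-name "$ALARM_NAME" \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions "arn:aws:automate:eu-west-1:ec2:recover" \
  || echo "aws CLI unavailable or call failed (expected outside AWS)"
```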

Detailed Cloudwatch Monitoring must be enabled on the instance for this to take effect. This
alarm will be triggered if and when AWS detects hardware issues which are affecting the
instance. The recovery will effectively restart the instance on different hardware within the
same availability zone. These HA alarms should be set for every SAP instance, whether in
Test, Pre-Prod or Prod accounts.

As per the on-prem SOE v2, it is still required to add a SAP gateway process to the ASCS
instance. This central gateway should be used to register RFC server programs instead of
individual app server gateways. Please see the on-prem SOE v2 Handbook for more details.

11.3 SAP/Database HA
In Production and Production Fix the SAP components (ASCS, SAS, WD etc.) and database
should be set to autostart after an instance restart. For SAP components and Hana the
‘Autostart = 1’ parameter (with a capital ‘A’) should be used in the instance profiles. For ASE
the database/SRS start script needs to be added to the OS boot init files (see the ASE SOE
for more details).
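
For the SAP/Hana side this is a single profile entry; a fragment for illustration (the profile
path is an example):

```
# In the instance profile, e.g. /usr/sap/<SID>/SYS/profile/<SID>_ASCS<NN>_<hostname>
Autostart = 1
```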

11.4 DR Architecture
Disaster recovery is achieved by replicating a copy of the production database to another
availability zone. This will be achieved in HANA via inherent HANA replication and in ASE via
Sybase Replication Server. The standby instance will be sized as a “pilot light” instance, with
just enough capacity to enable the replication to function correctly. In the event of a DR, the
standby database instance would be manually resized in AWS to match production.

As per the HA architecture, application servers are deployed across both availability zones,
reducing the risk of application issues when moving to the DR instance. The Fix instance is no
longer stacked with DR, but is run on a separate instance. This simplifies the design.

Client traffic will need to be redirected to the IP of the DR instance. Initially a manual DNS
change will be used, but there may be other options (e.g. ELB) that can be investigated to
avoid this requirement (TBC). It is envisaged that this architecture could also be used in
certain HA scenarios, certainly more frequently than it is used in on-premise SAP systems
today.

11.4.1 Rsync
RSYNC will be used in the same way that it is currently used in the on-prem systems to
replicate certain filesystems (e.g /sapmnt/SID) between the primary and secondary servers.
Please refer to the on-prem SOE handbook for standards around the use of RSYNC.

The rsync scripts for SUSE in AWS have to change slightly. The SUSE version of the /sapmnt
rsync script is embedded here:


For the AWS builds, passwordless SSH must be enabled between the source and target
hosts, to allow RSYNC to function. The procedure for how to enable this can be found here.

11.5 HA/DR Diagram

Availability Zone A : WD; App 1; App 3 (on-demand: AZ B failure or exceptional load); DB Active
Availability Zone B : WD; App 2; App 4 (on-demand: AZ A failure or exceptional load); DB Standby (pilot light)

DB Replication : DB Active (AZ A) -> DB Standby (AZ B)

11.6 Virtual Hostnames in SAP Systems


To meet the BP SAP<SID> naming convention for end user connectivity, and the reverse DNS
requirements of SAP Note 129997, VIPs are required for production systems.

Note: the VIPs are only needed for components that need to fail over to different hosts. See
section 9.6 above. The SAP components with VIPs should be installed against the virtual
hostname. The SAP components without VIPs should be installed against the local instance
(physical) hostnames.


11.7 Reference
https://d0.awsstatic.com/enterprise-marketing/SAP/sap-hana-on-aws-high-availability-disaster-
recovery-guide.pdf
https://d0.awsstatic.com/enterprise-marketing/SAP/
SAP_HANA_on_AWS_Implementation_and_Operations_Guide.pdf

11.7.1 DB Replication Mode by System Type

System Type        DR Replication Mode  Applies to
Transactional      Synchronous          ECC, S/4Hana
Non-transactional  Asynchronous         BW on Hana, Portal


12 Backups
12.1 Principles
This section contains the high-level architecture, retention periods, and standards around
compression etc., OS backups, application backups and DB backups for SAP.

12.2 Backup Retention


12.2.1 OS and Application Backups

Daily Snaps : 7 days


Weekly Snaps : 4 weeks
Monthly Snaps : 1 month

12.2.2 Database and Log Backups

Daily DB Backup to local Storage : 7 days


Daily Snaps of backup storage : 7 days
Weekly Snaps of backup storage : 4 weeks
Monthly Snaps of backup storage : 1 month
Hourly S3 transfer of DB transaction log backups (Development and Production only) : 35
days
Projects are expected to manage backup space (and associated costs) themselves to allow
maximum flexibility.

The below standard script “sync_backups_to_S3_and_cleanup.sh” should be used to copy
the log backups, for Development and Production systems only, to S3 storage. The script
also deletes DB and log backups, from local storage, that are older than 7 days in all
environments for both Hana and ASE. For Hana it also deletes the backups from the backup
catalog; for that, the backup_get.sql SQL script is also required. These scripts should
therefore be copied to all environments, although the backup_get.sql script is only required
for Hana databases. The script logic determines if the system is a Development or
Production instance and alters the behaviour accordingly.

The cleanup script needs to be scheduled in cron as the <sid>adm (for Hana) or syb<sid>
(for ASE) user to run hourly. For Development and Production an S3 bucket is required
named “<ce>-osb-bkpr”, all in lower case, e.g. “we1-p3-0001-osb-bkpr”. For DR the bucket
should be named “<ce>-osb-bkdr”, e.g. “we1-p3-0001-osb-bkdr”. The S3 bucket should
have a lifecycle rule created to permanently delete all files after a period of 35 days. This
policy should be set at the bucket level and conform to the following:

Rule Name : Delete_After_35_days
Transitions : Do not tick either box
Expiration : Tick Current and Previous Versions and set both expirations to 35 days. Also tick
“Cleanup incomplete multipart uploads” and set the expiration to 7 days.
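
The rule above can also be applied via the AWS CLI. This is a hedged sketch: the CE name
is a hypothetical example and the call needs valid credentials to succeed.

```shell
CE=we1-p3-0001                 # hypothetical cloud environment name
BUCKET="${CE}-osb-bkpr"        # use "${CE}-osb-bkdr" for the DR bucket

aws s3api put-bucket-lifecycle-configuration --bucket "$BUCKET" \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "Delete_After_35_days",
      "Filter": {},
      "Status": "Enabled",
      "Expiration": {"Days": 35},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 35},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
    }]
  }' || echo "aws CLI unavailable or call failed (expected outside AWS)"
```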

Note that the “Previous Versions” option does not apply to our buckets, as we do not use
version control; however, the policy is still set for completeness.
When calling the script, if the system is not a Development or Production instance a dummy
value (e.g. ‘dummy’) can be entered for the S3 bucket name, as it won’t be used by the script.
Both scripts should be copied to the ‘/backup/scripts’ folder on the DB server. Full details of
how to schedule the script in cron are contained in the cleanup script comments. The scripts
are under change control and should not be changed during a deployment without
consultation.

Note: For Hana backups, those older than 7 days are removed from the Hana catalog as well
as from local disk. Therefore, to recover from a backup older than 7 days it first has to be
recovered from the OS SNAP backups. The Hana recovery can then be executed by
referencing the recovered backup file rather than the Hana catalog.

12.3 CSL v3 VPC


SNAP backups of all volumes attached to CSL v3 instances are currently automatically
scheduled in Production and Prime. These can be viewed in the AWS console as normal.
Although ALL volumes are currently included, there is work underway to enable engineers to
deselect certain volumes from this standard schedule (for example DB data and log
volumes, which are backed up via the DB backup mechanism). This section will be updated
when this is in place.

12.3.1 OS/Application Backup Architecture


The root and application binary volumes will be backed up via the AWS SNAP process as per
the above. This includes all volumes apart from the DB data and log volumes, and also
includes the DB backup volumes. These are scheduled by the CSL managed tool N2WS. AWS
SNAPs are automatically stored on S3 storage and are stored on multiple availability zones.

SNAP backup recovery procedure.

12.3.2 Database Backup Architecture


Databases will be backed up using the standard DB backup tools:

1) Hana : HANA backup tool


2) ASE : ASE Backup tool
3) Oracle : RMAN/brbackup tool

No 3rd party backup tools will be used. These backup tools will take online consistent
backups from the primary DB to compressed and encrypted (where supported by the DB)
flat files on a dedicated data backup volume:

/backup/data/<DB SID>

The same backup tools will take log backups from the primary and secondary DBs to
compressed and encrypted (where supported by the DB) flat files on a dedicated log
backup volume:

/backup/log/<DB SID>

Both data and log backup volumes will be included in the standard CSL scheduled volume
SNAP backup schedule as per the retention period and frequency above. DB backups should
therefore be scheduled to avoid overlapping with the standard backup SNAP timings as
much as possible.

Additionally, in the Production and Development systems only, an hourly script will execute
to copy transaction log backups from /backup/log to an S3 bucket. These S3 transaction log
backups will be kept for 35 days.

12.3.2.1 Data Backups


Database data backups will be scheduled from the database-specific tool:

1) HANA : HANA XS Engine Backup Scheduler (where no ABAP Stack) / DB13 (ABAP stack)
2) ASE : ASE DBACOCKPIT Backup Scheduler
3) Oracle : DB13 Backup Scheduler

For Hana databases that have a directly connected ABAP stack the backups should be
scheduled via DB13. For Hana databases that do not have an ABAP stack the backups can
be scheduled in the XS engine. The details of how to schedule the Hana backups in the XS
engine can be found in the HANA_XS_Engine_Scheduled_Backups document here.

Backups will be scheduled as follows:

Hana

Saturday : Full online consistent backup


Sunday : Differential online consistent backup
Monday : Differential online consistent backup
Tuesday : Differential online consistent backup
Wednesday : Full online consistent backup
Thursday : Differential online consistent backup
Friday : Differential online consistent backup

(see the Hana SOE guide for further details)

ASE

(Note: ASE cumulative backups are not being used at this time as they aren’t available via
DBACOCKPIT)

Saturday : Full online consistent backup for all databases


Sunday : Full online consistent backup for all databases
Monday : Full online consistent backup for all databases

Tuesday : Full online consistent backup for all databases
Wednesday : Full online consistent backup for all databases
Thursday : Full online consistent backup for all databases
Friday : Full online consistent backup for all databases

(see the ASE SOE guide for further details)

12.3.2.2 Log Backups


Database log backups will be scheduled from the database-specific tool:

1) HANA : HANA Automatic log backups


2) ASE : ASE DBACOCKPIT log backup Scheduler
3) Oracle : DB13 Archivelog Backup Scheduler

Frequency will be as follows :

1) HANA : Automatic Log Backups every 15 minutes


2) ASE : ASE scheduled Log Backups every 1 hour
3) Oracle : DB13 Log Backups every 1 hour

Additionally, Development and Production system log backups will be transferred to an S3


storage bucket on an hourly basis with a retention period of 35 days.

(see the ASE SOE guide for further details)

12.4 Monitoring
The automated SNAP backup process, scheduled by the CSL, is monitored by the CSL support
team.

The backups of the DB data and DB logs will be monitored via Solution Manager.


13 Maintenance
13.1 Housekeeping and Archiving
Follow the defined standards for housekeeping.


14 Support
14.1 Responsibilities
Component                              Responsible        Notes
SAP Application                        ADAM Partner / TE  Same as on-prem
SAP Basis                              ADAM Partner / TE  Same as on-prem
OS Support                             ADAM Partner / TE  OS support on-prem was with GOI/HP;
                                                          this will move to the ADAM Partner/TE
                                                          in AWS
OS Patching                                               Can be either ADAM Partner (change of
                                                          scope from current) or under the Cloud
                                                          Service Line OS support offering
Infrastructure (including Hypervisor)  CSL/AWS

14.2 SAP Support Connectivity


Connectivity from SAP support to instances in AWS will be provided via an encrypted
SAProuter connection between an on-prem SAProuter and a SAProuter in the Prime account
in AWS. The details are :

On Prem

Hostname : reuxgbuz168 (SD0 host)


IP Address : 149.189.163.40
Alias : saprouterop.bpweb.bp.com

AWS

Instance ID : i-004729fd102208e2e
Hostname : we1p103190002
IP Address : 10.163.210.10
SAP Router Alias : saprouteroc.bpweb.bp.com

The AWS saprouter can be started by the root user, by running the following command :

/usr/sap/saprouter/saprouter -r -S 3299 -K "p:CN=we1p103190002, OU=BP, O=BP, C=US" &

This SAProuter has a bidirectional SNC encrypted link to the SD0 SAProuter.
When setting up the connection to SAP from an AWS SAP instance, use the standard
connection string format, e.g. :

/H/saprouteroc.bpweb.bp.com/H/saprouterop.bpweb.bp.com/H/

For connections (e.g. SAPGUI) from the BP network to AWS:

/H/saprouterop.bpweb.bp.com/H/saprouteroc.bpweb.bp.com/H/

This will pass traffic through the encrypted saprouter to saprouter connection. Details on
the installation and the setup of this saprouter to saprouter connection can be found here :

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SOE/Cloud/SOE%20Best%20Practice%20Procedures/
AWS_SAProuter_Installation_Guide.xlsx?d=w66a4e1afd2c84dbb9e1b23c46398fb41

15 Monitoring and Reporting


Cloudwatch detailed monitoring should be enabled on all instances. This is a requirement for
SAP to support instances in AWS.

Logs:

The following logs will be kept for 2 years and regularly monitored by DS, SOC, CE Owners.

•Cloudtrail logs from every CSL AWS account

•VPC Flow Logs from each VPC within each AWS account

•Host level logs from EC2 instances launched within each account

Link to AWS logging architecture doc

https://basproducts.atlassian.net/wiki/display/CSL/AWS+Logging+Design

15.1 Alarms
Alarms should be created as follows to send alert emails to the support teams

Instance               Name                        Description  Metric          Threshold
Production / Prod Fix  <SID>-High-CPU-Utilisation  High CPU     CPUUtilization  >90 for 10 minutes


15.2 Actions
The HA alarm actions must be set up for ALL SAP instances, in ALL accounts (Test, Pre-Prod
and Production).

Actions should be created as follows; alerts should also be sent.

Instance                     Name                                    Description       Metric                         Action
Production / Prod Fix        <Instance>-StatusCheckFailed(instance)  Instance Failure  >=2 for 2 consecutive periods  Recover this instance
Project Stack (X/D/S/Q/T/M)  <Instance>-StatusCheckFailed(instance)  Instance Failure  >=2 for 2 consecutive periods  Recover this instance
Production / Prod Fix        <Instance>-StatusCheckFailed(System)    System Failure    >=1 for 5 consecutive periods  Recover this instance
Project Stack (X/D/S/Q/T/M)  <Instance>-StatusCheckFailed(System)    System Failure    >=1 for 5 consecutive periods  Recover this instance

Alerts for the Project Stack should be sent to the following email account:

GIT&[email protected]

15.3 Dashboards
A dashboard should be created for each logical grouping of instances. Actual details TBC.

15.4 Tagging

Tagging is used to help identify resources in AWS.

The resources created in the CSL3 VPC are automatically tagged with the CSL-defined tags
and values. However, to improve the searchability of resources by the TE engineers we will
create an additional custom tag.

Tag Key  Value Description                                          Examples
te-name  Descriptive name of the AWS resource:
         Instance = <see section on instance tagging convention>    Instance = HWG-DB-1
         Volume = <see section on volume tagging convention>        Volume = SAP-CDL-SD-/usr/sap/CDL-01

16 SAP
16.1 SAP Emailing
SAP emailing will be routed through the BP Microsoft Exchange servers in the same way as
on-premise SAP email is routed.

Any alerts generated by Solution Manager will utilise the current BP Microsoft Exchange
server setup. There will be no change for AWS (no use of SES for example).

Details on the setup of email for SAP (inbound and outbound) including details of how to
encrypt both inbound and outbound emails can be found under this SharePoint library

https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SAP%20Basis%20Configuration

Encryption must be set up for outbound and inbound emails from SAP in AWS (see ‘Emailing’
section under the OS section in this document).

Note: Both Postfix and SAP ICM SMTP service can’t run on port 25 on the same AWS
instance. For incoming email Postfix needs to run on port 25 (a requirement of Exchange) so
the SAP ICM SMTP port will need to be changed (see the above emailing setup documents
for further information).

16.2 SAP DB Gateway


For SAP ASE and Hana there is no requirement to install a separate SAP Gateway to connect
to the DB.

For Oracle deployments a DB Gateway is only required for non-central system installations,
i.e. where the application server instances are installed on different AWS instances to that of
the database.

16.3 SAP Kernel


The way the SAP hardware key is generated on AWS instances is dependent on the kernel
patch level. Therefore it can change when the kernel is patched (see OSS Note 1697114 for
the patch levels). A new licence key may therefore need to be requested during a kernel
patching exercise if the generation method changes.

16.3.1 Custom/3rd Party executable installation location


In the past, when dealing with small custom-built or 3rd party executables that can be stored
with the SAP kernel, we have had an issue of them being deleted during kernel patching. To
reduce this risk these executables should be installed in a separate folder under

/sapmnt/<SID>/ExtApps/<program id>, for example /sapmnt/<SID>/ExtApps/QCI. Any 3rd
party apps that install into their own installation directory should continue to do so.

16.4 SAP Profiles


It is desirable to add the majority of customized SAP parameters to the DEFAULT profile.
This ensures the whole system uses the same parameter value for consistency and reduces
the maintenance requirement. It is important that the parameters are not duplicated in
the instance profiles, as these will override the parameter value from the DEFAULT profile
and also add confusion around the active value and the place where it should be amended.

There are some parameters that have to be added to the instance profiles and others that
are more desirable to stay in the instance profiles. The table below gives a guide that
describes the parameter types and the preferred profile location.

The general rule is that any parameter which is set for the system and is unlikely to be
different across instances should be in the DEFAULT profile; for example, minimum password
length applies across all instances so should be set in the DEFAULT profile. If a parameter
can be set for a specific instance, and sometimes is, then it should go in the instance profile,
e.g. work process related parameters. There are also a few parameters that SAP have made
mandatory for the instance profiles.

This is not a complete list of all SAP parameters but shows those commonly added to SAP
profiles. These rules apply to new SOE systems. However, they should also be followed on
older systems where possible. They apply to both the Project and Production stack
instances.

New parameter sections can be requested by contacting Stephen Head or John Davie.

 
Parameter Type                              Parameter Examples             Recommended Profile
SAP security related                        login/*, auth/*, rsec/*,       DEFAULT
                                            rdisp/gui_auto_logout
Gateway related                             gw/max_conn                    DEFAULT
SAP host/directory location parameters      SAPTRANSHOST, SAPDBHOST,       DEFAULT
(tend to be in upper case)                  DIR_TRANS
RFC communications related                  rdisp/appc_ca_blk_no,          DEFAULT
                                            rdisp/rfc_min_wait_dia_wp
Language related                            zcsa/installed_languages       DEFAULT
Database related                            dbs/ora/tnsnames,              DEFAULT
                                            j2ee/dbtype
Message server/System/Enqueue related       ms/http_port, system/type,     DEFAULT
                                            enque/table_size
Java dispatcher/message server/SCS related  j2ee/ms/port                   DEFAULT
SAP buffers                                 zcsa/table_buffer_area,        DEFAULT
                                            rsdb/obj/max_objects
Work process related                        rdisp/wp_no_dia                Instance
ICM related                                 icm/server_port_0,             Instance
                                            icm/host_name_full
Memory related                              PHYS_MEMSIZE (Windows),        Instance
                                            em/initial_size_MB
Java instance related                       j2ee/phys_memsize,             Instance
                                            jstartup/*
Shared memory pools                         ipc/shm_psize_xx               Instance
RZ10 should be used for changing the DEFAULT and instance profiles so that the changes
are logged in SAP. The OS editor should only be used for standalone instances (ERS, SCS,
WD, GW).

The SAP profile checking program, sappfpar, should always be run to test profiles after
changing parameters. If buffer parameters have been increased, it is likely that the IPC
shared memory parameters (ipc/shm_psize_<nn>) will also need to be increased (as
identified by sappfpar).

As per the Apr 2016 BTDA, all new and replatformed NW 7.4 (or higher) ABAP systems running
on Linux should use zero administration memory management (ZAMM).

Implement ZAMM based on a simple calculation of the PHYS_MEMSIZE parameter:

1) 70% of free physical memory for a single instance.

2) Divide the share between multiple systems on one server (e.g. two systems would use 35% each).

3) The PHYS_MEMSIZE parameter should be set in the instance profiles, as the value may
vary for each instance.

4) Remove all other ZAMM-calculated memory parameters from both the instance and
DEFAULT profiles, based on OSS note 2085980.

5) Exceptions to the rule in step 4 are the parameters abap/heap_area_dia and
abap/heap_area_nondia, which can be manually set higher than the default value if required
(ZAMM does not calculate these).
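The sizing rule in steps 1–2 can be sketched as a small calculation. This is an illustrative sketch only: the function name and the example free-memory figure are our assumptions, not part of the SOE tooling.

```python
def zamm_phys_memsize(free_mem_mb: int, systems_on_host: int = 1) -> int:
    """ZAMM sizing rule from the SOE: allocate 70% of the host's free
    physical memory, divided equally when several SAP systems share
    the server (e.g. two systems get 35% each)."""
    if systems_on_host < 1:
        raise ValueError("at least one SAP system must run on the host")
    return int(free_mem_mb * 0.70 / systems_on_host)

# 64 GB of free memory, single instance: 70% -> PHYS_MEMSIZE = 45875 MB
print(zamm_phys_memsize(65536))
# Two systems sharing the host: 35% each -> 22937 MB
print(zamm_phys_memsize(65536, 2))
```

Per step 3, the resulting value goes into each instance profile rather than DEFAULT, since it may differ per instance.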

16.4.1 Standard Profile Parameters

Various issues encountered during testing of the SOE builds have resulted in a number of SAP
instance parameter recommendations. These are documented in this section.

1) Parameters SAPLOCALHOST and SAPLOCALHOSTFULL. These two parameters must be set to
the hostname that is returned from a reverse lookup of the IP address used to install the
SAP instance. To ascertain whether a system has this issue, check the following:
a. See what hostname/alias was used to install the SAP instances. This can be found at
the end of the SAP profile names, e.g. IMS_DVEBMGS10_sapimsa was installed
against host sapimsa.
b. Check that SAPLOCALHOST and SAPLOCALHOSTFULL are set to the same value, e.g.:
i. SAPLOCALHOST = sapimsa
ii. SAPLOCALHOSTFULL = sapimsa.bp.com
c. If this is not the case, change the SAPLOCALHOST and SAPLOCALHOSTFULL
parameters to the same value as the hostname that the SAP instance was
installed against.
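The checks in steps a–c can be automated against a profile file. The helper below is a hypothetical sketch (the function name and the simple `key = value` parsing are ours): it derives the installation host from the last token of the profile name and compares it with the two parameters.

```python
import re

def localhost_params_consistent(profile_name: str, profile_text: str) -> bool:
    """Return True when SAPLOCALHOST matches the hostname the instance was
    installed against (last token of the profile name, e.g.
    IMS_DVEBMGS10_sapimsa -> sapimsa) and SAPLOCALHOSTFULL is the fully
    qualified form of that same host."""
    install_host = profile_name.split("_")[-1]
    # Parse simple 'NAME = value' profile lines into a dict
    params = dict(re.findall(r"^(\w+)\s*=\s*(\S+)", profile_text, re.MULTILINE))
    local = params.get("SAPLOCALHOST", "")
    full = params.get("SAPLOCALHOSTFULL", "")
    return local == install_host and full.split(".")[0] == install_host

profile = "SAPLOCALHOST = sapimsa\nSAPLOCALHOSTFULL = sapimsa.bp.com\n"
print(localhost_params_consistent("IMS_DVEBMGS10_sapimsa", profile))  # True
```

A real check would read the profile from `/usr/sap/<SID>/SYS/profile/` and could additionally confirm the reverse DNS lookup mentioned above; that part is environment-specific and omitted here.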

2) OSS note 1728922 outlines a problem with memory management in the ABAP stack which
potentially affects all systems and has previously caused a P1 incident in the PRC system. The SOE
recommendation is to always set the following parameter when the kernel meets the
minimum patch levels outlined in the note:

a. rdisp/softcancel_check_clean = on

The other parameter mentioned in the note (rdisp/softcancel_sequence) should NOT be set.
Details on the rationale behind this setting can be found here.

3) SAPGUI scripting. It was decided in the June 2016 BTDA that the SAPGUI scripting parameters
should be set to 'true' in all systems, including Production. User access to scripting is then
controlled by SAP security roles. This was approved by the SAP security team and will appear in
the SAP security control document:
sapgui/user_scripting = TRUE
sapgui/user_scripting_per_user = TRUE

4) ASCS parameters for enqueue. It was approved in the August 2017 TEDA that the following SAP-
recommended parameters (OSS note 43614) should be implemented in all instances as a pre-emptive
measure:

Parameter                              | Profile                                  | Description
---------------------------------------|------------------------------------------|------------
enque/encni/set_so_keepalive = TRUE    | ASCS instance profile                    | Improves ENSA stability
enque/sync_dequeall = 1                | ABAP application server instance         | Make clients (work processes) wait for the
                                       | profile(s)                               | result of enqueue operations, so that any
                                       |                                          | issues can be reported
enque/deque_wait_answer = TRUE         | ASCS instance profile and ABAP           | As above; set together with
                                       | application server instance profile(s)   | enque/sync_dequeall
