SAP in AWS - SOE
Amendment History

Version | Date | Comment | By
0.1 | 25 Jan 2016 | Initial version | Jeff Forrest
0.2 | 17 May 2016 | Updates from SOE Governance sessions | John Davie
0.21 | 19 May 2016 | Added CIDR Block for dev/test VPC | Jeff Forrest
0.22 | 19 May 2016 | Added details on new utility server for SAPNAS plus saprouter connection to SAP for support purposes | John Davie
0.23 | 24 May 2016 | Added sections under storage and network | Stephen Head
0.24 | 03 Jun 2016 | Added information on using scp to transfer files from BP to AWS | John Davie
0.25 | 07 Jun 2016 | Various additions from the AWS question tracker | Stephen Head
0.26 | 13 Jun 2016 | Added information on default OS shell, SAP install type and S3 backup example script | John Davie
0.27 | 24 Jun 2016 | Added SNAP based backup script | John Davie
0.38 | 04 Jul 2016 | 1) Added new architectural decision on Web Dispatcher single SID per tier. 2) Added information on standard AWS Security Groups to be used. 3) Added information on volume naming convention | John Davie
0.39 | 05 Jul 2016 | Added section on IAM role for AWS instances | John Davie
0.41 | 11 July 2016 | Various additions from the AWS question tracker | Stephen Head
0.42 | 14 July 2016 | Updated security group ports, added non-SAP security group and added information on Windows Firewall setting. Also volume naming updated for non-SAP | John Davie
0.43 | 3 Aug 2016 | Updated the storage encryption statement | Stephen Head
0.44 | 3 Aug 2016 | Updated section on Volume Group/Logical Volumes | John Davie
0.45 | 4 Aug 2016 | Added the SWD installation guide URL. Updated the DNS standards | Stephen Head
0.46 | 5 Aug 2016 | Various additions from the AWS question tracker | Stephen Head
0.48 | 11 Aug 2016 | Various additions from the AWS question tracker | Stephen Head
0.51 | 26 Aug 2016 | Updated the secure saprouter details | Stephen Head
0.52 | 6 Sept 2016 | Added information on the AWS Data Provider | John Davie
0.53 | 12 Sept 2016 | Added information on setting up SSL for SAP Host Agent | John Davie
0.54 | 21 Sept 2016 | Added sapinst port 21212 to Security Group definitions | John Davie
0.55 | 17 Oct 2016 | Added information on encryption of HTTP to the backend | John Davie
0.56 | 21 Oct 2016 | 1) Added information on new Security Groups in CSL v3. 2) Added information on HANA instance number standards | John Davie
0.57 | 27 Oct 2016 | 1) Added new ports to the SAP SG in the CSL VPC. 2) Added information on new SAPNAS in CSL VPC | John Davie
0.58 | 02 Nov 2016 | Added information about standard SNAP backups taken in CSL v3 instances | John Davie
0.60 | 11 Nov 2016 | Added further information on UID/GIDs to be used | Stephen Head
0.61 | 15 Nov 2016 | Added standard on setting "Cloudwatch Detailed Monitoring" | John Davie
0.62 | 21 Nov 2016 | General updates from the AWS questions sheet | Stephen Head
0.63 | 22 Nov 2016 | Update with latest Backup/Restore strategy | John Davie
0.64 | 01 Dec 2016 | Updated with latest NFS standards | John Davie
0.65 | 2 Dec 2016 | Updated the tagging sections with the CSL account tagging info | Stephen Head
0.66 | 8 Dec 2016 | General updates for CSL3 deployments | Stephen Head
0.67 | 9 Dec 2016 | Added advice on EBS Optimised Volumes | John Davie
0.68 | 13 Dec 2016 | Added some more details to the scope of this document section | Stephen Head
0.69 | 13 Dec 2016 | Added link to DS Controls standards for security | John Davie
0.70 | 13 Dec 2016 | Updated VPCs and Storage sections | Jeff Forrest
0.71 | 16 Dec 2016 | Updates after Dec16 BTDA. Added instance stop/start/termination info | Stephen Head
1.01 | 6 Jan 2017 | Updated NFS recommendations and clarified Web Dispatcher recommendations | John Davie
1.03 | 11 Jan 2017 | Updated the external document links as the SOE and related docs have moved to the TE SharePoint | Stephen Head
1.04 | 11 Jan 2017 | Updated with basic information on creating/using S3 buckets in CSL v3 (naming convention etc.) | John Davie
1.06 | 18 Jan 2017 | Added link to the SAP ASE SOE doc. Added info regarding the requirement for a SAP DB Gateway | Stephen Head
1.07 | 18 Jan 2017 | Added info on using scp instead of SFTP. Added information on RSYNC | John Davie
1.08 | 26 Jan 2017 | Updated the backup section cleanup script and details | Stephen Head
1.09 | 27 Jan 2017 | Updated storage section to show Hana backup shared over NFS on scale-out | Jeff Forrest
1.10 | 27 Jan 2017 | Added the Hana SOE document link | Stephen Head
1.11 | 27 Jan 2017 | Added confirmation that a Gateway process is still required in the ASCS | John Davie
1.12 | 31 Jan 2017 | Added the ASE SRS install/config guide URL | Stephen Head
1.13 | 1 Feb 2017 | Updated VIP/virtual hostname usage info | Stephen Head
1.14 | 6 Feb 2017 | Added Subnets and Security Group IDs | John M.
1.15 | 6 Feb 2017 | Added more information on domains required for DNS entries | John Davie
1.17 | 7 Feb 2017 | Added clarification on SSL setup for WD (Option 4) | John Davie
1.19 | 9 Feb 2017 | Added information on email encryption to/from SAP. Added new port standard for HTTPS on ABAP ICM | John Davie
1.20 | 16 Feb 2017 | Removed BAS VPC references from this document. A BAS VPC specific version of the document will be saved separately | John Davie
1.21 | 17 Feb 2017 | Added the link to the SNAP backup recovery procedure | S Head
1.24 | 06 Mar 2017 | Added info to the sections on Encryption and Monitoring | Roy Keegan
1.25 | 08 Mar 2017 | Added information on mounting an NFS export from a CSL instance to a BP1 instance. Added HA Alarm Configuration info | J Davie
1.27 | 09 Mar 2017 | Updated with link to document outlining how to allow passwordless SSH for RSYNC |
1.28 | 10 Mar 2017 | Updated the secondary IP for Prod information. Added info on SUSE postfix setup of OS emailing | S Head
1.30 | 17 Mar 2017 | Reformatted the security group table to make it clearer | S Head
1.32 | 28 Mar 2017 | Updated the Hana backups section (use DB13) | S Head
1.33 | 28 Mar 2017 | Updated storage section to mandate the existence of a partition table for each device | J Davie
1.34 | 30 Mar 2017 | Updated the Hana backups section (use of XS engine). Updated RSYNC section | S Head
1.35 | 04 Apr 2017 | Updated text around when to set no_subtree_check and no_root_squash on NFS exports | J Davie
1.36 | 10 Apr 2017 | Added VPC and SG information for Prime account used for shared SAP services such as SAProuter and LaMa | J Davie
1.37 | 11 Apr 2017 | Added information about new SAPNAS in Prime | J Davie
1.38 | 13 Apr 2017 | Added information on using HA alarms for all stacks | J Davie
1.40 | 9 May 2017 | Added information about Prime VPC subnets to be used (INT, not APP) | J Davie
1.43 | 30 Jun 2017 | Added more details about SAProuter in Prime | J Davie
1.44 | 12 July 2017 | Updated backup storage types to SC1 and added /interface/<SID> sharing details | J Forrest
1.45 | 19 July 2017 | Updated to make clear that S systems now go in Pre-Prod VPC | J Davie
1.46 | 26 July 2017 | Updated the tagging sections as the Engineering role now has the access to add custom tags | S Head
1.47 | 26 July 2017 | Added information on which Windows AMI to use for Windows based builds | J Davie
1.49 | 01 Aug 2017 | Changed volume recommendations to remove VGs/LVs and use of partitions | J Davie
1.51 | 15 Aug 2017 | Added links to the Hana and ASE SOEs in reference docs section | S Head
1.53 | 13 Sept 2017 | Various small updates after the SOE knowledge sharing sessions | S Head
1.54 | 15 Sept 2017 | Updated SAPnas section to remove old SAPnas shares | J Davie
1.56 | 26 Sept 2017 | Updated SAP SG ports with Solman additions | J Davie
1.58 | 10 Oct 2017 | Updated the Backup section for Hana | S Head
1.59 | 11 Oct 2017 | Updated volume sizing info for /hana/shared. Updated the emailing section regarding the use of port 25 | S Head
1.60 | 12 Oct 2017 | Removed duplication with Standard SUSE Build for SAP AWS SOE | S Head
1.61 | 13 Oct 2017 | Added the backup to S3 and clean script | S Head
1.62 | 19 Oct 2017 | Removed SAP AWS Data Provider for SAP info, as this has been moved to the SUSE OS SOE | J Davie
1.64 | 03 Nov 2017 | Added note on random volume partition error | J Davie
1.67 | 28 Nov 2017 | Added a clarification of the allocation of SAP instance numbers in AWS to F and R | S Head
Latest | N/A | |
Contents
1 Introduction
1.1 Scope
1.2 Document References
2 SAP on AWS Architecture Summary
2.1 SAP on AWS Summary
3 SAP VPCs
4 SAP Builds
4.1 Principles
4.2 SID Naming
4.3 SAP Instance Numbers
4.4 SAP Web Dispatcher
4.5 SAP Host Agent
4.6 SAP Software Downloads
4.7 SLD
4.8 SAP Landscape Manager (LaMa)
4.9 Solution Manager
5 Security
5.1 Principles
5.2 AWS Management Console Access
5.3 OS Access, SSH, RDP
5.4 Remote access (outside BP1)
5.5 Data Encryption at rest
5.6 Data encryption in transit
5.7 AWS IAM roles
5.8 AWS Security Groups
5.9 Firewalls
5.10 OS Hardening
5.11 Passwords
6 AWS Instances
6.1 Principles
6.2 Instance Types
6.3 AWS Instance Builds
1 Introduction
The SAP Standard Operating Environment (SOE) is the reference document that describes the
architecture of all SAP systems deployed in BP. The SOE primarily defines the architecture of
systems; however, where required it also mandates specific settings and configurations
needed to support that architecture.
It is a framework of architectural and build standards for SAP and tightly coupled
applications in BP, with associated governance.
The SOE is required due to the large and complex nature of BP’s SAP estate.
It ensures consistent system architecture and configuration.
It follows industry best practice and standardisation within the BP environment.
It is constantly evolving.
It builds on top of other standards within BP.
It should be complied with on major architectural change (e.g. replatforming)
SOE standards are retrofitted only where there is a good business case.
1.1 Scope
The “SAP in AWS – SOE” details the architecture and deployment requirements for SAP
systems on the Amazon Web Services (AWS) EC2 platform. Unlike the previous on-premise
SOEs, the SAP in AWS SOE combines the requirements from Strategy and Architecture (S&A)
and Technical Environments (TE) into one document.
Many of the TE SAP Basis requirements stay the same as in the on-premise Basis SOE.
Therefore the “Basis SOE v2 handbook” should still be used as a reference. Any changes
required for the AWS deployments will be detailed in this document. Over time all the relevant
SAP Basis information will be transferred from the Basis SOE v2 handbook to this document.
This document is specific to the CSL v3 VPCs. The earlier BAS VPCs have slightly different
standards and these are stored in a separate document :
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SOE/Cloud/BAS%20VPCs/BAS%20VPCs%20SAP%20in%20AWS%20-%20SOE.docx
1.2 Document References

Standard SUSE Build for SAP AWS SOE :
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SOE/Cloud/Standard%20SUSE%20Build%20for%20SAP%20AWS%20SOE.docx

Basis SOE v2 handbook (on-premise) :
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SOE/Basis_SOE_v2_Handbook.docx

SAP HANA SOE :
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20HANA%20Standards%20%26%20Documents/SAP%20HANA%20-%20AWS%20SoE.docx

SAP ASE SOE :
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP%20ASE%20on%20AWS%20for%20SAP%20applications%20-%20SOE.docx
Item Selection
Multipathing OS Native
OS Storage EBS
3 SAP VPCs
VPC Name | Usage | Location | VPC ID | VPC Subnets
INTAZb 10.162.56.0/21
subnet-ac993fc8
INTAZc 10.162.64.0/21
subnet-d2458ca4
subnet-51448d27
INTAZc 10.162.192.0/21
subnet-f045a4a8
INTAZb 10.163.56.0/21
subnet-ef66c08b
INTAZc 10.163.64.0/21
subnet-6905cf1f
subnet-6e23ed18
4 SAP Builds
4.1 Principles
All project stack SAP installations where SAP and the DB are on the same virtual server will
be “Standard” installations (not Distributed or HA). For systems installed with NW 7.3 or
above, these will include a separate (A)SCS instance.
We will use as few AWS instances as practically possible while still ensuring ease of
management and meeting HA/DR requirements. For example, Production instances will have at
least two application server instances on separate AWS instances for high availability.
To ensure compatibility with automated deployment, stacking is not typically used unless
savings can be justified. Production systems will not be stacked together unless one is very
small, e.g. an ABAP and a Java system could be stacked together.
4.3 SAP Instance Numbers
Instance numbers 00 to 04 will be used. 00 will be the default instance number; however, should
that not be available for some reason, 01, 02, 03 or 04 can be used instead. Firewall and
security group rules cater for this range only.
Also note that, as the Production Fix (F) and Production (R) instances no longer share servers,
the same instance numbers (recommended) can be used for both systems when deploying
in AWS.
4.4 SAP Web Dispatcher
The SAP Web Dispatcher deployments will differ from on-premise SOE v2 in that there will be
a single Web Dispatcher instance per landscape tier, each with a separate instance number
for each backend system, e.g. three backend systems will all be served by Web Dispatcher
X<n><n>, with three separate instance numbers, e.g. 30, 31, 32.
In the project stack (excluding Mock and including S), the Web Dispatcher instance will
reside on the Java stack virtual server.
In the production stack (M, F & R), the Web Dispatcher will reside on a standalone virtual
server.
It should be noted that this architecture means that an outage to the Web Dispatcher will
affect all systems in that tier of the landscape.
The SWPM installer mechanism has been tested and caters for separate Web Dispatcher
installations under the same SID, with differing instance numbers.
The SWD installation guide should be used as a reference when setting up the Web
Dispatcher.
End Note
1) In the on-premise SAP systems, HTTP traffic is encrypted up to the Web Dispatcher, then
   decrypted between the Web Dispatcher and the backend SAP system. In AWS, traffic will
   still be decrypted at the Web Dispatcher, but will then be re-encrypted before being sent
   to the backend SAP system. The backend SAP systems must all therefore have SSL
   configured at the ICM layer. Please use the following standard for the SSL port on the
   ABAP ICM (an illustrative parameter sketch is shown after this list).
   Users/systems will still connect to the Web Dispatcher, therefore this will still need to
   have BP or external Certificate Authority certificates installed. The backend systems,
   however, may have SAP self-signed certificates with extended expiry dates (e.g. 25
   years). These certificates should not need to be replaced during the lifetime of the
   system. This applies to all of the backend SAP components which require an SSL server:
   https://help.sap.com/saphelp_nw75/helpdata/en/48/98e6a84be0062fe10000000a42189d/content.htm
2) Please ensure the following profile parameter is set in both the Web Dispatcher and
ABAP ICM for F and R systems. This reduces the amount of data shown during an ICM
error :
a. is/HTTP/show_detailed_errors=FALSE
3) There is no requirement to remove HTTP ports completely, hence these should continue
   to be configured on both the Web Dispatcher and ABAP ICM. A redirect on the Web
   Dispatchers should, however, be configured to ensure that any traffic going to the HTTP
   port is automatically redirected to HTTPS. The standard parameter icm/HTTP/redirect_<xx>
   should be used for this (see the sketch after this list).
   a. All systems must put a generic entry in table HTTPURLLOC to redirect all URLs to
      the WD HTTPS port.
Note : There will be NO redirect on the ABAP ICM for HTTPs and we will NOT use the action
file based modification handler on either the WD or ABAP ICM for redirect purposes. Use of
the action file to force usage of the correct virtual hostnames is, however, still permitted.
4) All other standards around Web Dispatcher (e.g Admin port on 90NN, use of Unicode
kernel etc.) are identical to the on-premise SOE v2. The SOE v2 handbook should
therefore still be referred to for SOE Web Dispatcher standards.
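The following is a minimal, illustrative sketch only, not the definitive BP profile entries: the HTTPS port value and Web Dispatcher hostname are placeholders, and the exact sub-options of icm/HTTP/redirect_<xx> should be checked against the SAP help before use.

# ABAP ICM instance profile: open an HTTPS port ($$ expands to the instance number)
icm/server_port_1 = PROT=HTTPS, PORT=443$$

# Web Dispatcher instance profile: redirect plain HTTP requests to the HTTPS service
icm/HTTP/redirect_0 = PREFIX=/, PROT=https, HOST=sap<sid>wd.bpweb.bp.com, PORT=44300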
Log on to the target host of the backend system as root, then execute the following :

# work in the Host Agent executable directory and create the security directory
cd /usr/sap/hostctrl/exe
mkdir sec
cd sec
export SECUDIR=/usr/sap/hostctrl/exe/sec
chown sapadm:sapsys /usr/sap/hostctrl/exe/sec
# create a self-signed server PSE for the Host Agent
/usr/sap/hostctrl/exe/sapgenpse get_pse -p SAPSSLS.pse -noreq -x <PASSWORD> "CN=<HOSTNAME>"
# create the credentials file so that sapadm can open the PSE
/usr/sap/hostctrl/exe/sapgenpse seclogin -p SAPSSLS.pse -x <PASSWORD> -O sapadm
chmod 644 /usr/sap/hostctrl/exe/sec/SAPSSLS.pse
chown sapadm:sapsys /usr/sap/hostctrl/exe/sec/SAPSSLS.pse
chown sapadm:sapsys /usr/sap/hostctrl/exe/sec/cred_v2
# restart the Host Agent so that it picks up the PSE and opens the SSL port
/usr/sap/hostctrl/exe/saphostexec -restart pf=/usr/sap/hostctrl/exe/host_profile
Now check the sapstartsrv.log file under /usr/sap/hostctrl/work to ensure the 1129 port has
been started.
4.7 SLD
Please refer to the on-prem SOE v2 handbook for details around SLD usage and connectivity.
The strategy has not changed for AWS based systems.
Please note that the same requirement exists to implement SLDREG on every instance. All
new project and production landscape systems should register to the IMS/IFS/IRS SLDs with
SLDREG (as well as RZ70 and Java SLD data supplier). Instructions on how to do this can be
found here.
More information will be added to this section when the service is released and available.
Note : Please ensure that any critical housekeeping or backup jobs are scheduled outside of the
downtime period when managed by LaMa.
5 Security
5.1 Principles
All DS/MITs requirements are to be complied with; regular auditing and testing should be
undertaken (details TBC).
The attack surface of instances will be minimised by shutting off non-essential services.
Only required ports will be opened to the VPC and to each individual instance.
Systems will be kept up to date with patches, following BP best practises or better.
Sandbox, dev/test and production will be segmented in separate VPCs/accounts
The link to the current DS controls that must be adhered to, can be found here.
https://sso.bp.com/fim/sps/saml20/saml20/logininitial?
RequestBinding=HTTPPost&PartnerId=urn:amazon:webservices&NameIdFormat=Email&Allo
wCreate=false.
All the engineers who have access to a CE will have the same permissions.
The Dev/Test and Production instances will be installed in different VPCs so access to logon
to an instance will be segregated by the VPC access.
CSL3 AWS instances are accessed via "-sysop-<ntid>" accounts in the CD2 domain. Key pairs
are not used. When logging on via PuTTY the user ID should be entered with the domain
prefix, e.g. cd2\-sysop-<ntid>. At present, root access can be gained by su'ing directly to root
after the -sysop logon; the user will be prompted to re-enter the -sysop password.
https://basproducts.atlassian.net/wiki/display/CSL/AMI+Encryption+Process
https://soe.bpglobal.com/Apps/digitalsecurityportal/BAS-ISS/Shared%20Documents/SAP/
SAP%20Cloud%20Controls%20Requirements%20AWS%20V1.1.xlsx?Web=1
Note : In general, all communication should be encrypted into and out of AWS based instances where
the technology allows it. Where there is no current technical solution available to provide encryption,
please contact DS to discuss a possible exemption.
For SAP Message Server communication specifically, it is required to set up and use SSL on the
standard port (444NN). This can be set up using a self-signed certificate and the SCS PSE should be
used (located in /usr/sap/<SID>/(A)SCSNN/sec).
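As an illustration only, the message server HTTPS port is typically declared in the (A)SCS instance profile; the sketch below follows the 444NN standard quoted above ($$ expands to the instance number), but treat it as a sketch rather than the definitive BP profile entry.

# (A)SCS instance profile: HTTPS port for the SAP Message Server (444NN standard)
ms/server_port_1 = PROT=HTTPS, PORT=444$$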
CSL v3 VPC :
<CLOUD ENVIRONMENT>-role_INSTANCE-PROFILE
All Ports | TCP | All AWS instances with the SAP SG assigned | This opens up all ports between AWS instances assigned to this Security Group
5.9 Firewalls
Each virtual server will be firewalled using security groups in AWS, with a standard policy
applied where possible. Additionally, the standard CNX firewall will be in place between BP
and AWS. Please see section “AWS Security Groups” for more information.
The firewall rules defined for SAP communication to BP can be found in the AWS SAP firewall
standards document.
The internal SUSE firewall will not be used and will be deactivated.
The internal Windows firewall will not be used and will be deactivated (DS has confirmed
approval for this, and it is deactivated in the CSL Windows standard build).
5.10 OS Hardening
OS hardening to be applied to all builds as appropriate. Specific details TBC.
5.10.1 Umask
The standard CSL SUSE images come with a default umask of 0077. SAP installations mandate a
umask of 022, therefore this umask MUST be set on root before sapinst is started. This is
part of the standard installation instructions provided by SAP.
Additionally, both Sybase and HANA users (syb<SID> and <SID>adm) have default umasks of
027. This is correct, but it should be noted that this means access to some DB directories is
restricted for users outside the owning group.
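A quick sketch of the pre-sapinst check described above (shell commands run as root):

# display the current umask for the root shell
umask
# set the SAP-required umask for this shell before starting sapinst
umask 022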
5.11 Passwords
All OS and application related passwords (e.g. SAP, SAP ASE etc.) should be stored in the TE
BIS tool under the appropriate SAP SID.
6 AWS Instances
6.1 Principles
Use the appropriate supported instance for best balance of price/performance.
All | Up to 488GB | R4
The SUSE AMIs are also based on CSL hardened builds, but additionally include any build
updates requested by TE for SAP. The AMI to use for the AWS instance build will be on the
build sheet. The latest CSL-provided SUSE AMIs to be used for SAP installations are listed in
the Standard SUSE Build for SAP AWS SOE document. These include all the OS elements
required for a SAP installation (e.g. X11 etc.).
In the future we will look at using CloudFormation templates (CFT) for SAP builds once we
can productionise the process.
All instances should have ‘Termination Protection’ active to reduce the chance of accidental
instance deletion.
The following naming convention should be used when creating EC2 instances in the CSL3
VPCs (the only difference is the Tag Key to use):
1) SAP Systems :
a. SID(s)-Components-Number. Examples :
i. For PDL/CDL central system, this would be PDL/CDL-CEN-1
ii. For the WXO App server, this would be WXO-APP-1
iii. For the WXO DB server, this would be WXO-DB-1
iv. For the WDL/HDL server, this would be HDL/WDL-CEN-1
2) Non-SAP Systems :
a. APPIDENT(s)-Components-Number. Examples :
i. For Bartender server, this would be BAR-APP-1
3) Jumpboxes :
a. Jumpbox-Project-Number. Examples :
i. For Nike Jumpbox, this would be Jumpbox-Nike-1
ii. For second Nike Jumpbox, this would be Jumpbox-Nike-2
6.4.2 Termination
Terminations of instances in the CSL3 VPC need to be requested via Cloudhub
(https://bp.service-now.com/cloudhub/). The project Tech Lead should approve any
termination requests. Deletion of any related SNAPs of volumes that are no longer
required should also be requested.
7.2 SUSE
SUSE Linux Enterprise Server is the OS used internally by SAP within their datacentres. New
features are also currently released on SUSE before RHEL, therefore we will be using SUSE as
the default OS for all SAP instances in AWS, where supported. A new standard build, similar
to the current BP RHEL build, will be released for SUSE in due course.
The OS build guide for SAP on SUSE in AWS, which combines the requirements for SAP and
SAP ASE, is available:
7.2.1 Emailing
To be able to use ‘sendmail’ for emailing from the OS there is some configuration of postfix
required:
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SOE/Cloud/SOE%20Best%20Practice%20Procedures/AWS%20-%20SUSE
%20configure%20postix%20to%20relay%20mails%20via%20BP.docx
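The full procedure is in the document above; as a rough sketch, the key postfix setting is the relay host in /etc/postfix/main.cf (the relay hostname below is an assumption, not the BP value):

# /etc/postfix/main.cf : route all outbound OS mail via the BP relay (hostname is a placeholder)
relayhost = [smtp-relay.bp.com]:25

# reload postfix so the change takes effect
systemctl restart postfix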
7.3 Windows
The default Windows version will be 2012 R2.
7.4 OS Users
Users and groups required for the SAP deployments need to be created manually.
Unix
<Please refer to the “Standard SUSE Build for SAP AWS SOE”>
Windows
The Windows users and groups will be created in the CD2 domain. User/group creation
requests should be raised with the GOT team.
8 Storage
8.1 Principles
Most of the filesystem storage for both Hana and SAP ASE instances will be on gp2 storage.
The exception is the /backup filesystem, which will be SC1 for all systems.
All instances should have "EBS Optimised" enabled by ticking the appropriate option on
launch. This is only required on those instance types which are not EBS-optimised by
default (at the moment this means R3 type instances; X1 and R4 have this activated by
default).
SAP on HANA will have the following standard filesystems. The sizes are for Dev/Sandbox
systems. The shared, data and log volumes will be increased for larger environments.
Note that for Hana, scaling up requires larger storage as well as a larger instance.
Hana Filesystems

Filesystem | App Server(s) | Hana Master | Hana Slave(s)
/ and /usr/sap | GP2 | GP2 | GP2
Swap | GP2 | GP2 |
/hana/shared | | GP2 | NFS
/backup | | ST1/SC1 | NFS

Notes:
All filesystems are XFS (except swap, which is type swap).
Shared filesystems (/sapmnt etc.) are exported from the DB but will probably move to EFS when available.
/backup is on SC1 in prod and non-prod.
The Sterling C:D instance hosts /interface/<SID> on GP2; the Sterling interface folder should be shared from the Sterling instance in the same account.
/tmp, /var, /var/tmp, /var/log, /home, /var/log/audit | 100G (20Gb each) | EBS GP2 | Single volume
/hana/shared | For single node systems, min(Host RAM, 1TB): up to a 1TB node /hana/shared is the node size, after that it is 1TB | EBS GP2 | Single volume
Note : The original size of the /var volume in the CSL build is 20Gb. In this SOE, this has been extended
to 100Gb. Currently the SAP AMI gives a 20Gb volume, which must be manually extended to 100Gb.
See instructions on how to do this here. This note will be updated when a new version of the SAP SUSE
AMI is available with 100Gb volume as standard.
SAP on SAP ASE will have the following standard filesystems. The sizes are for Dev/Sandbox
systems. The sapdata and saplog volumes will be increased for larger environments. All the
SAP ASE utility directories are being created under '/sybase/<SID>/'; this may change if
experience dictates.
ASE Filesystems

Filesystem | DB Server | App Server(s)
/ and /usr/sap | GP2 | GP2
Swap | GP2 | GP2
/sybase/<SID> | GP2 |
/sybase/<SID>/sapdata_n | GP2 |
/sybase/<SID>/saplog_n | GP2 |
/sapmnt | GP2 | NFS
/backup | SC1 |
/interface/<SID> | NFS | NFS
/tmp, /var, /var/tmp, /var/log, /home, /var/log/audit | 100G (20 Gb each) | EBS GP2, single volume

Notes:
All filesystems are XFS (except swap, which is type swap).
Shared filesystems (/sapmnt, /interface etc.) are exported from the DB but will move to EFS when available.
Multiple sapdatas can be created if required.
/backup is on SC1 in prod and non-prod.
The Sterling C:D instance hosts /interface/<SID> on GP2; the Sterling interface folder should be shared from the Sterling instance in the same account.
Other filesystems, for example those for LiveCache, will follow a similar approach and will be
defined in the buildsheets.
If the database is close to or bigger than 3TB, further SAPDATAx disks can be added at 3TB
each.
AWS S3 storage will be used for storing DB transaction log backups synced from the AWS
instance primary backup filesystem.
Filesystem mount options (excluding root, NFS and swap, which keep the default settings):
nobarrier,noatime,nodiratime,nofail,logbsize=256K
Notes: We will not be using RAW devices. The option 'nofail' was added after approval at the
December 2016 BTD.
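As an illustration, an /etc/fstab entry using these options would look like the following (the device name and mount point are examples only):

# /etc/fstab entry for a data volume (device and mount point are placeholders)
/dev/xvdf   /hana/data   xfs   nobarrier,noatime,nodiratime,nofail,logbsize=256K   0 0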
The SOE originally mandated the use of partition tables on each volume/device mounted to
an AWS instance. Since AWS released online volume resizing functionality, it was decided to
change this recommendation to mandate NO partition tables on application
volumes/devices mounted within the OS (this does not apply to root/var volumes that are
delivered with the AMI). This is because the existence of a partition table does not allow for
a fully online filesystem resizing procedure (filesystems must first be unmounted at the OS
level before resizing occurs). Hence, no partition tables should be created on devices within
the OS, and filesystems should be created directly on the mounted volumes. This also means
that YaST can no longer be used to create filesystems and a manual procedure should be
followed instead. This procedure is documented here.
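A short sketch of the manual procedure under these rules, assuming /dev/xvdg is the newly attached volume and /usr/sap/CDL is the target mount point (both are examples, the documented procedure remains the reference):

# create the XFS filesystem directly on the device (no partition table)
mkfs.xfs /dev/xvdg
mount /dev/xvdg /usr/sap/CDL
# after resizing the EBS volume online in AWS, grow the filesystem in place
xfs_growfs /usr/sap/CDL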
8.3 Volumes
All volumes will be mounted using the filesystem type “XFS”. When creating volumes, the
‘Delete on Termination’ flag should NOT be set as a safety measure in case someone
accidentally terminates an instance.
In early cloud HANA deployments we defined the /hana/data and /hana/log filesystems as
single logical volumes within a volume group. For various performance reasons, this has
been discontinued and we will use the volumes as provisioned by AWS, without logical
volumes or volume groups. Therefore all filesystems will be mounted as a single device,
e.g. /usr/sap, /hana/log etc.
Some older PoC systems may still have the logical volume and volume group setups.
Note : In some rare circumstances, when attaching an encrypted volume to an EC2 instance, the
volume can appear to already be partitioned when viewed from the OS. This is caused by the random
encryption characters encoded on the volumes. If this occurs, you can either detach and delete the
volume and create a new one, or run the following command at the OS to initialise and remove the
random partition information :
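The exact command from the original procedure is not reproduced here; one way to clear the stray signatures, assuming /dev/xvdh is the affected (still empty) volume, is:

# remove any stray partition/filesystem signatures from the empty device
wipefs -a /dev/xvdh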
Where:
SID = System Identifier of the main system using this volume (e.g. CDL). For non-SAP, this
should be a unique identifier for the application instance.
DEVICE TYPE = whether this volume will be mounted as a single device or included in
a volume group (e.g. SD or VG).
MOUNT POINT = the mount point of the volume on the server (e.g. /usr/sap/CDL).
NUMBER = the number of the volume if this is part of a volume group (e.g. 02).
Examples :
1) The single volume for /usr/sap/CDL on the shared CDL/PDL AWS instance would be
called :
a. SAP-CDL-SD-/usr/sap/CDL-01
2) The second volume as part of the WDL Hana data volume group would be called :
a. SAP-CDL-VG-/hana/data/WDL-02
8.4 S3
S3 storage may be used in the CSL v3 VPCs; however, any buckets created must follow these
rules :
• The bucket name must be exclusively lower case and begin with "<cloud env>-osb-"
• The last 4 digits can be used as required
• The S3 bucket upload configuration MUST have server-side encryption set
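For reference, an upload that satisfies the server-side encryption rule might look like the following AWS CLI call (the file path and key prefix are examples; the bucket name follows the convention above):

aws s3 cp /backup/log/PDL/PDL_log_backup.gz s3://we1-p3-0001-osb-bkpr/PDL/ --sse AES256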
DS and the CSL specify that all volumes should be encrypted, therefore when building
instances via the AWS console, all volumes should have the “encrypted” flag set (this will be
specified in the build sheet provided by S&A). The newer CSL builds have this set at the AMI
level.
Currently Hana backups are not encrypted, whereas SAP ASE backups can be. Therefore
the /backup filesystem should have the 'encryption' flag set during volume creation for
both DB types, for consistency (as per the general recommendation above).
Note 1 : For HANA installations, SAP recommend setting the “no_root_squash” and “no_subtree_check” options
for the NFS export settings (OSS note 2099253). Additionally, it has been found that these parameters need to be
set during any SAP installation or upgrade/patch activity using SWPM or SUM. Therefore these two options should
be set on NFS exports during installation and patching time ONLY. These two should then be removed after the
installation or patching is complete, as there are security implications in leaving them active.
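An illustrative /etc/exports entry for the installation/patching window (the client hostname and the options other than the two discussed above are assumptions):

# /etc/exports on the exporting host during installation/patching only
/hana/shared   we1t101430006(rw,sync,no_root_squash,no_subtree_check)
# re-read the exports file after editing
exportfs -ra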
It is possible to export filesystems from CSL v3 VPC based instances and mount them on on-
prem servers, but this should only be performed on a temporary basis during migration
activities.
Note 2 : For NFS exports that need to be mounted on a BP1 based system, the export domain must be set to
“bp.com”
Hostname : we1p103190002
IP ADDRESS : 10.163.210.10
The structure of /sapnas is the same as the BP SAPNAS and should be used in the same way.
9 Network
9.1 Large File Transfers
In order to move large amounts of data from BP datacentres into AWS, there are three
options :
1) Use the Aspera Connect service. This service is run by the CSL and can be used to transfer
large files to AWS.
2) Direct SFTP connection from BP servers to AWS instances.
Steps:
1. Generate the public key file id_rsa.pub for the user on the Solaris server using the
'ssh-keygen -t rsa' command.
4. Run sftp from the Solaris server using (for example) 'sftp [email protected]'.
Care should be taken with option 2) not to affect the overall bandwidth available to AWS.
This command must be executed on the source (only push from BP to AWS is currently
supported).
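A hedged example of such a push from a BP server to an AWS instance (user, host and paths are placeholders only):

scp /backup/data/PDL/PDL_export.tar.gz <user>@we1t101430006:/backup/data/PDL/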
9.2 IP Addresses
Each AWS instance will be assigned one IP address regardless of how many SAP
systems are running on it. In the Production-like systems (F, R, R-DR) there will be one IP
address per AWS instance hosting a SAP application instance, and two or three on the
DB/(A)SCS AWS instance. On the single IP address AWS instances the IP address will be
assigned to the hostname with the default domain suffix. On the instances with two/three
IP addresses, the additional (secondary) IP addresses will be assigned to the virtual hostnames.
When creating new AWS instances, the Primary IP address field should be left as ‘Auto-
assign’ so that AWS automatically assigns a free IP address from the subnet chosen. IP
addresses can be reassigned to new instances if that is required.
9.3 Hostnames
The CSL3 instances are created from CSL-provided AMIs. The hostname is automatically
generated when an instance is launched and is based on the cloud environment name; an
example hostname is we1t101430006. The FQDN, i.e. <hostname>.cd2.bp.com, is
automatically registered on the AWS DNS servers.
cd2.bp.com
9.5 DNS
The Project DNS standards will remain similar to those we are using on-premise. The
'sap<sid>.bpweb.bp.com', 'sap<sid>.cd2.bp.com' and 'sap<sid>.bp.com' DNS aliases
should all be registered as CNAMEs (no reverse lookup) against the AWS instance hostname
A-record, i.e. <hostname>.cd2.bp.com.
The SAP Web Dispatcher DNS aliases have changed, in Project systems, from the earlier on-
premise setup now that the Web Dispatchers have their own SID. The new DNS alias
structure for Project and Production systems is:
sap<target sid>wd.bpweb.bp.com
sap<target sid>wd.cd2.bp.com
sap<target sid>wd.bp.com
End Note:
The Production DNS standards will also remain similar to those being used on-premise.
However, not all aliases will be against virtual IPs. See the following table for guidance.
When requesting the CNAMEs above, please request them in the following three domains :
1) BPWEB.BP.COM. This domain will continue to be used for "user" to "server" connections.
2) CD2.BP.COM. This is the default domain for AWS and will be used for new "server" to "server"
connections.
3) BP.COM. This is the legacy domain, but will be retained to ease replatforming projects and
ensure old RFCs etc. will still resolve to the new target instances.
The BPWEB.BP.COM and BP.COM CNAME aliases can be requested from the on-prem Global
DNS team. The CD2.BP.COM CNAME aliases can be requested via the CSL
During replatforming projects, temporary DNS aliases can be used while both source and
target systems exist. The temporary DNS alias format adds an 'x' to the SID segment of
the alias names, e.g.:
sap<sid>x.bpweb.bp.com
sap<sid>x.bp.com
Once the source systems have been switched off, the target instances can adopt the standard
DNS aliases and the 'x' aliases can be deleted.
10 Databases
10.1 Which Database to Use
Usage | Database | SOE Guide
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP%20ASE%20on%20AWS%20for%20SAP%20applications%20-%20SOE.docx
SRS installation/setup:
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP_ASE_SRS_Build%20and%20Config.xls
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best%20Practice/SAP%20ASE/SAP_ASE_SRS_Operations_Guide.xls
LiveCache | MaxDB |
10.3 Oracle
10.3.1 Use of Oracle for SAP is currently not allowed
Oracle in AWS is currently not allowed; see SAP Note 1656099 (as of Jan 2016).
Each Amazon instance must be licensed based on the rules laid down for AWS in the Oracle
cloud licensing documents. Put simply, these are:
1. Look up the number of virtual cores for the instance in the Amazon Virtual Cores
Table
2. Apply the Oracle core factor (normally 0.5 for Intel CPUs)
3. The number of licenses required for Intel CPUs is therefore the number of virtual
cores divided by 2.
11.2 HA Architecture
High availability is achieved using a combination of Amazon Cloudwatch and auto restart in
Hana/ASE.
In the event of a hardware failure within AWS, CloudWatch will automatically restart the
Hana virtual server instance on a new physical server, and Hana will auto restart on boot of
the new server instance. Application instances will be located in two AWS availability zones,
to reduce the risk of availability zone outages causing a SAP system outage. Full size
“dormant” (shut down) application server instances will also be located in two availability
zones, ready to be used if an availability zone is lost.
The AWS alarm configuration to enable CloudWatch-based HA recovery is very simple. An
alarm must be configured for each SAP AWS instance using the following alarm configuration.
This alarm should be set on each instance requiring HA with the following settings (an
illustrative sketch follows) :
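The specific BP settings are not reproduced here; as a sketch, an equivalent alarm created via the AWS CLI would look like the following. The instance ID, region and alarm name are examples, and the period/threshold values are illustrative only.

# recover the instance onto new hardware when the EC2 system status check fails
aws cloudwatch put-metric-alarm \
  --alarm-name "HA-recover-we1p103190002" \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-004729fd102208e2e \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:eu-west-1:ec2:recover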
As per the on-prem SOE v2, it is still required to add a SAP gateway process to the ASCS
instance. This central gateway should be used to register RFC server programs instead of
individual app server gateways. Please see the on-prem SOE v2 Handbook for more details.
11.3 SAP/Database HA
In Production and Production Fix, the SAP components (ASCS, SAS, WD etc.) and the database
should be set to autostart after an instance restart. For SAP components and Hana, the
'Autostart = 1' parameter (with a capital 'A') should be used in the instance profiles. For ASE,
the database/SRS start script needs to be added to the OS boot init files (see the ASE SOE
for more details).
11.4 DR Architecture
Disaster recovery is achieved by replicating a copy of the production database to another
availability zone. This will be achieved in HANA via inherent HANA replication and in ASE via
Sybase Replication Server. The standby instance will be sized as a “pilot light” instance, with
just enough capacity to enable the replication to function correctly. In the event of a DR, the
standby database instance would be manually resized in AWS to match production.
As per the HA architecture, application servers are deployed across both availability zones,
reducing the risk of application issues when moving to the DR instance. The Fix instance is no
longer stacked with DR, but is run on a separate instance. This simplifies the design.
Client traffic will need to be redirected to the IP of the DR instance. Initially a manual DNS
change will be used, but there may be other options (e.g. ELB) that can be investigated to
avoid this requirement (TBC). It is envisaged that this architecture could also be used in
certain HA scenarios, certainly more frequently than it is used in on-premise SAP systems
today.
11.4.1 Rsync
RSYNC will be used in the same way that it is currently used in the on-prem systems to
replicate certain filesystems (e.g /sapmnt/SID) between the primary and secondary servers.
Please refer to the on-prem SOE handbook for standards around the use of RSYNC.
The rsync scripts for SUSE in AWS have to change slightly. The SUSE version of the /sapmnt
rsync script is embedded here:
For the AWS builds, passwordless SSH must be enabled between the source and target
hosts, to allow RSYNC to function. The procedure for how to enable this can be found here.
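The controlled scripts embedded in this SOE remain the reference; purely as an illustration of the underlying command (SID, user and target host are placeholders), an /sapmnt sync looks like:

# one-way sync of /sapmnt/<SID> from the primary to the secondary host over ssh
rsync -az --delete -e ssh /sapmnt/PDL/ <sid>adm@<secondary-host>:/sapmnt/PDL/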
[Architecture diagram: each availability zone hosts a Web Dispatcher, an active application server (App 1 / App 2) and an on-demand application server (App 3 / App 4) that is started on failure of the other AZ or under exceptional load; the active DB replicates via DB replication to a standby "pilot light" DB.]
Note: the VIPs are only needed for components that need to fail over to different hosts. See
section 9.6 above. The SAP components with VIPs should be installed against the virtual
hostname. The SAP components without VIPs should be installed against the local instance
(physical) hostnames.
11.7 Reference
https://d0.awsstatic.com/enterprise-marketing/SAP/sap-hana-on-aws-high-availability-disaster-
recovery-guide.pdf
https://d0.awsstatic.com/enterprise-marketing/SAP/
SAP_HANA_on_AWS_Implementation_and_Operations_Guide.pdf
12 Backups
12.1 Principles
This section contains the high-level architecture, retention periods, and standards around
compression, OS backups, application backups and DB backups for SAP.
The cleanup script needs to be scheduled in cron as the <sid>adm (for Hana) or syb<sid>
(for ASE) user, to run hourly. For Development and Production an S3 bucket is required
named "<ce>-osb-bkpr", all in lower case, e.g. "we1-p3-0001-osb-bkpr". For DR the bucket
should be named "<ce>-osb-bkdr", e.g. "we1-p3-0001-osb-bkdr". The S3 bucket should
have a lifecycle rule created to permanently delete all files after a period of 35 days. This
policy should be set at the bucket level and conform to the following :
Note that the "Previous Versions" option does not apply to our buckets, as we do not use
version control; however, the policy is still set for completeness.
When calling the script, if the system is not a Development or Production instance, a dummy
value (e.g. 'dummy') can be entered for the S3 bucket name as it won't be used by the script.
Both scripts should be copied to the ‘/backup/scripts’ folder on the DB server. Full details of
how to schedule the script in cron are contained in the cleanup script comments. The scripts
are under change control and should not be changed during a deployment without
consultation.
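As an illustration of the scheduling only (the script name and argument below are hypothetical; the real names and full cron syntax are documented in the delivered script comments), an hourly crontab entry for the <sid>adm or syb<sid> user would look like:

# illustrative hourly entry; real script name/arguments are in the delivered scripts
0 * * * * /backup/scripts/backup_cleanup.sh we1-p3-0001-osb-bkpr >/dev/null 2>&1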
Note: For Hana backups, those older than 7 days are removed from the Hana catalog as well
as from local disk. Therefore, to recover from a backup older than 7 days it first has to be
recovered from the OS SNAP backups. The Hana recovery can then be executed by
referencing the recovered backup file rather than the Hana catalog.
No 3rd-party backup tools will be used. Instead, the native DB backup tools will take online
consistent backups from the primary DB to compressed and encrypted (where supported by
the DB) flat files on a dedicated data backup volume:
/backup/data/<DB SID>
/backup/log/<DB SID>
Both data and log backup volumes will be included in the standard CSL scheduled volume
SNAP backup schedule as per the retention period and frequency above. DB backups should
therefore be scheduled to avoid overlapping with the standard backup SNAP timings as
much as possible.
Additionally, in the Production and Development systems only, an hourly script will execute
to copy transaction log backups from /backup/log to an S3 bucket. These S3 transaction log
backups will be kept for 35 days.
1) HANA : HANA XS Engine Backup Scheduler (where no ABAP Stack) / DB13 (ABAP stack)
2) ASE : ASE DBACOCKPIT Backup Scheduler
3) Oracle : DB13 Backup Scheduler
For Hana databases that have a directly connected ABAP stack the backups should be
scheduled via DB13. For Hana databases that do not have an ABAP stack the backups can
be scheduled in the XS engine. The details of how to schedule the Hana backups in the XS
engine can be found in the HANA_XS_Engine_Scheduled_Backups document here.
Hana
ASE
(Note: ASE cumulative backups are not being used at this time as they aren’t available via
DBACOCKPIT)
12.4 Monitoring
The automated SNAP backup process, scheduled by the CSL, is monitored by the CSL support
team.
The backups of the DB data and DB logs will be monitored via Solution Manager.
13 Maintenance
13.1 Housekeeping and Archiving
The defined standards for housekeeping and archiving should be followed.
14 Support
14.1 Responsibilities
Component Responsible Notes
On Prem
AWS
Instance ID : i-004729fd102208e2e
Hostname : we1p103190002
IP Address: 10.163.210.10
SAP Router Alias: saprouteroc.bpweb.bp.com
The AWS saprouter can be started by the root user, by running the following command :
/H/saprouteroc.bpweb.bp.com/H/saprouterop.bpweb.bp.com/H/
/H/saprouterop.bpweb.bp.com/H/saprouteroc.bpweb.bp.com/H/
This will pass traffic through the encrypted saprouter to saprouter connection. Details on
the installation and the setup of this saprouter to saprouter connection can be found here :
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SOE/Cloud/SOE%20Best%20Practice%20Procedures/
AWS_SAProuter_Installation_Guide.xlsx?d=w66a4e1afd2c84dbb9e1b23c46398fb41
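The start command itself is documented in the installation guide above; a typical saprouter invocation (the working directory, file locations and backgrounding are assumptions, not the BP-specific command) looks like:

# start saprouter with the route permission table, logging to a file
/usr/sap/saprouter/saprouter -r -R /usr/sap/saprouter/saprouttab -G /usr/sap/saprouter/dev_rout &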
Logs:
The following logs will be kept for 2 years and regularly monitored by DS, SOC, CE Owners.
•VPC Flow Logs from each VPC within each AWS account
•Host level logs from EC2 instances launched within each account
https://basproducts.atlassian.net/wiki/display/CSL/AWS+Logging+Design
15.1 Alarms
Alarms should be created as follows to send alert emails to the support teams
15.2 Actions
The HA alarm actions must be set up for ALL SAP instances, in ALL accounts (Test, Pre-Prod
and Production).
Alerts for the Project Stack should be sent to the following email account:
15.3 Dashboards
A dashboard should be created for each logical grouping of instances. Actual details TBC.
15.4 Tagging
The resources created in the CSL3 VPC are automatically tagged with the CSL-defined tags
and values. However, to improve the searchability of resources by the TE engineers, we will
create an additional custom tag.
16 SAP
16.1 SAP Emailing
SAP emailing will be routed through the BP Microsoft Exchange servers in the same way as
on-premise SAP email is routed.
Any alerts generated by Solution Manager will utilise the current BP Microsoft Exchange
server setup. There will be no change for AWS (no use of SES for example).
Details on the setup of email for SAP (inbound and outbound) including details of how to
encrypt both inbound and outbound emails can be found under this SharePoint library
https://bp365.sharepoint.com/sites/ES2/team/TE/Reference/Standards%20and%20Best
%20Practice/SAP%20Basis%20Configuration
Encryption must be set up for outbound and inbound emails from SAP in AWS (see ‘Emailing’
section under the OS section in this document).
Note: Postfix and the SAP ICM SMTP service cannot both run on port 25 on the same AWS
instance. For incoming email Postfix needs to run on port 25 (a requirement of Exchange), so
the SAP ICM SMTP port will need to be changed (see the above emailing setup documents
for further information).
For Oracle deployments, a DB Gateway is only required for non-central system installations,
i.e. where the application server instances are installed on different AWS instances to that of
the database.
There are some parameters that have to be added to the instance profiles and others that it
is preferable to keep in the instance profiles. The table below provides a guide that
describes the parameter types and the preferred profile location.
The general rule is that any parameter which is set for the system and is unlikely to differ
across instances should be in the DEFAULT profile; for example, minimum password
length applies across all instances, so it should be set in the DEFAULT profile. If a parameter
can be set for a specific instance, and sometimes is, then it should go in the instance profile,
e.g. work process related parameters. There are also a few parameters that SAP have made
mandatory for the instance profiles.
This is not a complete list of all SAP parameters but shows those commonly added to SAP
Profiles. These rules apply to new SOE systems. However they should also be followed on
older systems where possible. They apply to both the Project and Production stack
instances.
New parameter sections can be requested by contacting Stephen Head or John Davie.
RZ10 should be used for changing the DEFAULT and INSTANCE profiles so that the changes
are logged in SAP. The OS editor should only be used for standalone instances (ERS, SCS,
WD, GW).
The SAP profile checking program, sappfpar, should always be run to test profiles after
changing parameters. If buffer parameters have been increased, it is likely that the IPC shared
memory parameters (ipc/shm_psize_<nn>) will also need to be increased (as identified by
sappfpar).
As per the Apr16 BTDA all new and replatformed NW7.4 (or higher) ABAP systems running
on Linux should use zero administration memory management (ZAMM).
2) Scale up for multiple systems on one server (e.g. two systems would use 35% each)
4) Remove all other ZAMM calculated memory parameters from both the instance and
Default profiles, based on OSS note 2085980.
2) OSS note 1728922 outlines a problem with memory management in the ABAP stack which
potentially affects all systems and has previously caused a P1 in the PRC system. The SOE
recommendation is to always set the following parameter when the kernel meets the
minimum patch levels outlined in the note :
a. rdisp/softcancel_check_clean = on
The other parameter mentioned in the note (rdisp/softcancel_sequence) should NOT be set.
Details on the rationale behind this setting can be found here.
3) SAPGUI scripting. It was decided in the June 16 BTDA that the SAPGUI scripting parameters should be
set to ‘true’ in all systems including Production. The user access to scripting will then be controlled by
SAP security roles. This was approved by the SAP security team and will appear in the SAP security
control document:
sapgui/user_scripting = TRUE
sapgui/user_scripting_per_user = TRUE