We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 28
IT601 – System and Network Administration STATES OF COMPUTER MACHINES
Introduction A Computer Machine Consists of ….
➢Computers and networks are now a days a vital part of the daily life. • Hardware Components ➢System administration matters because computers and networks matter • Operating System ▪ Applications are everywhere e.g SOHO, SME, NGOs, Government and • Accessories Educational Institutes • Has Different States and Several Processes Associated with it ▪ User of Computers have increased, esp the networked computers Evard's Computer Machine Life Cycle ▪ Technologies are diversified and heterogeneous A Computer Machine Goes through different states in their life ▪ Architectural approaches are also diversified Evard has outline different states and process that are performance on ➢The need to have specialized people to operate and maintain computers and Five States networks • New • Clean • Configured • Unknown • Off ▪ Build the infrastructure Several Processes ▪ Operate and maintain • Build • Initialize • Update • Entropy • Debug • Retire ▪ Provide Support to users New State ➢System administration matters because computers and networks matter A new machine Unpacking ➢This course focuses on the principles of systems and network administration Physical Connections IT SYSTEM A. What is a System? Power up A system is a collection of elements or components that are organized for a common Preinstalled OS or No OS Purpose. Clean State A. What is a IT System? A machine that has the operating system installed but isn't set up to function in the A collection of computing and/or communications components and other resources System. that support one or more functional objectives of an organization Build Process Computing, Network, DHCP, DNS, http, Computing, PC, Phone, Configuration. Select OS, Install OS, Drivers and Software’s IT System Components Rebuild Process Computing Machines, Software, Network, Services, Users, Processes Recovered Data, Reformate Disks, Reinstall OS/Software Computing Machines Configured State Computing Machines are important component of IT Systems and there are different • A machine that is properly configured to meet the needs of the System. forms of computing machines in an organization. • User Roles and Permissions Servers/Workstation, Desktops, Laptops/ Mobile Devices • Updates Software Update Process Operating Systems, Applications, Utilities Select Updates, Schedule Updates, Monitor Network Unknown State Wireless, Router, Switch, Firewall, DPI, Load balancers, VPNs, Cable and • A machine that has been misconfigured, or that has become out of date, or that has Connector been borrowed by an intern and returned stained Services Debug Process Application Services, Network Services, Web services Troubleshooting, Remove the bugs, Fix Drivers and Software’s Users Entropy Process Diverse Users, Different Roles, Different Levels Changes by user/admin/updates, Compromised State Processes Rebuild Process Support System, Operations, Configurations, Change Management Recover User Data, Format Disks and Reinstall OS Off State • The typical characteristics of a small business are having no more than 1,500 Powered Off, No User Data, OS installed or Removed, Disconnected from employees Infrastructure • Mid-market enterprise: Retire Process • These organizations are larger than small businesses but smaller than large Recover User Data, Flush the Hard Disks, Isolate from Infrastructure enterprises. • Computer Machine Passes through 5 states throughout their life cycle • They generally employ between 1,500 and 2,000 people • Different Processes are used for transition of the state of Computer Machines • Large enterprise: System Administration •These organizations are relatively few, What is SNA and Who is System Administrator? Managing an IT System means? 1. Difficult to define because system administrators do so many things. Applying Tasks and actions • looks after computers, networks, and the people who use them. ● Installation and Configuration • look after hardware, operating systems, software, configurations, applications, or ● Monitoring, Logging and Reporting security ● Changing Changes System State 2 - A system administrator influences how effectively other people can or do use ● Supporting their computers and networks. ● Provisioning Help Employees, Support Others, Guide Others ● Troubleshooting 3 - A system administrator sometimes needs to be a WEEK#2 Business-process, consultant, corporate visionary, janitor, software engineer, System Administration Maturity electrical engineer, psychiatrist, mind-reader, economist Process and its Maturity 4 - Known with different names. ● The term process describes the means by which the system components are Network administrators, system architects, system engineers, operator’s system integrated or interact with each other to produce a desired end result. programmers ● IT System have several components which Objective interacts or integrate with General Definition each other for the common objectives. System administration is the field of work which involves… Maturity ● Managing one or more systems ● Process maturity is a measure of how well defined and controlled a system 's ● System could be software, hardware, servers or workstations processes are. ● The objective are efficiency and effectiveness ● A high level of process maturity shows that processes are documented well, users Importance of System Administration understand and follow procedures, and there is continuous process improvement. • System administration matters because computers and networks matter. Maturity Models IT Business Operations, Collaborations, Solve Many Problems Several maturity models exists for IT and Business, Examples are…. • Business Depends on • CMMI Internet, Intranet, World Wide Web • ITIL Maturity Model • Management now has a more realistic view of computers, more people becoming • Agile Maturity Model direct users of Computers • DevOps Maturity Model • Unreliable machine room power system that caused occasional but bearable • MDM Maturity Model problems now prevents sales from being recorded. We Focus on System Administration Maturity Model (SAMM) which is quite • Computers matter more than ever. If computers are to work and work well, system similar to ITIL Maturity Model. administration matters SAMM Maturity Levels@ Organization Size and System Administration Level 1 Initial (Adhoc Proceses) • Small and Home Offices: Key Process Area • Most businesses in the U.S. fall under this category. • The user is given the authority to decide what to do and when to do. • They must also be able to assign work to other people creating interactions. Level 2 Repeatable (Disciplined process) Hardware install – New or upgraded hardware is installed whenever & where ever Key Process Area makes the most sense at the moment. • Requirements Management Problem report – Problems are sometimes reported by users by mail or phone to a • Project Planning random administrator. • Project Tracking Security – No specific security standards or policies exist. • Subcontract/Vendor Management Disk capacity – Disk space is in short supply. No information on usage rates is • Quality Assurance available. Project managers of supported organizations often fight among • Configuration Management themselves regarding disk space. Level 3 Defined (Standard, consistent process) Backups – Backups are usually done according to a weekly schedule. Key Process Area Level 2 – Repeatable – Practices • Process Focus New user – Procedure to request and create an account is well documented. Cycle • Process Definition time for requests is monitored. • Training Software install – Installation guidelines are understood. Time spent installing and • Integrated Management configuring software is tracked. • System & Network Engineering Hardware install – Installation standards are understood. Time spent installing and • Intergroup Coordination configuring hardware is tracked. Level 4 Managed (Predictable process) Problem report – Process to report problems is well understood by users and cycle Key Process Area time for problem resolution is monitored. • Quantitative Process Management Security – Various security standards are clearly documented. Security violations • Quality Management are monitored. Level 5 Optimizing (Continuously Improving) Disk capacity – Acceptable disk capacity levels are established. Capacity is Key Process Area periodically monitored. Defect Prevention Backups – Failure conditions for backups are understood. Failure rates and effort Technology Change Management to resolve problems are tracked. Process Change Management Level 3 – Defined – Practices SAMM –Common Practices New user – Head count expansion information is provided to signal new account Common Practices at different SAMM Levels requests are expected. Revision of procedure. Five Levels Software install – Installations are planned with the supported organizations • Initial Reasons for install/upgrade are recorded. • Repeatable Hardware install – Installations are coordinated with supported organizations and • Defined vendors. Reasons for install/upgrade are recorded. • Managed Problem report – Problems are mapped to root causes. Group reviews solutions to • Optimizing resolutions suggested to address root causes. Level 1 – Initial – Practices Security – Group reviews security incidents for root vulnerabilities. Resolutions are New user – Verbal requests are addressed as time permits or escalated with discussed, tested and released. management. Disk capacity – Capacity planning is addressed in project plans written by Software install – New or upgraded software is installed whenever & where ever supported organizations and reviewed by the network & systems group. makes the most sense at the moment. Backups – Training is provided for network and systems group to enhance backup programs and procedures. Level 4 – Managed – Practices Outstanding 10 Excellence in practice well recognized Consistent long term New user – Cycle time numbers are used to adjust staffing to meet demand and use Consistent world class results requirements. Software install – Productivity measures and goals for installation created. Building Reliable Systems Hardware install – Productivity/problem data is compare to goals for installs, tests Introduction to Reliability and demos. All systems will eventually fail. Problem report – Cycle time and resolution quality information is used on a regular • Many Reasons basis to access effectiveness of problem reporting and resolution system. • An important term that quantifies dependability of a system during Security – Effectiveness of group reviews is studied. its life-cycle is the reliability. Disk capacity – Metric for disk availability is created and used. Generally, Reliability is defined as the probability of success. Backups – Backup system is certified with high reliability rating. Objectives of Reliability Level 5 – Optimizing – Practices The objective of studying reliability New user – Accounts are electronically requested and verified. To quantify how “reliable” a product or service is, to understand the causes for poor Software install – Problems with software installs are documented and avoidable reliability, with new procedures. To deploy actions to improve the reliability of Organizations. Hardware install – New hardware technologies are evaluated & integrated. Unreliable Systems contributes Problem report – New problem reporting system installed to better meet user Increase cost of ownership of the systems. requirements for ease of use. Defining Reliability Security – Internal contest to establish better security practices is established. Reliability is defined as the probability that a system will perform its Disk capacity – Project tracking information combined with metrics of utilization intended function(s) during a specified period of time under defined are used to predict needs. conditions. Backups – Backup process is revisited with input from supported organizations re: • Specified Time Period production schedules. • Defined Conditions SAMM Process Evaluation Ratings Examples of Reliability Reliability usually changes as a function of time and is denoted as R (t). Rating Score Characterization Examples of reliability statements are: The basic coverage warranty lasts for 36 months or 36,000 miles. Poor 0 No ability No interest Ineffective results We warrant the bulb will be free from defects and will operate for 3 years based on 3 hours/day. Weak 2 Partial ability Fragmented usage Inconsistent results Calculating Reliability The Reliability is calculated by Fair 4 Implementation Plan defined Usage in major areas r𝑇 Reliability at time T Consistent positive results (𝑇) Working Systems at time T (𝑇0) Total Systems at time T0 Marginal 6 Implementation across organization Usage in most areas Failure Probability Positive measurable results (( Failure Probability at time T (𝑇) Failed Systems at time T Qualified 8 Practice is integral part of process Consistent use across (𝑇0) Total Systems at time T0 = 1 − (𝑡) organization. Positive long term results Failure Rate • The frequency of component failure per unit time. It is usually denoted by Mean time to recovery (MTTR) the λ. 𝜆 𝑡 Failure rate at time t dt interval • The average time duration to fix a failed component and return to operational state. • It includes the time spent during the alert and diagnostic process before repair activities are initiated. Weibull distribution Mean time to detection (MTTD) 𝛼 is called the scale factor which represents the characteristic life of the product The average time elapsed between the occurrence of a component failure and its • which is the time at which 63% of the products have failed. detection. • 𝛽 is called the shape factor and represents the different shapes for the Availability Weibull distribution. It Determines the instantaneous performance of a component at any given time 𝛽 < 1 represents early failures based on time duration between its failure and recovery 𝛽 = 1 represents constant failure rate 𝛽 > 1 represents wear-out failures Reliability in Practical IT Systems System/Service Metrics For series connected components, the effective failure rate is determined as the sum • MTTF of failure rates of each component. Given n series-connected components: • Mean time to failure • MTTD • Mean time to detection For parallel connected components, MTTF is determined as the reciprocal sum of • MTTR failure rates of each system component. Given n parallel-connected components: • Mean time to recovery For hybrid systems, the connections may be reduced to series or parallel • MTBF configurations first. • Mean time between failure Failure Rate Availability and Reliability specifications • The frequency of component failure per unit time denoted by λ. • Calculate reliability and availability of each component individually. For • Failure rate is considered as forecasted failure intensity given that the series connected components, compute the product of all component values. component is fully operational in its initial condition. for N series-connected components. • For parallel connected components, for N parallel-connected components. Repair rate • The frequency of successful repair operations performed on a failed component per unit time and denoted μ . • Repair rate is defined mathematically as follows: • For hybrid connected components, reduce the calculations to series or Mean time to failure (MTTF) parallel configurations first. The average time duration before a non-repairable system component fails. Caveats • It’s important to note a few caveats regarding these incident metrics and the associated reliability and availability calculations. Mean time between failure (MTBF) These metrics may be perceived in relative terms. Failure may be The average time duration between inherent failures of a repairable system defined differently for the same components in different component. applications, use cases, and organizations. 𝑀𝑇𝐵𝐷 = 𝑀𝑇𝑇𝐹 + 𝑀𝑇𝑇𝑅 The value of metrics such as MTTF, MTTR, MTBF, and MTTD are 4.3 – Backups averages observed in experimentation under controlled or specific Backup strategy and policies environments. These measurements may not hold consistently in Scheduling: when and how often? real- world applications. Capacity planning • Organizations should therefore map system reliability and availability Location: on-site vs. off-site. calculations to business value and end-user experience. Monitoring backups • Decisions may require strategic trade-offs with cost, performance and, security, Checking logs and decision makers will need to ask questions beyond the system Verifying media dependability metrics and specifications followed by IT departments. Performing restores when requested Common Tasks Time IT System Components Automated? 1. Services 4.4 - Software Installation 2. Computing Machines Automated consistent OS installs 3. Network Desktop vs. server OS image needs. 4. Processes Installation of software Routine Tasks 5. Software’s 1) User Managment Purchase, find, or build custom software. 6. Users 2) Hardware Managment. Managing multiple versions of a software pkg. Common Tasks 3) Backup Managment. Managing software installations Planning and Design 4) Software Managment. Distributing software to multiple hosts. 1) Architecuture 5) Troubleshooting. Patching and updating software 2) Physical Planning 6) System monitoring. When? 3) IP Addressing 7) Security Managment. Updates vs Upgrades 4) VLANS 8) Users Assistance. 4.5 – Troubleshooting 5) Internet vs Intranet 9) Communication. Problem identification User Management By user notification Adding User Accounts By log files or monitoring programs Namespace management Tracking and visibility Removing user accounts Ensure users know you’re working on problem 4.2 Hardware Management Provide an ETA if possible Adding and removing hardware Finding the root cause of problems Configuration, cabling, etc. Provide temporary solution if necessary Purchase Solve the root problem to permanently eliminate Evaluate and purchase servers + other hardware 4.6 - System Monitoring Capacity planning Automatically monitor systems for How many servers? How much bandwidth, storage? Problems (disk full, error logs, security) Data Center management Performance (CPU, memory, disk, network) Power, racks, environment (cooling, fire alarm) Provides data for capacity planning Virtualization Determine need for resources When can virtual servers be used vs. Physical? Establish case to bring to management 4.7 - Helping Users Request tracking system Ensures that you don’t forget problems. Security Highly secure against More prone to Ensures users know you’re working on their problem; reduces interruptions, status malware Hacking attempts queries. cyber threats cyber threats User documentation and training Policies and procedures Support Large comm. supports Community and long- Answer commonly term customer support Schedule and communicate downtimes asked questions Great documentation Effected Systems Downtimes command line graphical user interface Operation 4.8 – Communicate Customers User Experience requires an relatively more beginner- friendly Keep customer appraised of process. experienced Linux • When you’ve started working on a request with ETA. administrator • When you make progress, need feedback. • When you’re finished. Database Support MySQL, PostgreSQL Microsoft SQL Microsoft Communicate system status. Access • Uptime, scheduled downtimes, failures. • Meet regularly with customer managers. Scripting Support Python PHP, Perl, Other ASP ASP.NET Managers Unix languages Meet regularly with your manager. Write weekly status reports. Advantages/Disadvantages of Linux Servers WEEK#3 Advantages Server Operating Systems No additional licensing fee as the operating system is free. Two Major Server Operating Systems More reliable - it rarely experiences malware, cyber threats, or other security Linux Operating Systems errors. Servers Not demanding on the client hardware and lower resource consumption. Desktops Due to its low infrastructure requirements, it shows excellent performance Mobiles rates. Windows Operating Systems System administrators have the freedom and opportunity to customize the Servers system. Desktops Seamless use of open-source software on the server. Mobiles Supports cooperative work without exposing the program’s core. Linux Window system Disadvantages Architecture Centered around the Based on the Windows Operating via a command line instead of a GUI requires some learning or Linux kernel NT architecture experience. Not all versions have long-term support. Cost Free open-source Owned by Microsoft, Updating from one major version to another can sometimes be complex. software a licensing fee per user Some third-party and professional programs may not have support or require Ubuntu version 10.10 and prior, actually had different kernels for the server admin privileges. and desktop editions. Ubuntu no longer has separate -server and -generic Advantages/Disadvantages of Windows Servers kernel flavors. Advantages These have been merged into a single -generic kernel flavor to help reduce Beginner-friendly due to its intuitive graphical user interface and out-of-the- the maintenance burden over the life of the release. box functionality. When running a 64-bit version of Ubuntu on 64-bit processors you are not Guaranteed five years of maintenance + five years of extended support. limited by memory addressing space. Supports third-party applications and is compatible with Microsoft To see all kernel configuration options you can look through /boot/config- applications. 4.14.0-server. Also, Linux Kernel in a Nutshell3 is a great resource on the Requires less admin monitoring and maintenance thanks to its robust options available. approach and automated updates. Installing Ubuntu Server@ Disadvantages Software Package Management Higher costs due to the obligatory licensing fee for the OS. Package management system is derived from the same system used by More prone to malware, cyber-threats, and other security-related errors. the Debian GNU/Linux Distribution. Its mandatory GUI makes it more resource intensive. Package Files Linux Server The package files contain all of the necessary files, meta-data, and What is Ubuntu Server? instructions to implement a particular functionality or software Ubuntu Server is a server operating system developed by Canonical that application on your Ubuntu computer. runs on all major architectures Debian package files typically have the extension '.deb', and usually exist in repositories which are collections of packages found on Ubuntu is a server platform that anyone can use for the following and much various media, such as CD-ROM discs, or online. more: Packages are normally in a precompiled binary format; thus installation is quick, and requires no compiling of software. Ubuntu Server has these minimum requirements: Dependencies Many complex packages use dependencies. Dependencies are additional packages required by the principal package in order to function properly. For example, the speech synthesis package festival depends upon the package libasound2, which is a package supplying the ALSA sound library needed for audio playback. In order Differences from Ubuntu Desktop for festival to function, it and all of its dependencies must be There are a few differences between the Ubuntu Server Edition and the installed. Ubuntu Desktop Edition. The software management tools in Ubuntu will do this automatically. It should be noted that both editions use the same apt repositories, making it Dpkg just as easy to install a server application on the Desktop Edition as it is on dpkg is a package manager for Debian-based systems. the Server Edition. It can install, remove, and build packages, but unlike other package The differences between the two editions are the lack of an X window management systems, it cannot automatically download and install packages environment in the Server Edition and the installation process. or their dependencies. Using dpkg to manage software List all packages installed on the system, from a terminal prompt Over time, updated versions of packages currently installed on your computer may type: become available from the package repositories (for example security updates). To dpkg -l upgrade your system, first update your package index as outlined above, and then Depending on the amount of packages on your system, this can generate a type: large amount of output. Pipe the output through grep to see if a specific sudo apt upgrade package is installed: Actions of the apt command, such as installation and removal of packages, dpkg -l | grep apache2 are logged in the /var/log/dpkg.log log file. To list the files installed by a package, in this case the ufw package, enter: Aptitude dpkg -L ufw Launching Aptitude with no command-line options, will give you a menu- If you are not sure which package installed a file, dpkg -S may be able to driven, text-based front-end to the Advanced Packaging Tool (APT) system. tell you. For example: dpkg -S /etc/host.conf base-files: /etc/host.conf Many of the common package management functions, such as installation, APT removal, and upgrade, can be performed in Aptitude with single-key The apt command is a powerful command-line tool, which works with commands, which are typically lowercase letters. Ubuntu's Advanced Packaging Tool (APT) performing such functions as Aptitude is best suited for use in a non-graphical terminal environment to installation of new software packages ensure proper functioning of the command keys. upgrade of existing software packages You may start the menu-driven interface of Aptitude as a normal user by updating of the package list index typing the following command at a terminal prompt: upgrading the entire Ubuntu system sudo aptitude Advantages Actions Being a simple command-line tool, apt has numerous advantages Install Packages over other package management tools available in Ubuntu for server Remove Packages administrators. Update Package Index Ease of use over simple terminal connections (SSH) Upgrade Packages The ability to be used in system administration scripts The first column of information displayed in the package list in the top pane, Which can in turn be automated by the cron scheduling utility. when actually viewing packages lists the current state of the package, and Using APT uses the following key to describe the state of the package: Installing a Package i: Installed package sudo apt install nmap c: Package not installed, but package configuration remains on Removing a Package system sudo apt remove nmap p: Purged from system Also, adding the --purge option to apt remove will remove the package configuration v: Virtual package files as well. This may or may not be the desired effect, so use with caution. B: Broken package Updating the Package Index u: Unpacked files, but package not yet configured The APT package index is essentially a database of available packages from the C: Half-configured - Configuration failed and requires fix repositories defined in the /etc/apt/sources.list file and in the /etc/apt/sources.list.d H: Half-installed - Removal failed and requires fix directory. To update the local package index with the latest changes made in the To exit Aptitude, simply press the q key and confirm you wish to repositories, type the following: exit. Many other functions are available from the Aptitude menu by sudo apt update pressing the F10 key. Upgrade Packages Command Line Aptitude sudo aptitude install nmap sudo aptitude remove nmap quickly identify all available Ethernet interfaces Automatic Updates@ sudo lshw -class network The unattended-upgrades package can be used to automatically install provides greater details around the hardware capabilities of specific adapters. updated packages, and can be configured to update all packages or just install ip a@ security updates. sudo lshw -class network@ Install the package by entering the following in a terminal: Ethernet Interface Logical Names sudo apt install unattended-upgrades Interface logical names can also be configured via a netplan configuration. Configuration of APT If you would like control which interface receives a particular logical name Configuration of the Advanced Packaging Tool (APT) system repositories use the match and set-name keys. is stored in the /etc/apt/sources.list file and the /etc/apt/sources.list.d The match key is used to find an adapter based on some criteria like MAC directory. address, driver, etc. Then the You may edit the file to enable repositories or disable them. For example, to set-name key can be used to change the device to the desired logial name.@ disable the requirement of inserting the Ubuntu CD-ROM whenever Ethernet Interface Settings package operations occur, simply comment out the appropriate line for the ethtool is a program that can display and CD-ROM, which appears at the top of the file: change Ethernet card settings such as # no more prompting for CD-ROM please auto-negotiation, # deb cdrom:[Ubuntu 18.04 _Bionic Beaver_ - Release i386 (20111013.1)]/ bionic port speed, main restricted duplex mode, Extra Repositories and Wake-on-LAN. By default, the Universe and Multiverse repositories are enabled but The following is an example of how to view supported features and if you would like to disable them edit /etc/apt/sources.list and configured settings of an Ethernet interface. @ comment the following lines: IP Addressing Networking – Interfaces and Addressing Temporary IP Address Assignment Network Management Dynamic IP Address Assignment (DHCP Client) Ubuntu ships with a number of graphical utilities to configure your network Static IP Address Assignment devices. Loopback Interface The network management includes Temporary IP Address Assignment For temporary network configurations, you can use the ip command which Configuring Interfaces is also found on most other GNU/Linux operating systems. IP Addressing The ip command allows you to configure settings which take effect DNS immediately, however they are not persistent and will be lost after a reboot. Bridging To temporarily configure an IP address, you can use the ip command in the Ethernet Interfaces following manner. Modify the IP address and subnet mask to match your Ethernet interfaces are identified by the system using predictable network network requirements. interface names. sudo ip addr add 10.102.66.200/24 dev enp0s25 These names can appear as eno1 or enp0s25. The ip can then be used to set the link up or down. ip link set dev In some cases an interface may still use the kernel eth# style of naming. enp0s25 up Identify Ethernet Interfaces ip link set dev enp0s25 down To identify all available Ethernet interfaces, you can use following commands as To verify the IP address configuration of enp0s25, you can use the ip command in shown below. ip a the following manner. @ ip route show@ LACP , Link Aggregation Control Protocol Dynamic IP Address Assignment (DHCP Client)@ NetFlow ip address show lo@ sFlow Networking – Name Server and Bridging IPFIX , Internet Protocol Flow Information Export Name Resolution RSPAN , Remote SPAN Name resolution as it relates to IP networking is the process of mapping IP CLI addresses to hostnames, making it easier to identify resources on a network. 802.1ag Two cases shall be discussed, It is designed to support distribution across multiple physical servers similar Name resolution using DNS to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V. Name resolution using static hostname records OVS can be installed as below Name resolution using DNS @ sudo apt update Name resolution using static hostname records @ sudo apt install openvswitch-switch Fully Qualified Domain Names (FQDN's)@ Service is started automatically after installation: Name Service Switch Configuration @ systemctl status openvswitch-switch.service Bridging ovs-vsctl show Bridging multiple interfaces is a more advanced configuration but is very MACVLAN useful in multiple scenarios. MacVLAN helps the user to configure sub-interfaces of a parent physical One scenario is using bridge on a system with one interface to allow virtual Ethernet interface with its own unique MAC address and as a result with its machines direct access to the outside network. own IP address. Another scenario is setting up a bridge with multiple network interfaces, then Applications, VMs and containers can now be grouped to a specific sub- using a firewall to filter traffic between two network segments. interface, in order to connect directly to the physical network using their own Configure the bridge by editing your netplan configuration found in MAC and IP addresses. /etc/netplan/:@ Drawbacks Now apply the configuration to enable the bridge: limitation to the number of different MAC addresses allowed sudo netplan apply on the physical port. The new bridge interface should now be up and running. The brctl provides NICs have a limitation on the number of MAC addresses they useful information about the state support natively. of the bridge, controls which interfaces are part of the bridge, etc. IEEE 802.11 protocol specifications, multiple MAC Networking – Bridging (cont.) addresses on a single client are not allowed. Legacy Bridging Common DHCP Server, Low CPU, Normal Network Utilization, Meets The brctl utility provide ubuntu ‘s legacy bridging that can be 802.11 standards and Easy to Set-Up are some salient feature of installed using the following command. sudo apt install bridge-utils MACVLAN creating a new bridge device, we’ll create one called br-sna1 WEEK#4 brctl addbr br-sna1 ip link set br-sn1 up User Management A net device can be added to bridge using following command User management is a critical part of maintaining a secure system. brctl addif br-sna1 NetDeviceName Ineffective user and privilege management often lead many systems into Open vSwitch Bridge devices being compromised. Open vSwitch (OVS) is an open source, production-quality, multilayer- it is important to understand how you can protect your server through virtual switch. simple and effective user account management techniques. Open vSwitch is designed for massive network automation through Root User programmatic extension, but still with the support for standard protocols and management interfaces. o Root is the superuser account in Unix and Linux. It is a user account for administrative purposes, and typically has the highest access rights on the system. o Usually, the root user account is called root. However, in Unix and Linux, any account with user id 0 is a root account, regardless of the name. Root user in Ubuntu In ubuntu server, administrative root accountis disabled by default. Deleting an account does not remove their respective home folder. It is up This does not mean that the root account has been deleted or that it to you whether or not you wish to delete the folder manually or keep it may not be accessed. It merely has been given a password which according to your desired retention policies. matches no possible encrypted value, therefore may not log in User Group Management directly by itself A user can be assigned to group(s) based on their department or Instead, users are encouraged to make use of a tool by the name of A group allows a user special access to system resources, such as files, sudo to carry out system administrative duties. directories, or processes (programs) that are running on the system. sudo allows an authorized user to temporarily elevate their privileges using This group membership can also be used to prevent access to system their own password instead of having to know the password belonging to resources because several security features in Linux make use of groups to the root account. impose security restrictions. This simple yet effective methodology provides accountability for all Every user is a member of at least one group. This first group is called the user actions, and gives the administrator granular control over which user’s primary group. Any additional groups a user is a member of are called actions a user can perform with said privileges. the user’s secondary groups. By default, the initial user created by the installer is a member of the group Group membership can be displayed by executing either the id or groups "sudo" which is added to the file /etc/sudoers as an authorized sudo user. command: student@onecoursesource:~$ id To give any other account full root access through sudo, simply add them to uid=1002(student)gid=1002(student) the sudo group. groups=1002(student),60(games),1001(ocs) Enabling/disabling root student@onecoursesource:~$ groups student games ocs If for some reason you wish to enable the root account, simply give it a Both the id and groups commands display information about the current user password: by default. Both commands also accept an argument of another user account sudo passwd name: sudo will prompt you for your password, and then ask you to supply a new student@onecoursesource:~$ id root password for root. uid=0(root) gid=0(root) groups=0(root) To disable the root account password, use the following passwd syntax: student@onecoursesource:~$ groups root sudo passwd -l root root : root However, to disable the root account itself, use the following command: The most important difference between primary and secondary group usermod --expiredate 1 membership relates to when a user creates a new file. Each file is owned by You should read more on sudo by reading the man page: a user ID and a group ID. man sudo When a user creates a file, the user’s primary group membership is used for User Management Operations the group ownership of the file: The process for managing local users and groups is straightforward and differs very Group information little from most other GNU/Linux operating systems. Ubuntu and other Debian Group information is stored in several files: based distributions encourage the use of the "adduser" package for account The /etc/passwd file contains user account information, including the management. primary group membership for each user. The /etc/group file stores information about each group, including the group users A default group that is rarely used in modern Linux distributions. name, group ID (GID) and secondary user membership. operators A group that was traditionally used on Unix systems for users who student@onecoursesource:~$grepstudent/etc/passwd required elevated privileges for specific system tasks. This group is student:x:1002:1002::/home/student: rarely used in modern Linux distributions. The /etc/gshadow file stores additional information for the group, including group administrators and the group password. Adding Removing Groups To add or delete a personalized group, use the following syntax, student@onecoursesource:~$head/etc/group respectively: root:x:0: sudo addgroup groupname daemon:x:1: sudo delgroup groupname bin:x:2: To add a user to a group, use the following syntax: sys:x:3: adm:x:4:syslog,bo tty:x:5: sudo adduser username groupname disk:x:6: User Level Security lp:x:7: User Profile Security mail:x:8: When a new user is created, the adduser utility creates a new home directory news:x:9: named /home/username. Special Groups The default profile is modeled according to contents of /etc/skel, A typical Linux system will have many default group accounts. These which includes all profile basics. default group accounts typically have GID values under 1000, making it easy For multiuser environment, close attention is required to the user for an administrator to recognize these as special accounts. home directory permissions to ensure confidentiality. Additionally, if you add new software to the system, more groups may be By default, user home directories in Ubuntu are created with world added as software vendors make use of both user and group accounts to read/execute permissions. provide controlled access to files that are part of the software. This means that all users can browse and access the contents of other Administrators who are focused on security should be aware of these special user’s home directories. group accounts because these accounts can provide either security features To verify your current user home directory permissions, use the following or pose security threats. syntax: ls -ld /home/username Group Description drwxr-xr-x 2 username username 4096 2007-10-02 20:03 username Root This group account is reserved for the system administrator. Do not To remove the world readable-permissions, following command can be add a regular user to this group because it will provide the regular user used. sudo chmod 0750 /home/username with elevated access to system files. The efficient approach is to modify the adduser global default permissions Adm Members of this group typically have access to files related to system when creating user home folders. Simply edit the file /etc/adduser.conf and monitoring (such as log files). Being able to see the contents of these modify the DIR_MODE variable to something appropriate, so that all new files can provide more information about the system than a regular user home directories will receive the correct permissions. would typically have. DIR_MODE=0750 Lp This is one of many groups (including tty, mail, and cdrom) used by ls -ld /home/username the operating system to provide access to specific files. Typically, drwxr-x--- 2 username username 4096 2007-10-02 20:03 username regular users are not added to these groups because they are used by background processes called daemons. Password Policy A strong password policy is one of the most important aspects of your sudo This group is used in conjunction with the sudo command. security posture. Many successful security breaches involve simple brute staff A default group that was traditionally used on Unix systems but is force and dictionary attacks against weak passwords. rarely used in modern Linux distributions. To offer any form of remote access involving your local password system, make sure you adequately address Minimum password complexity requirements associated with the AllowGroups variable located in the file Maximum password lifetimes /etc/ssh/sshd_config. Frequent audits of your authentication systems AllowGroups sshlogin Password Expiry External User Database Authentication When creating user accounts, you should make it a policy to have a Remote Administration minimum and maximum password age forcing users to change their OpenSSH passwords when they expire. This topic introduces a powerful collection of tools for the remote control To easily view the current status of a user account, use the following syntax: of, and transfer of data between, networked computers called OpenSSH. sudo chage -l username OpenSSH is a freely available version of the Secure Shell (SSH) protocol Last password change : Jan 20, 2015 family of tools for remotely controlling, or transferring files between, Password expires : never Password inactive : never Account expires : never computers. Minimum number of days between password change : 0 Maximum number of Traditional tools used to accomplish these functions, such as telnet or rcp, days between password change : 99999 are insecure and transmit the user's password in cleartext when used. Number of days of warning before password expires : 7 OpenSSH provides a server daemon and client tools to facilitate secure, To set any of these values, simply use the following syntax, and follow the encrypted remote control and file transfer operations, effectively replacing interactive prompts: sudo chage username the legacy tools. Example : Change the explicit expiration date (-E) to 01/31/2015, minimum The OpenSSH server component, sshd, listens continuously for client password age (-m) of 5 days, maximum password age (-M) of 90 days, connections from any of the client tools. inactivity period (-I) of 30 days after password expiration, and a warning When a connection request occurs, sshd sets up the correct connection time period (-W) of 14 days before password expiration: depending on the type of client tool connecting. sudo chage -E 01/31/2015 -m 5 -M 90 -I 30 -W 14 username if the remote computer is connecting with the ssh client application, Other Considerations the OpenSSH server sets up a remote control session after Many applications use alternate authentication mechanisms that can be easily. It authentication. is important to understand and control how users authenticate and gain access If a remote user connects to an OpenSSH server with scp, the to services and applications on your server. OpenSSH server daemon initiates a secure copy of files between the SSH Access by Disabled Users server and client after authentication. Simply disabling/locking a user account will not prevent a user from OpenSSH can use many authentication methods, including plain password, logging into your server remotely if they have previously set up RSA public key, and Kerberos tickets. public key authentication. Install OpenSSH They will still be able to gain shell access to the server, without the need Installation of the OpenSSH client and server applications is simple. for any password. To install the OpenSSH client applications on your Ubuntu system, Remember to check the users home directory for files that will allow use this command at a terminal prompt: for this type of authenticated SSH access, e.g. sudo apt install openssh-client /home/username/.ssh/authorized_keys. To install the OpenSSH server application, and related support files, Remove or rename the directory .ssh/ in the user's home folder to use this command at a terminal prompt: sudo apt install openssh- prevent further SSH authentication capabilities. server Be sure to check for any established SSH connections by the Configuring the OpenSSH disabled user, as it is possible they may have existing inbound or You may configure the default behavior of the OpenSSH server outbound connections. Kill any that are found. application, sshd, by editing the file /etc/ssh/sshd_config. who | grep username (to get the pts/# terminal) sudo pkill -f pts/# For information about the configuration directives used in this file, Restrict SSH access to only required user accounts. You may create you may view the appropriate manual page with the following a group called "sshlogin" and add the group name as the value command, issued at a terminal prompt: man sshd_config Configuring the OpenSSH@ Installing and Configuring Puppet@ SSH Keys Zentyal SSH keys allow authentication between two hosts without the need of a Zentyal is a Linux small business server that can be configured as a gateway, password. SSH key authentication uses two keys, a private key and a public infrastructure manager, unified threat manager, office server, unified key. communication server or a combination of them. To generate the keys, from a terminal prompt enter: ssh-keygen -t rsa Integrated This will generate the keys using the RSA Algorithm. During the process you will All network services managed by Zentyal are tightly integrated, be prompted for a password. Simply hit Enter when prompted to create the key. automating most tasks. By default the public key is saved in the file ~/.ssh/id_rsa.pub, while • This saves time and helps to avoid errors in network configuration and ~/.ssh/id_rsa is the private key. Now copy the id_rsa.pub file to the administration. remote host and append it to ~/.ssh/authorized_keys by entering: Opensource ssh-copy-id username@remotehost • Zentyal is open source, released under the GNU General Public License Finally, double check the permissions on the authorized_keys file, (GPL) and runs on top of Ubuntu GNU/Linux. only the authenticated user should have read and write permissions. • Zentyal consists of a series of packages (usually one for each module) that If the permissions are not correct change them by: provide a web interface to configure the different servers or services. chmod 600 .ssh/authorized_keys • Zentyal publishes one major stable release once a year based on the latest You should now be able to SSH to the host without being prompted Ubuntu LTS release. for a password. Configuration Puppet • The configuration is stored on a key-value Redis database, but users, groups, Puppet is a cross platform framework enabling system administrators to and domains-related configuration is on OpenLDAP. perform common tasks using code. • When you configure any of the available parameters through the web The code can do a variety of tasks from installing new software, to checking interface, final configuration files are overwritten using the configuration file permissions, or updating user accounts. templates provided by the modules. Puppet is great not only during the initial installation of a system, but also Advantage throughout the system’s entire life cycle. In most circumstances puppet will • The main advantage of using Zentyal is a unified, graphical user interface to be used in a client/server configuration. configure all network services and high, out-of-the-box integration between Puppet uses a client-server approach and consists of the following systems: them. The Puppet Master is a server with the Puppet Master daemon that Installing and Configuring Zantyal@ manages crucial system information for all nodes using manifests. Logging The Puppet Agents are nodes with Puppet installed on them with the Logging refers to record keeping of information about events that occur in a Puppet Agent daemon running. computer system, such as problems, errors or just information on current Puppet utilizes a client/server architecture consisting of the Puppet Master operations. and Puppet Agents. Puppet Agents uses pull mode to poll the master and Different types of events may occur in the operating system or in retrieve node-specific and site-specific configuration info. other software. The topology goes through the following steps: 1 - A node running a Puppet These log messages can then be used to monitor and understand the Agent daemon operation of the system, to debug problems, or during an audit. gathers all the information (facts) about itself, and the agent sends the facts to the Puppet Master. Logging is particularly important in multi-user software, to have a central 1 - The Puppet Master uses the data to create a catalog on how the overview of the operation of the system. node should be configured and sends it back to the Puppet Agent. On Linux, you have two types of logging mechanisms : 2 - The Puppet Agent configures itself based on the catalog and reports Kernel logging: related to errors, warning or information entries that your back to the Puppet Master. kernel may write User logging: linked to the user space, those log entries are related to To view the first 15 lines of a file, run head -n 15 file.txt, and to view processes or services that may run on the host machine. the last 15, run tail -n 15 file.txt. Kernel Logging Due to the nature of log files being appended to at the bottom, the On the kernel space, logging is done via the Kernel Ring Buffer. The ring tail command will generally be more useful. buffer is a circular buffer that is the first data structure storing log Monitoring files messages when the system boots up. To monitor a log file, you may pass the -f flag to tail. It will keep running, printing new additions to the file, until you stop it (Ctrl + When starting Linux machine, if log messages are displayed on the C). For example: tail -f file.txt. screen, those messages are stored in the kernel ring buffer. Searching files The Kernel logging is started before user logging One way that we looked at to search files is to open the file in less The kernel ring buffer, pretty much like any other log files on your and press /. system can be inspected. A faster way to do this is to use the grep command. In order to open Kernel-related logs on your system, you have to use We specify what we want to search for in double quotes, along with the “dmesg” command. the filename, and grep will print all the lines containing that search Example of events : Errors in mounting a disk, Driver Loading etc term in the file. For example, to search for lines containing “test” in Log files locations file.txt, you would run grep "test" file.txt. There are many different log files that all serve different purposes. When If the result of a grep search is too long, you may pipe it to less, trying to find a log about something, you should start by identifying the most allowing you to scroll and search through it: grep "test" file.txt | less. relevant file. System Logging Daemon (syslogd) The system logging daemon syslogd, also known as sysklogd, awaits System logs logging messages from numerous sources and routes the messages to the appropriate file or network destination. Messages logged to syslogd usually contain common elements like system hostnames and time-stamps in addition to the specific log Application logs information. Configuration of syslogd@ Log Rotation When viewing directory listings in /var/log or any of its subdirectories, you Non-human-readable logs may encounter log files with names such as daemon.log.0, daemon.log.1.gz, and so on. What are these log files? They are 'rotated' log files. That is, they have automatically been renamed after a predefined time-frame, and Viewing and monitoring log files a new original log started. After even more time the log files are The most basic way to view files from the command line is using the cat compressed with the gzip utility as in the case of the example command. You simply pass in the filename, and it outputs the entire contents daemon.log.1.gz. of the file: cat file.txt. The purpose of log rotation is to archive and compress old logs so Viewing the start or end of a file that they consume less disk space, but are still available for It is generally required to quickly view the first or last n number of inspection as needed. lines of a file. Typically, logrotate is called from the system-wide cron script The head and tail commands come in handy. /etc/cron.daily/logrotate, and further defined by the configuration file These commands work much like cat, although you can specify how /etc/logrotate.conf. Individual configuration files can be added into many lines from the start/end of the file you want to view. /etc/logrotate.d Log files that have zeroes appended at the end are rotated files. That means Vim is based on the original Vi editor, which was created by Bill Joy in log file names have automatically been changed within the system. 1976. In the 90’s, it started becoming clear that Vi was lacking in some logrotate handles systems that create significant amounts of log files. The features when compared with the Emacs editor. command is used by the cron scheduler and reads the logrotate configuration VIM includes all the missing features of VI. file /etc/logrotate.conf. It’s also used to read files in the logrotate VIM is generally preinstalled with many linux distributions, if not it can be configuration directory. installed as below sudo apt-get update sudo apt-get install vim var/log/log name here].log { Missingok Vim Modes Notifempty Compress Size 20k Daily Everything in Vim is considered a mode. You can achieve whatever you Create 0600 root root want if you understand modes in Vim. } There are many modes in Vim. But, we'll be looking at the 4 most important The commands perform the actions as follows: modes. missingok – Tells logrotate not to output an error if a log file is Command Mode missing Default mode, also called Normal mode. notifempty – Does not rotate the log file if it is empty. It reduces the To switch from one mode to another, you have to come to Command Mode size of the log file with gzip The commands that you run without any prefix (colon) indicate that you're size – Ensures that the log file does not exceed the specified running the command in command mode. dimension and rotates it otherwise Insert Mode daily – Rotates the log files on a daily schedule. This can also be to edit the contents of the file. done on a weekly or monthly schedule You can switch to insert mode by pressing i from command mode. create – Instantiates a log file where the owner and group are a root You can use the Esc key to switch back to command mode. user Command-Line Mode Logging related commands To execute commands 1) dmesg . the dmesg kernel ring buffer utility the commands in this mode are prefixed with a colon (:) 2) faillog, the faillog command (and also the faillog configuration file via man Switch to this mode by pressing : (colon) in command mode 5 faillog) Visual Mode 3) grep , the grep pattern searching utility to visually select some text and run commands over that section of code 4) head , the head utility switch to this mode by pressing v from the command mode. 5) klogd, the kernel log daemon (klogd) VIM Commands 6) last , the last command which shows last logged in users Insert mode commands 7) less , the less paging utility a Append text following current cursor position 8) logger , the logger command-line interface to syslog utility A Append text to the end of current line 9) logrotate , the the logrotate utility i Insert text before the current cursor position 10) savelog , the savelog log file saving utility I Insert text at the beginning of the cursor line 11) syslogd , the system log daemon (syslogd) o Open up a new line following the current line and add text there 12) syslog.conf , the syslogd configuration file O Open up a new line in front of the current line and add text there 13) Tail , the tail utility Command mode commands Shell Scripts Ctrl + e Writing and Editing Files Ctrl + d Vim is an acronym for Vi IMproved. It is a free and open-source cross- Ctrl + f platform text editor. It was first released by Bram Moolenaar in 1991 for Ctrl + y UNIX variants. Ctrl + u Ctrl + b % - use with '{','}','(',')' to jump with the matching one. Unsetting Variables 0 - first column of the line unset VAR_NAME $ - jump to the last character of the line Variable Types Editing Commands Local Variables d …delete the characters from the cursor position up the position Environment Variables given by the next command Shell Variables c …change the character from the cursor position up to the position Special variables@ indicated by the next command. $0 - The filename of the current script. y …copy the characters from the current cursor position up to the $n - These variables correspond to the arguments with which a script was position indicated by the next command. invoked. Here n is a positive decimal number corresponding to the position p …paste previous deleted or yanked (copied) text after the current of an argument. cursor position. $# - The number of arguments supplied to a script. Undo and Redo $* - All the arguments are double quoted. If a script receives two arguments, u- you can undo almost anything using u in the command mode. $* is equivalent to $1 $2. Ctrl+r - undo is undoable using Ctrl-r. $@ - All arguments are individually double quoted. If script receives two Searching and Replacing arguments, $@ is equivalent to $1 $2. :s/old/new/gc $? - The exit status of the last command executed. :s/old/new/g $$ - The process number of the current shell. This is the process ID under Save the file which they are executing. :wq , Save file and exit $! - The process number of the last background command. :q! , Exit file without saving the changes Defining Array Values First Script@ Basic Syntax Create a new script file with name : myfirstScript.sh Vi myfirstScript.sh array_name[index]=value Write the following content #!/bin/sh For the ksh shell, here is the syntax of array initialization # Author : IT601 set -A array_name value1 value2 ... valuen # Copyright (c) Virtual University of Pakistan For the bash shell, here is the syntax of array initialization echo “Hello Virtual University Student, What is your student iD” read VUID array_name=(value1 ... valuen) echo “WELCOME !, $VUID" Accessing Array Values Make the script executable Chmod 777 myfirstScript.sh ${array_name[index]} Run script Operators ./myfirstScript.sh Arithmetic Operators@ Variables Relational Operators@ Variable Names@ Boolean Operators@ The name of a variable can contain only letters (a to z or A to Z), numbers ( 0 to 9) String Operators@ or the underscore character ( _). By convention, Unix shell variables will have their File Test Operators@ names in UPPERCASE. Control Statements@ Defining Variables Loops@ VAR_NAME=variable_value Nesting while Loops@ Accessing Values Creating Functions echo $VAR_NAME Syntax Read-only Variables function_name () { list of commands readonly VAR_NAME } Simple Function Linux, Window Server, UNIX # Define your function here Hello () { • Software packages echo "Hello World" MySQL, DHCP Server, DNS server, Oracle Server } How are Servers different? # Invoke your function Hello • 1000s of clients depend on server. Passing Parameters • Requires high reliability. # Define your function here Hello () { • Requires tighter security. echo "Hello World $1 $2" • Often expected to last longer. } • Investment amortized over many clients, longer lifetime. # Invoke your function Server CPU Hello test1 test 2 CPUs Returning Data There are two main types of server processors: # Define your function here X86. Hello () { • X86 processors are the most common type of processor found in servers. echo "Hello World $1 $2" return 10 • They are made by companies such as Intel and AMD. } • X86 processors are designed for general-purpose computing. # Invoke your function • They can be used for a variety of tasks, including web hosting, database Hello Zara Ali management, and file sharing. # Capture value returnd by last command ret=$? RISC echo "Return value is $ret" • RISC processors are designed for specific tasks. WEEK#5 • They are often used in high-performance servers. Servers • RISC processors are made by companies such as IBM and Oracle. A group of computer machines in an organization generall referd to as server. Selection of CPU It is used to provide different types of services. • The type of server processor you need depends on the type of server you are • Application services using. • Web services • If you are using a general-purpose server, an x86 processor is likely the best • Back-office processing choice. • Databases • If you are using a high-performance server, a RISC processor may be the • Batch computation better choice. • Etc. Clock speed Servers vs Services keep up with the demands of modern businesses A server offers one or more services. Server is also a more technical term, whereas ablity to process large amounts of data quickly and efficiently service is more a term off the problem domain. Cores Server as hardware (see post from Dan D) Number of cores in a processor can have a big impact on its performance. Server as software (eg. Apache HTTP server) More cores means that the processor can handle more tasks at the A server provides services to one or more clients, and a server (hardware) is a same time. computer. A server (hardware) can be anything from a home computer to a big in the market today have up to 32 cores and more server-rack with a lot of processor power. From the view of a computer, a server Memory support (software) is just a set of services which is available to clients on the network. A server is a computer machine consisting of Hardware support large amounts of memory CPU, Memory, Storage, Power Supply, NIC, Mother Board need to store and process large amounts of data • Operating System Expand-ability Need to be expandable so that businesses can add more features as their 4th Gen Intel® Xeon® Scalable processors have the most built-in accelerators of needs change. any CPU on the market to improve performance in AI, analytics, networking, Built-in features such as security or management tools storage, and HPC. Expansion slots so that businesses can add more features as they need them 2) Intel® Xeon® Max Series Efficient Data Management Maximize bandwidth with the Intel® Xeon® CPU Max Series, the first and only Data management is a key concern x86-based processor with high-bandwidth memory (HBM). Must be able to efficiently handle large amounts of data 3) Intel® Xeon® W Processor Designed for creative professionals, delivering the performance you need for VFX, keep the server running smoothly, even when under heavy load 3D rendering, and 3D CAD on a workstation. Cost and Power Consumption 4) Intel® Xeon® D Processor Most important factors to consider is cost When space and power are at a premium, these innovative system-on-a-chip Initial Cost processors bring workload optimized performance. Operational Cost 5) Intel® Xeon® E Processor energy-efficient Essential, business-ready performance, expandability and reliability for entry server Right balance between power consumption and performance to minimize solutions. your carbon footprint Server Memory Budget Memory key factors to consider is your budget Server memories are concerned witn Workload RAM The server load CPU Cache SERVER CPU Choices Two Major aspects of server memory are 1 Intel Xeon Large Capacity (a) - Multicore with two threads per core, 1.8 to 3.3 Ghz, 8 cores Higher speed (b) - upto 18 MB L3 Cache x86 supports up to 64GB with PAE. x86-64 supports 1 PB (1024 TB) 2 - AMD Opteron Servers need faster RAM than desktops. (a) - 4, 6, 8, or 12 cores @ 1.4 to 3.2 GHz • Higher memory speeds. (b) - Up to 12 MB L3 cache • Multiple DIMMs accessed in parallel. (c) - IBM Power 7/8/9/10... • Larger CPU caches. (d) - 4, 6, or 8 cores with 4 threads each @ 3.0 to 4.25 GHz Types of Memory (e) - 4 MB L3 cache per core (up to 32MB for 8-core) FBDIMM (f) - Sun Niagara 3 – Fully buffered (g) - 16 cores with 8 threads each @ 1.67 GHz UDIMM (h) - 6 MB L2 cache - Un-buffered dual in-line memory module Xeon vs Pentium/Core CPUs RDIMM Xeon based on Pentium/Core with changes that vary by model: - Registered dual in-line memory module • Allows more CPUs LRDIMM • Has more cores - Load Reduced dual in-line memory module • Better hyper-threading SODIMM • Faster/larger CPU caches - Used with laptops • Faster/larger RAM support Micro DIMM Intel Processor Families Transfer Speed of RAM 1) Intel® Xeon® Scalable Processors SDRAM - Synchronous Dynamic Random Access Memory use a 32-bit data path and provided 32 address lines, giving access to DDR 4GB of memory - Double Data Rate SDRAM ran at 8MHz in order for it to be compatible with ISA common DDR2 expansion bus types are – 2 x times the DDR VESA - Video Electronics Standards Association DDR3 invented to help standardize PCs video specifications low power, Twice clock multiplier with a four times clock multiplier , No F/B 32-bit data path and ran at 25 or 33 MHZ Compatibility, 2 x DDR2 TT VL-Bus was superseded by PC DDR4 PCMCIA - Personal Computer Memory Card Industry Association - low power higher module density and lower voltage requirements, coupled with (Also called PC bus) higher data rate transfer speeds AGP - Accelerated Graphics Port SCSI - Small Computer Systems Interface Universal Serial Bus (USB) Servers need high I/O throughput. • Fast peripherals: SCSI-3, Gigabit Ethernet • Often use multiple and/or faster buses. PCI • Desktop: 32-bit 33 MHz, 133 MB/s Cache Memory • Server: 64-bit 66 MHz, 533 MB/s The L1, L2, L3 Cache PCI-X (backward compatible) Cache memory is high speed memories placed between RAM and CPU. • v1.0: 64-bit 133 MHz, 1.06 GB/s RAM -> L3 -> L2 – L1-> Registers - > CPU • v2.0: 64-bit 533 MHz, 4.3 GB/s System Bus PCI Express (PCIe) Data sharing • Serial architecture, v3.0 up to 16 GB/s Addressing Power Supply Power Server Power Supply Timing Expansion Bus Types Servers based on the ATX or microATX form factors generally use an Common expansion bus types are ATX power supply ISA- Industry Standard Architecture Pedestal servers based on one of the Server System Infrastructure (SSI) designed for use in the original IBM PC form factors generally use an EPS12V power supply 8 bit / 16 bit ATX Power Supply Standards The ISA bus ran at a clock speed of 4.77 MHz and improved version ATX power supplies were originally designed for use in desktop computers. 8MHz. Widely used in entry-level tower and slimline servers. 16-bit version of the ISA bus is sometimes known as the AT bus (AT-Advanced Technology). ATX power supply standards include the following: MCA - Micro Channel Architecture ATX version 2.03 Older entry-level tower servers IBM developed this bus as a replacement for ISA when they designed ATX12V More recent entry-level tower servers the PS/2 PC in 1987 ATX1U 1U slimline servers speed of 10MHz and supported either 16-bit or 32-bit data ATX2U 2U slimline servers EISA - Extended Industry Standard Architecture ATX1U/2U Rack-Mounted Power Supplies as an alternative to MCA Rack-mounted servers use a variety of power supply standards. Most 1U and 2U servers use the power supply standards developed by the Advantages : Relatively cheap , Fast enough for most , Enough SSI Forum, such as the ATX1U and ATX2U power supplies capacity for most The ATX1U and ATX2U power supply standards use the same 20- Disadvantages : Limited number of writes , May not be suitable for pin ATX power supply, floppy, and hard disk power connectors used enterprise use by the ATX 2.03 and ATX12V v1.x power supply standards. NVME SSD Some 200w and larger ATX1U power supplies also feature the 4-pin Applications : High-performance computing , Boot drives , Databases ATX12V power supply connector Advantages : Currently the fastest persistent server storage type on the ATX2U power supplies feature the 6-pin auxiliary power supply market connector Disadvantages : Can be very expensive , Limited number of writes SSI Power Supplies Network-attached storage The SSI Forum has developed a series of power supply and connector form factors Applications : Provides access to the same data on multiple systems designed for use in various types of servers. Advantages : Can be very fast and very high capacity, Can mix SSDs and Pedestal-mounted servers hard drives to make a faster solution than hard drives alone, RAID storage EPS12V always ERP12V Disadvantages : Requires a separate computer, Expensive 1U rack-mounted servers Storage area network EPS1U Applications : Sophisticated databases , Virtualization deployments , Large 2U rack-mounted servers virtual desktop infrastructures (VDIs) , Enterprise resource management EPS2U workloads ERP2U Advantages : Resilient, Removes single points of failure , Easily Storage withstands failure of multiple components/devices Storage Types Disadvantages : Expensive , Complex Four Types of Storage Solutions are : Cloud Storage Direct-attached storage (DAS) Applications : Off-site backups , Provides access to data from Conencted with SAS, SATA or multiple locations PCIe Advantages :Peace of mind Network-attached storage (NAS) Disadvantages : Requires Internet connection , Usage rates may Storage area network (SAN) impact upload/download capacity SAS , SATA SSDs and NVME Cloud storage Hard Disks: Requires Internet Three RPM speeds: 7,200 RPM, 10,000 RPM, and 15,000 RPM Direct-attached storage Capacity varies, starting at about 300GB and going up to over 10TB HDD SATA 6, SAS 6, and SAS 12 are typical connection speeds Applications : Suitable for Write Heavy Applications, Backups SSDs Advantages : Cheap, High Capacity, Unlimited number of writes , Low NAND flash memory cells and are always faster than hard drives, primarily Cost because they do not have to “seek” the data on the disk, and latency is low Disadvantages : Frgile, Breaks down over time, Slow SAS or SATA connector SSD SATA 6, SAS 6, and SAS 12, SAS 24 Applications : General storage , Boot device High Cost SSDs wear out as they are written to Hard Disks: RAID 5 enables multiple write orders to be implemented concurrently SAS and SATA bus interfaces were originally designed for slow, because updated parity data is dispersed across the multiple disks. This mechanical hard drives feature ensures higher performance. NVMe storage can connect via a wide range of connector types from M.2 to RAID 6 U.2 to U.3 to newer standards such as E1.S, also known as “ruler” SSDs RAID 6 deploys two parity records to different disk drives (double parity) connect over the PCIe bus, typically an x4 connection, though x2 and x8 are enabling two simultaneous disk drive failures in the same RAID group to be also possible recovered. The maximum speeds for Gen 3 PCIe and Gen 4 PCIe are 3.5GB/s and 7 RAID 6 where parity updates are allocated separately across multiple disks. GB/s, respectively RAID 10 RAID and its Levels RAID 1+0 combines RAID 0 and RAID 1. RAID (redundant array of independent disks) By configuring both technologies in a single array, both data duplication A way of storing the same data in different places on multiple hard disks or and improved access speed can be provided. Although this combination makes installation more expensive compared to solid-state drives (SSDs) to protect data in the case of a drive failure. other technologies, both reliability and high I/O performance can be There are different RAID levels, however, and not all have the goal of guaranteed. providing redundancy. Importantly RAID 1+0 on ETERNUS AF/ETERNUS DX provide extra RAID 0 protection. This is because a single disk failure doesn't prevent striping to RAID 0 divides data into block units and writes them in a dispersed other disks. manner across multiple disks. RAID 50 (RAID 5 + 0) As data is placed on every disk, it is also called "striping". RAID5+0 stripes data across mulitiple RAID5 groups using a front-end This process enables high level performance as parallel access to the data RAID0 method. on different disks improves speed of retrieval. However no recovery Such multiple RAID5 striping enables one disk per group to be saved in the feature is provided if a disk failure occurs. event of disk failure. This provides higher reliability in large-capacity If one disk fails it effects both reads and writes, and as more disks are added to the array the higher the possibility that a disk failure will occur. configuration systems compared with a single RAID5 group. In addition, the RAID 1 rebuilding of transactions, which with RAID 5 and RAID 6 takes an This level is called "mirroring" as it copies data onto two disk drives increasingly longer time as disk capacity grows, can be executed much faster simultaneously. with RAID5+0 as the amount of data in each RAID group is smaller. Although there is no enhancement in access speeds, the automatic RAID 60 (RAID 6 + 0) duplication of the data means there is little likelihood of data loss or system RAID 60, also called RAID 6+0, combines the straight block-level striping downtime. of RAID 0 with the distributed double parity of RAID 6, resulting in a RAID RAID 1 provides failure tolerance. If one disk fails the other automatically 0 array striped across RAID 6 elements. It requires at least eight disks takes over and continuous operation is maintained. Advantages of RAID include the following: There is no storage cost performance improvement as duplicating all data Improved cost-effectiveness because lower-priced disks are used in large means only half the total disk capacity is able for storage. numbers. Raid 5 Using multiple hard drives enables RAID to improve the performance of a RAID 5 is the most popular RAID technology in use today. It uses a single hard drive. technique that avoids the concentration of I/O on a dedicated parity disk. Increased computer speed and reliability after a crash, depending on the Although RAID 5 divides the data and creates parity information. configuration. The parity data is written separately across multiple disks. Reads and writes can be performed faster than with a single drive with RAID 0. This is because a file system is split up and distributed across drives that work together on the same file. There is increased availability and resiliency with RAID 5. With mirroring, The rails and framework are typically made of steel or aluminum to support two drives can contain the same data, ensuring one will continue to work if hundreds or even thousands of pounds of equipment. the other fails. The width of the rails, the horizontal and vertical spacing of the mounting Disadvantages of RAID holes, the size of the equipment cabinets and other measurements are Nested RAID levels are more expensive to implement than traditional standardized RAID levels, because they require more disks. Rack Standardization The cost per gigabyte for storage devices is higher for nested RAID Most IT equipment is nominally 19 inches wide (including mounting because many of the drives are used for redundancy. hardware) and follows a standard set by the Electronics Industry Some RAID levels -- such as RAID 1 and 5 -- can only sustain a single drive failure. Alliance (EIA) and now maintained by the Electronic Components RAID arrays, and the data in them, are vulnerable until a failed drive is Industry Association (ECIA). replaced and the new disk is populated with data. The current 19-inch rack standard is called EIA-310-E, which is Because drives have much greater capacity now than when RAID was first essentially equivalent to IEC-60297-3-100 or DIN 41494 in other implemented, it takes a lot longer to rebuild failed drives. regions. (There’s also a standard for 23-inch wide telecom If a disk failure occurs, there is a chance the remaining disks may contain equipment. bad sectors or unreadable data, which may make it impossible to fully Rack Units rebuild the array. Although 19-inch racks are always the same nominal width, the height and When should you use RAID? depth vary. Instances where it is useful to have a RAID setup include: The depth of the rack rails is usually adjustable to some degree. When a large amount of data needs to be restored. If a drive fails and The height of the rack is divided into standardized segments called rack data is lost, that data can be restored quickly, because this data is also units. Each rack unit is 1.75 inches high, and the height of a rack or an stored in other drives. When uptime and availability are important business factors. If data equipment cabinet is expressed as the number of rack units followed by the needs to be restored, it can be done quickly without downtime. letter “U”. For example, a 42U rack contains 42 rack units. When working with large files. RAID provides speed and reliability That does not mean the rack is exactly 42 x 1.75 inches high because racks when working with large files. usually include at least a little extra space at the top and bottom that isn’t When an organization needs to reduce strain on physical hardware usable rack space. and increase overall performance. As an example, a hardware RAID It does mean that the rack will accommodate any combination of standard card can include additional memory to be used as a cache. rack equipment up to 42U— whether it’s 42 x 1U switches, 14 x 3U servers When having I/O disk issues. RAID will provide additional throughput or 21 x 1U switches with 7 x 3U servers. by reading and writing data from multiple drives, instead of needing to Remember that the rack also has to be deep enough for the equipment and wait for one drive to perform tasks. rated to support the combined weight of all the equipment. When cost is a factor. The cost of a RAID array is lower than it was in the Rack Types past, and lower-priced disks are used in large numbers, making it cheaper. Open frame racks are just that—open frames with mounting rails, but without sides or doors. WEEK#6 Rack enclosures have removable front and rear doors, removable side panels Server Racks and four adjustable vertical mounting rails (posts). Racks organize IT equipment into standardized assemblies that make Wall-mount racks are designed to be attached to the wall, saving floor space efficient use of space and other resources and fitting in areas where other racks can’t. They can be open frame racks At the most basic level, a rack consists of two or four vertical mounting rails or enclosed cabinets. and the supporting framework required to keep the rails in place. Basic rack options Factors influencing Rack Choice Doors Blade servers: Side Panels A hardware architecture that places many machines in one chassis Roof Cloud-based computer services: Casters and Levelers Renting use of someone else’s servers Locks Software as a service (SaaS): Hinged Wall Bracket Web-hosted applications Mounting Holes Other Considerations Server appliances: Color Power Distribution Purpose-built devices, each providing a different service Toolless Mounting Battery Backup All in One Rack enclosure and Cooling Device Management All in One/ All eggs in one basket Patch Panels Basic Airflow One Main Server Environmental Monitoring Side Panels Security All services like Database, Web, DNS, HTTP, Proxy all runs on same Airflow Management Shock Pallet High End Hardware Cable Management Knockdown Virtual Machines vs. Containers Thermal Ducts Stability Complete Failure Active Heat Removal Environmental Protection Complex process Close-Coupled Cooling Seismic Protection Snowflake Other Server Selection Factors Snowflake Architecture Buying Servers - additional features for server hardware A better strategy is to use a separate machine for each service. Extensibility purchase servers as they are needed, ordering the exact model and More CPU performance configuration that is right for the application High-performance I/ORack mountable Each machine is sized for the desired application: Upgrade options RAM, disk, number and speeds of NICs, and enough extra capacity, or No side-access needs expansion slots, for projected growth during the expected life of the High-availability options machine. Maintenance contracts Vendors can compete to provide their best machine that meets these Management options specifications. Server Architecture/Approaches The benefit of this strategy is that the machine is the meets the requirements. Primary Approaches The downside is that the result is a fleet of unique machines. All eggs in one basket: Each is a beautiful, special little snowflake One machine used for many purposes While snowflakes are beautiful, nobody enjoys a blizzard. Each new Beautiful snowflakes: system adds administrative overhead proportionally. Many machines, each uniquely configured For example, Buy in bulk, allocate fractions: It would be a considerable burden if each new server required learning an Large machines partitioned into many smaller virtual machines using virtualization entirely new RAID storage subsystem. or containers Each one would require learning how to configure it, replace disks, and Other Approaches upgrade the firmware, and so on. Grid computing: If, instead, the IT organization standardized on a particular RAID product, Many machines managed one as unit each new machine would simply benefit from what was learned earlier. Snowflake Architecture -> Asset Tracking Buy in bulk, allocate fractions When managing many unique machines it becomes increasingly important The next strategy is to buy computing resources in bulk and allocate to maintain an inventory of machines fractions of it as needed The inventory should document One way to do this is through virtualization. Technical information such as the operating system That is, an organization purchases large physical servers and divides Hardware parameters such as amount of RAM, type of CPU, and so them up for use by customers by creating individual virtual machines on. (VMs). It should also track the machine owner (either a person or a A virtualization cluster can grow by adding more physical hardware department), the primary contact (useful when the machine goes as more capacity is needed. haywire), Buying an individual machine has a large overhead. The services being used. It must be specified, ordered, approved, received, rack mounted, and Use automated means prepared for use. Makes it less likely the information will become outdated. It can take weeks. Virtual machines, by comparison, can be created If automated collection is not possible, an automated annual review, in minutes. requiring affirmations from the various contacts, can detect obsolete A virtualization cluster can be controlled via a portal or API calls, so information that needs to be updated. automating processes is easy. Inventory applications VMs can also be resized. Snowflake Architecture -> Reducing Variations You can add RAM, vCPUs, and disk space to a VM via an API call Always be on the lookout for opportunities to reduce the number of instead of a visit to the datacenter. variations in platforms or technologies being supported. If customers request more memory and you add it using a Discourage gratuitous variations by taking advantage of the fact that people management app on your iPhone while sitting on the beach, they will lean toward defaults. think you are doing some kind of magic. Make right easy: Virtualization improves computing efficiency. Make sure that the lazy path is the path you want people to take. Physical machines today are so powerful that applications often do Select a default hardware vendor, model, and operating system and not need the full resources of a single machine. make it super easy to order. The excess capacity is called stranded capacity because it is unusable Provide automated OS installation, configuration, and updates. in its current form. Provide a wiki with information about recommended models and Sharing a large physical machine’s power among many smaller options, sales contact information, and assistance. virtual machines helps reduce stranded capacity, without getting into Snowflake Architecture -> Global Optimization the “all eggs in one basket” trap. While it sounds efficient to customize each machine to the exact needs of the Benefits of Isolation service it provides, the result tends to be an unmanageable mess. Stranded capacity could also be mitigated by running multiple services on the Classic example of a local optimization that results in a global de-optimization. same machine. For example, if all server hardware is from one vendor, adding a single However, virtualization provides better isolation than simple multitasking. The machine from a different vendor requires learning a new firmware patching benefits of isolation include system, investing in a new set of spare parts, learning a new customer Independence support procedure, and so on. Each VM can run a different operating system. The added work for the IT team may not outweigh the benefits gained from On a single physical host there could be a mix of VMs running a variety of using the new vendor. Microsoft Windows releases, Linux releases, and so on. Resource isolation They often do not know what is required or reasonable. The disk and RAM allocated to a VM are committed to that VM and not One strategy is to simply offer reasonable defaults for each OS type. shared. Another strategy is to offer a few options: small, medium, large, and Processes running on one VM can’t access the resources of another VM. custom. In fact, programs running on a VM have little or no awareness that they are As in the other strategies, it is important to limit the amount of running on VMs, sharing a larger physical machine. variation. Stranded Capacity Mitigation Live Migration Granular security Most virtual machine cluster management systems permit live migration of VMs, A person with root access on one VM does not automatically have which means a VM can privileged access on another VM. Suppose you had five services, each run be moved from one physical host to another while it is running. by a different team. Aside from a brief performance reduction during the transition, the users of the VM If each service was on its own VM, each team could have administrator or do not even know they’re being moved. root access for its VM without affecting the security of the other VMs. Benefits of Live Migration If all five services were running on one machine, anyone needing root or Live migration makes management easier. administrator access would have privileged access for all five services. It can be used to rebalance a cluster, moving VMs off overloaded physical Reduced dependency hell machines to others that are less loaded. Each machine has its own operating system and system libraries, so they It also lets you work around hardware problems. can be upgraded independently. If a physical machine is having a hardware problem, its VMs can be VM Management evacuated to another physical machine. Like other strategies, keeping a good inventory of VMs is important. The owners of the VM can be blissfully unaware of the problem. They The cluster management software will keep an inventory of which VMs simply benefit from the excellent uptime. exist, but you need to maintain an inventory of who owns each VM and its Shared Storage purpose. The architecture of a typical virtualization cluster includes many Some clusters are tightly controlled, only permitting the IT team to create physical machines that share a SAN for storage of the VM’s disks. VMs with the care and planning reminiscent of the laborious process By having the storage external to any particular machine, the VMs previously used for physical servers. can be easily migrated between physical machines. Other clusters are general-purpose compute farms providing the ability for Buy in bulk, allocate fractions Other Aspects customers to request new machines on demand. Buy in bulk, allocate fractions – VM Packing provide a self-service way for customers to create new machines VMs can reduce the amount of stranded compute capacity, they do not The process can be fully automated via the API to avoid delays VM eliminate it. creation VMs cannot span physical machines. In addition to creating VMs, users should be able to reboot and delete As a consequence, we often get into situations where the remaining their own VMs RAM on a physical machine is not enough for a new VM. There should be limits in place so that customers can’t overload the system The best way to avoid this is to create VMs that are standard sizes that pack by creating too many VMs. nicely Typically limits are based on existing resources, daily limits, or per- Example department allocations. Define the small configuration Users can become confused if you permit them to select any amount of disk Define the medium configuration space and RAM. Large configuration to fit two VMs Buy in bulk, allocate fractions – Spare Capacity for Maintenance Containers are much lighter weight and permit more services to be packed If a physical machine needs to be taken down for repairs, there has to be a on fewer machines. place where the VMs can be migrated if you are to avoid downtime Docker, Always reserve capacity equivalent to one or more physical servers. Mesos N+1 redundancy : Reserving capacity equivalent to one physical Kubernetes server N+2 redundancy : Reserving capacity equivalent to two physical server N+x redundancy: higher redundancy if there is a likelihood that multiple physical \machines will need maintenance at the same time. One strategy is to keep one physical machine entirely idle spare machine is entirely unused Another strategy is to distribute the spare capacity around the cluster Buy in bulk, allocate fractions – Unified VM/Non-VM Management Most sites end up with two entirely different ways to request, allocate, and track VMs and non-VMs. It can be beneficial to have one system that manages both. Some cluster management systems will manage a pool of bare-metal machines using the same API as VMs. Creating a machine simply allocates an unused machine fromthe pool. Deleting a machine marks the machine for reuse. Another way to achieve this is to make everything a VM, even if that means offering an extra-large size, which is a VM that fills the entire physical machine. While such machines will have a slight performance reduction due to the VM overhead, unifying all machine management within one process benefits customers, who now have to learn only one system, and makes management easier. Buy in bulk, allocate fractions – Containers Containers are another virtualization technique. Process level instead of the machine level. VM is a machine that shares physical hardware with other VMs, each container is a group of processes that run in isolation on the same machine. All of the containers run under the same operating system, but each container is self-contained as far as the files it uses. Therefore there is no dependency hell.