Business Web Server

http://windsolarhybridaustralia.x10.mx/httpoutputtools-tomcat-java.html

Programming the modern business web server and its technologies. The first half is a quick history of the technologies, and their progression, that made the internet of today; then the speed of modern web servers and the pitfalls of I/O hardware and software tactics, with some breakdown of modern site software architecture, the new multi-core CPUs, concurrent software threads and other hardware available.
Copyright © All Rights Reserved

Written by Samuel A Marchant (nicephotog@gmail.com)

About: Programming the modern business website

The later half of this article has a useful tip for designing a personal business web server …

First: a quick history of website programming and the 'net, the technologies that made it to the present and grew…

Starting around 2000, as I learned more in-depth problem solving in the web languages and other programming languages I used over time, I realized how vast and complex business websites have become at present, in contrast to two decades ago when I began to learn computing.

Remember when people said WOW to advertisements for hosting providers whose servers had 256 megabytes of RAM? 32 MB of RAM was the standard recommended size for a computer with a Pentium II (x86-family) CPU (Central Processing Unit) to surf the net as a client; a normal everyday hard disk was around 600 megabytes, floppy disc drives were a standard accessory on every computer, and CD-RW was new technology with single-sided discs. For gigabytes of storage, computer tape systems were economically the only large data storage systems, and only for backup. Windows 2000 (NT) had just been brought out, people were somewhat unaware of the type of software commonly known as Open Source GNU software, and they were starting to discover a little-known new UNIX-like OS called Linux (a POSIX UNIX-emulating OS) in flavors of Red Hat, Slackware and Mandrake among others, particularly suited to open source, although Solaris and Macintosh UNIX computers and OS's were also much in use. The company behind one of these latter systems was, like other large multinational computer software and machine vendors, little mentioned or known outside business circles, though not for want of being a worldwide name in computing generally. Within half a decade that company would be a common household name for a new software operation technique called a VM (Virtual Machine) and its new programming language; after a decade it would also hold some notoriety for that language being the base language and OS of many smartphones, and it would be taken over by Oracle Corp. At the time the language was developed, that company was called Sun Microsystems Inc. USA.

Before 2000, a medium-size business web server used a dual (single-core) P2 CPU motherboard, often with SCSI discs and 256 MB of RAM (Random Access Memory). The OS (Operating System) software was often server specific, although many personal home web servers, configured into the name-server IP tables with IANA and running 32 MB of RAM, were quite sufficient for everything except a mid-size business (franchises, distributed departments of workers often at different physical building addresses, with more than 20 people employed in the company). Beyond 2000, workstation and home computers commonly had 1 to 2 gigabytes of hard disk with 64 to 128 megabytes of RAM, and a special digital media motherboard add-in card.

A little before 2000 the multimedia CPU was brought into existence (a CPU able to play back high-quality digital music), but most multimedia files at that time were an excessive size for a workstation PC, particularly visual media files, and a multimedia add-in card, the AGP (Accelerated Graphics Port) card for video and imaging (a video or multimedia card added into the mainboard's PCI or ISA accessory slot), or an add-in sound card, was developed to improve media file playback and handling.

The odd point is that today, in 2020, I often see little more presented complexity than I did then in the pages of websites that require programming control in the page and on a server (the CGI scripting): login forms, search bars, email forms and survey forms. I know from experience these have far greater complexity behind it all (back-end programming) after 20 years, though I often question whether this is simply more choices being served today and better cross-browser, cross-platform programming for the client; e.g. most screen resolutions in 2000 were no more than 1024 x 768 or 1280 x 1024, whereas today screen resolutions range from the former to well over 2000 pixels wide, enough that they need catering for alongside other screen sizes such as 1440.

One of the first website learning projects I built, I remember, did not require server CGI scripting to match many mid-size or small business internet websites of that time. Also, the test servers embedded as a localhost loopback on the PC (Personal Computer) did not come standard with any easy way of configuring PERL (Practical Extraction and Report Language, a script-interpreted language) to integrate with the server (PHP did not exist at that time). PERL at the time also did not ship many CGI modules in free distributions, such as a base64 module (I have a script I wrote to encode base64 "binary file attachments (email or download)" that I carefully built from the IETF RFC recommendation specs), so such modules had to be obtained from somewhere, if allowed, and registered in the PERL configuration file (after installing whatever was in the distro) or pushed onto the @INC array first on the command line.

At that time (starting just a few years after 2000), javascript was also not cross-browser oriented, and many Internet Explorer pages were scripted in VBScript or, to a lesser extent, JScript (a near-ECMA262-compliant Microsoft Corp. proprietary version of web javascript for Internet Explorer); with both these proprietary javascript versions, and some conference with the W3C, Microsoft developed DHTML, a step up on its earlier web page handling javascript systems.

Javascript itself was for most dominated by a now-dissolved company called "Netscape Communications Corporation"; apart from JScript, the other version was the "Netscape Javascript". Netscape produced server software that performed server-side javascript processing of pages. At one time the HTML script tag used an attribute to specify whether the script was for client or server use. Among Netscape Corp.'s achievements are the original development of SSL, the original javascript, and the Netscape web browser. SSL was later picked up into Open Source, not to be confused with SSH and open-source SSH.
Next in the history, an old cliché:

"Java is not javascript!"
or
"javascript is not Java!"

Java is a compiled "C-like" language. Javascript is a loosely typed, runtime-interpreted text script language (it has few if any data types); both "javascript" and "Java" are OOP (Object Oriented Programming) languages. HOWEVER, *only Java is polymorphic!

* While it is required for a businessman stakeholder to assess which language(s) to use for computational systems, particularly before a managing stakeholder expropriates or requisitions the server program language and binaries under the license, NEVER ask anyone, whether the assigned programmer or an assessor etc., for a complete, clear definition of "polymorphism". There are many types of polymorphism that vary from language to language, and there is often confusion between "polymorphism" and "overloading". Unfortunately, defining polymorphism can degenerate severely; it is "more conceptual", akin to "Backus-Naur notation", and for an OOP programmer it is not intrinsic to have heard of the word polymorphism, or to code semantically correctly.

To answer the question "what is polymorphism?" at any one time would be the equivalent of writing all the pseudo code of a program in Backus-Naur notation! (just a warning). Answering it for the language in question is nothing but one long "brew pound session".
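All the same, the distinction the cliché above points at can be sketched in a few lines of Java. This is a minimal illustrative example (the class names are invented for the sketch): subtype polymorphism means the method that runs is chosen by the runtime type of the object, while overloading is resolved at compile time by the parameter types.

```java
// Sketch: subtype polymorphism vs. overloading in Java (names are illustrative).
class Shape {
    double area() { return 0.0; }                     // base behaviour
}

class Square extends Shape {
    final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }   // overrides the base method
}

class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

class Printer {
    // Overloading: same name, different parameter types, chosen at compile time.
    static String describe(int n)    { return "int:" + n; }
    static String describe(double d) { return "double:" + d; }

    // Polymorphism: which area() runs depends on the runtime type of s.
    static double measure(Shape s) { return s.area(); }
}
```

Calling `Printer.measure(new Square(2.0))` dispatches to `Square.area()` at runtime, whereas `Printer.describe(3)` versus `Printer.describe(3.0)` is settled entirely by the compiler.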

A game changer shortly before 2000

Around this time a new pre-compiled (not scripted) OOP programming language called "Java" (originally called "Oak") was presented to the world as a partially free product by Sun Microsystems Inc. The product, more commonly referred to as Java2, would later become a hit with programmers and users alike for its cross-platform and integration ability, whether for the server machine programmer or the client machine user, web or machine-application specific. Because the JRE runtime was free and available for any platform or machine size, it started to take off as a programming language of choice, particularly when programmers found they could also use the company's online tutorial and download the core JDK free as well. It is interesting to note that J2EE (Java2 Enterprise Edition, for business services: web serving, intranet and network messaging machines and more) is much older than its popularity often suggests (1999 was J2EE's first release), because it was not until the "J2EE 1.3 tutorial" (J2EE 1.3 API docs) that many features at the base of modern J2EE services had been developed and become tried and trusted.

By 2002, Java's only teething troubles lay before "core" versions 1.3 and 1.4 of the "core API (Application Programming Interface)". The problem was its stability, arising more from inexperience with using the language and from the fact that its engine and compiler binaries are written in C/C++ (some distributions/versions and parts of the JVM are created in Assembly language). The problem was particularly noticeable in "Java web applets" before 1.4, for which the Netscape Navigator browser had, since before 2000, shipped the current Sun Microsystems Inc. JRE with the browser installer download. The "Netscape Navigator web browser" is the code base from which the Mozilla browser (or vice versa) and many other top-line web browsers were developed. When Sun Microsystems Inc. Java 1.4 was released, most serious bugs had been fixed, with better error-handling systems; this helped immensely, because the first predictions of computer sizes and CPU processing speeds increasing tenfold in efficiency for everyone (business or domestic), along with greater ability to handle and incorporate multimedia over high-speed networks at high resolution, had begun to become reality (e.g. 10 to 30 GB disks existed and some CPU cores approached GHz clock speed)!

The strategy of Java was human-readable programmable semantics and cross-platform code ("write once, run anywhere") using a client JRE (Java Runtime Environment) VM (Virtual Machine).

The moment Java's popularity took hold (long before version "core 1.4"), not only did Microsoft build JREs and JDKs for its own Java version (somewhere around 1999), but most of the world's large computing companies that produce language compilers also produced versions of the JRE and JDK from the Sun Microsystems (later taken over by "Oracle Corp.") specifications. These world-name companies included Borland, IBM and BEA, makers between them of many types of CPU assembly tools (ASM, MASM), UNIX and DOS OS's and custom OS's, and compilers for languages and API systems such as Fortran, C and C++, Windows C, Objective-C, or for the machine itself on a specialist CPU architecture.

Around the time of Sun Microsystems JDK 1.4, the web had developed sufficiently to acknowledge, as a directive, support for a new web server type specification (often called a "container") that later became a new specialist set of Java specifications in itself, with its own web APIs, SDKs and JVMs, called the J2EE (Java2 Enterprise Edition) specification (a community process with JSR releases). J2EE is a specification created by Sun Microsystems Inc. USA (later taken over by "Oracle Corp.").

An interesting note here: CGI (Common Gateway Interface) in the earliest web servers, Windows OS web servers and others was programmed with .bat batch files and small C/C++ socket port-listener programs; there is now a myriad of web server *scripting languages (*run by a run-time interpreter, not compiled to an instruction set) integratable to CGI web servers.

A web server was little more than a machine's daemon/service that listened on a networking port over a communication telephone line or a network-integrated machine. Its first taste of web specialist programming was SSI (Server Side Include), a small markup tag and an incorporated server driver to be read in the page by the server when outputting a page containing such a tag (a simple extra service, a forerunner of a data parser program, e.g. an XML parser). However, after server-side page-integrated scripting parsers were developed (around the time of Netscape Corporation's javascript server), many of these script languages were far superior to SSI; since PHP (which became popular for use around 1999 with the Zend engine) and JSP custom tags, SSI has become quite or completely redundant.
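The "daemon listening on a port" idea above can be sketched in a few lines of Java. This is a minimal illustrative sketch, not a real server: it accepts connections one at a time and writes back a fixed HTTP/1.0 response (the port number and body are arbitrary choices for the example).

```java
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the early "socket port listener" idea: a daemon that
// accepts a TCP connection and writes back one fixed HTTP response.
public class TinyListener {
    static final String BODY = "<html><body>hello</body></html>";

    // Build a bare HTTP/1.0 response around a body string.
    static String buildResponse(String body) {
        return "HTTP/1.0 200 OK\r\n"
             + "Content-Type: text/html\r\n"
             + "Content-Length: " + body.length() + "\r\n\r\n"
             + body;
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {   // port is arbitrary
            while (true) {
                try (Socket client = server.accept();
                     OutputStream out = client.getOutputStream()) {
                    out.write(buildResponse(BODY).getBytes(StandardCharsets.US_ASCII));
                }
            }
        }
    }
}
```

Everything a modern container such as Tomcat does (threading, parsing the request, CGI/servlet dispatch) is layered on top of exactly this accept-and-write loop.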

By 2005, many types of internet web server and integrated CGI system, programming language and server-runtime page production system had been specifically developed: Microsoft Active Server Pages using the JScript or VBScript proprietary languages or some of Microsoft Corp.'s other proprietary languages; Adobe ColdFusion with its tagging and language; the Netscape server with Javascript; the first Apache HTTP and XML web servers able to have PERL, PHP, Python or Ruby integrated; and the first Apache Tomcat (Java) and other J2EE servers such as Resin, IBM WebSphere, BEA WebLogic and JBoss.

New web specifications such as Web Services and WSDL (Web Services Description Language) started to take hold; meanwhile the standard analogue mobile phone that could interact with web data was being made redundant by the smartphone with a true-color screen, able to surf web pages, take photos and upload them.

Protocols for machine control or messaging "alike" email began, e.g. SOAP (Simple Object Access Protocol), as did a protocol for HTTP connection as a stateless URL-connection, client-side web page scripting system integrated with web javascript, called AJAX (Asynchronous Javascript and XML).

Offbeat at this time, a company called Google LLC was growing and succeeding with its search engine business; it moved into many areas of web service, one particularly well known being its javascript Google Maps platform API.

Android OS for smartphones is the development of Google and the Open Handset Alliance; Android OS is Linux based, and until recently mobile apps were mainly programmed in Java. I have, from around 2006, a whole GB disk of Sun Microsystems Inc. mobile and Micro Edition documentation / API documentation and mobile SDK compilers, along with the Nokia version of the Java mobile SDK.

Before the smartphone, Oracle Java (formerly Sun Microsystems Inc. Java) became prominent in the phone, small-device and mobile-device technology genre with the creation of a Java OS system to operate small, lower-processing-power devices using Oracle Corp. Java. Previously, when the company was Sun Microsystems Inc., Java binaries were modeled and developed for a variety of portable or reduced-size devices and accessories (e.g. smart card readers, LCD consoles and control system browsers) and run with the KVM, not a standard JRE VM.

Java has recently had its 25th birthday, this year in 2020, and one of the pinnacles of its current use is by the USA's NASA (National Aeronautics and Space Administration), which has written a special SDK, called the WorldWind SDK, to help construct "satellite data" and "imaging representation" for use inside real-world and real-time applications.

Two important new technologies of programming arrived around 2008: XML (eXtensible Markup Language) with XSLT (eXtensible Stylesheet Language Transformations), and database servers with their specification SQL languages. Also at that time, many new specification directives for web programming became a type of best practice, such as the WCAG ("Web Content Accessibility Guidelines") and "cross-browser scripting" for all web browser page programming using javascript, as specified by the W3C.org.

Before the W3C, the web consortium's forerunner was the IETF (Internet Engineering Task Force), which survives to this day with documents specifying sets of agreed recommendations for applying programming and communication rules to various types of machine communication activities and purposes over the internet. These number-indexed documents drafted by the IETF are called RFCs (Requests For Comments); examples include RFCs for the composition of various types of email transmission formats, the carriage encoding of file binaries for particular purposes such as attachments and base64 encoding, standard encryption algorithms, FTP (File Transfer Protocol) commands and recommendations, and transmitted document header formats and meanings, all a microscopic fraction of the final set of RFCs.

Microsoft Corp. ceased to produce its generic version of the Sun Microsystems-specification Java language SDK (Software Development Kit) and JRE; Microsoft started producing a proprietary VM and language called C# that had similar semantics to Java2.

A few years later than 2008, the PHP (PHP: Hypertext Preprocessor) CGI scripting language (a partial offshoot or derivative of PERL, incorporating modern features such as database connectivity and new protocols such as SSH (Secure SHell) and SSL (Secure Socket Layer)) became popular. At around that time, the DTD for XML started to be replaced with Schema XML document definition. A DTD (Document Type Definition) was a special base template system "to validate the tag contents" of a practically produced (implemented) SGML (Standardised Generalised Markup Language) document, of which HTML and XML are subsets; Schema is a method of using XML tags themselves to form a tag-validation base template for XML documents produced against a DTD or Schema definition of the tags allowed in a document. XML's original use was to wrap data in "custom named tags" for use with XSLT or any other "XML markup parser system" that required the data's "validation (credibility, integrity)" before use, and to assist in cleaning up the internet's poorly constructed documents by giving programmers a mildly stricter document markup rule-set, as HTML 4.01 with CSS2 was becoming quite unregulated and unwieldy to program at a novice or inexperienced level. The idea of markup-controlled page graphics was never intended to be solely a programmer-trained activity, and neither was HTML 4 intended to be "perfectly formed" tagging.

Oddly, this last point (not needing valid, well-formed document tags) now applies to some web programmers only under special circumstances, as XHTML 1.0 came into existence and documents then required a large quantity of "well formed" tag "validation credibility" to succeed at being rendered by client browsers.

Along with XML and XHTML came a specification called "DOM level 4 and 5" (DOM is the Document Object Model, the programming abstraction of a document's tag levels and data breakdown in accord with its DTD or Schema), whose effect is the new W3C specification "web javascript (ECMA262) usable in XML documents", such as XHTML documents.

It should be understood that XML, HTML and SGML "are not simply a defined tag system alone"; they require the use of a special "markup control program" called a "parser". The document (e.g. .html, .jsp, .asp, .xhtml, .xml) is sent through the parser program before output, to detect errors of tag usage or disallowed character symbols, whereupon the parser either terminates output of the document or simply logs the errors found.
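The parser behaviour described above can be seen with Java's built-in XML DOM parser: a well-formed document parses, while a mismatched tag raises an exception rather than being silently output. A minimal sketch (the helper class and method names are invented for the example):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Sketch: feed a document through an XML parser. Well-formed input is
// accepted; a tag-usage error throws instead of producing output.
public class ParseCheck {
    static boolean wellFormed(String xml) {
        try {
            DocumentBuilder b =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            b.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return true;   // parser accepted the tag structure
        } catch (Exception e) {
            return false;  // tag or character error detected; output would stop
        }
    }
}
```

A server-side page pipeline does essentially this before (or while) emitting the document, which is why a single mismatched tag can abort a whole page.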

2020: programming business web servers for the 'net

By 2010 the business web site was a completely different system from applying some graphic art to pages and some in-page programming for user interaction with page parts or server data. Data for medium-size businesses is, as the norm in 2020, held in SQL (Structured Query Language) databases for better protection and easier backup. The modern 2020 mid-size business website uses an HTTP (Hypertext Transfer Protocol) server integrated with an SQL server, both servers using multiple quad-core CPUs, and can be communicated with by a smartphone fully capable of surfing the web or using WSDL web services.

As a tactic, a third server can be attached for such businesses if required; this third is a multimedia server to handle the load of media file sizes and data baud rate. Such a technique is known as clustering servers, though it need not be taken beyond the HTTP server accessing and receiving outside communications through a router by port forwarding: the other two servers can simply be configured to connect to the HTTP server by a direct HTTP server port link, at suitably safe specification baud rates, to supply the HTTP server's requests for database or multimedia resources.

Clustering servers is not strictly required now that terabyte-size hard disks exist, even in a context of multimedia, forums or blog sites. However, a good audit of the data space required, and the use of a greater number of smaller disks, can give better read/output time when accessing resources, taking into account the known EIDE (Enhanced Integrated Drive Electronics) serial data bus of the hard disk device along with the number of disc read/write heads physically available in a disk (faster disk control and a faster motherboard accessory bus).

THE TRICK: There is a small flaw in looking only as far as the number of read/write heads per disk. A disk will only store a file where the OS decides to place it on the drive, and to predict this with any validity requires understanding that a disk has platters with two faces each, upon which those heads read and write according to partition assignment sizes. Given that each face of a disk platter adds to the number of "separate partitions" that can be prepared by format as a drive letter, each partition can hold a replica copy of the significantly sized download files, and the CGI servlets for downloading can be made to do a short lookup test for a file lock on an "associated/corresponding index dummy file (an anchor signal)" and rotate, per instance, which partition a file copy is drawn from. E.g. 10 hard disks with 8 platters each is 160 partitions, therefore 160 copies of the set of all intensively downloaded/page-called large files, and 160 physical read heads under control rather than 5 or 10. The motherboard bus channels should also be examined for their final rated data speed specification, the disk controller throughput I/O read speed (*note: almost all maximum speed specifications given in docs apply only under ideal conditions), and the number of controller coupling lines to the output port. (For disk geometry hardware strategy, see further below.)
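The rotation idea above can be sketched in Java. This is only an illustrative sketch under stated assumptions: the partition mount points, the anchor file name `anchor.lock`, and the class name are all invented for the example; a real servlet would add proper locking and error handling.

```java
import java.io.File;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the partition-rotation trick: each partition holds a replica
// of the large download files plus a dummy "anchor" file. Download code
// rotates through the partitions, skipping any whose anchor is present
// (treated here as "locked"), so reads spread across physical disk heads.
public class PartitionRotator {
    private final String[] partitionRoots;          // e.g. one mount point per partition
    private final AtomicInteger next = new AtomicInteger(0);

    public PartitionRotator(String[] partitionRoots) {
        this.partitionRoots = partitionRoots;
    }

    /** Pick the next partition whose anchor is absent and which holds the file. */
    public File locate(String fileName) {
        for (int tries = 0; tries < partitionRoots.length; tries++) {
            int i = Math.floorMod(next.getAndIncrement(), partitionRoots.length);
            File anchor = new File(partitionRoots[i], "anchor.lock");  // assumed name
            File copy = new File(partitionRoots[i], fileName);
            if (!anchor.exists() && copy.exists()) {
                return copy;                         // serve this replica
            }
        }
        return null;                                 // every replica busy or absent
    }
}
```

Each call advances the shared counter, so concurrent download servlets naturally fan out across the replicas rather than queueing on one set of heads.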

After exploiting this point in constructing your high-throughput server, at whatever your economic caliber and requirements allow, it is best, after copying all those files and before going live, to defragment all the disks to again maximize the performance of the physical read/write heads.

*a* Multiple "out" carrier lines through the router from the main computer are another assistant in lowering blockage. That is achieved by multiple initial installations of the server software, each configured on a different port, though initially they are all HTTP and associated with the one site. (!!!! Database server software can have only ONE server software instance installed, because multiple instances cannot be coordinated safely over the same data repository UNLESS only one instance writes to file; multiple instances would in any case be unnecessary here, since the multiple HTTP server instances suggested are only "taking a bare request for a resource and outputting, in that name-server-similar role, large images, video, audio and other media items regularly sent (no processing, file saving or emailing etc.)", unlike the sole read-write main name server that is able to be called and identified visibly by net clients.) NOTE: initially, one of these server instances will need to be the "initial face" name server; the other two are for heavy file-loading support, e.g. HD images or multimedia served. This, however, requires three data lines out from the router, NOT one phone line (three different numbers; worth mentioning only if the load is immense, and probably only video serving, or simply an immense number (thousands) of users per minute, requires more than one line out), with each individual server's chosen unique port number configured and bound to an individual machine hardware in/outlet port, plus the equivalent of three IP service subscriptions. This last scenario is a little vast, and the last step before buying top-line server boards and top-speed tele-networking equipment to match baud rate.

*a* clause: To output a second name server successfully "from the same server machine (with two separate installations of HTTP server)" requires two different registered domain names, e.g. mystuffhere.com.au (viewed) and mystuffherealso.com.au (data transmission of AKA data-parts), to accelerate image and media items; two telephone lines and two IP data contracts are required, plus two routers (unless a second port-forwarding port pair can also be configured on the one machine) and two high-rate physical hardware outlet ports, e.g. COM1 and COM2, to bind each HTTP server upon.

Another piece of help: each server (overall software machine program) instance started can be assigned to a single core if you are using a quad-core (or more) CPU.
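Binding a whole server process to one core is an OS-level step (done with the operating system's affinity tools, not from inside Java), but a server's worker pool can at least be sized to the cores the JVM reports. A minimal sketch, with invented class and method names:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: size a request-worker pool to the CPU cores the JVM reports.
// (Pinning a process to a particular core remains an OS-level action.)
public class CoreSizedPool {
    public static int poolSize() {
        // one worker per reported core, never fewer than one
        return Math.max(1, Runtime.getRuntime().availableProcessors());
    }

    public static int runDemo(int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize());
        AtomicInteger done = new AtomicInteger(0);
        for (int i = 0; i < tasks; i++) {
            pool.submit(done::incrementAndGet);   // stand-in for request work
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();                        // tasks completed
    }
}
```

Servers such as Tomcat expose the equivalent knob as a configurable thread-pool size; matching it to core count avoids both idle cores and excess context switching.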

SERVER RAM and speed I/O

Here's where it gets interesting. From the client machine perspective, most of the beef and grunt in the modern computer is there because of multimedia or heavy processing from games, imaging and animation; the subtlety, however, is that this is mainly just one user looking at one subject, whereas a server is attempting as many connections, processing services and data throughput as possible. This means it will be holding large chunks of data, such as parts of download zip files, maybe large 3 MB PDF documents, or whole URL-encoded uploaded files in mid-processing if there are forms for such purposes on the site. A mid-size business is also likely to use an SQL database server constantly, and its temporary-table returned query result sets are held in RAM (for a set period, e.g. minutes) until the temp table expires, NOT written to a disk buffer (either way this is often large and RAM consuming). With media video, a common format of "streaming codec" is used so the whole file is not required for the client-end media player to play segments; with indexed segments the video can start anywhere chosen and requested back to the server by the player. That is why server motherboards with large RAM capacities are used. (For disk strategy, see further below.)
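The segment-serving point above, that a server should never need the whole media file in RAM at once, comes down to copying through a small fixed buffer. A minimal sketch (the class name and the 64 KB buffer size are assumptions for the example):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of segment-wise serving: copy a stream through a small fixed
// buffer so only one chunk is ever held in RAM, never the whole file.
public class ChunkedCopy {
    static final int CHUNK = 64 * 1024;   // 64 KB buffer: an assumed size

    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[CHUNK];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);         // send this segment, then reuse the buffer
            total += n;
        }
        return total;                     // bytes transferred
    }
}
```

A streaming media servlet does the same thing with the file opened at the segment offset the player asked for, so RAM use stays at one buffer per connection regardless of file size.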
CPU cores and mainboard/motherboard concurrency

* It is a good idea to shift/update to newer versions of server software specifically designed to operate with multiple CPU cores (including the VMs for .NET or Java).

The bigger subtlety is that if your business site is of interest at various hours of the day, with hundreds of hits per hour, any one piece of data sent or received is surrounded by twice as much send/receive transmission-control machine information, even though it is quite a small piece of data well under a megabyte in size; so six cores is not many, and pipelining can again become an issue, apart from the number of allowed connections in the server software configuration file.

NOTE: The old machines that Australian businesses have are clogging the internet for speed; at the least, defragment your drives for faster, more efficient file pickup.

 * When running a CPU as in the following, it is recommended to use a "fluid cooling tower and fan system" on the CPU.
 ** Never overclock a server CPU! For business server purposes, overclocking requires throwing away the CPU every six months whether it is faulty or not (use a version above 3 GHz if possible).
 Another annoying feature of mainboards (apart from learning all the correct jumper settings) is that installing the motherboard software for the chip-set requires the CPU; afterwards, changing the type of CPU is difficult, if allowed at all.
 * When obtaining a cooler system with fan together, always note the fan wattage and understand how to paste the top of the CPU to the heat-sink face of the cooler for effective mounting (basically a flat thin film, not touching, after heat-sink lock-down; and get the voltage connection right for the fan also).

At present, by contrast, client machines are around 8 GB to 16 GB of RAM. [random example 1] [random example 2]

*1 Server machines are around 64 GB to 512 GB of RAM. [random example 1 PDF] [random example 2]

*1 note: [J2EE businesses looking to make a server will like this one; it's a "Coffee Lake", use 9th generation only.] While it is possible to load maximum RAM onto a mainboard, it is governed by the maximum the CPU can handle, so the true quantity of RAM is processor dependent!

[For mid-size business economically, as a dual-CPU server mainboard, or for small business as a single-CPU server: Intel® Xeon® E-2176G Processor] 6 cores

[example: small business single-CPU server mainboard (use Intel® Xeon® E-2176G Processor - RAM 32GB - DDR4 2666)]

(A REVIEW Article - CPUs)

From above to below there is a vast jump in cost and performance; the above is relatively cheap in server terms and requires SATA/IDE PCI cards to enable adding some more hard disks.

[dual-CPU mainboard for heavy processing and high throughput if required (uses Intel® Xeon® Platinum 9282 Processor)]

[CPU for genuine high throughput if required - Intel® Xeon® Platinum 9282 Processor] 56 cores (simply so people know it exists)

[ "Coffee Lake" small to mid-size business machine for J2EE - for anyone wanting to build a server for themselves; not strictly much beyond this, not actually cheap, not actually expensive. (Board) single-CPU mainboard X11SCH-F LGA 1151, up to 128GB DDR4 2666MHz, 8x SATA3 (6Gbps), C246 chipset, dual LAN ports. (Processor) Intel® Xeon® E-2288G Processor, 8 cores, 3.7 GHz, 16MB cache. nb: ! 9th generation "Coffee Lake" is the generation to use, but it is not well supported - X11SCH-F LGA 1151 mainboard manual (PDF). Note that to install a motherboard you must install the BIOS software with the disk provided by the manufacturer for that model; if any printed material describes the choices that will be available in the BIOS software's installer setup, read it first so you can make good decisions. Usually there is some printed material relating the "jumper settings" for the CPU and other units - however, most information can be gained by finding and reading the manual on the BIOS setup disk in a computer. superDOM ] [ a performance tuning article for Tomcat server (PDF) *best to set your own command line parameters and start on the command line with input argument switches ]
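The tip above about setting your own command-line switches can be sketched as a small helper that composes a CATALINA_OPTS-style string for a Tomcat start script. The heap size and collector choice here are illustrative assumptions, not figures from the article; tune them against your own load tests.

```java
// Sketch: compose JVM command-line switches for starting Tomcat.
// The flag values are illustrative assumptions, not recommendations.
public class TomcatOpts {
    // Build a CATALINA_OPTS-style string: a fixed heap (Xms == Xmx
    // avoids resize pauses) plus the G1 collector, a common server choice.
    public static String catalinaOpts(int heapMb) {
        return "-Xms" + heapMb + "m -Xmx" + heapMb + "m -XX:+UseG1GC";
    }

    public static void main(String[] args) {
        // e.g. export CATALINA_OPTS="..." before running catalina.sh
        System.out.println(catalinaOpts(2048));
    }
}
```

Starting on the command line this way keeps the switches visible and repeatable instead of buried in a service wrapper.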
Link: Be sure to add together the wattage of all the components (board and CPU + disks) you will put together, to know the correct power supply rating to buy.
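That wattage addition can be sketched as a short sum with headroom. The 30% headroom figure and the component wattages below are assumptions for illustration, not values from the article.

```java
// Sketch: sum component wattages and add headroom to size a power supply.
// The 30% headroom and the part wattages are illustrative assumptions.
public class PsuSizing {
    // Total draw of all components times (1 + headroom), rounded up.
    public static int recommendedWatts(int[] componentWatts, double headroom) {
        int total = 0;
        for (int w : componentWatts) total += w;
        return (int) Math.ceil(total * (1.0 + headroom));
    }

    public static void main(String[] args) {
        // Hypothetical build: board + CPU 95W, two disks 10W each, fans 10W
        int[] parts = {95, 10, 10, 10};
        System.out.println(recommendedWatts(parts, 0.30) + "W PSU or larger");
    }
}
```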

Article link: A little about motherboard jumpers

Article: build server

Article link: Motherboards 20 terms to know

Article link: why your own business server

DISKS

Recently, TB-size hard drives have come into existence. !!!!! HOWEVER, there remain extremely slow hard drives of many TB whose data rate is under GB/s, at only 7200 rpm or 5400 rpm. !!!!! Other newer hard drives (generally physically larger, also of TB storage size) are designed with only two read heads and only one platter, so they are of little use either. A proven, reliable set of high-speed-access SSDs can be one available choice; however, there are disks on the market that have many platters and heads, high I/O rates in Gb/s, and not much more than a terabyte of size, which suits a good management principle of one database-name file per disk-platter-geometry size (whatever size it is safe for the file to grow to over time can be pre-allocated to the DB server as one individual .db file per platter, one partition per .db file - e.g. the "database names" in a query, "associates", "accounts", "businessuse", "commodities", would be four .db files, one file each, so four files and four partitions).

The trick is to get a reasonably fast (e.g. 6 Gb/s rate) multi-platter disk. The link below is an example of a median-priced disk of the type that can be divided into partitions matching the quantity of platters, so each type of server binary and each type of particular storage repository can be placed onto its own platter-sized partition. *** The following example disk (link below) has five platters, hence ten heads and ten faces, so ten partitions all of the same size are required to be made: the OS on one or two platters (depending on the minimum install space required) as the primary drive; each required server type on its own partition; the main domain name server's site image folder on another; and on another, whatever other large files are to be served for download (or for access time). If required, complete copied sets can be replicated onto more disks' platters as partitions, perhaps on another disk formatted to the same partitions-to-heads layout (! PROVIDED YOU HAVE WRITTEN A REAL TIME DOWNLOAD CALL MANAGEMENT PROGRAM TO ROTATE THROUGH THE PARTITIONS FOR EACH DOWNLOAD OR STREAMING REQUEST THAT OPENS A FILE. nb: a file download is known as an "attachment" in the HTTP protocol).
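The "rotate through the partitions" program the paragraph insists on can be sketched as a round-robin selector over the replicated partitions' mount points: each incoming download or streaming request is served from the next replica in turn, spreading head movement across disks. The mount paths here are hypothetical examples.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the "real time download call management" idea: each request
// that opens a file is pointed at the next replicated partition in turn.
// The mount paths are hypothetical examples.
public class ReplicaRotator {
    private final String[] mounts;
    private final AtomicInteger next = new AtomicInteger(0);

    public ReplicaRotator(String... mounts) {
        this.mounts = mounts.clone();
    }

    // Thread-safe round-robin: each call returns the next mount point,
    // wrapping back to the first after the last.
    public String nextMount() {
        int i = next.getAndIncrement();
        return mounts[Math.floorMod(i, mounts.length)];
    }

    public static void main(String[] args) {
        ReplicaRotator r =
            new ReplicaRotator("/mnt/replica0", "/mnt/replica1", "/mnt/replica2");
        for (int i = 0; i < 4; i++) {
            // Serve the requested file from the chosen replica partition
            System.out.println(r.nextMount() + "/downloads/file.iso");
        }
    }
}
```

In a real server this selector would sit in front of whatever code opens the attachment stream, so concurrent downloads land on different platters.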

*** EXAMPLE of a good server disk for serving large download files and images (particularly replicated sets to iterate through each time a non-page file is called). SQL databases keep each permanent written database as a single file, so devoting a partition per DB-name file when installing the server software (by SQL server administration) assists disk I/O speed:

*-* DISK-INFO click-on-enlarge image SPECS

The following "ns" and "ns2" are the same physical machine; the 'A record' for the IP has ns2 as a backup internet pathway to the same site name. With two distinctly configured servers operating, the A record appears similar but has "four net addresses and two domain names"; so three servers accessible over the net have six IP 'A records', two accessible over the net as domain name servers have four records, and usually a database is run on a machine as localhost or some other "unregistered domain name" but is essentially the 127.0.0.1 net address (called the machine's internal network "loopback" address).

note: Article on how to set up RAID from the BIOS (for the personal backup (DORMANT server software) nameserver replica, which operates as a copying RAID repository over the RAID ports but only stores an identical in-use image, not serving). Your nameserver backup machine must be capable of the same write speeds onto disks with the replicated partition structure and contents; for best purpose the "disk brand, size, type, format and folder structures" should be totally identical too, the same as the un-started installed server sets.

*-* NOTE: when installing a hard disk, you must be aware to use the correct jumper settings and the correct BIOS settings; some UNIX OS systems require NORMAL, not LBA, for the disk to operate properly.
Such an assessment can allow the HTTP server to run with all two to three server types present as pure software in the one machine. (note: with a set of HTTP servers, it may be required to name, group and "chown" them into a multi-headed monster, all the multiple programs owning and grouped the same as each other, with an individual name each and all the same permissions; great for keeping the resources out of the way of the main name server and its CGI).

The main thing to remember in all of this is to match all the maximum throughput capacities / speed specifications from the disk heads to the output side of the router, with special reference to the ISP plan's maximum upload and download service speeds and the baud / bit rate set in the software configuration controls of the router and network card!
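Matching capacities end to end amounts to the observation that the effective serving rate of the whole chain is its slowest link. A minimal sketch, with all the link figures being hypothetical examples in megabits per second:

```java
// Sketch: the effective throughput from the disk heads to the router's
// WAN side is the minimum over all the links in the chain. The figures
// used below are hypothetical examples in Mb/s.
public class Bottleneck {
    public static double bottleneckMbps(double... linkMbps) {
        double min = Double.MAX_VALUE;
        for (double v : linkMbps) min = Math.min(min, v);
        return min;
    }

    public static void main(String[] args) {
        // disk interface 6000, NIC 1000, LAN 1000, ISP upload plan 50
        System.out.println("Effective serving rate: "
                + bottleneckMbps(6000, 1000, 1000, 50) + " Mb/s");
    }
}
```

The point of the exercise is that money spent widening any link other than the narrowest one (here the ISP upload plan) buys nothing.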

Basic pages and basic still images do not classify as multimedia; however, HD (High Definition) images of huge size often require special high-rate processing.

The oddity: it's not the multimedia server, it's not the HTTP web server that is the difficulty for programming and integration, it is the SQL data server. SQL data servers have many real uses; they are not simply another or alternative "architectural" way of building and formatting web data ready for use in easily understandable structures. SQL data server uses are often CRM (Customer Relationship Management), version control backups, separate secure customer information repositories, separate non-externalized screened company information, web e-commerce accounting, economic prediction and trending statistics repositories, group email account data serving, real-time industrial factory machine control feedback data storage, and real-time satellite data parameter recording.

The speed and power of modern quad-core servers allows much data to be databased at throughput and then used in real time over web / intranet connections, including multimedia data binaries.

That's the simple pros and cons of the hardware for serving; the real challenge is the customized, purpose-driven software solution: construction of the CGI and any networked client programs, whether workstation service endpoints or connected GUI desktop terminal software. Construction of custom software, called development, requires the programming language to match the server's or servers' "purpose".

For most, PERL, PHP, Python and Ruby are excellent if no heavy throughput or high quantity of hits per minute with complex processing is required. Once large hit quantities per minute must be handled, with large data transaction throughput through the custom programming modules and complex processing, more effective CPU management, CPU and hardware resource monitoring, and the "ability" to pre-test custom constructed software are required. Languages such as Java2-J2EE or C#, which are compiled and optimized by the compiler, have much better results in handling heavy traffic loading, e.g. start-of-the-business-day logins; they can also be used in intranet situations if client console workstations require software applications specifically made for the workstation operators' jobs.

To put it sweetly about Java2-J2EE and C# as "the server web programming language": a language compiled to an instruction set, used by a business / company rather than interpreted text scripts, gives greater scalability of jobs and services with web/intranet/VPN servers, by the ability to act through the CPU and buses more seamlessly - somewhat like some mad scientist using Fortran to write workstation GUI client applications and server processing daemons (some "brands" of Fortran do have GUI construction abilities in some versions for some OSes). Java2-J2EE seems to top the list because it includes every form of API along with one key feature: the binaries of the servers are written in Java and run in its JVM, to which an underpinning format of coding practice in the architectural standard of J2EE programming is adhered by both the API programmer AND the original server software writers, whether of Tomcat, J2EE Glassfish or any other J2EE-compliant server. To be fair and impartial, it is possible with many servers, whether Microsoft Windows Server (documentation is often found in a .chm for C/C++ programmers), an Oracle J2EE web server, Apache Tomcat or the Apache HTTP server, to modify the server internals through what is known as an API programming interface; this is effectively either a modification of its actual shipped drivers or an additional daemon resource module, being a DLL, an .so, or in a Java server a .class file.
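The ".class file as an add-on module" mechanism can be sketched with plain reflection: the server defines an extension interface and loads an implementing class by name at runtime. The interface and class names below are invented for illustration and are not part of any real server's API.

```java
// Sketch: a server-side extension point loaded as a .class file by name.
// ServerModule and LoggingModule are invented names for illustration,
// not any real server's API.
public class ModuleLoaderDemo {
    // The contract the server expects every add-on module to satisfy
    public interface ServerModule {
        String name();
    }

    // An example module, normally shipped as a separate .class file
    public static class LoggingModule implements ServerModule {
        public String name() { return "logging"; }
    }

    // Load a module class by its fully qualified name, as a server
    // would when reading a module list from its configuration.
    public static ServerModule load(String className) {
        try {
            return (ServerModule) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("module not loadable: " + className, e);
        }
    }

    public static void main(String[] args) {
        ServerModule m = load("ModuleLoaderDemo$LoggingModule");
        System.out.println("Loaded module: " + m.name());
    }
}
```

Real servers dress this pattern up (servlet filters, Apache modules, Windows service DLLs), but the core idea of binding compiled extension code into a running daemon is the same.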
The work of programming a web server:

Looking at job vacancy websites for projects, it is plain to see that when a more complex machine is used it will probably have an administrator on standby to handle any troubles. Such people can often install and configure the before-mentioned software and may know some development programming for the machine. There is, however, an interesting problem: not merely the platform and the server software, but many other intricate jobs requiring skills well beyond an administrator. A site smaller than a medium-size business may be able to utilize only a "web designer" and a "web developer", through something alike the before mentioned relating the size of a small business, but larger sites cannot avoid specialists brought in to modify or construct a complete site, with all the violent difference in streams of thought.

That requires many more than one programmer.

 Back end and CGI coder (processing services, custom content management, database querying, email processing, meshing between other software components)
 SQL and Data architect (Database data storage assessment
and SQL query construction)
 UI front end developer (user experience, client
presentation and wiring for the client)
 networking architect (hardware , data rates , data
throughput - usually assessment stage)
 web graphic designer (non programmatic presentation of
the website - usually most is assessment stage)
 editor writer (Grammatical linguist for human readable
presentation - as much during assessment stage)

It is an attempt of rude greed to get all of this list from one person; I wonder if they ever get that. It is not so much a problem in smaller sites, but J2EE with MySQL at around mid-size business levels does not make someone a graphic artist or a networking architect, nor a UI developer or markup user, and to some extent not a desktop application UI developer either. One day I'll copy into this article "the best of... job position requirements advertisements"; some would require a person who has programmed for 30 years and is trained and professional in each requirement, but for some ads I cannot say even an alien knows that much proficiently (some applicants do have enough, but for most, the particular skill sets are spread across around three people, not one, at all times). Here is WHAT I DON'T MEAN: these two (examples) are around a normal size / level capacity job, median or nominal level for a qualified, experienced, trained expert programmer (the second example is within my scope of knowledge to complete left to myself, although at my limit (I'm a self-trained programmer - no real difference in what I know there)):

What to remember is that these job requirements usually ask for intricate, unique levels of experience, so it is unusual to find anyone exactly that proficient; it often requires many years of proof. And it does not stop there for some, which I'd enjoy putting into this page as examples for being as bizarre as "three times as many unique requirements, but fully experienced and proven".

 5+ years of experience on NetIQ Access Manager (NAM) and NetIQ Identity Manager (NIM).
 Single Sign On to applications using NetIQ Access
Manager.
 NetIQ Identity Manager experience covering tier-1
connectors including Oracle eBusiness Suite, Soap, REST,
LDAP, SAL, Active Directory and Exchange, JDBC.
 Integrating the HR application with NIM and Synchronize
user data across all applications.
 Integrating client’s on-premise applications with NIM
and Synchronize user data across all applications.
 Designing the Provisioning model and enabling user Life
Cycle Management.
 Experience with NetIQ eDirectory directory structures
and replication.
 Good Knowledge and experience working with SAML, OpenID
Connect, OAuth II.
 Proficient in Java/J2EE and SQL.
 Good Analytical and Communication skills, Planning and
Co-ordination skills

Also following, a normal-size accreditation example for a UI Engineer:

 7 Year+ Web/UI development experience
 Javascript, Angular JS/Node JS/ Amber JS
 HTML5/CSS3
 J2EE JMS (MQ/Solace), JSP, RMI, Servlets, EJB, REST/SOAP,
Web Services
 Experience delivering in an agile (Scrum, XP) team
environment
 Unix/Linux shell scripting, sh, KSH
 XML, XSLT, XSD, JSON, XPath
 Maven releases
 Git/Stash
 Java 1.6 +
 Spring MVC and Spring Integration, IOC, Security
 Swing, JQuery
 WebSphere
 Oracle SQL PL/SQL, JDBC & Hibernate
 Jenkins/TeamCity builds
 Jira/RTC
 Junit/Mockito/Spock/Gherkin
 Retail financial product knowledge

This last one is where some job requirements START to be a minor overload, but at least this one is mostly inside one type of framework or language system. (The ads I am talking about, ones I've seen, are absurd and twice as intricate and diverse, and imply nearly THE WHOLE LOT IN ONE FROM START TO FINISH, including pages, XML/HTML cross-browser JavaScript UI design, web DB data design, email campaign design, platform OS language shell integration, network hardware integration design .... etc. This one is subtle but not finally out of bounds (for lack of better expression - ...does the agreed job get paid).)

 You will have experience designing Mule components, for example Mule ESB, Anypoint Studio, ETLs, flows, MEL,
message modelling, routing, filtering, database,
exception handling, API management. You will also have
experience using Java, J2EE, Web Services (REST, SOAP),
Spring boot, Spring MVC, Spring 4.0, Hibernate or other
open source technologies; and Jira, Confluence and
Bitbucket. Experience developing for IOS, Android and
hybrid apps would be highly regarded; as would your
experience in monitoring and security tools like New
Relic and Splunk.

On business web page testing, below, you may pick out the intricacy and diversity of a job...

One more gripe about web programming for business purposes: checking client code, services and machines in the laboratory, X-browser and X-platform as it is called, requires a Mac processor (1. Win OS, 2. Mac OS, 3. Linux OS) and an x86 processor (4. Win OS, 5. Mac OS, 6. Linux OS, 7. Solaris OS, 8. FreeBSD), with Opera, Mozilla, Firefox, SeaMonkey, Internet Explorer, Safari, Chrome = 2 processor types, 5 platform OSes, at least 14 different browser checks (minimum)!
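The size of that test matrix is just a cross product of platforms and browsers, minus the combinations that don't exist. A simplified sketch, where the platform and browser lists are example subsets (in reality not every browser runs on every OS, so the real count is smaller than the full product):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: enumerate a cross-browser / cross-platform test matrix as a
// cross product. The lists below are simplified examples; a real matrix
// drops combinations that don't exist (e.g. Safari on Solaris).
public class TestMatrix {
    public static List<String> combinations(String[] platforms, String[] browsers) {
        List<String> combos = new ArrayList<>();
        for (String p : platforms)
            for (String b : browsers)
                combos.add(p + " / " + b);
        return combos;
    }

    public static void main(String[] args) {
        String[] platforms = {"x86 Windows", "x86 Linux", "Mac OS"};
        String[] browsers = {"Firefox", "Chrome", "Safari", "Opera"};
        List<String> combos = combinations(platforms, browsers);
        System.out.println(combos.size() + " checks: " + combos);
    }
}
```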
