OCI Transcript
You can see the regions listed here. In Americas, we have four
regions. In Europe, we have three regions. In Asia, we have
four regions. And then for government, we have two US
government regions and three US DoD regions. Over the next
13 months or so, we are planning to open 20 new regions,
which includes 17 commercial regions and three US
government regions.
There are three main reasons why we are doing this. First, we
really want to give our customers a truly geodistributed
footprint so they can run their application closest to their
users. The second is for meeting regional compliance needs.
And the third is providing customers an option for in-country
disaster recovery solutions. So as planned, 11 of the countries
or jurisdictions served by local cloud regions will have two or
more regions to facilitate in-country or in-jurisdiction
disaster recovery capabilities.
So as you can see here, this is the basis that lets us do bare
metal instances, and engineer systems like Exadata, and plug
them into our environment without making any changes. If
this was not the case, we would need to slap a hypervisor
here on Exadata to make it work. We don't have to do it
because of this capability called off-box network
virtualization. It is a massive enabler for us to deliver the
classes of services and meet our goals around performance
and security.
And finally-- this is not a small one-- you get support through
one org, which matters because most enterprises are going to
run in a hybrid environment. So if you're running something
on premises and something in the cloud, you have one
support model, whether it runs in the cloud or on prem. And
you get support through one channel, one mechanism.
So this graphic here, the visual here, tries to show the main
components of the service. So as we talked about with the
Identity and Access Management service, the main things to
keep in mind are Principals-- you can think of them as groups
of users, or instances-- we'll talk about why instances are
here-- which access a set of resources.
Now, there are two kinds of Principals. One is IAM users,
which are your users who access the cloud environment, and
the other is instances, which we call Instance Principals to
distinguish them from just normal compute instances or
database instances.
And as you can see here, the key is an RSA key pair in PEM
format, with some restrictions on the length, et cetera. In the
OCI console, you copy and paste the contents of the public
key file, and the private key file you keep with the SDK or
with your own client to sign your API requests. Again,
very similar to how some of the other web services operate.
Like we saw with the users, policies also support the security
principle of least privilege. By default, users are not allowed
to perform any actions. Policies cannot be attached to the
users themselves but only to groups. And we'll look into
what that exactly means.
So as you can see here, I'm trying to log into the Oracle Cloud
Infrastructure. So first thing you do here is provide the URL,
console.us-ashburn-1.oraclecloud.com. Now I'm trying to log
into the Ashburn region, and you could log in to Phoenix or
some other region, and your URL might be different.
So as you can see here, first thing I want to show you is there
is something called a home region. This is where you signed
your contract. This is where you probably got started. I have
been here 3 and 1/2 years. So this was our first region-- US
West (Phoenix). This is my home region. And right now, I'm
logging into US East (Ashburn). And you can see all these
different regions.
So to get to-- on the left-hand side, you can see the menu.
And the menu shows the various services we have available
in OCI. So there is core infrastructure, there are databases,
there is data and AI, there are solutions and platform, and
then there are services around governance and administration.
And now this particular user has been created. You can see
Traininguser1. Now, if I log in as this user, I could do
things like multi-factor authentication. You can see auth
tokens here. I can generate an auth token. As we were
talking, let's say we want an auth token for Autonomous Data
Warehouse. I could do this, right? And I have to copy this for
my own records.
But the thing is, I have created a user, and I have not
created anything beyond it, right? So if I log in as this user,
I would not be able to do anything.
So let me first create a password. And this is a first-time, one-
time password. So I'll copy this, and let me go ahead and
create a group here as well. So I go into my Identity menu,
and I Create a Group.
And now what I can do is I can add my user into this group.
So I could-- the user which I just created, Traininguser1, I
could come here, and I could add the user to the group.
Thank you for joining for this lecture. If you have time, please
join the next lecture on IAM policies. Thank you.
2. IAM POLICIES
Hello, everyone. Welcome to this module on IAM policies. My
name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure team.
In the simplest format, the policy syntax looks something like
this-- allow subject to do something-- verb-- on specific
resource types in location, which can be tenancy or
compartment. And you can also add a condition here to make
it more complex. But let's break it down into simpler terms,
and then we can look into each in greater details.
So subject here is your group, your group name. And the verb
here, we have, basically, four types of verb, going all the way
from inspect, read, use, to manage. Inspect basically means
you can list your resources. Read and inspect are very similar
in most cases; read gives you some extra capabilities, like
getting the metadata for the actual resources.
And use, when you write the verb called "use," you have the
ability to read plus the ability to work with existing resources,
like you could update the resource, et cetera, depending on
the type of resource you are trying to use. And then, manage
includes all the permissions for the resource.
So if you're not really sure which verb to use, you can go with
manage, or you could go with use, or you could even, if you
want to, just restrict the access. You could go with something
like inspect or read, depending on your use cases.
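As a sketch, the four verbs map onto policy statements like these (the group and compartment names are made up for illustration):

```text
Allow group Auditors to inspect instances in tenancy
Allow group Auditors to read instances in tenancy
Allow group Developers to use instances in compartment Dev
Allow group Admins to manage instances in compartment Dev
```

Each statement grants strictly more capability than the one above it: inspect lists, read adds metadata, use adds working with existing resources, and manage includes all permissions.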
If that is not what you want to give access to your users, you
could go granular. So if you just want to restrict access to
database, you could say database family, instance family,
virtual network family, and so on, and so forth.
Now, you see in this case, I don't need manage because this
user is just using a subnet. It's not creating a new subnet in
most cases, right? So use is fine, because it doesn't need to
create or delete the subnet or the virtual cloud network. And
for that, you would need these other verbs, right?
So first thing let's do here is write a policy. Now, you can see
there are three policies here. And I'm still in the root
compartment. What does that mean? We'll talk more in the
next module, so don't worry. But policies need to live
somewhere. You need to attach it either to a compartment or
a tenancy. You cannot just leave it hanging somewhere. You
have to attach it.
3. IAM COMPARTMENT
My name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure Team.
If you add more than one user in here, remember that they
are part of the Administrators group, so just by virtue of
being present there, they have full access to all the resources.
So for example here, you say zero compute quotas, and you
have this keyword here, BM-- bare metal-- in tenancy,
meaning you have zeroed out bare metal instances, so nobody
can create a bare metal instance in your tenancy. But then,
you are overriding this for the Phoenix region by setting a
compute quota of five for bare metal instances where
request.region matches-- and we looked into this in the
Advanced IAM policy lecture. If you want to restrict or make
policies more conditional, you could use request and target.
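A hedged sketch of what those quota statements could look like-- the exact quota name for a bare metal shape varies, so treat `bm-standard2-52-count` as illustrative:

```text
zero compute quotas in tenancy
set compute quota bm-standard2-52-count to 5 in tenancy where request.region = 'phx'
```

The first statement zeroes out everything; the second carves out an exception of five bare metal instances for the Phoenix region.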
Now, I likely don't have a virtual network here, but let's see--
I click Create. And this should let me-- bingo, there you go. It
lets me create an instance in the production compartment.
And if I remove that policy, I cannot create an instance in the
production compartment.
Thank you so much for joining this lecture. If you have some
time, please join the next lecture where we look into more
complex scenarios around policy inheritance, and
attachment, and what happens when you move resources
across compartments. Thank you.
4. POLICY INHERITANCE AND ATTACHMENT
Hi, everyone. Welcome to this lecture on policy inheritance
and attachment for compartments. And what happens to the
policies when resources are moved or compartments are
moved? My name is Rohit Rahi, and I'm part of the Oracle
Cloud Infrastructure Team.
So the way you will do that is you will say, allow group
network admins to manage virtual network family in
compartment. And right here, you would provide a path-- B
colon C. So if you do B colon C, and you do this, you write--
you can put the policy right here. If you don't do this, then
A has no idea of where compartment C is, and the system will
give an error saying that this policy could not be attached to
compartment A.
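So, assuming compartment C sits under B, which sits under A, the statement attached in compartment A would look something like this (the group name is made up):

```text
Allow group NetworkAdmins to manage virtual-network-family in compartment B:C
```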
Now, let's get a bit clearer on this. If you write a policy here,
only compartment A admins can modify it. Compartment B
admins and compartment C admins might not be able to
modify it. And network admins can still only manage
networks in compartment C, because your policy says B colon
C, so it's going all the way to compartment C.
So if you do this, and you write a policy right here, then the
policy automatically changes right here, and it gets updated
to dev colon A. And policy is automatically updated, and this
group G1 doesn't lose its permissions.
But in this case, if you had done Ops colon Test colon A, and
you did the move, it would have changed to Ops Dev colon A
as in the previous example because there's a shared ancestor.
And if you had specified this right here, it would have done
that.
So it's, again, a little bit tricky. You just have to make sure
you understand the concept. When a compartment is being
moved, you have to give the whole hierarchy path. Otherwise,
the policies don't get updated.
5. IAM – TAGS
Hi, everyone. Welcome to this module on OCI tags. My name
is Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure
Team. So when it comes to tagging in OCI, there are two
kinds of tags which are supported today. The first category
would be familiar to folks who have worked with other cloud
vendors, and that's the concept of free-form tags. This is a
basic implementation, and it basically consists of a key and a
value.
The whole idea is, when you use defined tags, you have sort
of a schema, and you can secure them with policy. Later on,
I'll show you a slide where we talk about how you can secure
them using policies-- OCI policies.
So let's dive a little deeper into the tag namespace. As we
saw, a tag namespace is nothing but a container for a set of
tag keys with their tag key definitions. Now, what does that
look like? The tag key definition specifies the key and the
kind of values it supports. So in this case, we defined a
namespace called Operations, and we have a key called
Environment. And we could specify what kind of values are
supported for this key, right?
A tag key can have a tag value type of string, or it can have a
list of values from which the user must choose. So you can
also provide an option where the user only gets to choose
from a specific set of values.
You can also use a variable to set the value of a tag. When
you add the tag to a resource, the variable resolves to the
data it represents. For example, if you have a namespace
called Operations and you have a key called CostCenter, for
the value, you could specify something like
iam.principal.name at oci.datetime, where, when you add this
tag to a resource, the variable resolves to your username--
that's the IAM principal name-- and the date-time stamp for
when you added the tag.
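In the console, each variable is wrapped in ${...}, so a tag value along these lines would look roughly like the following-- the Operations.CostCenter key is just illustrative:

```text
Operations.CostCenter = "${iam.principal.name} at ${oci.datetime}"
```

When this tag is applied, the stored value becomes something like "Traininguser1 at 2020-01-15T20:10:05Z".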
So this just makes-- again, gives you the flexibility to use tags
in a variety of ways. So let me just quickly go to-- and I'll be
using this, so let me just complete this quickly. Let me just
quickly go into our console and show a couple of these
things, how you could use them.
So the first thing is, you want to use, let's say, the tag
namespaces. Using just normal free-form tags is pretty
straightforward, so let's use something with the namespaces
we just looked at.
Now, right here-- and this is true of all the resources you
have in OCI-- there is a place which shows the tags. I could
use a free-form tag, or I could pick one from the namespaces
we have in the system which my admins have created.
So the admins have created this-- the admin has created this
namespace for marketing. So pull that out, and right away,
you can see the tag keys here. And because it was a tag key
which had a set of values, specific values, I could pick these
values from here-- North America, EMEA, and APAC.
Now, last thing I want to show here is how you secure these
with policies, because we talked about that, right?
And you need to write policies where they can create and
delete instances, and so on, and so forth, right? And they are
using virtual network families, so just the keyword "use"
here. They are using the block volume, so there's the keyword
"use" here. But they're creating and deleting instances, so
there is the keyword "manage" here.
Now, you could secure your defined tags with policies as well,
right? So for example, if you want these users to use
namespaces, you could just leave it there, or you could make
it conditional and more powerful. So you could say that a
target namespace name-- remember, this is the example of a
complex policy. We talked about request.operation and
target.name. Those are the two keywords you use with
variables.
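The conditional statement just described would read something like this-- the group and namespace names are illustrative:

```text
Allow group Engineers to use tag-namespaces in tenancy where target.tag-namespace.name = 'Operations'
```

This lets members of that group apply tags only from the Operations namespace, and nothing else.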
So again, it gives you a little bit more power, because beyond
just free-form tags, you have defined tags. You can have
some consistency. Users can choose from certain values. And
you can also secure them using policies, so you can control
who can apply the tags.
The most significant bits are the left-most bits-- are the
network prefix which identifies a whole network, or it can be
a subnet. And the least significant bits forms the host
identifier, which specifies a particular interface of a host on
that network. So simply put, an IP address has two
components, as we just talked about-- the network address
and the host address. So you could logically think about your
IP address as network and the host.
So 0 and 255 you cannot use, but other than that, you could
use the other addresses for your host within your network or
your subnetwork. And next slide, we'll look into this in more
detail, but the notation is actually a pretty straightforward
notation. As you know, IP addresses are 32 bits long, with
four octets-- octets meaning groups of 8 bits. So you have 8
bits here, you have 8 bits here, 8, 8.
And you specify the CIDR notation using this slash character
and a decimal number. So you could say something like
this-- 192.168.1.0/24. That slash 24 here is the subnet
mask. Now this is all good in theory-- how does this really
work in practice?
Let's look into a couple of slides, and we'll look into how you
can use this information. So examples of commonly used
netmasks-- subnet masks-- are a class A network, where the
first octet is all 1's; a class B network, where the first two
octets are all 1's; and a class C network, where the first three
octets are all 1's.
And we'll look into a class C network, and we'll further divide
it into a subnetwork, and we'll see how this is all done in
reality. So first thing before we get into that is you will have
to have a grasp of the decimal and the binary notation. So
any time you use IP addresses, you use these decimal
numbers here.
And then 0 is all 0's. Now when we talk about this slash 24
subnet mask, remember, in the previous slide we talked
about this. Slash 24 basically means that you turn all these
network bits to 1.
So the first octet is all 1's, the second octet is all 1's, and the
third octet is all 1's. So if you do the math, 8 plus 8, plus 8,
you get to 24 bits, and that's basically the /24 subnet
mask here. Now what we do is we take the network and then
we take the subnet mask, and then we do a logical AND to get
the network and the host.
And logical AND basically says that if you have two bits, you
basically do a logical AND on them, meaning if the two bits
are 1, you get a 1. In all other cases, it's a 0. 0 and 0 is a 0. 1
and 0 is a 0. 0 and 1 is a 0.
So the idea is now you've got this-- you do a logical AND here,
and this is the range you get. So these are the hosts you can
use if I give you a network of 192.168.1.0. Now this first one
is the network address as we talked about, and this last one
is the broadcast.
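That AND operation is exactly what Python's standard ipaddress module does under the hood, so you can check the numbers from the slide yourself-- a minimal sketch, nothing OCI-specific:

```python
import ipaddress

# A /24 network: the subnet mask 255.255.255.0 is ANDed with any
# address in the block to recover the network portion.
net = ipaddress.ip_network("192.168.1.0/24")

print(net.netmask)            # subnet mask: 255.255.255.0
print(net.network_address)    # first address: the network, 192.168.1.0
print(net.broadcast_address)  # last address: the broadcast, 192.168.1.255
print(net.num_addresses)      # 256 addresses in the block
```

The network and broadcast addresses are the two you cannot assign to hosts, matching the 0-and-255 rule above.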
So how did I get to that-- how did I do it? Let's look into it.
First thing is 192.168.1.0, that's the IP I got. The network is
still the same, so nothing changed here.
So how do I get this number here, dot 31? From where did I
get that? So now the thing to realize is look at the bits we
borrowed here. And by the way, this 224 is coming because
these three bits are 1. So if you do the math-- 128 plus 64,
plus 32, because these three would be all turned to 1-- you
would get to this 224 number. That's how I got that 224.
So now if you see the first three bits I borrowed here, so that
means-- because this is binary-- I can have 2 into 2, into 2--
8 subnetworks, because these are now borrowed for my
subnetwork. I have subnetwork, and now I have host. And
now this piece here-- because, again, these are binary-- five
bits are for my host. So if I multiply, 2 to the power 5, I get
to 32.
So as you can see here now, I took this bigger network, and
because of this 27 subnet mask, I have 1.0/27, I have
1.32/27 here, I have 1.64/27, and so on and so forth. So I
took that big network and now I divided it into eight smaller
networks. How did I get eight? I borrowed three bits here, so I
can have 2 into 2, into 2-- eight subnetworks. Five bits are
from my host. Remember, these are all binary, so I can have
32 hosts.
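The /24-into-/27 split described above can be reproduced with the same ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
# Borrowing 3 bits (prefix 24 -> 27) yields 2**3 = 8 subnets,
# each with 2**5 = 32 addresses.
subnets = list(net.subnets(new_prefix=27))
for s in subnets:
    print(s)  # 192.168.1.0/27, 192.168.1.32/27, ..., 192.168.1.224/27
```

The eight printed blocks match the slide: each subnet starts 32 addresses after the previous one.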
Now this is the basic information you need in order to
operate-- work with the OCI Virtual Cloud Network service,
because everything is sort of in the CIDR notation. So I hope
this is helpful. Thanks for joining this lecture. If you have
time, join the next lecture, where we introduce the Virtual
Cloud Network and some of the core concepts. Thank you.
2. INTRO VCN
[SOUND EFFECT]
Now one thing to keep in mind is that, again, these are not
addressable on the public internet. You can assign these
ranges within a private network. Each address is unique
within that network, but not outside of it. Now one thing to
keep in mind, because this comes up a lot, is that within an
Oracle Cloud Infrastructure VCN, the sizes we support go
from /16 to /30.
Now why don't we go all the way to slash 31, for example?
And the next bullet actually explains that. In VCN, the first
two IP addresses and the last one are reserved. In a typical
network, the first and the last are reserved. The first is
network, the last is broadcast.
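A quick sketch of why /30 is the smallest workable size: with the first two addresses and the last one reserved, a helper like this (hypothetical, not an OCI API) counts what is left for instances:

```python
import ipaddress

def oci_usable_hosts(cidr: str) -> int:
    """Addresses left for instances after the first two (network,
    default gateway) and the last (broadcast) are reserved."""
    return ipaddress.ip_network(cidr).num_addresses - 3

print(oci_usable_hosts("10.0.0.0/30"))  # 1 usable address
print(oci_usable_hosts("10.0.0.0/24"))  # 253 usable addresses
```

A /31 has only 2 addresses, so after the three reservations nothing would be left-- hence /30 is the floor.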
Now let's look into this in a little bit more detail. So first thing
is you see this Oracle Cloud region here. The first regions we
launched started with three availability domains.
So you have AD1 here, you have AD2, and you have AD3. So
we always had three availability domains. Now what it's
showing here is in a region, irrespective whether it has three
ADs or one AD, the VCN is a regional service. And to create a
VCN, you just simply specify a CIDR range-- the
recommendation is to use RFC 1918 addresses.
Now with this, let's quickly jump into the Console and show
you a quick demo of how you create a VCN within Oracle
Cloud Infrastructure. So I'm logged into my OCI Console. And
if you click on this sandwich or burger menu icon on the left-
hand side, you can see different tabs, and you can see the
Networking tab right there. And within Networking, the first
link is Virtual Cloud Networks-- VCN.
And you can see here it's doing a bunch of things for me. And
I'll just click on that and I'll click on Create here. And within
a couple of seconds-- a second or less, you can see my virtual
cloud network is created. And you can see this is in US east,
and this is my first VCN.
If I didn't want that, I could come here and say this is my--
let's call this production VCN in US East. Its compartment is
training. I want to create a virtual cloud network only. I really
don't want to go and create all the subnets and all that,
because I want to control what kind of subnets, what kind of
routing, and which CIDR notations I can use.
In the next module, we'll talk a little bit about public and
private IP addresses, and then we'll spin up an instance in
both the subnets, and we'll get into a little bit more details on
how things work. Thanks for joining this lecture. If you have
some time, join the next lecture where we talk about IPv6
addressing within OCI VCN service. Thank you.
3. IP ADDRESSES
Hi everyone. Welcome to this module on IP Addressing within
the OCI Virtual Cloud Network Service. My name is Rohit
Rahi, and I'm part of the Oracle Cloud Infrastructure team.
Now you are not just restricted to one private IP address for
an instance, you could have-- the first one is the primary
private IP. But you can have additional private IPs, and these
are called secondary private IPs. So you have a primary
private IP and you can have secondary private IPs.
Now how does this work? One question you would ask is, how
do I know which VNIC is primary and which is secondary,
right? So every VM has one primary VNIC, which is created
when you launch the instance, and we'll go and look into this
in the demo.
If you use managed Kubernetes, you get a public IP which
you can view, but you cannot choose or edit. In some cases,
you cannot even view them-- you definitely cannot choose or
edit them, but you cannot even view them. A good example is
something called an Internet Gateway. We'll talk about this in
the next module. You cannot see what public IP it has, and so
on and so forth. There are other services where you get a
public IP, but you cannot even view it.
Now what does the route table consist of? It consists of a set
of route rules. Each rule specifies a destination CIDR block,
and it specifies the route target-- the next hop for the traffic
that matches that CIDR. So what exactly do we mean?
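As a concrete sketch, a route table for a subnet with both internet access and an on-prem connection might carry rules like these (the targets are whatever gateways you have created; the on-prem CIDR is made up):

```text
Destination CIDR    Route Target
0.0.0.0/0           Internet Gateway          # default route for internet-bound traffic
172.16.0.0/12       Dynamic Routing Gateway   # traffic headed for the on-prem network
```

Traffic is matched against the most specific destination CIDR first, and anything that matches no rule stays within the VCN.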
The important thing to keep in mind is you can have more
than one NAT Gateway on a VCN, though a given subnet can
route traffic to only a single NAT Gateway. So this is a little
different than the Internet Gateway.
So there are two kinds of labels which are available today. For
example, if you're going to use Object Storage, you could
specify Object Storage in the OCI region here, or you could
specify all services. In the latter case, if in future you have
other services you want to access, you could actually do that,
because you have access to all the OCI services through the
link to your service gateway.
The last design pattern is around use cases where you have a
private subnet here, it might be a database, but now instead
of going to the internet, you are going to your own customer
data centers. So this can be for-- let's say you have your DNS
running on-prem and you want to access that through your
database. Something in the cloud wants to access it or you
have your on-prem environment from where you want to
migrate data. So you need to connect to that.
Now the DRG is a little bit different than the other gateways
we have looked at. The DRG is a standalone object. You must
attach it to a VCN after you create the DRG, and VCN and
DRG have a one-to-one relationship, meaning a single VCN
can only have one DRG, and one DRG can be attached to a
single VCN at a time.
5. VCN DEMO1
This is part one of a two-part demo. In this one, we create a
public subnet, then we launch a compute instance in there,
install a web server on it, and then communicate to the web
using an internet gateway. So it's a pretty straightforward
demo. Hopefully it gives you a sense of how things work with
the OCI Virtual Cloud Network Service.
Now right here, I'll choose the default route table. We just
discussed what route tables do-- they have routing rules for
routing the packets out of the VCN. And for subnet access, I would
choose a public subnet because I'm going to create a host in
here, compute host instance, and then I'm going to run a web
server in it. All right, so public is fine.
And then down below, I'm going to choose a default security
list. Security lists are nothing but virtual firewalls which
determine what kind of traffic can flow in and out of the
subnets and the VCN. For this discussion, we have not really
gone and discussed what security lists do and what they look
like, but for now, let's just go in and create this particular
subnet.
There are some other options, I'm not going to touch them.
And right here, I have to paste SSH keys. Now I already have
my SSH keys, which I am using. So I just paste it there. And
then I can click Create here. And now my instance would be
created. It would take a few seconds and my instance would
be up and running.
Thank you for watching this demo. This is part one. In part
two, we will make it a little bit more advanced: we create a
bastion host, then we create a private subnet, install a
database server, and using a NAT gateway we try to get some
patches to the database server. Thank you.
6. VCN DEMO2
[INAUDIBLE] look into installing a database on an instance in
a private subnet, and using a NAT gateway, getting that
instance some patches from the internet.
had the setup done in our previous demo, we had a web
server running in a public subnet, subnet a. In this particular
demo, we are going to create a Bastion host, and we are going
to cheat a little bit because it's just a demo.
And then below here, I will choose a private security list. And
there you go, I just created my subnet b, a private subnet, to
host my database instance. Now I will go into the compute
console, and I will create a database instance here. So I
would call this my db, or database.
So it's saying do not assign a public IP. That's great. All these
options, I'm not going to touch. I need an SSH key here. I
think I have it here in my-- let me just copy it. [INAUDIBLE]
just make sure that I have the whole key copied.
So right here, you can see that I'm using an SSH proxy
command to go from my Bastion host-- this Bastion host,
public IP 129.213.120.162-- to my database instance. And
right here, I'm using the private IP 10.0.2.2. I click yes, and
now you can see that I'm right inside my database instance.
7. PEERING
Hello, everyone. Welcome to this module on peering. My name
is Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure
team.
And the other thing you would do here-- which is not on the
slide because we have not covered it yet-- is you also need to
open the virtual firewalls so you can let the traffic into this
VCN from that VCN, and vice versa. There are a couple of things you
need to understand-- it comes up in the exams also. The first
one is the two VCNs in the peering relationship cannot have
overlapping CIDRs.
And it's a public subnet, so I can SSH into it, but this one is
a private subnet. And this VCN and this particular subnet I
have already created, and I've already instantiated this
particular instance so we don't spend time doing these
things, which seem pretty logical-- the way the setup should
be.
So let me jump to the Console. And right here, you can see
DemoVCN is the one we just talked about. So this is 10/16,
and it has subnet A and subnet B as we were discussing in
the slides [INAUDIBLE] subnet A.
This is the web server. I'm going to SSH into the web server.
And then there is no local peering gateway created here, so
you can see there's nothing here. And if I go jump quickly to
the other VCN-- I just created DemoVCN2-- address space of
192.168.1.0/24.
So let me just quickly SSH into this instance. And this is the
web server we have been using for the other demos. Now I
also have my instance running in this private subnet here. So
I already have it created.
You can see that it runs in this private subnet, and this is
part of the other VCN with this 192.168.1.0/24 address
space. So I picked this private IP here. You can see it doesn't
have a public IP.
So this is the path for packets destined for this IP-- I want
them to go through the local peering gateway, which is
straightforward. So I click Add Routes here, and then pretty
much, I am done with this particular VCN. So let me go to the
other VCN and do the same kind of things.
So first things first, I need to open the security list here. For
this one, I need to open it to traffic coming from the other
VCN. So this is the other VCN's address space. I could
actually limit it just to the subnet-- I could do that-- but right
now, just for illustration purposes, let me just open it to the
whole VCN.
And you will see it says pending. And within a few seconds,
this will change to peered. If I go back to my other DemoVCN,
you can see here that it says connected to a peer. And now if
I come here, bingo.
8. SECURITY VCN
To keep the picture clean, I don't have the ADs shown here,
but these are definitely running in ADs-- whether it's a
single-AD region or a multi-AD region. So what is a security list? A
security list is a common set of firewall rules associated with
a subnet and applied to all instances launched inside a
subnet.
Second thing is, if you see the rules themselves, all three
security lists have the same rules-- they could be different
rules. It basically says ingress, meaning incoming traffic, I'm
allowing all traffic to come in at port 80. And egress, meaning
outgoing traffic, I'm only allowing traffic to this particular
subnet on port 1521.
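Spelled out, the two rules just described would read roughly like this (the DB subnet CIDR is made up for illustration):

```text
Ingress: source 0.0.0.0/0,        TCP, destination port 80    # allow web traffic in from anywhere
Egress:  destination 10.0.2.0/24, TCP, destination port 1521  # allow outbound only to the DB subnet
```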
Contrast this with security list rules, where you can specify
only a CIDR-- and you can, of course, specify a service for
both of them in case you are going through something like a
service gateway, but otherwise you typically would always go
with CIDR. In the case of an NSG, you can specify another
NSG as the source or destination. So it just makes life a little
bit easier, and it lets you support more complex scenarios.
Now you could use security lists alone, like we have done in
the demos. You could use network security groups alone, or
you could use both together, as you can see in this particular
picture here. So it has a couple of security lists, and it has a
couple of network security groups.
If you want security rules that you want to enforce for VNICs
in a VCN, all instances in a VCN, the easiest solution is to
put the rules in one security list, and then associate that
security list with all subnets in the VCN. Pretty
straightforward. We have done this in a couple of demos. If
you remember, we had a demo where we had a web host
instance and a bastion instance. And we said just for
simplicity of the demo, we wanted the same security rules for
both of those.
In real cases, you would separate them out, but in our demo
we did that, and we used the same security list for both of
those instances. Now if you choose to use both security lists
and network security groups-- this is very important-- the set
of rules that apply to a given VNIC is the union of these
items. It's very important-- it gets confusing.
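The union behavior just described can be sketched like this (the rule tuples are a made-up representation for illustration, not the OCI API):

```python
# Sketch: the effective rules for a VNIC are the UNION of the rules in its
# subnet's security lists and in its attached NSGs (hypothetical rule tuples).
def effective_rules(security_lists, nsgs):
    """Combine rules from all sources; traffic is allowed if ANY rule permits it."""
    rules = set()
    for sl in security_lists:
        rules |= set(sl)
    for nsg in nsgs:
        rules |= set(nsg)
    return rules

subnet_sl = [("ingress", "0.0.0.0/0", 80)]      # allow HTTP from anywhere
app_nsg   = [("egress", "10.0.2.0/24", 1521)]   # allow DB traffic out
allowed = effective_rules([subnet_sl], [app_nsg])
print(allowed)
```

Because the result is a union, you cannot use one mechanism to subtract a rule granted by the other-- which is why mixing the two gets confusing.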
Because the rule is stateful, if you go into the browser and
put in the IP address, you can see a page come up. You're
sending the traffic in and you're receiving the traffic out.
You don't have to write an egress rule specifically.
With stateless rules, you have to write this rule explicitly,
and you will have to say that my destination CIDR is going to
be any IP. My source port is now 80, because that's where my
traffic is going out from, and the destination port can be
anything. If you don't write this rule, you will basically have
traffic come in, but traffic will not go out.
So if you do that, let's say, with a web server, you put the
address in there on your browser and you would not get a
response page back. Basically, what is happening here is
there is a mechanism called connection tracking. And in case
of stateless, you basically are saying that we don't want
connection tracking.
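This difference can be modeled in a few lines (the rule shapes here are made up for illustration-- the point is only that stateless rules skip connection tracking):

```python
# Sketch of connection tracking: a stateful ingress rule implicitly allows
# the response packet; a stateless rule does not, so the reply must match
# an explicit egress rule.
def response_allowed(stateful, ingress_matched, egress_rules, response_pkt):
    if stateful and ingress_matched:
        return True          # connection tracking lets the reply out
    # stateless: the reply must match an explicit egress rule
    return any(rule == response_pkt["dst_port"] for rule in egress_rules)

# A web request arrives on port 80; the reply goes back out to the client.
reply = {"dst_port": "any"}  # client's ephemeral port, could be anything
print(response_allowed(True, True, [], reply))    # stateful: reply allowed
print(response_allowed(False, True, [], reply))   # stateless, no egress rule
```

With stateful rules the first call returns True and the second returns False, which is exactly the "page never comes back" symptom described above.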
If those two things were not there, then, of course, I could not
access it. And third, it also has a public IP. So again, if it
didn't have a public IP, there was no way I could bring it up
in my browser. Really straightforward. Now if I go into my
DemoVCN--
[AUDIO OUT]
Now in case of security list, this option doesn't show up. You
cannot use another security list as the source or the
destination. So I pick CIDR. That's fine. I could say use all IP
addresses.
9. DNS
For subnet, similarly, you have more options. You can decide
what the subnet DNS label looks like. And the VCN DNS label
comes here because a subnet is part of a VCN. And then of
course, this part, you again cannot delete. For a host, the
fully qualified domain name is host name, then subnet name,
then VCN name, then .oraclevcn.com.
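Putting those pieces together is just string concatenation; a sketch with made-up labels:

```python
# Sketch: how the VCN DNS resolver composes an instance's fully qualified
# domain name from the three labels (labels here are made-up examples).
def instance_fqdn(hostname, subnet_dns_label, vcn_dns_label):
    return f"{hostname}.{subnet_dns_label}.{vcn_dns_label}.oraclevcn.com"

print(instance_fqdn("webhost", "subneta", "demovcn"))
# webhost.subneta.demovcn.oraclevcn.com
```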
Right here, you can say use DNS host names in this VCN. This
is required for instance hostname assignment if you plan to
use VCN DNS or a third-party DNS. This cannot be changed
later on.
Right here, you can see there is a DNS label, and it's coming
from the name I specified, but you could change this. So you
could say this is mydns, and now you will see that the DNS
name which we are using is mydns.oraclevcn.com. So I
created this particular VCN. And right here, if I go into the
DHCP options, you can see that-- because I'm not using a
custom resolver-- it is using the internet and VCN resolver.
Now I can change that. If I click here Edit, I could specify like
an internet and VCN resolver, or I could do a custom resolver.
So if I want to do a custom resolver-- something like this-- I
could do it here, and I could save this change. Now this is-- of
course, I'm not going to use this resolver.
Thank you for joining this lecture. In the next module we'll
bring together all the concepts we have learned in the VCN
and conclude the lecture series. Thank you.
Route tables define what can be routed out of the VCN. You
don't need a local rule, because the traffic is already allowed
inside the VCN. But it basically decides what kind of traffic
can be routed out of the VCN. Private subnets are
recommended to have individual route tables to control the
flow of traffic outside the VCN.
And not just their own route tables, but also security lists. So
here, things are much cleaner. You don't mix and match
private and public subnets. All hosts within a VCN can route
to all other hosts in a VCN. There is no local route rule
required.
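The routing behavior described here-- the implicit local route plus most-specific matching in the route table-- can be sketched like this (the CIDRs and targets are made-up examples, not a real configuration):

```python
import ipaddress

# Sketch of route selection: intra-VCN traffic needs no rule; everything
# else uses the most specific (longest-prefix) match in the route table.
def next_hop(dst, vcn_cidr, route_table):
    ip = ipaddress.ip_address(dst)
    if ip in ipaddress.ip_network(vcn_cidr):
        return "local"                    # implicit local route, cannot be removed
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in route_table
               if ip in ipaddress.ip_network(cidr)]
    if not matches:
        return None                       # no route: traffic is dropped
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = [("0.0.0.0/0", "internet-gateway"), ("172.16.0.0/12", "drg")]
print(next_hop("10.0.1.5", "10.0.0.0/16", routes))    # local
print(next_hop("172.16.4.9", "10.0.0.0/16", routes))  # drg
print(next_hop("8.8.8.8", "10.0.0.0/16", routes))     # internet-gateway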
So you can test this out. And this is going back to the
whitelisting model we were talking about earlier. Final thing--
we looked into this in the previous module-- Oracle
recommends using network security groups instead of
security lists, because network security groups let you
separate the VCN subnet architecture from your application
security requirement.
So what do these route tables look like? For the frontend route
table, basically, we are allowing traffic from all packets--
from all addresses-- to go to the internet gateway. We've looked
into this in the previous modules-- pretty straightforward.
Now one thing you will notice here, I'm still using security
list, but you could have used a network security group here.
There is no requirement which says you just have to use a
security list. Now in the case of the backend, again, I'm
saying traffic going to all IP addresses-- any IP address can go
to a NAT, can go to a service gateway, or even can go to a
DRG.
But in this case, I'm saying I'm locking out all the traffic-- I
don't want any traffic to go from here. I just want traffic to go
to the frontend. And because it's all stateful, if my packets
are coming in at port 1521, they're also going out from 1521,
so I don't have to write a separate egress rule. If this was
stateless, I would have to do that.
Really straightforward setup. We have seen this in the
previous demos. So hopefully, it gives you, again, a recap of
some of the concepts we have gone through.
Well, with that, thank you so much for joining this lecture
series on virtual cloud network. Virtual cloud network is one
of the core concepts you'll need to understand in cloud, and
of course, for OCI. I hope this was useful. If you have time,
please join me in the next lecture series on compute. Thank
you.
And then, the second option is you can run your own
software VPN. If you have a Linux VM, you could install your
own software like [INAUDIBLE], and you could run it
yourself. But remember, the first option here, the OCI-
managed VPN service, is offered for free. It's a standard VPN
between two different sites, one site being your Oracle
Cloud environment, the other side being your on-premises
environment.
Now, with IPSec, there are two modes. One is called transport
mode, where IPSec encrypts and authenticates only the
actual payload of the packet, and the header information
stays intact. The other mode is called tunnel mode, which we
talked about here, where IPSec encrypts and authenticates
the entire packet. After encryption, the packet is then
encapsulated to form a new IP packet that has different
header information.
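As a toy illustration of the two modes-- not real packet formats or a real cipher, just the structural difference described above:

```python
# Toy model of the two IPSec modes (made-up packet representation).
def encrypt(data):
    return f"enc({data})"     # stand-in for the real cipher

def transport_mode(packet):
    # Encrypt only the payload; the original IP header stays intact.
    return {"header": packet["header"], "body": encrypt(packet["body"])}

def tunnel_mode(packet, gateway_header):
    # Encrypt the ENTIRE original packet, then wrap it in a new IP header.
    return {"header": gateway_header, "body": encrypt(str(packet))}

pkt = {"header": "src=10.0.1.5,dst=172.16.4.9", "body": "hello"}
print(transport_mode(pkt)["header"])   # original endpoints still visible
print(tunnel_mode(pkt, "src=vpnA,dst=vpnB")["header"])  # only gateways visible
```

In tunnel mode, the hypothetical observer on the path sees only the two VPN gateway addresses, which is why site-to-site VPNs use it.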
Now, how does this whole thing work? Well, let me just run
this animation here. So first thing here is you have your on-
prem environment, right? And we have this particular
address space. In fact, I'll be using the same address space in
my demo.
And you have a long list of supported devices here, which are
available on our documentation page along with the
configuration for those devices, right? And so there's a good
chance whatever device you are running today would be
supported by the OCI VPN Connect service.
Now, how does this whole thing work? Well, there are a bunch of
steps here, and it's actually rather straightforward. But let
me just quickly run through the steps here, right? First thing
you do is create a virtual cloud network. Pretty
straightforward. We have seen that in the previous lecture
series on VCN.
Then you have to update your route table to send the traffic to
the DRG. Then you create a CPE object, which is basically a
virtual representation of your on-prem router, and you would
get its IP address from the router you're running here, right?
So whatever router you're running here will have a public IP
address. And there are things like what to do if your CPE is
behind a NAT device, and all that. It gets into more complex
details which we will cover in our level 200 module. But the
CPE device, basically, will have a public IP address.
So you create the CPE object in OCI, and you add the public IP
address. Then, on the DRG, you create your IPSec tunnels--
between the CPE and the DRG. And you could choose to use a
static route, or you could choose to use a BGP route, right? So
you could decide what kind of routing you want. Now you can
see there's a static route here.
So thank you for joining this lecture. If you have time, please
join me in the next lecture, where we'll talk about the VPN
Connect demo. Thank you.
And I'm going to use the default route table, and I'm going to
use the default security list, right? That's fine. And because
I'm going to access it from my on-prem environment, let me
make it a private subnet and click Create here. And now, it's
created, right? So this plain, simple VCN has one subnet, and
I'm using the default route table, the default security list.
So let me just grab that and put that public IP here, because
I would need that, and create a CPE. And now my virtual
representation of that network device is created in OCI right
here. If I check my DRG, it is up and running. So it took less
than a minute.
And now it's asking for a static route. I could have used
dynamic routing as well. So right now, just to keep the demo
simple, I'll use my static route here. But if you click on
Advanced Options here, you can see that you could actually
pick BGP routing as well, right? It's a new feature.
And you could pick your IKE version-- so IKE v1 or v2. And
again, some of these complex things we talk about in the
level 200 module. But I could have chosen dynamic routing
here as well, right? I'm going with static. That's fine.
Let me just make sure that this is the static route I have. This
is my AWS VPC site here, 10.0.0.0/16, right? That's my
static route, right? And I click here, and my IPSec
connections would now be created.
Then, you can see that the tunnel has a public IP address
here, right? 139.213.7.49 and 129.213.6.52-- so two
different public IP addresses. And the IPSec status is down,
of course, because we have not set up the LibreSWAN end,
and we have not done all the configuration, and it's in the
provisioning stage right now, right?
So the first file which we are going to use is the IPSec config
file. And if you can see here, it has certain parameters which
I was testing earlier. So there is a connection here we're
calling connection OCI1, and you can see some parameters.
So let's go ahead and change this file as well. The first thing
I'm going to do is I'm going to change my IP here. So it's 49,
and then right here is this shared secret. It's going to be the
shared secret coming from my tunnel. Let me just delete this
whole thing and go back to my tunnel. Grab this whole thing
here. Copy it.
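For reference, a LibreSWAN connection block of the kind being edited here typically looks something like this-- every value below is a placeholder, not the actual IPs or secret from the demo:

```
conn oci-tunnel-1
    left=%defaultroute
    leftid=<CPE public IP>
    right=<OCI VPN headend IP>
    authby=secret              # pre-shared key, kept in ipsec.secrets
    leftsubnet=<on-prem CIDR>
    rightsubnet=<VCN CIDR>
    auto=start
```

The shared secret copied from the tunnel page goes into the separate ipsec.secrets file, which is the second file being edited in this demo.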
All right, so that took a minute or so. And as you can see
now, my tunnel IPSec status is up, which means that my
tunnel is up and running. Now, if I had an instance running
inside that subnet, I could have pinged my LibreSWAN VM
from it, and vice versa, to show you the connectivity. But you
can see the IPSec status up here, and basically, what it means
is the tunnel has been established between the on-premises
CPE device, which is the LibreSWAN VM running in AWS, and
my OCI DRG-- the two tunnels I have, right?
And right here, you can see some of the metrics. I can do this
in less than a minute. And there is no data here, but you can
see the tunnel state, packets with errors, et cetera, right?
3. FAST CONNECT
[ORACLE UNIVERSITY]
So the idea here is you can connect to OCI directly or via pre-
integrated network partners. Think about this as having your
own high-occupancy vehicle lane in the internet. So your
traffic doesn't go through the normal internet, which can be
unreliable because internet is a collection of networks which
are all peered together.
There are two different ways you can use FastConnect. One is
called Private Peering, where you extend your on-premises
data centers into Oracle by using private connectivity, and
you access services running in a virtual cloud network. Or,
the other model is called public peering, where you can
connect to your on-prem environment with some of the OCI
public services such as object storage. And we'll look into
these in more detail in subsequent slides. There is no charge
for inbound/outbound data transfer, and as you can imagine,
FastConnect uses BGP protocol.
You can have a virtual circuit for this site CIDR. You can
have another for this CIDR, right? So you could have multiple
virtual circuits for reaching different parts of your
organization, or you could just do multiple virtual circuits
just for redundancy purposes. And like we said, FastConnect
uses BGP. And one more point, which is missing a bullet here:
it can use layer 2 or layer 3 connectivity.
So these are the two models which are supported, and this
also comes up in exams. The question might be, if I'm using
public peering, which statement is not true? And they will give
you four options. And you have to make sure that, in public
peering, a DRG is not used, right? So just be aware of that--
the DRG is used only in the case of private peering.
All right, so with that, let's quickly jump onto the console and
show a quick demo. Now, for the demo, one thing which I
want to call out is in the demo, what I'm going to do is I'm
going to show you the connectivity. Sorry, let me just get the
slide back here.
In the demo, I'm going to show you the connectivity from here
to the provider network, to OCI. This connectivity, from your
existing on-premises environment to the provider edge, I'm
assuming that you already have this running.
4. FASTCONNECT DEMO
Hello, everyone. Welcome to a quick demo of the FastConnect
service. So let me jump to the console.
So for the first one, basically, it means that you are co-
locating with Oracle in a FastConnect location, or you're
using a third-party provider, right? And if you click on that,
you can see things like cross-connect groups, cross-connects,
link aggregation groups, et cetera. If you are interested in
more details on how this works, please check out our level
200 module, where we cover this and also show you a demo.
So right now, I have the DRG created, so I'll just use that. If
you don't have a DRG, you need to create one for this
purpose, right? And then I need to choose my provision
bandwidth. I'm going to choose 1 Gbps. Some providers
would go 1, 2, 3. Megaport supports 1 and 10, so I'm going to
use 1.
The BGP Auth, I'm going to leave it blank. And then, for my
Override MCR ASN, this is the same as the Customer ASN
Number that we had earlier, right? 64556. So I'll click Add
here. And then, I'm going to click Next and add this VXC--
virtual circuit, right?
And as you can see here, this virtual circuit, I need to order,
so I click Order here. And now, as I order this service, what
Megaport is trying to do is provision this virtual circuit on
my behalf to my Oracle Cloud Infrastructure location. I
specified the US East (Ashburn) region.
And now it's deploying, and this would take a few minutes. It
typically takes anywhere from 5 to 15 minutes, sometimes
even shorter than that. And you can see here, it's in the
process of deploying. As soon as it's deployed, this will turn
green. And when I come back to my console, I can see that
my lifecycle state would change to provisioning, and my BGP
state, if the BGP information we have provided is consistent
and correct, would change from Down to Up.
So let me just pause the video here. It's going to take a few
minutes. And I'll come back, and I'll show you these things
working in action.
All right, so that took a few minutes. Let's come back to the
Megaport portal. And as we can see here, we have these
Megaport cloud routers, and this is the circuit we just
provisioned, DemoFC, right? And if I come here, I can see
some of the details-- the BGP connection we added, and so
on.
LOAD BALANCER
1. LOAD BALANCING INTRO
Here you can see the hamburger menu. And if I click on it, I
can see the various services. Right now, I'm in the US East region.
Right here in Networking, I can bring up a load balancer from
the link here.
And right now, I will choose the web server 1 and the web
server 2 backends which are running in the same VCN in
different subnets. But as you can see here, bastion, database,
web, my auto scaling instance pool all show up here. These
are not in the same VCN-- they exist in some other VCN.
But the reason they all show up is, like I said, I could have a
load balancer running in one VCN, and I could have my
compute instances running in an altogether different VCN--
as long as the security lists, network security groups, and
route tables are configured properly. So I choose web server
AD1, web server AD2, and add my selected backends.
Now right here, it's asking me to choose my health check
policy. I will go with TCP, because I'm just making a TCP
connection and getting a response back. With HTTP, I'd have
to configure my URL and all those things. But since I'm just
showing a quick demo, TCP is fine, port 80 is fine, and I can
change some of these options like the interval, et cetera.
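A TCP health check of this kind boils down to a connect attempt on the backend port, which can be sketched like this (the throwaway local listener just stands in for a backend web server):

```python
import socket

# Sketch of a TCP health check: try to open a connection to the backend
# port; if the connect succeeds, the backend is considered healthy.
def tcp_health_check(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # OS picks a free port
server.listen(1)
port = server.getsockname()[1]
print(tcp_health_check("127.0.0.1", port))   # something is listening
server.close()
print(tcp_health_check("127.0.0.1", port))   # nothing listening anymore
```

An HTTP health check would additionally send a request to the configured URL and inspect the status code, which is why it needs more configuration.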
And within a minute or so, you will see that I will get a public
IP address, and I should be able to bring that up in the
browser and hit the two web servers in round-robin
fashion. So let me just pause the video here for 15 seconds,
and the load balancer will come up, and we'll use the public
IP address. So it looks like my load balancer is up and
running.
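The round-robin behavior we expect to see can be sketched in a couple of lines (the backend names are made up):

```python
from itertools import cycle

# Sketch of round-robin distribution across the demo's two backends:
# each incoming request goes to the next backend in the cycle.
backends = cycle(["web-server-1", "web-server-2"])

served = [next(backends) for _ in range(6)]   # six incoming requests
print(served)
# ['web-server-1', 'web-server-2', 'web-server-1',
#  'web-server-2', 'web-server-1', 'web-server-2']
```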
And I can click here, and I can see the public IP address
here-- it's available. So if I go ahead and bring this up in my
browser, you can see that my load balancer is working, and
it's sending the traffic in a round-robin fashion. So I could
click web server 1 and I could click web server 2, and I can
see the traffic is coming here. Well, that was a quick demo of
the OCI load balancer service in action. In the next module,
we'll look into a private load balancer. Thank you.
But I can also do things like set the drain state. And if I click
here, basically, draining means that I disable new
connections-- the load balancer stops forwarding new TCP
connections and new [INAUDIBLE] requests to the backend
server. So this is good for scenarios where I want to do
maintenance and want to take a backend out of the rotation
of the backends I have.
Let's see how this works in action as we saw with the public
load balancer. So I'm using regional subnets here. So I have a
[INAUDIBLE]. I'm only showing two just to keep the picture a
little cleaner.
And then like the public load balancer, you could send the
traffic to the backends-- whichever ADs exist. Now in case of
AD-specific subnets, there is a change here where my active
and my failover are both in the same AD [INAUDIBLE]. So
yes, we still have two copies, but they're both running in the
same AD.
COMPUTE
1. COMPUTE INTRO
Hello, welcome to this module on basics of the OCI compute
service. My name is Rohit Rahi, and I'm part of the Oracle
Cloud Infrastructure Team. In this module, we'll look at the
basics of the OCI compute service. So before we get into lots
of details, let's look at the various form factors that the
service supports today.
So what are the use cases for bare metal? Well, any time you
have the highest security requirements or the highest
scalability requirement or the performance requirement, you
would use a bare metal machine. So the first thing is if you
have performance-intensive apps, probably you would go with
bare metal. For workloads which are not virtualized-- and
there are still lots of workloads like those-- you would, of
course, go to bare metal. Workloads that require a specific
hypervisor-- so you want to install your own hypervisor, do
certain things-- you would go with a bare metal machine. And
then also, in cases where you bring your own license-- and
there are specific examples-- you would use a bare metal
machine.
So these are four predominant use cases, but there are other
use cases as well where you would use a bare metal offering.
Now, these are the different shapes which are available today
in Oracle Cloud Infrastructure. And the best place to check
this, because this information keeps changing all the time, is
on the documentation site. But you can see some shapes
here, starting with standard shapes, which have only block
storage. You have DenseIO shapes, which have local storage--
you can see the larger local storage amounts here.
You have shapes where we support AMD processors. AMD
EPYC processors, those are denoted by E here. So we have
those shapes. We have HPC shapes. We have a bunch of GPU
shapes. And these are gen one shapes. And again, as I said,
we keep launching new families and instances all the time. So
the best place to check these is the documentation pages.
Now also note here we have the various OCPUs listed, and
the memory, and the network bandwidth. You can see some
of these instances have bandwidth going to 50 Gbps, the
number of virtual NICs you can use, et cetera.
And then there are various scenarios, like big data, et cetera,
where you can run the AMD instances. And you can see some
numbers here for different scenarios-- big data, HPC,
computational fluid dynamics-- where you can test some of
these numbers and see that it really is, in fact, a price-
performance win.
2. COMPUTE DEMO
In this case, it's a demo, so I'm just going to skip it, and my
keys are deleted. If I go to my directory here, I can see my
private and public keys, right? So id_rsa is my private key,
and id_rsa.pub is my public key.
So let me just get the public portion of the key and just copy
this one. And I need to provide this value right here in my
SSH window, the public portion of my SSH keys, right? And
then there are a bunch of advanced options here. We're just
going to skip all these. We'll talk about these subsequently in
other modules, right? And then, I'll click on Create Instance.
Now, on this host, I can go ahead and create VMs now, right?
This is-- my host is dedicated to me, but I get a chance to
create VMs. See, if I click here, same experience as before.
Default name is fine. Oracle Linux is fine.
Thank you for watching this demo. If you have time, please
join the next module on the Compute service. Thank you.
3. IMAGES
So custom images only care about your boot disk, not about
your block volumes. A custom image has some limitations. It
cannot exceed 300 gigs, and there are some limitations
around Windows custom images.
So these are the three different modes you could use when
you spin up your instances and you create your custom
images. And again, you can find more details here. There is a
white paper, and there are more details around that.
Now, the way this process works is you have your on-prem
environment. You bring the image in a qcow2 format. Like we
said, the import/export uses object storage, so you store the
image here, and from there, you can import it as a custom
image. Or you could do the reverse: from an instance, you
create a custom image that you could export to object
storage.
Now, when you do that, of course, you have to comply with all
the licensing requirements. And this is a topic we will discuss
in greater details in the level 200 module on compute.
So with that, let me just quickly jump to the console and
show you a quick demo on custom images. So if I go back to
my Compute console, you can see a bunch of instances we
have been running and terminating. It's a good idea to
terminate instances which you are not currently using.
I could let Oracle choose, and if you click on this page, you
can see the various options which are supported, right? So
you can see the difference between paravirtualized and SR-
IOV, which family supports which shape, et cetera, et cetera,
right? Or I could choose it here, or let Oracle decide.
But right now, let me just pick this bucket. Actually, I have a
bucket called Pictures, so I'll just use that right now. And I'll
click Save and then Export Image. And now, what it would do
is put this image in the bucket and give me a URL which I
could share with other groups, and they could use that to
import this image and create instances out of it.
4. BOOT VOLUME
Hi, everyone. Welcome to this module on Boot Volume. My
name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure Team.
Now, with OCI, when you create an instance, you can specify
a custom boot volume size. So for Linux, the default is 46.6
gigs, but as you can see in this picture, we could go to 100
gigs, right? For Windows, the default is 256, but you could go
bigger. You could go all the way up to 32 terabytes, because
that's the maximum size supported by a block volume.
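The size rules just described can be sketched like this, using the defaults quoted in the lecture (the exact defaults and limits may differ by image, so treat the numbers as illustrative):

```python
# Sketch of custom boot volume sizing: each platform has a default,
# and a custom size must fall between the default and the 32 TB
# block volume maximum (defaults per the lecture; illustrative only).
MAX_GB = 32 * 1024
DEFAULT_GB = {"linux": 46.6, "windows": 256}

def boot_volume_size(platform, requested_gb=None):
    default = DEFAULT_GB[platform]
    if requested_gb is None:
        return default                     # no custom size: use the default
    if not (default <= requested_gb <= MAX_GB):
        raise ValueError("custom size must be between the default and 32 TB")
    return requested_gb

print(boot_volume_size("linux"))           # 46.6
print(boot_volume_size("linux", 100))      # 100
print(boot_volume_size("windows", 500))    # 500
```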
And then, the second one is, if you take a backup while the
instance is running, it creates a crash-consistent backup. So
it's always a good practice to shut down your instance and
then take a backup, right? Because if you're running
SharePoint, or Exchange, or something like that, it's not a
good idea to take a backup while your application is running.
OK, it looks like it's still going on. So right here, you can see
the boot volumes, right? And if I scroll here, I can see a
bunch of the boot volumes which I have created in the [?
second ?] right?
Right here, you can see the detached instance, and you can
see that Detach From Instance is grayed out. I cannot detach
it because an instance is still running. And it says it's in a
running state. And I could do things like in-transit encryption
and a bunch of other things, right?
There is also Boot Volume Clone. And I could come here, and
I could do the clone here, right? The thing is, clone and
backup are mutually exclusive, meaning only one can run at
a time. So I could not run both of them at the same time.
Thank you for joining this lecture. If you have time, please
join the next lecture, where we talk about instance pools,
auto-scaling configuration, et cetera. Thank you.
5. AUTOSCALING
Why would you do this? Well, you would do this because the
config basically becomes a template, and you could spin up
multiple instances using that template. You could put them
in different availability domains if you have a multi-AD
region. You can manage all of them together-- you could stop
them, start them, terminate them.
I'll use Oracle Linux 7.7. It's fine. It's a multi-AD region. AD1
is OK. One core machine is fine. Where do I spin up? We have
been using this demo VCN network, and this Subnet A, the
public subnet. That's fine. I assign a public IP address.
Right here, I could do custom boot and all that. I'm probably
just going to skip it. Right here, it asks me to pick up the
SSH keys. Let me just get my SSH keys here, [INAUDIBLE]
private SSH keys here. And below here, you can see some
advanced options, right? I can choose my fault domain, et
cetera, et cetera.
Now, depending on your use case, you might change that, but
it's a good idea not to do frequent scale-ins or scale-outs,
right? So I keep it at 300. It already picked my autoscaling
configuration and my instance pool. So it did that. And then,
right here, it says, what is my policy? So like I said, today, we
support CPU and memory, and the only policy we support is
threshold-based.
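A threshold policy with a cool-down like the one being configured can be sketched like this-- the 50% threshold and 300-second cool-down mirror the demo, but the logic is a simplification, not the actual service implementation:

```python
# Sketch of a threshold-based autoscaling decision: scale out above the
# high threshold, scale in below the low one, and do nothing while the
# cool-down window since the last scaling action is still open.
def scaling_action(cpu_percent, scale_out_at=50, scale_in_at=25,
                   seconds_since_last_action=0, cooldown=300):
    if seconds_since_last_action < cooldown:
        return "wait"            # still inside the cool-down window
    if cpu_percent > scale_out_at:
        return "scale-out"       # add an instance to the pool
    if cpu_percent < scale_in_at:
        return "scale-in"        # remove an instance from the pool
    return "no-op"

print(scaling_action(66, seconds_since_last_action=400))  # scale-out
print(scaling_action(66, seconds_since_last_action=100))  # wait
print(scaling_action(10, seconds_since_last_action=400))  # scale-in
```

The cool-down is what prevents the rapid flip-flopping between scale-in and scale-out mentioned above.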
If I click on this instance, the first thing you would see here
is-- of course, we did the public IP, and it's launching in
Subnet A, which is a public subnet, et cetera. And if you click
on Metrics, you can see that my CPU utilization is somewhere
around 66%, right? It's definitely breaching the threshold of
50%.
I can SSH into this instance. And if I run a command like
top, I can see the stress commands we ran, right? You can
see here the various stress commands. Remember, we had a
startup script where we gave the stress command with a
timeout of seven minutes to spawn threads on this machine.
So you can just do Control-C here, then run something like
iostat for the CPU. You can see that the CPU utilization is
83%, right? And if I go back, refresh this page, and go to my
metrics, you can see here it's actually more than 82%-- it's
going to 98% right now, right?
So what this could mean is, if it stays like this for five
minutes, this will trigger an autoscaling action, meaning you
would see one more instance get spun up because of this
behavior. Another way to look at it is, if I go to my monitoring
tab and I click on Service Metrics, you can see the metrics for
the various resources running here, right? So if I go into the
Metrics Explorer, I could actually run a custom query here.
So if I update this chart, you can see right here that my pool
is running at a new frequency-- there is an orange and a
blue line. So it looks like I have spun up another instance.
And the way I can see that is the instance name starts
with INST, meaning my autoscaling kicked in, and I was
actually able to spin up a couple more instances because the
load is constantly staying beyond 50%.
And we paused the video for a minute or so. And as you can
see here, this is my original instance which was running as
part of the pool. You can see that another instance is getting
provisioned. And it's been less than five minutes, so you can
see that another instance is getting provisioned, again,
because my load is more than 50%. This will be [INAUDIBLE]
because of all the 20 threads I spawned. And so that's the
reason why I'm spinning up another instance.
And if you can see the difference between the time intervals,
842 and 849, roughly five minutes' difference-- that's the cool-
down period we had. It means that's the time period between
scaling actions-- scale-in or scale-out. And this instance is
showing up here in the metrics.
Now, you can also add and update custom metadata for an
instance using the SDK or the CLIs. Let me quickly jump to
the instance. And we have been running a bunch of things.
This is my instance where we were doing some auto-scaling.
So if I just clear my screen and run my instance metadata query, you can see here that I just did a call to this particular IP address, 169.254.169.254, and I'm getting back all the metadata for this instance. So I can see that it's in availability domain AD-1. I can see the fault domain, compartment ID, display name, image, and so on and so forth. Everything which is on the instance, I can get here. I
can also get the public portion of the SSH key and all the
values. Now, I can also update a few values, if I want to,
using the CLI or the SDK. So it's pretty straightforward, like
with any other cloud product.
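The metadata call shown here can be sketched as follows. On the instance you would fetch the JSON from the link-local endpoint (for example, curl against 169.254.169.254); to keep this sketch self-contained, it parses a hypothetical sample response instead of making the call. The field names match the values read out in the demo, but the sample OCID and display name are invented.

```python
import json

# On the instance itself, you would fetch this from the link-local
# metadata endpoint, e.g.:
#   curl http://169.254.169.254/opc/v1/instance/
# Here we parse a trimmed-down, made-up sample response instead.
sample = json.loads("""
{
  "availabilityDomain": "EMIr:PHX-AD-1",
  "faultDomain": "FAULT-DOMAIN-2",
  "compartmentId": "ocid1.compartment.oc1..exampleuniqueid",
  "displayName": "autoscaling-instance-1",
  "metadata": {"ssh_authorized_keys": "ssh-rsa AAAA...example"}
}
""")

print(sample["availabilityDomain"])   # which AD the instance runs in
print(sample["faultDomain"])          # fault domain within that AD
print(sample["displayName"])          # the instance's display name
```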
BLOCK VOLUME
1. LOCAL NVME
Hello, everyone. Welcome to this lecture series on OCI Block
Volume Service. Before we dive deeper into block volume and
local NVMe storage, let's look at the gamut of storage services
supported across the OCI platform. So starting on the left-
hand side, you see in this table, we have local NVMe, we have
block volumes, file storage, object storage, and archive
storage. So this is the whole gamut of storage services
supported by the platform.
And then, the last two storage services are sort of-- you can
think about those as storage for the web, right? So if you
have a lot of unstructured data, you would store it in object storage. Highly durable-- we maintain multiple copies across the data centers in a multi-AD region. Capacity is petabytes, and you can see some of the
numbers here. And then, as I said, this is good for
unstructured data.
OK, so let's move and talk a little bit about local NVMe
storage. So in this section, we are going to cover local NVMe
storage. And in the next module, we are going to talk about
block volumes.
So what do we mean by local NVMe storage? In OCI, some instances have locally attached NVMe devices. And what this means is, if you have applications that have very high storage performance requirements, lots of throughput, lots of IOPS, and you don't want to go through the network, you would use these local NVMe devices.
And you can see some instances here that support local
SSDs, right? BM, bare metal, dense IO shapes, [INAUDIBLE] the virtual machine dense IO shapes, and you can see the sizes we support, right? Going from 51 terabytes all the way down to something like 6.4 terabytes for the smallest shape.
[WHOOSH]
And then, there are some other use cases like expanding instance storage, instance scaling, et cetera. But the most important
and the most relevant one why customers would use block
volume is for the persistent store and the durability of the
data.
The disk type, as we said, is NVMe SSD based, and the IOPS, Input/Output Operations Per Second, performance varies. It goes all the way from 2 IOPS per gig up to 75 IOPS per gig. And you can see, for IOPS per volume, we support up to 35,000 IOPS per volume. And we'll look into these in
greater details.
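Using the figures from the slide (2 to 75 IOPS per gig, capped at 35,000 IOPS per volume), the effective IOPS for a volume can be sketched like this. The cap and per-gig numbers come from the lecture; the function name is illustrative.

```python
VOLUME_IOPS_CAP = 35_000   # per-volume ceiling mentioned on the slide

def effective_iops(size_gb, iops_per_gb):
    """Effective IOPS scales with volume size but is capped per volume."""
    return min(size_gb * iops_per_gb, VOLUME_IOPS_CAP)

print(effective_iops(100, 75))    # 100 GB at the 75 IOPS/GB tier -> 7500
print(effective_iops(1000, 75))   # 1000 GB would be 75,000, but the cap applies
```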
And then you can see some of the things that are on security.
Data is encrypted at rest. You could bring in your own keys.
Otherwise, you could use the keys provided by us. And you
can also do in-transit encryption.
So that's one way to do it, right? The other way is, you can use Volume Backup and restore to a larger volume, or you could do a Clone and, again, go with a larger volume. When you do a backup or a clone, you're not restricted to the same size as the original volume. You can actually go higher, right? So again, we'll look into these in subsequent modules.
[WHOOSH]
Right here, I'm in the OCI console. We have been using the
OCI console for some of the other modules. And if I click on
the sandwich or the burger menu here, I can see the various service links here, right? So there's Compute, Block Storage, Object Storage, et cetera.
So I'll click on Block Storage, and the first link here is Block
Volumes. And right here, it gives me an option to Create a
New Block Volume. So let's create a new block volume. And I
have been creating a bunch of these block volumes in my
account previously. I'll call it blockvolume1. Compartment
training is fine. I'm in a multi-AD region, so it gives me a
choice of three different ADs. If I'm in a single-AD region, I'll just
see one AD here, and that's fine.
It gives me a size. Let's just pick 100 gig, right? Below, you
can see that the sizes can go from 50 gigs all the way to 32
terabytes, right? We looked into this when we were discussing
the service. There are backup policies, et cetera. We'll look
into those in a subsequent module.
And then, right here, the third option is higher performance. If I go with this, I get 75 IOPS per gig. As for use cases, for applications like streaming, data warehouse, log ingestion, where you need a lot of sequential throughput, you would go with the lowest cost option.
As it's getting created, you can see here that I have some links for attached instances, metrics, backups, clones, et cetera, right? So if I click on Attached Instances, I can see
that there is no instance which is attached right now.
So I can click Attach Instance, right? And I can attach this
block volume to an instance. If you recall from the slide,
block volume, the whole idea is to give you the durable and
persistent storage. So you can attach it to an instance, then
you can detach it. Even if the instance goes away, your data
is still persistent and durable.
And so I'll choose read-write. Read-only means you want to protect the data-- you just want to read it, not write to it, right?
So I can select the instance here. And I have these four, five
instances running. If you recall from the other module we
had on Compute, we were running auto-scaling. So let me
just pick this auto-scaling instance, and then it's asking me
to pick a consistent device path.
And if you scroll here, you can see the device path and get
more details. The whole idea is, if you are rebooting your
instance and you want your block volumes to mount
automatically, it's a good idea to use the consistent device
path, because that's the path you'll have in your /etc/fstab file.
You can see that this one here is the one we just attached,
right? /dev/oracleoci/oraclevdb. And to confirm that, if I go
back to my console, you can see the consistent device path
here is the same as what appears on my screen here, right?
Let me just clear my screen.
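A minimal sketch of what that /etc/fstab entry might look like, built around the consistent device path shown in the demo. The mount point and the `_netdev,nofail` options are assumptions (typical choices for volumes that depend on the network being up), not the only valid ones.

```python
def fstab_entry(device, mount_point, fstype="ext4",
                options="defaults,_netdev,nofail"):
    """Build an /etc/fstab line for a consistent device path.
    _netdev defers mounting until networking is up; nofail lets the
    instance boot even if the volume is detached. These are typical
    options, assumed here for illustration."""
    return f"{device} {mount_point} {fstype} {options} 0 2"

print(fstab_entry("/dev/oracleoci/oraclevdb", "/mnt/vol1"))
```

Because the consistent device path stays stable across reboots, this line keeps working even if the underlying block device name changes.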
And you will see that even though I changed the performance, it's dynamic provisioning, right? So I don't have to detach my volume, and I don't have to incur a downtime to do that, right? And you can see my performance has now changed, and now I am at the higher performance tier, right? So it's pretty straightforward.
You'll have to create partitions, and all that stuff you'll have
to do. Again, depending on Windows or Linux, the behavior
will be slightly different. But right there, you can see I am
going from a 100-gig volume to 200 gigs.
Because one of the common use cases which comes up all the
time is [INAUDIBLE] running some application in this region,
but I also want to quickly clone that application, let's say, in
another region, right. So the easiest way to do that is you
copy your block volume backups from one region to another.
There are two kinds of backups you can do. Right, so there is
on-demand, one-off volume [? backups ?] which you could do,
or you could do policy-based backups.
And don't worry, I'll show this in the console. But today, you cannot create a customized backup policy, so you could not say, you know, I want to combine Bronze and Silver or define my own sort of policy. It's not supported today. So, with that, let me just quickly jump to the console and show you where the backup policies are.
So first thing you see, here, is the backup policies are listed,
here, right in the console, so Gold, Silver, Bronze. If you click
on Gold, you can see that there are different backup types. So
there is a daily backup which happens. Right?
And as you scroll here, it will show me some of the times when the backups will happen, right? So it's showing me the timing for the next three daily backups, which are going to happen over the next three days, right? It's showing me, for weekly, the schedule for the next three weeks, right, and so on and so forth. So you can see these schedules here.
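The schedule preview described above (the next three daily runs, the next three weekly runs) can be sketched like this; the midnight start time is an assumption for illustration, not the policy's actual schedule.

```python
from datetime import datetime, timedelta

def next_runs(start, period, count=3):
    """The next `count` scheduled times after `start`, spaced by `period`."""
    return [start + period * i for i in range(1, count + 1)]

last_daily = datetime(2020, 1, 1, 0, 0)   # assumed midnight schedule
for t in next_runs(last_daily, timedelta(days=1)):
    print(t.isoformat())                   # next three daily backups
for t in next_runs(last_daily, timedelta(weeks=1)):
    print(t.isoformat())                   # next three weekly backups
```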
For this one, let's say I want to apply Gold-- you will see right away that there is a backup being created for this particular volume. See, if I go to Block Volume Backups now, in a couple of seconds, you will see that the policy entails a backup, and the backup starts there, right, because it's like a daily backup we take. And then there's a weekly backup.
So, hopefully, this gives you a good idea of how the backup
works, whether it's on-demand or it's a policy-based backup
and the various different tiers of policy-based backups we
support. Thanks for watching this module. If you have time, please join me in the next module, where we talk about cloning and volume groups. Thank you.
Now, why would you do that? The reason you would do that
is, in reality, as you're working through your applications,
you will have many, many block volumes and many boot
volumes, right? If you have to do backup, cloning, and management of those block volumes, it becomes cumbersome to do it one by one, right?
Typically, folks will write shell scripts, or they would try to automate it using Terraform. Now, that's obviously a good way to automate the operations, but OCI provides this capability out of the box with volume groups, where you could do that, right, in a seamless manner.
Now, I get this, and you can see here, my volume group has two block volumes, and it has one boot volume, right? So the first thing I could do here is create a backup. So I would say, you know, create a backup for my volume group, call it backup1, and just hit Create here. And now you will see that the number of backups is three, because there are two block volumes and one boot volume. I could create a clone as well.
Thank you for watching this module. I hope that you found
this useful. If you have time, please join me in the next
module, where we'll talk a little bit about boot volumes.
Thank you.
[WHOOSH]
6. BOOT VOLUMES
Hi, everyone. Welcome to this module on boot volumes. We
have already covered boot volumes under the compute lecture
series. So I'll go through this really fast because we
already covered it. But if you haven't yet watched that lecture
series, it's good to recap some of the key points here.
So a compute instance is launched using an operating system image stored on a remote boot volume. We talked about
this earlier. You have a compute instance. You have a block
volume where you keep your data and applications. And then
you have boot volume, which is a special kind of a block
volume where your operating system is stored.
And then all the things you could do with block volumes, you
could do with boot volumes. So you could do manual
backups. You could do a policy-based backup. You could
create clones of boot volumes.
We also looked into this earlier. You could create custom boot volume sizes. The default size for Linux is 46 gig; for Windows, it's 256 gig. But nothing stops you from going all the way up to 32 terabytes. You likely don't need that much space, but you can, of course, go well beyond the default sizes.
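A small sketch of the boot volume size rules just mentioned: a custom size can grow from the OS default (46 gig for Linux, 256 gig for Windows, per the lecture) up to 32 terabytes, but cannot shrink below the default. The function and dictionary are illustrative.

```python
DEFAULT_BOOT_GB = {"linux": 46, "windows": 256}   # defaults from the lecture
MAX_BOOT_GB = 32 * 1024                            # 32 TB ceiling

def valid_custom_boot_size(os_family, size_gb):
    """A custom boot volume can grow from the OS default up to 32 TB,
    but cannot shrink below the default image size."""
    return DEFAULT_BOOT_GB[os_family] <= size_gb <= MAX_BOOT_GB

print(valid_custom_boot_size("linux", 100))    # larger than the Linux default
print(valid_custom_boot_size("windows", 100))  # below the 256 GB Windows default
```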
FILE STORAGE
1. FILE STORAGE INTRO
Hi, everyone. Welcome to this lecture series on OCI File
Storage Service. And in this particular module, we are going
to introduce the File Storage Service and look at some of its
characteristics. My name is Rohit Rahi, and I'm part of the
Oracle Cloud Infrastructure Team.
So we have been using this slide to show you the range of
storage services available on OCI, starting with local storage,
block storage, file storage, and object storage. These have
different storage architectures. In this particular module, we
are going to look into file storage service.
So what are the use cases for the File Storage Service? There are several use cases, some of which are related to Oracle applications like EBS, which have specific file storage requirements. Then you have general-purpose file systems. There are scenarios around big data analytics, HPC scale-out apps, and several other scenarios where a file storage service can be used.
What are some of the features of the File Storage Service? The
first thing is, the service is AD-local. If you have a multi-AD region, it's an AD-local service. It supports the NFS v3 protocol. It
supports network lock management for file locking. It has full
POSIX semantics. Data protection, we support snapshot
capabilities. And you could create up to 10,000 snapshots
per file system.
For security, we do support security in the sense of
encryption for data at rest for all file systems and metadata,
and very soon, we are also going to support encryption in
transit for data on the file systems. Of course, you can access
the service through the Console, APIs, CLI, SDKs, and all
that. You can create 100 file systems and two mount targets
per AD per account. And of course, these are soft limits. You
can always increase them.
So let's get into some of the details on what the file storage
service entails-- what is a mount target, what is a file system,
what is an export path, et cetera. So before I [INAUDIBLE] let
me have the ability to write on the screen.
We see a VCN, which, if you recall from the VCN module, is a regional service, and it has this particular address space. I have two smaller subnets within the VCN, 10.0.0.0/24 and 10.0.1.0/24 here, right?
And the way your NFS client accesses the file system is going
through the mount target. So you can see there are two NFS
clients here in two different subnets. They
are accessing a file system right here on this particular
mount target.
Now, placing NFS client and mount target in the same subnet
can result in IP conflicts. Why? Because when you create the
mount target, you are not sure which IP address is used for
the mount target.
Like I said, you see 10.0.0.6, but there are two more IP addresses which get used, right? We don't know what those two are. And if you don't know those, either one of the clients could actually grab one of those IP addresses.
Now, it's not a requirement, but it's a good idea to place the File Storage Service mount target in its own subnet, where it can consume IPs as it needs, right? So create its own subnet and let it run there,
instead of having a single subnet where you put the mount
target as well as you put the instances. But just again, keep
in mind, there is no hard rule which says you cannot do that.
You absolutely can do it. It's just a good best practice to
separate them out.
Now, how do you use it, right? Export path, along with the
mount target IP address, is used to mount the file system to
an instance, all right? So what do I mean by that? You run a
command like this-- typical mount command-- sudo mount.
This is your mount target. This is your export path, separated
by a colon here. And then, this is your directory on the NFS
client instance on which external file systems are mounted,
right?
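The mount command structure just described, the mount target IP and the export path joined by a colon, followed by the local directory, can be sketched as a string builder. The IP, export path, and mount point below are illustrative values, not ones from a real deployment.

```python
def nfs_mount_command(mount_target_ip, export_path, local_dir):
    """The mount target IP and export path, joined by a colon, identify
    the file system; the last argument is the local mount point on the
    NFS client instance."""
    return f"sudo mount {mount_target_ip}:{export_path} {local_dir}"

# Hypothetical values: a mount target at 10.0.1.3 exporting /fssdemo.
print(nfs_mount_command("10.0.1.3", "/fssdemo", "/mnt/fss"))
```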
[WHOOSH]
2. FILE STORAGE DEMO
We'll take a quick look at the File Storage Service and some of
its workings in action. My name is Rohit Rahi, and I'm part of
the Oracle Cloud Infrastructure Team.
So let me first show you the setup for the demo. So as you
can see here, we are going to use a VCN with this particular address space, and we are going to run this in US East, which is a multi-AD region, but I'm just going to use a single AD. I could have used multiple ADs, but since it's a demo, I'm going to use a single AD here.
You've got NFS clients which will access this particular file system here through the mount target. And what I'm going to show you in the demo is that client one would have Read/Write access, and, of course, client two also has Read/Write access. Both of them would take in--
[AUDIO OUT]
So with that, let me quickly jump to the console and start the
demo. So right now, I'm in the console. We have been using
this OCI Console for some of the lecture series. We have this
burger menu here. If I click on that, I can see links for
various services. Right here is the File Storage Service. So
we're going to use that.
I will choose the default route table, and I'll choose the
default security list, and I'll change this subsequently,
because now, we can edit them. If we have a new one created,
we could have used that here, or we could edit it later on. So I
create-- let me just make sure it's a private subnet here. Let
me create this subnet here, right?
And then, what I'm going to do is I'm going to create another
subnet for the client. I'm calling it computesubnet, and this is
where my Compute instances will be running, right? So this
is the address space which I had on the slide, and I'm going
to make it a public subnet, right?
And I'm, once again, choosing the default tables, but we'll
change that, right? So I'll choose the default security list and
the default route table, right? And this is a public subnet. All
right, got it.
And right here, I'll pick the VCN we just created, FSSVCN,
right? And now, for the subnet, I don't want the compute
subnet. I want the mount target subnet, right? Because it's
private, and that's where I'm going to create my mount target.
So as I create the mount target, you would see that first thing
I want to see here is that I get a private IP address.
Remember, the way we identify a mount target is by that IP
address. And that IP address, along with the export path, is
the way we expose a file system to the clients, right? So that's
how it works.
So right now, you can see, I got 10.0.1.3, and this is in the
address space 10.0.1.0/24. As you'll recall, the first three IP
addresses cannot be used, right? So the first one, dot 0, is reserved-- that's your network address. And then, the first two IPs and the last IP in a subnet get reserved, right? So I cannot use those. But I could use
the 10.0.1.3 right here.
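The reserved-address rule can be checked with Python's ipaddress module: in a 10.0.1.0/24 subnet, the first two addresses and the last one are reserved, which is why 10.0.1.2 is the first assignable address and the mount target could land on 10.0.1.3.

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/24")
hosts = list(subnet.hosts())            # .1 through .254 for a /24

# The first two addresses and the last one are reserved in the subnet:
reserved = {str(subnet.network_address),      # 10.0.1.0, the network address
            str(hosts[0]),                    # 10.0.1.1
            str(subnet.broadcast_address)}    # 10.0.1.255, the last address
first_assignable = str(hosts[1])              # 10.0.1.2

print(sorted(reserved))
print(first_assignable)
```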
But I don't want to use the default names. Let me call this file system demo. AD1 is fine. I could use Oracle Managed
Keys for server-side encryption, or I could bring in my own
keys. I'm just going to let Oracle use the Oracle-managed keys.
And right here is the export path. Now remember, export path
is how the file systems are exposed, in a given mount target,
to the clients. This is how you expose them-- so through this
path here, export path.
So that-- don't do that. But other than that, any path is fine.
And then, right here, you can see it chose the mount target demo, the mount target we have, because we have only one mount target. If I wanted to create one more, I could just come here and create a new mount target. But because I already have this mount target running, I'm just going to use that.
So with that, let me just click Create here. And now my file
system would be created, right? And export path is there. So
now I could just click on Mount Commands here, and I get
the commands to use with my instances in order to mount
this file system to my clients, to my compute instances.
And using that, now, I can access the file system, right? So
it's a rather straightforward way where we are managing all
the complexities behind the scenes and you get a file system
service, highly available, running in the cloud, right? So it's pretty amazing, really.
Now, you might ask, why are we doing both ingress and egress? Because didn't we say that security lists are stateful, so for a packet coming in, the response going out is automatically allowed? Yes, we do that. The reason we do this is the concept of TCP connections needing to survive reboots.
I remember we talked about the fact that the mount target is
highly available. So if I go back to my slide, if you see this
mount target, the mount target here is highly available, right?
So what happens here is, if your clients are connecting to
this-- this is the client-- of course, the response is going to go
back. But what happens is, because it's highly available,
sometimes, your mount target has to be moved to another
machine in case this underlying server has a problem, or it
has a reboot or something, right?
So I could have done this right now. I could have picked the
CIDR for my public subnet here-- 10.0.2.0/24-- but I'm just
going to do it for the whole subnet, for the whole VCN. The
reason being, if there are other subnets, they could just use
the security list. Otherwise, I'll have to go out and open for
each individual subnet if I have more than one.
I'll open another one here-- this one is UDP, and the port has to be 111. And another one, UDP again, and the port has to be 2048. So let me make sure I have all the ports. So the IP is fine. The VCN CIDR, TCP 2048 to 2050, TCP 111-- the ports are fine. Destination, UDP 111 and UDP 2048, right? So these are my ingress rules.
All right, so egress rules-- TCP 2048-2050, TCP 111, and then UDP 111, right? So we add these egress rules, right? So now my rules are all in here.
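A sketch of the rule set we just built, using the port list from the demo (ingress TCP 111 and 2048-2050 plus UDP 111 and 2048; egress TCP 111 and 2048-2050 plus UDP 111). This is only a lookup over that list, not the stateful security-list evaluation OCI actually performs.

```python
# Ports the demo opens for the File Storage Service:
INGRESS = {("tcp", p) for p in (111, 2048, 2049, 2050)} | {("udp", 111), ("udp", 2048)}
EGRESS = {("tcp", p) for p in (111, 2048, 2049, 2050)} | {("udp", 111)}

def nfs_traffic_allowed(protocol, port, direction):
    """Check a (protocol, port) pair against the demo's rule list."""
    rules = INGRESS if direction == "ingress" else EGRESS
    return (protocol, port) in rules

print(nfs_traffic_allowed("tcp", 2049, "ingress"))   # the core NFS port
print(nfs_traffic_allowed("udp", 2048, "egress"))    # not in the egress list
```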
And then, let me just do that for one more instance. So these
become my two compute instances, which I'm going to use to
connect to my file system and mount target and run my
demo. So FSS2, virtual machine is fine, compute subnet is
fine, assign a public IP address, and right here, I can paste
my SSH keys. Do that, and click Create.
And you will see that these instances will be up and running
in a few seconds. Unfortunately, I missed assigning a public IP here. We'll solve that-- we'll go into the instance and assign a public IP there. It takes literally a few seconds to do that.
All right, so the ports are all open. Let's see if one of the
compute instances is up and running. They're still getting
provisioned. All right, it looks like the compute instances are
running. So let me just copy the public IP address, clear the
screen. Now it's Oracle Linux, so the user name is "opc", and
let me SSH into the first instance.
All right, so right now, I'm in the instance, and I could go and
I could access the file system. And I'll do the same thing for
the other instance as well. So right here, if I click on File
Storage or File Systems and click on the Export Path--
remember export path is how you expose a file system to your
clients.
So it looks like it's done. Let me just clear the screen. Let me
just create a local mount point-- pretty straightforward. And
right here, I'm going to mount the file system. And bingo-- there we go, right?
Create a local mount point, and then mount the file system to
that local mount point. OK, really straightforward. Now, if I
come and cd to that local mount point and run an ls command, I can see this file exists, right? And if I do a
cat, I can see that the file has this content, which we just
wrote earlier.
I can control the kind of access these have, and I could create
another file. I could create a second file system here. I could
create a third file system here, and so on, and so forth, right?
So hopefully, this gives you a quick overview of how the File
Storage System works. It's the highly available file system in
the cloud, massive with massive capacity, scalable, elastic,
and it's really, really simple to use. Thank you for watching
this demo. In the next module, we'll talk about FSS security.
[WHOOSH]
And again, for every service we have talked about until now, and every service in OCI, you could leverage the Identity and Access Management Service to control actions like who can create client instances, who can create the FSS VCN, and even who can create, list, and associate file systems and mount targets.
And then, finally, you could, of course, leverage NFS UNIX security. So when you mount your file system and read and write the files, you could use different options. Again, that caveat goes here-- when mounting file systems, don't use mount options such as nolock, rsize, or wsize. These options can cause issues with file locking and performance. And again, if you go to the documentation, you can read all about these.
And the way this works is-- let me see if I can discard those comments. So the way this works is-- we saw this in the previous demo. In this case, we have to open certain ports for ingress and certain ports for egress, as we just talked about.
Ingress-- right now, only this client is accessing the file server. So this is the IP address here, right? And I have to open these TCP ports as destination ports. And for egress, I have to open these specific ports as source ports, right? Exactly the scenario we just talked about.
Now, what does the export option really do? So now, let's look at a scenario. In the previous scenario, we had something like this, and, of course, we were running both instances in the same subnet in our demo. But right now, those instances are running in different subnets.
So what we just did is, for the first client, we changed the permission to Read-only, and for the second client, we changed the permission to Read and Write. And so we could control access separately even though they belong to the same subnet. Right now, I have only one file system running, but if I had many, many file systems running, I could control granular access using a capability like this.
The second option, the one we just did, is limiting the ability to write data for specific IP addresses. So for the client running on one private IP, we set Read-only, and for the other client, we set Read and Write. Both were running in the same subnet.
And the third one is, we can have more secure access that limits the root user's privileges and things like that, right? So in our 200-level module, we talk more about what privileged source ports are and what identity squash is, et cetera, right? For this one, we can skip the details, but this is the page where you can find all the information.
All right, so in this module, again, we talked a little bit about security lists. And we didn't really cover network security groups, but their behavior is very similar to how we opened certain ports for TCP and UDP, both ingress and egress.
I hope that was useful. Thank you for joining this module. In
the next module, we'll talk about snapshots. Thank you.
[WHOOSH]
And right here, it's giving me-- you know, it's giving me a
default name. But I can change that, right? So I can call this
Snapshot One and then hit Create here, right? Then my
snapshot would be created. I just have one file in my file system, so it would create a snapshot of that file. If I had a directory with multiple files, it would actually create a snapshot covering all of them, right? And you can see, it's active.
See, if I go to my NFS clients, I have two running here from the previous demos-- FSS1 and FSS2. And if I cd to the .snapshot directory, I can see my snapshot1 here, right? And if I cd to the snapshot1 directory, I can see my file is available here, right?
OBJECT STORAGE
Now, what are some of the key features of the OCI Object Storage Service? The first one is this concept of strong consistency. Strong consistency means that the Object Storage Service always serves the most recent copy of the data when it is retrieved. Contrast that with a concept called eventual consistency: if a service is based on that, and you write some data, then update it, and then try to retrieve it, sometimes it will return the stale data-- the old copy, not the updated copy.
And you have several features, like you can define your own
metadata. There is server side encryption. And we also allow
you to bring your own keys, if you want to encrypt data using
your own keys.
So you have the namespace here. You have the bucket here.
And then you have the object here. This is the fully qualified
domain name, or the fully qualified string, if you will, which
you'll need.
Now you can use CLI to perform bulk downloads and bulk
delete of all objects at a specified level of the hierarchy,
without affecting objects in levels above or below. So what do
I mean by that? So look into this example here. You can
download or delete all objects at this level, at the Marathon
level, without downloading or deleting objects at the
Marathon Participants level.
So even though it looks like Marathon Participants is a sort of child directory under Marathon, if you create prefixes like these, you can still operate on them independently. And you can have another object here,
which is Start Line and Finish Line and Middle Line, et
cetera, et cetera.
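The prefix behavior described above can be sketched like this: operating at the marathon/ level touches only the objects directly under it, not the ones under marathon/participants/. The object names are made up for illustration, and this is just string filtering, not the CLI's bulk-operation code.

```python
# Hypothetical flat object names in one bucket; the "/" is part of the
# name, since object storage has no real directories.
objects = [
    "marathon/start_line.jpg",
    "marathon/finish_line.jpg",
    "marathon/participants/p_100.jpg",
    "marathon/participants/p_101.jpg",
]

def at_level(names, prefix):
    """Objects directly under `prefix`, excluding deeper 'sub-directories'."""
    under = [n[len(prefix):] for n in names if n.startswith(prefix)]
    return [prefix + rest for rest in under if "/" not in rest]

# A bulk delete at the marathon/ level would leave participants/ untouched:
print(at_level(objects, "marathon/"))
```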
So with that, let's just complete one more slide. And we'll
quickly jump into our demo. So we talked about the Object
Storage tiers. So what are the tiers which OCI Object Storage
supports today?
So the first tier is the standard storage tier. It's also referred
to sometimes as the hot tier. This is where you store your
data. And you get fast, immediate, and frequent access. You
can retrieve your data instantaneously.
So you upload the data to archive storage and, let's say, 90 days have passed. You want to get it back. You make a restore request, and it takes at least four hours before you can download your data.
And like we saw with the standard tier, once you designate a bucket as an archive bucket, you cannot upgrade it to the standard storage tier, and vice versa-- standard cannot be downgraded to archive, and archive cannot be upgraded to standard. And right here, you can see that when you create a bucket, you get a choice of either the standard tier or the archive tier.
But I can come here and I can edit the visibility. I can
make it public. Now there is a checkbox here which
allows users to list objects from this bucket, and I'm OK
with that. I'll say it's a public bucket. And it gives me a
warning that enabling public visibility will let
anonymous and unauthenticated users access data
stored in the bucket.
[SOUND EFFECT]
And you can see the prefix here, slash p. If you remember
from the previous module, slash n is the namespace, slash b
is the bucket, slash o is the object, and slash p here shows
that this URL, this object, is being accessed using a pre-
authenticated request. You can revoke the links at any time.
So suppose you give users access to a bucket or an object without them having their own credentials, and their job is done-- you can always revoke the links, and they will lose access to the object or the bucket going forward.
Right now you can see it's private. So if I want to access this
particular object which we uploaded in the previous demo,
you can see that it gives me an error, saying the bucket
doesn't exist-- we know bucket exists-- or you're not
authorized to access it. So we don't have authorization--
that's the reason we are not able to access it, because it's in a
private bucket, and it doesn't allow for anonymous
unauthenticated access.
And now I can also say what kind of access I want, whether
it's read, write, or read and write both. Read is fine. And then
I can also choose the time for which this link will be valid. And you have to choose this time-- you cannot just create a pre-authenticated request for an infinite amount of time. You have to have time-bound access.
So you can see here, there is a policy-- if you don't write this
policy, cross-region copy isn't going to work. And you also
need to specify an existing target bucket. If you don't do that,
it will not let you do the copy.
And then there are various options here. I could choose to
overwrite the destination object if the destination object exists. I could
choose not to overwrite. I could choose to overwrite only if it
matches the specified Entity Tag-- the ETag.
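The three overwrite choices just described amount to a small decision function. This is a hedged sketch: the policy labels below are illustrative stand-ins, not the console's exact wording.

```python
# A minimal sketch of the copy-object overwrite choices described above.
# Policy names here are illustrative labels, not the console's exact terms.
def should_copy(dest_exists, policy, dest_etag=None, expected_etag=None):
    if not dest_exists:
        return True                        # nothing to overwrite
    if policy == "overwrite":
        return True                        # always replace the destination
    if policy == "keep-existing":
        return False                       # never replace the destination
    if policy == "overwrite-if-etag-matches":
        return dest_etag == expected_etag  # replace only the expected version
    raise ValueError(f"unknown policy: {policy}")
```

The ETag variant is the interesting one: it lets you overwrite only the specific version of the object you expect to be there, which guards against clobbering a newer copy.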
If you see here, we have a couple of objects, and they all have
a prefix, which is this string here. So if you don't want
this lifecycle management rule to apply at the bucket level,
you can apply it at the prefix level. And in this case
the prefix is gloves_27.
Now I didn't apply any filter in particular here, like the prefix.
If I had done that, I could pick and choose individual objects.
I don't have to do it for all the objects. If I don't want this rule
to apply anymore, I could just disable it instead of deleting it.
So then now it's disabled, but it's still in the history so I can
get some more information here.
So it's as simple as that. And it's really for managing the cost,
managing your objects, because you'll be managing literally
hundreds of objects. So it's a good way to manage the
lifecycle of various objects in various stages, whether you
want to delete them, to save some cost, or move them to
archival storage to, again, reduce the cost and keep them for
long-term backup and retention.
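The rule behavior walked through above (prefix filter, age threshold, enable/disable) can be sketched as a simple matcher. This is a minimal illustration with made-up field names, not the service's actual rule schema.

```python
from datetime import datetime, timedelta, timezone

# A minimal sketch of lifecycle-rule evaluation: which objects a rule with
# a prefix filter would act on. The dict fields are illustrative only.
def objects_matched(objects, rule, now):
    if not rule["enabled"]:
        return []                          # a disabled rule matches nothing
    cutoff = now - timedelta(days=rule["days"])
    return [o["name"] for o in objects
            if o["name"].startswith(rule["prefix"]) and o["created"] <= cutoff]

# e.g. archive objects older than 30 days whose names start with gloves_27.
rule = {"prefix": "gloves_27", "action": "ARCHIVE", "days": 30, "enabled": True}
```

Note that disabling the rule empties its match set without deleting its definition, which mirrors the disable-instead-of-delete step in the demo.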
You don't see that in action, but the service is actually doing
that. Now you could do that-- you could use multipart upload
using CLI or SDK. The way it happens is-- first thing you do
is you create object parts. So you can see some numbers
here-- individual parts can be as large as 50 gigs or as small
as 10 MB.
So you could do that using the CLI. CLI does that for you,
and it assigns a part number. Then it initiates an upload, and
you can see the API call it makes to initiate an upload. Then
it uploads the object part and makes sure that all the parts
are uploaded. You can restart a failed upload for an
individual part, et cetera. And then you commit the upload.
So I'm uploading this file. And you can see that it shows you
the part size, the part count, et cetera.
And it's splitting the file into 12 parts for upload, and then
it's uploading the file. You can list the parts of unfinished or
failed uploads if there are parts which failed to upload.
And then you can remove them also if there were parts which
could not be uploaded. So the service takes care of breaking
down the files, uploading them, committing them, doing the
checksums, making sure that it's all good. And as I said, if
you're uploading some large files, the service actually does
this internally. But you could, as an end user, do this as well.
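The part math behind that split can be sketched as follows. This is a minimal illustration of how a file gets divided into numbered chunks; the 100 MiB part size is an illustrative choice within the limits mentioned above, and the byte counts are examples, not values reported by the CLI.

```python
# A minimal sketch of multipart-upload planning: split a file of
# total_size bytes into numbered (offset, length) chunks.
def plan_parts(total_size, part_size):
    parts, offset, number = [], 0, 1       # part numbers are 1-indexed
    while offset < total_size:
        length = min(part_size, total_size - offset)
        parts.append({"part": number, "offset": offset, "length": length})
        offset += length
        number += 1
    return parts

# For example, a 1.2 GB file with 100 MiB parts splits into 12 parts,
# matching the part count seen in the demo (the last part is smaller).
parts = plan_parts(1_200_000_000, 100 * 1024 * 1024)
```

Each part can then be uploaded (and retried) independently before the final commit stitches them together in part-number order.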
And note that the available storage value you specify during
provisioning determines the maximum storage available
through scaling. And I have a note down there at the bottom
of the slide for more information in this regard. VM RAC DB
systems cannot be deployed using this option because we
need the grid infrastructure for deploying Real Application
Clusters. Currently, we support Oracle Database 18c and 19c
releases when you use the Fast Provisioning option for
deploying your VM DB system.
2. DATABASE PART 2
And under the total storage, it gives me the total storage used
by the database once it's deployed. I need to provide a public
key here. We need it if we want to SSH into the
database system. So I selected one which I already have. And
I will choose a license type of License Included for this demo,
but if you are a customer looking to bring your own license,
you will select this option.
And for the subnet, I will select a private subnet that I have,
which is a regional subnet. But I also have a choice of
selecting a subnet from a different compartment if I need it.
And I can also assign a network security group to control
traffic. But I will not do that for this demo. I'll give a
hostname prefix of dest dv. And let me show you the advanced
options available for me here. I can choose a fault domain if I
want, for making sure that my database system gets deployed
in one of these fault domains.
And I'll click Next. And this is where I can change the
database name if I want to. Note that the database name
cannot be longer than eight characters here. I can select
between 18c and 19c for the database version. And since
it's 19c, I can optionally provide a pluggable database name. I
will leave that alone, and this is where I provide the password
for the SYS user.
I can add additional SSH keys here by pasting the public SSH
key here and clicking on Add SSH Key. And if I
wish to move this resource from this compartment to another,
I can select a target compartment here and click on Move
Resource. Note that the person performing this action needs
to have access to the target compartment; otherwise they
won't be able to move this resource to the target
compartment.
4. DATABASE DEMO 2
I'm going to click on DSS19, and you can see that this
database has backups occurring automatically every night.
To demonstrate restore database backup process, I'm going to
click on restore, and I will just select the restore to the latest
backup, and click on Restore Database. And this process will
kick in. And while this backup restore is happening, the
database goes from the available to the updating state, which
means that the database won't be available for access during
the restoration.
This slide shows you the two kinds of offerings that Oracle
Cloud Infrastructure has for Oracle database customers. One
is the traditional automated database services model, wherein
customers get to manage their Oracle database, but Oracle
incorporates database lifecycle automation into the services.
Customers will have DBA and operating system root access.
And they can run older database versions like [INAUDIBLE]
Release 2.
The other service that I'm going to talk about today is the
autonomous database service. In this service, all database
operations are fully automated. The user runs SQL with no
access to operating system or the container database.
The next one here is the Exadata, which is the world's best
database platform. In this, Oracle will build, optimize, and
automate the infrastructure deployment. All the database
automation features are included.
The customer is responsible for provisioning the databases
and managing the databases here as well. As far as use cases
go, this is a great place for customers to build a private cloud,
either on premises or on Oracle Cloud Infrastructure. This is
perfect for consolidation use cases, where customers have a
lot of databases. And they are looking to consolidate on one
platform. So Exadata works well there. And it's ideal for
highest performance workloads, and has a lot of scalability
features, which is ideal for mission critical workloads.
This slide shows you the various cloud deployment models for
database on Oracle Cloud Infrastructure. We can start off
with the Database as a Service, virtual machine or bare
metal, or the Exadata Cloud Service on Oracle Cloud
Infrastructure or at the customer site. And then finally, we
have the Autonomous Serverless and Autonomous Dedicated
offerings. I won't go through all of this slide, but I will just
pause briefly here for you guys to read this before moving
on.
So on the slide in the right side of the screen, you'll see that
this particular service has automatically scaled OCPUs up
when there is a demand for more computing power, and then
scales them down once the demand goes down. Let us now look
at securing Oracle Autonomous Database. The autonomous
database stores all data in encrypted format in the Oracle
database. Only authenticated users and applications can
access the data when they connect to the database.
And over here you can see the metrics information, and this
kind of becomes very useful as you start using this database.
You'll see the CPU Utilization, Storage Utilization, the Session
Information, Running Statements, Queued Statements, et
cetera. To find the information needed to connect to this test
data warehouse, click on the DB Connection button here. And from
here, you can download the client credentials of the wallet
that you can use for establishing connectivity from your
computer to the Autonomous Data Warehouse.
And here are the TNS names that you can use for the
connections. You can go with high, medium, or low. Click on
Close. And clicking on Performance Hub gets you to this page
here, which shows you the activities that are currently
happening.
And the last tab I want to show you here is the Development
tab. I'll click on this. This is where I can access Oracle APEX.
As you know, Oracle APEX is now included with Autonomous
Data Warehouse and Autonomous Transaction Processing.
I'll provide a password for the wallet and download the file.
I'm now in Oracle SQL Developer. To create a new connection
in SQL Developer, I'll click on New Connection here. And I'll
change the connection type to Cloud Wallet. I will browse to
the wallet file that I just downloaded.
I'll then provide the password, and I'll click on Test. And the
database connection succeeded. So I'm going to save this and
now connect to the database. Since I did not save the
password, I'll be prompted for it.