MQ L3 Uniform Clusters Demo

The document outlines a demonstration of deploying an IBM MQ uniform cluster on Red Hat OpenShift, emphasizing the importance of high availability and scalability for messaging solutions. It details the steps involved in creating a uniform cluster, deploying applications, and validating connectivity while ensuring ease of scaling without complex configurations. The demo aims to showcase the capabilities of IBM MQ in enhancing operational efficiency and customer satisfaction through reliable messaging services.

WEBVTT

1
00:00:06.695 --> 00:00:07.485
Hello everyone.

2
00:00:07.735 --> 00:00:10.125
Brian Wilson and Raphael Zorio here.

3
00:00:10.815 --> 00:00:13.805
Today we will walk you through the deployment process

4
00:00:14.225 --> 00:00:17.845
of an MQ uniform cluster where each queue manager

5
00:00:18.595 --> 00:00:20.605
uses the Native HA functionality

6
00:00:20.985 --> 00:00:22.605
to provide high availability.

7
00:00:25.435 --> 00:00:28.905
We're running a cluster of IBM MQ queue managers

8
00:00:28.925 --> 00:00:30.185
on Red Hat OpenShift.

9
00:00:30.945 --> 00:00:34.105
Together with a large number of client applications, putting

10
00:00:34.165 --> 00:00:35.505
and getting messages to and from them.

11
00:00:36.575 --> 00:00:40.865
This workload will vary over time, so we need flexibility in

12
00:00:40.885 --> 00:00:42.225
how we scale all of this.

13
00:00:44.535 --> 00:00:48.215
Simplifying your messaging solution deployment is very

14
00:00:48.215 --> 00:00:49.895
important for your operations team,

15
00:00:50.515 --> 00:00:52.335
but it is good for your business too.

16
00:00:53.005 --> 00:00:56.535
Increased business availability can make the

17
00:00:56.535 --> 00:00:59.175
difference between a satisfied customer who wants

18
00:00:59.175 --> 00:01:00.495
to do more business with you

19
00:01:01.235 --> 00:01:03.695
and a disappointed customer who is looking

20
00:01:03.875 --> 00:01:05.295
for an alternative option.

21
00:01:06.115 --> 00:01:08.095
In addition, it improves innovation

22
00:01:08.605 --> 00:01:12.175
because a team that is used to quickly shipping experiments

23
00:01:12.175 --> 00:01:14.775
and getting back user validated results fast

24
00:01:15.605 --> 00:01:18.575
will soon find itself naturally innovating.

25
00:01:21.005 --> 00:01:24.975
This demo will show how we can easily scale the number

26
00:01:24.975 --> 00:01:27.855
of instances of our client applications up

27
00:01:27.955 --> 00:01:30.055
and down without having

28
00:01:30.115 --> 00:01:32.055
to reconfigure their connection details

29
00:01:32.555 --> 00:01:35.215
and without needing to manually distribute
30
00:01:35.395 --> 00:01:36.535
or load balance them.

31
00:01:39.385 --> 00:01:41.365
It will also show how to quickly

32
00:01:41.585 --> 00:01:44.525
and easily grow the Queue Manager cluster,

33
00:01:44.985 --> 00:01:46.405
adding a new queue manager

34
00:01:46.625 --> 00:01:50.965
to the cluster without a complex new custom configuration.

35
00:01:52.065 --> 00:01:55.925
In this demo, we will see the uniform cluster capability

36
00:01:56.385 --> 00:01:57.885
of IBM MQ in action.

37
00:02:00.865 --> 00:02:03.765
In the demo, we will execute the following steps,

38
00:02:05.085 --> 00:02:07.825
access the Cloud Pak for Integration environment

39
00:02:07.885 --> 00:02:10.025
and explore the messaging capabilities,

40
00:02:11.765 --> 00:02:16.105
deploy a uniform cluster, deploy an MQ application,

41
00:02:17.505 --> 00:02:19.305
validate the uniform cluster connectivity,

42
00:02:20.075 --> 00:02:24.145
scale the MQ application, and rebalance the connections.

43
00:02:26.105 --> 00:02:27.015
Let's get started.

44
00:02:34.025 --> 00:02:36.975
Let's see how to scale the IBM MQ cluster
45
00:02:37.275 --> 00:02:39.375
and client applications in OpenShift.

46
00:02:40.205 --> 00:02:44.015
Here we have an IBM Cloud Pak for Integration environment

47
00:02:44.245 --> 00:02:46.375
with IBM MQ operator installed.

48
00:02:47.035 --> 00:02:49.735
We have a cloud version of the product on IBM Cloud.

49
00:02:50.995 --> 00:02:52.215
Let me log in here.

50
00:02:54.735 --> 00:02:57.505
Welcome to IBM Cloud Pak for Integration.

51
00:02:58.355 --> 00:03:01.285
We're now at the home screen showing all the capabilities

52
00:03:01.745 --> 00:03:04.285
of the Pak brought together in one place.

53
00:03:05.115 --> 00:03:07.245
Specialized integration capabilities

54
00:03:08.065 --> 00:03:12.205
for API management, application integration, messaging,

55
00:03:12.345 --> 00:03:16.165
and more are built on top of powerful automation services.

56
00:03:17.665 --> 00:03:19.525
As you can see, you are able

57
00:03:19.525 --> 00:03:23.045
to access all the integration capabilities your team needs

58
00:03:23.045 --> 00:03:26.245
through a single interface.

59
00:03:26.825 --> 00:03:28.925
By now, we have a basic MQ instance here.

60
00:03:30.565 --> 00:03:33.405
IBM MQ is a universal messaging backbone

61
00:03:33.405 --> 00:03:35.965
with robust connectivity for flexible

62
00:03:36.145 --> 00:03:38.445
and reliable messaging for applications

63
00:03:38.945 --> 00:03:41.325
and the integration of existing IT assets.

64
00:03:41.945 --> 00:03:45.005
In this demo, to scale our IBM MQ cluster,

65
00:03:45.345 --> 00:03:47.165
we will create a uniform cluster.

66
00:03:48.815 --> 00:03:51.765
Let's check our environment on the OpenShift web console.

67
00:03:54.865 --> 00:03:58.045
On the installed operators page, we can confirm

68
00:03:58.435 --> 00:04:01.005
that IBM MQ operator is installed,

69
00:04:01.745 --> 00:04:04.085
but we have only one queue manager so far.

70
00:04:04.955 --> 00:04:08.525
The next step is to create our uniform cluster in MQ.

71
00:04:14.505 --> 00:04:18.045
The objective of a uniform cluster deployment is

72
00:04:18.045 --> 00:04:21.925
that applications can be designed for scale and availability

73
00:04:22.505 --> 00:04:24.125
and can connect to any

74
00:04:24.145 --> 00:04:26.725
of the queue managers within the uniform cluster.

75
00:04:27.635 --> 00:04:31.045
This removes any dependency on a specific queue manager,

76
00:04:31.835 --> 00:04:33.685
resulting in better availability

77
00:04:34.185 --> 00:04:35.525
and workload balancing

78
00:04:35.585 --> 00:04:40.585
of messaging traffic.

79
00:04:41.425 --> 00:04:43.245
Now, first, we need

80
00:04:43.245 --> 00:04:45.725
to create our uniform cluster configurations.
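
For reference, a minimal sketch of the kind of configuration this step produces. MQ builds a uniform cluster from an AutoCluster stanza in qm.ini; all names below (cluster, queue managers, hostnames) are illustrative rather than the demo's exact values:

cat <<'EOF' > uniform-cluster.ini
AutoCluster:
   Repository1Name=QM1
   Repository1Conname=qm1-ibm-mq(1414)
   Repository2Name=QM2
   Repository2Conname=qm2-ibm-mq(1414)
   ClusterName=UNICLUSTER
   Type=Uniform
EOF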

81
00:04:48.585 --> 00:04:49.755
Uniform clusters

82
00:04:49.975 --> 00:04:53.115
are a specific pattern of an IBM MQ cluster

83
00:04:53.115 --> 00:04:54.875
that provides a highly available

84
00:04:55.055 --> 00:04:58.555
and horizontally scaled small collection of queue managers.

85
00:04:59.845 --> 00:05:03.515
These queue managers are configured almost identically so

86
00:05:03.515 --> 00:05:06.635
that an application can interact with them as a single group.

87
00:05:07.415 --> 00:05:11.145
This makes it easier to ensure each queue manager in the

88
00:05:11.145 --> 00:05:12.505
cluster is being used
89
00:05:13.005 --> 00:05:17.025
by automatically ensuring application instances are spread

90
00:05:17.165 --> 00:05:19.025
evenly across the queue managers.

91
00:05:23.215 --> 00:05:26.355
Now we need to create our two queue managers. Let's do it.
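
As a sketch, each queue manager can be created with a QueueManager custom resource like the one below, assuming the configuration above is stored in a ConfigMap named uniform-cluster-config; the license ID and version shown are illustrative:

oc apply -f - <<'EOF'
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: qm1
spec:
  license:
    accept: true
    license: L-XXXX-XXXXXX        # illustrative; use the ID matching your MQ version
    use: NonProduction
  version: 9.3.0.0-r1             # illustrative
  queueManager:
    name: QM1
    availability:
      type: NativeHA              # one active pod plus two standby replicas
    ini:
      - configMap:
          name: uniform-cluster-config
          items:
            - uniform-cluster.ini
EOF

Repeating this with qm2/QM2 gives the two-member uniform cluster used in this demo.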

92
00:05:29.585 --> 00:05:32.615
Great. Now let's confirm the instances have

93
00:05:32.615 --> 00:05:33.855
been deployed successfully

94
00:05:33.875 --> 00:05:35.615
before moving to the next step.

95
00:05:38.895 --> 00:05:42.195
We need to create the client channel definition table,

96
00:05:42.415 --> 00:05:45.795
or CCDT, to be used by our application,

97
00:05:46.015 --> 00:05:48.595
and deploy an NGINX instance

98
00:05:48.695 --> 00:05:50.275
to serve the CCDT.
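
A sketch of what a JSON CCDT for this setup might contain: two entries share one channel name and the queue manager group ANY_UNICLUSTER, so a client can connect to either queue manager, and affinity none lets connections spread out. Hostnames and the channel name are illustrative:

cat <<'EOF' > ccdt.json
{
  "channel": [
    {
      "name": "UNICLUSTER.SVRCONN",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "qm1-ibm-mq", "port": 1414 } ],
        "queueManager": "ANY_UNICLUSTER"
      },
      "connectionManagement": { "affinity": "none", "clientWeight": 1 }
    },
    {
      "name": "UNICLUSTER.SVRCONN",
      "type": "clientConnection",
      "clientConnection": {
        "connection": [ { "host": "qm2-ibm-mq", "port": 1414 } ],
        "queueManager": "ANY_UNICLUSTER"
      },
      "connectionManagement": { "affinity": "none", "clientWeight": 1 }
    }
  ]
}
EOF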

99
00:05:57.185 --> 00:06:00.605
The NGINX service was created to be used by our application.

100
00:06:01.225 --> 00:06:03.005
Now we can deploy our application.

101
00:06:09.255 --> 00:06:12.515
Now that the uniform cluster is running, we can proceed

102
00:06:12.535 --> 00:06:15.635
to deploy the application that will be interacting

103
00:06:15.865 --> 00:06:17.155
with the queue managers.
104
00:06:20.735 --> 00:06:24.125
First, we will switch to the developer perspective.

105
00:06:24.585 --> 00:06:27.325
In this perspective, you can view the queue managers.

106
00:06:27.995 --> 00:06:31.445
Here you will see the tiles representing each queue manager.

107
00:06:33.465 --> 00:06:37.565
For demo purposes, we have pre-created the JMS application

108
00:06:37.565 --> 00:06:39.405
that will use our queue managers.

109
00:06:41.745 --> 00:06:42.615
Let's deploy it.
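
As a sketch, the application deployment could look like the following; the image, CCDT URL, and environment variable names are assumptions about the pre-built demo app rather than its actual manifest:

oc apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-producer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-producer
  template:
    metadata:
      labels:
        app: my-producer
    spec:
      containers:
        - name: producer
          image: quay.io/example/jms-producer:latest   # hypothetical demo image
          env:
            - name: CCDT_URL                           # hypothetical variable read by the app
              value: "http://ccdt-nginx:8080/ccdt.json"
            - name: APP_NAME                           # the name reported to MQ as APPLTAG
              value: "my producer"
EOF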

110
00:06:51.795 --> 00:06:54.735
Now let's review our application connection.

111
00:06:55.205 --> 00:06:58.815
From here, we can easily check the application log.

112
00:07:01.695 --> 00:07:04.245
Great. Our application was able to connect

113
00:07:04.265 --> 00:07:07.525
to a queue manager and it is sending messages.

114
00:07:14.165 --> 00:07:16.545
Now that the MQ application is deployed,

115
00:07:17.035 --> 00:07:19.665
let's check the behavior with the uniform cluster.

116
00:07:24.065 --> 00:07:25.975
Let's open queue manager two.

117
00:07:29.415 --> 00:07:33.555
The pod ending with zero is, by default, the active instance.

118
00:07:34.165 --> 00:07:38.795
Let's explore it. In order

119
00:07:38.975 --> 00:07:40.515
to check the connection status,

120
00:07:41.055 --> 00:07:43.955
we will use the DISPLAY CONN command,

121
00:07:44.735 --> 00:07:47.075
and we will filter by the MQ app name,

122
00:07:47.265 --> 00:07:48.995
that is, 'my producer'.

123
00:07:50.055 --> 00:07:51.675
We will execute the command

124
00:07:52.395 --> 00:07:55.035
directly from the terminal in each MQ pod.
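
Concretely, the check looks something like this (pod and queue manager names are illustrative); the same MQSC line can be typed in the pod terminal:

oc exec qm1-ibm-mq-0 -- /bin/bash -c \
  "echo \"DISPLAY CONN(*) WHERE(APPLTAG EQ 'my producer')\" | runmqsc QM1"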

125
00:07:55.825 --> 00:07:59.415
Right now, we don't have any connections in this pod,

126
00:07:59.875 --> 00:08:02.855
but in the next step we will get a better picture of

127
00:08:02.855 --> 00:08:04.775
how the connections are distributed.

128
00:08:07.475 --> 00:08:09.935
Now let's explore queue manager one.

129
00:08:10.515 --> 00:08:12.695
Let's repeat the same procedure as

130
00:08:12.695 --> 00:08:14.855
before to select the active pod.

131
00:08:17.115 --> 00:08:20.815
Now let's check the connection status in this MQ pod.

132
00:08:24.145 --> 00:08:27.555
This time we see a couple of active connections proving

133
00:08:27.665 --> 00:08:29.595
that the application we deployed is

134
00:08:29.665 --> 00:08:31.195
connected to the cluster.

135
00:08:37.845 --> 00:08:41.545
At the moment, our application is running in a single pod

136
00:08:41.925 --> 00:08:44.465
and therefore it is only connected to one

137
00:08:44.485 --> 00:08:45.545
of the queue managers.

138
00:08:45.965 --> 00:08:47.785
But what if the workload increases

139
00:08:47.885 --> 00:08:49.305
and I need to scale my app?

140
00:08:50.195 --> 00:08:51.665
Let's simulate the scenario

141
00:08:52.125 --> 00:08:54.505
and see how the connections are distributed.

142
00:08:58.565 --> 00:09:01.715
Let's explore the Deployments view of our application.

143
00:09:02.695 --> 00:09:06.545
Here we can see there is only one pod. Let's increase it

144
00:09:06.545 --> 00:09:08.025
to have two instances.
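
From the command line the equivalent would be (deployment name as assumed earlier):

oc scale deployment/my-producer --replicas=2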

145
00:09:13.165 --> 00:09:17.625
Now let's check how many connections we have per queue manager.

146
00:09:18.615 --> 00:09:21.385
This time we should see that each queue manager

147
00:09:21.925 --> 00:09:23.625
has a couple of connections.
148
00:09:31.095 --> 00:09:33.955
We observed how each instance will connect

149
00:09:33.955 --> 00:09:35.955
to a different queue manager, trying

150
00:09:36.135 --> 00:09:38.315
to keep a homogeneous distribution,

151
00:09:39.155 --> 00:09:41.175
but what would happen if one

152
00:09:41.175 --> 00:09:42.895
of the queue managers goes down?

153
00:09:43.545 --> 00:09:44.375
Let's find out.

154
00:09:48.865 --> 00:09:50.935
Let's check our queue managers in the

155
00:09:50.935 --> 00:09:52.615
installed operators page.
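
The same check works from a terminal, assuming the resources are named as in the earlier sketches:

oc get queuemanagers    # each queue manager should report a Running phase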

156
00:09:57.285 --> 00:10:00.105
We could kill one of the active pods for any

157
00:10:00.105 --> 00:10:01.345
of the queue managers,

158
00:10:01.645 --> 00:10:04.585
but since we have configured Native HA, one

159
00:10:04.585 --> 00:10:06.545
of the standby instances will take over

160
00:10:06.965 --> 00:10:09.545
and at the end, each queue manager will keep

161
00:10:09.665 --> 00:10:10.705
a couple of connections.

162
00:10:11.285 --> 00:10:13.225
So in this case, we will go ahead
163
00:10:13.225 --> 00:10:15.025
and fully delete the queue manager.
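
Deleting the QueueManager custom resource removes the queue manager together with all of its pods; from the CLI this would be:

oc delete queuemanager qm2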

164
00:10:19.065 --> 00:10:22.405
If we try to navigate back to the active pod

165
00:10:22.425 --> 00:10:23.525
for queue manager two,

166
00:10:24.025 --> 00:10:27.165
we will get an error message since the queue manager and

167
00:10:27.165 --> 00:10:29.965
therefore its pods have been deleted already.

168
00:10:33.145 --> 00:10:37.275
However, if we navigate to the active pod for queue manager one

169
00:10:37.495 --> 00:10:39.435
and submit the command to check the number

170
00:10:39.435 --> 00:10:40.755
of active connections,

171
00:10:41.335 --> 00:10:43.995
we will see all the connections are directed

172
00:10:44.095 --> 00:10:47.555
to the active queue manager, ensuring the client application

173
00:10:47.695 --> 00:10:49.435
can continue sending messages.

174
00:10:51.415 --> 00:10:53.795
Now let's recreate queue manager two.

175
00:10:54.295 --> 00:10:57.955
For this demo, we will recreate it using the command line

176
00:10:57.955 --> 00:11:01.075
interface, but in a production environment,

177
00:11:01.495 --> 00:11:03.395
we can use a GitOps approach.
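
A sketch of the CLI recreation, assuming the original custom resource was saved to a file:

oc apply -f qm2-queuemanager.yaml
oc get queuemanager qm2 -w    # watch until the phase returns to Running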

178
00:11:09.165 --> 00:11:11.815
Once we confirm both queue managers are up

179
00:11:11.815 --> 00:11:14.095
and running, we can go back to the terminal

180
00:11:14.315 --> 00:11:18.335
of the active pod for each queue manager to check the number

181
00:11:18.355 --> 00:11:19.535
of active connections.

182
00:11:22.625 --> 00:11:26.325
And a similar behavior would happen if additional queue

183
00:11:26.325 --> 00:11:28.845
managers were added to the uniform cluster.

184
00:11:29.465 --> 00:11:32.445
The connections would be rebalanced, providing a way

185
00:11:32.465 --> 00:11:33.885
to scale horizontally.
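
One way to observe this rebalancing on a recent MQ level (9.2 or later) is the MQSC DISPLAY APSTATUS command, which reports how instances of an application are balanced across the uniform cluster; names are illustrative as before:

oc exec qm1-ibm-mq-0 -- /bin/bash -c \
  "echo \"DISPLAY APSTATUS('my producer')\" | runmqsc QM1"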

186
00:11:36.475 --> 00:11:39.625
Great. Here we have arrived at the conclusion

187
00:11:39.845 --> 00:11:40.985
of our demonstration.

188
00:11:44.655 --> 00:11:46.845
Let's summarize what we've done today.

189
00:11:49.915 --> 00:11:53.785
In the demo, we accessed the Cloud Pak

190
00:11:53.805 --> 00:11:55.305
for Integration environment

191
00:11:55.445 --> 00:11:57.985
and explored the IBM MQ capabilities,

192
00:11:59.595 --> 00:12:04.435
deployed a uniform cluster, deployed an MQ application,

193
00:12:05.765 --> 00:12:08.055
validated the uniform cluster connectivity,

194
00:12:08.995 --> 00:12:12.855
scaled the MQ application, and rebalanced the connections.

195
00:12:16.495 --> 00:12:19.225
From an operations perspective, we showed

196
00:12:19.245 --> 00:12:22.265
how we can easily scale the number of instances

197
00:12:22.565 --> 00:12:24.265
of your client applications up

198
00:12:24.265 --> 00:12:26.025
and down without having

199
00:12:26.085 --> 00:12:28.225
to reconfigure their connection details

200
00:12:28.685 --> 00:12:30.865
and without needing to manually distribute

201
00:12:31.005 --> 00:12:32.585
or load balance them.

202
00:12:36.205 --> 00:12:38.945
And here we demonstrated how to quickly

203
00:12:39.165 --> 00:12:41.425
and easily grow the queue manager cluster,

204
00:12:41.765 --> 00:12:43.265
adding a new queue manager

205
00:12:43.565 --> 00:12:46.505
to the cluster without complex configuration.

206
00:12:50.285 --> 00:12:51.915
Thank you for your attention.
