IIB Questions - 1
Multiple records in an XML file with different designations; records with the same designation are handled one by one:
IIB Interview Ques-26 || Sorting data by same 'Field Values' || Use case implementation - YouTube
API specifications can be written in YAML or JSON. The format is easy to learn and readable to both
humans and machines. The complete OpenAPI Specification can be found on GitHub: OpenAPI 3.0
Specification
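A minimal sketch of what an OpenAPI 3.0 definition can look like in YAML (the API name, path, and response below are made-up placeholders, not from any real API):

openapi: 3.0.0
info:
  title: Customer API
  version: "1.0.0"
paths:
  /customers/{id}:
    get:
      summary: Retrieve a customer by id
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Customer found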
Difference between SOAP and HTTP:
1. SOAP stands for Simple Object Access Protocol; HTTP stands for Hypertext Transfer Protocol.
2. SOAP is XML based and is used for sending and receiving messages; HTTP is used to transfer information over the internet.
3. SOAP supports WS-* standards such as WS-Addressing, WS-Security, and SwA; HTTP does not support WS-Addressing, WS-Security, or SwA.
4. SOAP runs over HTTP; HTTP runs over TCP and IP.
5. SOAP supports runtime checking against a WSDL; HTTP does not support runtime checking against a WSDL.
6. SOAP supports automatic processing of Message Transmission Optimization Mechanism (MTOM) attachments; HTTP also supports MTOM, but it must use the MIME message domain and the flow must be designed to handle the attachments explicitly.
7. SOAP's design is data centric; HTTP's design is document centric.
8. SOAP is a lightweight data interchange protocol; HTTP is not as lightweight a data transfer protocol as SOAP.
DFDL Usage:
DFDL: records separated by CRLF (carriage return, line feed).
Testing the DFDL: click Create logical instance, then run the serializer (Test Serialize model).
Testing the DFDL with an input message: click Test Parse model.
RESTAPI:
WSDL:
SOAP:
Difference between failure and catch:
The difference is that a message routed through the Failure terminal consists of the
original message, as if it had just been taken from the input queue. A message routed
through the Catch terminal consists of the original message AND an exception list.
IIB Interview Ques-2 || MQ Failure and Catch Terminals || What if Catch is not connected? - YouTube
Types of queues:
Different types of Queue's in IBM MQ and its usage - Ask Middleware Expert
What is the difference between the Mapping node and Compute node transformation?
In the Compute node you transform the message by coding ESQL in the attached ESQL resource file.
In the Mapping node you use graphical maps to transform the input message, by associating an
input message model (such as a DFDL or XML schema, or an MRM message set) with an output
message model.
In the Compute node you can change the entire message, including the headers. In the Mapping
node you can change the message assembly, message body, and properties.
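For example, a minimal Compute node sketch of the kind of header change described above (the queue name is a made-up value):

-- copy the entire input message, headers included, then modify the MQMD header
SET OutputRoot = InputRoot;
SET OutputRoot.MQMD.ReplyToQ = 'MY.REPLY.QUEUE';
SET OutputRoot.MQMD.Priority = 5;
RETURN TRUE;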
ACE makes lower use of resources such as CPU, memory, and disk.
ACE uses only a single OS process, while IIB needs multiple processes for a single
integration server.
ACE does not have a separate data flow engine process, because it removed the dependency on
IIB running as multiple processes such as bipbroker, bipservice, bipMQTT, and DataFlowEngine.
ACE supports single-step deployment and configuration with policies in the BAR file,
while in IIB we would use configurable services to override or abstract a specific
property needed by a message flow.
For example: sensitive data that differs between DEV, TST and PROD is abstracted in IIB by
creating configurable services, and the BAR file is then deployed as a separate step.
In ACE we use policies instead of configurable services. Policies can be packaged into the
BAR file, so a change release is a single deployment.
In IIB, if we make changes to configurable services, we have to reload the integration server
for the changes to take effect.
ACE v11 lets the developer create policies and policy projects.
The mqsistartmsgflow and mqsistopmsgflow commands were enhanced in ACE, extending control to
integration servers and applications.
From ACE fix pack 6 we can store credentials in encrypted form; these encrypted credentials
are used by flows running in an integration server. For example, to access a database through
a JDBC provider policy in an integration server, a credential is referred to using the
--vault-key myvaultkey option and the credentials command (see the command sketch after this list).
In IIB v10 we used mqsisetdbparms to store credentials.
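A hedged sketch of both approaches (the node name, work directory, security identity, vault key, and credential values below are placeholders, and the exact ACE flag names can vary by fix pack, so check the command help):

# IIB v10: store JDBC credentials against the integration node
mqsisetdbparms IIBNODE -n jdbc::MyJdbcSecurityId -u dbUser -p dbPassword

# ACE 11.0.0.6 and later (sketch only): create a vault for the work directory,
# then store an encrypted JDBC credential in it
mqsivault --work-dir /home/aceuser/ace-server --create --vault-key myVaultKey
mqsicredentials --work-dir /home/aceuser/ace-server --create --vault-key myVaultKey \
  --credential-type jdbc --credential-name MyJdbcSecurityId --username dbUser --password dbPassword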
Route to label:
How does the RouteToLabel node route messages without any routing code in the node itself?
1. First check whether the 'Set destination list' property is selected.
2. The routing data comes from the local environment (see the ESQL sketch after this list).
3. The Label Name and the operation name must be the same.
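A minimal ESQL sketch of the destination list a Compute node (with Compute mode including LocalEnvironment) builds for the RouteToLabel node; 'ProcessOrder' is a hypothetical Label Name / operation name:

-- the RouteToLabel node reads this list and routes the message
-- to the Label node whose Label Name matches labelName
SET OutputLocalEnvironment.Destination.RouterList.DestinationData[1].labelName = 'ProcessOrder';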
2xx Successful
200 OK
201 Created
202 Accepted
203 Non-Authoritative Information
204 No Content
205 Reset Content
206 Partial Content
207 Multi-Status
208 Already Reported
226 IM Used
3xx Redirection
300 Multiple Choices
301 Moved Permanently
302 Found (Previously "Moved Temporarily")
303 See Other
304 Not Modified
305 Use Proxy
306 Switch Proxy
307 Temporary Redirect
308 Permanent Redirect
-- ELSE branch of an exception-handling example; the IF branch (not shown here) is assumed
-- to throw a user exception when the invoice number is outside the permitted range
ELSE
   -- invoice number is within permitted range
   -- complete normal processing
END IF;
The client asked to change the timeout interval to 50 seconds, and the next day to 60 seconds. How do you do it?
Create a configurable service through the web UI or by running the corresponding mqsi command,
and set the timeout interval (for example 50 seconds) there, so the value can be changed without
redeploying the flow.
The TimeoutNotification node is controlled through a TimeoutControl node; the two nodes must be
configured with the same Unique identifier.
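A rough ESQL sketch of the timeout request a flow could send to the TimeoutControl node, following the documented timeout request fields; 'MyTimeout' is a hypothetical request identifier, and the TimeoutControl and TimeoutNotification nodes are paired by their Unique identifier node property:

SET OutputRoot.XMLNSC.TimeoutRequest.Action     = 'SET';
SET OutputRoot.XMLNSC.TimeoutRequest.Identifier = 'MyTimeout';
SET OutputRoot.XMLNSC.TimeoutRequest.StartDate  = 'TODAY';
SET OutputRoot.XMLNSC.TimeoutRequest.StartTime  = 'NOW';
SET OutputRoot.XMLNSC.TimeoutRequest.Interval   = 50;  -- seconds; change to 60 when the requirement changes
SET OutputRoot.XMLNSC.TimeoutRequest.Count      = 1;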
Basic Commands:
Link: IIB Useful Commands – vaithu
Link: IIB Admin and Development Useful Commands – Arquitectura IBMss
Functions:
ESQL functions - IBM Documentation
How do you handle exceptions in your message flow?
We created a shared library that handles exceptions. It accesses the exception tree, finds the
type of failure (for example database, cast, or parsing), and retrieves the exception code and
exception message in an XML format.
In the main flow we use that exception code and message to customize the response.
We created a common subflow in a shared library. That subflow contains a Compute node: first it
finds the type of failure in the exception list (database, parsing, user-generated, or socket
exception), then it fetches the corresponding exception message and exception code (see the
ESQL sketch below).
This information is stored in environment variables, and alongside it an error message is built
in XML format.
This XML message is sent to the error flow, where the exception details are inserted
into the DB.
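A minimal sketch of the kind of ESQL such a subflow's Compute node can use to walk the exception list and pick up the deepest error number and text (procedure and variable names are illustrative):

CREATE PROCEDURE GetLastExceptionDetail(IN exceptionTree REFERENCE, OUT errorNumber INTEGER, OUT errorText CHARACTER)
BEGIN
    -- walk down the last child of each level; the deepest entry
    -- that carries a Number field holds the original error
    DECLARE ptr REFERENCE TO exceptionTree;
    MOVE ptr FIRSTCHILD;
    WHILE LASTMOVE(ptr) DO
        IF ptr.Number IS NOT NULL THEN
            SET errorNumber = ptr.Number;
            SET errorText   = ptr.Text;
        END IF;
        MOVE ptr LASTCHILD;
    END WHILE;
END;

-- example call from Main():
--   DECLARE errNum INTEGER; DECLARE errTxt CHARACTER;
--   CALL GetLastExceptionDetail(InputExceptionList, errNum, errTxt);
--   SET Environment.Variables.ErrorCode = errNum;
--   SET Environment.Variables.ErrorText = errTxt;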
Environment variables are global: once declared, they are available until the transaction
completes.
Local environment variables are local to the node path: to modify them in a Compute node you
must set the Compute mode to include LocalEnvironment, and we mainly use the local environment
for the HTTP request URL and for file and MQ output destination details (see the sketch below).
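A short ESQL sketch contrasting the two trees (the URL, directory, file, and queue names are made-up examples):

-- Environment tree: visible to every node in the flow until the transaction completes
SET Environment.Variables.ErrorCode = '0';

-- LocalEnvironment tree (Compute mode must include LocalEnvironment for these to propagate):
SET OutputLocalEnvironment.Destination.HTTP.RequestURL = 'http://myhost:7800/customer/service'; -- HTTPRequest node URL
SET OutputLocalEnvironment.Destination.File.Directory  = '/data/out';                         -- FileOutput node directory
SET OutputLocalEnvironment.Destination.File.Name       = 'result.xml';                        -- FileOutput node file name
SET OutputLocalEnvironment.Destination.MQ.DestinationData[1].queueName = 'OUT.QUEUE';         -- MQOutput node (Destination mode: Destination List)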
We created a shared library, and wherever we want to log the message we use that library's
subflow and attach it to the corresponding terminals.
What is the difference between the transaction modes available in the Compute node?
There are two transaction modes on the Compute node:
Automatic: the DB transaction is committed only after successful completion of the flow's transaction.
Commit: the DB transaction is committed as soon as the DB query completes.
Function:
Listeners:
Switching from an integration node listener to embedded listeners - IBM Documentation
By default, the HTTP nodes use the integration node-wide (broker-wide) listener.
By default, the SOAP nodes use the embedded listener (also called the integration server listener).
Check the port number:
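A hedged sketch of checking the listener ports with mqsireportproperties (the node and server names are placeholders):

# integration node (broker-wide) HTTP listener port
mqsireportproperties IIBNODE -b httplistener -o HTTPConnector -r

# embedded (integration server) HTTP listener port
mqsireportproperties IIBNODE -e default -o HTTPConnector -r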
Certificates:
Global Cache:
Container server = another integration server, where a different application is deployed; that
application accesses the values held by the catalog server, so the server is called a container
server.
Catalog server (primary server) = the integration server to which the application implementing
the shared-data (global cache) concept is deployed; that server is called the catalog server.
If the integration node is reloaded, the catalog server restarts and the cached values are lost.
Maximum catalog servers: 2. At least one catalog server is needed for the global cache; without
a catalog server the global cache cannot be implemented.
Maximum container servers: 4.
When enabling the global cache across multiple integration nodes, a cache policy file is needed.
The policy file contains the integration nodes, how many servers run in each integration node,
and the catalog server details.
Request:
{
"entities": [
{
"entityId": "104477",
"systemId": "CCCTP1",
"segmentType": "Company",
"customerName": "104477",
"countryCode": "CYP",
"modelId": "BOCESM",
"sourceCurrency": "EUR",
"accountingMethod": "",
"consolidated": "N",
"addresses": [
{
"entityId": "104477",
"addressType": "",
"address1": "104477",
"address2": "104477",
"country": "CYP",
"state": "",
"city": "104477",
"postCode": "104477"
}
]
}
]
}
Response:
-- Build the JSON response by echoing back each entity (and its nested addresses) from the request.
-- refResponse is assumed to be a reference to the output JSON Data folder
-- (for example OutputRoot.JSON.Data), declared earlier in the module.
DECLARE refInputRoot REFERENCE TO InputRoot.JSON.Data;
DECLARE i INTEGER 1;
CREATE LASTCHILD OF refResponse NAME 'entities';
SET refResponse.entities TYPE = JSON.Array;
DECLARE refEntities REFERENCE TO refResponse.entities;
FOR refEntitiesInput AS refInputRoot.entities.Item[] DO
   -- each array element is a (JSON.Object) child named Item
   CREATE LASTCHILD OF refEntities TYPE = JSON.Object NAME 'Item';
   SET refEntities.Item[i].entityId         = refEntitiesInput.entityId;
   SET refEntities.Item[i].systemId         = refEntitiesInput.systemId;
   SET refEntities.Item[i].segmentType      = refEntitiesInput.segmentType;
   SET refEntities.Item[i].customerName     = refEntitiesInput.customerName;
   SET refEntities.Item[i].countryCode      = refEntitiesInput.countryCode;
   SET refEntities.Item[i].modelId          = refEntitiesInput.modelId;
   SET refEntities.Item[i].sourceCurrency   = refEntitiesInput.sourceCurrency;
   SET refEntities.Item[i].accountingMethod = refEntitiesInput.accountingMethod;
   SET refEntities.Item[i].consolidated     = refEntitiesInput.consolidated;
   -- copy the nested addresses array for this entity
   DECLARE j INTEGER 1;
   SET refEntities.Item[i].addresses TYPE = JSON.Array;
   DECLARE refAddresses REFERENCE TO refEntities.Item[i].addresses;
   FOR refAddressesInput AS refEntitiesInput.addresses.Item[] DO
      CREATE LASTCHILD OF refAddresses TYPE = JSON.Object NAME 'Item';
      SET refAddresses.Item[j].entityId    = refAddressesInput.entityId;
      SET refAddresses.Item[j].addressType = refAddressesInput.addressType;
      SET refAddresses.Item[j].address1    = refAddressesInput.address1;
      SET refAddresses.Item[j].address2    = refAddressesInput.address2;
      SET refAddresses.Item[j].country     = refAddressesInput.country;
      SET refAddresses.Item[j].state       = refAddressesInput.state;
      SET refAddresses.Item[j].city        = refAddressesInput.city;
      SET refAddresses.Item[j].postCode    = refAddressesInput.postCode;
      SET j = j + 1;
   END FOR;
   SET i = i + 1;
END FOR;
PassThru:
WebSphere MQ:
Queues:
Channel:
Linux Commands link:
https://www.youtube.com/watch?v=qjlwbS8SGEA
Policy:
If you require a policy to control the node behavior at run time, you can either generate a
new policy document based on the current operational properties of the node by clicking
Generate new policy, or you can use a predefined policy document. If you want to use a
predefined policy document, enter the policy URL manually.
Note, you can find out the URL of a saved policy by viewing the policy in the web user
interface. In the navigation tree, expand Operational Policy followed by the policy type, and
select the policy name that you want to view.
For more information, click the Operational policy link.
Trace Node:
Used to record message and exception data for debugging; the output can be written to a file,
to user trace, or to the local error log, so the data is saved in one location.
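A typical Trace node Pattern property uses ${...} ESQL expressions to pick what gets written, for example:

Timestamp:   ${CURRENT_TIMESTAMP}
Message:     ${Root}
Exceptions:  ${ExceptionList}
Environment: ${Environment}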
Opaque Parsing:
Parsing creates the message tree. Opaque parsing does not create the tree structure for a chosen
element, which improves performance: for example, when an XML file contains a large number of
tags but some elements are never accessed by the flow, those elements can be parsed opaquely.
ATOMIC Block:
Used to serialize access to SHARED variables: when a flow caches database data in shared
variables, the cache refresh is wrapped in a BEGIN ATOMIC ... END block so only one instance
updates it at a time.
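A minimal sketch of the pattern (SHARED variables and a hypothetical REF_DATA table; the block label is arbitrary):

-- module-level declarations: shared across all threads/instances of the flow
DECLARE refDataCache SHARED ROW;
DECLARE cacheExpiry  SHARED TIMESTAMP;

-- inside Main(): serialize the refresh so only one instance reloads the cache at a time
RefreshCache : BEGIN ATOMIC
    IF cacheExpiry IS NULL OR cacheExpiry < CURRENT_TIMESTAMP THEN
        SET refDataCache.Rows[] = (SELECT R.* FROM Database.REF_DATA AS R);  -- hypothetical table
        SET cacheExpiry = CURRENT_TIMESTAMP + INTERVAL '1' HOUR;
    END IF;
END RefreshCache;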
EVAL Statement:
EVAL takes a character string and executes it as an ESQL statement, so the logic to run can be
decided dynamically at run time.
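A small sketch (the field name is a made-up example); EVAL executes the string it is given as an ESQL statement:

-- decide at run time which input field to read, then build and run the statement dynamically
DECLARE fieldName  CHARACTER 'customerName';
DECLARE fieldValue CHARACTER;
EVAL('SET fieldValue = InputRoot.JSON.Data.entities.Item[1].' || fieldName || ';');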