MCIA-Level 1 MuleSoft Exam Practice Questions
Important Note:
For full access to the complete question bank and topic-wise explanations, visit:
CertQuestionsBank.com
FB page: https://www.facebook.com/certquestionsbank
Share some MCIA-Level 1 exam online questions below.
1.According to MuleSoft, which principle is common to both Service Oriented Architecture (SOA) and
API-led connectivity approaches?
A. Service centralization
B. Service statefulness
C. Service reusability
D. Service interdependence
Answer: C
2.In Anypoint Platform, a company wants to configure multiple identity providers (IdPs) for multiple
lines of business (LOBs). Multiple business groups, teams, and environments have been defined for
these LOBs.
What Anypoint Platform feature can use multiple IdPs across the company’s business groups, teams,
and environments?
A. MuleSoft-hosted (CloudHub) dedicated load balancers
B. Client (application) management
C. Virtual private clouds
D. Permissions
Answer: A
Explanation:
To use a dedicated load balancer in your environment, you must first create an Anypoint VPC.
Because you can associate multiple environments with the same Anypoint VPC, you can use the
same dedicated load balancer for your different environments.
Reference: https://docs.mulesoft.com/runtime-manager/cloudhub-dedicated-load-balancer
3.An organization's security requirements mandate centralized control at all times over authentication
and authorization of external applications when invoking web APIs managed on Anypoint Platform.
What Anypoint Platform feature is most idiomatic (used for its intended purpose), straightforward, and
maintainable to use to meet this requirement?
A. Client management configured in access management
B. Identity management configured in access management
C. Enterprise Security module coded in Mule applications
D. External access configured in API Manager
Answer: B
Explanation:
Reference: https://blogs.mulesoft.com/dev-guides/api-security-ways-to-authenticate-and-authorize/
4.An organization's IT team follows an API-led connectivity approach and must use Anypoint Platform
to implement a System API that securely accesses customer data. The organization uses Salesforce
as the system of record for all customer data, and its most important objective is to reduce the overall
development time to release the System API.
The team's integration architect has identified four different approaches to access the customer data
from within the implementation of the System API by using different Anypoint Connectors that all meet
the technical requirements of the project.
Which approach best meets the organization's objective?
A. Use the Anypoint Connector for Database to connect to a MySQL database to access a copy of
the customer data
B. Use the Anypoint Connector for HTTP to connect to the Salesforce APIs to directly access the
customer data
C. Use the Anypoint Connector for Salesforce to connect to the Salesforce APIs to directly access the
customer data
D. Use the Anypoint Connector for FTP to download a file containing a recent near real-time extract of
the customer data
Answer: C
5.According to the Internet Engineering Task Force (IETF), which supporting protocol does File
Transfer Protocol (FTP) use for reliable communication?
A. Secure Sockets Layer (SSL)
B. Transmission Control Protocol (TCP)
C. Lightweight Directory Access Protocol (LDAP)
D. Hypertext Transfer Protocol (HTTP)
Answer: B
6.A business process involves two APIs that interact with each other asynchronously over HTTP. Each
API is implemented as a Mule application. API 1 receives the initial HTTP request and invokes API 2
(in a fire-and-forget fashion), while API 2, upon completion of the processing, calls back into API 1 to
notify about completion of the asynchronous process.
Each API is deployed to multiple redundant Mule runtimes with a separate load balancer, and is
deployed to a separate network zone.
In the network architecture, how must the firewall rules be configured to enable the above interaction
between API 1 and API 2?
A. To authorize the certificate to be used both APIs
B. To enable communication from each API’s Mule Runtimes and Network zone to the load balancer
of the other API
C. To open direct two-way communication between the Mule Runtimes of both API’s
D. To allow communication between load balancers used by each API
Answer: B
Explanation:
* If your API implementation involves putting a load balancer in front of your APIkit application,
configure the load balancer to redirect URLs that reference the baseUri of the application directly. If
the load balancer does not redirect URLs, any calls that reach the load balancer looking for the
application do not reach their destination.
* When you receive incoming traffic through the load balancer, the responses will go out the same
way. However, traffic that is originating from your instance will not pass through the load balancer.
Instead, it is sent directly from the public IP address of your instance out to the Internet. The ELB is
not involved in that scenario.
* The question says "each API is deployed to multiple redundant Mule runtimes", which seems to be a
hint at a self-hosted Mule runtime cluster. Inbound traffic must be allowed to the load balancer, and
outbound traffic must be allowed from the runtimes to make requests out.
* Hence the correct approach is to enable communication from each API's Mule runtimes and network
zone to the load balancer of the other API, because the communication is asynchronous.
Reference: https://docs.mulesoft.com/apikit/4.x/configure-load-balancer-task
7.A company is planning to extend its Mule APIs to the Europe region. Currently all new applications
are deployed to CloudHub in the US region following this naming convention:
{API name}-{environment}, for example, Orders-SAPI-dev, Orders-SAPI-prod, etc.
Considering there is no network restriction to block communication between APIs, what strategy
should be implemented in order to run the same new APIs in the EU region of CloudHub
as well, to minimize latency between APIs and target users and systems in Europe?
A. Set the region property to Europe (eu-de) in API Manager for all the Mule applications. No need to
change the naming convention
B. Set the region property to Europe (eu-de) in API Manager for all the Mule applications. Change the
naming convention to {API name}-{environment}-{region} and communicate this change to the
consuming applications and users
C. Set the region property to Europe (eu-de) in Runtime Manager for all the Mule applications. No need
to change the naming convention
D. Set the region property to Europe (eu-de) in Runtime Manager for all the Mule applications. Change
the naming convention to {API name}-{environment}-{region} and communicate this change to the
consuming applications and users
Answer: D
8.A MuleSoft developer must implement an API as a Mule application, run the application locally, and
execute unit tests against the running application.
Which Anypoint Platform component can the developer use to fulfill all of these requirements?
A. API Manager
B. API Designer
C. Anypoint CLI
D. Anypoint Studio
Answer: D
9.A leading e-commerce giant will use MuleSoft APIs on Runtime Fabric (RTF) to process customer
orders. Sensitive customer information, such as credit card details, is also part of the API payload.
What approach minimizes the risk of matching sensitive data to the original, while allowing conversion
back to the original value whenever and wherever required?
A. Apply masking to hide the sensitive information, and then use API Manager to detokenize the
masking format to return the original value
B. Create a tokenization format and apply a tokenization policy to the API gateway
C. Use both masking and tokenization
D. Apply a field-level encryption policy in the API gateway
Answer: A
10.A company is designing a Mule application to consume batch data from a partner's FTPS server. The
data files have been compressed and then digitally signed using PGP.
What inputs are required for the application to securely consume these files?
A. A TLS context key store containing the private key and certificate for the company
The PGP public key of the partner
The PGP private key for the company
B. A TLS context trust store containing a public certificate for the partner's FTPS server and the PGP
public key of the partner
A TLS context key store containing the FTP credentials
C. A TLS context trust store containing a public certificate for the FTPS server
The FTP username and password
The PGP public key of the partner
D. The PGP public key of the partner
The PGP private key for the company
The FTP username and password
Answer: D
11.An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0
following accepted semantic versioning practices and the changes have been communicated via the
API's public portal. The API endpoint does NOT change in the new version.
How should the developer of an API client respond to this change?
A. The update should be identified as a project risk and full regression testing of the functionality that
uses this API should be run.
B. The API producer should be contacted to understand the change to existing functionality.
C. The API producer should be requested to run the old version in parallel with the new one.
D. The API client code ONLY needs to be changed if it needs to take advantage of new features.
Answer: D
Explanation:
* Semantic Versioning is a 3-component number in the format X.Y.Z, where X stands for the major
version, Y stands for the minor version, and Z stands for the patch.
So, SemVer is of the form Major.Minor.Patch. Coming to our question, the minor version of the API has
been changed, which is backward compatible. Hence there is no change required on the API client end.
If the client wants to make use of new features that have been added as part of the minor version
change, it may need to change code at its end. Hence the correct answer is: The API client code ONLY
needs to be changed if it needs to take advantage of new features.
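The backward-compatibility rule described above can be sketched as a small helper. This is a hypothetical illustration in Python, not part of any MuleSoft tooling; the function names are invented for the example.

```python
def parse_semver(version):
    """Split an X.Y.Z semantic version string into integer components."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def client_change_required(old_version, new_version):
    """Minor and patch bumps are backward compatible, so only a major
    version change forces an API client to update its code."""
    return parse_semver(new_version)[0] != parse_semver(old_version)[0]

# 3.1.1 -> 3.2.0 is a minor bump, so no client change is required
print(client_change_required("3.1.1", "3.2.0"))  # False
print(client_change_required("3.2.0", "4.0.0"))  # True
```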
12.Which Mulesoft feature helps users to delegate their access without sharing sensitive credentials
or giving full control of accounts to 3rd parties?
A. Secure Scheme
B. client id enforcement policy
C. Connected apps
D. Certificates
Answer: C
Explanation:
Connected Apps
The Connected Apps feature provides a framework that enables an external application to integrate
with Anypoint Platform using APIs through OAuth 2.0 and OpenID Connect. Connected apps help
users delegate their access without sharing sensitive credentials or giving full control of their accounts
to third parties. Actions taken by connected apps are audited, and users can also revoke access at
any time. Note that some products do not currently include client IDs in this release of the Connected
Apps feature. The Connected Apps feature enables you to use secure authentication protocols and
control an app’s access to user data. Additionally, end users can authorize the app to access their
Anypoint Platform data.
Mule Ref Doc: https://docs.mulesoft.com/access-management/connected-apps-overview
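Under the hood, a connected app exchanges its client ID and secret for a token rather than sending user credentials. Below is a minimal sketch of what an OAuth 2.0 client-credentials request body looks like; the credential values are placeholders and no real Anypoint endpoint is called here.

```python
import json
from urllib.parse import urlencode

def build_token_request(client_id, client_secret):
    """Build the form-encoded body of an OAuth 2.0 client-credentials
    grant, the flow a connected app can use instead of user credentials."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })

# Placeholder credentials for illustration only
body = build_token_request("my-connected-app-id", "my-connected-app-secret")
print(body)

# A successful token response is JSON shaped roughly like this:
sample_response = json.loads(
    '{"access_token": "abc123", "token_type": "bearer", "expires_in": 3600}')
print(sample_response["access_token"])  # abc123
```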
13.A Mule application is being designed for deployment to a single CloudHub worker. The Mule
application will have a flow that connects to a SaaS system to perform some operations each time the
flow is invoked.
The SaaS system connector has operations that can be configured to request a short-lived token
(fifteen minutes) that can be reused for subsequent connections within the fifteen minute time
window. After the token expires, a new token must be requested and stored.
What is the most performant and idiomatic (used for its intended purpose) Anypoint Platform
component or service to use to support persisting and reusing tokens in the Mule application to help
speed up reconnecting the Mule application to the SaaS application?
A. Nonpersistent object store
B. Persistent object store
C. Variable
D. Database
Answer: D
Explanation:
Reference: https://docs.mulesoft.com/mule-runtime/4.4/reconnection-strategy-about
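The reuse pattern the question describes can be sketched with a small in-memory cache. In a Mule application the persisted state would live in an object store; the class below is a hypothetical Python stand-in with an injectable clock so the fifteen-minute expiry is easy to see.

```python
import time

TOKEN_TTL_SECONDS = 15 * 60  # the SaaS token is valid for fifteen minutes

class TokenCache:
    """Cache a short-lived token and re-request it only after expiry.
    `fetch_token` stands in for the connector's token operation."""

    def __init__(self, fetch_token, clock=time.monotonic):
        self._fetch_token = fetch_token
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or self._clock() >= self._expires_at:
            self._token = self._fetch_token()
            self._expires_at = self._clock() + TOKEN_TTL_SECONDS
        return self._token

calls = []
fake_now = [0.0]
cache = TokenCache(lambda: calls.append(1) or f"token-{len(calls)}",
                   clock=lambda: fake_now[0])
print(cache.get())  # token-1, fetched fresh
print(cache.get())  # token-1, reused from the cache
fake_now[0] = TOKEN_TTL_SECONDS + 1
print(cache.get())  # token-2, re-fetched after expiry
```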
14.A Mule application contains a Batch Job with two Batch Steps (Batch_Step_1 and Batch_Step_2).
A payload with 1000 records is received by the Batch Job.
How many threads are used by the Batch Job to process records, and how does each Batch Step
process records within the Batch Job?
A. Each Batch Job uses SEVERAL THREADS for the Batch Steps Each Batch Step instance
receives ONE record at a time as the payload, and RECORDS are processed IN PARALLEL within
and between the two Batch Steps
B. Each Batch Job uses a SINGLE THREAD for all Batch Steps. Each Batch Step instance receives
ONE record at a time as the payload, and RECORDS are processed IN ORDER, first through
Batch_Step_1 and then through Batch_Step_2
C. Each Batch Job uses a SINGLE THREAD to process a configured block size of records. Each Batch
Step instance receives A BLOCK OF records as the payload, and BLOCKS of records are processed
IN ORDER
D. Each Batch Job uses SEVERAL THREADS for the Batch Steps Each Batch Step instance
receives ONE record at a time as the payload, and BATCH STEP INSTANCES execute IN
PARALLEL to process records and Batch Steps in ANY order as fast as possible
Answer: A
Explanation:
* Each Batch Job uses SEVERAL THREADS for the Batch Steps
* Each Batch Step instance receives ONE record at a time as the payload. Records are not received in a
block, and a record does not wait for other records to complete before moving to the next Batch Step
(this rules out option C).
* RECORDS are processed IN PARALLEL within and between the two Batch Steps.
* RECORDS are not processed in strict order. For example, if the second record completes
Batch_Step_1 before record 1, it moves on to Batch_Step_2 before record 1 (this rules out options B
and D).
* A batch job is the scope element in an application in which Mule processes a message payload as a
batch of records. The term batch job is inclusive of all three phases of
processing: Load and Dispatch, Process, and On Complete.
* A batch job instance is an occurrence in a Mule application whenever a Mule flow executes a batch
job. Mule creates the batch job instance in the Load and Dispatch phase. Every batch job instance is
identified internally using a unique String known as batch job instance id.
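The threading behavior in the correct option can be imitated with a thread pool: each record passes through both steps on its own, with several records in flight at once. This is an illustrative Python analogy, not actual Mule batch internals.

```python
from concurrent.futures import ThreadPoolExecutor

def batch_step_1(record):
    return record * 2

def batch_step_2(record):
    return record + 1

def process_record(record):
    """Each record flows through both steps independently, one record
    at a time, so different records can be at different steps concurrently."""
    return batch_step_2(batch_step_1(record))

records = list(range(10))
with ThreadPoolExecutor(max_workers=4) as pool:  # several threads per job
    results = list(pool.map(process_record, records))

# pool.map returns results in input order even though execution interleaves
print(results)  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
```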
15.A manufacturing company is planning to deploy Mule applications to its own Azure Kubernetes
Service infrastructure.
The organization wants to make the Mule applications more available and robust by deploying each
Mule application to an isolated Mule runtime in a Docker container while managing all the Mule
applications from the MuleSoft-hosted control plane.
What is the most idiomatic (used for its intended purpose) choice of runtime plane to meet these
organizational requirements?
A. Anypoint Platform Private Cloud Edition
B. Anypoint Runtime Fabric
C. CloudHub
D. Anypoint Service Mesh
Answer: B
Explanation:
Reference: https://blogs.mulesoft.com/dev-guides/how-to-tutorials/anypoint-runtime-fabric/
16.A project team uses RAML specifications to document API functional requirements and deliver API
definitions. As per a current legal requirement, all designed API definitions must be augmented with an
additional non-functional requirement to protect the services from a high rate of requests according to
defined service-level agreements.
Assuming that the project is following MuleSoft API governance and policies, how should the project
team convey the necessary non-functional requirement to stakeholders?
A. Create proxies in API Manager for the non-functional requirement and publish to Exchange
B. Add all non-functional requirements as comments to the RAML specification and publish to Exchange
C. Create various SLAs in API Manager for the non-functional requirement and publish to Exchange
D. Update API definitions with the fragment for the appropriate policy and publish to Exchange
Answer: D
17.What Mule application can have API policies applied by Anypoint Platform to the endpoint exposed
by that Mule application?
A. A Mule application that accepts requests over HTTP/1x
B. A Mule application that accepts JSON requests over TCP but is NOT required to provide a
response.
C. A Mule application that accepts JSON requests over WebSocket
D. A Mule application that accepts gRPC requests over HTTP/2
Answer: A
Explanation:
* HTTP/1.1 keeps all requests and responses in plain text format.
* HTTP/2 uses the binary framing layer to encapsulate all messages in binary format, while still
maintaining HTTP semantics, such as verbs, methods, and headers. It came into use in 2015, and
offers several methods to decrease latency, especially when dealing with mobile platforms and server-
intensive graphics and videos
* Currently, API policies can be applied only to Mule applications that accept requests
over HTTP/1.x
18.A customer has deployed Mule applications to different customer-hosted Mule runtimes.
The Mule applications are managed from Anypoint Platform.
What needs to be configured to monitor these Mule applications from Anypoint Monitoring, and what
sends monitoring data to Anypoint Monitoring?
A. Enable monitoring of individual applications from the Runtime Manager application settings
The Runtime Manager agent sends monitoring data from the Mule applications to Anypoint Monitoring
B. Install the Runtime Manager agent on each Mule runtime
The Runtime Manager agent sends monitoring data from the Mule applications to Anypoint Monitoring
C. Install the Anypoint Monitoring agent on each Mule runtime
The Anypoint Monitoring agent sends monitoring data from the Mule applications to Anypoint Monitoring
D. By default, the Anypoint Monitoring agent will be installed on each Mule runtime
The Anypoint Monitoring agent automatically sends monitoring data from the Mule applications to
Anypoint Monitoring
Answer: C
19.Mule application A receives a request Anypoint MQ message REQU with a payload containing a
variable-length list of request objects. Application A uses the For Each scope to split the list into
individual objects and sends each object as a message to an Anypoint MQ queue.
Service S listens on that queue, processes each message independently of all other messages, and
sends a response message to a response queue.
Application A listens on that response queue and must in turn create and publish a response Anypoint
MQ message RESP with a payload containing the list of responses sent by service S in the same
order as the request objects originally sent in REQU.
Assume successful response messages are returned by service S for all request messages.
What is required so that application A can ensure that the length and order of the list of objects in
RESP and REQU match, while at the same time maximizing message throughput?
A. Use a Scatter-Gather within the For Each scope to ensure response message order Configure the
Scatter-Gather with a persistent object store
B. Perform all communication involving service S synchronously from within the For Each scope, so
objects in RESP are in the exact same order as request objects in REQU
C. Use an Async scope within the For Each scope and collect response messages in a second For
Each scope in the order in which they arrive, then send RESP using this list of responses
D. Keep track of the list length and all object indices in REQU, both in the For Each scope
and in all communication involving service S. Use persistent storage when creating RESP
Answer: B
Explanation:
The correct answer is: Perform all communication involving service S synchronously from within the For
Each scope, so objects in RESP are in the exact same order as request objects in REQU.
Using Anypoint MQ, you can create two types of queues:
* Standard queues: these queues don't guarantee a specific message order. Standard queues are the
best fit for applications in which messages must be delivered quickly.
* FIFO (first in, first out) queues: these queues ensure that your messages arrive in order. FIFO queues
are the best fit for applications requiring strict message ordering and exactly-once delivery, but in which
message delivery speed is of less importance.
Use of a FIFO queue is not among the options, and it would also decrease throughput. Similarly, a
persistent object store is not the preferred approach when maximizing message throughput, which rules
out one of the options. Scatter-Gather does not support an Object Store, which rules out another option.
Standard Anypoint MQ queues don't guarantee a specific message order, so using another For Each
block to collect responses won't work, as the requirement here is to ensure the order. Hence,
considering all the above factors, the feasible approach is to perform all communication involving
service S synchronously from within the For Each scope, so objects in RESP are in the exact same
order as request objects in REQU.
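The chosen approach can be sketched as follows: invoking service S synchronously inside the For Each loop builds the response list in request order by construction. This is hypothetical Python, with service S reduced to a stub.

```python
def service_s(request_obj):
    """Stub for service S: processes one request independently."""
    return {"id": request_obj["id"], "status": "ok"}

def handle_requ(requ_payload):
    """Call service S synchronously inside the For Each loop, so the
    RESP list is built in the same order as the REQU list."""
    return [service_s(obj) for obj in requ_payload]

requ = [{"id": i} for i in range(5)]
resp = handle_requ(requ)
print([r["id"] for r in resp])  # [0, 1, 2, 3, 4], same order as REQU
```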
20.An organization is successfully using API-led connectivity; however, as the application network
grows, all the manually performed tasks to publish, share, discover, register, apply policies to, and
deploy an API are becoming repetitive, driving the organization to automate this process
using an efficient CI/CD pipeline.
Considering Anypoint Platform's capabilities, how should the organization approach automating its API
lifecycle?
A. Use Runtime Manager REST APIs for API management and Maven for API deployment
B. Use Maven with a custom configuration required for the API lifecycle
C. Use the Anypoint CLI or Anypoint Platform REST APIs with a scripting language such as Groovy
D. Use Exchange REST APIs for API management and Maven for API deployment
Answer: D
21.An insurance organization is planning to deploy a Mule application to the MuleSoft-hosted runtime
plane. As a part of the requirements, the application should be scalable and highly available. It also has
a regulatory requirement which demands logs to be retained for at least 2 years. As an Integration
Architect, what step would you recommend in order to achieve this?
A. It is not possible to store logs for 2 years in CloudHub deployment. External log management
system is required.
B. When deploying an application to CloudHub, logs retention period should be selected as 2 years
C. When deploying an application to CloudHub, worker size should be sufficient to store 2 years data
D. Logging strategy should be configured accordingly in log4j file deployed with the application.
Answer: A
Explanation:
The correct answer is: It is not possible to store logs for 2 years in a CloudHub deployment; an external
log management system is required. CloudHub has a specific log retention policy, as described in the
documentation: the platform stores logs of up to 100 MB per app and per worker, or for up to 30 days,
whichever limit is hit first. Once this limit has been reached, the oldest log information is deleted in
chunks and is irretrievably lost. The recommended approach is to persist your logs to an external
logging system of your choice (such as Splunk, for instance) using a log appender. Please note that
this solution results in the logs no longer being stored on the platform, so any support cases you
lodge will require you to provide the appropriate logs for review and case resolution.
22.An organization is designing multiple new applications to run on CloudHub in a single Anypoint
VPC that must share data using a common persistent Anypoint Object Store v2 (OSv2).
Which design gives these Mule applications access to the same object store instance?
A. A VM connector configured to directly access the persistence queue of the persistent object store
B. An Anypoint MQ connector configured to directly access the persistent object store
C. Object Store v2 shared across CloudHub applications with the configured OSv2 connector
D. The Object Store v2 REST API configured to access the persistent object store
Answer: D
23.A shopping cart checkout process consists of a web store backend sending a sequence of API
invocations to an Experience API, which in turn invokes a Process API. All API invocations are over
HTTPS POST. The Java web store backend executes in a Java EE application server, while all API
implementations are Mule applications executing in a customer-hosted Mule runtime.
End-to-end correlation of all HTTP requests and responses belonging to each individual checkout
instance is required. This is to be done through a common correlation ID, so that all log entries written
by the web store backend, Experience API implementation, and Process API implementation include
the same correlation ID for all requests and responses belonging to the same checkout instance.
What is the most efficient way (using the least amount of custom coding or configuration) for the web
store backend and the implementations of the Experience API and Process API to participate in end-
to-end correlation of the API invocations for each checkout instance?
A) The web store backend, being a Java EE application, automatically makes use of the thread-local
correlation ID generated by the Java EE application server and automatically transmits that to the
Experience API using HTTP-standard headers
No special code or configuration is included in the web store backend, Experience API, and Process
API implementations to generate and manage the correlation ID
B) The web store backend generates a new correlation ID value at the start of checkout and sets it on
the X-CORRELATION-ID HTTP request header in each API invocation belonging to that checkout
No special code or configuration is included in the Experience API and Process API implementations
to generate and manage the correlation ID
C) The Experience API implementation generates a correlation ID for each incoming HTTP request
and passes it to the web store backend in the HTTP response, which includes it in all subsequent API
invocations to the Experience API.
The Experience API implementation must be coded to also propagate the correlation ID to the
Process API in a suitable HTTP request header
D) The web store backend sends a correlation ID value in the HTTP request body in the way required
by the Experience API
The Experience API and Process API implementations must be coded to receive the custom
correlation ID in the HTTP requests and propagate it in suitable HTTP request headers
A. Option A
B. Option B
C. Option C
D. Option D
Answer: B
Explanation:
The correct answer is: "The web store backend generates a new correlation ID value at the start of
checkout and sets it on the X-CORRELATION-ID HTTP request header in each API invocation
belonging to that checkout. No special code or configuration is included in the Experience API and
Process API implementations to generate and manage the correlation ID."
Explanation: By design, correlation IDs cannot be changed within a flow in Mule 4 applications and can
be set only at the source. This ID is part of the Event Context and is generated as soon as the message
is received by the application. When an HTTP request is received, the request is inspected for the
"X-Correlation-Id" header. If the "X-Correlation-Id" header is present, the HTTP connector uses this as
the correlation ID. If the "X-Correlation-Id" header is NOT present, a correlation ID is randomly
generated. For incoming HTTP requests: in order to set a custom correlation ID, the client invoking the
HTTP request must set the "X-Correlation-Id" header. This ensures that the Mule flow uses this
correlation ID. For outgoing HTTP requests: you can also propagate the existing correlation ID to
downstream APIs. By default, all outgoing HTTP requests send the "X-Correlation-Id" header. However,
you can choose to set a different value for the "X-Correlation-Id" header or set "Send Correlation Id" to
NEVER.
Mulesoft
Reference: https://help.mulesoft.com/s/article/How-to-Set-Custom-Correlation-Id-for-Flows-with-HTTP-
Endpoint-in-Mule-4
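The header behavior described above can be condensed into a few lines. This sketch mirrors the documented rule (reuse the header when present, otherwise generate an ID); it is plain Python for illustration, not connector code.

```python
import uuid

CORRELATION_HEADER = "X-Correlation-Id"

def resolve_correlation_id(incoming_headers):
    """Reuse the X-Correlation-Id header when present, otherwise
    generate a random ID, as the HTTP listener does."""
    return incoming_headers.get(CORRELATION_HEADER) or str(uuid.uuid4())

def outgoing_headers(correlation_id):
    """By default, the same ID is propagated on every downstream request."""
    return {CORRELATION_HEADER: correlation_id}

cid = resolve_correlation_id({"X-Correlation-Id": "checkout-42"})
print(cid)                    # checkout-42, reused rather than regenerated
print(outgoing_headers(cid))  # the same ID flows on to the Process API
```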
24.An insurance company is implementing a MuleSoft API to get inventory details from two
vendors. Due to network issues, the invocations to the vendor applications are timing out
intermittently, but the transactions are successful upon reprocessing.
What is the most performant way of implementing this requirement?
A. Implement a Scatter-Gather scope to invoke the two vendor applications on two different routes
Use the Until Successful scope to implement the retry mechanism for timeout errors on each route
B. Implement a Choice scope to invoke the two vendor applications on two different routes
Use the Try-Catch scope to implement the retry mechanism for timeout errors on each route
C. Implement a For-Each scope to invoke the two vendor applications
Use the Until Successful scope to implement the retry mechanism for the timeout errors
D. Implement a Round-Robin scope to invoke the two vendor applications on two different routes
Use the Try-Catch scope to implement the retry mechanism for timeout errors on each route
Answer: A
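The winning combination (Scatter-Gather for parallelism, Until Successful for retries) can be approximated as below. The vendor calls and retry count are invented for this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_RETRIES = 3

def with_retries(call, max_retries=MAX_RETRIES):
    """Retry a vendor call on TimeoutError, like an Until Successful
    scope wrapped around each Scatter-Gather route."""
    for attempt in range(max_retries):
        try:
            return call()
        except TimeoutError:
            if attempt == max_retries - 1:
                raise

def scatter_gather(vendor_calls):
    """Invoke all vendor routes concurrently and gather the results."""
    with ThreadPoolExecutor(max_workers=len(vendor_calls)) as pool:
        futures = [pool.submit(with_retries, call) for call in vendor_calls]
        return [f.result() for f in futures]

attempts = {"count": 0}
def flaky_vendor():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise TimeoutError("vendor timed out")
    return "vendor-a-inventory"

print(scatter_gather([flaky_vendor, lambda: "vendor-b-inventory"]))
# ['vendor-a-inventory', 'vendor-b-inventory']
```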
25.As part of a project, an existing Java implementation is being migrated to MuleSoft. The business is
very tight on budget and wishes to complete the project in the most economical way possible.
A canonical object model in Java is already part of the existing implementation. The same object model
is required by the Mule application for a business use case.
What is the best way to achieve this?
A. Make use of Java module
B. Create similar model for Mule applications
C. Create a custom application to read Java code and make it available for Mule application
D. Use Anypoint exchange
Answer: A
Explanation:
Mule 4 is built to:
• Minimize the need for custom code.
• Avoid the need for you to know or understand Java.
However, some advanced uses cases require integration with custom Java code, such as:
• Reuse of a library, such as a tax calculation library.
• Reuse of a canonical object model that is standard in the organization.
• Execution of custom logic using Java.
Mule ref doc: https://docs.mulesoft.com/java-module/1.2/
26.A Mule application is being designed to be deployed to several CloudHub workers. The Mule
application's integration logic is to replicate changed Accounts from Salesforce to a backend system
every 5 minutes.
A watermark will be used to only retrieve those Salesforce Accounts that have been modified since
the last time the integration logic ran.
What is the most appropriate way to implement persistence for the watermark in order to support the
required data replication integration logic?
A. Persistent Anypoint MQ Queue
B. Persistent Object Store
C. Persistent Cache Scope
D. Persistent VM Queue
Answer: B
Explanation:
* An object store is a facility for storing objects in or across Mule applications. Mule uses object stores
to persist data for eventual retrieval.
* Mule provides two types of object stores:
1) In-memory store: stores objects in local Mule runtime memory. Objects are lost on shutdown of
the Mule runtime.
2) Persistent store: Mule persists data when an object store is explicitly configured to be
persistent.
In a standalone Mule runtime, Mule creates a default persistent store in the file system. If you do not
specify an object store, the default persistent object store is used. MuleSoft
Reference: https://docs.mulesoft.com/mule-runtime/3.9/mule-object-stores
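The watermark pattern from the question can be shown in a few lines, with a dict standing in for the persistent object store and a lambda standing in for the Salesforce query; all names here are invented for the sketch.

```python
from datetime import datetime, timezone

object_store = {}  # stand-in for a persistent Object Store

def sync_changed_accounts(fetch_modified_since, now):
    """Read the last watermark, fetch only records modified since then,
    and persist the new watermark for the next scheduled run."""
    watermark = object_store.get("accounts.watermark",
                                 datetime.min.replace(tzinfo=timezone.utc))
    changed = fetch_modified_since(watermark)
    object_store["accounts.watermark"] = now
    return changed

accounts = [
    {"id": 1, "modified": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
fetch = lambda since: [a for a in accounts if a["modified"] > since]

run1 = sync_changed_accounts(fetch, datetime(2024, 1, 2, tzinfo=timezone.utc))
run2 = sync_changed_accounts(fetch, datetime(2024, 1, 4, tzinfo=timezone.utc))
print([a["id"] for a in run1])  # [1, 2]: the first run sees everything
print([a["id"] for a in run2])  # [2]: only records changed after the watermark
```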
27.Which Anypoint Platform component helps integration developers discover and share reusable
APIs, connectors, and templates?
A. Anypoint Exchange
B. API Manager
C. Anypoint Studio
D. Design Center
Answer: A
28.A company is using MuleSoft to develop APIs and deploy them to CloudHub and on-premises
targets. Recently it has decided to enable the Runtime Fabric deployment option as well, and the
infrastructure is set up for this option.
What can be used to deploy applications to Runtime Fabric?
A. Anypoint CLI
B. Anypoint Platform REST APIs
C. Directly uploading a JAR file from Runtime Manager
D. Mule Maven plugin
Answer: D
30.An application deployed to a Runtime Fabric environment with two cluster replicas is designed to
periodically trigger a flow for processing a high-volume set of records from the source system and
synchronize them with a SaaS system using the Batch Job scope.
After processing 1000 records in a periodic synchronization of 100,000 records, the replica in which the
batch job instance was started went down due to an unexpected failure in the Runtime Fabric
environment.
What is the consequence of losing the replica that runs the Batch Job instance?
A. The remaining 99000 records will be lost and left and processed
B. The second replicas will take over processing the remaining 99000 records
C. A new replacement replica will be available and will be process all 1,00,000 records from scratch
leading to duplicate record processing
D. A new placement replica will be available and will take or processing the remaining 99,000 records
Answer: B
31.An organization is creating a Mule application that will be deployed to CloudHub. The Mule
application has a property named dbPassword that stores a database user’s password.
The organization's security standards indicate that the dbPassword property must be hidden from
every Anypoint Platform user after the value is set in the Runtime Manager Properties tab.
What configuration in the Mule application helps hide the dbPassword property value in Runtime
Manager?
A. Use secure::dbPassword as the property placeholder name and store the cleartext (unencrypted)
value in a secure properties placeholder file
B. Use secure::dbPassword as the property placeholder name and store the property encrypted value
in a secure properties placeholder file
C. Add the dbPassword property to the secureProperties section of the pom.xml file
D. Add the dbPassword property to the secureProperties section of the mule-artifact.json file
Answer: B
Explanation:
Reference: https://docs.mulesoft.com/runtime-manager/secure-application-properties
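A sketch of the pattern the correct option describes (file name, encryption settings, and the ciphertext are illustrative assumptions). The value is encrypted with the Secure Configuration Properties tool, stored in a secure properties file, and referenced with the `secure::` prefix so Runtime Manager hides it:

```xml
<!-- secure-props.yaml (illustrative contents):
       dbPassword: "![kT2nQ...encrypted-value...]"
     The ![...] wrapper marks the value as encrypted. -->

<secure-properties:config name="Secure_Props"
    file="secure-props.yaml" key="${encryption.key}">
    <secure-properties:encrypt algorithm="AES" mode="CBC"/>
</secure-properties:config>

<!-- Reference the property with the secure:: prefix; Runtime Manager
     masks properties resolved this way in the Properties tab -->
<db:config name="Database_Config">
    <db:my-sql-connection host="db.example.com" port="3306"
        user="app_user" password="${secure::dbPassword}"/>
</db:config>
```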
32.A company is planning to migrate its deployment environment from on-premises cluster to a
Runtime Fabric (RTF) cluster. It also has a requirement to enable Mule applications deployed to a
Mule runtime instance to store and share data across application replicas and restarts.
How can these requirements be met?
A. Anypoint object store V2 to share data between replicas in the RTF cluster
B. Install the object store pod on one of the cluster nodes
C. Configure Persistence Gateway in any of the servers using Mule Object Store
D. Configure Persistent Gateway at the RTF
Answer: D
33.An API implementation is being developed to expose data from a production database via HTTP
requests. The API implementation executes a database SELECT statement that is dynamically
created based upon data received from each incoming HTTP request. The developers are planning to
use various types of testing to make sure the Mule application works as expected, can handle specific
workloads, and behaves correctly from an API consumer perspective.
What type of testing would typically mock the results from each SELECT statement rather than
actually execute it in the production database?
A. Unit testing (white box)
B. Integration testing
C. Functional testing (black box)
D. Performance testing
Answer: A
Explanation:
In Unit testing instead of using actual backends, stubs are used for the backend services.
This ensures that developers are not blocked and have no dependency on other systems.
Below are the typical characteristics of unit testing.
-- Unit tests do not require deployment into any special environment, such as a staging environment
-- Unit tests can be run from within an embedded Mule runtime
-- Unit tests can/should be implemented using MUnit
-- For read-only interactions to any dependencies (such as other APIs): allowed to invoke production
endpoints
-- For write interactions: developers must implement mocks using MUnit
-- Require knowledge of the implementation details of the API implementation under test
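As a sketch of what such a unit test could look like in MUnit (the flow name and mocked rows are illustrative assumptions), the Database connector's select operation is mocked so the test never executes the SELECT against the production database:

```xml
<munit:test name="getAccountsFlow-returns-mocked-rows"
            description="Mock the DB SELECT instead of querying production">
    <munit:behavior>
        <!-- Replace the real db:select with a canned result set -->
        <munit-tools:mock-when processor="db:select">
            <munit-tools:then-return>
                <munit-tools:payload value="#[[{id: 1, name: 'Alice'}]]"/>
            </munit-tools:then-return>
        </munit-tools:mock-when>
    </munit:behavior>
    <munit:execution>
        <flow-ref name="getAccountsFlow"/>
    </munit:execution>
    <munit:validation>
        <munit-tools:assert-that expression="#[sizeOf(payload)]"
                                 is="#[MunitTools::equalTo(1)]"/>
    </munit:validation>
</munit:test>
```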
34.An ABC Farms project team is planning to build a new API that is required to work with data from
different domains across the organization.
The organization has a policy that all project teams should leverage existing investments by reusing
existing APIs and related resources and documentation that other project teams have already
developed and deployed.
To support reuse, where on Anypoint Platform should the project team go to discover and read
existing APIs, discover related resources and documentation, and interact with mocked versions of
those APIs?
A. Design Center
B. API Manager
C. Runtime Manager
D. Anypoint Exchange
Answer: D
Explanation:
The mocking service is a feature of Anypoint Platform and runs continuously. You can run the
mocking service from the text editor, the visual editor, and from Anypoint Exchange. You can simulate
calls to the API in API Designer before publishing the API specification to Exchange or in Exchange
after publishing the API specification.
Reference: https://docs.mulesoft.com/design-center/design-mocking-service
36.How are the API implementation, API client, and API consumer combined to invoke and process
an API?
A. The API consumer creates an API implementation, which receives API invocations from an API
such that they are processed for an API client
B. The API consumer creates an API client which sends API invocations to an API such that they are
processed by an API implementation
C. An API client creates an API consumer, which receives API invocation from an API such that they
are processed for an API implementation
D. The API client creates an API consumer which sends API invocations to an API such that they are
processed by API implementation
Answer: B
Explanation:
The API consumer creates an API client which sends API invocations to an API such that they are
processed by an API implementation
This is based on the definitions below.
API client:
• An application component
• that accesses a service
• by invoking an API of that service (by definition of the term API, over HTTP)
API consumer:
• A business role, which is often assigned to an individual,
• that develops API clients, i.e., performs the activities necessary for enabling an API client to invoke
APIs
API implementation:
• An application component
• that implements the functionality
37.A Mule application is being designed to receive, nightly, a CSV file containing millions of records
from an external vendor over SFTP. The records from the file need to be validated, transformed, and
then written to a database. Records can be inserted into the database in any order.
In this use case, what combination of Mule components provides the most effective and performant
way to write these records to the database?
A. Use a Parallel for Each scope to Insert records one by one into the database
B. Use a Scatter-Gather to bulk insert records into the database
C. Use a Batch job scope to bulk insert records into the database.
D. Use a DataWeave map operation and an Async scope to insert records one by one into the
database.
Answer: C
Explanation:
The correct answer is to use a Batch Job scope to bulk insert records into the database.
* A Batch Job is the most efficient way to manage millions of records.
A few points to note here are as follows:
Reliability: If you want reliability while processing the records, i.e. the processing should survive a
runtime crash or other failure scenarios and, when restarted, process all the remaining records, then
go for batch, as it uses persistent queues.
Error handling: In Parallel For Each, an error in a particular route stops processing of the remaining
records in that route, and you would need to handle it using On Error Continue. A batch job
does not stop on such an error; instead, you can have a step for failures with dedicated
handling in it.
Memory footprint: Since the question says there are millions of records to process, Parallel For Each
will aggregate all the processed records at the end and can possibly cause an Out Of Memory error.
A Batch Job instead provides a BatchJobResult in the On Complete phase, where you can get the counts of
failures and successes. For huge file processing, if order is not a concern, definitely go with a Batch
Job.
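A sketch of the Batch Job approach (connector configs, table, and field names are illustrative assumptions, not from the source); the aggregator groups records so each database round trip inserts a whole block:

```xml
<flow name="sftpCsvToDbFlow">
    <!-- Poll the vendor's SFTP directory once per day -->
    <sftp:listener config-ref="SFTP_Config" directory="/inbound">
        <scheduling-strategy>
            <fixed-frequency frequency="1" timeUnit="DAYS"/>
        </scheduling-strategy>
    </sftp:listener>

    <batch:job jobName="csvToDbBatch" blockSize="100">
        <batch:process-records>
            <batch:step name="validateAndTransform">
                <!-- per-record validation / DataWeave transformation here -->
            </batch:step>
            <batch:step name="writeToDb">
                <!-- Aggregate records and bulk insert 100 at a time -->
                <batch:aggregator size="100">
                    <db:bulk-insert config-ref="Database_Config">
                        <db:sql>INSERT INTO accounts (id, name) VALUES (:id, :name)</db:sql>
                    </db:bulk-insert>
                </batch:aggregator>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <logger level="INFO"
                message="#['Succeeded: ' ++ payload.successfulRecords ++ ', failed: ' ++ payload.failedRecords]"/>
        </batch:on-complete>
    </batch:job>
</flow>
```

The batch engine streams and queues records persistently, so memory stays bounded regardless of file size, and failed records can be routed to a dedicated step without halting the job.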