Free MCPA-Level-1 Exam Braindumps

Pass your MuleSoft Certified Platform Architect - Level 1 exam with these free Questions and Answers

Page 4 of 19
QUESTION 11

A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios.
What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?

  1. A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry
  2. B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers
  3. C. Create API Notebooks and include them in the relevant Anypoint Exchange entries
  4. D. Make relevant APIs discoverable via an Anypoint Exchange entry

Correct Answer: C

Correct Answer
Create API Notebooks and include them in the relevant Anypoint Exchange entries
*****************************************
>> API Notebooks are the Anypoint Platform feature that provides code-centric API documentation: they let API consumers investigate and execute live client code that invokes one or more APIs as part of representative scenarios
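To illustrate the idea, here is a minimal, self-contained sketch (in Python, with hypothetical API names and stub clients standing in for real HTTP calls; actual API Notebooks execute JavaScript against live endpoints) of the kind of "representative scenario" such code-centric documentation walks a consumer through:

```python
# Hypothetical sketch of a scenario an API Notebook would let consumers
# read and execute: call one API, feed its result into a second API.

class CustomerApiClient:
    """Stand-in for a client of a (hypothetical) Customer API."""
    _customers = {"c-100": {"id": "c-100", "name": "Ada"}}

    def get_customer(self, customer_id):
        return self._customers[customer_id]

class OrderApiClient:
    """Stand-in for a client of a (hypothetical) Order API."""
    def create_order(self, customer, items):
        return {"order_for": customer["id"], "items": items, "status": "CREATED"}

def representative_scenario():
    # Step 1: invoke the first API to fetch the customer record.
    customer = CustomerApiClient().get_customer("c-100")
    # Step 2: pass that result into a second API call.
    return OrderApiClient().create_order(customer, ["sku-1", "sku-2"])

if __name__ == "__main__":
    print(representative_scenario())
```

The value of a Notebook over static documentation is exactly this: the consumer can run and modify the scenario, not just read it.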

QUESTION 12

A retail company with thousands of stores has an API to receive data about purchases and insert it into a single database. Each individual store sends a batch of purchase data to the API about every 30 minutes. The API implementation uses a database bulk insert command to submit all the purchase data to a database using a custom JDBC driver provided by a data analytics solution provider. The API implementation is deployed to a single CloudHub worker. The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker, and then the data is sent to an analytics engine using a proprietary protocol. This process usually takes less than a few minutes. Sometimes a request fails. In this case, the logs show a message from the JDBC driver indicating an out-of-file-space message. When the request is resubmitted, it is successful. What is the best way to try to resolve this throughput issue?

  1. A. Use a CloudHub autoscaling policy to add CloudHub workers
  2. B. Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
  3. C. Increase the size of the CloudHub worker(s)
  4. D. Increase the number of CloudHub workers

Correct Answer: C
Correct Answer
Increase the size of the CloudHub worker(s)
*****************************************
The key details from the given scenario are:
>> The API implementation uses a database bulk insert command to submit all the purchase data to a database
>> The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker
>> Sometimes a request fails, and the logs show an out-of-file-space message from the JDBC driver
Based on these details:
>> Neither autoscaling option helps, because autoscaling rules cannot be triggered by error messages. They fire on CPU/memory usage, not on disk-space errors.
>> Increasing the number of CloudHub workers also does NOT help, because the failures are not caused by CPU or memory pressure; they are caused by running out of disk space.
>> Moreover, the API performs a bulk insert of each received batch, so each batch is handled by ONE worker at a time. The disk-space issue must therefore be tackled on a per-worker basis: with multiple workers, a batch can still fail on whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker, so that a worker with more disk space is provisioned.
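The reasoning above can be reduced to simple arithmetic (the numbers below are hypothetical, chosen only to illustrate the per-worker constraint): a batch succeeds only if its temp files fit on the single worker that processes it, so the worker count never enters the check.

```python
# Illustrative sketch (hypothetical sizes): why scaling OUT does not fix a
# per-batch disk-space failure, while scaling UP does. One store's batch is
# bulk-inserted by exactly ONE worker, so only that worker's disk matters.

BATCH_TEMP_FILES_GB = 12  # assumed temp-file footprint of one store's batch

def batch_fits(worker_disk_gb, num_workers):
    """A batch succeeds only if it fits on the single worker handling it;
    num_workers is deliberately unused -- it is irrelevant to this check."""
    return BATCH_TEMP_FILES_GB <= worker_disk_gb

# Scaling out: ten small workers still fail, since no single worker fits 12 GB.
assert not batch_fits(worker_disk_gb=8, num_workers=10)
# Scaling up: one larger worker with more disk succeeds.
assert batch_fits(worker_disk_gb=20, num_workers=1)
```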

QUESTION 13

An organization has implemented a Customer Address API to retrieve customer address information. This API has been deployed to multiple environments and has been configured to enforce client IDs everywhere.
A developer is writing a client application to allow a user to update their address. The developer has found the Customer Address API in Anypoint Exchange and wants to use it in their client application.
What step of gaining access to the API can be performed automatically by Anypoint Platform?

  1. A. Approve the client application request for the chosen SLA tier
  2. B. Request access to the appropriate API Instances deployed to multiple environments using the client application's credentials
  3. C. Modify the client application to call the API using the client application's credentials
  4. D. Create a new application in Anypoint Exchange for requesting access to the API

Correct Answer: A

Correct Answer
Approve the client application request for the chosen SLA tier
*****************************************
>> Only the approval of the client application's request for the chosen SLA tier can be performed automatically by the platform (an SLA tier can be configured to auto-approve access requests)
>> The remaining options are steps the developer must perform manually
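Once access is approved, the client application must present its credentials on every call, since the API enforces client IDs everywhere. A hedged sketch (hypothetical URL and credentials; by default the Anypoint client-ID-enforcement policy reads `client_id`/`client_secret` from request headers, though this is configurable):

```python
# Hedged sketch: building a request that carries the client application's
# credentials as client_id / client_secret headers. URL and credential
# values are placeholders, not real endpoints.
import urllib.request

def authed_request(url, client_id, client_secret):
    """Build a request carrying the client application's credentials."""
    return urllib.request.Request(
        url,
        headers={"client_id": client_id, "client_secret": client_secret},
    )

req = authed_request(
    "https://example.org/customer-address-api/v1/addresses/42",
    "my-client-id", "my-client-secret",
)
# urllib.request.urlopen(req) would send it; omitted here to stay offline.
```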

QUESTION 14

Refer to the exhibit.
[Exhibit image not included]
What is the best way to decompose one end-to-end business process into a collaboration of Experience, Process, and System APIs?
A) Handle customizations for the end-user application at the Process API level rather than the Experience API level
B) Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs
C) Always use a tiered approach by creating exactly one API for each of the 3 layers (Experience, Process and System APIs)
D) Use a Process API to orchestrate calls to multiple System APIs, but NOT to other Process APIs

  1. A. Option A
  2. B. Option B
  3. C. Option C
  4. D. Option D

Correct Answer: B

Correct Answer
Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs.
*****************************************
>> All customizations for the end-user application should be handled in the Experience API layer only, not in the Process API layer
>> We should use a tiered approach, but NOT always with exactly one API per layer. There may be a single Experience API, but there are often multiple Process APIs and System APIs. System APIs in particular are almost always plural, since they are the smallest modular APIs built in front of individual end systems.
>> Process APIs can call System APIs as well as other Process APIs. API-led connectivity has no anti-pattern forbidding Process APIs from calling other Process APIs.
So, the right answer among the given options, per API-led connectivity principles, is to allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs. That way, future Process APIs can make use of that data without the System layer APIs having to be changed again and again.
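The layering described above can be sketched as follows (all API names, fields, and values are hypothetical; plain functions stand in for deployed APIs). Note how the System API deliberately returns more fields than the current Process API consumes:

```python
# Hedged sketch of API-led layering (hypothetical names and data).

def customer_system_api(customer_id):
    """System API: thin, modular front for one backend; returns the FULL
    record, including fields no current Process/Experience API needs."""
    return {"id": customer_id, "name": "Ada", "tier": "gold",
            "loyalty_points": 1200}  # extra fields returned deliberately

def order_history_system_api(customer_id):
    """Second System API, fronting a different backend system."""
    return [{"order": "o-1"}, {"order": "o-2"}]

def customer_summary_process_api(customer_id):
    """Process API: orchestrates multiple System APIs (and may call other
    Process APIs); picks only what the current process needs."""
    customer = customer_system_api(customer_id)
    orders = order_history_system_api(customer_id)
    return {"name": customer["name"], "order_count": len(orders)}

def mobile_experience_api(customer_id):
    """Experience API: end-user customization lives here, not in Process APIs."""
    summary = customer_summary_process_api(customer_id)
    return {"title": f"Hi {summary['name']}!", "orders": summary["order_count"]}
```

A future Process API that needs `tier` or `loyalty_points` can consume `customer_system_api` as-is, which is exactly why returning the extra data is the recommended decomposition.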

QUESTION 15

In which layer of API-led connectivity does the business logic orchestration reside?

  1. A. System Layer
  2. B. Experience Layer
  3. C. Process Layer

Correct Answer: C

Correct Answer
Process Layer
*****************************************
>> The Experience layer is dedicated to enriching the end-user experience; it meets the needs of the different API clients/consumers
>> The System layer is dedicated to modular APIs that implement/expose the individual functionalities of backend systems
>> The Process layer is where simple or complex business orchestration logic is written, by invoking one or many of the modular System layer APIs
So, Process Layer is the right answer.


Post your Comments and Discuss MuleSoft MCPA-Level-1 exam with other Community members: