Data Cloud Mock Exam 61 Questions
1. Northern Trail Outfitters (NTO) creates a calculated insight to compute recency, frequency,
monetary (RFM) scores on its unified individuals. NTO then creates a segment based on
these scores that it activates to a Marketing Cloud activation target.
Which two actions are required when configuring the activation? Choose 2 answers
Add additional attributes.
Choose a segment. (Correct)
Select contact points. (Correct)
Add the calculated insight in the activation.
Explanation:
To configure an activation to a Marketing Cloud activation target, you need to choose a segment and
select contact points. Choosing a segment allows you to specify which unified individuals you want
to activate. Selecting contact points allows you to map the attributes from the segment to the fields in
the Marketing Cloud data extension. You do not need to add additional attributes or add the
calculated insight in the activation, as these are already part of the segment definition.
Reference: Create a Marketing Cloud Activation Target; Types of Data Targets in Data Cloud
2. A customer is concerned that the consolidation rate displayed in the identity resolution is
quite low compared to their initial estimations.
What should the consultant do to increase the consolidation rate?
Explanation:
The consolidation rate is the amount by which source profiles are combined to produce unified
profiles, calculated as 1 - (number of unified individuals / number of source individuals). For
example, if you ingest 100 source records and create 80 unified profiles, your consolidation rate is
20%. To increase the consolidation rate, you need to increase the number of matches between
source profiles, which can be done by adding more match rules. Match rules define the criteria for
matching source profiles based on their attributes. By increasing the number of match rules, you can
increase the chances of finding matches between source profiles and thus increase the
consolidation rate. On the other hand, changing reconciliation rules, including additional attributes, or
reducing the number of match rules can decrease the consolidation rate, as they can either reduce
the number of matches or increase the number of unified profiles.
Reference: Identity Resolution Calculated Insight: Consolidation Rates for Unified Profiles, Identity
Resolution Ruleset Processing Results, Configure Identity Resolution Rulesets
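To make the formula concrete, here is the arithmetic from the example above as a minimal Python sketch:

```python
# Consolidation rate = 1 - (unified individuals / source individuals).
source_individuals = 100
unified_individuals = 80

consolidation_rate = 1 - (unified_individuals / source_individuals)
print(f"{consolidation_rate:.0%}")  # -> 20%
```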
3. A customer is trying to activate data from Data Cloud to an Amazon S3 Cloud File Storage
Bucket.
Which authentication type should the consultant recommend to connect to the S3 bucket
from Data Cloud?
Use an S3 Private Key Certificate.
Use an S3 Encrypted Username and Password.
Use a JWT Token generated on S3.
Use an S3 Access Key and Secret Key. (Correct)
Explanation:
To use the Amazon S3 Storage Connector in Data Cloud, the consultant needs to provide the S3
bucket name, region, and access key and secret key for authentication. The access key and secret
key are generated by AWS and can be managed in the IAM console. The other options are not
supported by the S3 Storage Connector or by Data Cloud.
Reference: Amazon S3 Storage Connector - Salesforce; How to Use the Amazon S3 Storage
Connector in Data Cloud | Salesforce Developers Blog
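As an illustration of this credential pair, here is a minimal sketch using the boto3 AWS SDK; the bucket name, region, and key values are placeholders:

```python
# Minimal sketch: authenticating to an S3 bucket with an access key and
# secret key, the credential pair Data Cloud's S3 connector expects.
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    aws_access_key_id="AKIA...",        # generated in the AWS IAM console
    aws_secret_access_key="wJalr...",   # keep out of source control
)
# List a few objects to confirm the credentials grant access to the bucket.
response = s3.list_objects_v2(Bucket="example-activation-bucket", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])
```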
4. A consultant has an activation that is set to publish every 12 hours, but has discovered
that updates to the data prior to activation are delayed by up to 24 hours.
Which two areas should a consultant review to troubleshoot this issue? Choose 2 answers
A. Review data transformations to ensure they're run after calculated insights.
B. Review calculated insights to make sure they're run before segments are refreshed. (Correct)
C. Review segments to ensure they're refreshed after the data is ingested. (Correct)
D. Review calculated insights to make sure they're run after the segments are refreshed.
Explanation:
The correct answer is B and C because calculated insights and segments are both dependent on the
data ingestion process. Calculated insights are derived from the data model objects and segments
are subsets of data model objects that meet certain criteria. Therefore, both of them need to be
updated after the data is ingested to reflect the latest changes. Data transformations are optional
steps that can be applied to the data streams before they are mapped to the data model objects, so
they are not relevant to the issue. Reviewing calculated insights to make sure they’re run after the
segments are refreshed (option D) is also incorrect because calculated insights are independent of
segments and do not need to be refreshed after them.
Reference: Salesforce Data Cloud Consultant Exam Guide, Data Ingestion and Modeling,
Calculated Insights, Segments
5. Northern Trail Outfitters wants to use some of its Marketing Cloud data in Data Cloud.
Which engagement channel requires a custom integration to bring its data into Data Cloud?
Explanation:
CloudPage is a web page that can be personalized and hosted by Marketing Cloud. It is not one of
the standard engagement channels that Data Cloud supports out of the box. To use CloudPage data
in Data Cloud, a custom integration is required. The other engagement channels (SMS, email, and
mobile push) are supported by Data Cloud and can be integrated using the Marketing Cloud
Connector or the Marketing Cloud API.
Reference: Data Cloud Overview, Marketing Cloud Connector, Marketing Cloud API
6. Which permission setting should a consultant check if the custom Salesforce CRM object
is not available in New Data Stream configuration?
Confirm the Create object permission is enabled in the Data Cloud org.
Confirm the View All object permission is enabled in the source Salesforce CRM org. (Correct)
Confirm the Ingest Object permission is enabled in the Salesforce CRM org.
Confirm that the Modify Object permission is enabled in the Data Cloud org.
Explanation:
To create a new data stream from a custom Salesforce CRM object, the consultant needs to confirm
that the View All object permission is enabled in the source Salesforce CRM org. This permission
allows the user to view all records associated with the object, regardless of sharing settings.
Without this permission, the custom object will not be available in the New Data Stream
configuration.
Reference: Object Permissions
7. Which two common use cases can be addressed with Data Cloud? Choose 2 answers
Understand and act upon customer data to drive more relevant experiences. (Correct)
Govern enterprise data lifecycle through a centralized set of policies and processes.
Harmonize data from multiple sources with a standardized and extendable data model. (Correct)
Safeguard critical business data by serving as a centralized system for backup and disaster
recovery.
Explanation:
Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query,
analyze, and act on their data across various Salesforce and external sources.
Some of the common use cases that can be addressed with Data Cloud are:
Understand and act upon customer data to drive more relevant experiences. Data Cloud can help
customers gain a 360-degree view of their customers by unifying data from different sources and
resolving identities across channels. Data Cloud can also help customers segment their audiences,
create personalized experiences, and activate data in any channel using insights and AI.
Harmonize data from multiple sources with a standardized and extendable data model. Data Cloud
can help customers transform and cleanse their data before using it, and map it to a common data
model that can be extended and customized. Data Cloud can also help customers create calculated
insights and related attributes to enrich their data and optimize identity resolution.
The other two options are not common use cases for Data Cloud. Data Cloud does not provide data
governance or backup and disaster recovery features, as these are typically handled by other
Salesforce or external solutions.
8. Where is value suggestion for attributes in segmentation enabled when creating the DMO?
Data Mapping
Data Transformation
Segment Setup (Correct)
Data Stream Setup
Explanation:
Value suggestion for attributes in segmentation is a feature that allows you to see and select the
possible values for a text field when creating segment filters. You can enable or disable this feature
for each data model object (DMO) field in the DMO record home. Value suggestion can be enabled
for up to 500 attributes for your entire org. It can take up to 24 hours for suggested values to appear.
To use value suggestion when creating segment filters, you need to drag the attribute onto the
canvas and start typing in the Value field for an attribute. You can also select multiple values for
some operators. Value suggestion is not available for attributes with more than 255 characters or for
relationships that are one-to-many (1:N).
Reference: Use Value Suggestions in Segmentation, Considerations for Selecting Related Attributes
9. A Data Cloud customer wants to adjust their identity resolution rules to increase their
accuracy of matches. Rather than matching on email address, they want to review a rule that
joins their CRM Contacts with their Marketing Contacts, where both use the CRM ID as their
primary key.
Which two steps should the consultant take to address this new use case? Choose 2
answers
Map the primary key from the two systems to Party Identification, using CRM ID as the
identification name for both. (Correct)
Map the primary key from the two systems to Party Identification, using CRM ID as the
identification name for individuals coming from the CRM, and Marketing ID as the identification
name for individuals coming from the marketing platform.
Create a custom matching rule for an exact match on the Individual ID attribute.
Create a matching rule based on party identification that matches on CRM ID as the party
identification name. (Correct)
Explanation:
To address this new use case, the consultant should map the primary key from the two systems to
Party Identification, using CRM ID as the identification name for both, and create a matching rule
based on party identification that matches on CRM ID as the party identification name. This way, the
consultant can ensure that the CRM Contacts and Marketing Contacts are matched based on their
CRM ID, which is a unique identifier for each individual. By using Party Identification, the consultant
can also leverage the benefits of this attribute, such as being able to match across different entities
and sources, and being able to handle multiple values for the same individual. The other options are
incorrect because they either do not use the CRM ID as the primary key, or they do not use Party
Identification as the attribute type.
Reference: Configure Identity Resolution Rulesets, Identity Resolution Match Rules, Data Cloud
Identity Resolution Ruleset, Data Cloud Identity Resolution Config Input
10. Which consideration related to the way Data Cloud ingests CRM data is true?
A. CRM data cannot be manually refreshed and must wait for the next scheduled
synchronization.
B. The CRM Connector's synchronization times can be customized to 15-minute intervals.
C. Formula fields are refreshed at regular sync intervals and are updated at the next full
refresh.
D. The CRM Connector allows standard fields to stream into Data Cloud in real time. (Correct)
Explanation:
The correct answer is D: the CRM Connector allows standard fields to stream into Data Cloud in
real time. This means that any changes to the standard fields in the CRM data source are reflected
in Data Cloud almost instantly, without waiting for the next scheduled synchronization. This feature
enables Data Cloud to have the most up-to-date and accurate CRM data for segmentation and
activation.
A. CRM data can be manually refreshed at any time by clicking the Refresh button on the data
stream detail page. This option is false.
B. The CRM Connector's synchronization times can be customized to up to 60-minute intervals, not
15-minute intervals. This option is false.
C. Formula fields are not refreshed at regular sync intervals, but only at the next full refresh. A full
refresh is a complete data ingestion process that occurs once every 24 hours or when manually
triggered. This option is false.
Reference: Connect and Ingest Data in Data Cloud (Salesforce Help); Data Sources in Data Cloud (Trailhead)
11. What does the Source Sequence reconciliation rule do in identity resolution?
Includes data from sources where the data is most frequently occurring
Identifies which individual records should be merged into a unified profile by setting a priority for
specific data sources
Identifies which data sources should be used in the process of reconciliation by prioritizing the most
recently updated data source
Sets the priority of specific data sources when building attributes in a unified profile, such as a
first or last name (Correct)
Explanation:
The Source Sequence reconciliation rule sets the priority of specific data sources when building
attributes in a unified profile, such as a first or last name. This rule allows you to define which data
source should be used as the primary source of truth for each attribute, and which data sources
should be used as fallbacks in case the primary source is missing or invalid. For example, you can
set the Source Sequence rule to use data from Salesforce CRM as the first priority, data from
Marketing Cloud as the second priority, and data from Google Analytics as the third priority for the
first name attribute. This way, the unified profile will use the first name value from Salesforce CRM if
it exists, otherwise it will use the value from Marketing Cloud, and so on. This rule helps you to
ensure the accuracy and consistency of the unified profile attributes across different data sources.
Reference: Salesforce Data Cloud Consultant Exam Guide, Identity Resolution, Reconciliation Rules
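As an illustration of the rule's fallback behavior, here is a minimal Python sketch; the source names, priority order, and record shape are hypothetical, not Data Cloud's internal representation:

```python
# Pick an attribute value from the highest-priority source that has one,
# falling back to lower-priority sources when it is missing.
SOURCE_PRIORITY = ["Salesforce CRM", "Marketing Cloud", "Google Analytics"]

def reconcile(attribute, records):
    """records: list of {'source': str, 'values': dict} for one individual."""
    for source in SOURCE_PRIORITY:
        for record in records:
            if record["source"] == source and record["values"].get(attribute):
                return record["values"][attribute]
    return None

records = [
    {"source": "Marketing Cloud", "values": {"first_name": "Bob"}},
    {"source": "Salesforce CRM", "values": {"first_name": "Robert"}},
]
print(reconcile("first_name", records))  # -> "Robert" (CRM wins)
```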
12. Which two dependencies prevent a data stream from being deleted? Choose 2 answers
The underlying data lake object is used in activation.
The underlying data lake object is used in a data transform. (Correct)
The underlying data lake object is mapped to a data model object. (Correct)
The underlying data lake object is used in segmentation.
Explanation:
To delete a data stream in Data Cloud, the underlying data lake object (DLO) must not have any
dependencies or references to other objects or processes.
The following two dependencies prevent a data stream from being deleted:
Data transform: This is a process that transforms the ingested data into a standardized format and
structure for the data model. A data transform can use one or more DLOs as input or output. If a
DLO is used in a data transform, it cannot be deleted until the data transform is removed or
modified.
Data model object: This is an object that represents a type of entity or relationship in the
data model. A data model object can be mapped to one or more DLOs to define its attributes and
values. If a DLO is mapped to a data model object, it cannot be deleted until the mapping is removed
or changed.
13. What should a user do to pause a segment activation with the intent of using that
segment again?
The correct answer is A, deactivate the segment: deactivation pauses publishing while keeping the
segment available for reuse. The other options do not meet the requirement:
B. Delete the segment. This option permanently removes the segment from Data Cloud and cannot
be undone. This option does not allow the user to use the segment again.
C. Skip the activation. This option skips the current activation cycle for the segment, but does not
affect the future activation cycles. This option does not pause the segment activation indefinitely.
D. Stop the publish schedule. This option stops the segment from publishing to the chosen targets,
but does not deactivate the segment. This option does not pause the segment activation
completely.
14. When creating a segment on an individual, what is the result of using two separate
containers linked by an AND?
Explanation:
When creating a segment on an individual, using two separate containers linked by an AND means
that the individual must satisfy both the conditions in the containers. In this case, the individual must
have purchased at least one product with the color attribute equal to ‘red’ and at least one product
with the primary product category attribute equal to ‘shoes’. The products do not have to be the
same or purchased in the same transaction. Therefore, the correct answer is A.
The other options are incorrect because they imply different logical operators or conditions.
Option B implies that the individual must have purchased a single product that has both the color
attribute equal to ‘red’ and the primary product category attribute equal to ‘shoes’.
Option C implies that the individual must have purchased only one product that has both the color
attribute equal to ‘red’ and the primary product category attribute equal to ‘shoes’ and no other
products.
Option D implies that the individual must have purchased either one product with the color attribute
equal to ‘red’ or one product with the primary product category attribute equal to ‘shoes’ or both,
which is equivalent to using an OR operator instead of an AND operator.
Reference: Create a Container for Segmentation; Create a Segment in Data Cloud; Navigate Data
Cloud Segmentation
15. What should an organization use to stream inventory levels from an inventory
management system into Data Cloud in a fast and scalable, near-real-time way?
Cloud Storage Connector
Commerce Cloud Connector
Ingestion API (Correct)
Marketing Cloud Personalization Connector
Explanation:
The Ingestion API is a RESTful API that allows you to stream data from any source into Data Cloud
in a fast and scalable way. You can use the Ingestion API to send data from your inventory
management system into Data Cloud as JSON objects, and then use Data Cloud to create data
models, segments, and insights based on your inventory data. The Ingestion API supports both
batch and streaming modes, and can handle up to 100,000 records per second. The Ingestion API
also provides features such as data validation, encryption, compression, and retry mechanisms to
ensure data quality and security.
Reference: Ingestion API Developer Guide, Ingest Data into Data Cloud
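As a rough sketch of what a streaming insert might look like, here is a Python example using the requests library; the tenant host, connector name ("inventory-connector"), object name ("InventoryLevel"), payload fields, and token are placeholders, and the exact endpoint shape is defined in the Ingestion API Developer Guide:

```python
# Hedged sketch: stream inventory records into Data Cloud via the
# Ingestion API. All identifiers below are illustrative placeholders.
import requests

url = ("https://<tenant>.c360a.salesforce.com/api/v1/ingest/sources/"
       "inventory-connector/InventoryLevel")
payload = {"data": [
    {"sku": "TENT-001", "warehouse": "DFW", "on_hand": 42},
    {"sku": "STOVE-010", "warehouse": "DFW", "on_hand": 7},
]}
response = requests.post(
    url,
    json=payload,
    headers={"Authorization": "Bearer <access-token>"},
)
response.raise_for_status()  # surface HTTP errors (auth, validation, etc.)
```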
16. Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new
line of business. The new business specializes in gourmet camping food. For business
reasons as well as security reasons, it's important to NTO to keep all Data Cloud data
separated by brand.
Which capability best supports NTO's desire to separate its data by brand?
Data streams for each brand
Data model objects for each brand
Data spaces for each brand (Correct)
Data sources for each brand
Explanation:
Data spaces are logical containers that allow you to separate and organize your data by different
criteria, such as brand, region, product, or business unit. Data spaces can help you manage data
access, security, and governance, as well as enable cross-cloud data integration and activation.
For NTO, data spaces can support their desire to separate their data by brand, so that they can have
different data models, rules, and insights for their outdoor lifestyle clothing and gourmet camping
food businesses. Data spaces can also help NTO comply with any data privacy and security
regulations that may apply to their different brands. The other options are incorrect because they
do not provide the same level of data separation and organization as data spaces. Data streams are
used to ingest data from different sources into Data Cloud, but they do not separate the data by
brand. Data model objects are used to define the structure and attributes of the data, but they do
not isolate the data by brand. Data sources are used to identify the origin and type of the data, but
they do not partition the data by brand.
Reference: Data Spaces Overview, Create Data Spaces, Data Privacy and Security in Data Cloud,
Data Streams Overview, Data Model Objects Overview, Data Sources Overview
17. Cumulus Financial created a segment called High Investment Balance Customers. This is
a foundational segment that includes several segmentation criteria the marketing team
should consistently use.
Which feature should the consultant suggest the marketing team use to ensure this
consistency when creating future, more refined segments?
A. Create new segments using nested segments. (Correct)
B. Create a High Investment Balance calculated insight.
C. Package High Investment Balance Customers in a data kit.
D. Create new segments by cloning High Investment Balance Customers.
Explanation:
Nested segments are segments that include or exclude one or more existing segments. They allow
the marketing team to reuse filters and maintain consistency in their data by using an existing
segment to build a new one. For example, the marketing team can create a nested segment that
includes High Investment Balance Customers and excludes customers who have opted out of email
marketing. This way, they can leverage the foundational segment and apply additional criteria
without duplicating the rules.
The other options are not the best features to ensure consistency because:
B. A calculated insight is a data object that performs calculations on data lake objects or CRM data
and returns a result. It is not a segment and cannot be used for activation or personalization.
C. A data kit is a bundle of packageable metadata that can be exported and imported across Data
Cloud orgs. It is not a feature for creating segments, but rather for sharing components.
D. Cloning a segment creates a copy of the segment with the same rules and filters. It does not
allow the marketing team to add or remove criteria from the original segment, and it may create
confusion and redundancy.
Reference: Create a Nested Segment - Salesforce, Save Time with Nested Segments (Generally
Available) - Salesforce, Calculated Insights - Salesforce, Create and Publish a Data Kit Unit |
Salesforce Trailhead, Create a Segment in Data Cloud - Salesforce
18. Cumulus Financial uses Service Cloud as its CRM and stores mobile phone, home phone,
and work phone as three separate fields for its customers on the Contact record. The
company plans to use Data Cloud and ingest the Contact object via the CRM Connector.
What is the most efficient approach that a consultant should take when ingesting this data to
ensure all the different phone numbers are properly mapped and available for use in
activation?
A. Ingest the Contact object and map the Work Phone, Mobile Phone, and Home Phone to the
Contact Point Phone data map object from the Contact data stream.
B. Ingest the Contact object and use streaming transforms to normalize the phone numbers
from the Contact data stream into a separate Phone data lake object (DLO) that contains
three rows, and then map this new DLO to the Contact Point Phone data map object. (Correct)
C. Ingest the Contact object and then create a calculated insight to normalize the phone numbers,
and then map to the Contact Point Phone data map object.
D. Ingest the Contact object and create formula fields in the Contact data stream on the phone
numbers, and then map to the Contact Point Phone data map object.
Explanation:
The most efficient approach that a consultant should take when ingesting this data to ensure all the
different phone numbers are properly mapped and available for use in activation is B. Ingest the
Contact object and use streaming transforms to normalize the phone numbers from the Contact data
stream into a separate Phone data lake object (DLO) that contains three rows, and then map this
new DLO to the Contact Point Phone data map object. This approach allows the consultant to use
the streaming transforms feature of Data Cloud, which enables data manipulation and transformation
at the time of ingestion, without requiring any additional processing or storage. Streaming transforms
can be used to normalize the phone numbers from the Contact data stream, such as removing
spaces, dashes, or parentheses, and adding country codes if needed. The normalized phone
numbers can then be stored in a separate Phone DLO, which can have one row for each phone
number type (work, home, mobile). The Phone DLO can then be mapped to the Contact Point
Phone data map object, which is a standard object that represents a phone number associated with
a contact point. This way, the consultant can ensure that all the phone numbers are available for
activation, such as sending SMS messages or making calls to the customers.
Option A is incorrect because it does not normalize the phone numbers, which may cause issues
with activation or identity resolution. Option C is incorrect because it requires creating a calculated
insight, which is an additional step that consumes more resources and time than streaming
transforms. Option D is incorrect because it requires
creating formula fields in the Contact data stream, which may not be supported by the CRM
Connector or may cause conflicts with the existing fields in the Contact object.
Reference: Salesforce Data Cloud Consultant Exam Guide, Data Ingestion and Modeling, Streaming
Transforms, Contact Point Phone
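As a minimal sketch of the transform described above, the following Python denormalizes one Contact row with three phone fields into three normalized rows for a Phone DLO; the field and object names are illustrative, not the connector's actual schema:

```python
# Turn one Contact record into three normalized Phone DLO rows.
import re

contact = {
    "ContactId": "003XX0001",
    "HomePhone": "(415) 555-0100",
    "MobilePhone": "415-555-0101",
    "WorkPhone": "415 555 0102",
}

def normalize(number):
    digits = re.sub(r"\D", "", number)   # strip spaces, dashes, parentheses
    return f"+1{digits}" if len(digits) == 10 else digits  # assumed US numbers

phone_rows = [
    {"ContactId": contact["ContactId"], "PhoneType": ptype,
     "Number": normalize(contact[field])}
    for field, ptype in [("HomePhone", "Home"),
                         ("MobilePhone", "Mobile"),
                         ("WorkPhone", "Work")]
]
for row in phone_rows:
    print(row)
```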
19. A customer has a Master Customer table from their CRM to ingest into Data Cloud. The
table contains a name and primary email address, along with other personally identifiable
information (PII).
How should the fields be mapped to support identity resolution?
Explanation:
To support identity resolution in Data Cloud, the fields from the Master Customer table should be
mapped to the standard data model objects that are designed for this purpose. The Individual object
is used to store the name and other personally identifiable information (PII) of a customer, while the
Contact Phone Email object is used to store the primary email address and other contact information
of a customer. These objects are linked by a relationship field that indicates the contact information
belongs to the individual. By mapping the fields to these objects, Data Cloud can use the identity
resolution rules to match and reconcile the profiles from different sources based on the name and
email address fields. The other options are not recommended because they either create a new
custom object that is not part of the standard data model, or map all fields to the Customer object
that is not intended for identity resolution, or map all fields to the Individual object that does not have
a standard email address field.
Reference: Data Modeling Requirements for Identity Resolution, Create Unified Individual Profiles
20. Cloud Kicks received a Request to be Forgotten by a customer.
In which two ways should a consultant use Data Cloud to honor this request? Choose 2
answers
Delete the data from the incoming data stream and perform a full refresh.
Add the Individual ID to a headerless file and use the delete from file functionality. (Correct)
Use Data Explorer to locate and manually remove the Individual.
Use the Consent API to suppress processing and delete the Individual and related records
from source data streams. (Correct)
Explanation:
To honor a Request to be Forgotten by a customer, a consultant should use Data Cloud in two
ways:
Add the Individual ID to a headerless file and use the delete from file functionality. This option
allows the consultant to delete multiple Individuals from Data Cloud by uploading a CSV file with
their IDs. The deletion process is asynchronous and can take up to 24 hours to complete.
Use the Consent API to suppress processing and delete the Individual and related records from
source data streams. This option allows the consultant to submit a Data Deletion request for an
Individual profile in Data Cloud using the Consent API. A Data Deletion request deletes the
specified Individual entity and any entities where a relationship has been defined between that
entity's identifying attribute and the Individual ID attribute. The deletion process is reprocessed at
30, 60, and 90 days to ensure a full deletion. The other options are not correct because:
Deleting the data from the incoming data stream and performing a full refresh will not delete the
existing data in Data Cloud, only the new data from the source system.
Using Data Explorer to locate and manually remove the Individual will not delete the related records
from the source data streams, only the Individual entity in Data Cloud.
Reference: Requesting Data Deletion or Right to Be Forgotten; Data Refresh for Data Cloud; Data
Explorer
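As a small sketch of the "delete from file" input described above, the following Python writes a headerless CSV of Individual IDs; the IDs are placeholders:

```python
# Build the headerless CSV of Individual IDs for the delete-from-file option.
import csv

individual_ids = ["0031t00000AbCdE", "0031t00000FgHiJ"]  # illustrative IDs

with open("individuals_to_delete.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for individual_id in individual_ids:
        writer.writerow([individual_id])  # one ID per row, no header row
```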
21. Cumulus Financial uses Data Cloud to segment banking customers and activate them for
direct mail via a Cloud File Storage activation. The company also wants to analyze
individuals who have been in the segment within the last 2 years.
Which Data Cloud component allows for this analysis?
Explanation:
Data Cloud allows customers to analyze the segment membership history of individuals using the
Segment Membership data model object. This object stores information about when an individual
joined or left a segment, and can be used to create reports and dashboards to track segment
performance over time. Cumulus Financial can use this object to filter individuals who have been in
the segment within the last 2 years and compare them with other metrics.
The other options are not Data Cloud components that allow for this analysis. Segment exclusion is
a feature that allows customers to remove individuals from a segment based on another segment.
Nested segments are segments that are created from other segments using logical operators.
Calculated insights are derived attributes that are created from existing data using formulas.
22. What is Data Cloud's primary value to customers?
Explanation:
Data Cloud is a platform that enables you to activate all your customer data across Salesforce
applications and other systems. Data Cloud allows you to create a unified profile of each customer
by ingesting, transforming, and linking data from various sources, such as CRM, marketing,
commerce, service, and external data providers. Data Cloud also provides insights and analytics on
customer behavior, preferences, and needs, as well as tools to segment, target, and personalize
customer interactions. Data Cloud’s primary value to customers is to provide a unified view of a
customer and their related data, which can help you deliver better customer experiences, increase
loyalty, and drive growth.
23. During an implementation project, a consultant completed ingestion of all data streams
for their customer.
Prior to segmenting and acting on that data, which additional configuration is required?
Data Activation
Calculated Insights
Data Mapping
Identity Resolution (Correct)
Explanation:
After ingesting data from different sources into Data Cloud, the additional configuration that is
required before segmenting and acting on that data is Identity Resolution. Identity Resolution is the
process of matching and reconciling source profiles from different data sources and creating unified
profiles that represent a single individual or entity. Identity Resolution enables you to create a
360-degree view of your customers and prospects, and to segment and activate them based on their
attributes and behaviors. To configure Identity Resolution, you need to create and deploy a ruleset
that defines the match rules and reconciliation rules for your data. The other options are incorrect
because they are not required before segmenting and acting on the data. Data Activation is the
process of sending data from Data Cloud to other Salesforce clouds or external destinations for
marketing, sales, or service purposes. Calculated Insights are derived attributes that are computed
based on the source or unified data, such as lifetime value, churn risk, or product affinity. Data
Mapping is the process of mapping source attributes to unified attributes in the data model. These
configurations can be done after segmenting and acting on the data, or in parallel with Identity
Resolution, but they are not prerequisites for it.
Reference: Identity Resolution Overview, Segment and Activate Data in Data Cloud, Configure
Identity Resolution Rulesets, Data Activation Overview, Calculated Insights Overview, Data
Mapping Overview
24. Northern Trail Outfitters (NTO) wants to connect their B2C Commerce data with Data
Cloud and bring two years of transactional history into Data Cloud.
What should NTO use to achieve this?
Explanation:
The B2C Commerce Starter Bundles are predefined data streams that ingest order and product data
from B2C Commerce into Data Cloud. However, the starter bundles only bring in the last 90 days of
data by default. To bring in two years of transactional history, NTO needs to use a custom extract
from B2C Commerce that includes the historical data and configure the data stream to use the
custom extract as the source. The other options are not sufficient to achieve this because:
A. B2C Commerce Starter Bundles only ingest the last 90 days of data by default.
B. Direct Sales Order entity ingestion is not a supported method for connecting B2C Commerce data
with Data Cloud. Data Cloud does not provide a direct-access connection for B2C Commerce data,
only data ingestion.
C. Direct Sales Product entity ingestion is not a supported method for connecting B2C Commerce
data with Data Cloud. Data Cloud does not provide a direct-access connection for B2C Commerce
data, only data ingestion.
Reference: Create a B2C Commerce Data Bundle - Salesforce, B2C Commerce Connector -
Salesforce, Salesforce B2C Commerce Pricing Plans & Costs
25. A customer has a requirement to receive a notification whenever an activation fails for a
particular segment.
Which feature should the consultant use to solve this use case?
A. Flow
B. Report
C. Activation alert (Correct)
D. Dashboard
Explanation:
The feature that the consultant should use for this use case is C, activation alerts.
Activation alerts are notifications that are sent to users when an activation fails or succeeds for a
segment. Activation alerts can be configured in the Activation Settings page, where the consultant
can specify the recipients, the frequency, and the conditions for sending the alerts. Activation alerts
can help the customer monitor the status of their activations and troubleshoot any issues that may
arise.
26. Which two steps should a consultant take if a successfully configured Amazon S3 data
stream fails to refresh with a "NO FILE FOUND" error message? Choose 2 answers
Check if correct permissions are configured for the Data Cloud user. (Correct)
Check if the Amazon S3 data source is enabled in Data Cloud Setup.
Check if the file exists in the specified bucket location. (Correct)
Check if correct permissions are configured for the S3 user.
Explanation:
A “NO FILE FOUND” error message indicates that Data Cloud cannot access or locate the file from
the Amazon S3 source.
There are two possible reasons for this error and two corresponding steps that a consultant should
take to troubleshoot it:
The Data Cloud user does not have the correct permissions to read the file from the Amazon S3
bucket. This could happen if the user’s permission set or profile does not include the Data Cloud
Data Stream Read permission, or if the user’s Amazon S3 credentials are invalid or expired. To fix
this issue, the consultant should check and update the user’s permissions and credentials in Data
Cloud and Amazon S3, respectively.
The file does not exist in the specified bucket location. This could happen if the file name or path has
changed, or if the file has been deleted or moved from the Amazon S3 bucket. To fix this issue, the
consultant should check and verify the file name and path in the Amazon S3 bucket, and update the
data stream configuration in Data Cloud accordingly.
Reference: Create Amazon S3 Data Stream in Data Cloud, How to Use the Amazon S3 Storage
Connector in Data Cloud, Amazon S3 Connection
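As a troubleshooting sketch for the two causes above, the following Python (boto3) verifies that the file exists at the configured path and that the credentials can read it; the bucket and key names are placeholders:

```python
# Check the S3 side of a "NO FILE FOUND" error: does the file exist at the
# configured path, and can these credentials read it?
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # assumes credentials via environment or IAM role

try:
    s3.head_object(Bucket="example-ingest-bucket", Key="exports/orders.csv")
    print("File found and readable.")
except ClientError as e:
    code = e.response["Error"]["Code"]
    if code == "404":
        print("File missing: check the file name and path.")
    elif code == "403":
        print("Access denied: check the S3 user's permissions.")
    else:
        raise
```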
27. A consultant is discussing the benefits of Data Cloud with a customer that has multiple
disjointed data sources.
Which two functional areas should the consultant highlight in relation to managing customer
data? Choose 2 answers
Data Harmonization (Correct)
Unified Profiles (Correct)
Master Data Management
Data Marketplace
Explanation:
Data Cloud is an open and extensible data platform that enables smarter, more efficient AI with
secure access to first-party and industry data.
Two functional areas that the consultant should highlight in relation to managing customer data are:
Data Harmonization: Data Cloud harmonizes data from multiple sources and formats into a common
schema, enabling a single source of truth for customer data. Data Cloud also applies data quality
rules and transformations to ensure data accuracy and consistency.
Unified Profiles: Data Cloud creates unified profiles of customers and prospects by linking data
across different identifiers, such as email, phone, cookie, and device ID. Unified profiles provide a
holistic view of customer behavior, preferences, and interactions across channels and touchpoints.
Master Data Management: Master Data Management (MDM) is a process of creating and
maintaining a single, consistent, and trusted source of master data, such as product, customer,
supplier, or location data. Data Cloud does not provide MDM functionality, but it can integrate with
MDM solutions to enrich customer data.
Data Marketplace: Data Marketplace is a feature of Data Cloud that allows users to discover,
access, and activate data from third-party providers, such as demographic, behavioral, and intent
data. Data Marketplace is not a functional area related to managing customer data, but rather a
source of external data that can enhance customer data.
28. A retailer wants to unify profiles using a Loyalty ID, which is different from the unique ID of
their customers.
Which object should the consultant use in identity resolution to perform exact match rules on
the Loyalty ID?
Party Identification object (Correct)
Loyalty Identification object
Individual object
Contact Identification object
Explanation:
The Party Identification object is the correct object to use in identity resolution to perform exact
match rules on the Loyalty ID. The Party Identification object is a child object of the Individual object
that stores different types of identifiers for an individual, such as email, phone, loyalty ID, social
media handle, etc. Each identifier has a type, a value, and a source. The consultant can use the
Party Identification object to create a match rule that compares the Loyalty ID type and value across
different sources and links the corresponding individuals.
The other options are not correct objects to use in identity resolution to perform exact match rules on
the Loyalty ID. The Loyalty Identification object does not exist in Data Cloud. The Individual object is
the parent object that represents a unified profile of an individual, but it does not store the Loyalty ID
directly. The Contact Identification object is a child object of the Contact object that stores identifiers
for a contact, such as email, phone, etc., but it does not store the Loyalty ID.
29. Which data model subject area defines the revenue or quantity for an opportunity by
product family?
Engagement
Product
Party
Sales Order (Correct)
Explanation:
The Sales Order subject area defines the details of an order placed by a customer for one or more
products or services. It includes information such as the order date, status, amount, quantity,
currency, payment method, and delivery method. The Sales Order subject area also allows you to
track the revenue or quantity for an opportunity by product family, which is a grouping of products
that share common characteristics or features. For example, you can use the Sales Order Line Item
DMO to associate each product in an order with its product family, and then use the Sales Order
Revenue DMO to calculate the total revenue or quantity for each product family in an opportunity.
Reference: Sales Order Subject Area, Sales Order Revenue DMO Reference
30. Which configuration supports separate Amazon S3 buckets for data ingestion and
activation?
Dedicated S3 data sources in Data Cloud setup (Correct)
Multiple S3 connectors in Data Cloud setup
Dedicated S3 data sources in activation setup
Separate user credentials for data stream and activation target
Explanation:
To support separate Amazon S3 buckets for data ingestion and activation, you need to configure
dedicated S3 data sources in Data Cloud setup. Data sources are used to identify the origin and
type of the data that you ingest into Data Cloud. You can create different data sources for each S3
bucket that you want to use for ingestion or activation, and specify the bucket name, region, and
access credentials. This way, you can separate and organize your data by different criteria, such as
brand, region, product, or business unit. The other options are incorrect because they do not
support separate S3 buckets for data ingestion and activation. Multiple S3 connectors are not a valid
configuration in Data Cloud setup, as there is only one S3 connector available. Dedicated S3 data
sources in activation setup are not a valid configuration either, as activation setup does not require
data sources, but activation targets. Separate user credentials for data stream and activation target
are not sufficient to support separate S3 buckets, as you also need to specify the bucket name and
region for each data source.
Reference: Data Sources Overview, Amazon S3 Storage Connector, Data Spaces Overview, Data
Streams Overview, Data Activation Overview
31. A customer wants to use the transactional data from their data warehouse in Data Cloud.
They are only able to export the data via an SFTP site.
How should the file be brought into Data Cloud?
A. Use the SFTP Connector. (Correct)
B. Use the Cloud Storage Connector.
C. Use the Data Import Wizard.
D. Use the Dataloader.
Explanation:
The SFTP Connector is a data source connector that allows Data Cloud to ingest data from an SFTP
server. The customer can use the SFTP Connector to create a data stream from their exported file
and bring it into Data Cloud as a data lake object.
The other options are not the best ways to bring the file into Data Cloud because:
B. The Cloud Storage Connector is a data source connector that allows Data Cloud to ingest data
from cloud storage services such as Amazon S3, Azure Storage, or Google Cloud Storage. The
customer does not have their data in any of these services, but only on an SFTP site.
C. The Data Import Wizard is a tool that allows users to import data for many standard Salesforce
objects, such as accounts, contacts, leads, solutions, and campaign members. It is not designed to
import data from an SFTP site or for custom objects in Data Cloud.
D. The Dataloader is an application that allows users to insert, update, delete, or export Salesforce
records. It is not designed to ingest data from an SFTP site or into Data Cloud.
Reference: SFTP Connector - Salesforce, Create Data Streams with the SFTP Connector in Data
Cloud - Salesforce, Data Import Wizard - Salesforce, Salesforce Data Loader
32. When performing segmentation or activation, which time zone is used to publish and
refresh data?
A. Time zone specified on the activity at the time of creation
B. Time zone of the user creating the activity
C. Time zone of the Data Cloud Admin user
D. Time zone set by the Salesforce Data Cloud org (Correct)
Explanation:
The time zone that is used to publish and refresh data when performing segmentation or activation is
D. Time zone set by the Salesforce Data Cloud org. This time zone is the one that is configured in
the org settings when Data Cloud is provisioned, and it applies to all users and activities in Data
Cloud. This time zone determines when the segments are scheduled to refresh and when the
activations are scheduled to publish. Therefore, it is important to consider the time zone difference
between the Data Cloud org and the destination systems or channels when planning the
segmentation and activation strategies.
33. Cumulus Financial is currently using Data Cloud and ingesting transactional data from its
backend system via an S3 Connector in upsert mode. During the initial setup six months ago,
the company created a formula field in Data Cloud to create a custom classification. It now
needs to update this formula to account for more classifications.
What should the consultant keep in mind with regard to formula field updates when using the
S3 Connector?
Data Cloud will initiate a full refresh of data from S3 and will update the formula on all
records. (Correct)
Data Cloud will only update the formula on a go-forward basis for new records.
Data Cloud does not support formula field updates for data streams of type upsert.
Data Cloud will update the formula for all records at the next incremental upsert refresh.
34. Luxury Retailers created a segment targeting high value customers that it activates
through Marketing Cloud for email communication. The company notices that the activated
count is smaller than the segment count.
What is a likely reason for this?
Explanation:
Data Cloud requires a Contact Point for Marketing Cloud activations, which is a record that links an
individual to an email address. This ensures that the individual has given consent to receive email
communications and that the email address is valid. If the individual does not have a related Contact
Point, they will not be activated in Marketing Cloud. This may result in a lower activated count than
the segment count.
35. Northern Trail Outfitters wants to implement Data Cloud and has several use cases in
mind.
Which two use cases are considered a good fit for Data Cloud? Choose 2 answers
To ingest and unify data from various sources to reconcile customer identity (Correct)
To create and orchestrate cross-channel marketing messages
To use harmonized data to more accurately understand the customer and business impact (Correct)
To eliminate the need for separate business intelligence and IT data management tools
Explanation:
Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query,
analyze, and act on their data across various Salesforce and external sources.
Some of the use cases that are considered a good fit for Data Cloud are:
To ingest and unify data from various sources to reconcile customer identity. Data Cloud can help
customers bring all their data, whether streaming or batch, into Salesforce and map it to a common
data model. Data Cloud can also help customers resolve identities across different channels and
sources and create unified profiles of their customers.
To use harmonized data to more accurately understand the customer and business impact. Data
Cloud can help customers transform and cleanse their data before using it, and enrich it with
calculated insights and related attributes. Data Cloud can also help customers create segments and
audiences based on their data and activate them in any channel. Data Cloud can also help
customers use AI to predict customer behavior and outcomes.
The other two options are not use cases that are considered a good fit for Data Cloud. Data Cloud
does not provide features to create and orchestrate cross-channel marketing messages, as this is
typically handled by other Salesforce solutions such as Marketing Cloud. Data Cloud also does not
eliminate the need for separate business intelligence and IT data management tools, as it is
designed to work with them and complement their capabilities.
36. What is a benefit of building a trust-based, first-party data asset?
Explanation:
Building a trust-based, first-party data asset means collecting, managing, and activating data from
your own customers and prospects in a way that respects their privacy and preferences. It also
means providing them with clear and honest information about how you use their data, what benefits
they can expect from sharing their data, and how they can control their data. By doing so, you can
create a mutually beneficial relationship with your customers, where they trust you to use their data
responsibly and ethically, and you can deliver more relevant and personalized experiences to them.
A trust-based, first-party data asset can help you improve customer loyalty, retention, and growth, as
well as comply with data protection regulations and standards.
Reference: Use first-party data for a powerful digital experience, Why first-party data is the key to
data privacy, Build a first-party data strategy
37. What is the result of a segmentation criteria filtering on City | Is Equal To | 'San José'?
Cities containing 'San José', 'San Jose', 'san josé', or 'san jose'
Cities only containing 'San Jose' or 'san jose'
Cities only containing 'San José' or 'San Jose'
Cities only containing 'San José' or 'san josé' (Correct)
Explanation:
The result of a segmentation criteria filtering on City | Is Equal To | 'San José' is cities only
containing 'San José' or 'san josé'. The comparison is accent-sensitive but not case-sensitive, so it
matches both casings of the accented name while excluding the unaccented variants 'San Jose' and
'san jose', which do not match the filter value. To include cities with different variations of the name
'San José', you would need to use the OR operator and add multiple filter values, such as 'San José'
OR 'San Jose' OR 'san jose' OR 'san josé'.
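As an illustration of the difference between these comparison modes, here is a minimal Python sketch; it approximates the behavior described above and is not Data Cloud's actual matching engine:

```python
# Accent-sensitive but case-insensitive matching vs. a fully folded match.
import unicodedata

def strip_accents(s):
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if not unicodedata.combining(c))

cities = ["San José", "San Jose", "san josé", "san jose"]

# Case-insensitive, accent-sensitive: keeps the accent, folds the case.
matches = [c for c in cities if c.lower() == "San José".lower()]
print(matches)  # -> ['San José', 'san josé']

# A fully insensitive comparison would also catch the unaccented spellings.
loose = [c for c in cities
         if strip_accents(c).lower() == strip_accents("San José").lower()]
print(loose)    # -> all four variants
```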
38. During a privacy law discussion with a customer, the customer indicates they need to
honor requests for the right to be forgotten. The consultant determines that Consent API will
solve this business need.
Which two considerations should the consultant inform the customer about? Choose 2
answers
A. Data deletion requests are reprocessed at 30, 60, and 90 days.
B. Data deletion requests are processed within 1 hour.
C. Data deletion requests are submitted for Individual profiles. (Correct)
D. Data deletion requests submitted to Data Cloud are passed to all connected Salesforce
clouds. (Correct)
Explanation:
When advising a customer about using the Consent API in Salesforce to comply with requests for
the right to be forgotten, the consultant should focus on two primary considerations:
Data deletion requests are submitted for Individual profiles (Answer C): The Consent API in
Salesforce is designed to handle data deletion requests specifically for individual profiles. This
means that when a request is made to delete data, it is targeted at the personal data associated with
an individual's profile in the Salesforce system. The consultant should inform the customer that the
requests must be specific to individual profiles to ensure accurate processing and compliance with
privacy laws.
Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds
(Answer D): When a data deletion request is made through the Consent API in Salesforce Data
Cloud, the request is not limited to the Data Cloud alone. Instead, it propagates through all
connected Salesforce clouds, such as Sales Cloud, Service Cloud, Marketing Cloud, etc. This
ensures comprehensive compliance with the right to be forgotten across the entire Salesforce
ecosystem. The customer should be aware that the deletion request will affect all instances of the
individual’s data across the connected Salesforce environments.
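As a rough sketch of what submitting such a request might look like from Python, the following uses the requests library; the host, API version, action path, record ID, and token are placeholders, so check the Consent API documentation for the exact contract:

```python
# Hedged, illustrative sketch of a right-to-be-forgotten request via the
# Consent API. All identifiers below are placeholders, not a verified call.
import requests

url = ("https://<myorg>.my.salesforce.com/services/data/v59.0/"
       "consent/action/shouldForget")
response = requests.patch(
    url,
    params={"ids": "001XX000003GYcFAAW"},  # Individual record ID (illustrative)
    headers={"Authorization": "Bearer <access-token>"},
)
response.raise_for_status()
print(response.json())
```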
39. To import campaign members into a campaign in Salesforce CRM, a user wants to export
the segment to Amazon S3. The resulting file needs to include the Salesforce CRM Campaign
ID in the name.
Which two options can achieve this outcome? Choose 2 answers
A. Include the campaign identifier in the activation name. (Correct)
B. Hard code the campaign identifier as a new attribute in the campaign activation.
C. Include the campaign identifier in the filename specification. (Correct)
D. Include the campaign identifier in the segment name.
Explanation:
The two ways to achieve this outcome are A and C. Include campaign identifier in the activation
name and include campaign identifier in the filename specification. These two options allow the user
to specify the Salesforce CRM Campaign ID in the name of the file that is exported to Amazon S3.
The activation name and the filename specification are both configurable settings in the activation
wizard, where the user can enter the campaign identifier as a text or a variable. The activation name
is used as the prefix of the filename, and the filename specification is used as the suffix of the
filename. For example, if the activation name is “Campaign_123” and the filename specification is
“{segmentName}_{date}”, the resulting file name will be “Campaign_123_SegmentA_2023-12-
18.csv”. This way, the user can easily identify the file that corresponds to the campaign and import it
into Salesforce CRM.
The other options are not correct. Option B is incorrect because hard coding the campaign identifier
as a new attribute in the campaign activation is not possible. The campaign activation does not have
any attributes, only settings. Option D is incorrect because including the campaign identifier in the
segment name is not sufficient. The segment name is not used in the filename of the exported file,
unless it is specified in the filename specification. Therefore, the user will not be able to see the
campaign identifier in the file name.
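As a small sketch of how the prefix and suffix combine in the example above, with illustrative values:

```python
# Activation name (prefix) + filename specification (suffix) = export name.
from datetime import date

activation_name = "Campaign_123"   # carries the CRM Campaign ID
segment_name = "SegmentA"
filename_spec = f"{segment_name}_{date(2023, 12, 18).isoformat()}"

file_name = f"{activation_name}_{filename_spec}.csv"
print(file_name)  # -> Campaign_123_SegmentA_2023-12-18.csv
```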
40. How can a consultant modify attribute names to match a naming convention in Cloud File
Storage targets?
Use a formula field to update the field name in an activation.
Update attribute names in the data stream configuration.
Set preferred attribute names when configuring activation. (Correct)
Update field names in the data model object.
Explanation:
A Cloud File Storage target is a type of data action target in Data Cloud that allows sending data to
a cloud storage service such as Amazon S3 or Google Cloud Storage. When configuring an
activation to a Cloud File Storage target, a consultant can modify the attribute names to match a
naming convention by setting preferred attribute names in Data Cloud. Preferred attribute names are
aliases that can be used to control the field names in the target file. They can be set for each
attribute in the activation configuration, and they will override the default field names from the data
model object. The other options are incorrect because they do not affect the field names in the target
file. Using a formula field to update the field name in an activation will not change the field name, but
only the field value. Updating attribute names in the data stream configuration will not affect the
existing data lake objects or data model objects. Updating field names in the data model object will
change the field names for all data sources and activations that use the object, which may not be
desirable or consistent.
Reference: Preferred Attribute Name, Create a Data Cloud Activation Target, Cloud File Storage
Target
41. Northern Trail Outfitters wants to be able to calculate each customer's lifetime value (LTV)
but also create breakdowns of the revenue sourced by website, mobile app, and retail
channels.
What should a consultant use to address this use case in Data Cloud?
Flow Orchestration
Nested segments
Metrics on metrics (Correct)
Streaming data transform
Explanation:
Metrics on metrics is a feature that allows creating new metrics based on existing metrics and
applying mathematical operations on them. This can be useful for calculating complex business
metrics such as LTV, ROI, or conversion rates. In this case, the consultant can use metrics on
metrics to calculate the LTV of each customer by summing up the revenue generated by them
across different channels. The consultant can also create breakdowns of the revenue by channel by
using the channel attribute as a dimension in the metric definition.
Reference: Metrics on Metrics, Create Metrics on Metrics
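As an illustration of the calculation described above, here is a minimal pandas sketch on made-up data; it shows the idea of a total-LTV metric plus a per-channel breakdown, not Data Cloud's metric engine:

```python
# Total revenue per customer (LTV) plus a per-channel breakdown.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": ["C1", "C1", "C1", "C2"],
    "channel": ["website", "mobile app", "retail", "website"],
    "revenue": [120.0, 45.0, 80.0, 200.0],
})

ltv = orders.groupby("customer_id")["revenue"].sum()
by_channel = orders.pivot_table(index="customer_id", columns="channel",
                                values="revenue", aggfunc="sum", fill_value=0)
print(ltv)
print(by_channel)
```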
42. A consultant wants to ensure that every segment managed by multiple brand teams
adheres to the same set of exclusion criteria, that are updated on a monthly basis.
What is the most efficient option to allow for this capability?
Explanation:
The most efficient option to allow for this capability is to create a reusable container block with
common criteria. A container block is a segment component that can be reused across multiple
segments. A container block can contain any combination of filters, nested segments, and exclusion
criteria. A consultant can create a container block with the exclusion criteria that apply to all the
segments managed by multiple brand teams, and then add the container block to each segment.
This way, the consultant can update the exclusion criteria in one place and have them reflected in all
the segments that use the container block.
The other options are not the most efficient options to allow for this capability. Creating, publishing,
and deploying a data kit is a way to share data and segments across different data spaces, but it
does not allow for updating the exclusion criteria on a monthly basis. Creating a nested segment is a
way to combine segments using logical operators, but it does not allow for excluding individuals
based on specific criteria. Creating a segment and copying it for each brand is a way to create
multiple segments with the same exclusion criteria, but it does not allow for updating the exclusion
criteria in one place.
43. Which Data Cloud feature enables data from Salesforce CRM to be integrated in near-real time for segmentation and activation?
A. Streaming transformscorrect
B. Data model triggers
C. Sales and Service bundle
D. Data actions and Lightning web components
Explanation:
The correct answer is A. Streaming transforms. Streaming transforms are a feature of Data Cloud
that allows real-time data integration with Salesforce CRM. Streaming transforms use the Data
Cloud Streaming API to synchronize micro-batches of updates between the CRM data source and
Data Cloud in near-real time1. Streaming transforms enable Data Cloud to have the most current
and accurate CRM data for segmentation and activation2.
B. Data model triggers. Data model triggers are a feature of Data Cloud that allows custom logic to
be executed when data model objects are created, updated, or deleted3. Data model triggers do not
integrate data with Salesforce CRM, but rather manipulate data within Data Cloud.
C. Sales and Service bundle. The Sales and Service bundle is a feature of Data Cloud that provides pre-built data streams, data model objects, segments, and activations for Sales Cloud and Service Cloud data sources4. The Sales and Service bundle does not integrate data in real time with Salesforce CRM, but rather ingests data at scheduled intervals.
D. Data actions and Lightning web components. Data actions and Lightning web components are
features of Data Cloud that allow custom user interfaces and workflows to be built and embedded in
Salesforce applications5. Data actions and Lightning web components do not integrate data with
Salesforce CRM, but rather display and interact with data within Salesforce applications.
5: [Data Actions and Lightning Web Components in Data Cloud] unit on Trailhead
44. A consultant wants to create a multi-dimensional metric to identify unified individual lifetime value (LTV).
Which sequence of data model object (DMO) joins is necessary within the calculated insight to enable this calculation?
Unified Individual > Unified Link Individual > Sales Ordercorrect
Unified Individual > Individual > Sales Order
Sales Order > Individual > Unified Individual
Sales Order > Unified Individual
Question was not answered
Explanation:
To create a multi-dimensional metric to identify unified individual lifetime value (LTV), the sequence
of data model object (DMO) joins that is necessary within the calculated Insight is Unified Individual
> Unified Link Individual > Sales Order. This is because the Unified Individual DMO represents the
unified profile of an individual or entity that is created by identity resolution1. The Unified Link
Individual DMO represents the link between a unified individual and an individual from a source
system2. The Sales Order DMO represents the sales order information from a source system3. By
joining these three DMOs, you can calculate the LTV of a unified individual based on the sales order
data from different source systems. The other options are incorrect because they do not join the
correct DMOs to enable the LTV calculation. Option B is incorrect because the Individual DMO
represents the source profile of an individual or entity from a source system, not the unified profile4.
Option C is incorrect because the join order is reversed, and you need to start with the Unified
Individual DMO to identify the unified profile. Option D is incorrect because it is missing the Unified
Link Individual DMO, which is needed to link the unified profile with the source profile.
Reference: Unified Individual Data Model Object, Unified Link Individual Data Model Object, Sales
Order Data Model Object, Individual Data Model Object
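The join path can be pictured with the following sketch, using hypothetical keys and amounts in place of the actual DMO fields.
```python
import pandas as pd

# Illustrative stand-ins for the three DMOs in the join path.
unified = pd.DataFrame({"unified_id": ["u1", "u2"]})
link = pd.DataFrame({
    "unified_id": ["u1", "u1", "u2"],
    "individual_id": ["i1", "i2", "i3"],  # source profiles linked to each unified profile
})
sales_order = pd.DataFrame({
    "individual_id": ["i1", "i2", "i3", "i3"],
    "amount": [100.0, 50.0, 75.0, 25.0],
})

# Unified Individual -> Unified Link Individual -> Sales Order, then aggregate
# to get LTV per unified individual across all of its source profiles.
ltv = (
    unified
    .merge(link, on="unified_id")
    .merge(sales_order, on="individual_id")
    .groupby("unified_id")["amount"].sum()
)
print(ltv)  # u1: 150.0, u2: 100.0
```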
45. Cumulus Financial created a segment called Multiple Investments that contains
individuals who have invested in two or more mutual funds.
The company plans to send an email to this segment regarding a new mutual fund offering,
and wants to personalize the email content with information about each customer's current
mutual fund investments.
A. Include Fund Type equal to "Mutual Fund" as a related attribute. Configure an activation
based on the new segment with no additional attributes.
B. Choose the Multiple Investments segment, choose the Email contact point, add related attribute Fund Name, and add related attribute filter for Fund Type equal to "Mutual Fund".correct
C. Choose the Multiple Investments segment, choose the Email contact point, and add
related attribute Fund Type.
D. Include Fund Name and Fund Type by default for post processing in the target system.
Explanation:
To personalize the email content with information about each customer’s current mutual fund
investments, the Data Cloud consultant needs to add related attributes to the activation. Related
attributes are additional data fields that can be sent along with the segment to the target system for
personalization or analysis purposes. In this case, the consultant needs to add the Fund Name
attribute, which contains the name of the mutual fund that the customer has invested in, and apply a
filter for Fund Type equal to “Mutual Fund” to ensure that only relevant data is sent.
A. Including Fund Type equal to “Mutual Fund” as a related attribute is not enough to personalize the
email content. The consultant also needs to include the Fund Name attribute, which contains the
specific name of the mutual fund that the customer has invested in.
C. Adding related attribute Fund Type is not enough to personalize the email content. The consultant
also needs to add the Fund Name attribute, which contains the specific name of the mutual fund that
the customer has invested in, and apply a filter for Fund Type equal to “Mutual Fund” to ensure that
only relevant data is sent.
D. Including Fund Name and Fund Type by default for post processing in the target system is not a
valid option. The consultant needs to add the related attributes and filters during the activation
configuration in Data Cloud, not after the data is sent to the target system.
46. When a segment is activated to Amazon S3, which file in the S3 bucket contains the metadata about the segment for processing?
Explanation:
The file on the Amazon S3 that will contain the metadata about the segment for processing is B. The
json file. The json file is a metadata file that is generated along with the csv file when a segment is
activated to Amazon S3. The json file contains information such as the segment name, the segment
ID, the segment size, the segment attributes, the segment filters, and the segment schedule. The
destination system can use this file to identify the segment and its properties, and to match the
segment data with the corresponding fields in the destination system.
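A destination system might consume the metadata file along these lines; note that the file name and keys below are hypothetical stand-ins, since the exact schema is not shown here.
```python
import json

# File name is illustrative; the activation writes it alongside the csv payload.
with open("segment_metadata.json") as f:
    meta = json.load(f)

# Keys below are hypothetical examples of the kinds of properties described above.
print(meta.get("segmentName"), meta.get("segmentId"), meta.get("segmentSize"))
```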
47. A customer notices that their consolidation rate has recently increased. They contact the
consultant to ask why.
What are two likely explanations for the increase? Choose 2 answers
New data sources have been added to Data Cloud that largely overlap with the existing
profiles.correct
Duplicates have been removed from source system data streams.
Identity resolution rules have been removed to reduce the number of matched profiles.
Identity resolution rules have been added to the ruleset to increase the number of matched
profiles.correct
Question was not answered
Explanation:
The consolidation rate is a metric that measures the amount by which source profiles are combined
to produce unified profiles in Data Cloud, calculated as 1 - (number of unified profiles / number of
source profiles). A higher consolidation rate means that more source profiles are matched and
merged into fewer unified profiles, while a lower consolidation rate means that fewer source profiles
are matched and more unified profiles are created. There are two likely explanations for why the
consolidation rate has recently increased for a customer:
New data sources have been added to Data Cloud that largely overlap with the existing profiles. This
means that the new data sources contain many profiles that are similar or identical to the profiles
from the existing data sources. For example, if a customer adds a new CRM system that has the
same customer records as their old CRM system, the new data source will overlap with the existing
one. When Data Cloud ingests the new data source, it will use the identity resolution ruleset to
match and merge the overlapping profiles into unified profiles, resulting in a higher consolidation
rate.
Identity resolution rules have been added to the ruleset to increase the number of matched profiles.
This means that the customer has modified their identity resolution ruleset to include more match
rules or more match criteria that can identify more profiles as belonging to the same individual. For
example, if a customer adds a match rule that matches profiles based on email address and phone
number, instead of just email address, the ruleset will be able to match more profiles that have the
same email address and phone number, resulting in a higher consolidation rate.
Reference: Identity Resolution Calculated Insight: Consolidation Rates for Unified Profiles, Configure
Identity Resolution Rulesets
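The arithmetic behind the rate is straightforward, as this small sketch of the formula above shows.
```python
# Consolidation rate as defined in the explanation:
# 1 - (number of unified profiles / number of source profiles).
def consolidation_rate(source_profiles: int, unified_profiles: int) -> float:
    return 1 - (unified_profiles / source_profiles)

# 100 source records merged into 80 unified profiles -> 20% consolidation rate.
print(consolidation_rate(100, 80))  # 0.2

# Adding match rules merges more profiles: 100 sources into 60 unified -> 40%.
print(consolidation_rate(100, 60))  # 0.4
```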
48. A client wants to bring in loyalty data from a custom object in Salesforce CRM that contains a point balance for accrued hotel points and airline points within the same record. The client wants to split these point systems into two separate records for better tracking and processing.
What should the consultant use to accomplish this?
Explanation:
Batch transforms are a feature that allows creating new data lake objects based on existing data
lake objects and applying transformations on them. This can be useful for splitting, merging, or
reshaping data to fit the data model or business requirements. In this case, the consultant can use
batch transforms to create a second data lake object that contains only the airline points from the
original loyalty data object. The original object can be modified to contain only the hotel points. This
way, the client can have two separate records for each point system and track and process them
accordingly.
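The reshaping the batch transform performs can be sketched as follows, with illustrative column names standing in for the custom object's actual fields.
```python
import pandas as pd

# Hypothetical loyalty records with both point balances on one row.
loyalty = pd.DataFrame({
    "member_id": ["m1", "m2"],
    "hotel_points": [1200, 300],
    "airline_points": [450, 900],
})

# Reshape to one record per point system: two rows per member, one per program.
split = loyalty.melt(
    id_vars="member_id",
    value_vars=["hotel_points", "airline_points"],
    var_name="program",
    value_name="points",
)
print(split)
```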
49. A segment fails to refresh with the error "Segment references too many data lake objects (DLOs)".
Which two troubleshooting tips should help remedy this issue? Choose 2 answers
Split the segment into smaller segments.correct
Use calculated insights in order to reduce the complexity of the segmentation query.correct
Refine segmentation criteria to limit up to five custom data model objects (DMOs).
Space out the segment schedules to reduce DLO load.
Question was not answered
Explanation:
The error “Segment references too many data lake objects (DLOs)” occurs when a segment query
exceeds the limit of 50 DLOs that can be referenced in a single query. This can happen when the
segment has too many filters, nested segments, or exclusion criteria that involve different DLOs. To
remedy this issue, the consultant can try the following troubleshooting tips:
Split the segment into smaller segments. The consultant can divide the segment into multiple
segments that have fewer filters, nested segments, or exclusion criteria. This can reduce the number
of DLOs that are referenced in each segment query and avoid the error. The consultant can then
use the smaller segments as nested segments in a larger segment, or activate them separately.
Use calculated insights in order to reduce the complexity of the segmentation query. The consultant
can create calculated insights that are derived from existing data using formulas. Calculated insights
can simplify the segmentation query by replacing multiple filters or nested segments with a single
attribute. For example, instead of using multiple filters to segment individuals based on their
purchase history, the consultant can create a calculated insight that calculates the lifetime value of
each individual and use that as a filter.
The other options are not troubleshooting tips that can help remedy this issue. Refining
segmentation criteria to limit up to five custom data model objects (DMOs) is not a valid option, as
the limit of 50 DLOs applies to both standard and custom DMOs. Spacing out the segment
schedules to reduce DLO load is not a valid option, as the error is not related to the DLO load, but to
the segment query complexity.
50. An organization wants to enable users with the ability to identify and select text attributes
from a picklist of options.
Which Data Cloud feature should help with this use case?
Value suggestioncorrect
Data harmonization
Transformation formulas
Global picklists
Question was not answered
Explanation:
Value suggestion is a Data Cloud feature that allows users to see and select the possible values for
a text field when creating segment filters. Value suggestion can be enabled or disabled for each data
model object (DMO) field in the DMO record home. Value suggestion can help users to identify and
select text attributes from a picklist of options, without having to type or remember the exact values.
Value suggestion can also reduce errors and improve data quality by ensuring consistent and valid
values for the segment filters.
Reference: Use Value Suggestions in Segmentation, Considerations for Selecting Related Attributes
51. A consultant is working in a customer's Data Cloud org and is asked to delete the existing
identity resolution ruleset.
Which two impacts should the consultant communicate as a result of this action? Choose 2
answers
All individual data will be removed.
Unified customer data associated with this ruleset will be removed.correct
Dependencies on data model objects will be removed.correct
All source profile data will be removed
Question was not answered
Explanation:
Deleting an identity resolution ruleset has two major impacts that the consultant should communicate
to the customer. First, it will permanently remove all unified customer data that was created by the
ruleset, meaning that the unified profiles and their attributes will no longer be available in Data
Cloud1. Second, it will eliminate dependencies on data model objects that were used by the ruleset,
meaning that the data model objects can be modified or deleted without affecting the ruleset1. These
impacts can have significant consequences for the customer’s data quality, segmentation, activation,
and analytics, so the consultant should advise the customer to carefully consider the implications of
deleting a ruleset before proceeding. The other options are incorrect because they are not impacts
of deleting a ruleset. Option A is incorrect because deleting a ruleset will not remove all individual
data, but only the unified customer data. The individual data from the source systems will still be
available in Data Cloud1. Option D is incorrect because deleting a ruleset will not remove all source
profile data, but only the unified customer data. The source profile data from the data streams will
still be available in Data Cloud1.
52. Northern Trail Outfitters uploads new customer data to an Amazon S3 Bucket on a daily
basis to be ingested in Data Cloud.
In what order should each process be run to ensure that freshly imported data is ready and
available to use for any segment?
Calculated Insight > Refresh Data Stream > Identity Resolution
Refresh Data Stream > Calculated Insight > Identity Resolution
Identity Resolution > Refresh Data Stream > Calculated Insight
Refresh Data Stream > Identity Resolution > Calculated Insightcorrect
Question was not answered
Explanation:
To ensure that freshly imported data from an Amazon S3 Bucket is ready and available to use for
any segment, the following processes should be run in this order:
Refresh Data Stream: This process updates the data lake objects in Data Cloud with the latest data
from the source system. It can be configured to run automatically or manually, depending on the
data stream settings1. Refreshing the data stream ensures that Data Cloud has the most recent and
accurate data from the Amazon S3 Bucket.
Identity Resolution: This process creates unified individual profiles by matching and consolidating
source profiles from different data streams based on the identity resolution ruleset. It runs daily by
default, but can be triggered manually as well2. Identity resolution ensures that Data Cloud has a
single view of each customer across different data sources.
Calculated Insight: This process performs calculations on data lake objects or CRM data and returns
a result as a new data object. It can be used to create metrics or measures for segmentation or
analysis purposes3. Calculated insights ensure that Data Cloud has the derived data that can be
used for
personalization or activation.
53. Data Cloud receives a nightly file of all ecommerce transactions from the previous day.
Several segments and activations depend upon calculated insights from the updated data in
order to maintain accuracy in the customer's scheduled campaign messages.
What should the consultant do to ensure the ecommerce data is ready for use for each of the
scheduled activations?
Use Flow to trigger a change data event on the ecommerce data to refresh calculated insights and
segments before the activations are scheduled to run.correct
Set a refresh schedule for the calculated insights to occur every hour.
Ensure the activations are set to Incremental Activation and automatically publish every hour.
Ensure the segments are set to Rapid Publish and set to refresh every hour.
Question was not answered
Explanation:
The best option that the consultant should do to ensure the ecommerce data is ready for use for
each of the scheduled activations is A. Use Flow to trigger a change data event on the ecommerce
data to refresh calculated insights and segments before the activations are scheduled to run. This
option allows the consultant to use the Flow feature of Data Cloud, which enables automation and
orchestration of data processing tasks based on events or schedules. Flow can be used to trigger a
change data event on the ecommerce data, which is a type of event that indicates that the data has
been updated or changed. This event can then trigger the refresh of the calculated insights and
segments that depend on the ecommerce data, ensuring that they reflect the latest data. The refresh
of the calculated insights and segments can be completed before the activations are scheduled to
run, ensuring that the customer’s scheduled campaign messages are accurate and relevant.
Option B is incorrect because setting a refresh schedule for the calculated insights to occur every
hour may not be sufficient or efficient. The refresh schedule may not align with the activation
schedule, resulting in outdated or inconsistent data. The refresh schedule may also consume more
resources and time than necessary, as the ecommerce data may not change every hour.
Option C is incorrect because ensuring the activations are set to Incremental Activation and
automatically publish every hour may not solve the problem. Incremental Activation is a feature that
allows only the new or changed records in a segment to be activated, reducing the activation time
and size. However, this feature does not ensure that the segment data is updated or refreshed
based on the ecommerce data. The activation schedule may also not match the ecommerce data
update schedule, resulting in inaccurate or irrelevant campaign messages.
Option D is incorrect because ensuring the segments are set to Rapid Publish and set to refresh
every hour may not be optimal or effective. Rapid Publish is a feature that allows segments to be
published faster by skipping some validation steps, such as checking for duplicate records or invalid
values. However, this feature may compromise the quality or accuracy of the segment data, and
may not be suitable for all use cases. The refresh schedule may also have the same issues as
option B, as it may
not sync with the ecommerce data update schedule or the activation schedule, resulting in outdated
or inconsistent data.
Reference: Salesforce Data Cloud Consultant Exam Guide, Flow, Change Data Events, Calculated
Insights, Segments, [Activation]
54. Which two requirements must be met for a calculated insight to appear in the
segmentation canvas? Choose 2 answers
The metrics of the calculated insights must only contain numeric values.
The primary key of the segmented table must be a metric in the calculated insight.
The calculated insight must contain a dimension including the Individual or Unified Individual
Id.correct
The primary key of the segmented table must be a dimension in the calculated insight.correct
Question was not answered
Explanation:
A calculated insight is a custom metric or measure that is derived from one or more data model
objects or data lake objects in Data Cloud. A calculated insight can be used in segmentation to filter
or group the data based on the calculated value. However, not all calculated insights can appear in
the segmentation canvas. There are two requirements that must be met for a calculated insight to
appear in the segmentation canvas:
The calculated insight must contain a dimension including the Individual or Unified Individual Id. A
dimension is a field that can be used to categorize or group the data, such as name, gender, or
location. The Individual or Unified Individual Id is a unique identifier for each individual profile in Data
Cloud. The calculated insight must include this dimension to link the calculated value to the
individual profile and to enable segmentation based on the individual profile attributes.
The primary key of the segmented table must be a dimension in the calculated insight. The primary
key is a field that uniquely identifies each record in a table. The segmented table is the table that
contains the data that is being segmented, such as the Customer or the Order table. The calculated
insight must include the primary key of the segmented table as a dimension to ensure that the
calculated value is associated with the correct record in the segmented table and to avoid
duplication or inconsistency in the segmentation results.
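A table shape that satisfies both requirements might look like the following sketch, where unified_individual_id is an assumed stand-in for the Unified Individual Id dimension and the primary key of the segmented table.
```python
import pandas as pd

# Hypothetical order data keyed by unified individual.
orders = pd.DataFrame({
    "unified_individual_id": ["u1", "u1", "u2"],
    "amount": [100.0, 50.0, 75.0],
})

# Metric: numeric LTV; dimension: unified_individual_id, which keys each row
# back to a unified profile so the insight can appear in segmentation.
insight = (
    orders.groupby("unified_individual_id", as_index=False)["amount"].sum()
    .rename(columns={"amount": "ltv"})
)
print(insight)
```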
55. A customer submits a request to have their personal information deleted from Data Cloud.
Which action should the consultant take to accommodate this request in Data Cloud?
Use a streaming API call to delete the customer's information.
Use Profile Explorer to delete the customer data from Data Cloud.
Use Consent API to request deletion of the customer's information.
Use the Data Rights Subject Request tool to request deletion of the customer's information.correct
Question was not answered
56. What does the Ignore Empty Value option do in identity resolution?
Ignores empty fields when running any custom match rules
Ignores empty fields when running reconciliation rulescorrect
Ignores Individual object records with empty fields when running identity resolution rules
Ignores empty fields when running the standard match rules
Question was not answered
Explanation:
The Ignore Empty Value option in identity resolution allows customers to ignore empty fields when
running reconciliation rules. Reconciliation rules are used to determine the final value of an attribute
for a unified individual profile, based on the values from different sources. The Ignore Empty Value
option can be set to true or false for each attribute in a reconciliation rule. If set to true, the
reconciliation rule will skip any source that has an empty value for that attribute and move on to the
next source in the priority order. If set to false, the reconciliation rule will consider any source that
has an empty value for that attribute as a valid source and use it to populate the attribute value for
the unified individual profile.
The other options are not correct descriptions of what the Ignore Empty Value option does in identity
resolution. The Ignore Empty Value option does not affect the custom match rules or the standard
match rules, which are used to identify and link individuals across different sources based on their
attributes. The Ignore Empty Value option also does not ignore individual object records with empty
fields when running identity resolution rules, as identity resolution rules operate on the attribute level,
not the record level.
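The behavior can be sketched as a simple priority walk over source values, under the assumption that the sources are already ordered by reconciliation priority.
```python
# Sketch of the reconciliation behavior described above: ignore_empty controls
# whether an empty value from a higher-priority source is skipped or accepted.
def reconcile(values_by_priority: list[str], ignore_empty: bool) -> str:
    for value in values_by_priority:      # highest-priority source first
        if value or not ignore_empty:     # skip empties only when the option is on
            return value
    return ""

sources = ["", "jane@example.com"]        # top-priority source has no email
print(reconcile(sources, ignore_empty=True))   # jane@example.com (empty skipped)
print(reconcile(sources, ignore_empty=False))  # "" (empty accepted as a value)
```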
57. Northern Trail Outfitters (NTO) is configuring an identity resolution ruleset based on
Fuzzy Name and Normalized Email.
How can NTO ensure that the preferred email address for each unified individual is delivered to the activation target?
Explanation:
NTO is using Fuzzy Name and Normalized Email as match rules to link together data from different
sources into a unified individual profile. However, there might be cases where the same email
address is available from more than one source, and NTO needs to decide which one to use for
activation. For example, if Rachel has the same email address in Service Cloud and Marketing
Cloud, but prefers to receive communications from NTO via Marketing Cloud, NTO needs to ensure
that the email address from Marketing Cloud is activated. To do this, NTO can use the source
priority order in activations, which allows them to rank the data sources in order of preference for
activation. By placing Marketing Cloud higher than Service Cloud in the source priority order, NTO
can make sure that the email address from Marketing Cloud is delivered to the activation target,
such as an email campaign or a journey. This way, NTO can respect Rachel’s preference and
deliver a better customer experience.
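A minimal sketch of the source priority idea, assuming a hypothetical two-source ranking, is shown below.
```python
# NTO's preferred ranking of data sources for activation (illustrative).
priority = ["Marketing Cloud", "Service Cloud"]

# Hypothetical email contact points for the same unified individual.
emails_by_source = {
    "Service Cloud": "rachel@work.example.com",
    "Marketing Cloud": "rachel@home.example.com",
}

# Pick the contact point from the highest-ranked source that has one.
activated = next(
    (emails_by_source[src] for src in priority if emails_by_source.get(src)),
    None,
)
print(activated)  # rachel@home.example.com, the Marketing Cloud value
```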
58. A customer wants to create segments of users based on their Customer Lifetime Value.
However, the source data that will be brought into Data Cloud does not include that key
performance indicator (KPI).
Which sequence of steps should the consultant follow to achieve this requirement?
Ingest Data > Map Data to Data Model > Create Calculated Insight > Use in Segmentationcorrect
Create Calculated Insight > Map Data to Data Model> Ingest Data > Use in Segmentation
Create Calculated Insight > Ingest Data > Map Data to Data Model> Use in Segmentation
Ingest Data > Create Calculated Insight > Map Data to Data Model > Use in Segmentation
Question was not answered
Explanation:
To create segments of users based on their Customer Lifetime Value (CLV), the sequence of steps
that the consultant should follow is Ingest Data > Map Data to Data Model > Create Calculated
Insight > Use in Segmentation. This is because the first step is to ingest the source data into Data
Cloud using data streams1. The second step is to map the source data to the data model, which
defines the structure and attributes of the data2. The third step is to create a calculated insight,
which is a derived attribute that is computed based on the source or unified data3. In this case, the
calculated insight would be the CLV, which can be calculated using a formula or a query based on
the sales order data4. The fourth step is to use the calculated insight in segmentation, which is the
process of creating groups of individuals or entities based on their attributes and behaviors. By using
the CLV calculated insight, the consultant can segment the users by their predicted revenue from the
lifespan of their relationship with the brand. The other options are incorrect because they do not
follow the correct sequence of steps to achieve the requirement. Option B is incorrect because it is
not possible to create a calculated insight before ingesting and mapping the data, as the calculated
insight depends on the data model objects3. Option C is incorrect because it is not possible to create
a calculated insight before mapping the data, as the calculated insight depends on the data model
objects3. Option D is incorrect because it is not recommended to create a calculated insight before
mapping the data, as the calculated insight may not reflect the correct data model structure and
attributes3.
Reference: Data Streams Overview, Data Model Objects Overview, Calculated Insights Overview,
Calculating Customer Lifetime Value (CLV) With Salesforce, [Segmentation Overview]
59. During discovery, which feature should a consultant highlight for a customer who has
multiple data sources and needs to match and reconcile data about individuals into a single
unified profile?
A. Data Cleansing
B. Harmonization
C. Data Consolidation
D. Identity Resolutioncorrect
Explanation:
Identity resolution is the feature that allows Data Cloud to match and reconcile data about individuals
from multiple data sources into a single unified profile. Identity resolution uses rulesets to define how
source profiles are matched and consolidated based on common attributes, such as name, email,
phone, or party identifier. Identity resolution enables Data Cloud to create a 360-degree view of each
customer across different data sources and systems12. The other options are not the best features
to highlight for this customer need because:
A. Data cleansing is the process of detecting and correcting errors or inconsistencies in data, such
as duplicates, missing values, or invalid formats. Data cleansing can improve the quality and
accuracy of data, but it does not match or reconcile data across different data sources3.
B. Harmonization is the process of standardizing and transforming data from different sources into a
common format and structure. Harmonization can enable data integration and interoperability, but it
does not match or reconcile data across different data sources4.
C. Data consolidation is the process of combining data from different sources into a single data set
or system. Data consolidation can reduce data redundancy and complexity, but it does not match or
reconcile data across different data sources5.
Reference: 1: Data and Identity in Data Cloud | Salesforce Trailhead, 2: Data Cloud Identity Resolution | Salesforce AI Research, 3: [Data Cleansing - Salesforce], 4: [Harmonization - Salesforce], 5: [Data Consolidation - Salesforce]
60. Northern Trail Outfitters (NTO) wants to send a promotional campaign for customers that
have purchased within the past 6 months. The consultant created a segment to meet this
requirement.
Now, NTO brings an additional requirement to suppress customers who have made
purchases within the last week.
Which feature should the consultant use to remove the recent customers from the segment?
Explanation:
The consultant should use B. Segmentation exclude rules to remove the recent customers. Segmentation exclude rules are
filters that can be applied to a segment to exclude records that meet certain criteria. The consultant
can use segmentation exclude rules to exclude customers who have made purchases within the last
week from the segment that contains customers who have purchased within the past 6 months. This
way, the segment will only include customers who are eligible for the promotional campaign.
The other options are not correct. Option A is incorrect because batch transforms are data
processing tasks that can be applied to data streams or data lake objects to modify or enrich the
data. Batch transforms are not used for segmentation or activation. Option C is incorrect because
related attributes are attributes that are derived from the relationships between data model objects.
Related attributes are not used for excluding records from a segment. Option D is incorrect because
streaming insights are derived attributes that are calculated at the time of data ingestion. Streaming
insights are not used for excluding records from a segment.
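The include-then-exclude logic can be sketched as follows, with hypothetical purchase dates; the real segment is configured declaratively in Data Cloud rather than in code.
```python
import pandas as pd

today = pd.Timestamp("2024-06-30")  # illustrative reference date
purchases = pd.DataFrame({
    "customer_id": ["c1", "c2", "c3"],
    "last_purchase": pd.to_datetime(["2024-06-28", "2024-03-15", "2023-11-01"]),
})

# Include rule: purchased within the past six months.
in_six_months = purchases["last_purchase"] >= today - pd.DateOffset(months=6)
# Exclude rule: purchased within the last week.
in_last_week = purchases["last_purchase"] >= today - pd.Timedelta(days=7)

# Include the 6-month buyers, then exclude anyone who bought in the last week.
segment = purchases[in_six_months & ~in_last_week]
print(segment["customer_id"].tolist())  # ['c2'] — c1 excluded, c3 outside window
```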