ACA Big Data Dumps Full
Single selection
1. Scenario: Jack is the administrator of project prj1. The project involves a large volume of
sensitive data, such as bank accounts and medical records. Jack wants to properly protect
the data. Which of the following statements is necessary?
a) set ProjectACL=true;
b) add accountprovider ram;
c) set ProjectProtection=true;
d) use prj1;
4. When we use the MaxCompute tunnel command to upload the log.txt file to the t_log
table, t_log is a partitioned table and the partitioning columns are (p1 string, p2 string).
Which of the following commands is correct?
a) tunnel upload log.txt t_log/p1="b1", p2="b2"
b) tunnel upload log.txt t_log/(p1="b1", p2="b2")
c) tunnel upload log.txt t_log/p1="b1"/p2="b2"
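For reference, the MaxCompute client documents the partition spec as comma-separated key=value pairs after a slash; a sketch in odpscmd, assuming a table t_log already exists with partition columns (p1, p2) and illustrative partition values:

```sql
-- upload a local file into one partition of t_log
tunnel upload log.txt t_log/p1="b1",p2="b2";
```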
5. A log table named log in MaxCompute is a partitioned table, and the partition key is dt. A
new partition is created daily to store the new data of that day. Now we have one
month's data, starting from dt='20180101' to dt='20180131', and we may use ________
to delete the data on 20180101.
a) delete from log where dt='20180101'
b) truncate table where dt='20180101'
c) drop partition log (dt='20180101')
d) alter table log drop partition(dt='20180101')
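As context for the options above, partitions in MaxCompute are removed with ALTER TABLE rather than DELETE or TRUNCATE; a minimal sketch:

```sql
-- drop the partition holding the data for 2018-01-01
alter table log drop if exists partition (dt='20180101');
```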
6. DataV is a powerful yet accessible data visualization tool, which features geographic
information systems allowing for rapid interpretation of data to understand
relationships, patterns, and trends. When a DataV screen is ready, it can be embedded into
the existing portal of the enterprise through ______.
a) URL after the release
b) URL in the preview
c) MD5 code obtained after the release
d) Jar package imported after the release
7. By integrating live dashboards, DataV can present and monitor business data
simultaneously. This data-driven approach enables well-organized data mining and
analysis, allowing the user to seize new opportunities that might otherwise remain
hidden. It supports a wide range of databases and data formats. Which of the following
options does DataV not support?
a) Alibaba Cloud's AnalyticDB, ApsaraDB
b) Static data in CSV and JSON formats
c) Oracle Database
d) MaxCompute Project
8. You want to understand more about how users browse your public website. For example,
you want to know which pages they visit prior to placing an order. You have a server farm
of 100 web servers hosting your website. Which is the most efficient way to gather the
logs from these web servers into a traditional Hadoop ecosystem?
a) Just copy them into HDFS using curl
b) Ingest the server web logs into HDFS using Apache Flume
c) Channel these clickstreams into Hadoop using Hadoop Streaming
d) Import all user clicks from your OLTP databases into Hadoop using Sqoop
9. Your company stores user profile records in an OLTP database. You want to join these
records with web server logs you have already ingested into the Hadoop file system.
What is the best way to obtain and ingest these user records?
a) Ingest with Hadoop streaming
b) Ingest using Hive
c) Ingest with sqoop import
d) Ingest with Pig's LOAD command
My Answer: Other file says B. Another file says B.
Correct Answer: B or C? Comment [1]: Answer: C
10. You are working on a project where you need to chain together MapReduce and Hive jobs.
You also need the ability to use forks, decision points, and path joins. Which ecosystem
project should you use to perform these actions?
a) Apache HUE
b) Apache Zookeeper
c) Apache Oozie
d) Apache Spark
Multiple selections
1. In DataWorks, we can configure alert policies to monitor periodically scheduled tasks, so that alerts
are issued in a timely manner. Currently DataWorks supports ________ alerts.
(Number of correct answers: 2)
a) Email
b) Text message
c) Telephone
d) Aliwangwang
3. MaxCompute SQL is suitable for processing massive data with low real-time requirements, and
employs a syntax similar to that of SQL. The efficiency of data queries can be improved by
creating proper indexes in the table.
True
False
1 .Function Studio is a web project coding and development tool independently developed by the
Alibaba Group for function development scenarios. It is an important component of DataWorks.
Function Studio supports several programming languages and platform-based function development
scenarios except for ______ .
A. Real-time computing
B. Python
C. Java
D. Scala
My Answer: D
2 .A business flow in DataWorks integrates different node task types by business type; such a structure
facilitates business code development. Which of the following descriptions about the node
types is INCORRECT?
A. A zero-load node is a control node that does not generate any data. The virtual node is generally used
as the root node for planning the overall node workflow.
B. An ODPS SQL task allows you to edit and maintain the SQL code on the Web, and easily implement
code runs, debug, and collaboration.
C. The PyODPS node in DataWorks can be integrated with MaxCompute Python SDK. You can edit the
Python code to operate MaxCompute on a PyODPS node in DataWorks.
D. The SHELL node supports standard SHELL syntax and the interactive syntax. The SHELL task can run on
the default resource group.
My Answer: A. Other file says B. Another file says B. Siddesh corrected file C.
3 .Apache Spark included in Alibaba E-MapReduce(EMR) is a fast and general-purpose cluster computing
system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports
general execution graphs. It also supports a rich set of higher-level tools. Which of the following tools
is not included in Spark?
D. TensorFlow for AI
Reason: Tensorflow is a different sdk
4 .DataWorks provides two billing methods: Pay-As-You-Go (post-payment) and subscription (pre-
payment). When DataWorks is activated in pay-as-you-go mode, Which of the following billing items will
not apply?
My Answer: C
5 .Users can use major BI tools, such as Tableau and FineReport, to easily connect to MaxCompute
projects, and perform BI analysis or ad hoc queries. The quick query feature in MaxCompute, called
_________, allows you to provide services by encapsulating project table data in APIs, supporting
diverse application scenarios without data migration.
A. Lightning
B. MaxCompute Manager
C. Tunnel
D. Labelsecurity
6 .If a MySQL database contains 100 tables, and Jack wants to migrate all those tables to MaxCompute
using DataWorks Data Integration, the conventional method would require him to configure 100 data
synchronization tasks. With _______ feature in DataWorks, he can upload all tables at the same time.
7 .Machine Learning Platform for Artificial Intelligence (PAI) node is one of the node types in DataWorks
business flow. It is used to call tasks created on PAI and schedule production activities based on the
node configuration. PAI nodes can be added to DataWorks only _________ .
8 .In a scenario where a large enterprise plans to use MaxCompute to process and analyze its data, tens
of thousands of tables and thousands of tasks are expected for this project, and a project team of 40
members is responsible for the project construction and O&M. From the perspective of engineering,
which of the following can considerably reduce the cost of project construction and management?
B. Use DataWorks
C. Use Eclipse
My Answer: B
9 .AliOrg Company plans to migrate their data with virtually no downtime. They want all data
changes that occur to the source database during the migration to be continuously replicated to the
target, allowing the source database to remain fully operational during the migration process. After the
database migration is completed, the target database will remain synchronized with the source for as
long as you choose, allowing you to switch over the database at a convenient time. Which of the
following Alibaba products is the right choice for you to do it:
A. Log Service
C. Message Service
D. CloudMonitor
My Answer: B
10 .There are three types of node instances in an E-MapReduce cluster: master, core, and _____ .
A. task
B. zero-load
C. gateway
D. agent
11 .A dataset includes the following items (time, region, sales amount). If you want to present the
information above in a chart, ______ is applicable.
A. Bubble Chart
B. Tree Chart
C. Pie Chart
D. Radar Chart
12 .Alibaba Cloud Quick BI reporting tools support a variety of data sources, facilitating users to analyze
and present their data from different data sources. ______ is not supported as a data source yet.
D. MySQL RDS
Reason: Big data: Volume + variety + velocity. API data has velocity and variety. But does it have
high volume? Yes, an API can handle a large amount of data.
13 .DataV is a powerful yet accessible data visualization tool, which features geographic information
systems allowing for rapid interpretation of data to understand relationships, patterns, and trends.
When a DataV screen is ready, it can embed works to the existing portal of the enterprise through
______.
D. Stored in ZooKeeper
15 ._______ instances in E-MapReduce are responsible for computing and can quickly add computing
power to a cluster. They can also scale up and down at any time without impacting the operations of the
cluster.
A. Task
B. Gateway
C. Master
D. Core
My Answer: A
16 .Your company stores user profile records in an OLTP databases. You want to join these records with
web server logs you have already ingested into the Hadoop file system. What is the best way to obtain
and ingest these user records?
My Answer: C.
My Answer: Other file says B. Siddesh corrected file B.
Correct Answer: B or C? Comment [14]: Ans: C
17 .You are working on a project where you need to chain together MapReduce, Hive jobs. You also
need the ability to use forks, decision points, and path joins. Which ecosystem project should you use to
perform these actions?
A. Spark
B. HUE
C. Zookeeper
D. Oozie
18 .Which node type in DataWorks can edit the Python code to operate data in MaxCompute?
A. PyODPS
B. ODPS MR Node
My Answer: A
19 .DataService Studio in DataWorks aims to build a data service bus to help enterprises centrally
manage private and public APIs. DataService Studio allows you to quickly create APIs based on data
tables and register existing APIs with the DataService Studio platform for centralized management and
release. Which of the following descriptions about DataService Studio in DataWorks is INCORRECT?
A. DataService Studio is connected to API Gateway. Users can deploy APIs to API Gateway with one-
click.
B. DataService Studio adopts the serverless architecture. All you need to care about is the query logic of APIs,
instead of the infrastructure such as the running environment.
C. To meet the personalized query requirements of advanced users, DataService Studio provides the
custom Python script mode to allow you to compile the API query by yourself. It also supports multi-table
association, complex query conditions, and aggregate functions.
D. Users can deploy any APIs created and registered in DataService Studio to API Gateway for
management, such as API authorization and authentication, traffic control, and metering.
My Answer: C
20 .MaxCompute Tunnel provides high-concurrency data upload and download services. Users can use
the Tunnel service to upload or download data to MaxCompute. Which of the following descriptions
about Tunnel is NOT correct:
B. MaxCompute provides two data import and export methods: using Tunnel operations on the console
directly, or using the Tunnel SDK written in Java
C. If data fails to be uploaded, use the restore command to restore the upload from where it was
interrupted
D. Tunnel commands are mainly used to upload or download data. They provide the following
functions: upload, download, resume, show, purge, etc.
21 .Which of the following is not proper for granting the permission on an L4 MaxCompute table to a
user? (L4 is a level in MaxCompute label-based security (LabelSecurity), a mandatory access
control (MAC) policy at the project space level. It allows project administrators to control
user access to column-level sensitive data with improved flexibility.)
A. If no permissions have been granted to the user and the user does not belong to the project, add the
user to the project. The user does not have any permissions before they are added to the project.
C. If the user manages resources that have labels, such as datasheets and packages with datasheets,
grant label permissions to the user.
My Answer: D. Other file says A. Another file says A. Siddesh corrected file A.
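As background, label-based grants in MaxCompute follow the form below; a sketch with a hypothetical table t1 and user alice, assuming LabelSecurity is enabled in the project:

```sql
-- allow alice to read columns of t1 labeled up to level 4, for 30 days
GRANT LABEL 4 ON TABLE t1 TO USER aliyun$alice@aliyun.com WITH EXP 30;
```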
22 .MaxCompute supports two kinds of charging methods: Pay-As-You-Go and Subscription (CU cost).
Pay-As-You-Go means each task is measured according to the input size by job cost. In this charging
method the billing items do not include charges due to ______.
A. Data upload
B. Data download
C. Computing
D. Storage
Reason: MaxCompute counts and charges for storage, computation, and download operations. This
topic describes how to select the billing method and preliminarily estimate calculation and storage
costs for MaxCompute. The billing methods include pay-as-you-go and subscription.
23 .MaxCompute is a general purpose, fully managed, multi-tenancy data processing platform for large-
scale data warehousing, and it is mainly used for storage and computing of batch structured data. Which
of the following is not a use case for MaxCompute?
A. Order management
B. Data Warehouse
D. User profile
My Answer: A. Other file says B. Another file says B. Siddesh corrected file B.
Correct Answer: A or B? Comment [19]: Answer: B
24 .Tom is the administrator of a project prj1 in MaxCompute. The project involves a large volume of
sensitive data such as user IDs and shopping records, and many data mining algorithms with proprietary
intellectual property rights. Tom wants to properly protect these sensitive data and algorithms. To be
specific, project users can only access the data within the project, all data flows only within the project.
What operation should he perform?
A. Use ACL authorization to set the status to read-only for all users
B. Use Policy authorization to set the status to read-only for all users
D. Enable the data protection mechanism in the project, using set ProjectProtection=true;
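The operation named in option D can be sketched in the odpscmd client as follows (the project name is taken from the question):

```sql
use prj1;                      -- switch to the project to be protected
set ProjectProtection=true;    -- data may then flow only within the project
```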
25 .There are multiple connection clients for MaxCompute, which of the following is the easiest way to
configure workflow and scheduling for MaxCompute tasks?
A. Use DataWorks
My Answer: A. Other file says B. Another file say B. Siddesh corrected file C.
26 .In MaxCompute, you can use the Tunnel command line for data upload and download. Which of the
following descriptions of Tunnel commands is NOT correct:
A. Upload: Supports file or directory (level-one) uploading. Data can only be uploaded to a single table
or table partition each time.
B. Download: You can only download data to a single file. Only data in one table or partition can be
downloaded to one file each time. For partitioned tables, the source partition must be specified.
C. Resume: If an error occurs due to the network or the Tunnel service, you can resume transmission of
the file or directory after interruption.
D. Purge: Clears the table directory. By default, use this command to clear information of the last three
days.
27 .Scenario: Jack is the administrator of project prj1. A new team member, Alice (already has an Alibaba
Cloud account [email protected]), applies for joining this project with the following permissions: view
table lists, submit jobs, and create tables. Which of the following SQL statements is useless:
A. use prj1;
D. flush privileges;
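As context for the statements above, a typical sequence for adding a user and granting these permissions via ACL might look like the sketch below, with a hypothetical user alice@aliyun.com:

```sql
use prj1;
add user aliyun$alice@aliyun.com;
grant List, CreateTable, CreateInstance on project prj1
    to user aliyun$alice@aliyun.com;
```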
28 .Spark is an open-source framework that functions on the service level to support data processing
and analysis operations. Equipped with unified computing resources and data set permissions, Spark on
MaxCompute allows you to submit and run jobs while using your preferred development methods.
Which of the following descriptions about Spark on MaxCompute is NOT correct:
C. Similar to MaxCompute SQL and MaxCompute MapReduce, Spark on MaxCompute runs in the unified
computing resources activated for MaxCompute projects.
D. Spark on MaxCompute has a separate permission system, which will not allow users to query data
without additional permission modifications
My Answer: B
29 .In MaxCompute command line, if you want to view all tables in a project, you can execute command:
______.
A. show tables;
B. use tables;
C. desc tables;
D. select tables;
30 .When odpscmd is used to connect to a project in MaxCompute, the command ______ can be
executed to view the size of the space occupied by table table_a.
B. size table_a;
C. desc table_a;
My Answer: Other file says B. Another file says B. Siddesh corrected file C. Prem-B
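For reference, both commands from the options can be tried in the odpscmd client; a sketch assuming table table_a exists in the current project:

```sql
show tables;     -- list all tables in the current project
desc table_a;    -- schema details, including the physical size of table_a
```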
True/False
31 .Data Migration Unit (DMU) is used to measure the amount of resources consumed by data
integration, including CPU, memory, and network. One DMU represents the minimum amount of
resources used for a data synchronization task.
True
False
My Answer: True
32 .DataWorks can be used to create all types of tasks and configure scheduling cycles as needed. The
supported granularity levels of scheduling cycles include days, weeks, months, hours, minutes and
seconds.
True
False
My Answer: False
33 .If a task node of DataWorks is deleted from the recycle bin, it can still be restored.
True
False
34 .If the DataWorks (MaxCompute) tables in your request belong to two owners, Data
Guard (a DataWorks component) automatically splits your request into two by table owner.
True
False
My Answer: B. Other file says True. Another file says True. Siddesh corrected file True.
35 .The FTP data source in DataWorks allows you to read/write data to FTP, and supports configuring
synchronization tasks in wizard and script mode.
True
False
My Answer: True
36 .In each release of E-MapReduce, the software and software version are flexible. You can select
multiple software versions.
True
False
My Answer: False. Other file says True. Another file says True. Siddesh corrected file False.
Correct Answer: True or False? Comment [28]: Not sure, but I think the answer is True.
37 .Alibaba Cloud Elastic MapReduce (E-MapReduce) is a big data processing solution to quickly process
huge amounts of data. Based on open source Apache Hadoop and Apache Spark, E-MapReduce flexibly
manages your big data use cases such as trend analysis, data warehousing, and analysis of continuously
streaming data.
True
False
My Answer: True
38 .An enterprise uses Alibaba Cloud MaxCompute for storage of service orders, system logs and
management data. Because the security levels of the data are different, multiple Alibaba
Cloud accounts need to be registered for data management.
True
False
My Answer: False
39 .JindoFS in E-MapReduce provided by SmartData uses OSS as the storage back end.
True
False
My Answer: True
40 .In DataWorks table permission system, you can revoke permissions only on the fields whose security
level is higher than the security level of your account.
True
False
My Answer: True
41 .Project is an important concept in MaxCompute. A user can create multiple projects, and each object
belongs to a certain project.
True
False
My Answer: True
42 .Assume that Task 1 is configured to run at 02:00 each day. In this case, the scheduling system
automatically generates a snapshot at the time predefined by the periodic node task at 23:30 each day.
That is, the instance of Task 1 will run at 02:00 the next day. If the system detects the upstream task is
complete, the system automatically runs the Task 1 instance at 02:00 the next day.
True
False
My Answer: True
43 .In MaxCompute, if an error occurs in Tunnel transmission due to the network or Tunnel service, the
user can resume the last upload operation through the command tunnel resume;.
True
False
My Answer: True
44 .A company originally handled its local data services through Java programs. The local data has
been migrated to MaxCompute on the cloud; now the data can be accessed by modifying the Java
code and using the Java APIs provided by MaxCompute.
True
False
My Answer: True
45 .MaxCompute takes the Project as the charged unit. The bill is charged according to three aspects:
the usage of storage, computing resources, and data downloads. You pay for compute and
storage resources by the day, with no long-term commitments.
True
False
My Answer: True
46 .There are various methods for accessing MaxCompute, for example, through the management
console, client command line, and Java API. The command line tool odpscmd can be used to create,
operate, or delete a table in a project.
True
False
My Answer: True
47 .A start-up company wants to use Alibaba Cloud MaxCompute to provide product recommendation
services for its users. However, the company does not have many users at the initial stage, and the
charge for MaxCompute is higher than that of ApsaraDB RDS, so the company should be recommended
to use the MaxCompute service only after the number of its users increases to a certain size.
True
False
My Answer: Other file says True. Another file says True. Siddesh corrected file says False. Prem-True
True
False
My Answer: True
49 .MaxCompute SQL is suitable for processing massive data with low real-time requirements, and
employs a syntax similar to that of SQL. The efficiency of data queries can be improved by creating
proper indexes in the table.
True
False
True
False
My Answer: True
51 .Which of the following Hadoop ecosystem components can you choose to set up a streaming log
analysis system? (Number of correct answers: 3)
A. Apache Flume
B. Apache Kafka
C. Apache Spark
D. Apache Lucene
My Answer: ABC. Other file says ACD. Another file says ACD.
52 .Distributed file systems like GFS and Hadoop HDFS are designed to have much larger block (or
chunk) sizes, such as 64MB or 128MB. Which of the following descriptions are correct? (Number of
correct answers: 4)
A. It reduces clients' need to interact with the master, because reads and writes on the same block (or
chunk) require only one initial request to the master for block location information
B. Since, on a large block (or chunk), a client is more likely to perform many operations, it
can reduce network overhead by keeping a persistent TCP connection to the metadata server over an
extended period of time
D. The servers storing those blocks may become hot spots if many clients are accessing the same small
files
E. If necessary to support even larger file systems, the cost of adding extra memory to the metadata
server is a big price
Correct Answer: ABCD or ABCDE? Comment [32]: Not sure about this
53 .MaxCompute can coordinate multiple users to operate one project through ACL authorization. The
objects that can be authorized by ACL include ______. (Number of correct answers: 3)
A. Project
B. Table
C. Resource
D. Procedure
E. Job
A. The data source configuration in the project management is required to add data source
B. Some of the columns in source tables can be extracted to create a mapping relationship between
fields, and constants or variables can't be added
C. For the extraction of source data, "where" filtering clause can be referenced as the criteria of
incremental synchronization
D. Clean-up rules can be set to clear or preserve existing data before data write
55 .The data development mode in DataWorks has been upgraded to the three-level structure
comprising _____, _____, and ______. (Number of correct answers: 3)
A. Project
B. Solution
C. Business flow
D. Directory
My Answer: A,B,C
56 .In DataWorks, we can configure alert policies to monitor periodically scheduled tasks, so that an
alert will be issued timely. Currently DataWorks supports ________ alerts. (Number of correct answers:
2)
A. Email
B. Text message
C. Telephone
D. Aliwangwang
My Answer: A,B
A. By Minute
B. By Hour
C. By Day
D. By Week
E. By Second
My Answer: A,B,C,D
58 .MaxCompute is a fast and fully-managed TB/PB-level data warehousing solution provided by Alibaba
Cloud. Which of the following product features are correct? ______ (Number of correct answers: 3)
A. Distributed architecture
My Answer: A,B,E
A. Files
D. Archive: Recognize the compression type according to the postfix in the resource name
E. ACL Policy
A. Access Username
B. Access Key ID
D. Access Password
My Answer: B,C
61. DataWorks uses MaxCompute as the core computing and storage engine to provide massive offline
data processing, analysis, and mining capabilities. It introduces both simple-mode and standard-mode
workspaces. Which of the following descriptions about DataWorks Workspace and MaxCompute Project
is INCORRECT?
A. A simple mode refers to a DataWorks Workspace that corresponds to a MaxCompute Project and
cannot set up a development and Production Environment
B. The advantage of the simple mode is fast iteration: code takes effect as soon as it is submitted,
without publishing. The risk of the simple mode is that the development role is privileged enough to
delete the tables under this project, so there is a risk to table permissions.
C. Standard mode refers to a DataWorks project corresponding to two MaxCompute projects, which can
be set up as dual development and production environments. This improves code development
standards and allows table permissions to be strictly controlled: operations on tables in the Production
Environment are prohibited, and the data security of production tables is guaranteed.
D. All Task edits can be performed in the Development Environment, and the Production Environment
Code can also be directly modified
My Answer: B
62. MaxCompute provides SQL and MapReduce for calculation and analysis service. Which of the
following descriptions about MaxCompute and SQL is NOT correct:
A. In MaxCompute, data is stored in forms of tables, MaxCompute provides a SQL query function for the
external interface
B. You can operate MaxCompute just like traditional database software, but it is worth mentioning that
MaxCompute SQL does not support transactions, indexes, and Update/Delete operations
C. MaxCompute SQL syntax differs from Oracle and MySQL, so the user cannot migrate SQL statements
of other databases into MaxCompute seamlessly
D. MaxCompute SQL can complete queries in minutes or even seconds, and it can return results in
milliseconds without using another process engine.
My Answer: D
63. DataWorks provides scheduling capabilities, including time-based or dependency-based task trigger
functions, to run tens of millions of tasks accurately and on time each day based on DAG
relationships. Which of the following descriptions about scheduling and dependency in DataWorks is
INCORRECT?
A. Users can configure an upstream dependency for a task. In this way, even if the current task instance
reaches the scheduled time, the task only runs after the upstream task instance is completed.
B. If no upstream task is configured, then by default the current task is triggered by the project. As a
result, the default upstream task of the current task is project_start in the scheduling system. By default,
a project_start task is created as the root task for each project.
C. If the task is submitted after 23:30, the scheduling system automatically generates periodic instances
starting from the second day and runs them on time.
D. The system automatically generates an instance for the task at each time point according to the
scheduling attribute configuration and periodically runs the task from the second day only after a task is
submitted to the scheduling system.
My Answer: D
64. E-MapReduce simplifies big data processing, making it easy, fast, scalable and cost-effective for you
to provision distributed Hadoop clusters and process your data. This helps you to streamline your
business through better decisions based on massive data analysis completed in real time. Which of the
following descriptions about E-MR is NOT true?
A. E-MapReduce allows you to simply select the required ECS model (CPU or memory), disks, and the
required software for automatic deployment
C. Seamless integration with other Alibaba Cloud products to be used as the input source or output
destination of Hadoop/Spark calculation engine
D. It supports the Pay-As-You-Go payment method, which means that the cost of each task is measured
according to the input size
My Answer: B
65. When a local file is uploaded to Quick BI for presentation, the data is stored in ______.
C. AnalyticDB
My Answer: B. Prem-D
66. Which HDFS daemon or service manages all the metadata stored in HDFS?
A. secondary namenode
B. namenode
C. datanode
D. node manager
My Answer: B. Prem-C
67. Which of the following descriptions about MaxCompute security is NOT correct:
A. MaxCompute supports two account systems: the Alibaba Cloud account system and RAM user system
B. MaxCompute recognizes RAM users but cannot recognize RAM permissions. That is, you can add RAM
users under your Alibaba Cloud account to a MaxCompute project. However, MaxCompute does not
consider the RAM permission definitions when it verifies the permissions of RAM users.
C. LabelSecurity is a workspace-level mandatory access control (MAC) policy that enables workspace
administrators to control user access to row-level sensitive data more flexibly.
D. MaxCompute users can share data and resources, such as tables and functions, among workspaces by
using packages.
My Answer: B
A. The syntax of ODPS SQL is similar to SQL. It can be considered a subset of standard SQL
B. MaxCompute SQL is not equivalent to a database; it lacks database characteristics in many
aspects, such as transactions, primary key constraints, and indexes
My Answer: D. Prem-B
69. By default, the resource group in DataWorks provides you 50 slots and each DMU occupies 2 slots.
This means the default resource group supports 25 DMUs at the same time.
True
False
My Answer:
70. JindoFS is a cloud-native file system that combines the advantages of OSS and local storage. JindoFS
is also the next-generation storage system that provides efficient and reliable storage services for cloud
computing. To use JindoFS, select the related services when creating an E-MapReduce cluster.
True
False
My Answer: A
71. A partition table can be created through the following statement in MaxCompute SQL:
name string,
number string)
partitioned by ( department string);
True
False
My Answer: A. Prem-False
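The statement above follows the standard partitioned-table DDL, although its CREATE TABLE header is missing; a complete sketch with a hypothetical table name t_emp:

```sql
create table if not exists t_emp (
    name   string,
    number string
)
partitioned by (department string);
```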
72. E-MapReduce (EMR) Auto Scaling feature is designed to reduce costs and improve execution
efficiency. Which of the following descriptions about EMR Auto Scaling are correct? (Number of correct
answers: 3)
A. Auto Scaling only supports scaling in and scaling out a cluster by adding or removing task nodes.
B. Scale by Time is recommended as the rule type if you can specify the time to scale a cluster.
C. Scale by Rule is recommended as the rule type if you cannot specify the time to scale a cluster and
need to add and remove computing resources based on the specified YARN metrics.
My Answer: A,B,D
73. DataWorks App Studio is a tool designed to facilitate your data product development. It comes with
a rich set of frontend components that you can drag and drop to easily and quickly build frontend apps.
With App Studio, you do not need to download and install a local integrated development environment
(IDE) or configure and maintain environment variables. Instead, you can use a browser to write, run, and
debug apps and enjoy the same coding experience as that in a local IDE. App Studio also allows you to
publish apps online. Which of the following descriptions about APP Studio in DataWorks is CORRECT?
(Number of correct answers: 3)
A. App Studio comes with all breakpoint types and operations of a local IDE. It supports thread switching
and filtering, variable viewing and watching, remote debugging, and hot code replacement.
B. You can directly access the runtime environment, which is currently built based on MacOS as the base
image.
C. You and your team members can use App Studio to share the development environment for
collaborative coding.
D. App Studio supports real-time collaborative coding. Multiple collaborators of a team can develop and
write code at the same time in the same project, and view changes made by other collaborators in real
time. This feature helps avoid the hassle of synchronizing code and merging branches and significantly
improve the development efficiency.
E. App Studio is included in the Basic edition of DataWorks
My Answer: A,B,C
74. MaxCompute Graph is a processing framework designed for iterative graph computing.
MaxCompute Graph jobs use graphs to build models. Graphs are composed of vertices and edges.
Which of the following operations can MaxCompute support? (Number of correct answers: 3)
B. Add/delete a vertex.
C. Add/delete an edge.
D. When editing a vertex and an edge, you don't have to maintain their relationship.
75. There are various methods for connecting to and using MaxCompute. Which of the following options
have lower limits on the size of uploaded files? ______. (Number of correct answers: 2)
A. DataWorks
B. IntelliJ IDEA
C. MaxCompute Tunnel
D. Alibaba DTS
My Answer: A,D
Correct Answer: AD or BD? Comment [49]: Not sure about this
MaxCompute SQL Quiz
100%
1.Question 1
The maximum SQL length allowed in MaxCompute varies with the environment resources that are
applied.
MaxCompute SQL is suitable for massive data (GB, TB, EB level), off-line batch calculation scenarios.
2.Question 2
Currently, MaxCompute supports specifying up to 6 small tables in a mapjoin; otherwise, syntax errors
are reported. Records for a single small table are limited to no more than 10,000.
True
False
3.Question 3
True
False
4.Question 4
When you join multiple tables, only the leftmost table is allowed to be a mapjoin table.
True
False
5.Question 5
Data type inconsistency is often encountered in business data processing. To keep data types
consistent, data processing systems involve data type conversion. If MaxCompute SQL is used for data
processing, which of the following conversions cannot be achieved?
Bigint to String
String to Boolean
String to Bigint
Datetime to String
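The conversions above can be sketched with a standard SQL CAST. Below is a minimal illustration using Python's built-in sqlite3 as a stand-in engine; MaxCompute's type system differs in the details (in MaxCompute, string-to-boolean is the cast that is not supported), so this only shows the common bigint/string round trip.

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
# bigint <-> string conversions are routine CASTs in most SQL dialects
row = cur.execute("SELECT CAST(42 AS TEXT), CAST('42' AS INTEGER)").fetchone()
print(row)  # ('42', 42)
```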
6.Question 6
FALSE OR TRUE = TRUE
TRUE OR NULL = TRUE
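These truth values follow SQL three-valued logic: OR yields TRUE whenever one side is TRUE, even if the other side is NULL. A quick check using Python's sqlite3, where 1/0 stand in for TRUE/FALSE (the same rules apply in MaxCompute SQL):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
# FALSE OR TRUE -> TRUE; TRUE OR NULL -> TRUE; but TRUE AND NULL stays NULL
row = cur.execute("SELECT (0 OR 1), (1 OR NULL), (1 AND NULL)").fetchone()
print(row)  # (1, 1, None)
```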
7.Question 7
Suppose table t_dml only has one field named as id (type is string), which MaxCompute SQL query
cannot be executed correctly?
Correct
8.Question 8
Which UNION ALL statements in MaxCompute SQL are correct? (Number of Correct Answers: 3)
Combines two or multiple data sets returned by a SELECT operation into one data set. If the result
contains duplicated rows, all rows that meet the conditions are returned, and deduplication of
duplicated rows is not applied.
MaxCompute does not support applying UNION to two top-level query results, but you can apply it to two subquery results.
The columns of each subquery corresponding to the UNION ALL operation must be listed explicitly; * is not
supported.
The number, names, and types of queried columns corresponding to the UNION ALL/UNION operation
must be consistent.
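The deduplication rule in the first option is easy to verify: UNION ALL returns every qualifying row, duplicates included, while plain UNION deduplicates. A small sketch using Python's sqlite3 as a stand-in engine (MaxCompute additionally restricts where UNION may appear, as the other options state):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE a(x INTEGER)")
cur.execute("CREATE TABLE b(x INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO b VALUES (?)", [(2,), (3,)])

# UNION ALL keeps the duplicate 2; UNION removes it
all_rows = cur.execute("SELECT x FROM a UNION ALL SELECT x FROM b ORDER BY x").fetchall()
dedup = cur.execute("SELECT x FROM a UNION SELECT x FROM b ORDER BY x").fetchall()
print(all_rows)  # [(1,), (2,), (2,), (3,)]
print(dedup)     # [(1,), (2,), (3,)]
```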
9.Question 9
10.Question 10
Which of the following statements of mapjoin Hint application of MaxCompute SQL are correct ?
(Number of Correct Answers: 3)
When a large table joins one or multiple small tables, you can use MapJoin, which performs much faster
than regular Joins
When MapJoin references a small table or subquery, an alias should be used.
MaxCompute SQL does not support the use of complex Join conditions such as unequal expressions, or
logic in normal Join's on conditions, but in MapJoin it can
When multiple tables are joined, the two left tables can be MapJoin tables at the same time.
1.Question 1
True
False
2.Question 2
Java UDF supports Bigint, String, Double, Boolean, ARRAY, MAP, and STRUCT
True
False
3.Question 3
A UDF outputs one return value at a time. A UDTF can output more than two records at one time.
True
False
4.Question 4
For UDAF (User Defined Aggregation Function), input and output is a many-to-one relationship.
True
False
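The one-to-one (UDF) versus many-to-one (UDAF) contrast in questions 3 and 4 can be sketched with Python's sqlite3, which lets you register both kinds of function; this is only an analogy for how MaxCompute's Java UDF/UDAF interfaces behave, with hypothetical function names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Scalar UDF: one output value per input row (one-to-one)
conn.create_function("my_upper", 1, lambda s: s.upper())

# Aggregate function (UDAF-style): many input rows -> one output value
class SumLen:
    def __init__(self):
        self.total = 0
    def step(self, s):       # called once per input row
        self.total += len(s)
    def finalize(self):      # called once per group
        return self.total

conn.create_aggregate("sum_len", 1, SumLen)

cur = conn.cursor()
cur.execute("CREATE TABLE t(name TEXT)")
cur.executemany("INSERT INTO t VALUES (?)", [("ab",), ("cde",)])
per_row = cur.execute("SELECT my_upper(name) FROM t").fetchall()
one_value = cur.execute("SELECT sum_len(name) FROM t").fetchone()
print(per_row)    # [('AB',), ('CDE',)]
print(one_value)  # (5,)
```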
5.Question 5
The NULL value in SQL is represented by a NULL reference in Java; therefore, ‘Java primitive type’ is not
allowed because it cannot represent a NULL value in SQL.
True
False
6.Question 6
UDF
UDAF
UDGF
UDTF
7.Question 7
Which correspondence is incorrect between MaxCompute data type and Java data type?
You need to specify the running data source when running a UDF locally.
UDF/UDAF/UDTF typically works on some columns of the table in the SELECT clause, and you need to
configure the MaxCompute project, table, and columns when running a local test.
A warehouse is built locally to store tables (including meta and data) or resources for executing UDFs
locally. The project name, table names, and sample data are organized under the warehouse directory in
order.
9.Question 9
Which of the following ways of UDTF usage in SQL are correct? (Number of Correct Answers: 3)
select user_udtf(col0, col1, col2) as (c0, c1) from (select * from my_table distribute by key sort by key) t;
select reduce_udtf(col0, col1, col2) as (c0, c1) from (select col0, col1, col2 from (select map_udtf(a0, a1,
a2, a3) as (col0, col1, col2) from my_table) t1 distribute by col0 sort by col0, col1) t2;
10.Question 10
Which of the following UDTF statements are correct? (Number of Correct Answers: 2)
It does not support the use with group by together in the same SELECT clause.
It supports the use with distribute by together in the same SELECT clause.
It does not support the use with sort by together in the same SELECT clause.
Data Visualization Quiz
TOTAL POINTS 10
1.Question 1
As an important Platform as a Service (PaaS) product in the Alibaba Cloud product portfolio, Alibaba Cloud
DataWorks offers its users a one-stop solution. Which of the following capabilities is not included in the
solution?
Data Integration
Data Management
Data Governance
2.Question 2
If today's date is 2019-03-11, what is the result if the Partition Expression is set to dt=$[yyyymmdd-1]
when creating rule configuration in Data Quality?
dt=20190310
dt=20190311
dt=20190312
dt=20190311-1
3.Question 3
Function Studio allows you to edit MaxCompute Java user-defined functions (UDFs) and to compile and
publish them to DataWorks with one click.
True
False
4.Question 4
Which of the following notification method is not supported by the Data Quality?
DingTalk
Email
WeChat
SMS
5.Question 5
Which of the following tasks is supported for being associated with Data Quality?
Scheduling Tasks
6.Question 6
What alarm levels are supported by DataWorks Data Quality Control component? (Number of correct
answers: 2)
7.Question 7
Function Studio in DataWorks supports UDF (user-defined function), ______ and ______
templates.
MapReduce function
Flink function
8.Question 8
The process of using Data Quality is to configure monitoring rules for existing tables. After you configure
a rule, what can be done to verify the rule?
Run a Trial
Test
Run A/B Test
9.Question 9
DataService Studio works together with _________ to provide a secure, stable, low-cost and easy-to-use
data sharing service.
10.Question 10
Which of the following products is one of the underlying computing engine layer of DataWorks?
MaxCompute
OSS
Polar DB
Quick BI Quiz
TOTAL POINTS 10
1.Question 1
Delete
Edit
Copy
Move
2.Question 2
Which local file type is not supported when using local files as the data source of Quick BI?
CSV
XLS
XLSX
TXT
3.Question 3
The exploration space is a dedicated storage area of Quick BI. It supports txt, CSV, Excel, and DataWorks.
True
False
4.Question 4
True
False
5.Question 5
In Quick BI, when a local Excel file containing multiple sheets is used as the data source, all sheets can be
uploaded at once.
True
False
6.Question 6
Which chart is suitable for comparing the sales situation of a commodity in various regions?
Gauge
Bar Chart
Card Chart
Scatter Chart
7.Question 7
Color Legend
X axis
Y axis
8.Question 8
Which of the following statements are correct for dashboard in QuickBI? (Number of Correct Answers:
3)
The dashboard supports two modes: standard dashboard and full-screen mode.
You can simply edit the dataset accordingly to meet the actual dashboard demands.
9.Question 9
10.Question 10
Making reports
Business exploration
Self-help data acquisition
Data synchronization
1.Question 1
Machine Learning Platform For AI provides end-to-end machine learning services, including data
processing, feature engineering, model training, model prediction, and model evaluation. Machine
Learning Platform For AI combines all of these services to make AI more accessible than ever.
True
False
2.Question 2
True
False
3.Question 3
The Read MaxCompute Table component is unaware of any modifications (such as add or remove a
column) made to a table that is already loaded to the component.
True
False
4.Question 4
Feature engineering includes feature derivation and scale change. The heart disease prediction project
uses the feature selection and data normalization components for feature engineering.
True
False
5.Question 5
The Read MaxCompute Table component enables you to read data from MaxCompute tables. To read a
table from another project that you are authorized to access, you can use the format of _________.
6.Question 6
The total service fee is equal to the billing fee of the component you use multiplied by the number of
computing hours. The computing hours are measured by using the formula ______
7.Question 7
A SQL Script Component supports a maximum of _____ input port(s) and one output port.
8.Question 8
XML
JSON
CSV
XLS
9.Question 9
Sampling data in PAI is generated in the weighted mode. The weight column must be of ____ or ____
type.
Double
Boolean
Int
String
10.Question 10
The processor layer of PAI is the infrastructure layer that consists of ______ and _____ clusters.
CPU
MPI
GPU
MapReduce
Start Self-Test - Stage I: Big Data Fundamentals
A. Stage I: Big Data Fundamentals - DataWorks
1. Which of these DataWorks roles has permission to deploy workflows but not to edit them?
a. Developer
b. Deployer
c. Visitor
d. Project Administrator
2. When using MaxCompute with DataWorks, it is possible to create and run multiple types of
jobs on top of MaxCompute, including MaxCompute SQL jobs, and Spark Jobs
a. True
b. False
4. Which of these statements about the project administrator role in DataWorks is correct?
a. When adding users to a DataWorks workspace, the project administrator can only add
RAM users under the current account
b. When adding users to a DataWorks workspace, the project administrator can add
other Alibaba Cloud accounts as users, but not RAM users under the current account
c. The project administrator does not have permission to edit workflows
d. The project administrator can add users to a workspace, but not remove them
6. One of the key features of DataWorks is its ability to create and run scheduled tasks that can
perform data import, processing, and analysis at regular intervals, without human intervention
a. True
b. False
7. You want to create a scheduled job in DataWorks to import data from a MySQL database into
MaxCompute once a day. Is this possible?
a. No, DataWorks doesn't support this
b. Yes, but you have to create a data synchronization job in DTS (Data Transmission
Service) first
c. Yes, you can do this directly using DataWorks Data Integration Feature
d. Yes, but this feature is not supported in the basic edition of DataWorks
8. Which of the user roles in DataWorks does not have permissions to alter anything within the
DataWorks workspace
a. OAM
b. Developer
c. Visitor
d. Project Administrator
10. Assuming you have chosen MaxCompute as the compute engine to use within your
DataWorks workspace, what types of tasks will you be able to run? (number of correct
answers: 3)
a. MaxCompute SQL (ODPS SQL)
b. Resource
c. MaxCompute MR (MapReduce)
d. Python (PyODPS)
11. Which of these are valid DataWorks user roles? (number of correct answers: 2)
a. Project administrator
b. Deployer
c. SecOps
d. Guest
12. Which of these are “compute engines” that DataWorks can work with? (number of correct
answers: 3)
a. MaxCompute
b. HBase
c. AnalyticDB
d. E-MapReduce (EMR)
13. Which of these are good reasons to choose standard mode over basic mode, when setting up a
new DataWorks Workspace? (number of correct answers: 2)
a. DataWorks workspace in standard mode can provide more fine-grained user
permissions because it is possible to separate developers from production
b. DataWorks workspaces in standard mode cost less
c. Workspace in standard mode allow developers to test code first before deploying it
into production
d. Code runs faster in standard mode
14. Which statements about DataWorks workflows are correct? (number of correct answers: 3)
a. Workflows can be scheduled to run at regular intervals (weekly, daily, hourly)
b. Workflows can be triggered manually or via an API call
c. Workflows can be edited or updated and then redeployed, if a change needs to be
made
d. A workflow can only be edited by the DataWorks user that created the workflow
15. You have created a MaxCompute project in Alibaba Cloud's Singapore region, but now need
to move it to Indonesia. Which are viable methods for migrating the project? (number of correct
answers: 2)
a. Use the DataWorks cross-project cloning feature to migrate your DataWorks
workspace content, and use data integration to move MaxCompute tables
b. Use DTS (Data Transmission Service)
c. Recreate your workflows and users in a new DataWorks workspace, then use data
integration to migrate your MaxCompute tables
d. Use the export project feature to move both DataWorks and MaxCompute content all at
once
16. Workflows in DataWorks have a type of node called a “Zero-load node” which does no work
but can be used to indicate the start of a workflow or to connect other nodes together.
a. True
b. False
17. You have created a DataWorks workspace and have added several users to the project (each of
which is associated with a RAM user). Which of these statements about the billing for your
DataWorks and MaxCompute usage is correct?
a. Each RAM user will be billed separately for the workflow they create and run
b. All bills will be charged to the Alibaba cloud account that created the DataWorks
workspace
c. You can turn on “split billing” to generate separate bills for each RAM user
d. You cannot add RAM users to a DataWorks workspace
18. In DataWorks, what is the difference between standard mode and basic mode?
a. Basic mode workspaces contain only one MaxCompute project, while standard mode
workspaces contain two (“development” and “production”)
b. Basic mode has fewer features than standard mode
c. Jobs in basic mode do not run as fast as jobs in standard mode
d. Only one user can join a basic mode DataWorks workspace
20. Which of these are best practices when creating a new workflow in DataWorks? (number of
correct answers: 3)
a. If you are in standard mode, run the workflow in the development environment to
test that it works, before deploying to production
b. Run your workflow using a subset of your data, to improve the speed at which it runs
and reduce costs, while still helping you catch errors
c. Avoid making workflows that contain more than 3-4 nodes
d. Implement an approvals process whereby one user develops workflows and another
is responsible for signing off on the workflow and deploying it into production
21. Which of these are techniques you can use to debug DataWorks workflows, if they fail to run?
(number of correct answers: 2)
a. Look at the logs for the failed node(s) in the workflow; they usually contain helpful
information
b. Step through your workflow one node at a time, and use an ad-hoc MaxCompute
SQL node to check the table(s) output by each step to ensure there are no data
quality issues
c. Buy a third-party debugging tool
d. Use the MaxCompute command-line tool to run your node instead of DataWorks
22. Which of these statements about MaxCompute is true? (number of correct answers: 2)
a. MaxCompute charges an up-front monthly fee for data storage
b. MaxCompute tables are designed to be appended to, but existing records cannot be
updated (SQL UPDATE and DELETE are not supported)
c. MaxCompute has its own SQL language dialect, called “ODPS SQL” or sometimes
“MaxCompute SQL”
d. MaxCompute is the same thing as Hadoop Hive
Start Self-Test - Stage II: Data Warehousing and Data
Processing
A. Stage II: Data Warehousing and Data Processing - SQL for
Beginners
1. Data manipulation language (DML) is a subset of SQL. Which of the following descriptions
match DML the best?
a. A programming language that is typically used in relational database or data stream
management system
b. A family of computer languages including commands permitting users to manipulate
data in database
c. A computer language used to create and modify the structure of database objects in a
database
d. A set of special commands that deal with the transactions within the database
2. Jason has two tables with data that are related to each other. He wants to combine these tables
to obtain more insight on the data. Which MySQL keyword does he need to use?
a. Drop
b. Group By
c. Insert
d. Distinct
e. Join
3. Data definition language (DDL) is a subset of SQL. Which of the following descriptions
match DDL the best?
a. A programming language that is typically used in relational database or data stream
management system
b. A family of computer languages including commands permitting users to manipulate data
in database
c. A computer language used to create and modify the structure of database objects in
a database
d. A set of special commands that deal with the transactions within the database
5. Which of the following statements would you use to obtain ALL records from a table named
“Employees” where the value of “FirstName” is “Peter”?
a. Select FirtName from employees where firtsname=”Peter”
b. Select * from employees where firtsname="Peter”
c. Select [ALL] employees where firtsname=”Peter”
d. Select * from employees where firtsname=”Peter”
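The intended answer hinges on SELECT * returning every column of every matching row. A runnable sketch with Python's sqlite3, using hypothetical rows based on the table named in the question:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE employees(FirstName TEXT, LastName TEXT)")
cur.executemany("INSERT INTO employees VALUES (?, ?)",
                [("Peter", "Pan"), ("Wendy", "Darling")])

# SELECT * returns all columns of every row matching the WHERE filter
rows = cur.execute("SELECT * FROM employees WHERE FirstName = 'Peter'").fetchall()
print(rows)  # [('Peter', 'Pan')]
```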
6. Transaction control language (TCL) is a subset of SQL. Which of the following descriptions
match TCL the best?
a. A programming language that is typically used in relational database or data stream
management system
b. A family of computer languages including commands permitting users to manipulate data
in database
c. A computer language used to create and modify the structure of database objects in a
database
d. A set of special commands that deal with the transactions within the database
7. What is MySQL?
a. An open-source relational database management system
b. A type of database
c. A type of language that you can use to create and manage databases
d. A program developed by Microsoft
8. The DISTINCT clause can be used to remove duplicate values when querying a table
a. True
b. False
9. The JOIN statement lets us combine tables in MySQL. Which of the following JOINs are valid
in MySQL? (correct answers: 4)
a. INNER JOIN
b. RIGHT JOIN
c. OUTER JOIN
d. LEFT JOIN
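The practical difference between these join types can be seen in a few lines of Python's sqlite3 (INNER and LEFT JOIN shown; RIGHT JOIN needs MySQL or SQLite 3.39+, and plain OUTER JOIN is not a valid MySQL keyword, which is the point of the question). Table and column names here are made up for illustration:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE emp(name TEXT, dept_id INTEGER)")
cur.execute("CREATE TABLE dept(id INTEGER, dept TEXT)")
cur.executemany("INSERT INTO emp VALUES (?, ?)", [("Ann", 1), ("Bob", 99)])
cur.execute("INSERT INTO dept VALUES (1, 'Engineering')")

# INNER JOIN keeps only matching rows; LEFT JOIN keeps all rows of the left table
inner = cur.execute(
    "SELECT name, dept FROM emp INNER JOIN dept ON dept_id = id ORDER BY name"
).fetchall()
left = cur.execute(
    "SELECT name, dept FROM emp LEFT JOIN dept ON dept_id = id ORDER BY name"
).fetchall()
print(inner)  # [('Ann', 'Engineering')]
print(left)   # [('Ann', 'Engineering'), ('Bob', None)]
```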
10. Which is true in regards to the following statement: SELECT * FROM Employees WHERE
Salary > 5000 OR Department=”Engineering”? (correct answers: 2)
a. All rows in which the value of the “Department” column is “Engineering” are
returned
b. All rows in which the value of the “Department” column is not “Engineering” are returned
c. All rows in which the value of the “Salary” column is above 5,000 are returned
d. All rows in which the value of the “Salary” column is below 5,000 are returned
e. All rows in which the value of the "Salary" is above 5,000 AND the value of
“Department” is “Engineering” are returned
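The OR semantics can be verified directly: rows qualify if either condition holds, not only when both do. A sketch in Python's sqlite3 with made-up rows:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE Employees(Name TEXT, Salary INTEGER, Department TEXT)")
cur.executemany("INSERT INTO Employees VALUES (?, ?, ?)", [
    ("Ann", 6000, "Sales"),        # matches the salary condition only
    ("Bob", 4000, "Engineering"),  # matches the department condition only
    ("Cal", 4000, "Sales"),        # matches neither
])

# OR returns rows satisfying either condition
rows = cur.execute(
    "SELECT Name FROM Employees WHERE Salary > 5000 OR Department = 'Engineering' "
    "ORDER BY Name"
).fetchall()
print(rows)  # [('Ann',), ('Bob',)]
```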
12. About the history of the development of MaxCompute, which of the following options are
correct?
a. MaxCompute was formally put into production in 2010
b. MaxCompute 2.0 was published in 2016
c. MaxCompute was formally put into production in 2009
d. MaxCompute 2.0 was published in 2016
18. MaxCompute is suitable for dealing with massive data, and the amount of data reaches
TB, PB, and EB
a. True
b. False
19. MaxCompute sets up a unified data platform, which year did data storage, data security
and data standard unification start?
a. 2009
b. 2010
c. 2011
d. 2012
24. Users can log in to ODPS systems in different ways, such as RESTful API, Java
development tools, and ODPS CLT. Regardless of the way adopted, which way will the login
eventually be converted to?
a. ODPS SDK
b. ODPS CLT
c. RESTful API
d. Java CLT
28. MaxCompute uses sandbox mechanism to achieve automatic storage fault tolerance
a. True
b. False
31. MaxCompute adopts a distributed architecture, and clusters can be flexibly expanded?
a. True
b. False
Start Self-Test - Stage III: Advanced Data Processing
Tools and Techniques
A. Stage III: Advanced Data Processing Tools and Techniques -
MaxCompute SQL Development
1. MaxCompute SQL syntax does not support between conditional queries.
a. True
b. False
4. Suppose table t_dml only has one field named as id (type is string), which MaxCompute SQL
query cannot be executed correctly?
a. Create table t_dml_bak like t_dml
b. Insert into table t_dml select '1900-01-01 00:00:00' from dual
c. Insert overwrite table t_dml select * from dual
d. Update table t_dml set id='a'
6. Which of the following statements about order by and distribute by / sort by is incorrect in the
MaxCompute SQL syntax?
a. The keys of order by/sort by/distribute by must be output columns (namely, column aliases)
of the select statement
b. Order by or group by cannot be used together with distribute by
c. When order by is used for sorting, NULL is considered to be zero
d. Distribute by performs hash-based sharding on data by values of certain column. Aliases
of select output column must be used
10. Which statement is incorrect for dynamic partitions in MaxCompute SQL?
a. If the destination table has multi-level partitions, it is allowed to specify parts of partitions
to be static partitions through the insert statement, but the static partitions must be higher-level
partitions.
b. The value of dynamic partition can be special characters
c. In the select statement field, the following field provides a dynamic partition value for the
target table. If the target table has only one-level dynamic partition, the last field value of
select statement is the dynamic partition value of the target table
d. The value of dynamic partition cannot be NULL
12. When you join multiple tables, only the leftmost table is allowed to be a mapjoin table.
a. True
b. False
13. Which of the following statements of mapjoin hint application of MaxCompute SQL are
correct? (Number of Correct Answers: 3)
a. When a large table joins one or multiple small tables, you can use MapJoin, which
performs much faster than regular joins
b. When MapJoin references a small table or subquery, an alias should be used
c. MaxCompute SQL does not support the use of complex join conditions such as
unequal expressions, or logic in a normal join's on conditions, but MapJoin can
d. When multiple tables are joined, the two left tables can be MapJoin tables at the same time
14. Which of the following statements are correct?
a. Order by must be used together with limit
b. When sorting with order by, NULL is considered smaller than any value
c. Distribute by is to make hash slices of data according to the values of certain
columns. It is similar to group by
d. Sort by and order by are all used for sorting in essence; the difference is that their
scopes are not the same
e. The key of order by or sort by must be the output column of the select statement, that is,
the alias of the column
15. Which of the following MaxCompute SQL syntax statements are correct?
a. JOIN of MaxCompute supports n-way join, but it must be a non-Cartesian product
b. The condition expression of MaxCompute's JOIN must be an equality expression
c. When MapJoin references a small table or subquery, an alias should be used,
otherwise it will report syntax errors
d. Right outer join returns all records in the right table, even if there is no matched
record in the left table
16. Data type inconsistency is often encountered in business data processing. To keep data
types consistent, data processing systems involve data type conversion. If MaxCompute SQL
is used for data processing, which of the following conversions cannot be achieved?
a. Bigint to string
b. String to Boolean
c. String to bigint
d. Datetime to string
21. Which of the following descriptions about the MaxCompute SQL constraint conditions are
correct?
a. MaxCompute SQL does not support transactions
b. MaxCompute SQL does not support indexes
c. MaxCompute SQL does not support delete operations
d. MaxCompute SQL does not support update operations
22. Group by is for group queries in MaxCompute SQL. Which of the following statements about group by
are correct?
a. Generally group by and aggregate function are used together
b. The key of group by can be the column name of the input table
c. When SELECT contains aggregate function, group by can be an expression
consisting of columns of input tables
d. When SELECT contains aggregate function, group by can be the alias of the output
column of the SELECT statement
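Option a, the pairing of GROUP BY with an aggregate function, is the everyday pattern. A minimal sketch using Python's sqlite3 as a stand-in engine (MaxCompute's extra rules about expressions and aliases in the other options are engine-specific), with made-up data:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE sales(region TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 10), ("east", 20), ("west", 5)])

# GROUP BY keyed on an input column, combined with the SUM aggregate
rows = cur.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 30), ('west', 5)]
```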
23. MaxCompute SQL provides the EXPLAIN operation. What does the execution result include?
a. All resource structures corresponding to the DML statement
b. The dependency structure of all tasks corresponding to the DML statement
c. The dependency structure of all operators in a task
d. The dependency structure of all tasks in a task
25. The source and pattern parameters of like and rlike must be string types or integer
a. True
b. False
29. During MaxCompute SQL parsing, order by/sort by/distribute by comes after the select
operation
a. True
b. False
3. When defining a Java UDF, MaxCompute supports using writable types as parameters and return
values, e.g. string maps to Text, map maps to struct
a. True
b. False
5. Which correspondence is incorrect between MaxCompute data type and java data type?
a. Tinyint maps to java.lang.Byte
b. Smallint maps to java.lang.Short
c. Decimal maps to java.lang.BigDecimal
d. TimeStamp maps to java.lang.TimeStamp
7. The way in which a UDF is used in MaxCompute is different from the common built-in functions
in MaxCompute SQL
a. True
b. False
8. For a User Defined Scalar Function, input and output is a one-to-one relationship; that is, it reads one row of
data and writes one output value
a. True
b. False
13. Which of the following ways of UDTF usage in SQL are correct? (Answers: 3)
a. Select user_udtf(col0,col1,col2) as (c0,c1) from my_table;
b. Select user_udtf(col0,col1,col2) as (c0,c1),col3 from my_table
c. Select user_udtf(col0,col1,col2) as (c0,c1) from (select * from my_table distribute by
key sort by key) t;
d. Select reduce_udtf(col0,col1) as (c0,c1) from (select col1,col2 from (select
map_udtf(a0,a1,a2,a3) as (col0, col1, col2) from my_table) t1 distribute by col0 sort
by col0, col1) t2
16. The NULL value in SQL is represented by a NULL reference in Java; therefore, ‘Java
primitive type’ is not allowed because it cannot represent a NULL value in SQL
a. True
b. False
17. UDTF (User Defined Table Valued Function) is used to solve scenarios which output multi-
line data by a function call; it is also the only UDF which can return multiple fields
a. True
b. False
19. Which one is an incorrect method when Java UDF uses complex types?
a. UDTF through @Resolve annotation to specify the signature
b. UDAF through evaluate signature to map UDF input/output type
c. UDF through evaluate signature to map UDF input/output type
d. UDAF through @Resolve annotation to get the signature
20. Which the following select statements does not work properly in MaxCompute?
a. Select myudf_lower(name) from t_test;
b. Select 2, myudf_vertical(name,score) as (name,score) from t_udtf;
c. Select * from t_test where myudf_lower(myudf_lower(name)) = ‘udfff’;
d. Select AggrAvg(score) from t_udaf;
21. UDF output a return value at a time. UDTF can output more than two records at one time
a. True
b. False
23. For UDAF (User Defined Aggregation Function), input and output is many-for-one
relationships
a. True
b. False
24. Which of the following UDTF (user Defined function) statements in MaxCompute are
correct?
a. Other expressions are allowed in the same select clause
b. Solving the problem of exporting multiple rows and multiple columns data scenarios
by one function call
c. The only user-defined function that can return multiple fields
d. Cannot be nested
25. Which of the following UDF (user Defined function) statements in MaxCompute correct?
a. Function input and output are one to one
b. Return a scalar value of a specified type
c. Can not be used with other functions
d. It can be used in WHERE filtering conditions
26. ‘myudf_lower()’ is a UDF (User Defined Function), and the table is ‘t_test (name string)’. Which of
the following statements are correct?
a. Select myudf_lower(name) from t_test;
b. Select * from t_test where myudf_lower(‘Aaa’)=name;
c. Select * from t_test where myudf_lower(name) =’odps’
d. Select * from t_test where myudf_lower(myudf_lower(name)) = ‘zzzz’
30. Which of the following UDF (User Defined Function) statement in MaxCompute are correct?
a. Function input and output are one to one
b. Return a scalar value of a specified type
c. Can not be used with other functions
d. It can be used in where filtering conditions
Start Self-Test - Stage IV: Visualization, Machine
Learning, and AI
A. Stage IV: Visualization, Machine Learning, and AI – QuickBI
1. In one dashboard you can add more than one chart and the charts can be of different types
a. incorrect
b. correct
2. Only one chart type can be used in one GUI report. Is this statement correct?
a. Incorrect
b. Correct
3. Which chart can be selected for distribution analysis on datasets, based on the number of
variables, specific demand, and so on?
a. Conversion chart
b. Scatter plot
c. Line chart
d. Bubble chart
4. In QuickBI, a portal is also called a data product, which is a set of dashboards that contains:
a. Menus
b. Template
c. External link
d. iFrame
7. A dataset contains the following items: time, region, sales volume. Which of the following is
the best choice to visualize this information in one chart?
a. Bubble chart
b. Tree chart
c. Pie chart
d. Radar chart
8. A column chart only shows data of two dimensions, such as: time, transaction value?
a. Incorrect
b. Correct
9. Which is a better choice for displaying the progress of the current sales amount against the annual
KPI target?
a. Dashboard
b. Radar chart
c. Pie chart
d. Polar chart
10. Which is a correct description of the differences between Quick BI's tree map and tree chart?
a. Tree maps can show the ratios of members at the same level by area, but tree charts
cannot
b. Different from a tree map, a tree chart requires that each of its branches has the same
depth
c. Tree maps are not as widely used as tree charts
11. A dataset (father height, son height) describes fathers' heights and sons' heights. Which chart
is suitable for displaying whether the two are correlated?
a. Scatter plot
b. Line chart
c. Tree chart
d. Column chart
12. QuickBI currently does not support charts such as? (number of correct answers: 3)
a. Bubble chart
b. pyramid chart
c. tornado chart
d. bar chart
e. donut chart
13. Which of the following geo charts does Quick BI provide? (number of correct answers: 2)
a. geo bubble chart
b. color geo chart
c. point plotting geo chart
d. navigation geo chart
e. vector geo chart
14. Which of the following charts are related to business procedures? (number of correct answers:
2)
a. conversion chart
b. funnel chart
c. bar chart
d. scatter chart
e. tree chart
15. which of the following are variants of the pie chart? (number of correct answers: 2)
a. 3D pie chart
b. Donut chart
c. Dashboard
d. Funnel chart
e. Radar chart
17. No specific standards exist to judge whether a graphical report is good or bad. However, good
reports have common characteristics. Which of the following options is not a good one?
a. Intuitive, easy to understand
b. Highlighted key information
c. Complex, nice-looking
d. Proper display manner
18. When you update your data source, the new file can use a different schema compared with the
original one
a. Incorrect
b. Correct
19. E-commerce company ABC cares about the conversion rate, defined as the ratio of the number of users
placing and paying for orders to the number of users visiting their website. They want to analyze the
difference in conversion rate by gender. Which chart can properly help them with this
analysis?
a. Tornado chart
b. Funnel chart
c. Pie chart
d.
e. Tree chart
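The metric itself is a simple ratio per segment; the chart choice only affects how the two gender segments are compared visually. A minimal sketch with hypothetical visit and order counts (the numbers are invented for illustration):

```python
# Hypothetical counts: users visiting the site vs. users placing and paying
# for orders, split by gender.
visits = {"female": 12000, "male": 9500}
buyers = {"female": 960,   "male": 570}

# Conversion rate = paying users / visiting users, per segment.
conversion = {g: buyers[g] / visits[g] for g in visits}

for gender, rate in conversion.items():
    print(f"{gender}: {rate:.1%}")
# A tornado chart would then plot the two segments as opposing horizontal bars.
```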
21. A column chart only shows data of two dimensions, such as: time, transaction volume?
a. Incorrect
b. Correct
22. Which of the following charts are suitable for displaying hierarchies?
a. Tree map
b. Tree chart
c. Conversion chart
d. Geo bubble chart
e. Dashboard
23. Which of the following charts use areas to display the magnitude of metric values? (number
of correct answers: 2)
a. Pie chart
b. Polar chart
c. Tree chart
d. Funnel chart
e. Conversion chart
f. Color geo chart
24. Which of the following charts are suitable for displaying large datasets? (number of correct
answers: 2)
a. Scatter plot
b. Line chart
c. Column chart
d. Radar chart
e. Funnel chart
25. Quick BI supports multiple data sources, which of the following is not included?
a. Local CSV files
b. OSS
c. MaxCompute
d. Local Excel files
27. A column chart only shows data of two dimensions, such as: time, transaction volume?
a. Incorrect
b. Correct
28. A dataset includes two attributes (province, number of customers). Use a chart to clearly
demonstrate the number of customers by province. Which of the following charts are not a
good choice for this scenario? (number of correct answers: 2)
a. Conversion chart
b. Card
c. Word cloud
d. Polar chart
e. Column chart
5. The architecture of Machine Learning Platform for AI is divided into five layers, which of the
following statements is correct?
a. Infrastructure layer: includes CPU, GPU, field programmable gate array (FPGA), and
neural network processing unit (NPU) resources.
b. Computing framework layer: includes Alink, TensorFlow, PyTorch, Caffe, MapReduce,
SQL, and Message Passing Interface (MPI). You can run distributed computing tasks in
these frameworks
c. Business layer: Machine Learning Platform for AI is widely used in the finance, medical care,
education, transportation, and security sectors. The search systems, recommendation systems,
and financial service systems of Alibaba Cloud all use Machine Learning Platform for AI
to explore data values for making informed business decisions
d. Machine Learning Platform for AI streamlines the workflows of machine learning, including data
preparation, model creation and training, and model deployment
e. All above
6. PAI automates and optimizes AI algorithms in the cloud, so algorithm experts can focus on the
process of modeling rather than the engineering part
a. True
b. False
7. Machine Learning Platform for AI provides end-to-end machine learning services, including
data processing, feature engineering, model training, model prediction, and model evaluation.
Machine Learning Platform for AI combines all of these services to make AI more accessible
than ever
a. True
b. False
8. The computing result of the entire machine learning process on PAI can NOT be visually
displayed
a. True
b. False
9. In the compilation and optimization framework, PAI has added support for PyTorch, and has
also expanded support for more hardware, such as GPUs, CPUs, and other ASICs
a. True
b. False
10. On the DLC platform, we can run our optimization workloads on the back-end engine in the
Kubernetes environment
a. True
b. False
11. Which of the following benefits of deep learning container are correct? (number of correct
answers: 3)
a. Cloud native
b. Elastic
c. Cheap
d. Easy to use
12. Which of the following development types are supported by DSW 2.0? (number of correct
answers: 3)
a. JupyterLab interactive programming
b. Web IDE mode
c. Terminal command line
d. Notebook
13. Machine Learning Platform for AI (PAI) provides text processing components for NLP,
including _______ (number of correct answers: 5)
a. Word splitting
b. Deprecated word filtering
c. LDA
d. TF-IDF
e. Text summarization
f. OpenCV
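Among the components listed, TF-IDF is the most mechanical to illustrate: it weights a term higher when it is frequent within one document but rare across the corpus. A minimal pure-Python sketch on a toy corpus (this is an illustration of the general formula, not PAI's implementation):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Return per-document {term: tf-idf weight} maps for tokenized docs."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # tf-idf = (term count / doc length) * log(N / document frequency)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# Toy pre-tokenized corpus (hypothetical).
corpus = [["big", "data", "platform"],
          ["machine", "learning", "platform"],
          ["big", "data", "analytics"]]
w = tf_idf(corpus)
# "analytics" (rare) outweighs "big" (common) in the third document.
```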
14. The PAI-Easy series deep learning toolkit includes? (number of correct answers: 3)
a. EasyTransfer (NLP)
b. EasyVision (CV)
c. EasyRL (reinforcement learning)
d. Shogun
15. Three deep learning frameworks will be supported by PAI, they are ____.(number of correct
answers: 3)
a. TensorFlow
b. Caffe
c. MXNet
d. Spark MLlib